Category: Technology

OpenHAB CalDav Personal Binding – Item Name Filtering

We use the CalDav Personal binding to select items from our Exchange calendar to populate date/time OpenHAB Items: when does Anya have her next gymnastics class, when is the next Trustee meeting, and so on. When we first set this up, we were using manually created appointments. The appointments were assigned unique categories so the binding could determine which appointment should be used to update each Item. I’ve since started creating calendar items based on published calendars (so far a Google calendar and a SchoolPointe calendar), but the Python module for interacting with Exchange cannot assign a category to the appointments it creates.

The binding allows you to filter on the appointment subject (‘name’) using a regex. The binding documentation says to use filter-name:'\<Some Filter\>' which … well, doesn’t work. We tried omitting the backslashes in case they were meant to be escape characters. We tried omitting the greater-than and less-than symbols in case those were meant in the way I often use them, to designate <the part you replace>. Still doesn’t work. We tried forward slashes instead of backslashes, since that’s the usual regular expression delimiter syntax. Nope. We tried adding a ‘starts with’ ^ and a trailing .* to ensure it would match anything that started with what we wanted. Nope. We’d alternately match all of the appointments or none.

Consulting the source, there is a very restrictive character set available in your name filter regex. The binding uses Java’s Matcher with a regex of its own to extract your regex from the item configuration. You need filter-name:, optionally followed by a single quote ('? means 0 or 1 of '). That is followed by one or more characters from a class containing the upper and lower case letters A-Z, a full stop, an asterisk, a plus sign, a minus sign, a space, and a pipe. Then you may have another single quote. That run of one or more characters from the restricted class is the extracted group, and it is how the binding gets your regex from the item config.

private static final String REGEX_FILTER_NAME ="filter-name:'?([A-Za-z\\.\\*\\+\\- \\|]+)'?";

Using unsupported characters in your filter-name regex will alternately match all appointments or none. Using filter-name:'\<test\>' (as the documentation literally instructs me to do) doesn’t match anything: the + requires one or more characters from the allowed set, and the character immediately after the opening single quote (a backslash) isn’t in it, so the capture group never gets started. Similarly, filter-name:'^Beginning of string.*' doesn’t return a match. It appears that, in cases where the name filter is null … all items are matched. Which explains why we were getting the same appointment’s details posted into each item.

At the other extreme, a filter like filter-name:'Pick up dry-cleaning at 1 Main Street' will truncate your regular expression at the digit. The extracted group is Pick up dry-cleaning at … which won’t match anything unless you actually have an appointment titled “Pick up dry-cleaning at ” with a trailing space. I’ve seen posts on the OpenHAB forum where individuals have non-English words in their filter … filter-name:'Trip to Askøy' which, again, matches nothing, since the actual regex used by the binding is Trip to Ask. The same thing happens when looking for character classes (e.g. I don’t know if this will be capitalized, so I want to match [Tt]est).
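
To see the truncation for yourself, here is a quick sketch in Python (the pattern syntax happens to be the same as the Java version below), running the binding’s extraction regex against a few of the filters discussed here:

import re

# The same pattern the binding uses to pull your filter out of the item configuration
EXTRACTOR = r"filter-name:'?([A-Za-z\.\*\+\- \|]+)'?"

for config in (
    "filter-name:'Township. .*'",                           # every character is in the allowed set
    "filter-name:'Pick up dry-cleaning at 1 Main Street'",  # the digit truncates the group
    "filter-name:'Trip to Askøy'",                           # the ø truncates the group
):
    match = re.search(EXTRACTOR, config)
    print(repr(match.group(1) if match else None))

# Output:
# 'Township. .*'
# 'Pick up dry-cleaning at '
# 'Trip to Ask'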

The solution, since a question mark isn’t an option, is to use a plus or splat (.+ or .*) to replace any character that isn’t supported by the binding. Using a plus ensures there’s something where you expect the character to occur, although the * is a broader match (we use “Township: Event Name”, but I don’t need the colon to successfully match my item. “Township Event Name” would match. I could even use a different delimiter, as “Township, Event Name” would also match). Where you are unsure of the case, you need to use a pipe (e.g. filter-name:'Test|test').

The Items that are populated with the start time and event name for the next Township meeting look like this:

DateTime Calendar_Upcoming_Township "Upcoming Township meeting (start) [%1$tA, %1$tB %1$te, %1$tY at %1$tl:%1$tM%1$tp]" <calendar> (gCalendar) {caldavPersonal="calendar:ourcalendar type:UPCOMING eventNr:1 value:START filter-name:'Township. .*'"}
String Calendar_Upcoming_Township_Title "Upcoming Township meeting [%s]" <calendar> (gCalendar) {caldavPersonal="calendar:ourcalendar type:UPCOMING eventNr:1 value:NAME filter-name:'Township. .*'"}

And the calendar events titled “Township: Trustee Regular Meeting” or “Township: Craft Fair” are all identified by the filter.

Note: Scott submitted a PR to change the regex used to extract your filter-name regex. Once this change gets merged, you’ll be able to use character sets (e.g. [Tt]est), numbers, and ‘special’ characters excluding the single quote. Wrapping the filter in single quotes will, however, be required.

Scraping Calendar Events

We’ve learned the value of engaging with local government — with few people involved in local proceedings, it’s pretty easy for a generally unpopular proposal to seem reasonable. And then we’re all stuck with the generally unpopular regulation. It is a pain, however, to keep manually adding the next Trustee meeting. And there’s no way I’m checking the website daily to find out about any emergency meetings.

Now I’m pulling the events from their Google calendar and creating new meeting items in my Exchange calendar:

  1. Register the app with Google to use the API
  2. Install exchangelib
  3. Copy config.sample to config.py and add personal information
  4. Create a ca.crt file with the CA certificate for your Exchange server (or remove the custom adapter if your server cert is signed by a public CA)
  5. Run getCalendarEvents.py and follow the URL to authorize access to your calendar

I’ve tweaked the script to grab events from the school district’s calendar in SchoolPointe too. Now we know when there’s a school board meeting or dress-up day.
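
The Exchange side of the script boils down to creating a CalendarItem with exchangelib. Here’s a minimal sketch with placeholder server, mailbox, and credential values (and without the custom CA adapter mentioned above); exact timezone handling varies a bit between exchangelib versions:

from exchangelib import (
    Account, CalendarItem, Configuration, Credentials, DELEGATE, EWSDateTime, EWSTimeZone,
)

# Placeholder connection details -- these would come from config.py
credentials = Credentials("EXAMPLE\\svc_calendar", "password")
config = Configuration(server="exchange.example.com", credentials=credentials)
account = Account("me@example.com", config=config, autodiscover=False, access_type=DELEGATE)

# Newer exchangelib versions accept an IANA zone name; older ones use EWSTimeZone.timezone()
tz = EWSTimeZone("America/New_York")

# Title the event so the OpenHAB filter-name regex above will find it
meeting = CalendarItem(
    account=account,
    folder=account.calendar,
    subject="Township: Trustee Regular Meeting",
    start=EWSDateTime(2020, 1, 14, 19, 0, tzinfo=tz),
    end=EWSDateTime(2020, 1, 14, 21, 0, tzinfo=tz),
)
meeting.save()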

MetaSolv Solutions Service Request Search Status Buttons

I’m working on a project to automate the creation of work orders in MetaSolv – it was going well until I tried to find one of the work orders I created within the GUI. Aaand … they don’t show up. Before bothering sys admins with silly questions, I wanted to make sure I wasn’t somehow searching wrong. While most of the search dialog is easily correlated to the API XML input (responsible party is responsiblePerson from the XML, work order number is orderNumber, Description is description) … the little status buttons don’t have convenient tooltips to help decipher their meaning.

Ten minutes perusing the internal training documents yielded “select them all” which … yeah, I get. And it was nice to confirm that the “pushed in” button is selected. But that still doesn’t tell me what the little pictures mean.

So I searched the Internet, Oracle’s generally excellent documentation online, the F1 help within the app … nothing. Either these status values are so obvious to people who regularly use MetaSolv that it’s not worth mentioning or no one knows what these little buttons mean.

Which just made me more curious. So I performed a search limited to a single button, got a few work order numbers, and then looked the things up in the database tables. Numbering the buttons from left to right, I now have corresponding service_request_status values for each one:

Button #    service_request_status
1           101
2           1
3           0
4           801
5           901

Fortunately, the back-end MSS documentation tells me what these status values mean:

0 – The service request has been entered and the tasks have been successfully generated and distributed to work queues.

1-99 – The service request is still being entered (tasks have not been generated and distributed to work queues).

101 – The service request has been electronically received but has not been processed.

801 – The service request has had its Due Date task completed.

901 – The service request has had all of its tasks, including billing, completed.
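
If you’re scripting against these values, a small lookup table built from the button mapping and the status list above can save some cross-referencing. This is just a convenience sketch, not anything MetaSolv ships:

# Search-dialog status buttons, numbered left to right, and the status each one selects
BUTTON_STATUS = {1: 101, 2: 1, 3: 0, 4: 801, 5: 901}

# Status meanings from the back-end MSS documentation
SERVICE_REQUEST_STATUS = {
    0: "Entered; tasks generated and distributed to work queues",
    101: "Electronically received but not yet processed",
    801: "Due Date task completed",
    901: "All tasks, including billing, completed",
}

def describe_status(status):
    # 1-99 is a range: the service request is still being entered
    if 1 <= status <= 99:
        return "Still being entered (tasks not generated and distributed)"
    return SERVICE_REQUEST_STATUS.get(status, "Unknown status {}".format(status))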


Microsoft Teams Meeting Notes

The trick to understanding this is knowing that “Meeting Notes” are, for some reason, Wiki pages and not OneNote documents. There are two types of meetings — those held in a Teams channel and those held outside of a channel — and the ability to get a useful link to the Meeting Notes depends on which type of meeting you have.

Meetings in a Teams Channel:

When your meeting is in a Teams channel, you can use the ellipsis to grab a link to the Meeting Notes location in Microsoft Teams.

This link points to the “Meeting Notes” tab created in the channel. That tab is available without a link, too — so I can access the meeting notes just by going to the channel where the meeting was held.

Meetings Outside of a Teams Channel:

The meeting notes wiki file is stored in your OneDrive. You can find that file by searching your OneDrive for the name of the meeting. In this example, I have a meeting titled “Super Important”. You can right-click on this and select “copy link” to grab a link to the file.

The problem is that it’s an MHT file (basically a self-contained web page). I can give you a link to the file, but it’s not a convenient link to a OneNote page like you’d expect. For some reason, Chrome wants to save it as an EML (email) file, so it opens in Outlook unless you change the extension back to MHT manually. Firefox keeps the MHT extension, and the file opens up in a browser so you can view the notes.


Identifying System-Only AD Attributes

This information is specific to Active Directory. MSDN has documentation for each schema attribute (e.g. CN) that indicates whether the attribute is “system only” or not.

For an automated process, search at the base cn=schema,cn=configuration,dc=example,dc=com with the filter (&(ldapDisplayName=AttributeName)) and return the value of systemOnly. For example, this shows that operatingSystemServicePack is user writable.

***Searching...
ldap_search_s(ld, "cn=schema,cn=configuration,dc=example,dc=com", 2, "(&(ldapDisplayName=operatingSystemServicePack))", attrList,  0, &msg)
Getting 1 entries:
Dn: CN=Operating-System-Service-Pack,CN=Schema,CN=Configuration,dc=example,dc=com
systemOnly: FALSE; 

You can also list all of the system-only attributes by using the filter (&(systemOnly=TRUE)) and returning ldapDisplayName

***Searching...
ldap_search_s(ld, "cn=schema,cn=configuration,dc=example,dc=com", 2, "(&(systemOnly=TRUE))", attrList,  0, &msg)
Getting 189 entries:
Dn: CN=OM-Object-Class,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: oMObjectClass; 

Dn: CN=Canonical-Name,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: canonicalName; 

Dn: CN=Managed-Objects,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: managedObjects; 

Dn: CN=MAPI-ID,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: mAPIID; 

Dn: CN=Mastered-By,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: masteredBy; 

Dn: CN=Top,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: top; 

Dn: CN=NTDS-DSA-RO,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: nTDSDSARO; 

Dn: CN=Application-Process,CN=Schema,CN=Configuration,dc=example,dc=com
lDAPDisplayName: applicationProcess; 
...
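
The same queries are easy to script. Here’s a rough sketch using the ldap3 Python module with a placeholder domain controller and bind account; swap in the (&(systemOnly=TRUE)) filter and request ldapDisplayName to reproduce the listing above:

from ldap3 import Connection, Server, SUBTREE

# Placeholder domain controller and bind account
server = Server("dc01.example.com")
conn = Connection(server, user="EXAMPLE\\someuser", password="password", auto_bind=True)

# Is a single attribute system-only?
conn.search(
    search_base="cn=schema,cn=configuration,dc=example,dc=com",
    search_filter="(&(ldapDisplayName=operatingSystemServicePack))",
    search_scope=SUBTREE,
    attributes=["systemOnly"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.systemOnly)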


Asus Router NVRAM Usage

I had a really strange problem with an Asus router — the port forwarding disappeared. And while I could use the UI and put everything back in, it didn’t stick around. Turns out the NVRAM was full — there wasn’t anywhere to put the port forwarding rules (vts_rulelist). Fortunately, there were a few old DHCP reservations I was able to delete and free up some space. For future reference, the following command reports what is using the NVRAM.

nvram show | awk '{print length(), $0 | "sort -n -r"}' | cut -d"=" -f 1

Extracting Waste Stream Collection Dates for the Netherlands

Yeah … mostly saving this for the regex search with a start and end flag that spans newlines because I don’t really need to know the date they collect each waste stream in the Netherlands. Although it’s cool that they’ve got five different waste streams to collect.

import requests
import re

strBaseURL = 'https://afvalkalender.waalre.nl/adres/<some component of your address in the Netherlands>' 
iTimeout = 600 
strHeader = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

# Start and end flags for waste stream collection schedule content
START = '<ul id="ophaaldata" class="line">' 
END = '</ul>' 

page = requests.get(strBaseURL, timeout=iTimeout, headers=strHeader)
strContent = page.content
strContent = strContent.decode("utf-8")

result = re.search('{}(.*?){}'.format(START, END), strContent, re.DOTALL)
strCollectionDateSource = result.group(1)

resultWasteStreamData = re.findall('<li>(.*?)</li>', strCollectionDateSource, re.DOTALL)
for strWasteStreamRecord in resultWasteStreamData:
    listWasteStreamRecord = strWasteStreamRecord.split("\n")
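    # Within each <li> block, the date and the waste type sit on fixed lines of the markup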
    strDate = listWasteStreamRecord[3]
    strWasteType = listWasteStreamRecord[4]
    print("On {}, they collect {}".format(strDate.strip().replace('<i class="date">','').replace('</i>',''), strWasteType.strip().replace('<i>','').replace('</i>','')))

Create 2 With a Pi

Scott has a friend who is linking a Pi up to a Create 2 using the Ubiquity Pi image. Unfortunately, it doesn’t seem to communicate with the robot: Tx but no Rx. There are a few issues in the create_autonomy and libcreate repos that report the same problem, and a fix has since been merged. But the Ubiquity image is from June 2019 … which pre-dates the fix. I’ve run through the process of installing ROS and create_autonomy on a base Ubuntu. Thought I’d save the process for later 🙂


# Get an Ubuntu 16.04 environment to match what Ubiquity Robotics is using … obviously, this would all be run on the Pi instead
docker run -dit --name UbuntuSandbox ubuntu:xenial

# Shell into running container
docker exec -it UbuntuSandbox bash

# Update software catalog
apt-get update

# Install prerequisites
apt-get install python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake git libblkid-dev e2fslibs-dev libboost-all-dev libaudit-dev vim python-pip python-rosdep wget unzip

# Create a directory for this project
mkdir /roomba
cd /roomba

# Install ROS
apt-get install lsb-release
sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
apt-get update
apt-get upgrade
apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake
mkdir /roomba/ros
cd /roomba/ros
rosdep init
rosdep update
mkdir -p /roomba/ros_catkin_ws
cd /roomba/ros_catkin_ws
rosinstall_generator ros_comm --rosdistro kinetic --deps --wet-only --tar > kinetic-ros_comm-wet.rosinstall
wstool init src kinetic-ros_comm-wet.rosinstall

mkdir -p /roomba/ros_catkin_ws/external_src
cd /roomba/ros_catkin_ws/external_src
wget http://sourceforge.net/projects/assimp/files/assimp-3.1/assimp-3.1.1_no_test_models.zip/download -O assimp-3.1.1_no_test_models.zip
unzip assimp-3.1.1_no_test_models.zip
cd assimp-3.1.1
cmake .
make
make install

cd /roomba/ros_catkin_ws
rosdep install -y --from-paths src --ignore-src --rosdistro kinetic -r

./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic

# Source in the ROS bash setup stuff
source /opt/ros/kinetic/setup.bash

# Build libcreate
git clone https://github.com/AutonomyLab/libcreate.git
cd libcreate
mkdir build
cd build
cmake ..
make

# Build create_autonomy
apt-get install python-rosdep python-catkin-tools
mkdir -p /roomba/create_ws/src
cd /roomba/create_ws
catkin init
cd /roomba/create_ws/src
git clone https://github.com/AutonomyLab/create_autonomy.git

cd /roomba/create_ws
rosdep update
rosdep install --from-paths src -i

# Setup so CMake will find our libcreate when we build create_autonomy
mv /opt/ros/kinetic/lib/libcreate.so /opt/ros/kinetic/lib/libcreate.so.orig
cp /roomba/libcreate/build/libcreate.so /opt/ros/kinetic/lib/libcreate.so

# Continue building create_autonomy
catkin build

# Source in the autonomy bash environment
source /roomba/create_ws/devel/setup.bash

# Create config yaml
vi /roomba/config.yaml
## File content:
# Create config YAML
# The device path for the robot
dev: "/dev/ttyUSB0"

# Baud rate. Passing this parameter overwrites the inferred value based on the robot_model
baud: 115200

# Base frame ID
base_frame: "base_footprint"

# Odometry frame ID
odom_frame: "odom"

# Time (s) without receiving a velocity command before stopping the robot
latch_cmd_duration: 0.2

# Internal loop update rate (Hz)
loop_hz: 10.0

# Whether to publish the transform between odom_frame and base_frame
publish_tf: true
## End file content

# Run it
roslaunch ca_driver create_2.launch config:=/roomba/config.yaml desc:=false

# Logs at ~/.ros/log/latest

Network Manager GUI

You can use nmcli to configure network interfaces controlled by NetworkManager. But, honestly, I don’t see any advantage to learning the cryptic CLI instead of using the cryptic config file stuff I’ve already learned. And yet … I need my server to have /etc/resolv.conf populated when it reboots. So I figured out how to launch the KDE system settings (assuming you’ve got the X display redirected to your host): run systemsettings5

Mine takes a few minutes to render, during which time the window is black and a handful of errors are written out to the console. But it got there eventually, and I was able to edit the network interface.