Category: Technology

Using Microsoft Graph

Single Sign-On: Microsoft Graph

End Result: In-domain computers will automatically log in to web sites and applications. Computers that are not logged into the company domain, and that do not have an active authenticated session, will be presented with Microsoft’s authentication page.
Requirements: The application must be registered on Microsoft Graph.

Beyond that, requirements are language specific – I will be demonstrating a pre-built Python example here because it is simple and straight-forward. There are examples for a plethora of other languages available at https://github.com/microsoftgraph

Process – Application Development:
Application Registration

To register your application, go to the Application Registration Portal (https://apps.dev.microsoft.com/). Elect to sign in with your company credentials.

You will be redirected to the company’s authentication page

If ADFS finds a valid token for you, you will be directed to the application registration portal. Otherwise you’ll get the same logon page you see for many other MS cloud-hosted apps. Once you have authenticated, click “Add an app” in the upper right-hand corner of the page.

Provide a descriptive name for the application and click “Create”

Click “Generate New Password” to generate a new application secret. Copy it into a temporary document. Copy the “Application Id” into the same temporary document.

Click “Add Platform” and select “Web”

Enter the appropriate redirect/logout URLs (this will be application specific – in the pre-built examples, the post-authentication redirect URL is http://localhost:5000/login/authorized).

Delegated permissions impersonate the signed-in user; application permissions use the application’s own credentials to perform actions. I use delegated permissions, although there are use cases where application permissions would be appropriate (batch jobs, for instance).

Add any permissions your app requires – for simple authentication, the default delegated permission “User.Read” is sufficient. If you want to perform additional actions – write files, send mail, etc – then you will need to click “Add” and select the extra permissions.

Profile information does not need to be entered, but I have entered the “Home page URL” for all of my applications so I am confident that I know which registered app corresponds with which deployed application (i.e. eighteen months from now, I can still figure out which site is using the registered “ADSF Graph Sample” app and won’t accidentally delete it while it is still in use).

Click Save. You can return to your “My Applications” listing to verify the app was created successfully.

Application Implementation:

To use an example app from Microsoft’s repository, clone it.

git clone https://github.com/microsoftgraph/python-sample-auth.git

Edit the config.py file and update the “CLIENT_ID” variable with your Application Id and the “CLIENT_SECRET” variable with your Application Secret password. (As the sample notes, in a production implementation you would store the secret somewhere else rather than dropping it in clear text in your code … also, if you publish a screenshot of your app ID & secret somewhere, generate a new password or delete the app registration and create a new one. Which is to say, do not retype the info in my example – I’ve already deleted the registration used herein.)
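For reference, here is a minimal sketch of what the edited config.py might look like – the CLIENT_ID and CLIENT_SECRET names come from the sample, while the environment variable names are just placeholders illustrating the “store it somewhere else” approach:

# config.py (excerpt) – read the registration values from the environment instead of
# hard-coding them; the fallback placeholders are only for local testing.
import os

CLIENT_ID = os.environ.get('GRAPH_CLIENT_ID', '<your-application-id>')
CLIENT_SECRET = os.environ.get('GRAPH_CLIENT_SECRET', '<your-application-secret>')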

Install the prerequisites using “pip install -r requirements.txt”

Then run the application – in the authentication example, there are multiple web applications that use different interfaces. I am running “python sample_flask.py”

Once it is running, access your site at http://localhost:5000

The initial page will load; click on “Connect”

Enter your company user ID and click “Next”

This will redirect to the company’s sign-on page. For in-domain computers or computers that have already authenticated to ADFS, you won’t have to enter credentials. Otherwise, you’ll be asked to log on (and possibly perform the two-factor authentication verification).

Voila, the user is authenticated and you’ve got access to some basic directory info about the individual.
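To illustrate what “basic directory info” means, here is a minimal sketch (not part of the sample app) that uses the requests library and an access token obtained during login to call the Graph /me endpoint – the default User.Read permission is all this needs:

# Sketch: read the signed-in user's profile from Microsoft Graph. The sample
# app handles token acquisition; this assumes you already have a valid token.
import requests

def get_profile(access_token):
    response = requests.get('https://graph.microsoft.com/v1.0/me',
                            headers={'Authorization': 'Bearer ' + access_token})
    response.raise_for_status()
    return response.json()   # displayName, mail, userPrincipalName, etc.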

Process – Tenant Owner:
None! Any valid user within the tenant is able to register applications.
Implementation Recommendations:
There is currently no way to backup/restore applications. If an application is accidentally or maliciously deleted, a new application will need to be registered. The application’s code will need to be updated with a new ID and secret. Documenting the options selected when registering the application will ensure the application can be re-registered quickly and without guessing values such as the callback URL.

There is currently no way to assign ownership of orphaned applications. If the owner’s account is terminated, no one can manage the application. The application continues to function, so it may be some time before anyone realizes the application is orphaned. For some period of time after the account is disabled, it may remain in the directory — which means a directory administrator could re-enable the account and set the password to a known value. Someone could then log into the Microsoft App Registration Portal under that ID and add new owners. Even if the ID has been deleted from the directory, it exists as a tombstone and can be restored for some period of time. Eventually, though, the account ceases to exist — at which time the only option would be to register a new app under someone else’s ID and change the code to use the new ID and secret. Ensuring multiple individuals are listed as application owners helps avoid orphaned applications.

Edit the application and click the “Add Owner” button.

You can enter the person’s logon ID or their name in “last, first” format. You can enter their first name – with a unique first name, that may work. Enter “Robert” and you’re in for a lot of scrolling! Once you find the person, click “Add” to set them up as an owner of the application. Click “Save” at the bottom of the page to commit this change.

I have submitted a feature request to Microsoft both for reassigning orphaned applications within your tenant and for a mechanism to restore deleted applications — apparently their feature requests have a voting process, so it would be helpful if people would up-vote my feature request.

Ongoing Maintenance:
There is little ongoing maintenance – once the application is registered, it’s done.

Updating The Secret:

You can change the application secret via the web portal – this is a good step to take when an individual has left the team, and it can also be done routinely as a proactive security measure. Within the application, select “Generate New Password” and create a new secret. Update your code with the new secret and verify it works (the roll-back is to restore the old secret in the config – it’s still in the web portal and still works). Once the application is verified to work with the new secret, click “Delete” next to the old one. Both the creation time and the first three characters of the secret are displayed on the site to ensure the proper one is removed.

Maintaining Application Owners:

Any application owner can remove other owners – were I to move to a different team, the owners I delegated could revoke my access. Just click the “X” to the far right of the owner you wish to remove.

 

 

Building A Jenkins Sandbox

You can use a pre-built Docker container (the “long term support” iteration is published as jenkins/jenkins:lts), perform a local installation from https://jenkins.io/download/, add a package repo to your package manager config (http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo for RedHat-based systems), or build it from the source repo. In this sandbox example, I will be using a Docker container.

Map the container’s /var/jenkins_home directory to something on your local drive. This allows you to store user-specific data on your local drive, not within the Docker image. In my case, c: is shared in Docker and I’m using c:\docker\jenkins\jenkins_home to store the data.

I have a Java cacerts file mounted to the container as well – my CA chain has been imported into this file, and the default password, changeit, is used. This will allow Java to trust internally signed certificates. The keystore password appears in the process listing (i.e. anyone who can run commands like “ps aux” or “ps -efww” will see this value), so while security best practices dictate the default password should be changed … don’t change it to something like your root password!

We can now start the Docker container:

docker run -p 8080:8080 -p 50000:50000 -v /c/docker/jenkins/jenkins_home:/var/jenkins_home -v /c/docker/jenkins/cacerts:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts jenkins/jenkins:lts

Once the container is running, you can visit the management web site (http://localhost:8080) and install the modules you want – or just take the defaults (you’ll end up with ‘stuff’ you don’t need … I don’t use subversion, for instance, and don’t really need a plugin for it). For a sandbox, I accept the defaults, then use Jenkins => Manage Jenkins => Manage Plug-ins to remove obviously unnecessary ones and add any that may be needed (e.g. if you are using Visual Studio solution files, add in the MSBuild plugin).

 

Configuring Authentication (LDAP)

First, install the appropriate plug-in. Referrals cause authentication problems when using AD as a generic LDAP authentication source, so if you are using AD for authentication, use the Active Directory plugin.

Manage Jenkins => Configure Global Security. Under access control, select the radio button for “LDAP” or “Active Directory”. Configuration is implementation specific.

AD:

Click the button to expand the advanced configuration. You should not need to specify a domain controller if service records for the domain are present in DNS. The “Site” should be “UserAuth”. For the Bind DN, you can use your userid (user@domain.ccTLD or domain\uid format) with your password. Or you can create a dedicated service account – for a “real world” implementation, you would want a dedicated service account (using *your* account means you’ll need to update your Jenkins config whenever you change your password … and when you forget this update, auth fails).

A note about the group membership lookup strategy:

For some reason, Jenkins assumes recursive group memberships will be used (e.g. there is an “App XYZ DevOps Team” group that is placed into the “Jenkins Users” group, and “Jenkins Users” is assigned authorizations within the system). It’s a bit of a shame that “none” isn’t an option for cases where hierarchical group membership isn’t being built out.

There are three lookup strategies available: recursive group queries, LDAP_MATCHING_RULE_IN_CHAIN, and the Token-Groups user attribute. There have been bugs in the “Automatic” strategy that caused timeout failures. Additionally, the group list returned by the three strategies is not identical … so it is possible to have inconsistent authorization results as different strategies are used. To ensure consistent behaviour, I select a specific strategy.

Token-Groups: If you are not using Distribution groups within Jenkins to assign authorization (and you probably shouldn’t since it’s a distribution group, not a security group), you can select the Token-groups user attribute to handle recursive group membership. Token-groups won’t work if you are using distribution groups within Jenkins, though, as only security groups show up in the token-groups attribute.

LDAP_MATCHING_RULE_IN_CHAIN: LDAP_MATCHING_RULE_IN_CHAIN (OID 1.2.840.113556.1.4.1941) is an extended matching operator (something Microsoft added back in Windows 2003 R2) that can be used in LDAP filters:

(member:1.2.840.113556.1.4.1941:=cn=Bob,ou=ResourceUsers,dc=domain,dc=ccTLD)

This operator has known issues with high fan-outs and can cause hangs while data is retrieved. It is, however, a more efficient way of handling recursive group memberships. If your Jenkins groups contain only users, you will not encounter the known issue. If you are using nested groups, my personal recommendation would be to test each option and time logon activities … but if you do not wish to perform a test, this is a good starting option.
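If you do want to test outside of Jenkins first, the same filter can be run through ldapsearch to get a feel for how long the matching-rule query takes in your directory (the hostname, bind account, and DNs below are placeholders – substitute your own):

time ldapsearch -H ldap://dc.domain.ccTLD -D "serviceaccount@domain.ccTLD" -W \
   -b "dc=domain,dc=ccTLD" \
   "(member:1.2.840.113556.1.4.1941:=cn=Bob,ou=ResourceUsers,dc=domain,dc=ccTLD)" cn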

Recursive Group Queries: Jenkins issues a new LDAP query for each group – a lot of queries, but straight-forward queries. This is my last choice – i.e. if everything else hangs and causes poor user experience, try this selection.

For Active Directory domains that experience slow authentication through the AD plug-in regardless of the selected recursion scheme, I’ve set up the LDAP plug-in instead (it does not handle recursive group memberships), but it’s not a straight-forward configuration.

LDAP:

Click the button to expand the advanced server configuration. Enter the LDAP directory connection details. I usually start with clear text LDAP. Once the clear text connection tests successfully, the certificate trust can be established.

You can add a group search filter, but this is not required. If you require that your group names start with a specific string – e.g. my ITSS CSG organization’s Jenkins server might use groups that start with ITSS-CSG-Jenkins – you can add a cn filter here to restrict the number of groups your implementation looks through to determine authorization. My filter, for example, is cn=ITSS-CSG-Jenkins*

Once everything is working with clear text, load the Root and Web CA public keys into your Java instance’s cacerts file (if you have more than one instance of Java and don’t know which one is being used … either figure out which one is actually being used or repeat the keytool commands for each cacerts file on your machine).
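The keytool commands look something like this (a sketch – the aliases and certificate file names are placeholders, and changeit is the default cacerts password):

keytool -import -trustcacerts -alias internal-root-ca -file root-ca.pem -keystore /path/to/cacerts -storepass changeit
keytool -import -trustcacerts -alias internal-web-ca -file web-ca.pem -keystore /path/to/cacerts -storepass changeit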

In the Docker container, the file is /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts and I’ve mapped in from a locally maintained cacerts file that already contains our public keys for our CA chain.

Before saving your changes, make sure you TEST the connection.

Under Authorization, you can add any of your AD/LDAP groups and assign them rights (make sure your local back door account has full rights too!).

Finally, we want to set up an SSL web site. Request a certificate for your server’s hostname (make sure to include a SAN if you don’t want Chrome to call your cert invalid). Shell into the Docker instance, cd into $JENKINS_HOME, and scp the certificate pfx file.

Use the keytool command to create a JKS file from this PFX file – make sure the certificate (PFX) and keystore (JKS) passwords are the same.
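A sketch of that conversion, assuming the PFX was copied up as jenkins.pfx and using the keystore file name and password from the docker run command below:

keytool -importkeystore -srckeystore jenkins.pfx -srcstoretype PKCS12 -srcstorepass keystorepassword -destkeystore jenkins.cert.file.jks -deststoretype JKS -deststorepass keystorepassword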

Now remove the container we created earlier. Don’t delete the local files – just “docker rm <containerid>” – and create it again:

docker run --name jenkins -p 8443:8443 -p 50000:50000 -v /c/docker/jenkins/jenkins_home:/var/jenkins_home -v /c/docker/jenkins/cacerts:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/cacerts jenkins/jenkins:lts --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/var/jenkins_home/jenkins.cert.file.jks --httpsKeyStorePassword=keystorepassword

Voila, you can access your server using an HTTPS URL. If you review the Jenkins documentation, they prefer leaving the Jenkins web server on http and using something like a reverse proxy to perform SSL offloading. This is reasonable in a production environment, but for a sandbox … there’s no need to bring up a sandbox Apache server just to configure a reverse proxy. Since we’re connecting our instance to the real user passwords, sending passwords around in clear text isn’t a good idea either. If only you will be accessing your sandbox (i.e. http://localhost) then there’s no need to perform this additional step. The server traffic to the LDAP / AD directory for authentication is encrypted. This encryption is just for the client communication with the web server.

 

Using Jenkins – System Admin Stuff

There are several “hidden” URLs that can be used to control the Jenkins service (LMGTFY, basically). When testing and playing with config parameters, restarting the service was a frequent operation, so I’ve included the two service restart URLs here:

   https://jenkins.domain.ccTLD:8443/safeRestart ==> enter quiet mode, wait for running builds to complete, then restart

   https://jenkins.domain.ccTLD:8443/restart ==> Restart not so cleanly

Multiple discussions about creating a more fault tolerant authentication scheme within Jenkins exist on their ‘Issues’ site. Currently, you cannot use local accounts if the directory service is unavailable. Not a big deal if you’re on the company network and using one of our highly available directory solutions. Bit of a shocker, though, if your sandbox environment is on your laptop and you try to play with the instance when not on the company network. In production implementations, this would be a DR consideration (dependency on the directory being recovered). In a cloud-hosted implementation, this creates a dependency on network connectivity into the company.

As an emergency solution, you can disable security on your Jenkins installation. I’d also put some sort of firewall rule (OS-based or hardware firewall) in place to restrict console access to a trusted terminal server or workstation. To disable security, stop Jenkins. Edit the config.xml file in $JENKINS_HOME and find the <useSecurity> section. Change ‘true’ to ‘false’ and start Jenkins. You’ll be able to access the console without credentials.
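In the Docker sandbox built above, that emergency procedure looks something like this (config.xml lives in the jenkins_home folder mapped from the host):

docker stop jenkins
# edit config.xml in the mapped jenkins_home folder and change
#    <useSecurity>true</useSecurity>
# to
#    <useSecurity>false</useSecurity>
docker start jenkins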

Updating Jenkins Image

General practice for updating an application is not to update a container. Instead, download an updated image and recreate the container with the new image. I store the container initialization command along with the folder to which image directories are mapped. My file system has /path/to/docker/storage/AppName that contains a text file with the initialization command and folder(s) that are mapped into the container. This avoids having to find the proper initialization parameters when I upgrade the container.

To update the container, pull a new image, stop the container, remove the container, and create it again. That is:

docker stop jenkins
docker pull jenkins/jenkins:lts
docker rm jenkins
<whatever you used to create the container>

Agile Methodology Is Not Anarchy

For the past several years, my employer has been moving toward an Agile development methodology. There are some challenges when mapping this methodology into operations because it’s not the same thing; but, surprisingly, those are not where I have experienced challenges. The biggest challenge during this transition is that some of my coworkers seem to think the methodology means there are no rules.

A friend of mine, a fairly eccentric history professor, used to say that a little knowledge is a dangerous thing but you’ve got to emphasize LITTLE. And it seems like we’re encountering a situation where Phil’s emphasis holds true: the only thing garnered from Agile training is that the documentation and process from waterfall projects are no more. But breaking away from the large-scale view of a P.R.O.J.E.C.T. for Agile development is a bit like breaking a monolithic application out into microservices — it still needs to do all of the same ‘stuff’, it just does it differently. And there are still policies and procedures — even a microservice team is going to have a coding standard, a process for handling merges, a way of scheduling time off, and some basic idea of what their application needs to accomplish. Sure, the app’s design will change incrementally over time. But it’s not an emergent property like chaos/complexity theory.

Maybe the “what Agile means to me” mentality comes from failing to clearly map a development methodology into an operations framework. Maybe it’s just a good excuse to avoid components of work that they do not enjoy. To avoid “agile operations” becoming “no boring planning stuff!!!”, I’ve outlined ways in which the Scrum methodologies the company wishes to adopt can be used to streamline operations. It helps that our group is reorganizing into an operational/support group and an architecture/design group — I see a lot of places within the operations team where Scrum approaches make sense.

Backlog — prioritizing the ticket queue like a backlog and having support staff constantly pulling from the top of the list — not only is this an awesome way to avoid the guy who scans the queue for the easy jobs, but it ensures the most important problems are being resolved first. A universal set of stakeholders does not exist for the ticket queue — someone whose ticket is ranked fifteenth on the list may disagree, and they are welcome to add details explaining why the issue is more impactful than it seems on its face. But 90% or more of our tickets are “Sev3” — which basically means both “we want it done ASAP” and “it isn’t a wide-spread high impact outage”. Realistically, dozens of tickets do not have the exact same time constraint and impact. There is extra work for management in converting a ticket bucket into an ordered backlog, but the payoff is that tickets are resolved in an order that correlates to the importance of the issue. In addition to the ticket queue, routine maintenance tasks will be included in the backlog and prioritized accordingly.

Very short sprints — while developers moving from Waterfall to Agile might start from a month (or two) long sprint and trim weeks as they evolve into the process, operations starts from the other end of the spectrum. Our norm is to grab a ticket, sort it, then look at the queue and grab another one. We are planning for hours, maybe a day or two. This means we might establish application access on Tuesday that isn’t needed until next Monday. Establish a sprint that lasts a week, and use the backlog to get tickets that have lower priority (either because the impact is lower or because resolution is not needed for a week) included in the sprint. Service interruptions, SEV1 and SEV2 tickets, will occur and should be assumed in the sprint planning (i.e. either take enough work that you think it will just get done with no service interruption tickets and accept that some tickets from the sprint will be incomplete or leave some space for service interruption tickets and have staff pull “bonus” tickets from the top of the backlog if they have no work toward the end of the sprint).

Estimation — going through the tickets and classifying each incident as a quick little task, something that will take a few hours, or a significant undertaking facilitates sprint planning. It’s difficult to know how many tickets I can reasonably expect to include in a sprint if I cannot differentiate between a three-minute config change and a three-day application rollout.

Multi-tasking — Implementation, support, and ticket resolution tasks are no longer a big bucket of work that individuals attempt to multi-task to complete. There are distinct tasks that are completed in series. Some tickets require information from the user; put the ticket on hold until a response is received and move on to the next unit of work.

Velocity — historic data based on time estimates cannot be generated, but the simple number of tickets per week before and after the change can certainly be compared. And going forward, ticket counts can be weighted by estimation values.

Stand-ups are a bit of a mental sticking point for me. I can see the value of spending a few minutes reviewing what you’ve done, what you plan on doing, and ensuring there is a ready forum to discuss any sticking points (maybe someone else has encountered a similar situation and can offer assistance). Stand-ups could include a quick discussion of any priority shifts (escalations, service interruptions) too. *But* my experience with stand-ups has been the attendance-test variety — stand-ups that were used to punish individuals who didn’t make it to the office by 08:00, or those who weren’t around at 16:50. I don’t think it’s reasonable to ask someone who got into an issue and worked until 7P to show up at 8A the next day. I also don’t think it is reasonable to expect someone who came in at 6A to continue working until 5P. Were a stand-up scheduled in the middle of the day, I might feel differently about them.

New Process Police

As an operational support group, we did not have a software development methodology. Doesn’t mean we didn’t develop software — one of the great things within operational support is the ability to automate day-to-day tasks to reduce workload. Why have someone check for application patches when a process can watch an RSS feed or file repository and notify us when there’s an update? Why have someone clickity-click provisioning users into groups when the user can make a web request, the group owner can approve the request, and an automated process can add the user into the group? The end result of our automation programming is, well, quite a bit of software.

And with a small number of people, informal application development worked. Wasn’t ideal, but it worked. If you want to write in Java while I use C# … not ideal, but the alternative is that one of us needs to learn a new programming language. Problem is the next guy we hired uses VBS, the next guy uses PowerShell … and I’ll use perl for simpler processes. Then someone starts tweaking my code and buggers it up … and we’ve got to figure out what happened and roll back based on some tape backup.

To get our internal software development processes organized, I developed a process and ran a training session so everyone was familiar with both the process and the tools. Some of us have used the process well — don’t edit production code; clone the repo locally, make a branch for your edits, test it and have another group member sign off on the changes, merge your branch back into master, test more, then pull the code into production. The majority, it seems, have not followed the process at all. Changes are made to the code running in production and never incorporated into the Git repo. Six months after the new development process went into place, half of our code has changes that were made outside the process!
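For anyone who has forgotten, the intended workflow is just a handful of git commands – the repo URL and branch name here are placeholders:

git clone https://gitserver.example.com/automation/our-scripts.git
cd our-scripts
git checkout -b my-change            # branch for your edits
# ... edit, test, and have another group member review the change ...
git commit -a -m "Describe what changed and why"
git checkout master
git merge my-change                  # merge the reviewed branch back into master
git push origin master
# on the production server, pull the merged code instead of editing in place
git pull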

To an extent, I consider this a management problem … if the department doesn’t want software development to be a free-for-all, then the department managers need to ensure their staff follows the process. If the department wants everyone to do their own thing — then get rid of the process and declare our methodology as “do whatever you want”. The challenge for managers, though, is that they don’t know that someone has edited code in production and failed to commit their changes into the repo. If only there were some way to watch for improperly edited code and alert us promptly.

Other scripts I’ve found to perform a similar function attempt to parse ‘git status’ to identify all sorts of issues — but that doesn’t address the specific problem that I’ve got. To facilitate identifying offenders, I wrote a quick Python script that searches a directory tree for git repositories and alerts us when changes have not been staged for commit. If you’ve staged the changes for commit, that won’t be identified. But the particular problem we encounter frequently … there are alerts for that.
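The script itself is short – here is a minimal sketch of the approach (the search root is a placeholder, and the alert is just a print statement rather than whatever notification mechanism you prefer):

#!/usr/bin/env python3
# Sketch: walk a directory tree, find git repositories, and report files with
# working-tree edits that have not been staged for commit.
import os
import subprocess

SEARCH_ROOT = '/path/to/production/code'   # placeholder - root of the deployed code tree

def unstaged_changes(repo_path):
    """Return files in repo_path with edits that have not been staged."""
    result = subprocess.run(['git', 'diff', '--name-only'],
                            cwd=repo_path, capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines() if line]

for dirpath, dirnames, filenames in os.walk(SEARCH_ROOT):
    if '.git' in dirnames:
        changed = unstaged_changes(dirpath)
        if changed:
            # Replace with e-mail, ticketing, or whatever alerting you use
            print('{}: unstaged changes in {}'.format(dirpath, ', '.join(changed)))
        dirnames.remove('.git')   # no need to descend into the .git folder itself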

Microsoft Teams: E-mail Notifications

Click on your initials / picture in the upper right-hand corner of the screen and select “Settings”

Within the Settings pane, select “Notifications”

Change the “Followed Channels” notification to “Banner and email”; change the frequency to “As soon as possible”. Changes are saved as you make them. Use the “X” to close the Settings pane.

To receive e-mail notifications for a channel, follow it. Click the three dots to the right of the channel name.

And click “Follow this channel”. If you click the three dots again, you can elect to “Unfollow this channel” and cease receiving e-mail notifications for posts to the channel.

You have to do this for each channel – if someone spins up a new channel, you won’t see notifications for those posts. I’ve been posting to the “General” thread whenever I break a discussion out into a new thread. This ensures anyone who wants e-mail notifications for the main thread at least knows there is a new thread that they may wish to follow.

** Microsoft’s algorithm for delivering e-mail notifications is a little … unique. Like most of these types of apps, an attempt is made to not deliver notifications for messages you’ve already seen. At last test, Microsoft used a 90-minute timer (not quite my definition of “ASAP”). If you have Teams open in a minimized web browser, if the notification went to your mobile client, if the notification went to your desktop client, if there were solar flares with peak flux over 10^(-5) W/m^2 … an e-mail notification is not sent. Point being, don’t rely on the e-mail notifications.

Search engines as bias algorithms

Sigh! Once again, Trump is the most lamentable victim of any persecution in history (including actual people who were bloody well burned alive as witches). Anyone remember people google-bombing Bush2 to link moron and “miserable failure” over to him? Algorithms can be used against themselves.
 
Singling out a specific feedback cycle that is damaging *to you* is a whole heap of hypocrisy (and thus neo-presidential). But Google’s “personalized” news seems to be just as much a victim of commercialization as cable news. I don’t get many articles that are complimentary of Trump. My husband does. Seeing information that substantiates your point of view is not a good thing for personal growth and awareness, but this is not a conspiracy. Just Google’s profile of us — we’re getting news that Google thinks we’ll like. I know search results are personalized in an attempt to deliver what *you* want … so it’s quite possible my search results will skew toward Trump-negative content (although as a matter of business, I would expect my husband’s to skew toward Trump-positive content). I guess we all know what Trump clicks on and what he doesn’t if *his* results aren’t just a summary of the Fox News homepage.
And thinking about search engines logically — the “secret sauce” is a bias algorithm. Back in 1994, there was an “Internet Directory” — an alphabetized listing (maybe categorized by subject … don’t recall) of web sites. It missed some. More importantly, though, there were not yet millions of web sites with new ones popping up every day; today there are, so I’m thinking the phone book approach to web sites might not work anymore. If search engines crawled the Internet and returned a newest-to-oldest list of everything that contains the word … or a list alphabetized by author, page title, etc. … would you use a search engine??? Someone would have to write a bias search algorithm to work against search engine results. Oh, wait.

Temporary Fix: ZoneMinder, PHP7.2, openHAB ZoneMinder Binding

I got ZoneMinder 1.31.45 (which includes the new CakePHP framework that doesn’t use what have become reserved words in PHP7) working with the openHAB ZoneMinder binding (which relies on data from the API at /zm/api/configs/view/ATTR_NAME.json). There are two options, ZM_PATH_ZMS and ZM_OPT_FRAME_SERVER, which now return bad parameter errors when attempting to retrieve the config using /view/. Looking through the database update scripts, it appears both of these parameters were removed at ZoneMinder 1.31.1.

ZM_PATH_ZMS was removed from the Config database and placed in a config file, /etc/zm/conf.d/01-system-paths.conf. There is a PR to “munge” this value into the API so /viewByName returns its value … but that doesn’t expose it through /view.

ZM_OPT_FRAME_SERVER appears to have been eliminated as a configuration option.

You cannot simply re-insert the config options into the database, as ZoneMinder itself loads the ZM_PATH_ZMS value from the config file and then proceeds to use it. When it attempts to load config parameters from the Config table and encounters a duplicate … it falls over. We were unable to view our video through the ZoneMinder server.

*But* editing /usr/share/zoneminder/www/includes/config.php (exact path may vary – list the files from your package install and find the config.php in www/includes) to include an if clause around the section that loads config parameters from the database, only loading a parameter when its Name is not ZM_PATH_ZMS (the strcmp check in the code below), avoids this overlapping config value.

$result = $dbConn->query( 'select * from Config order by Id asc' );
if ( !$result )
   echo mysql_error();
$monitors = array();
while( $row = dbFetchNext( $result ) ) {
   if ( $defineConsts ) {
      // LJR 2018-08-18 I inserted this config parameter into DB to get OH2-ZM running, and need to ignore it in the ZM web code
      if ( strcmp($row['Name'], 'ZM_PATH_ZMS') != 0 ) {
         define( $row['Name'], $row['Value'] );
      }
   }
   $config[$row['Name']] = $row;
   if ( !($configCat = &$configCats[$row['Category']]) ) {
      $configCats[$row['Category']] = array();
      $configCat = &$configCats[$row['Category']];
   }
   $configCat[$row['Name']] = $row;
}

Once the ZoneMinder web site happily ignores the presence of ZM_PATH_ZMS from the database config table, you can insert it and ZM_OPT_FRAME_SERVER (an option which appears to have been removed at ZoneMinder 1.31.1) back into the Config table. **Important** — change the actual value of ZM_PATH_ZMS to whatever is appropriate for your installation. In my ZoneMinder installation, /cgi-bin-zm is the cgi-bin directory, and /cgi-bin-zm/nph-zms is the ZMS binary.

From a MySQL command line:

use zm; #Assuming your zoneminder database is actually named zm
INSERT INTO `Config` VALUES (225,'ZM_PATH_ZMS','/cgi-bin-zm/nph-zms','string','/cgi-bin-zm/nph-zms','relative/path/to/somewhere','(?^:^((?:[^/].*)?)/?$)',' $1 ','Web path to zms streaming server',' The ZoneMinder streaming server is required to send streamed images to your browser. It will be installed into the cgi-bin path given at configuration time. This option determines what the web path to the server is rather than the local path on your machine. Ordinarily the streaming server runs in parser-header mode however if you experience problems with streaming you can change this to non-parsed-header (nph) mode by changing \'zms\' to \'nph-zms\'. ','hidden',0,NULL);
INSERT INTO `Config` VALUES (226,'ZM_OPT_FRAME_SERVER','0','boolean','no','yes|no','(?^i:^([yn]))',' ($1 =~ /^y/) ? \"yes\" : \"no\" ','Should analysis farm out the writing of images to disk',' In some circumstances it is possible for a slow disk to take so long writing images to disk that it causes the analysis daemon to fall behind especially during high frame rate events. Setting this option to yes enables a frame server daemon (zmf) which will be sent the images from the analysis daemon and will do the actual writing of images itself freeing up the analysis daemon to get on with other things. Should this transmission fail or other permanent or transient error occur, this function will fall back to the analysis daemon. ','system',0,NULL);

Now restart ZoneMinder and the OH2 ZoneMinder binding. We’ve got monitors on the ZoneMinder web site, we are able to view the video stream, and OH2 picks up alarms from the ZoneMinder server.
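To confirm the API is happy, the two config values can be requested directly (the hostname is a placeholder; the path is the same one the openHAB binding uses):

curl http://zoneminder.example.com/zm/api/configs/view/ZM_PATH_ZMS.json
curl http://zoneminder.example.com/zm/api/configs/view/ZM_OPT_FRAME_SERVER.json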

If you re-run zmupdate.pl, it will remove these two records from the Config table. If you upgrade ZoneMinder, the change to the PHP file will be reverted.

DevOps Alternatives

While many people involved in the tech industry have a wide range of experience in technologies and are interested in expanding the breadth of that knowledge, they do not have the depth of knowledge that a dedicated Unix support person, a dedicated Oracle DBA, or a dedicated SAN engineer has. How much time can a development team reasonably dedicate to expanding the depth of their developers’ knowledge? Is a developer’s time well spent troubleshooting user issues? That’s something that makes the DevOps methodology a bit confusing to me. Most developers I know … while they may complain (loudly) about unresponsive operational support teams, about poor user support troubleshooting skills … they don’t want to spend half of their day diagnosing server issues and walking users through basic how-to’s.

The DevOps methodology reminds me a lot of GTE Wireline’s desktop and server support structure. Individual verticals had their own desktop support workforce. Groups with their own desktop support engineer didn’t share a desktop support person with 1,500 other employees in the region. Their tickets didn’t sit in a queue whilst the desktop tech sorted issues for three other groups. Their desktop support tech fixed problems for their group of 100 people. This meant problems were generally resolved quickly, and some money was saved in reduced downtime. *But* it wasn’t like downtime avoidance funded the tech’s salary. The business, and the department, decided to spend money for rapid problem resolution. Some groups didn’t want to spend money on dedicated desktop support, and they relied on corporate IT. Hell, the techs employed by individual business units relied on corporate IT for escalation support. I’ve seen server support managed the same way — the call center employed techs to manage the IVR and telephony system. If the IVR is malfunctioning, you don’t put a ticket in a queue with the Unix support group and wait; you get the call center technologies support person to look at it NOW. The added advantage of working closely with a specific group is that you got to know how they worked, and could recommend and customize technologies based on their specific needs: an IM platform that allowed supervisors and resource management teams to initiate messages and call center reps to respond to messages; system usage reporting to ensure individuals were online and working during their prescribed times.

Thing is, the “proof” I see offered is how quickly new code can be deployed using DevOps methodologies. Comparing time-to-market for automated builds, testing, and deployment to manual processes in no way substantiates that DevOps is a highly efficient way to go about development. It shows me that automated processes that don’t involve waiting for someone to get around to doing it are quick, efficient, and generally reduce errors. Could similar efficiency be gained by having operation teams adopt automated processes?

Thing is, there was a down-side to having the major accounts technical support team in PA employ a desktop support technician. The major accounts technical support team did not have broken computers forty hours a week. But they wanted someone available from … well, I think it was like 6AM to 10PM, and they employed a handful of part-time techs; the point remains they paid someone to sit around and wait for a computer to break. Their techs tended to be younger people going to school for IT. One sales pitch of the position, beyond on-the-job experience, was that you could use your free time to study. The company saw it as an investment – it got a loyal employee with a B.S. in IT who moved into other orgs, and the college kid got some resume-building experience, a chance to network with other support teams, and a LOT of study time that the local fast food joint didn’t offer. The access design engineering department hired a desktop tech who knew the Unix-based proprietary graphic workstations they used within the group. She also maintained their access design engineering servers. She was busier than the major accounts support techs, but even with server and desktop support she had technical development time.

Within the IT org, we had desktop support people who were nearly maxed out. By design — otherwise we were paying someone to sit around and do nothing. Pretty much the same methodology that went into staffing the call center — we might only expect two calls overnight, but we’d still employ two people to staff the phones between 10P and 6A *just* so we could say we had a 24×7 tech support line. During the day? We certainly wouldn’t hire two hundred people to handle one hundred’s worth of calls. Wouldn’t operations teams be quicker to turn around requests if they were massively overstaffed?

As a pure reporting change, where you’ve got developers and operations people who just report through the same structure to ensure priorities and goals align … reasonable. Not cost effective, but it’s a valid business decision. In a way, though, DevOps as a vogue ideology is the impetus behind financial decisions (just hire more people) and methodology changes (automate it!) that would likely have similar efficacy if implemented in silo’d verticals.

 

openHAB With Custom Built Serial Binding – fix to locking permission issue

When we updated our openHAB server to Fedora 28 and changed to a non-root user, the openhab user was unable to create lock files in /run/lock. As an interim fix, we just changed the permission on the lock folder to allow the openhab account to create files. As a more elegant solution, I’ve built the nrjavaserial JAR file from the source in NeuronRobotics’ repository.

The process to build and use a JAR built from this source follows. Before attempting to build the nrjavaserial jar from source, ensure you have gradle (which will install a LOT of additional packages), lockdev, lockdev-devel, some jdk, and some jdk-devel (I used java-1.8.0-openjdk-1.8.0.181-7.b13.fc28.x86_64 and java-1.8.0-openjdk-devel-1.8.0.181-7.b13.fc28.x86_64 because they were already installed for other projects).

# Set ossrhUsername and ossrhPassword values for the account used to build the project – username and password can be null
[lisa@server ~]# cat ~/.gradle/gradle.properties
ossrhUsername=
ossrhPassword=

# Grab the source
[lisa@server ~]# git clone https://github.com/NeuronRobotics/nrjavaserial.git

# Build the project
[lisa@server ~]# cd nrjavaserial
[lisa@server nrjavaserial]# make linux64 # assuming you’ve got 64-bit linux

# Voila, a jar file
[lisa@server nrjavaserial]# cd build/libs
[lisa@server libs]# ll
total 852
-rw-r--r-- 1 root root 611694 Aug 16 10:08 nrjavaserial-3.14.0.jar
-rw-r--r-- 1 root root 170546 Aug 16 10:08 nrjavaserial-3.14.0-javadoc.jar
-rw-r--r-- 1 root root 85833 Aug 16 10:08 nrjavaserial-3.14.0-sources.jar

Before installing the newly built nrjavaserial-3.14.0.jar into openHAB, ensure you have lockdev installed on your Fedora machine and add your openhab user account to the lock group.

# Verify the lockdev folder was created
[lisa@server ~]# ll /run/lock/
total 4
-rw-r--r-- 1 root root 22 Aug 10 15:35 asound.state.lock
drwx------ 2 root root 60 Aug 10 15:30 iscsi
drwxrwxr-x 2 root lock 140 Aug 16 12:19 lockdev
drwx------ 2 root root 40 Aug 10 15:30 lvm
drwxr-xr-x 2 root root 40 Aug 10 15:30 ppp
drwxr-xr-x 2 root root 40 Aug 10 15:30 subsys
# Add the openhab user to the lock group
[lisa@server ~]# usermod -a -G lock openhab

The openhab user account can now write to the /run/lock/lockdev folder. Install the new jar file into openHAB. When you restart openHAB, verify lock files are created as expected.
[lisa@server ~]# ll /run/lock/lockdev/
total 20
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK...31525
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK..ttyUSB-5
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LCK..ttyUSB-55
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LK.000.188.000
-rw-rw-r-- 5 openhab openhab 11 Aug 16 12:19 LK.000.188.001