Category: System Administration

OpenSSL As A Trusted CA

There are wrappers for OpenSSL that provide certificate authority functionality, but I found myself spending a lot of time trying to get any of them to work. Since I only wanted to generate a few internal certificates (i.e. not something that needed a simple interface for non-techies), I set up an OpenSSL certificate authority and used it to sign certificates.

First, generate a public/private keypair for your CA (use however many days you want; this example is ten years):

openssl genrsa -aes256 -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.cer -days 3652 -sha256

Take ca.cer and publish it in your domain GPO as a trusted root certificate authority (Computer Configuration => Policies => Windows Settings => Security Settings => Public Key Policies => Trusted Root Certification Authorities).

If you are impatient, force the client to update group policy. Otherwise, wait. Eventually you will see your CA in the Windows computer's certificate store as a trusted root certification authority.
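If you do go the impatient route, something like this on a domain client should do it — gpupdate and certutil are standard Windows tools, though checking the Root store this way is just my habit, not part of the GPO process:

rem refresh group policy rather than waiting for the next interval
gpupdate /force
rem list the machine's trusted roots and look for your CA's common name
certutil -store Root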

Now generate certificate(s) against the CA (again, use whatever value for days is reasonable for your purpose):

openssl genrsa -aes256 -out gitlab.rushworth.us.key 2048
openssl req -new -key gitlab.rushworth.us.key -out gitlab.rushworth.us.req
openssl x509 -req -in gitlab.rushworth.us.req -out gitlab.rushworth.us.cer -days 365 -CA /ca/ca.cer -CAkey /ca/ca.key -sha256 -CAcreateserial

On subsequent requests, you can omit the "-CAcreateserial" option.
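One addition worth considering: newer Chrome builds (see the Chrome 58 rant further down) want a subjectAltName rather than a bare CN. A sketch of how I would add one when signing — the extension file name and its contents are assumptions you would adjust per host:

echo "subjectAltName = DNS:gitlab.rushworth.us" > san.cnf
openssl x509 -req -in gitlab.rushworth.us.req -out gitlab.rushworth.us.cer -days 365 -CA /ca/ca.cer -CAkey /ca/ca.key -sha256 -extfile san.cnf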

Domain-joined clients will trust your certificate. Non-domain clients will need to import the CA public key into their trust store.

Git Deployment

I 'inherited' the Git server at work — which means I had to learn how the back end component of Git works (beyond my file-system based implementation where there are just clients and a disk location). It is not as complicated as I feared. The chap who had deployed the Git backend at work chose Bonobo — since he no longer works for the company, I cannot just ask why he chose this particular implementation. It is Windows based and priced within our $0 budget, and I am certain these were selling points. It seems quite stripped down compared to GitHub too — none of the issue tracking / Wiki / chat features. Which, for what my department does, is fine. We are not software developers. We have a lot of internal code for task automation, we have some internal code for departmental web sites, and we have some sample code we hand out to other developers (i.e. someone wants to start using LDAP or ADFS authentication, we can give them a sample implementation in their language). There aren't feature requests. Generally speaking, there aren't simultaneous development tasks on a project.

Since I deciphered the server implementation at work, I wanted to set up a Git server at home too. The limited feature set of Bonobo was off-putting; I wanted integrated issue tracking. Looking at the available open source and free options, I selected GitLab. As a sandbox — poke around the server, see how it works and what features it offers — I wanted something ready-to-go, and I noticed that there is a Docker container for the project. I have helped a few friends who were testing Docker as a development and deployment methodology, and I have even suggested it for my employer's internal development staff — being able to develop and run an application with an integrated web server *without* needing the Windows permissions and configuration for a web server (and doing it all over again when your computer is replaced) seemed efficient. But I had never actually used a Docker container before. It is incredibly easy.

Install Docker — a bit obvious, but that was the most time consuming part of the process. I elected to install it on my Windows laptop for expediency; if we decide not to use GitLab, I haven't thrown a bunch of unnecessary binaries on the server. Lenovo, by default, does not enable virtualisation. Getting into the BIOS config tool (shift then click the power button, keep holding shift whilst you click restart) was the most time consuming bit of the installation.

Once Docker is installed, pull the container from the Docker store (docker pull gitlab/gitlab-ce), then run it. You can remap ports (e.g. --publish 8443:443) if needed.

docker run --detach --hostname gitlab.rushworth.us --publish 443:443 --publish 80:80 --publish 22:22 --name gitlab --restart always --volume /srv/gitlab/config://c/gldata/etc --volume /srv/gitlab/logs:/var/log/gitlab --volume /srv/gitlab/data://c/gldata/data --volume /svr/docker/gitlab/gitlab://c/gldata/gitlab gitlab/gitlab-ce:latest

Not quite there yet — you've got to edit the container config (docker exec -it gitlab vi /etc/gitlab/gitlab.rb) for your environment. Set a valid external URL (external_url 'http://gitlab.rushworth.us'). I also enabled LDAP authentication to test that out.


gitlab_rails['ldap_enabled'] = true

###! **remember to close this block with 'EOS' below**
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: 'ADHostname.rushworth.us'
    port: 636
    uid: 'sAMAccountName'
    method: 'ssl' # "tls" or "ssl" or "plain"
    bind_dn: 'cn=UserID,ou=SystemAccounts,dc=domain,dc=ccTLD'
    password: 'AccountPasswordGoesHere'
    active_directory: true
    allow_username_or_email_login: false
    block_auto_created_users: false
    base: 'ou=ResourceUsers,dc=domain,dc=ccTLD'
    user_filter: '(&(sAMAccountName=*))' # Can add attribute value to restrict authorized users to GitLab access, we leave open to all valid user accounts in the OU. Should be able to authorize based on group membership using linked attribute value like (&(memberOf=cn=group,ou=groupOU,dc=domain,dc=ccTLD))
    attributes:
      username: ['uid', 'userid', 'sAMAccountName']
      email: ['mail', 'email', 'userPrincipalName']
      name: 'cn'
      first_name: 'givenName'
      last_name: 'sn'
EOS


The default is to retain a lot of log files — 30 days! This might be reasonable in a corporate environment, but even for production at home … that’s a lot of space dedicated to log files.


logging['logrotate_frequency'] = "daily" # rotate logs daily
logging['logrotate_rotate'] = 3 # keep 3 rotated logs
logging['logrotate_compress'] = "compress" # see 'man logrotate'
logging['logrotate_method'] = "copytruncate" # see 'man logrotate'


And finally configure SMTP for outbound mail. We don’t use authentication on our SMTP server; it controls relay based on source IP. We do use starttls, but the certificate is not going to be trusted without additional configuration … so I set the ssl verify mode to none.


gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.hostname.ccTLD"
gitlab_rails['smtp_port'] = 25
# gitlab_rails['smtp_user_name'] = "smtp user"
# gitlab_rails['smtp_password'] = "smtp password"
# gitlab_rails['smtp_domain'] = "example.com"
# gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
# gitlab_rails['smtp_tls'] = false

###! **Can be: 'none', 'peer', 'client_once', 'fail_if_no_peer_cert'**
###! Docs: http://api.rubyonrails.org/classes/ActionMailer/Base.html
gitlab_rails['smtp_openssl_verify_mode'] = 'none'


Once the config has been updated, restart the container (docker restart gitlab).
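Restarting works because the image runs its reconfigure routine at startup; if you'd rather not bounce the whole container, running the reconfigure directly should (I believe) accomplish the same thing with the omnibus gitlab/gitlab-ce image:

docker exec gitlab gitlab-ctl reconfigure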

Access the web site and you’ll be prompted to set a password for the admin user, root. You can click the ‘ldap’ tab and log in with Active Directory credentials. Fin.

If we deploy this for a production system, I would set up SSL on the web site and possibly externalize the GitLab database to MySQL. The external database is more of an academic experiment because we already use MySQL (and I still don't want to learn about vacuuming PostgreSQL).

Apache Airflow — No Backfill

A lot of software seems to be designed to save the user from themselves. This is great 90% of the time when you mess up and really want their help (or when the software’s help is cosmetic … my gripe against auto-correcting smart quotes, as an example). But I seem to fall into the other 10% a lot. And I mean a LOT. Apache Airflow jobs try to grab new information all.of.the.time. It’s a feature called “backfill”, and I’m sure it helps all sorts of people do exactly what they really wanted done. Not me 🙁

Having updated to 1.8, though, I now see a configuration parameter to instruct a DAG not to do me any favors. Just do what you’re asked when you’re asked to do it: catchup = False

DAG('testjob', default_args=default_args, schedule_interval='0 * * * *', catchup=False)

OK, Google

Chrome 58 was released last month – and since then, I've gotten a LOT of certificate errors. Especially internally (Windows CA signed certs @ home and @ work). It's really annoying – yeah, we don't have subjectAltName dNSName entries defined on those certs. And I know the RFC says falling back to CN is deprecated (seriously, search https://tools.ietf.org/html/rfc2818 for subjectAltName), but the same text was in there in 1999 … so not exactly a new innovation in SSL policy. Fortunately, there's a registry key that will override this for now.
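For reference, this is the registry override I'm talking about — to the best of my recollection the policy is EnableCommonNameFallbackForLocalAnchors, but verify the key and value against Google's published policy list before trusting my memory:

reg add HKLM\Software\Policies\Google\Chrome /v EnableCommonNameFallbackForLocalAnchors /t REG_DWORD /d 1 /f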

The problem I have with SAN certificates is exemplified in Google's cert on the web server that hosts the chromium changes site — a certificate valid for a hundred-odd wild-carded hostnames.

Seriously – this certificate ensures that the web site is any of a hundred wild-carded hostnames … and the more places you use a certificate, the greater the possibility of it being compromised. I get why people like wildcards — UALR was able to buy one cert & use it across the entire organisation. Cost effective and easy. The second through nth guy who wanted an SSL cert didn't need to go about establishing his credentials within the organisation. He didn't have to figure out how to make a cert request or how to pay for it. Just ask the first guy for a copy of his public/private key pair. Or run everything through your load balancer on the wildcard certificate & trust whatever backend cert happens to be in place.

But the point of security design is not trusting large groups of people to act properly — to secure their data appropriately, to patch their systems and configure them to avoid attacks, to replace the certificate EVERYWHERE every TIME someone leaves the organisation, and otherwise to prevent a certificate installed on dozens of servers from being accessed by a malicious party. My personal security preference would be seeing a browser flag every time a cert has a wildcard or more than one SAN.

Exchange Online

We're moving users to the magic in-the-cloud Exchange. Is this a cost effective solution? Well – that depends on how you look at the cost. The on prem cost includes a lot of money to external groups that are still inside the company. If the SAN team employs ten people … well, that's a sunk cost whether they're administering our disk space or not. If we were laying people off because services moved out to magic cloud hosted locations … then there's a cost savings. But that's not reality. Point being, there's no good comparison because the internal "costs" are inflated. Microsoft's pricing to promote cloud adoption means Exchange Online (EOL) is essentially free with the purchase too. I'm sure the MS cost will go up in the future — I remember them floating "leased" software back in the late 90's (prelude to SaaS) and thinking that was a total racket. You move all your licensing to this convenient "pay for what you use" model. And once a plurality of customers have adopted the licensing scheme, start bumping up rates. It's a significant undertaking to migrate over – but if I'm saving hundreds of thousands of dollars a year … worth it. Rates go up, and the extra fifty grand a year isn't worth the cost and time of migrating back to on prem. And next year, that fifty grand more isn't worth it either. Economies of scale say MS (or Amazon, or whomever) can purchase ten thousand servers and petabytes of disk space for less money than I can get two thousand servers and a hundred terabytes … but they want to make a profit too. There might be a small cost savings in the long term, but nothing like the hundreds of thousands we're being sold up front.

Regardless – business accounting isn't my thing. A lot of it seems counter-productive if not outright nonsensical. There are actually features in Exchange Online that do not exist in the on prem solution. The one I discovered today is subaddressing. At home, we use the virtusertable in sendmail to map entire subdomains to a single mailbox. This means I can provide a functional e-mail address, on the fly, to a new company and have mail delivered into my mailbox. Works fine for a small number of people, but it is not a scalable solution. Some e-mail providers started using a delimiter after which any string is ignored. This means I could have a GMail account of DevNull@gmail.com but get mail as DevNull+SomeRandomString@gmail.com or DevNull+CompanyNameHere@gmail.com … great for identifying who is losing your e-mail address out in Internet-land. It is also somewhat trivial to write a rule that takes +SomeCompromisedAddress and moves it to trash. EOL lets us do that.
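For the curious, the sendmail side of that is a one-line virtusertable entry — the subdomain and local account below are made-up examples, and the subdomain also needs to be listed as a virtual-user domain in your sendmail config:

# /etc/mail/virtusertable -- deliver anything sent to the subdomain into one mailbox
@shopping.rushworth.us    mylocaluser

# rebuild the hashed map after editing
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable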

Another interesting feature that is available on prem but not convenient is free busy federation (now termed an “organisational relationship”). In previous iterations, both parties needed to establish firewall rules (and preferably a B2B connection) to transfer the free busy data. But two companies with MS tenants should be able to link up without having to enact firewall changes. We still connect to the tenant. The other party still connects to the tenant. It’s our two tenants that communicate via MS’s network. Something I’m interested in playing around with … might try to see if we can link our sandbox tenant up to the production one just to see what exactly is involved.

Git, Version Management, Branches, and Sub-modules

As we have increased in staff, we’ve gained a few new programmers. While it was easy enough for us to avoid stepping on each other’s toes, we have experienced several production problems that could be addressed by rethinking our repository configuration.

Current state: We have a monolithic repository for different batch servers. Each server has a clone of the repository, and the development equivalent has a clone of the same repository. The repository has top-level folders for each independent script. There is a SharedTools top-level folder for reusable functions.

Changes are made on forks located both on the development server and individuals’ computers, tested on the development server, then pushed to the repo. Under a CRQ, a pull is performed from the production server to elevate the new code. Glomming dozens of scripts into a single repository was simple and quick; but, with new people involved with development efforts, we have experienced challenges with changes being lost, unintentional elevation of code, and having UAT run against under-development code.

Pitfalls: Four people working on four different scripts are working in the same repository. We have had individuals developing on their laptop overwrite changes (force push is dangerous, even if force-with-lease is used), we have had individuals developing on the dev server commit other people’s edits (git add * isn’t a good idea in a shared environment – specifically add changed files to your commit), and we’ve had duplication of effort (which is certainly a problem outside of development efforts, and one that can be addressed outside of git).

We could address the issues we've seen through training and communication – ensure anyone contributing code to the repository adequately understands what force push means, appreciates what wildcards include, and generally has a more nuanced understanding of git than the one-hour training I provided last year could convey. But I think we should consider the LOE and advantages of using a technical solution to ensure less experienced git users are able to successfully use our repositories.

Proposal – Functional Splits:

While we have a few more individuals with development experience, they are quite specifically Windows script developers (PowerShell, VBScript, etc). We could just stop using the Windows batch server and let the two or three Microsoft guys figure it out for themselves. This limits individual growth – I “don’t do” PowerShell development, the Windows guys don’t learn Linux. And, as the group changes over time, we have not addressed the underlying problem of multiple people working on the same codebase.

Proposal – Git Changes:

We can begin using branches for development efforts and reserve “master” for ready-for-deployment code. Doing so, we eliminate the possibility of inadvertently elevating code before it is ready – only commands targeted to “origin master” will be run on production servers.

Using descriptive branch names (Initials-ScriptFolderName-SummaryOfChange) will help eliminate duplicated efforts. If I notice we need to send a few mass mails with inline images, seeing “TJR-sendMassMail-AddInlineImages” in the branch list lets me know you’ve got it covered. And “TJR-sendMassMail-RecipientListFromLiveLDAPQuery” lets me know you’re working on something else and I’m setting myself up for merge challenges by working on my change right now. If both of our changes are high priority, we might choose to work through a merge operation. This would be an informed, upfront decision instead of a surprise message indicating that fast-forward merging is not possible.

In large development projects, branch management can become a full-time pursuit. I do not think that will be an issue in our case. Minimizing the number of branches used, and not creating branches based on branches, makes branch management a simpler task. We should be able to perform fast-forward merges to push code into master because our branches modify different files in the repository.

To begin a development effort, create a branch and push it to the git server. Make your changes within that branch, and ensure you keep your branch in sync with master – you cannot merge branches that are “behind” into master without force. Once you are finished with your development, merge your branch into master and delete your branch. This approach will require additional training to ensure everyone understands how to create, rebase, merge, and delete branches (and not to just force operations because it lets you complete your task).
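A rough sketch of that lifecycle — the branch name is illustrative, and the --ff-only flag just enforces the fast-forward expectation described above:

# start a development effort
git checkout master
git pull origin master
git checkout -b TJR-sendMassMail-AddInlineImages
git push -u origin TJR-sendMassMail-AddInlineImages

# ... make and commit changes, keeping the branch current with master ...
git fetch origin
git rebase origin/master

# when the change is ready, merge to master and clean up
git checkout master
git pull origin master
git merge --ff-only TJR-sendMassMail-AddInlineImages
git push origin master
git branch -d TJR-sendMassMail-AddInlineImages
git push origin --delete TJR-sendMassMail-AddInlineImages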

Instead of using 'master' for production code, the inverse is equally viable: create a "stable" branch that is for production code and only pull that branch to PROD servers. I believe this approach is intended to prevent accidental changes to prod code – you've got to intentionally target "origin stable" with an operation to impact production code.

Our single repository configuration is a detriment to using branches if development is performed on the DEV server. To illustrate the issue, create a BranchTesting repo and add a single file to master. Create a Branch1 branch in one command window and check it out. Create a Branch2 in a second command window and check it out. In your first command window, add a file and commit it. In your second command window, add a file and commit it. You will find that both files have been committed to Branch2.
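A quick way to reproduce that behaviour, assuming both command windows are sitting in the same working copy (which is effectively what happens when several people develop in a single clone on the DEV server):

git init BranchTesting && cd BranchTesting
echo "base" > readme.txt
git add readme.txt && git commit -m "initial commit on master"

# window 1
git checkout -b Branch1

# window 2 -- same working copy, so HEAD moves for both windows
git checkout -b Branch2

# window 1 (still thinks it is on Branch1)
echo "one" > file1.txt
git add file1.txt && git commit -m "change from window 1"

# window 2
echo "two" > file2.txt
git add file2.txt && git commit -m "change from window 2"

git log --oneline Branch2   # both commits are here; Branch1 never moved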

How can we address this issue?

Develop on our individual workstations instead of the DEV server. Not sharing a file set for our development efforts eliminates the branch context switching problem. If you clone the repo to your laptop, Gary clones the repo to his laptop, and I clone the repo to my laptop … you can create TJR-sendMassMail-AddInlineImages on your computer, write and test the changes locally, commit the changes and pull them to the DEV server for more robust testing, and then merge your changes into master when you are ready to elevate the code. I can simultaneously create LJR-monitorLDAPReplication-AddOUD11Servers, do my thing, commit changes and pull them to the DEV server (first using "git branch" to determine if someone else is already testing their branch on the DEV server), and merge my stuff into master when I'm ready to elevate. Other than remembering to verify that DEV has master checked out (i.e. no one else is testing, so the resource is free), we do not have resource contention.

While it may not be desirable to fill up our laptop drives with the entire code set from six different application servers, sparse-checkout allows you to select the specific folders that will come down to your clone.
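The mechanics look something like this with the git versions we run — the clone URL and folder names are placeholders for our actual repo layout:

git clone --no-checkout https://gitserver.example.com/BatchServer.git
cd BatchServer
git config core.sparseCheckout true
echo "sendMassMail/" >> .git/info/sparse-checkout
echo "SharedTools/" >> .git/info/sparse-checkout
git checkout master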

The advantage of this approach is that it has no initial LOE beyond training and process change. The repositories are left as-is, and we start using them differently.

Unfortunately, this approach may not be viable in some instances – when access to data sources is restricted by IP ACL, you may not be able to do more than linting on your laptop. It may not even be possible to configure a Windows laptop to run some of our code – some Linux requirements are difficult to address in Windows (the PKI website’s cert info check, for instance), and testing code on Windows may not ensure successful operation on the Linux hosts.

Break the monolithic repositories into discrete repositories and use submodules to allow the multiple independent repositories to be "rolled up" into a top-level repository. Development is done in the submodule repositories. I can clone monitorLDAPReplication, you can clone sendMassMail, etc. Changes can be made within our branches of these completely different repositories and merged into the individual repository's master branch for release to the production environment. Release can be done for the superset ("--recurse-submodules") or individual sub-modules.

This would require splitting a repository into its individual components and configuring the sub-module relationships. This can be a scripted operation, and it is an incremental change to the script I used to create the repositories and ingest code; but the LOE for implementation is a few days of script writing / testing. Training will be required to ensure individuals can register their submodules within the top-level repo, and we will need to accustom ourselves to maintaining individual repos.
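Registering a split-out repo as a submodule of the roll-up repo is only a couple of commands — the server URL and repo names below are placeholders:

# inside the top-level (roll-up) repository
git submodule add https://gitserver.example.com/sendMassMail.git sendMassMail
git commit -m "Track sendMassMail as a submodule"

# a production server pulls everything in one go
git clone --recurse-submodules https://gitserver.example.com/BatchServer.git

# pick up new submodule commits later
git submodule update --remote --merge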

Or just break monolithic repositories into discrete repositories. The level of effort is about the same initially, but no one needs to learn how to set up a new submodule. We lose single-repo conveniences, but there’s literally no association between our different script folders where someone working in X could inadvertently impact Y.

Owntracks Stuck In “Connecting” To MQTT When Using WebSockets

Our home automation presence is maintained through an Android app, OwnTracks, which updates a Mosquitto server via a WebSockets reverse proxy. Mosquitto runs on a Fedora 25 server and was installed from the default RPM repository.

Recently, we stopped receiving location updates – both of our Android clients were stuck “Connecting” to the MQTT server. Nothing appeared in the Apache access or error logs, and capturing network traffic only got a small number of packets (TCP session overhead ‘stuff’). Even bypassing the reverse proxy and using the internal network to communicate directly to the Mosquitto server only created a couple of packets. Using a test client (http://www.hivemq.com/demos/websocket-client/), I saw strange connection failures — so I knew the problem was not specific to the OwnTracks client.

It seems there was a bug in libwebsockets v2.1.1 (and possibly others) — when we updated our Fedora installation, the new libwebsockets broke our MQTT over WebSockets. Currently, the Fedora repository still contains an impacted version of libwebsockets. To resolve the issue, I built the latest stable libwebsockets and built mosquitto against this updated library.

Process: The first step is to remove the dnf managed packages (rpm -e libwebsockets libwebsockets-devel mosquitto). Then build libwebsockets and mosquitto.

To Build LibWebSockets:

wget https://github.com/warmcat/libwebsockets/archive/master.zip
unzip master.zip
cd libwebsockets-master/
mkdir build
cd build
cmake ..
make
make install
cp libwebsockets.pc /usr/lib/
cp lws_config.h /usr/include/
cp ../lib/libwebsockets.h /usr/include/
cp ./lib/libwebsockets.so /usr/lib/

To Build Mosquitto:

wget https://github.com/eclipse/mosquitto/archive/master.zip
unzip master.zip
cd mosquitto-master/
vi config.mk # Line 68, change to "WITH_WEBSOCKETS:=yes"
make
make install

Start the Mosquitto server and try again. Voila, presence works again!

Compiling Open ZWave On Fedora 25

Mostly writing this down for myself, for the next time we need to run Open ZWave and build the latest version:

Download libmicrohttpd
Gunzip & untar it
cd libmicrohttpd
./configure
make
make install

Download and build the open-zwave library

mkdir /opt/ozw
cd /opt/ozw
git clone https://github.com/OpenZWave/open-zwave.git
cd open-zwave
make

If the build errors out saying you don't have libudev.h, install systemd-devel (dnf install systemd-devel) and try that make again.

Download open-zwave-control-panel
cd /opt/ozw
git clone https://github.com/OpenZWave/open-zwave-control-panel.git
cd open-zwave-control-panel

Open the Makefile and find the following line:
OPENZWAVE := ../

Change it to:
OPENZWAVE := ../open-zwave

Then find the section that says:
# for Linux uncomment out next three lines
LIBZWAVE := $(wildcard $(OPENZWAVE)/*.a)
#LIBUSB := -ludev
#LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) -lresolv

# for Mac OS X comment out above 2 lines and uncomment next 5 lines
#ARCH := -arch i386 -arch x86_64
#CFLAGS += $(ARCH)
#LIBZWAVE := $(wildcard $(OPENZWAVE)/cpp/lib/mac/*.a)
LIBUSB := -framework IOKit -framework CoreFoundation
LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) $(ARCH) -lresolv

And switch it around to be Linux … the Makefile becomes:
# for Linux uncomment out next three lines
LIBZWAVE := $(wildcard $(OPENZWAVE)/*.a)
LIBUSB := -ludev
LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) -lresolv

# for Mac OS X comment out above 2 lines and uncomment next 5 lines
#ARCH := -arch i386 -arch x86_64
#CFLAGS += $(ARCH)
#LIBZWAVE := $(wildcard $(OPENZWAVE)/cpp/lib/mac/*.a)
#LIBUSB := -framework IOKit -framework CoreFoundation
#LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) $(ARCH) -lresolv

ln -sd ../open-zwave/config
make

Then you can run it:
./ozwcp -p 8889

./ozwcp: error while loading shared libraries: libmicrohttpd.so.12: cannot open shared object file: No such file or directory

strace it (strace ./ozwcp -p 8889)

open("/lib64/tls/x86_64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/tls/x86_64", 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open("/lib64/tls/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/tls", {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0
open("/lib64/x86_64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64/x86_64", 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open("/lib64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/lib64", {st_mode=S_IFDIR|0555, st_size=122880, ...}) = 0
open("/usr/lib64/tls/x86_64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib64/tls/x86_64", 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open("/usr/lib64/tls/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib64/tls", {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0
open("/usr/lib64/x86_64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib64/x86_64", 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open("/usr/lib64/libmicrohttpd.so.12", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/lib64", {st_mode=S_IFDIR|0555, st_size=122880, ...}) = 0

Huh … not looking in the right place. I’m sure there’s a right way to sort this, but we’re using Open ZWave for a couple of minutes to test some ZWave security stuff. Not worth the time:

ln -s /usr/local/lib/libmicrohttpd.so.12.41.0 /usr/lib64/libmicrohttpd.so.12
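For completeness, the "right way" is probably to register /usr/local/lib with the dynamic linker instead of symlinking individual libraries — something along these lines, assuming make install dropped the library in /usr/local/lib:

echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
ldconfig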

Try again (./ozwcp -p 8889). Voila, "2017-04-17 20:35:05.223 Always, OpenZwave Version 1.4.0 Starting Up". Use your browser to hit http://<ipaddress>:8889 to access the Open ZWave Control Panel.

Uninformed Upgrades (PHP 5 => 7)

TL;DR: Check the list of what is being updated before you let an OS automatically update its programs.

We have a home automation / MythTV / ZoneMinder server with automatic updates disabled. In the process of updating OpenHAB to OpenHAB2, Scott suggested we update everything else while we’re at it. No big, did a quick “dnf update” … got a gig of packages downloaded, waiting for >1400 packages to install, and rebooted.

PHP could not talk to MySQL. At all. ZoneMinder just threw an error saying we didn’t have the PHP MySQL module installed (it worked half an hour ago, so it is INSTALLED). MythWeb completely failed to load – just a white screen. The quick web view of OpenHAB persistence history threw a class not found error.

I checked to see if the extensions were loaded (use the command “print_r(get_loaded_extensions());” in a PHP page) – huh, a LOT of my modules were missing. But there weren’t any useful errors anywhere indicating why.

I modified the php.ini file to show startup errors.

[root@fedora01 conf.modules.d]# grep display_startup_errors /etc/php.ini
; display_startup_errors
display_startup_errors = On

Oooooh, now there are errors! A lot of them. Not particularly useful, but at least a good clue that this isn’t going to go so well for me:

PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo.so' - /usr/lib64/php/modules/pdo.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysqlnd.so' - /usr/lib64/php/modules/mysqlnd.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/bcmath.so' - /usr/lib64/php/modules/bcmath.so: undefined symbol: _emalloc_16 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/bz2.so' - /usr/lib64/php/modules/bz2.so: undefined symbol: zend_fetch_resource2_ex in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/calendar.so' - /usr/lib64/php/modules/calendar.so: undefined symbol: _emalloc_32 in Unknown on line 0
PHP Warning: PHP Startup: ctype: Unable to initialize module\nModule compiled with module API=20151012\nPHP compiled with module API=20131226\nThese options need to match\n in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/curl.so' - /usr/lib64/php/modules/curl.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/dom.so' - /usr/lib64/php/modules/dom.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/exif.so' - /usr/lib64/php/modules/exif.so: undefined symbol: zend_hash_str_exists in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/fileinfo.so' - /usr/lib64/php/modules/fileinfo.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ftp.so' - /usr/lib64/php/modules/ftp.so: undefined symbol: zend_fetch_resource2 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/gd.so' - /usr/lib64/php/modules/gd.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/gettext.so' - /usr/lib64/php/modules/gettext.so: undefined symbol: zend_parse_arg_str_slow in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/iconv.so' - /usr/lib64/php/modules/iconv.so: undefined symbol: _zval_get_string_func in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: _emalloc_56 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mbstring.so' - /usr/lib64/php/modules/mbstring.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysqlnd.so' - /usr/lib64/php/modules/mysqlnd.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/phar.so' - /usr/lib64/php/modules/phar.so: undefined symbol: zend_sort in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/posix.so' - /usr/lib64/php/modules/posix.so: undefined symbol: _zend_hash_str_update in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/shmop.so' - /usr/lib64/php/modules/shmop.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/simplexml.so' - /usr/lib64/php/modules/simplexml.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sockets.so' - /usr/lib64/php/modules/sockets.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sqlite3.so' - /usr/lib64/php/modules/sqlite3.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sysvmsg.so' - /usr/lib64/php/modules/sysvmsg.so: undefined symbol: _emalloc_64 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sysvsem.so' - /usr/lib64/php/modules/sysvsem.so: undefined symbol: _emalloc_24 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sysvshm.so' - /usr/lib64/php/modules/sysvshm.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/tidy.so' - /usr/lib64/php/modules/tidy.so: undefined symbol: _zend_hash_str_update in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/tokenizer.so' - /usr/lib64/php/modules/tokenizer.so: undefined symbol: _emalloc_large in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/xml.so' - /usr/lib64/php/modules/xml.so: undefined symbol: _zend_hash_str_add in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/xmlwriter.so' - /usr/lib64/php/modules/xmlwriter.so: undefined symbol: _emalloc_16 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/xsl.so' - /usr/lib64/php/modules/xsl.so: undefined symbol: dom_node_class_entry in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysql.so' - /usr/lib64/php/modules/mysql.so: undefined symbol: mysqlnd_connect in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/mysqli.so' - /usr/lib64/php/modules/mysqli.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_mysql.so' - /usr/lib64/php/modules/pdo_mysql.so: undefined symbol: mysqlnd_allocator in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/pdo_sqlite.so' - /usr/lib64/php/modules/pdo_sqlite.so: undefined symbol: php_pdo_unregister_driver in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/wddx.so' - /usr/lib64/php/modules/wddx.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/xmlreader.so' - /usr/lib64/php/modules/xmlreader.so: undefined symbol: dom_node_class_entry in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: _emalloc_56 in Unknown on line 0

Turns out DNF installed PHP 7, but didn’t do anything to remove the PHP 5 modules from my Apache configuration:

[root@fedora01 tmp]# cd /etc/httpd/modules
[root@fedora01 modules]# grep php *
Binary file libphp5.so matches
Binary file libphp5-zts.so matches
Binary file libphp7.so matches
Binary file libphp7-zts.so matches

[root@fedora01 modules]# mkdir /tmp/oldphp
[root@fedora01 modules]# mv libphp5* /tmp/oldphp

And remove them from conf.modules.d too — if you just remove the module files but still try to load them from conf.modules.d, Apache will fail to start. Alternatively, you could remove only the lines from conf.modules.d and leave the module files in place … but I don't want a lot of no-longer-used files sitting there to confuse me in a year or two!

[root@fedora01 modules]# cd /etc/httpd/conf.modules.d/
[root@fedora01 conf.modules.d]# grep php *
10-php.conf: LoadModule php5_module modules/libphp5.so
10-php.conf: LoadModule php5_module modules/libphp5-zts.so
15-php.conf:# Cannot load both php5 and php7 modules
15-php.conf:<IfModule !mod_php5.c>
15-php.conf: LoadModule php7_module modules/libphp7.so
15-php.conf:<IfModule !mod_php5.c>
15-php.conf: LoadModule php7_module modules/libphp7-zts.so

[root@fedora01 conf.modules.d]# mv 10-php.conf /tmp/oldphp/

Then restart Apache without PHP 5:

[root@fedora01 conf.modules.d]# service httpd start
Redirecting to /bin/systemctl start httpd.service

Voila, perfectly functioning web sites. And, yeah, I should probably check the list of “what will be updated” when I update a server. Would save HOURS of reading through strace output to find out old versions were still hanging about.
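In that spirit, a couple of commands that would have flagged the PHP major-version bump before it happened — nothing fancy, just dnf's own review tools:

dnf check-update              # list pending updates without touching anything
dnf update                    # review the transaction summary before answering 'y'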

 

LAPS For Local Computer Administrator Passwords

Overview

LAPS is Microsoft's solution to a long-standing problem within corporations using Windows computers: when you image computers, all of the local administrator passwords are the same. Some organizations implement a process to routinely change that password, but someone who is able to compromise the local administrator password on one box basically owns all of the other imaged workstations until the next password change.

Because your computer’s local administrator password is the same as everyone else’s, IT support cannot just give you a local password to access your box when it is malfunctioning. This means remote employees with incorrect system settings end up driving into an office just to allow an IT person to log into the box.

With LAPS, there is no longer one ring to rule them all – LAPS allows us to maintain unique local administrator passwords on domain member computers. A user can be provided their local administrator password without allowing access to all of the other domain-member PCs (and a compromised password on one box lets the attacker own only that box). A compromised box is still a problem, but access to other boxes within the domain would only be possible by retrieving other credentials stored on the device.

Considerations

Security: The end user is prevented from accessing the password or interacting with the process. The computer account manages the password, not the user (per section 4 of LAPS_TechnicalSpecification.docx from https://www.microsoft.com/en-us/download/details.aspx?id=46899).

Within the directory, read access is insufficient (per https://blogs.msdn.microsoft.com/laps/2015/06/01/laps-and-password-storage-in-clear-text-in-ad/) to view the attribute values. In my proposed deployment, users (even those who will be retrieving the password legitimately) will use a web interface, so a single service acct will have read access to the confidential ms-Mcs-AdmPwd attribute and write access to ms-Mcs-AdminPwdExpirationTime. There are already PowerShell scripts published to search an improperly secured directory and dump a list of computer names & local administrator passwords. You should run Find-AdmPwdExtendedRights -Identity <OU FQDN> to determine who has the ability to read the password values and avoid this really embarrassing oversight.

Should anyone have access to read the ms-Mcs-AdmPwd value beyond the service account? If the web interface goes down for some reason, is obtaining the local administrator password sufficiently important that, for example, help desk management should be able to see the password through the MS provided client? Depends on the use cases, but I’m guessing yes (if for no other reason than the top level AD admins will have access and will probably get rung up to find the password if the site goes down).

In the AD permissions, watch who has write permission to ms-Mcs-AdminPwdExpirationTime, as write access allows someone to bump out the expiry date for the local admin password. Are we paranoid enough to run a filter for expiry > GPO interval? Or does setting "Do not allow password expiration time longer than required by policy" to Enabled sufficiently mitigate the issue? To me, it does … but the answer really depends on how confidential the data on these computers happens to be.

With read access to ms-Mcs-AdmPwdExpirationTime, you can ascertain which computers are using LAPS to manage the local administrator password (a future value is set in the attribute) and which are not (a null or past value). Is that a significant enough security risk to worry about mitigating? An attacker may try to limit their attacks to computers that do not use LAPS to manage the local admin password. They can also ascertain how long the current password will be valid.

How do you gain access to the box if the local admin password stored in AD does not work (for whatever reason)? I don't think you're worse off than you would be today – someone might give you the local desktop password, someone might make you drive into the office … but it bears considering whether we've created a scenario where someone might have a bigger problem than under the current setup.

Does this interact at all with workplace-joined computers? My guess is no, but I haven't found anything specific about how workplace-joined computers interact with corporate GPOs.

Server Side

Potential AD load – depends on expiry interval. Not huge, but non-zero.

Schema extension needs to be loaded. Remove extended rights from attribute for everyone who has it. Add computer self rights. Add control access for web service acct – some individuals too as backup in case web server is down??

Would a report on almost-expired passwords, with notification to someone, have value?

Client Side

Someone else figures this out, not my deal-e-o. Set GPO for test machines, make sure value populates, test logon to machine with password from AD. Provide mechanism to force update of local admin password on specific machine (i.e. if I ring in and get the local admin password today, it should get changed to a new password in some short delta time).

Admin Interface

Web interface: provide a computer name & get the password. Log who made the request & what computer name was requested. If more than X requests are made per user in some delta time, send an e-mail alert to the admin user just in case it is suspicious activity. If more than Y requests are made per user in a longer delta time, send an e-mail alert to the admin user's manager.

Additionally, we need a function to clear the password expiry (force the machine to set a new password) to be used after the local password is given to an end user.

User Interface

Can we map users to computer names and give the user a process to recover the password without calling the help desk? Or have the manager log in & be able to pull the local administrator password for their directs? Or some other way to go about actually reducing call volume.

Future Considerations

Excluding ms-Mcs-AdmPwd from replication to RODCs – there is really no point to it being there.

Do we get this hooked up for acquired company domains too, or do they wait until they get in the WIN domain?

Does this facilitate new machine deployment to remote users? If you get a newly imaged machine & know its name, get the local admin password, log in, VPN in … can you do a run-as to get your creds cached? Or do a change user and still have the VPN session running so you can change to a domain user account?

LAPS For Servers: Should this be done on servers too? The web site could restrict who could view desktops vs. who could view servers … but it would save time/effort when someone leaves the group/company there too. Could even have non-TSG folks who would be able to get access to specific boxes – no idea if that's something Michael would want, but it's the same idea as the desktop side where now I wouldn't give someone the password 'cause it's the password for thousands of other computers … there may be people they wouldn't want having local admin on any WIN box they maintain, but having local admin on the four boxes that run their app … maybe that's a bonus. If it is deployed to servers, make sure they don't put it on DCs (unless you want to use LAPS to manage the domain administrator password … which is an interesting consideration but has so many potential problems I don't want to think about it right now, especially since you'd have to find which DC updated the password most recently).

LAPS For VDI: Should this be done on VDI workstations? Even though it's easier to set the password on the base VDI images than on each individual workstation, it's still manual effort & provides an attack vector for all of the *other* VDI sessions. Persistent sessions are OK without any thought because they are functionally no different than workstations. Non-persistent sessions with a new name each time are OK too – although I suspect you end up with a BUNCH of machine objects in AD that need to be cleaned up as new machine names come online. Maybe VDI sorts this … but the LAPS 'stuff' is functionally no different than bringing a whole bunch of new workstations online all the time.

Non-persistent sessions with same computer name … since the password update interval probably won’t have elapsed, the in-image password will be used. Can implement an on-boot script that clears AdmPwdExpirationTime to force change. Or a script to clear value on system shutdown (but that would need to handle non-clean shutdowns). That would require some testing.

 

Testing Process

We can have a full proof-of-concept test by loading the schema into the test Active Directory (verify no adverse impact is seen) and having a workstation joined to the test domain. We could provide a quick web site where you input a computer name & get back a password (basically lacking the security-related controls where a number of requests generates some action). This would allow testing of the password on the local machine. It would also allow testing of force-updating the local admin password.

Once we determine that this is worth the effort, the web site would need to be fleshed out (DB created for audit tracking). Schema and rights would need to be set up in AD. Then it's pretty much on the desktop / GPO side. I'd recommend setting the GPO for a small number of test workstations first … but that's what they do for pretty much any GPO change, so not exactly groundbreaking.