Category: Technology

Android Mail Client Malfunction On FierceXL

Both Scott and I have an odd issue with our FierceXL phones using the stock mail client to communicate with Exchange 2013 over the OWA interface. Randomly, one or more of the connected accounts stops receiving e-mail. We know OWA still works and is available from the phone — we can go into Chrome on the phone and log into OWA. There is absolutely no traffic coming across the reverse proxy / Exchange server from the phone's IP. Switching between the cellular network and home WiFi has no impact.

I had just been rebooting my phone. Upon startup, communication is again seen on the reverse proxy server. A few seconds later, the backlog of new mail starts popping into the mail client. Scott recently discovered that you can resolve the issue by closing the mail app (bringing up the recent/running application list and swiping mail off of the screen) and re-opening it.

It appears something within the mail client is getting a thread hung — not the mail client en toto, since I often stop receiving messages on only one of my three accounts. Ending the process and re-spawning it clears whatever is hung. Unfortunately, we have not had any updates for these phones since November of last year, so there is no quick software fix that can be applied to resolve the issue.

The Peril Of Hosting Your Own Services

I love hosting my own services — home automation, file shares, backups, e-mail, web servers, DNS … a bit of paranoia, a bit of control freak, and a bit of pride. But every now and again, hosting my own services causes problems because, well, vendors don't develop processes around someone with servers in their house.

We got a new cable modem. Scott went to a web page (happened to be Google) and got redirected to the TWC activation page. Went through whatever, ended up calling into support, and finally our account was sorted. Woohoo! Everything works … umm, except I cannot search Google.

Turns out TWC manages their activation redirection by serving up bogus DNS info — their server IP instead of the real one. Which then got cached on our DNS server. No idea what TTL TWC set on their bogus data, but it was more than a minute or two. Had to clear the DNS server cache before we were able to hit Google sites again.
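
Clearing the cache is a one-liner once you realize that is the problem. We run BIND; if your resolver differs, the command will too, and the hostname below is just an example:

rndc flushname www.google.com     # drop the poisoned record
rndc flush                        # or clear the entire cache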

OK, Google

Chrome 58 was released last month – and since then, I’ve gotten a LOT of certificate errors. Especially internally (Windows CA signed certs @ home and @ work). It’s really annoying – yeah, we don’t have SAN dNSName attributes defined. And I know the RFC says falling back to CN is deprecated (seriously, search https://tools.ietf.org/html/rfc2818 for subjectAltName), but that same text has been in there since the RFC was published back in 2000 … so not exactly a new innovation in SSL policy. Fortunately there’s a registry key that will override this for now.

The problem I have with SAN certificates is exemplified in Google’s cert on the web server that hosts the chromium changes site:

Seriously – this certificate asserts that the web site is any of these hundred wild-carded hostnames … and the more places you use a certificate, the greater the possibility of it being compromised. I get why people like wildcards — UALR was able to buy one cert & use it across the entire organisation. Cost effective and easy. The second through nth guy who wanted an SSL cert didn’t need to go about establishing his credentials within the organisation. He didn’t have to figure out how to make a cert request or how to pay for it. Just ask the first guy for a copy of his public/private key pair. Or run everything through your load balancer on the wildcard certificate & trust whatever backend cert happens to be in place.

But the point of security design is not trusting large groups of people to act properly: to secure their data appropriately, to patch their systems, to configure their systems to avoid attacks, to replace the certificate EVERYWHERE every TIME someone leaves the organisation, and to otherwise prevent a certificate installed on dozens of servers from being accessed by a malicious party. My personal security preference would be seeing a browser flag every time a cert has a wildcard or more than one SAN.

Exchange Online

We’re moving users to the magic in-the-cloud Exchange. Is this a cost effective solution? Well – that depends on how you look at the cost. The on prem cost includes a lot of money to external groups that are still inside the company. If the SAN team employs ten people … well, that’s a sunk cost if they’re administering our disk space or not. If we were laying people off because services moved out to magic cloud hosted locations … then there’s a cost savings. But that’s not reality. Point being, there’s no good comparison because the internal “costs” are inflated. Microsoft’s pricing to promote cloud adoption means EOL is essentially free with purchase too. I’m sure the MS cost will go up in the future — I remember them floating “leased” software back in the late 90’s (prelude to SaaS) and thinking that was a total racket. You move all your licensing to this convenient “pay for what you use” model. And once a plurality of customers have adopted the licensing scheme, start bumping up rates. It’s a significant undertaking to migrate over – but if I’m saving hundreds of thousands of dollars a year … worth it. Rates go up, and the extra fifty grand a year isn’t worth the cost and time for migrating back to on prem. And next year that fifty grand more isn’t worth it either. Economies of scale say MS (or Amazon, or whomever) can purchase ten thousand servers and petabytes of disk space for less money than I can get two thousand servers and a hundred terabytes … but they want to make a profit too. There might be a small cost savings in the long term, but nothing like the hundreds of thousands we’re being sold up front.

Regardless – business accounting isn’t my thing. A lot of it seems counter-productive if not outright nonsensical. There are actually features in Exchange Online that do not exist in the on prem solution. The one I discovered today is subaddressing. At home, we use the virtusertable in sendmail to map entire subdomains to a single mailbox. This means I can provide a functional e-mail address, on the fly, to a new company and have mail delivered into my mailbox. It works fine for a small number of people, but it is not a scalable solution. Some e-mail providers started using a delimiter after which any string is ignored. This means I could have a GMail account of DevNull@gmail.com but get mail as DevNull+SomeRandomString@gmail.com or DevNull+CompanyNameHere@gmail.com … great for identifying who is leaking your e-mail address out in Internet-land. It is also somewhat trivial to write a rule that takes +SomeCompromisedAddress and moves it to trash. EOL lets us do that too.
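
For reference, the sendmail side of what we do at home is just a catch-all virtusertable entry. A minimal sketch, assuming FEATURE(`virtusertable') is already enabled and the subdomain already routes to this host; the domain and user names are made up:

# map everything @shopping.example.com to the local user "me"
echo -e "@shopping.example.com\tme" >> /etc/mail/virtusertable

# rebuild the hash map that sendmail actually reads
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable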

Another interesting feature that is available on prem, but not conveniently, is free/busy federation (now termed an “organisational relationship”). In previous iterations, both parties needed to establish firewall rules (and preferably a B2B connection) to transfer the free/busy data. But two companies with MS tenants should be able to link up without having to enact firewall changes. We still connect to our tenant. The other party still connects to their tenant. It’s our two tenants that communicate via MS’s network. Something I’m interested in playing around with … might try to see if we can link our sandbox tenant up to the production one just to see what exactly is involved.

Git, Version Management, Branches, and Sub-modules

As we have increased in staff, we’ve gained a few new programmers. While it was easy enough for us to avoid stepping on each other’s toes, we have experienced several production problems that could be addressed by rethinking our repository configuration.

Current state: We have a monolithic repository for different batch servers. Each server has a clone of the repository, and the development equivalent has a clone of the same repository. The repository has top-level folders for each independent script. There is a SharedTools top-level folder for reusable functions.

Changes are made on forks located both on the development server and individuals’ computers, tested on the development server, then pushed to the repo. Under a CRQ, a pull is performed from the production server to elevate the new code. Glomming dozens of scripts into a single repository was simple and quick, but with new people involved in development efforts, we have experienced challenges with changes being lost, unintentional elevation of code, and UAT runs against under-development code.

Pitfalls: Four people working on four different scripts are working in the same repository. We have had individuals developing on their laptops overwrite changes (force push is dangerous, even when force-with-lease is used), we have had individuals developing on the dev server commit other people’s edits (git add * isn’t a good idea in a shared environment – add the specific changed files to your commit instead), and we’ve had duplication of effort (which is certainly a problem outside of development efforts, and one that can be addressed outside of git).
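
A couple of safer habits would have prevented most of these incidents: nothing exotic, just being explicit about what gets staged and pushed (the file path and branch name here are illustrative):

# stage only the files you actually changed, never a wildcard
git add sendMassMail/sendMassMail.sh

# review exactly what is about to go into the commit
git status
git diff --cached

# if history genuinely must be rewritten, at least refuse to clobber unseen work
git push --force-with-lease origin my-branch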

We could address the issues we’ve seen through training and communication – ensure anyone contributing code to the repository adequately understands what force push means, appreciates what wildcards include, and generally has a more nuanced understanding of git than the one-hour training I provided last year. But I think we should consider the LOE and advantages of using a technical solution to ensure less experienced git users are able to successfully use our repositories.

Proposal – Functional Splits:

While we have a few more individuals with development experience, they are quite specifically Windows script developers (PowerShell, VBScript, etc.). We could just stop using the Windows batch server and let the two or three Microsoft guys figure it out for themselves. But this limits individual growth – I “don’t do” PowerShell development, and the Windows guys don’t learn Linux. And, as the group changes over time, we have not addressed the underlying problem of multiple people working on the same codebase.

Proposal – Git Changes:

We can begin using branches for development efforts and reserve “master” for ready-for-deployment code. Doing so, we eliminate the possibility of inadvertently elevating code before it is ready – only commands targeted to “origin master” will be run on production servers.
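
On the production side, the CRQ elevation step then only ever references master. A minimal sketch, run in the existing clone on the PROD server:

git fetch origin
git merge --ff-only origin/master    # refuses to run if prod has somehow diverged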

Using descriptive branch names (Initials-ScriptFolderName-SummaryOfChange) will help eliminate duplicated efforts. If I notice we need to send a few mass mails with inline images, seeing “TJR-sendMassMail-AddInlineImages” in the branch list lets me know you’ve got it covered. And “TJR-sendMassMail-RecipientListFromLiveLDAPQuery” lets me know you’re working on something else and I’m setting myself up for merge challenges by working on my change right now. If both of our changes are high priority, we might choose to work through a merge operation. This would be an informed, upfront decision instead of a surprise message indicating that fast-forward merging is not possible.

In large development projects, branch management can become a full-time pursuit. I do not think that will be an issue in our case. Minimizing the number of branches used, and not creating branches based on branches, makes branch management a simpler task. We should be able to perform fast-forward merges to push code into master because our branches modify different files in the repository.

To begin a development effort, create a branch and push it to the git server. Make your changes within that branch, and keep your branch in sync with master – a branch that has fallen behind master can no longer be fast-forward merged into it. Once you are finished with your development, merge your branch into master and delete your branch. This approach will require additional training to ensure everyone understands how to create, rebase, merge, and delete branches (and not to just force operations because forcing lets you complete your task).
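
A minimal sketch of that lifecycle, using one of the hypothetical branch names from above:

# start a development effort
git checkout master
git pull origin master
git checkout -b TJR-sendMassMail-AddInlineImages
git push -u origin TJR-sendMassMail-AddInlineImages

# ... develop, commit, test ...

# keep the branch current so a fast-forward merge remains possible
git fetch origin
git rebase origin/master

# elevate and clean up
git checkout master
git merge --ff-only TJR-sendMassMail-AddInlineImages
git push origin master
git push origin --delete TJR-sendMassMail-AddInlineImages
git branch -d TJR-sendMassMail-AddInlineImages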

Instead of using ‘master’ for production code, the inverse is equally viable: create a “stable” branch for production code and only pull that branch to PROD servers. I believe the intent of this approach is to prevent accidental changes to prod code – you’ve got to intentionally target “origin stable” with an operation to impact production code.

Our single repository configuration is a detriment to using branches if development is performed on the DEV server. To illustrate the issue, create a BranchTesting repo and add a single file to master. Create a Branch1 branch in one command window and check it out. Create a Branch2 in a second command window and check it out. In your first command window, add a file and commit it. In your second command window, add a file and commit it. You will find that both files have been committed to Branch2.
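
The root cause is that a working copy has exactly one checked-out branch (one HEAD), so the second checkout silently moves everyone sharing that directory onto Branch2. The demo spelled out as throwaway commands, run in a single shared clone:

git init BranchTesting && cd BranchTesting
echo "initial" > readme.txt && git add readme.txt && git commit -m "Initial commit"

# "window 1"
git checkout -b Branch1

# "window 2" - same working copy, so this switches HEAD out from under window 1
git checkout -b Branch2

# "window 1" believes it is on Branch1, but HEAD is Branch2
echo "one" > file1.txt && git add file1.txt && git commit -m "Add file1"

# "window 2"
echo "two" > file2.txt && git add file2.txt && git commit -m "Add file2"

git log --oneline Branch2    # both new commits are here
git log --oneline Branch1    # only the initial commit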

How can we address this issue?

Develop on our individual workstations instead of the DEV server. Not sharing a file set for our development efforts eliminates the branch context switching problem. If you clone the repo to your laptop, Gary clones the repo to his laptop, and I clone the repo to my laptop … you can create TJR-sendMassMail-AddInlineImages on your computer, write and test the changes locally, commit the changes and pull them to the DEV server for more robust testing, and then merge your changes into master when you are ready to elevate the code. I can simultaneously create LJR-monitorLDAPReplication-AddOUD11Servers, do my thing, commit changes and pull them to the DEV server (first using “git branch” to determine if someone else is already testing their branch on the DEV server), and merge my stuff into master when I’m ready to elevate. Other than remembering to verify that DEV has master checked out (i.e. no one else is testing, so the resource is free), we do not have resource contention.

While it may not be desirable to fill up our laptop drives with the entire code set from six different application servers, sparse-checkout allows you to select the specific folders that will come down to your fork.
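
A sketch of the sparse checkout setup; the repository URL and folder names are placeholders, and this is the config-file method that works on older git versions:

git clone --no-checkout ssh://gitserver/batch/batchserver01.git
cd batchserver01
git config core.sparseCheckout true

# list only the folders you want in your working copy
echo "sendMassMail/" >> .git/info/sparse-checkout
echo "SharedTools/"  >> .git/info/sparse-checkout

git checkout master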

The advantage of this approach is that it has no initial LOE beyond training and process change. The repositories are left as-is, and we start using them differently.

Unfortunately, this approach may not be viable in some instances – when access to data sources is restricted by IP ACL, you may not be able to do more than linting on your laptop. It may not even be possible to configure a Windows laptop to run some of our code – some Linux requirements are difficult to address in Windows (the PKI website’s cert info check, for instance), and testing code on Windows may not ensure successful operation on the Linux hosts.

Break the monolithic repositories into discrete repositories and use submodules to allow the multiple independent repositories to be “rolled up” into a top-level repository. Development is done in the submodule repositories. I can clone monitorLDAPReplication, you can clone sendMassMail, etc. Changes can be made within our branches of these completely different repositories and merged into the individual repository’s master branch for release to the production environment. Release can be done for the superset (“--recurse-submodules”) or for individual sub-modules.

This would require splitting a repository into its individual components and configuring the sub-module relationships. This can be a scripted operation, and it is an incremental change to the script I used to create the repositories and ingest code; but the LOE for implementation is a few days of script writing / testing. Training will be required to ensure individuals can register their submodules within the top-level repo, and we will need to accustom ourselves to maintaining individual repos.
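
Roughly what the split might look like once the individual repositories exist; the repository names and URLs are hypothetical:

# build the top-level "roll-up" repository
git init batchserver01 && cd batchserver01
git submodule add ssh://gitserver/batch/SharedTools.git SharedTools
git submodule add ssh://gitserver/batch/sendMassMail.git sendMassMail
git submodule add ssh://gitserver/batch/monitorLDAPReplication.git monitorLDAPReplication
git commit -m "Register batch script submodules"

# on a production server, release the superset ...
git clone --recurse-submodules ssh://gitserver/batch/batchserver01.git
cd batchserver01 && git submodule update --remote --recursive

# ... or elevate a single component
cd sendMassMail && git pull origin master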

Or just break monolithic repositories into discrete repositories. The level of effort is about the same initially, but no one needs to learn how to set up a new submodule. We lose single-repo conveniences, but there’s literally no association between our different script folders where someone working in X could inadvertently impact Y.

GoFCCYourself(.com)

You know what you find when you drain a swamp? A whole bunch of rotting detritus. I’m not going to pretend astonishment that a former Associate General Counsel from Verizon thinks net neutrality is a terrible idea. I remember getting an e-mail message from my employer, another network provider, detailing how this terrible proposal was going to drive us all out of business. Or something similarly over-dramatic.

Facilitating public comment on Executive branch proceedings, as GoFCCYourself.com does, is an interesting idea. Take a circuitous government web site that ostensibly allows individuals to post comments on issues, and circumvent the terrible user interface by providing your own URL that (I assume) includes the appropriate POST data to land individuals in exactly the right place to submit their comments.

I’ve used this short-cut to submit my opinion to the FCC, but I also forwarded the same message to my rep in the House and my two state Senators:

I have submitted this to the FCC for Docket 17-108 but wanted to include you as well. If the FCC does roll back net neutrality, as their chairman indicates is his desire, I beseech you to ready legislative controls to prevent ISPs from using speed controls to essentially censor Internet content.

I am writing to express my support for “net neutrality” — while you want to claim it reduces carrier investment or innovation, customer acquisition and retention drives carrier investment and innovation. Lowered cost of operations, creating a service that allows a higher price point, or offering a new service unavailable through a competitor drive innovation. Allowing a carrier to create a new revenue stream by charging content providers for faster access is not innovation – QoS has been around for decades. And it isn’t like the content is being delivered to the Internet for free. Content providers already pay for bandwidth — and a company like Netflix probably paid a LOT of money for bandwidth at their locations. If Verizon didn’t win a bid for network services to those locations, that’s Verizon’s problem. Don’t create a legal framework for every ISP to profit from *not* providing network services for popular sites; the network provider needs to submit a more competitive bid.

What rolling back net neutrality *does* is stifle customers and content providers. If I, as a customer, am paying 50$ a month for my Internet service but find the content that I *want* is de-prioritized and slowed … well, in a perfect capitalist system, I would switch to the provider who ‘innovates’ and goes back to their 2017 configurations. But broadband access – apart from some major metro areas – is not a capitalist system. Where I live, outside of the Cleveland suburbs, I have my choice of the local cable company or satellite – and satellite-based Internet introduces a lot of latency and is quite expensive for both the customer and the operator (and has data limits, which themselves preclude a lot of the network-intensive traffic that ISPs wish to de-prioritize). That’s not a real choice — pay 50$ to this company who is going to de-prioritize anyone who doesn’t pay their network bandwidth ransom, or pay 100$ to some other company that is unable to provide sufficiently low latency to allow me to work from home. So add an hour of commute time, fuel, vehicle wear, and reduced family time to that 100$ bill.

Rolling back net neutrality stifles small businesses — it’s already difficult to compete with large corporations who have comparatively unlimited budgets for advertising and lawyers. Today, a small business is able to present their product online with equal footing. In 1994, I worked at a small University. One of my initiatives was to train departmental representatives on basic HTML coding so the college would have an outstanding presence on the Internet. First hour of the first day of the training session included a method for checking load times off campus without actually having to leave the campus network. On campus, we were 10 meg between buildings and the server room and anything loaded quite quickly. At home, a prospective student was dialing in on a 28.8 modem. If your content is a web page for MIT, a prospective engineering student may be willing to click your site, go eat dinner, and come back. Load time isn’t as much of a problem for an organisation with a big name and reputation. Unknown little University in Western PA? Click … wait … wait, eh, never mind. The advent of DSL was amazing to me because it provided sufficient bandwidth and delivered content with parity that allowed an unknown Uni to offer a robust web site with videos of the exciting research opportunities available to students and the individual attention from professors that small class sizes allow. No longer did we need to restrict graphics and AV on our site because we weren’t a ‘big name’ University. That there ever was a debate about removing this parity astonished me.

Aside from my personal opinion, what is the impact of non-neutral networks on free speech? Without robust legal controls, ISPs engage in a form of quasi-censorship. How do you intend to prevent abuse of the system? Is a large corporation going to be able to direct “marketing” dollars to speeding up their page to the harm of their competitors? Can the Coca-Cola Company pay millions of dollars to have their content delivered faster than PepsiCo’s? Is the ISP then the winner in a bidding war between the two companies? What about political content? Does my ISP now control the speed at which political content is delivered? What happens when Democrats raise more money in the Cleveland metro area and conservative views are relegated to the ‘slow’ lane? What happens when the FCC gets de-prioritized because ISPs want even less regulation??

I would still worry about the legal controls to prevent quasi-censorship, but I would object less if the FCC were to implement net neutrality requirements the way some telco regulations applied to incumbent carriers (ILECs) in areas where no competitive carriers (CLECs) operated — where there is no or limited competition, net neutrality is a requirement. Where there are a dozen different ISP options, they can try selling the QoS’d packages. Polls and voting aside, the ISP will find out exactly how many customers or content providers support non-neutral networks.

Owntracks Stuck In “Connecting” To MQTT When Using WebSockets

Our home automation presence is maintained through an Android app, OwnTracks, which updates a Mosquitto server via a WebSockets reverse proxy. Mosquitto runs on a Fedora 25 server and was installed from the default RPM repository.

Recently, we stopped receiving location updates – both of our Android clients were stuck “Connecting” to the MQTT server. Nothing appeared in the Apache access or error logs, and capturing network traffic only got a small number of packets (TCP session overhead ‘stuff’). Even bypassing the reverse proxy and using the internal network to communicate directly to the Mosquitto server only created a couple of packets. Using a test client (http://www.hivemq.com/demos/websocket-client/), I saw strange connection failures — so I knew the problem was not specific to the OwnTracks client.

It seems there was a bug in libwebsockets v2.1.1 (and possibly others) — when we updated our Fedora installation, the new libwebsockets broke our MQTT over WebSockets. Currently, the Fedora repository still contains an impacted version of libwebsockets. To resolve the issue, I built the latest stable libwebsockets and built mosquitto against this updated library.

Process: The first step is to remove the dnf managed packages (rpm -e libwebsockets libwebsockets-devel mosquitto). Then build libwebsockets and mosquitto.

To Build LibWebSockets:

wget https://github.com/warmcat/libwebsockets/archive/master.zip
unzip master.zip
cd libwebsockets-master/
mkdir build
cd build
cmake ..
make
make install
cp libwebsockets.pc /usr/lib/
cp lws_config.h /usr/include/
cp ../lib/libwebsockets.h /usr/include/
cp ./lib/libwebsockets.so /usr/lib/

To Build Mosquitto:

wget https://github.com/eclipse/mosquitto/archive/master.zip
unzip master.zip
cd mosquitto-master/
vi config.mk # Line 68, change to "WITH_WEBSOCKETS:=yes"
make
make install
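
If you need a reference for the WebSockets listener itself, here is a minimal mosquitto.conf sketch. The port numbers are assumptions, and an existing working configuration will already have the equivalent:

# plain MQTT listener
listener 1883

# MQTT over WebSockets (what the reverse proxy talks to)
listener 9001
protocol websockets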

Start the Mosquitto server and try again. Voila, presence works again!

Compiling Open ZWave On Fedora 25

Mostly writing this down for me, next time we need to run Open ZWave and try to build the latest version:

Download libmicrohttpd from the GNU site, then unpack and build it:

tar xzf libmicrohttpd-*.tar.gz
cd libmicrohttpd-*/
./configure
make
make install

Download and build the open-zwave library

mkdir /opt/ozw
cd /opt/ozw
git clone https://github.com/OpenZWave/open-zwave.git
cd open-zwave
make

If the build errors out complaining that you don’t have libudev.h, install systemd-devel (dnf install systemd-devel) and run that make again.

Download open-zwave-control-panel
cd /opt/ozw
git clone https://github.com/OpenZWave/open-zwave-control-panel.git
cd open-zwave-control-panel

Open the Makefile and find the following line:
OPENZWAVE := ../

Change it to:
OPENZWAVE := ../open-zwave

Then find the section that says:
# for Linux uncomment out next three lines
LIBZWAVE := $(wildcard $(OPENZWAVE)/*.a)
#LIBUSB := -ludev
#LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) -lresolv

# for Mac OS X comment out above 2 lines and uncomment next 5 lines
#ARCH := -arch i386 -arch x86_64
#CFLAGS += $(ARCH)
#LIBZWAVE := $(wildcard $(OPENZWAVE)/cpp/lib/mac/*.a)
LIBUSB := -framework IOKit -framework CoreFoundation
LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) $(ARCH) -lresolv

And switch it around to be Linux … the Makefile becomes:
# for Linux uncomment out next three lines
LIBZWAVE := $(wildcard $(OPENZWAVE)/*.a)
LIBUSB := -ludev
LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) -lresolv

# for Mac OS X comment out above 2 lines and uncomment next 5 lines
#ARCH := -arch i386 -arch x86_64
#CFLAGS += $(ARCH)
#LIBZWAVE := $(wildcard $(OPENZWAVE)/cpp/lib/mac/*.a)
#LIBUSB := -framework IOKit -framework CoreFoundation
#LIBS := $(LIBZWAVE) $(GNUTLS) $(LIBMICROHTTPD) -pthread $(LIBUSB) $(ARCH) -lresolv

ln -sd ../open-zwave/config
make

Then you can run it:
./ozwcp -p 8889

./ozwcp: error while loading shared libraries: libmicrohttpd.so.12: cannot open shared object file: No such file or directory

strace it (strace ./ozwcp -p 8889)

open(“/lib64/tls/x86_64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/lib64/tls/x86_64”, 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open(“/lib64/tls/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/lib64/tls”, {st_mode=S_IFDIR|0555, st_size=4096, …}) = 0
open(“/lib64/x86_64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/lib64/x86_64”, 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open(“/lib64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/lib64”, {st_mode=S_IFDIR|0555, st_size=122880, …}) = 0
open(“/usr/lib64/tls/x86_64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/usr/lib64/tls/x86_64”, 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open(“/usr/lib64/tls/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/usr/lib64/tls”, {st_mode=S_IFDIR|0555, st_size=4096, …}) = 0
open(“/usr/lib64/x86_64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/usr/lib64/x86_64”, 0x7ffefb50d660) = -1 ENOENT (No such file or directory)
open(“/usr/lib64/libmicrohttpd.so.12”, O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat(“/usr/lib64”, {st_mode=S_IFDIR|0555, st_size=122880, …}) = 0

Huh … not looking in the right place. I’m sure there’s a right way to sort this, but we’re using Open ZWave for a couple of minutes to test some ZWave security stuff. Not worth the time:

ln -s /usr/local/lib/libmicrohttpd.so.12.41.0 /usr/lib64/libmicrohttpd.so.12
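
For the record, the tidier fix (untested here) is to tell the dynamic linker about /usr/local/lib instead of symlinking individual libraries:

echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf
ldconfig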

Try again (./ozwcp -p 8889). Voila, “2017-04-17 20:35:05.223 Always, OpenZwave Version 1.4.0 Starting Up”. Use your browser to hit http://<ipaddress>:8889 to access the Open ZWave Control Panel.

Uninformed Upgrades (PHP 5 => 7)

TL;DR: Check the list of what is being updated before you let an OS automatically update its programs.

We have a home automation / MythTV / ZoneMinder server with automatic updates disabled. In the process of updating OpenHAB to OpenHAB2, Scott suggested we update everything else while we’re at it. No big, did a quick “dnf update” … got a gig of packages downloaded, waiting for >1400 packages to install, and rebooted.

PHP could not talk to MySQL. At all. ZoneMinder just threw an error saying we didn’t have the PHP MySQL module installed (it worked half an hour ago, so it is INSTALLED). MythWeb completely failed to load – just a white screen. The quick web view of OpenHAB persistence history threw a class not found error.

I checked to see if the extensions were loaded (use the command “print_r(get_loaded_extensions());” in a PHP page) – huh, a LOT of my modules were missing. But there weren’t any useful errors anywhere indicating why.
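
The same check works from a shell, if you would rather not touch a web page (this assumes the PHP CLI is installed, which may be a separate package):

php -r 'print_r(get_loaded_extensions());'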

I modified the php.ini file to show startup errors.

[root@fedora01 conf.modules.d]# grep display_startup_errors /etc/php.ini
; display_startup_errors
display_startup_errors = On

Oooooh, now there are errors! A lot of them. Not particularly useful, but at least a good clue that this isn’t going to go so well for me:

PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/pdo.so’ – /usr/lib64/php/modules/pdo.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/mysqlnd.so’ – /usr/lib64/php/modules/mysqlnd.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/bcmath.so’ – /usr/lib64/php/modules/bcmath.so: undefined symbol: _emalloc_16 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/bz2.so’ – /usr/lib64/php/modules/bz2.so: undefined symbol: zend_fetch_resource2_ex in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/calendar.so’ – /usr/lib64/php/modules/calendar.so: undefined symbol: _emalloc_32 in Unknown on line 0
PHP Warning: PHP Startup: ctype: Unable to initialize module\nModule compiled with module API=20151012\nPHP compiled with module API=20131226\nThese options need to match\n in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/curl.so’ – /usr/lib64/php/modules/curl.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/dom.so’ – /usr/lib64/php/modules/dom.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/exif.so’ – /usr/lib64/php/modules/exif.so: undefined symbol: zend_hash_str_exists in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/fileinfo.so’ – /usr/lib64/php/modules/fileinfo.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/ftp.so’ – /usr/lib64/php/modules/ftp.so: undefined symbol: zend_fetch_resource2 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/gd.so’ – /usr/lib64/php/modules/gd.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/gettext.so’ – /usr/lib64/php/modules/gettext.so: undefined symbol: zend_parse_arg_str_slow in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/iconv.so’ – /usr/lib64/php/modules/iconv.so: undefined symbol: _zval_get_string_func in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/json.so’ – /usr/lib64/php/modules/json.so: undefined symbol: _emalloc_56 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/mbstring.so’ – /usr/lib64/php/modules/mbstring.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/mysqlnd.so’ – /usr/lib64/php/modules/mysqlnd.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/phar.so’ – /usr/lib64/php/modules/phar.so: undefined symbol: zend_sort in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/posix.so’ – /usr/lib64/php/modules/posix.so: undefined symbol: _zend_hash_str_update in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/shmop.so’ – /usr/lib64/php/modules/shmop.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/simplexml.so’ – /usr/lib64/php/modules/simplexml.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/sockets.so’ – /usr/lib64/php/modules/sockets.so: undefined symbol: zend_hash_str_del in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/sqlite3.so’ – /usr/lib64/php/modules/sqlite3.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/sysvmsg.so’ – /usr/lib64/php/modules/sysvmsg.so: undefined symbol: _emalloc_64 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/sysvsem.so’ – /usr/lib64/php/modules/sysvsem.so: undefined symbol: _emalloc_24 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/sysvshm.so’ – /usr/lib64/php/modules/sysvshm.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/tidy.so’ – /usr/lib64/php/modules/tidy.so: undefined symbol: _zend_hash_str_update in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/tokenizer.so’ – /usr/lib64/php/modules/tokenizer.so: undefined symbol: _emalloc_large in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/xml.so’ – /usr/lib64/php/modules/xml.so: undefined symbol: _zend_hash_str_add in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/xmlwriter.so’ – /usr/lib64/php/modules/xmlwriter.so: undefined symbol: _emalloc_16 in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/xsl.so’ – /usr/lib64/php/modules/xsl.so: undefined symbol: dom_node_class_entry in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/mysql.so’ – /usr/lib64/php/modules/mysql.so: undefined symbol: mysqlnd_connect in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/mysqli.so’ – /usr/lib64/php/modules/mysqli.so: undefined symbol: zend_ce_exception in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/pdo_mysql.so’ – /usr/lib64/php/modules/pdo_mysql.so: undefined symbol: mysqlnd_allocator in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/pdo_sqlite.so’ – /usr/lib64/php/modules/pdo_sqlite.so: undefined symbol: php_pdo_unregister_driver in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/wddx.so’ – /usr/lib64/php/modules/wddx.so: undefined symbol: zend_list_close in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/xmlreader.so’ – /usr/lib64/php/modules/xmlreader.so: undefined symbol: dom_node_class_entry in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib64/php/modules/json.so’ – /usr/lib64/php/modules/json.so: undefined symbol: _emalloc_56 in Unknown on line 0

Turns out DNF installed PHP 7, but didn’t do anything to remove the PHP 5 modules from my Apache configuration:

[root@fedora01 tmp]# cd /etc/httpd/modules
[root@fedora01 modules]# grep php *
Binary file libphp5.so matches
Binary file libphp5-zts.so matches
Binary file libphp7.so matches
Binary file libphp7-zts.so matches

[root@fedora01 modules]# mkdir /tmp/oldphp
[root@fedora01 modules]# mv libphp5* /tmp/oldphp

And remove them from conf.modules.d too: if you just remove the module files but leave the LoadModule lines in conf.modules.d, Apache will fail to start. You could edit the LoadModule lines out of 10-php.conf and leave the file in place … but I don’t want no-longer-used files sitting around to confuse me in a year or two!

[root@fedora01 modules]# cd /etc/httpd/conf.modules.d/
[root@fedora01 conf.modules.d]# grep php *
10-php.conf: LoadModule php5_module modules/libphp5.so
10-php.conf: LoadModule php5_module modules/libphp5-zts.so
15-php.conf:# Cannot load both php5 and php7 modules
15-php.conf:<IfModule !mod_php5.c>
15-php.conf: LoadModule php7_module modules/libphp7.so
15-php.conf:<IfModule !mod_php5.c>
15-php.conf: LoadModule php7_module modules/libphp7-zts.so

[root@fedora01 conf.modules.d]# mv 10-php.conf /tmp/oldphp/

Then restart Apache without PHP 5:

[root@fedora01 conf.modules.d]# service httpd start
Redirecting to /bin/systemctl start httpd.service

Voila, perfectly functioning web sites. And, yeah, I should probably check the list of “what will be updated” when I update a server. Would save HOURS of reading through strace output to find out old versions were still hanging about.

Smart Home (In)Security

I’ve seen a lot of articles recently about hacked IoT devices (and now one about a malicious company disrupting a customer’s service in retaliation for poor reviews and possibly abusive calls to technical support). I certainly don’t think *everything* needs to be connected to the Internet. If you want to write messages on toast remotely, whatever … but beyond gimmicks, there are certainly products where the Internet offers no real advantage. Yet a lot of articles disparage the idea of a smart home based on these goofy products.

There are devices that are more convenient than their ‘dumb’ counterparts. Locks that unlock when you are nearby. Garage lights that come on when the door is unlocked or opened. And if that was the extent of home automation, I guess you could still call it a silly fad.

But there are a LOT of connected devices that save resources: exterior lighting that illuminates as you near your house. Motion detectors controlling light switches and bulbs, so you (or the kids) cannot forget to turn out the lights. An outlet that turns OFF to eliminate draw when appliances are in ‘standby’ mode saved us about 50$/year just on the television/receiver. Moisture sensors controlling a sprinkler system so the grass is only watered when there is actual need. Water flow sensors that can alert you to unusual usage (e.g. when the water filter system gasket goes and it starts dumping water through the thing 24×7).

And some prevent real damage to your home or person. If your house uses combustion for heat, configure the carbon monoxide sensor to shut off the HVAC system when CO levels are too high. Leak sensors can shut off the water mains when a leak is detected (and turn off appliances in the wet area if there’s potential for shorting).

The major security problem with any IoT device, smart home systems included, is that you’ve connected private resources to the Internet, with all the hackers, punks, and downright malicious people out there. And from a privacy standpoint, you are providing information that can be mined to enhance marketing profiles — very carefully read the privacy policies of any company whose platform you will be using. Maybe a ‘smart’ coffee machine sounds good to you — but are they collecting (and potentially selling to third parties) information about how many cups of coffee you brew and the times of day you brew them? Whether you care is a personal decision, but it’s something that should be considered just the same.

When each individual device has its own platform, the privacy and security risks grow. A great number of these devices don’t need to be connected to the INTERNET directly, but rather to a relay point (hub). From a business perspective, this is a boon … since you have a Trane furnace (big money, not apt to be replaced yearly), you should also buy these other products that we sell and pay the monthly recurring fee to use our Nexia platform for all of your other smart devices. Or since you have a Samsung TV with a built-in hub … you should not only buy these other Samsung products, but hook all of your other smart ‘things’ up to SmartThings. And in a year or two when you’re shopping for a new TV … wait, you need one with a SmartThings hub or you’re going to have to port your existing configuration to a new vendor. Instant customer loyalty.

For an individual, the single relay point reduces risk (it’s not one of a dozen companies that needs to be compromised to affect me, just this one) and confusion (I only have to keep track of one company’s privacy policy). *But* it also gives one company a lot more information. The device type is often indicative, but most people also name their devices according to location (i.e. bedroom light, garage light, front door). Using SmartThings, Samsung knew when we went to bed and woke up, that we ate breakfast before brushing teeth (motion in hallway, motion in kitchen, water usage, power draw on appliances, motion in hallway, motion in bathroom, water usage) or showered (power draw on hot water tank, increased water usage). Which rooms we frequented (motion), when we watched TV (not what we watched, but when), when we left the house (no motion, presence change). How often we wash laundry (power draw on washer, water usage) and dishes (power draw on dishwasher, water usage). The temperature in the house (as reported from multi-sensor devices or from a smart thermostat), and whether we change settings for day/night. How often we drive a car (garage door open/closed with presence change, or speed of location change on presence), and how much time we spend away from home. How often we have overnight guests (motion in guest bedroom at night).

And, yeah, the profile they glean is a guess. I might open the garage door when mowing grass. Or I might have rooms with no motion sensors for which they cannot account. But they have a LOT of data on which to base their guesses, and no one selling targeted advertising profiles claims to be 100% accurate. Facebook’s algorithm, for quite some time, had me listed as a right-leaning Trump supporter. I finally tired of seeing campaign ads on their site and manually updated my advertising profile. Point is, one company has a lot of data from which to build fairly good targeted profiles. How much of our house is actually used? A lot of bedrooms that rarely see motion gets you a ‘downsizing specialist’ real estate flyer; motion in every room all the time gets you a flyer about finding a larger home to give you all some space. If the HVAC system is connected, they could create a target group of “people who could use additional insulation or sealing in their house” (outdoor temp for the location v/s indoor temp v/s energy draw).

In some ways, it’s cool that a company might be able to look at my life and determine a need of which I am not even aware. Didn’t realize how much of our energy bill was HVAC – wow, tightening the house and insulation will save how much?! But it’s also potentially offensive: yeah, we could use a bigger house for all of these people. We could also use a bigger pay cheque, what of it? Yeah, the kids moved out … but this is our house and why would you tell me I should be leaving? And generally invasive — information that doesn’t really cause harm but they’ve got no reason to know either.

What articles highlighting the insecurity of IoT devices seem to miss is that the relay point can reside on your local network with no Internet access. We personally use OpenHAB – which enables our home automation to function completely inside our local network. You trust the developers (or don’t, ours is open source … you can read the whole thing if you don’t want to trust developers), but you own the data and what is done with it.

You don’t need an expensive dedicated server to host your own home automation controller – a Raspberry PI will do. What you do need is technical knowledge and a good bit of time (or hire someone to do it for you, in which case you need money and someone else’s time). But the end result is the same — physical presence is required to compromise the system. Since physical presence will also let you bump locks, smash windows, cut power, flick light switches, open doors … you’re not worse off than before.