The Peril Of Hosting Your Own Services

I love hosting my own services — home automation, file shares, backups, e-mail, web servers, DNS … a bit of paranoia, a bit of control freak, and a bit of pride. But every now and again, hosting my own services causes problems because, well, vendors don’t develop processes around someone with servers in their house.

We got a new cable modem. Scott went to a web page (happened to be Google) and got redirected to the TWC activation page. Went through whatever, ended up calling into support, and finally our account was sorted. Woohoo! Everything works … umm, except I cannot search Google.

Turns out TWC manages their activation redirection by serving up bogus DNS info — their server IP instead of the real one. Which then got cached on our DNS server. No idea what TTL TWC set on their bogus data, but it was more than a minute or two. Had to clear the DNS server cache before we were able to hit Google sites again.
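For anyone in the same boat: flushing the cache is a one-liner on most DNS servers. A minimal sketch — assuming a BIND server managed via rndc, with the Windows DNS Server equivalent for comparison:

    # BIND: discard everything in the resolver cache
    rndc flush

    # Windows DNS Server (PowerShell)
    Clear-DnsServerCache -Force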

Alternative Facts: NATO

Alternative Fact: NATO countries owe money for defence expenditures the US has made.

Real Fact: The target was for member nations to devote 2% of GDP to defence spending. A target is not a guarantee. Not meeting a target may be disappointing, but it doesn’t mean you owe someone money. If your target is to donate 5% of your net income to charity but at the end of the year you have only managed 3%, it does not mean you owe charities 2% of your net income! It means you didn’t meet your goal.

Consistently missing goals can also be a clue that the goal is not realistic. Take, for instance, someone whose goal is to donate 80% of their net income to charity. But they also pay their rent/mortgage, buy some food, turn the lights on occasionally. And they don’t have 80% of their net income available after covering those essentials. The person can commit to the goal and trim their other spending (move into a smaller residence, buy cheaper food, conserve on utilities), or they can change their goal to match the 10% of their net income that is actually discretionary.

Another real fact? NATO countries, by and large, fund their own militaries. One might argue that the US would have been able to scale back its military budget if only other partners increased their expenditures. *But* that’s disingenuous coming from someone seeking an enormous increase in the military budget whilst questioning the nation’s continued commitment to NATO. And even if the ‘target’ were actually a contractual obligation … the money would be owed to NATO, not the US.

OK, Google

Chrome 58 was released last month – and since then, I’ve gotten a LOT of certificate errors. Especially internally (Windows CA signed certs @ home and @ work). It’s really annoying – yeah, we don’t have SAN dNSName attributes defined. And I know the RFC says falling back to CN is deprecated (seriously, search https://tools.ietf.org/html/rfc2818 for subjectAltName) … but that same text was in there in 1999, so this is not exactly a new innovation in SSL policy. Fortunately there’s a registry key that will override this for now.
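For reference, the override is Chrome’s EnableCommonNameFallbackForLocalAnchors policy – Google documents it as a temporary measure, so reissuing certs with proper SAN entries is still the long-term fix. From an elevated prompt on a Windows box:

    reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v EnableCommonNameFallbackForLocalAnchors /t REG_DWORD /d 1 /f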

The problem I have with SAN certificates is exemplified in Google’s cert on the web server that hosts the chromium changes site:
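You can pull a server’s SAN list yourself – a sketch using openssl, with www.google.com standing in for whichever host you want to check:

    # dump the subjectAltName entries of a remote server's certificate
    openssl s_client -connect www.google.com:443 -servername www.google.com </dev/null 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"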

Seriously – this certificate asserts that the web site is any of these hundred wild-carded hostnames … and the more places you use a certificate, the greater the possibility of it being compromised. I get why people like wildcards — UALR was able to buy one cert & use it across the entire organisation. Cost effective and easy. The second through nth guy who wanted an SSL cert didn’t need to go about establishing his credentials within the organisation. He didn’t have to figure out how to make a cert request or how to pay for it. Just ask the first guy for a copy of his public/private key pair. Or run everything through your load balancer on the wildcard certificate & trust whatever backend cert happens to be in place.

But the point of security design is not trusting large groups of people to act properly. To secure their data appropriately. To patch their systems, configure their systems to avoid attacks, to replace the certificate EVERYWHERE every TIME someone leaves the organisation, and otherwise prevent a certificate installed on dozens of servers from being accessed by a malicious party. My personal security preference would be for the browser to flag every cert that has a wildcard or more than one SAN.

New Soaps

We’ve made a bunch of new soaps this past week — mostly using the same 20% super-fat all coconut oil recipe, although I made a 0% super-fat coconut oil soap to use as laundry detergent. We just have to visit some store that actually stocks washing soda (WalMart – not somewhere I frequent, but according to their web site … it’s stocked at every local store here).

We made a rainbow swirl soap with orange essential oil — important thing about making rainbow swirl soap? Don’t try to smooth out the top! The whole top is a consistent lavender colour … cool, though, because the rainbow bits appear as you use the soap. Totally not what I was going for, though.

Another swirled soap used activated charcoal and green zeolite clay with tea tree essential oil. Again, the swirl didn’t turn out the way I wanted … I think you’ve got to have really fluid soap batter for these swirl techniques to succeed. This batch was less thick than the rainbow above … but it still got gloppy as I poured it. Also – there’s a reason the ‘column pour’ technique has a square in the middle. If you use a round object (say, a glass that you happen to have and know won’t be harmed by soap), you get concentric circles. Not a design with scallops to it.

And I’ve found a few new recipes that I’d like to try — one is using pureed cucumber in place of water in the soap. And one that’s got to wait for next year — using daffodils as the colourant!

Exchange Online

We’re moving users to the magic in-the-cloud Exchange. Is this a cost effective solution? Well – that depends on how you look at the cost. The on-prem cost includes a lot of money to external groups that are still inside the company. If the SAN team employs ten people … well, that’s a sunk cost whether they’re administering our disk space or not. If we were laying people off because services moved out to magic cloud-hosted locations … then there’s a cost savings. But that’s not reality. Point being, there’s no good comparison because the internal “costs” are inflated.

Microsoft’s pricing to promote cloud adoption means EOL is essentially free with purchase, too. I’m sure the MS cost will go up in the future — I remember them floating “leased” software back in the late 90’s (a prelude to SaaS) and thinking it was a total racket. You move all your licensing to this convenient “pay for what you use” model. And once a plurality of customers have adopted the licensing scheme, start bumping up the rates.

It’s a significant undertaking to migrate over – but if I’m saving hundreds of thousands of dollars a year … worth it. Then rates go up, and the extra fifty grand a year isn’t worth the cost and time of migrating back to on prem. And next year, the fifty grand more isn’t worth it either. Economies of scale say MS (or Amazon, or whomever) can purchase ten thousand servers and petabytes of disk space for less money than I can get two thousand servers and a hundred terabytes … but they want to make a profit too. There might be a small cost savings in the long term, but nothing like the hundreds of thousands we’re being sold up front.

Regardless – business accounting isn’t my thing. A lot of it seems counter-productive if not outright nonsensical.

There are actually features in Exchange Online that do not exist in the on-prem solution. The one I discovered today is subaddressing. At home, we use the virtusertable in sendmail to map entire subdomains to a single mailbox. This means I can provide a functional e-mail address, on the fly, to a new company and have the mail delivered into my mailbox. Works fine for a small number of people, but it is not a scalable solution. Some e-mail providers instead use a delimiter after which any string is ignored. This means I could have a GMail account of DevNull@gmail.com but get mail as DevNull+SomeRandomString@gmail.com or DevNull+CompanyNameHere@gmail.com … great for identifying who is leaking your e-mail address out in Internet-land. It’s also somewhat trivial to write a rule that takes +SomeCompromisedAddress and moves it to the trash. EOL lets us do that.
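For the curious, the sendmail side of that trick is just a couple of lines – a minimal sketch, with example.com and the mailbox name standing in for our real ones (the subdomain also has to be listed as a virtual-user domain in sendmail’s config):

    # /etc/mail/virtusertable – anything@shopping.example.com lands in lisa's mailbox
    @shopping.example.com    lisa

    # rebuild the hash database after editing
    makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable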

Another interesting feature that is available on prem, but not conveniently, is free/busy federation (now termed an “organisational relationship”). In previous iterations, both parties needed to establish firewall rules (and preferably a B2B connection) to transfer the free/busy data. But two companies with MS tenants should be able to link up without enacting any firewall changes: we still connect to our tenant, the other party still connects to their tenant, and it’s the two tenants that communicate via MS’s network. Something I’m interested in playing around with … might try to see if we can link our sandbox tenant up to the production one just to see what exactly is involved.
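If the sandbox experiment happens, I expect it to boil down to something like this – a sketch in Exchange Online PowerShell, with “Fabrikam” and its domain as placeholders:

    # share free/busy (limited details) with another tenant's domain
    New-OrganizationRelationship -Name "Fabrikam" -DomainNames "fabrikam.com" `
        -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails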

Irony, Thy Name Is Trump

Yesterday, Trump bemoaned how terribly he is treated as President. From a man who has never encountered a superlative he didn’t incorporate into everyday speech … not surprising. But I keep thinking about how Trump is treated in comparison to Obama. Fundamentally different stories, and one narrative has yet to be proven true or false.

But suppose Trump’s campaign literally had nothing to do with Russian influence in the election – it simply had overly-trusting people trying to do the “right thing” who ended up speaking with the wrong people (I had eight calls from the dude, the last one ten minutes long because I was telling them to STOP CALLING ME). And suppose we ignore the abuses of power relating to the investigation into the nothing that really happened (if you get charged with a crime you didn’t commit and try intimidating witnesses because the charges scare you, or the bad publicity scares you … the intimidation itself IS a crime).

The basic premise behind how Republicans treated Obama is that the policies he advocated were so terrible that they would rather literally accomplish nothing for four years. And any cycle spent hosting a beer summit after making a completely fair assessment of public bias and police actions (seriously, would some old white professor have the cops called if he got locked out of his fancy brownstone?) or discussing birth certificates (hey, Trump, that would be yours) is a cycle not spent advancing odious positions. Agree or disagree with the positions, it’s a decent strategy the Republicans cultivated there. Now the positions have switched, and, play-acting aside … are you really surprised to see the opposition using the same strategy?

The difference is that Obama had a halfway decent approach to dissent — Trump makes a dramatic reality show with a cliffhanger each week (a bit of “how did you not expect to be red-herring’d out of effectiveness” … and, voters, how did you not expect the reality show star to create, well, THIS!?).

Alternative Fact: Those Who Do Not Know History Are Doomed To Sound Foolish

Alternative Fact: Trump, speaking at the US Coast Guard Academy commencement, claimed “No politician in history — and I say this with great surety — has been treated worse or more unfairly”. Had he gone with ‘and’ instead of ‘or’, the assertion would be subjective. But NO politician in HISTORY has been treated WORSE?!?

Real Fact: Actual assassination — literally killing a person — is worse than character assassination. Robespierre – both the large number of politicians killed during his Reign of Terror and his own eventual demise – worse. The Defenestrations of Prague (both the first and the second) – worse. But let us be generous and place in scope only politicians from Trump’s adult lifetime. Anwar Sadat – worse. No one is a better friend to Israel than Trump (and with friends like this …), so how can he forget Rabin – worse. John F Kennedy – worse.

Chicken Food

I’ve found a few good ways to let chickens feed themselves — include the compost area in the chicken pen, and the chickens will turn the compost for you, eat fresh veggie scraps, and eat the bugs they find.

Put a board or rubber mat down on the ground – let it sit there for a day or two, and a bunch of bugs will move in under it. Flip the board/mat into a new location & the chickens will go after the bugs that have been uncovered.

I also plan to grow a “chicken garden” in their coop — buckwheat, millet, flax, red clover, and forage peas. Hopefully they won’t confuse their garden with our garden. I want to try allowing them to roam in our raised bed garden to eat bugs … but I may end up fencing that off so they don’t eat our veggies!

Russia Returns

Russia has a decent play at undermining the American government without actually colluding with Trump’s campaign. Do something that benefits any party, and there is suspicion. Do something that benefits someone who has been suspected of shady dealings with your country (money laundering, loans to someone American banks consider too risky), and the suspicion is even deeper. And someone who routinely used obstruction and intimidation in business turning to the same tactics in his political misadventure … not exactly shocking.

Trump’s administration seems hopelessly unable to do anything but help the Russians undermine our government. Firing Comey looks bad no matter what happened during the election. Sharing code-word classified information with the same country suspected of interfering with the election … outright silly.

An “independent” investigation or one run by the House / Senate / FBI led by whomever Trump puts in charge – there’s no good outcome.

The investigation finds nothing illegal – half the country thinks the investigation was tainted, but we continue down this path. Allies withhold intel because they cannot trust Trump not to use the latest intercepts to brag about how great his intel briefings are. Reasonable policies are overturned along with the unreasonable because the Executive branch leadership doesn’t understand the “benefit” part of cost/benefit analysis. Taxes are lowered and deficits explode.

The investigation finds something – half the country thinks it’s fake evidence to go along with their fake news. But something has to be done. It isn’t like there’s a do-over election clause in the Constitution (even if there were, half of the country objects to the do-over election). Trump is impeached and Pence takes over – Democrats object – we’d have almost been better off with the ignorant guy who didn’t heap religious fundamentalism on top of his deregulation, tax cuts, and environmental destruction. Trump voters who are not traditional Republicans object — they didn’t vote for Pence’s policies either. Trump is impeached and Pence goes down too — Ryan takes over. See previous.

 

Git, Version Management, Branches, and Sub-modules

As we have increased in staff, we’ve gained a few new programmers. While it was easy enough for us to avoid stepping on each other’s toes, we have experienced several production problems that could be addressed by rethinking our repository configuration.

Current state: We have a monolithic repository for different batch servers. Each server has a clone of the repository, and the development equivalent has a clone of the same repository. The repository has top-level folders for each independent script. There is a SharedTools top-level folder for reusable functions.

Changes are made on forks located both on the development server and individuals’ computers, tested on the development server, then pushed to the repo. Under a CRQ, a pull is performed from the production server to elevate the new code. Glomming dozens of scripts into a single repository was simple and quick; but, with new people involved with development efforts, we have experienced challenges with changes being lost, unintentional elevation of code, and having UAT run against under-development code.

Pitfalls: Four people working on four different scripts are working in the same repository. We have had individuals developing on their laptop overwrite changes (force push is dangerous, even if force-with-lease is used), we have had individuals developing on the dev server commit other people’s edits (git add * isn’t a good idea in a shared environment – specifically add changed files to your commit), and we’ve had duplication of effort (which is certainly a problem outside of development efforts, and one that can be addressed outside of git).
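Both failure modes have command-level mitigations even before we change the repo structure – a sketch (the path and branch name are hypothetical):

    # stage exactly the files you changed – no wildcards in a shared working copy
    git add sendMassMail/sendMassMail.py
    git status            # confirm nothing unexpected is staged
    git diff --cached     # review the staged changes

    # if a remote branch must be rewritten, at least refuse to clobber unseen commits
    git push --force-with-lease origin myFeatureBranch

As noted above, --force-with-lease is no guarantee – a background fetch can update the remote-tracking ref so the lease check passes anyway – but it fails safe more often than a bare --force.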

We could address the issues we’ve seen through training and communication – ensure anyone contributing code to the repository adequately understands what force push means, appreciates what wildcards include, and generally has a more nuanced understanding of git than the one-hour training I provided last year could impart. But I think we should consider the level of effort (LOE) and advantages of a technical solution that ensures less experienced git users are able to use our repositories successfully.

Proposal – Functional Splits:

While we have a few more individuals with development experience, they are quite specifically Windows script developers (PowerShell, VBScript, etc). We could just stop using the Windows batch server and let the two or three Microsoft guys figure it out for themselves. This limits individual growth – I “don’t do” PowerShell development, the Windows guys don’t learn Linux. And, as the group changes over time, we have not addressed the underlying problem of multiple people working on the same codebase.

Proposal – Git Changes:

We can begin using branches for development efforts and reserve “master” for ready-for-deployment code. Doing so eliminates the possibility of inadvertently elevating code before it is ready – only commands targeting “origin master” will be run on production servers.

Using descriptive branch names (Initials-ScriptFolderName-SummaryOfChange) will help eliminate duplicated efforts. If I notice we need to send a few mass mails with inline images, seeing “TJR-sendMassMail-AddInlineImages” in the branch list lets me know you’ve got it covered. And “TJR-sendMassMail-RecipientListFromLiveLDAPQuery” lets me know you’re working on something else and I’m setting myself up for merge challenges by working on my change right now. If both of our changes are high priority, we might choose to work through a merge operation. This would be an informed, upfront decision instead of a surprise message indicating that fast-forward merging is not possible.
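The mechanics are light – a sketch using the hypothetical branch name from above:

    # create the descriptively named branch and publish it
    git checkout -b TJR-sendMassMail-AddInlineImages
    git push -u origin TJR-sendMassMail-AddInlineImages

    # everyone can now see who is working on what
    git branch -r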

In large development projects, branch management can become a full-time pursuit. I do not think that will be an issue in our case. Minimizing the number of branches used, and not creating branches based on branches, makes branch management a simpler task. We should be able to perform fast-forward merges to push code into master because our branches modify different files in the repository.

To begin a development effort, create a branch and push it to the git server. Make your changes within that branch, and keep your branch in sync with master – a branch that has fallen “behind” master cannot be fast-forward merged, and pushing the result without rebasing means forcing. Once you are finished with your development, merge your branch into master and delete the branch. This approach will require additional training to ensure everyone understands how to create, rebase, merge, and delete branches (and not to simply force operations because forcing lets you complete your task).
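Sketched end to end with the same hypothetical branch name, the lifecycle looks like this:

    git checkout -b TJR-sendMassMail-AddInlineImages    # begin the effort
    git push -u origin TJR-sendMassMail-AddInlineImages
    # … edit, commit, test …

    git fetch origin                 # keep the branch in sync with master
    git rebase origin/master         # so the final merge can fast-forward

    git checkout master              # finish: merge and clean up
    git merge --ff-only TJR-sendMassMail-AddInlineImages
    git push origin master
    git branch -d TJR-sendMassMail-AddInlineImages
    git push origin --delete TJR-sendMassMail-AddInlineImages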

Instead of reserving ‘master’ for production code, the inverse is equally viable: create a “stable” branch that holds production code and only pull that branch to the PROD servers. This approach prevents accidental changes to prod code – you’ve got to intentionally target “origin stable” with an operation to impact production.
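Setting that up is a one-time operation, after which the PROD pull changes accordingly:

    # one time: create and publish the stable branch
    git checkout -b stable
    git push -u origin stable

    # on the production servers, only ever pull stable
    git pull origin stable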

Our single-repository configuration is a detriment to using branches if development is performed on the DEV server, because everyone shares one working copy. To illustrate the issue, create a BranchTesting repo and add a single file to master. Open two command windows in the same clone. In the first window, create and check out Branch1. In the second, create and check out Branch2 – this switches the shared working copy to Branch2 for both windows. Now add and commit a file in the first window, then add and commit a file in the second. You will find that both files have been committed to Branch2.
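To reproduce it yourself (file names are made up; the point is that both “windows” operate on the same working copy):

    git init BranchTesting && cd BranchTesting
    echo base > readme.txt && git add readme.txt && git commit -m "initial commit"

    git checkout -b Branch1     # window 1
    git checkout -b Branch2     # window 2 – silently switches the shared working copy

    echo one > file1.txt && git add file1.txt && git commit -m "window 1 change"
    echo two > file2.txt && git add file2.txt && git commit -m "window 2 change"

    git log --oneline Branch2   # both new commits are here
    git log --oneline Branch1   # only the initial commit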

How can we address this issue?

Develop on our individual workstations instead of the DEV server. Not sharing a file set for our development efforts eliminates the branch context-switching problem. If you clone the repo to your laptop, Gary clones the repo to his laptop, and I clone the repo to my laptop … you can create TJR-sendMassMail-AddInlineImages on your computer, write and test the changes locally, commit the changes and pull them to the DEV server for more robust testing, and then merge your changes into master when you are ready to elevate the code. I can simultaneously create LJR-monitorLDAPReplication-AddOUD11Servers, do my thing, commit changes and pull them to the DEV server (first using “git branch” to determine whether someone else is already testing their branch on the DEV server), and merge my stuff into master when I’m ready to elevate. Other than verifying that DEV has master checked out (i.e. no one else is testing, so the resource is free), we have no resource contention.

While it may not be desirable to fill up our laptop drives with the entire code set from six different application servers, sparse checkout allows you to select the specific folders that will come down to your clone.
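Sparse checkout is a little clunky to enable but straightforward – a sketch that pulls down only the sendMassMail and SharedTools folders (the repo URL is a placeholder):

    git clone --no-checkout ssh://git.example.com/batchScripts && cd batchScripts
    git config core.sparseCheckout true
    echo "sendMassMail/" >> .git/info/sparse-checkout
    echo "SharedTools/"  >> .git/info/sparse-checkout
    git checkout master     # populates only the listed folders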

The advantage of this approach is that it has no initial LOE beyond training and process change. The repositories are left as-is, and we start using them differently.

Unfortunately, this approach may not be viable in some instances – when access to data sources is restricted by IP ACL, you may not be able to do more than linting on your laptop. It may not even be possible to configure a Windows laptop to run some of our code – some Linux requirements are difficult to address in Windows (the PKI website’s cert info check, for instance), and testing code on Windows may not ensure successful operation on the Linux hosts.

Break the monolithic repositories into discrete repositories and use submodules to “roll up” the multiple independent repositories into a top-level repository. Development is done in the submodule repositories: I can clone monitorLDAPReplication, you can clone sendMassMail, etc. Changes can be made within our branches of these completely separate repositories and merged into the individual repository’s master branch for release to the production environment. Release can then be done for the superset (“--recurse-submodules”) or for individual submodules.
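A sketch of the roll-up, with the git server URL as a placeholder:

    # in the top-level repo: register each discrete repo as a submodule
    git submodule add ssh://git.example.com/monitorLDAPReplication
    git submodule add ssh://git.example.com/sendMassMail
    git commit -m "Register script submodules"

    # production elevation: the whole superset …
    git clone --recurse-submodules ssh://git.example.com/batchServerTopLevel
    # … or just one component
    git submodule update --remote sendMassMail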

This would require splitting each repository into its individual components and configuring the submodule relationships. This can be a scripted operation – an incremental change to the script I used to create the repositories and ingest code – but the LOE for implementation is a few days of script writing and testing. Training will also be required to ensure individuals can register their submodules within the top-level repo, and we will need to accustom ourselves to maintaining individual repos.
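The heart of that script would likely be a subdirectory filter run once per folder – a sketch for one of them (repo names are placeholders):

    # carve sendMassMail out of the monolith, keeping its history
    git clone monolithRepo sendMassMail-split && cd sendMassMail-split
    git filter-branch --prune-empty --subdirectory-filter sendMassMail -- --all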

Or just break the monolithic repositories into discrete repositories without submodules. The level of effort is about the same initially, but no one needs to learn how to set up a submodule. We lose the single-repo conveniences, but there’s literally no association between our different script folders, so someone working in X cannot inadvertently impact Y.