Well, the 2022 maple season is over. I think our taps have dried up, since we’ve had a few freeze/thaw days now and still haven’t gotten an appreciable amount of sap. We only got about 2.5 gallons of syrup this year, much less than expected, so we need to be ready to tap in January next year when the first week of freeze/thaw weather hits. While I love the flavor of late-season syrup, we’re getting way too many warm days in March for good sap production.
Author: Lisa
Tiny Ducky
A Proto-Duck Emerges
Docker – Changing an Existing Container
I’ll start by acknowledging that, of course, you could just redeploy the container with the settings you want now. The whole point of containerized development is that anything “good” should either be part of the deployment settings or be data persisted outside of the container. So, in theory, redeploying the container every day shouldn’t really be detectable. Even if you didn’t deploy the original container (i.e. you don’t have the Dockerfile or docker run command handy to tweak as needed), you can reverse engineer what you need from docker inspect. But sometimes? It’s quicker/easier/more convenient to just fix what you need within the existing container. And it is possible to do so.
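As an aside, if you do go the redeploy route, docker inspect will show you most of what the original docker run did. A few examples, where the container name is just a placeholder:
# Show the environment variables, command, and volume mounts for a container
docker inspect -f '{{.Config.Env}}' my_container
docker inspect -f '{{.Config.Cmd}}' my_container
docker inspect -f '{{.HostConfig.Binds}}' my_container
# Or dump everything and read through it
docker inspect my_container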
For the in-place approach, the trickiest part is finding the right file to edit.
# cd into the folder where Docker keeps its container definitions
cd /var/lib/docker/containers/
# Find the ID of the container you want to edit
docker ps
# Find the corresponding folder name
ls -al | grep bc9dc66882af
# cd into that folder
cd bc9dc66882af18f59c209faf10031fe21765571d0a2fe4a32a768a1d52cc1e53
# Stop Docker before editing: changes made while the daemon is still
# running may be overwritten when it shuts down
systemctl stop docker
# Edit the config.v2.json file for the container
vi config.v2.json
# And, finally, start Docker back up
systemctl start docker
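What you actually edit depends on what you need to change, but, as an example, the container’s environment variables live in a Config.Env array within that file. The exact layout varies between Docker versions, and the values below are made up, but it looks roughly like this:
"Config": {
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "TZ=America/New_York"
    ]
}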
Docker – Reporting IP Addresses of Containers
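One quick way to do this is docker inspect with a Go template; something along these lines works (with multiple networks attached, you’ll get more than one address per container):
# Print the name and IP address of every running container
docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' $(docker ps -q)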
Tiny Turkey Army, Take Two
The new turkeys arrived today. Last year, USPS shipping was a horrible experience, so this year I called a few hatcheries to confirm they’d been able to deliver healthy, happy poults. Meyers said they hadn’t had delivery problems, so we ordered 20 Black Spanish turkeys from them. They shipped yesterday, the shipping notice was delivered overnight, and the USPS clerk called at 6:30 this morning to let me know they had arrived. Wow, was that early!
We got all the little ones into their brooder, fed, and watered (having more healthy birds seems to help: one little guy eats or drinks, and a whole flock of little ones comes over and copies him).
Farm Automation
Scott set up one of the ESP32s that we use as environmental sensors to monitor the incubator. The incubator has an audible alarm if it gets too warm or cold, but that’s only useful if you’re pretty close to the unit. We had gotten a cheap rectangular incubator from Amazon, and it’s got some quirks. The display says C, but if we actually had eggs in a box at 100C? They’d be cooked. The number is really F, even though the letter “C” follows it … there’s supposed to be a calibration menu option, but there isn’t. Worse, though, the temperature sensor is off by a few degrees. If calibration were an option, that wouldn’t be a big deal … but the only way we’re able to get the device over 97F is by taping the temperature probe up to the top of the unit.
So we’ve got a DHT11 sensor inside the incubator, and the ESP32 sends temperature and humidity readings over MQTT to OpenHAB. There are text and audio alerts if the temperature or humidity aren’t in the “good” window. This means you can be out in the yard or away from home and still know if there’s a problem (and, since the data is stored in a database table, we can even chart the temperature and humidity across the entire incubation period).
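If you want to spot-check what the sensor is publishing without going through OpenHAB, mosquitto_sub works nicely. The broker hostname and topic names below are placeholders for whatever the ESP32 actually publishes to:
# Watch the incubator readings as they arrive over MQTT
mosquitto_sub -h mqtt.example.com -t 'incubator/temperature' -t 'incubator/humidity' -v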
We also bought a larger incubator for the chicken eggs — and there’s a new ESP32 and sensor in the larger incubator.
Incubators
We tried a cheap forced-air incubator from Amazon.
All of these little rectangular boxes seem to have the same design flaw: the fan and heater sit in one corner of the incubator, and the hot air blows out of one side of the box. So you end up with really hot spots near the heater and relatively cold spots everywhere else.
Since we had a bad hatch rate (1 of 8) with the thing, we decided to get a bigger, better incubator. After researching a lot of options, we got the Farm Innovators 4250: lots of space for eggs, a centrally located heater and fan that blow air all around, and a humidity sensor (it looks to be the same DHT-11 that we use in our own sensors). We’ll collect eggs and get a large batch going in a few weeks.
2022 Maple Season
We’ve been running the reverse osmosis system at about 95 PSI and concentrating the sap from around 2 percent sugar to 7-8 percent. That’s a lot of water being pulled out: going from 2 percent to 8 percent means roughly three quarters of the original volume comes off as water. And the water we pull off tests at 0, so we’re not losing a measurable amount of sugar in the filtering process.
Postgresql – Querying Hot Standby Server
We hit our maximum connection limit on some PostgreSQL servers — which made me wonder why the hot standby servers weren’t being used … well, at all. They’re equally big, expensive servers with loads of disk space. But they’re just sitting there “in case”.
So we directed some traffic over to the standby server. I’m also going to tweak a few settings related to user limits: increase max_connections, since these are dedicated hosts with plenty of I/O, memory, and CPU to spare; increase the number of reserved connections, since replication filled up all of the reserved slots; and put a per-user connection limit on one account that runs a lot of threads. But directing the people who were only trying to look at data over to the standby server seemed like the quickest fix.
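For reference, those changes amount to something like the following sketch. The numbers, role name, and service name are placeholders rather than our actual values, and I’m assuming the reserved-connections knob is superuser_reserved_connections:
# Raise the connection ceiling and the reserved superuser slots (both need a restart)
psql -c "ALTER SYSTEM SET max_connections = 500;"
psql -c "ALTER SYSTEM SET superuser_reserved_connections = 10;"
# Cap the account that runs a lot of threads; applies to new connections
psql -c "ALTER ROLE busy_batch_account CONNECTION LIMIT 25;"
# Restart so max_connections and superuser_reserved_connections take effect
systemctl restart postgresql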
Now, we discovered something interesting about how queries against the standby interact with replication. It makes a lot of sense when you start thinking about it — if you query against the writable replica, there’s some blocking that goes on. The system isn’t going to vacuum data that you’re currently trying to use. The standby, however, doesn’t have any way to clue the writable replica in to the fact you are trying to use some data. So the writable replica gets a delete, does its thing to hide those rows from future queries, and eventually auto-vacuum comes through and cleans up those rows. All of this gets pushed over to the standby … and there goes the data you were trying to read.
Odds of this happening on a query that takes eight seconds? Incredibly low! The odds increase, however, the longer a query runs. So some of our super massive reports started seeing an error indicating that their query was cancelled “due to a conflict with recovery”.
There are two solutions in the PostgreSQL documentation. One is to increase the max_standby_streaming_delay value (there’s also an archive delay, but we aren’t particularly concerned about clients querying the server during archive recovery). The other is to avoid vacuuming data away too quickly, either by setting hot_standby_feedback on the standby or by increasing vacuum_defer_cleanup_age on the primary.
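As a sketch, those knobs live in postgresql.conf (or can be set with ALTER SYSTEM); the values below are illustrations, not recommendations:
# On the standby: let a conflicting query run longer before WAL replay cancels it
max_standby_streaming_delay = 600s
# Or on the standby: tell the primary which rows the standby's queries still need so they aren't vacuumed away
hot_standby_feedback = on
# Or on the primary: defer cleanup of dead rows by this many transactions
vacuum_defer_cleanup_age = 10000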
There’s a third option too: don’t use the standby for long-running queries. That’s easily done in our case … and doesn’t require tweaking any PostgreSQL settings. Ad hoc reporting and direct user access really shouldn’t be running such substantial queries anyway (it’s always good to have a SQL expert plan out and optimize complex queries if that’s an option).