We made switchel today — it’s a quick drink to cook up, but it takes a while to cool off. We put a couple of cups of water in a pot, then added a cup of maple syrup and 1/2 cup of grated ginger (we used a large microplane grater). We simmered it for ten minutes, then scooped out the ginger pulp and pressed the liquid back into the pot. We added more water to make one gallon, along with a cup of apple cider vinegar. This cooled it off enough that it could be transferred into a container and refrigerated. When serving, we add extra water because it’s a little powerful and not well balanced. Next time, I think we’ll use the same ingredients but get closer to two gallons of water.
Author: Lisa
Keepin It Rural
There’s a movement in my community to “save” it — save it from developers who see hundreds of rural acres as the perfect place to make a load of money building and selling homes on small lots. And probably save it from people who move into a development surrounded by hundreds of rural acres and want to complain that cow poo smells bad — not something I’ve heard of here yet (which could just be that no one’s said it to me), but a friend of mine lived in a development that overlooked a scenic dairy farm. People bought into what almost amounts to agrotourism in my head — look at that pretty chunk of Americana over there. And you get to live right next to it! Aaaand then some people from the development tried to get local regulations changed to stop dairy farming because, well, animal poo does stink. Luckily Ohio has right-to-farm laws that protect farmers in these types of situations — unless you’re really outside industry practices and have an especially stinky farm, you don’t get shut down just because the development that moved in next door doesn’t want to smell cows.
It’s one thing to buy a couple hundred acres of your own and not develop it. Easy enough — don’t develop it! It’s another thing altogether to buy two or three acres and not want any of the surrounding land to be developed. Not impossible if you are lucky enough to pick up property next to a park or something. But a tough ask when surrounded by other residential homeowners. Which is why I think a bigger part of the movement is an attempt to protect rural areas from mass agro. I don’t think many farmers approaching retirement actively want to sell their couple hundred acres to a developer. What they want is to cash out millions of dollars from their land to fund their retirement. An understandable desire. Many farmers I know would love to have kids that are interested in taking over the farm after they retire. But the reality that I see within small-scale farming is having a second job to pay for the farm. Maybe my experience is skewed because I work in IT — it’s a field that’s great for contract work, so people can work a few contracts during less busy farming seasons and focus on the farm in spring and autumn. But I don’t know anyone who literally makes their entire income from farming. Retired people who make extra money farming, IT folks who subsidize the farm. There’s a chap we follow on YouTube who left an architectural firm — they seem to live on their farm proceeds, but I don’t actually know him.
My point being? I think a big part of sustaining rural communities has got to be changing how we shop for food. Changing how restaurants source food. If some mass agriculture company grows corn on ten thousand acres and sells it at four bucks a bushel … we’ve got to value the small rural farmer enough to be willing to pay maybe six or seven bucks a bushel, a price that provides a sustainable income for the farmer. That would also create an environment in which farmers who want to retire would have people who look at purchasing the farm as a viable small business opportunity. Instead of a developer being the only realistic option — seriously, who wants to be destitute in retirement so someone else can enjoy a couple hundred acres of undeveloped property!?
Hazelnuts!
Our hazelnut bushes are finally growing the male part of the flower that comes out in the Autumn! Fingers crossed, we’ll be harvesting hazelnuts this time next year. It’s been seven years since we planted the bushes, but deer and rabbits chomped them down to little nubs the first year they were planted.
Saving the Bees
Our bees have been invaded by hornets — I don’t think we’ll have a hive much longer, but we spent the day blasting hornets with soapy water trying to protect the hive. A few of the honey bees came over to snuggle with us. It was a really cool experience, holding the little honey bees right on our fingers and letting them perch on our shoulders like really silly pirate parrots.
Buttermilk Corn Bread
Course: Sides | Cuisine: American | Difficulty: Easy
Ingredients
1/2 cup salted butter
2/3 cup sugar
2 eggs
1 cup buttermilk
1/2 tsp baking soda
1 cup cornmeal
1 cup all-purpose flour
1/4 tsp salt
Method
- Preheat oven to 375 F and grease an 8″ square pan
- Melt butter. Stir in sugar. Add eggs and beat until well blended. Combine buttermilk with baking soda and stir into butter mixture.
- Add cornmeal, flour, and salt. Blend until well mixed (may be a few lumps remaining). Pour into prepared pan.
- Bake 30 – 40 minutes.
Garlic Butter Rice
In the pressure cooker pot, melt 4 Tbsp of salted butter. Add in 6-8 cloves of garlic (cut into small chunks). When you can smell garlic, add 1 1/2 cups of long-grain white rice and stir around to coat with butter. Add 2 1/2 cups of broth and pressure cook on ‘high’ for 3 minutes. Allow to rest for ten minutes (natural steam release).
Ingredients:
- 4 Tbsp butter
- 8 cloves of garlic
- 1.5 cups rice
- 2.5 cups broth
Maple Lemonade
I’ve been making quick maple lemonade — a 1:1 mix of lemon juice and maple syrup, diluted with water to taste. For me, that’s 1T of lemon juice and 1T of maple syrup mixed into a large glass of water.
Cyberark Performance Improvement Proposal – In-memory caching
Issue: The multi-step process of retrieving credentials from CyberArk introduces noticeable latency in web tools that use multiple passwords. This latency is incurred on each execution cycle (scheduled task or user access).
Proposal: We will use a redis server to cache credentials retrieved from CyberArk. This will allow quick access to frequently used passwords and reduce latency when multiple users access a tool.
Details:
A redis server will be installed on both the production and development web servers. The redis implementation will be bound to localhost, and communication with the server will be encrypted using the same SSL certificate used on the web server.
Data stored in redis will be encrypted using libsodium. The key and nonce will be stored in a file on the application server.
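As a sketch of what that might look like — the file path and the hex-lines format here are placeholders, not final decisions:

```php
<?php
// Sketch: persisting the libsodium secretbox key and nonce to a file so the
// web apps can decrypt cached values. The path and the two-hex-lines format
// are placeholders; the real file would live on the application server with
// permissions limited to the service account.
$file = sys_get_temp_dir() . '/redis-cache.key';
if (!file_exists($file)) {
    $key   = random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES);   // 32 bytes
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // 24 bytes
    file_put_contents($file, sodium_bin2hex($key) . "\n" . sodium_bin2hex($nonce));
    chmod($file, 0600); // readable only by the service account
}
// Any app with read access to the file recovers the same key material
list($hexKey, $hexNonce) = explode("\n", trim(file_get_contents($file)));
$key   = sodium_hex2bin($hexKey);
$nonce = sodium_hex2bin($hexNonce);
```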
All password retrievals will follow this basic process:
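In outline: check the redis cache first, decrypt on a hit, and fall back to CyberArk on a miss, re-populating the cache with the fresh value. A minimal sketch of that flow — `cyberarkFetch()` is a hypothetical stub standing in for the real CyberArk retrieval call, and a plain array stands in for the redis client:

```php
<?php
// Sketch of the cache-first retrieval flow. cyberarkFetch() is a
// hypothetical stub for the real CyberArk call; $cache is a plain array
// standing in for the phpredis client used in the sandbox below.
function cyberarkFetch(string $id): string {
    return "s3cr3t-for-$id"; // stub
}

function getCredential(array &$cache, string $id, string $key, string $nonce): string {
    if (isset($cache[$id])) {
        // Cache hit -- decrypt the stashed value
        $plain = sodium_crypto_secretbox_open(base64_decode($cache[$id]), $nonce, $key);
        if ($plain !== false) {
            return $plain;
        }
    }
    // Cache miss (or undecryptable entry) -- fetch from CyberArk and re-cache
    $plain = cyberarkFetch($id);
    $cache[$id] = base64_encode(sodium_crypto_secretbox($plain, $nonce, $key));
    return $plain;
}

$key   = random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES);
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$cache = array();

$first  = getCredential($cache, 'report_user:RADD:Oracle', $key, $nonce); // miss: goes to CyberArk
$second = getCredential($cache, 'report_user:RADD:Oracle', $key, $nonce); // hit: comes from cache
echo ($first === $second) ? "cache consistent\n" : "mismatch\n";
```

An authentication failure against the target system would be handled by deleting the cached entry and calling the same routine again, which forces the CyberArk path.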
Outstanding questions:
- Using a namespace for the username key increases the storage requirement. We could, instead, allocate individual ‘databases’ for specific services. E.g., use database 1 for all Oracle passwords, database 2 for all FTP passwords, and database 3 for all web service passwords. This would reduce the length of the key string.
- Data retention. How long should cached data live? There’s a memory limit, and I elected to use a least-frequently-used algorithm to prune data if that limit is reached. That means a record that was used once an hour ago may well age out before a frequently used cred that’s been on the server for a few hours. There’s also FIFO pruning, but I think we will have a handful of really frequently used credentials that we want to keep around as much as possible. One option is basically infinite retention with a low memory allocation – we could significantly limit the amount of memory used to store credentials and set a high (week? weeks?) expiry period on cached data. Or we could have the cache expire more quickly – a day? A few hours? The biggest drawback I see with a long expiry period is that we’re retaining bad data for some time after a password is changed. I conceptualized a process where we’d handle authentication failure by getting the password directly from CyberArk and updating the redis cache – which minimizes the risk of keeping cached data for a long time.
- How do we want to encrypt/decrypt stashed data? I used libsodium because it’s something I used before (and it’s simple) – does anyone have a particular fav method?
- Anyone have an opinion on SSL session caching?
Setting up redis sandbox
To set up my redis sandbox in Docker, I created two folders — conf and data. The conf folder houses the SSL files and the configuration file. The data directory is used to store the redis data.

I first needed to generate an SSL certificate. The public and private keys of the pair are stored in a pem file and a key file. The public cert of the CA that signed the certificate is stored in a “ca” folder.
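One way to produce that set for a sandbox is a self-signed certificate via PHP’s openssl extension (the openssl CLI works just as well). The CN here is a placeholder; the file names match the redis.conf that follows, and since the cert is self-signed, the “CA” cert is just a copy of the cert itself:

```php
<?php
// Sketch: self-signed sandbox certificate via PHP's openssl extension.
// The CN is a placeholder; file names match the sandbox redis.conf.
@mkdir('conf/ssl/ca', 0700, true);

$dn = array('commonName' => 'memcached.example.com');
$keyPair = openssl_pkey_new(array(
    'private_key_bits' => 2048,
    'private_key_type' => OPENSSL_KEYTYPE_RSA,
));
$csr  = openssl_csr_new($dn, $keyPair, array('digest_alg' => 'sha256'));
// Passing null as the CA cert makes this a self-signed certificate
$cert = openssl_csr_sign($csr, null, $keyPair, 365, array('digest_alg' => 'sha256'));
openssl_x509_export_to_file($cert, 'conf/ssl/memcache.pem');
openssl_pkey_export_to_file($keyPair, 'conf/ssl/memcache.key');
// Self-signed, so the "CA" cert is just the cert itself
copy('conf/ssl/memcache.pem', 'conf/ssl/ca/memcache.pem');
```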
Then I created a redis configuration file — note that the paths are relative to the Docker container:
```
################################## MODULES #####################################
# No additional modules are loaded

################################## NETWORK #####################################
# My web server is on a different host, so I needed to bind to the public
# network interface. I think we'd *want* to bind to localhost in our use case.
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no
# Might want to use 0 to disable listening on the unsecure port
port 6379
tcp-backlog 511
timeout 10
tcp-keepalive 300

################################# TLS/SSL #####################################
tls-port 6380
tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca
# I am not auth'ing clients for simplicity
tls-auth-clients no
tls-auth-clients optional
tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no
# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes

################################# GENERAL #####################################
# This is for docker, we may want to use something like systemd here.
daemonize no
supervised no
#loglevel debug
loglevel notice
logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0
# 1 might be sufficient -- we *could* partition different apps into different
# databases. But I'm thinking, if our keys are basically "user:target:service"
# ... then report_user:RADD:Oracle from any web tool would be the same cred.
# In which case, one database suffices.
databases 3

################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
# dir ./

################################## SECURITY ###################################
# I wasn't setting up any sort of authentication and just using the facts that
# (1) you are on localhost and
# (2) you have the key to decrypt the stuff we stash
# to mean you are authorized.

############################## MEMORY MANAGEMENT ##############################
# This is what to evict from the dataset when memory is maxed
maxmemory-policy volatile-lfu

############################# LAZY FREEING ####################################
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no

############################ KERNEL OOM CONTROL ###############################
oom-score-adj no

############################## APPEND ONLY MODE ###############################
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes

############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes

########################### ACTIVE DEFRAGMENTATION ############################
# Enable active defragmentation
activedefrag no
# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10
```
Once I had the configuration data set up, I created the container. I’m using port 6380 for the SSL connection. For the sandbox, I also exposed the clear-text port. I mapped volumes for the redis data, the SSL files, and the redis.conf file:
```shell
docker run --name redis-srv -p 6380:6380 -p 6379:6379 \
    -v /d/docker/redis/conf/ssl:/opt/redis/ssl \
    -v /d/docker/redis/data:/data \
    -v /d/docker/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf \
    -d redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
```
Voila, I have a redis server ready. Quick PHP code to ensure it’s functional:
```php
<?php
$sodiumKey = random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES);     // 256 bit
$sodiumNonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // 24 bytes
#print "Key:\n";
#print sodium_bin2hex($sodiumKey);
#print "\n\nNonce:\n";
#print sodium_bin2hex($sodiumNonce);
#print "\n\n";

$redis = new Redis();
$redis->connect('tls://memcached.example.com', 6380); // enable TLS

// check whether server is running or not
echo "<PRE>Server is running: " . $redis->ping() . "\n</PRE>";

$checks = array("credValueGoesHere", "cred2", "cred3", "cred4", "cred5");

// Stash each value, encrypted, with a 30 minute timeout
foreach ($checks as $i => $value) {
    usleep(100);
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 1800, $strCryptedValue); // 30 minute timeout
}

echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue\n";
}
echo "</UL>\n";

echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*'); // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

// Overwrite each value with a shorter, 1 minute timeout
foreach ($checks as $i => $value) {
    usleep(100);
    $value = $value . "-updated";
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 60, $strCryptedValue); // 1 minute timeout
}

echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue\n";
}
echo "</UL>\n";

echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*'); // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

// Overwrite again with a 1 second timeout so the data ages out quickly
foreach ($checks as $i => $value) {
    usleep(100);
    $value = $value . "-updated";
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 1, $strCryptedValue); // 1 second timeout
}

echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*'); // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

sleep(5); // Sleep so data ages out of redis

echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue\n";
}
echo "</UL>\n";
?>
```
Frozen Pizza
For some reason, frozen pizza never gets cooked right when I follow the instructions. Yes, the oven is actually at the right temp (I know not to trust the built-in thermistor … but, if three different devices agree within a degree or so … I am confident that I’ve got the oven to a reasonably correct temperature!). But the middle ends up uncooked and soggy. Ugh! And cooking it for a few more minutes until the crust is actually cooked just yields burnt pizza. Also ugh!
So I did an experiment — instead of cooking the pizza at 400 for 22-24 minutes, I tried half an hour at 350 and half an hour at 375. I had to add a couple extra minutes in either case, but 34 minutes at 350 yielded a not-burnt-but-cooked frozen pizza! That’s not exactly a quick meal — if I have frozen dough defrosted, I can bake a fresh pizza at 550 for about ten minutes and have a full half-sheet of well-cooked pizza. But it’s a lazy meal — maybe three minutes of active cooking and half an hour to wash dishes or something.