Category: Coding

Maven Build Certificate Error

Attempting to build some Java code, I got a lot of errors indicating a trusted certificate chain was not available:

Could not transfer artifact 
org.springframework.boot:spring-boot-starter-parent:pom:2.2.0.RELEASE 
from/to repo.spring.io (<redacted>): sun.security.validator.ValidatorException: 
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

And

[ERROR] Failed to execute goal on project errorhandler: 
Could not resolve dependencies for project com.example.npm:errorhandler:jar:0.0.1-SNAPSHOT: 
The following artifacts could not be resolved: 
org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot-configuration-processor:jar:2.3.7.BUILD-SNAPSHOT: 
Could not transfer artifact org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-20201211.052207-37 
from/to spring-snapshots (https://repo.spring.io/snapshot): 
transfer failed for https://repo.spring.io/snapshot/org/springframework/boot/spring-boot-starter-data-jpa/2.3.7.BUILD-SNAPSHOT/spring-boot-starter-data-jpa-2.3.7.BUILD-20201211.052207-37.jar: 
Certificate for <repo.spring.io> doesn't match any of the subject alternative names: [] -> [Help 1]

Ideally, you could just add whatever cert(s) need to be trusted into the cacerts file for the Java instance using keytool (.\keytool.exe -import -alias digicert-intermed -cacerts -file c:\tmp\digi-int.cer). However, the work computers are locked down such that I am unable to import certs into the Java trust store. The second error makes me think it wouldn’t have worked anyway: if the cert has no matching subject alternative name, trusting it wouldn’t accomplish anything.

Fortunately, there are a few flags you can add to mvn to ignore certificate errors — thus allowing the build to complete without requiring access to the cacerts file. There is, of course, a possibility that the trust failure is because your connection is being redirected maliciously … but I see enough other people getting trust failures for this spring-boot stuff (and visiting the site doesn’t show anything suspect) that I’m happy to bypass the security validation this once and just be done with the build 🙂

mvn package -DskipTests -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true jib:build

Postgresql Through an SSH Tunnel in Python

Our production Postgresql servers have a fairly restrictive IP access control list — which means you cannot VPN in and query the server. We’ve been using DBeaver with an SSH tunnel to connect, but it’s a bit time-consuming to run a query across all of the servers for monitoring and troubleshooting. To work around the restriction, I built a Python script that uses an SSH tunnel to relay communications to the Postgresql servers.

import psycopg2
from sshtunnel import BaseSSHTunnelForwarderError, SSHTunnelForwarder

from config import strSSHRelayHost, iSSHRelayPort, strSSHRelayUser, strSSHAuthKeyFile, dictHost
# In the config.py, dictHost should contain the following information
# dictHost = {"host":"dbserver.example.com","port":5432,"database": "dbname", "username":"dbuser", "password":"S3cr3tPhr@5e"}

# Example query -- listing out locks 
sqlQuery = "WITH RECURSIVE l AS (  SELECT pid, locktype, mode, granted, ROW(locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid) obj FROM pg_locks ), pairs AS ( SELECT w.pid waiter, l.pid locker, l.obj, l.mode FROM l w JOIN l ON l.obj IS NOT DISTINCT FROM w.obj AND l.locktype=w.locktype AND NOT l.pid=w.pid AND l.granted  WHERE NOT w.granted ), tree AS ( SELECT l.locker pid, l.locker root, NULL::record obj, NULL AS mode, 0 lvl, locker::text path, array_agg(l.locker) OVER () all_pids FROM ( SELECT DISTINCT locker FROM pairs l WHERE NOT EXISTS (SELECT 1 FROM pairs WHERE waiter=l.locker) ) l  UNION ALL  SELECT w.waiter pid, tree.root, w.obj, w.mode, tree.lvl+1, tree.path||'.'||w.waiter, all_pids || array_agg(w.waiter) OVER () FROM tree JOIN pairs w ON tree.pid=w.locker AND NOT w.waiter = ANY ( all_pids )) SELECT (clock_timestamp() - a.xact_start)::interval(3) AS ts_age, replace(a.state, 'idle in transaction', 'idletx') state, (clock_timestamp() - state_change)::interval(3) AS change_age, a.datname,tree.pid,a.usename,a.client_addr,lvl, (SELECT count(*) FROM tree p WHERE p.path ~ ('^'||tree.path) AND NOT p.path=tree.path) blocked, repeat(' .', lvl)||' '||left(regexp_replace(query, 's+', ' ', 'g'),100) query FROM tree JOIN pg_stat_activity a USING (pid) ORDER BY path"

# Alternately, you can use password authentication:
#with SSHTunnelForwarder((strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_password=strSSHRelayUserPass, local_bind_address=("localhost", 55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
try:
    with SSHTunnelForwarder((strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_pkey=strSSHAuthKeyFile, local_bind_address=("localhost", 55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
        # Entering the context manager starts the tunnel (and raises on failure),
        # so no explicit start()/stop() or is-started checks are needed
        params = {'database': dictHost.get('database'), 'user': dictHost.get('username'), 'password': dictHost.get('password'), 'host': server.local_bind_host, 'port': server.local_bind_port}
        conn = psycopg2.connect(**params)
        cursor = conn.cursor()
        cursor.execute(sqlQuery)
        column_names = [desc[0] for desc in cursor.description]
        print(column_names)
        for row in cursor.fetchall():
            print(row)
        cursor.close()
        conn.close()
except BaseSSHTunnelForwarderError:
    print("Unable to establish SSH tunnel")

Python List Joining

I’ve seen lists joined with a delimiter like this before:

strDelimiter = "\n"
strDelimiter.join(listOfStuff)

But it seems silly to allocate memory just for the delimiter … not a big deal from a resource perspective, and the delimiter variable is probably more comprehensible for future maintenance … but I’ve always wondered if you couldn’t just use a string literal with the join method. Turns out you can —

msgContent.attach(MIMEText("\n".join(listOfStuff), 'plain'))
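
For a self-contained version, here’s a minimal sketch (the surrounding message scaffolding and sample data are my own illustration, not from the original snippet):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

listOfStuff = ["first line", "second line", "third line"]  # hypothetical data

msgContent = MIMEMultipart()
# The string literal "\n" works as the delimiter directly -- no variable needed
msgContent.attach(MIMEText("\n".join(listOfStuff), 'plain'))
print(msgContent.as_string())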


Where did THAT come from? (PHP class)

I’ve got some code that is cobbled together from a couple of different places & it’s got namespace collisions that wouldn’t exist if I’d been starting from scratch. But I’ve got what I’ve got … and, occasionally, new code falls over because a class has already been declared.

Luckily, there’s a way to find out from where a class was loaded:

			$strClassName = "Oracle_Cred";
			if( class_exists($strClassName) ){
				$reflector = new \ReflectionClass($strClassName);
				echo "Class $strClassName was loaded from " . $reflector->getFileName();
			}
			else{
				echo "Class $strClassName does not exist yet";
			}

Bash – Spaces, Quotes, and String Replacement

Had to figure out how to do string replacement (Scott wanted to convert WMA files to similarly named MP3 files) and pass a single parameter that has spaces into a shell script.

${original_string/thing_to_find/thing_to_replace_there} does string replacement; for instance, if a hypothetical strFileName held the WMA name, "${strFileName/.wma/.mp3}" would produce the matching MP3 name. And "$@" expands to the positional parameters without re-splitting them on spaces, which is how the space-containing argument arrives in the script intact. This wouldn’t work if we were trying to pass in more than one parameter (well, it *would* … I’d just have to custom-handle parameter expansion in the script).


Fortify on Demand Remediation: Cookie Security: Cookie not Sent Over SSL

This is another one that might be a false positive or might be legit. If you look at the documentation for PHP’s setcookie function, you will see the sixth parameter ($secure) sets a restriction so the cookie is only sent over secure (HTTPS) connections. If you are not setting this restriction, the vulnerability is legitimate and you should sort that out. But … if you followed PHP’s documentation and passed 1 for the parameter? FoD falsely reports that the parameter is not set to true.

In this case, the solution is easy enough: change the perfectly valid 1 in that sixth parameter to the literal true, and voila, the vulnerability has been remediated.

Porkbun DDNS API

I’ve been working on a script that updates our host names in Porkbun, but the script had a problem with apex records like example.com itself. Updating host.example.com worked fine, but example.com became example.com.example.com.

Now, in a BIND zone, you just fully qualify the record by appending the implied root dot (i.e. instead of “example.com”, you use “example.com.”), but Porkbun didn’t understand a fully qualified record. You cannot say the name is null (or “”), and you cannot say the name is “example.com” or “example.com.”.

In what I hope is my final iteration of the script, I now identify cases where the name matches the zone and don’t include the name parameter in the JSON data. Otherwise I include the ‘name’ as the short hostname (i.e. the fully qualified hostname minus the zone name). This appears to be working properly, so (fingers crossed, knock on wood, and all that) our ‘stuff’ won’t go offline next time our IP address changes.
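
As a minimal sketch of that logic (the helper name, sample values, and record fields other than ‘name’ are my illustration, not the script itself):

def build_record_data(strFQDN, strZone, strNewIP):
    # Build the JSON body for a Porkbun A record update
    dictData = {"type": "A", "content": strNewIP, "ttl": "600"}
    if strFQDN != strZone:
        # Short hostname: the fully qualified name minus the zone ("host.example.com" -> "host")
        dictData["name"] = strFQDN[: -(len(strZone) + 1)]
    # When the record name matches the zone, omit "name" entirely
    return dictData

print(build_record_data("host.example.com", "example.com", "203.0.113.10"))
# {'type': 'A', 'content': '203.0.113.10', 'ttl': '600', 'name': 'host'}
print(build_record_data("example.com", "example.com", "203.0.113.10"))
# {'type': 'A', 'content': '203.0.113.10', 'ttl': '600'}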

Scraping Google Calendar Data, take 2

I had written a script that uses the Google Calendar API to pull records from the Township’s calendar. Unfortunately, the pickle / token / whatever has started expiring every week. Which means manual intervention is required for my automated process to run. Which made me wonder … for a private calendar, it makes sense to use the API. I need to authenticate in order to read my private appointments. I can get the token to last for a year, but then I’ve got to go through whatever to be a real / approved application. Which is a lot of effort for something that I’m using to read my own data. Which made me wonder why I need to authenticate to read events on a public calendar!?

Turns out I don’t. I just need to use the iCal feed for the calendar. Using requests to pull data from a URL and then parsing out the iCal data is simple enough. So now I have a script that pulls the iCal file to populate my Exchange calendar. Since it’s unauthenticated, I shouldn’t have to do anything to get it working again next week 🙂
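
A minimal sketch of the unauthenticated pull (the feed URL is a placeholder, the third-party icalendar package does the parsing, and the Exchange side is omitted):

import requests
from icalendar import Calendar  # pip install icalendar

# Placeholder -- use the calendar's "Public address in iCal format" link
strICalURL = "https://calendar.google.com/calendar/ical/<calendar-id>/public/basic.ics"

response = requests.get(strICalURL, timeout=30)
response.raise_for_status()

# Parse the iCal payload and list the events
calendarData = Calendar.from_ical(response.text)
for event in calendarData.walk("VEVENT"):
    print(event.decoded("DTSTART"), str(event.get("SUMMARY")))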

CyberArk Performance Improvement Proposal – In-memory caching

Issue: The multi-step process of retrieving credentials from CyberArk introduces noticeable latency in web tools that use multiple passwords. This latency is incurred on each execution cycle (scheduled task or user access).

Proposal: We will use a redis server to cache credentials retrieved from CyberArk. This will allow quick access to frequently used passwords and reduce latency when multiple users access a tool.

Details:

A redis server will be installed on both the production and development web servers. The redis implementation will be bound to localhost, and communication with the server will be encrypted using the same SSL certificate used on the web server.

Data stored in redis will be encrypted using libsodium. The key and nonce will be stored in a file on the application server.

All password retrievals will follow this basic process (sketched in code below):

  1. Check redis for a cached copy of the credential.
  2. On a cache hit, decrypt the cached value with the libsodium key and use it.
  3. On a cache miss, retrieve the credential from CyberArk, encrypt it, store it in redis, and use it.
  4. On an authentication failure, retrieve the credential directly from CyberArk and refresh the cached copy.

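As a rough sketch of that flow in Python (redis-py plus PyNaCl standing in as the libsodium binding; the key file path, TTL, and fetch_from_cyberark stub are all placeholders for illustration, not part of the proposal):

import redis
from nacl.secret import SecretBox  # libsodium binding (pip install pynacl)

CACHE_TTL_SECONDS = 3600  # retention period is an open question -- see below

# Hypothetical key file; SecretBox requires a 32-byte key. PyNaCl generates a
# per-message nonce and stores it alongside the ciphertext.
boxSecret = SecretBox(open("/etc/webtools/cache.key", "rb").read())
redisServer = redis.Redis(host="localhost", port=6380, ssl=True)

def fetch_from_cyberark(strKey):
    # Placeholder for the real CyberArk retrieval call
    return "S3cr3tPhr@5e"

def get_credential(strKey):
    """Cache-aside retrieval: try redis first, fall back to CyberArk."""
    byteCached = redisServer.get(strKey)
    if byteCached is not None:
        # Cache hit -- decrypt and use
        return boxSecret.decrypt(byteCached).decode()
    # Cache miss -- pull from CyberArk, encrypt, and stash with an expiry
    strPassword = fetch_from_cyberark(strKey)
    redisServer.setex(strKey, CACHE_TTL_SECONDS, boxSecret.encrypt(strPassword.encode()))
    return strPassword

# Keys follow the "user:target:service" pattern discussed below
print(get_credential("report_user:RADD:Oracle"))
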
Outstanding questions:

  1. Using a namespace for the username key increases storage requirements. We could, instead, allocate individual ‘databases’ for specific services, i.e. use database 1 for all Oracle passwords, database 2 for all FTP passwords, and database 3 for all web service passwords. This would reduce the length of the key string.
  2. Data retention: how long should cached data live? There’s a memory limit, and I elected to use a least-frequently-used algorithm to prune data if that limit is reached. That means a record that was used once an hour ago may well age out before a frequently used cred that’s been on the server for a few hours. There’s also FIFO pruning, but I think we will have a handful of really frequently used credentials that we want to keep around as much as possible. One option is basically infinite retention with a low memory allocation: significantly limit the amount of memory used to store credentials and set a high (week? weeks?) expiry period on cached data. Or we could have the cache expire more quickly: a day? A few hours? The biggest drawback I see with a long expiry period is that we’re retaining bad data for some time after a password is changed. I conceptualized a process where we handle authentication failure by getting the password directly from CyberArk and updating the redis cache, which minimizes the risk of keeping cached data for a long time.
  3. How do we want to encrypt/decrypt stashed data? I used libsodium because it’s something I’ve used before (and it’s simple) – does anyone have a particular fav method?
  4. Anyone have an opinion on SSL session caching?
################################## MODULES #####################################

# No additional modules are loaded
################################## NETWORK #####################################

# My web server is on a different host, so I needed to bind to the public
# network interface. I think we'd *want* to bind to localhost in our
# use case.
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no

# Might want to use 0 to disable listening on the non-TLS port
port 6379
tcp-backlog 511
timeout 10
tcp-keepalive 300

################################# TLS/SSL #####################################

tls-port 6380
tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca

# I am not auth'ing clients for simplicity
tls-auth-clients no
# Use 'optional' to accept, but not require, client certificates
# tls-auth-clients optional
tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no

# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes

################################# GENERAL #####################################

# This is for docker; we may want to use something like systemd here.
daemonize no
supervised no

#loglevel debug
loglevel notice
logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0

# 1 might be sufficient -- we *could* partition different apps into different databases
# But I'm thinking, if our keys are basically "user:target:service" ... then report_user:RADD:Oracle
# from any web tool would be the same cred. In which case, one database suffices.
databases 3

################################ SNAPSHOTTING ################################

save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb

#
dir ./

################################## SECURITY ###################################

# I wasn't setting up any sort of authentication and just using the facts that
# (1) you are on localhost and
# (2) you have the key to decrypt the stuff we stash
# to mean you are authorized.

############################## MEMORY MANAGEMENT ################################

# Eviction policy when memory is maxed: drop the least-frequently-used keys
# (among keys with an expiry set)
maxmemory-policy volatile-lfu

############################# LAZY FREEING ####################################

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no

############################ KERNEL OOM CONTROL ##############################

oom-score-adj no

############################## APPEND ONLY MODE ###############################

appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes

############################### ADVANCED CONFIG ###############################

hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes

########################### ACTIVE DEFRAGMENTATION #######################

# Enable active defragmentation (left disabled for now)
activedefrag no

# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10