I often need to quickly see if a cert is going to expire. I’ve got nice monitoring scripts too, but sometimes I need something I can run from the command line right now:
echo -n Q | openssl s_client -connect $HOST:$PORT | openssl x509 -noout -dates
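To script around that one-liner, it helps to turn the `notAfter` date into a plain days-remaining number. A minimal sketch: `cert_days_left` is my own helper name (not an openssl built-in), and the date math assumes GNU `date -d`:

```shell
# cert_days_left: print the number of whole days until a PEM cert expires.
cert_days_left() {
  local end epoch_end epoch_now
  # openssl prints "notAfter=Jun  1 12:00:00 2026 GMT"; keep everything after '='
  end=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
  epoch_end=$(date -d "$end" +%s)   # GNU date parses openssl's date format
  epoch_now=$(date +%s)
  echo $(( (epoch_end - epoch_now) / 86400 ))
}
```

It can be fed from the same s_client pipeline, e.g. `echo -n Q | openssl s_client -connect $HOST:$PORT 2>/dev/null | cert_days_left /dev/stdin`. For a simple pass/fail check, openssl also has `x509 -checkend <seconds>`, which exits non-zero if the cert expires within that window.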
While Tableau doesn’t have anything nice like a ‘dumpster’ from which you can restore a deleted workbook, it does at least keep tables for historic events like workbook deletion. The following query finds records where a workbook with FOOBAR in its name was deleted. It lists all of the event info as well as info on the user who deleted it. Near as I can tell, the “created” date for the historical_events table is the date the workbook was deleted (from my restore, I know the workbook itself was created last year!)
SELECT historical_events.*, hist_workbooks.*, hist_users.*
FROM historical_events
LEFT OUTER JOIN historical_event_types ON historical_event_types.type_id = historical_events.historical_event_type_id
LEFT OUTER JOIN hist_workbooks ON hist_workbooks.id = historical_events.hist_workbook_id
LEFT OUTER JOIN hist_users ON hist_users.id = historical_events.hist_actor_user_id
WHERE historical_event_types.name = 'Delete Workbook'
AND hist_workbooks.name LIKE '%FOOBAR%'
;
A very, very long time ago (2002-ish), we moved to using AD to store our Oracle connections — it’s far easier to edit the one connection entry in Active Directory than to distribute the latest connection file to every desktop and server in the company. Frankly, they never get to the servers. Individuals enter the connections they need … and update them when something stops working and they find the new host/port/etc. Unfortunately, Oracle used an anonymous connection to retrieve the data. So we’ve had anonymous binds enabled in Active Directory ever since. I no longer support AD, so haven’t really kept up with it … until a coworker asked why this huge security vulnerability was specifically configured for our domain. And I gave him the whole history. While we were chatting, a quick search revealed that Oracle 21c and later clients actually can use a wallet for credentials in the sqlnet.ora file:
NAMES.LDAP_AUTHENTICATE_BIND = TRUE
NAMES.LDAP_AUTHENTICATE_BIND_METHOD = LDAPS_SIMPLE_AUTH
WALLET_LOCATION = (SOURCE = (METHOD = FILE)
(METHOD_DATA = (DIRECTORY = /path/to/wallet.file)))
From https://www.oracle.com/a/otn/docs/database/oracle-net-active-directory-naming.pdf
RedHat is phasing out ZFS – there are several reasons for this move, but primarily ZFS is a closed source Solaris (now Oracle) codebase. While OpenZFS exists, it’s not quite ‘the same’. RedHat’s preferred solution is Virtual Data Optimizer (VDO). This page walks through the process of installing PostgreSQL and creating a database cluster on VDO and installing TimescaleDB extension on the database cluster for RedHat Enterprise 8 (RHEL8)
Before we create a VDO disk, we need to install it
yum install vdo kmod-kvdo
Then we need to create a VDO volume – here a VDO named ‘PGData’ is created on /dev/sdb – a 9TB device on which we will present 16TB of logical space
vdo create --name=PGData --device=/dev/sdb --vdoLogicalSize=16T
Check to verify that the object was created – it is /dev/mapper/PGData in this instance
vdo list
Now format the volume using xfs.
mkfs.xfs /dev/mapper/PGData
And finally add a mount point
# Create the mount point folder
mkdir /pgpool
# Update fstab to mount the new volume to that mount point
cat /etc/fstab
/dev/mapper/PGData /pgpool xfs defaults,x-systemd.requires=vdo.service 0 0
# Load the updated fstab
systemctl daemon-reload
# and mount the volume
mount -a
The volume should now be mounted at ‘/pgpool/’
The main reason for using VDO with Postgres is because of its compression feature – this is automatically enabled, although we may need to tweak settings as we test it.
We now have a place in our pool where we want our Postgres database to store its data, so let’s go ahead and install PostgreSQL. Here we are using RHEL8 and installing PostgreSQL 12:
# Install the repository RPM:
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
dnf clean all
# Disable the built-in PostgreSQL module:
dnf -qy module disable postgresql
# Install PostgreSQL:
dnf install -y postgresql12-server
Once the installation is done, we need to initialize the database cluster and start the server. Since we want Postgres to store data on our VDO volume, we need to initialize it into our custom directory; there are a couple of ways to do that.
In all cases, we need to make sure the data directory on our VDO volume, ‘/pgpool/pgdata/’, is owned by the ‘postgres’ user, which is created when PostgreSQL is installed. We can do that by running the below commands before the steps that start the Postgres server:
mkdir /pgpool/pgdata
chown -R postgres:postgres /pgpool
Customize the systemd service by editing the postgresql-12 unit file and updating the PGDATA environment variable:
vdotest-uos:pgpool # grep Environment /usr/lib/systemd/system/postgresql-12.service
# Note: avoid inserting whitespace in these Environment= lines, or you may
Environment=PGDATA=/pgpool/pgdata
and then initialize, enable and start our server as below
/usr/pgsql-12/bin/postgresql-12-setup initdb
systemctl enable postgresql-12
systemctl start postgresql-12
Here ‘/usr/pgsql-12/bin/’ is the bin directory of the Postgres installation; substitute your own bin directory path if it differs.
or
We can also pass the data directory value directly when initializing the database, using the below command
/usr/pgsql-12/bin/initdb -D /pgpool/pgdata/
and then start the server using
systemctl start postgresql-12
Now that we have installed PostgreSQL and started the server, we will install the Timescale extension for Postgres.
Add the TimescaleDB repo with the below command
tee /etc/yum.repos.d/timescale_timescaledb.repo <<EOL
[timescale_timescaledb]
name=timescale_timescaledb
baseurl=https://packagecloud.io/timescale/timescaledb/el/8/\$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/timescale/timescaledb/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
EOL
sudo yum update -y
then install it using the below command
yum install -y timescaledb-postgresql-12
After installing, we need to add ‘timescaledb’ to shared_preload_libraries in our postgresql.conf. Timescale provides ‘timescaledb-tune‘, which can be used for this as well as for configuring other database settings. Since we initialized our PG database cluster in a custom location, we need to point timescaledb-tune at our postgresql.conf; it also requires the path to our pg_config file. We can do both with the following command:
timescaledb-tune --pg-config=/usr/pgsql-12/bin/pg_config --conf-path=/pgpool/pgdata/postgresql.conf
After running the above command, we need to restart our Postgres server:
systemctl restart postgresql-12
After restarting, connect to the database in which you want to use Timescale hypertables and run the below statement to load the Timescale extension:
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
You can check that Timescale is loaded by running the ‘\dx’ command in psql, which lists the installed extensions.
To allow remote connections to PostgreSQL, a couple of changes are needed.
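As a sketch, these are the usual two edits: the file paths assume the custom PGDATA from above, and the address range is an example to be replaced with your own network.

```
# /pgpool/pgdata/postgresql.conf -- listen beyond localhost
listen_addresses = '*'

# /pgpool/pgdata/pg_hba.conf -- allow password logins from an example subnet
host    all    all    10.0.0.0/24    md5
```

A server restart (systemctl restart postgresql-12) is required for listen_addresses to take effect; pg_hba.conf changes take effect on reload.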
Finally, create the tenants … we’re using OAUTH for Kibana authentication, so I wasn’t able to use the API to export “saved objects”. Fortunately, we don’t have many tenants … and exporting/importing those saved objects manually isn’t an onerous task.
import requests
from requests.auth import HTTPBasicAuth

def createTenant(strTenantName, strDescription):
    jsonAddTenant = { "description": strDescription }
    r2 = requests.put(f"https://opensearch.example.com:9200/_opendistro/_security/api/tenants/{strTenantName}", json=jsonAddTenant, auth=HTTPBasicAuth('something', 'something'), verify=False)
    print(r2.text)
    print(r2.status_code)

# Get all tenants from ES
r = requests.get("https://elasticsearch.example.com:9200/_opendistro/_security/api/tenants", auth=HTTPBasicAuth('something', 'something'), verify=False)
dictAllTenants = r.json()

# Recreate each non-reserved tenant in OpenSearch
for item in dictAllTenants.items():
    if item[1].get('reserved') == False:
        createTenant(item[0], item[1].get('description'))
Since there are a lot of changes in how lifecycle policies work between ElasticSearch and OpenSearch, the recommendation I’ve seen is to manually create them … but it’s a lot of repetitive typing, so I used a script to create a base policy — a name with a hot allocation — and manually added all of the remaining stages, transitions, and index patterns to which the policy should be applied.
import requests
from requests.auth import HTTPBasicAuth

f = open("data-LifecyclePolicies.txt", "w")
listIgnoredILMPolicies = ["watch-history-ilm-policy"]

# Get all ILM policies from ES
r = requests.get("https://elasticsearch.example.com:9200/_ilm/policy", auth=HTTPBasicAuth('something', 'something'), verify=False)
dictAllILMPolicies = r.json()

for item in dictAllILMPolicies.items():
    if item[0] not in listIgnoredILMPolicies:
        strILMPolicyName = item[0]
        dictILMPolicySettings = item[1]
        iHotDays = None
        iWarmDays = None
        iColdDays = None
        iDeleteDays = None
        # Record the min_age for each phase the ES policy defines
        if item[1].get('policy').get('phases').get('hot'):
            iHotDays = item[1].get('policy').get('phases').get('hot').get('min_age')
        if item[1].get('policy').get('phases').get('warm'):
            iWarmDays = item[1].get('policy').get('phases').get('warm').get('min_age')
        if item[1].get('policy').get('phases').get('cold'):
            iColdDays = item[1].get('policy').get('phases').get('cold').get('min_age')
        if item[1].get('policy').get('phases').get('delete'):
            iDeleteDays = item[1].get('policy').get('phases').get('delete').get('min_age')
        print(f"Policy named {strILMPolicyName} has phases:")
        print(f"\tHot {iHotDays}")
        print(f"\tWarm {iWarmDays}")
        print(f"\tCold {iColdDays}")
        print(f"\tDelete {iDeleteDays}")
        print("\n")
        f.write(f"Policy named {strILMPolicyName} has phases:\n")
        f.write(f"\tHot {iHotDays}\n")
        f.write(f"\tWarm {iWarmDays}\n")
        f.write(f"\tCold {iColdDays}\n")
        f.write(f"\tDelete {iDeleteDays}\n")
        f.write("\n")
        # Base ISM policy: a hot state only; remaining states/transitions added by hand
        jsonILMPolicyCreation = {
            "policy": {
                "description": "Ported from ES7",
                "default_state": "hot",
                "states": [
                    {
                        "name": "hot",
                        "actions": [
                            {
                                "retry": {
                                    "count": 3,
                                    "backoff": "exponential",
                                    "delay": "1m"
                                },
                                "allocation": {
                                    "require": {
                                        "temp": "hot"
                                    },
                                    "include": {},
                                    "exclude": {},
                                    "wait_for": "false"
                                }
                            }
                        ],
                        "transitions": []
                    }
                ],
                "ism_template": []
            }
        }
        r2 = requests.put(f"https://opensearch:9200/_plugins/_ism/policies/{item[0]}", json=jsonILMPolicyCreation, auth=HTTPBasicAuth('something', 'something'), verify=False)
        print(r2.text)
        print(r2.status_code)
f.close()
After the roles are created, I need to map users into the roles — using the ElasticSearch API to list all roles and add each user to the corresponding OpenSearch role.
import requests
from requests.auth import HTTPBasicAuth

def addUsersToRole(strRole, listUsers):
    jsonAddUsers = [ { "op": "add", "path": f"/{strRole}", "value": {"users": listUsers} } ]
    print(f"{strRole}\t{jsonAddUsers}")
    r2 = requests.patch("https://opensearch.example.com:9200/_plugins/_security/api/rolesmapping", json=jsonAddUsers, auth=HTTPBasicAuth('something', 'something'), verify=False)
    print(r2.text)
    print(r2.status_code)

listIgnoredGroups = ['security_rest_api_access', 'logstash_role', 'elastalert_role', 'kibana_server', 'wsadmin_role', 'mgmt_role', 'logstash', 'manage_snapshots', 'readall', 'all_access', 'own_index', 'kibana_user', ]

# Get all roles from prod & list users in those roles
# GET _opendistro/_security/api/rolesmapping/
r = requests.get("https://elasticsearch.example.com:9200/_opendistro/_security/api/rolesmapping/", auth=HTTPBasicAuth('something', 'something'), verify=False)
dictAllRoles = r.json()

# For each role, add its user list to the corresponding role in OS
for item in dictAllRoles.items():
    if item[0] not in listIgnoredGroups:
        addUsersToRole(item[0], item[1].get('users'))
To create the roles, use the ElasticSearch API to get the existing role definitions, remove a few attributes I don’t want to set (reserved, static, hidden), and create the corresponding role in OpenSearch. I skip all of the reserved roles.
import requests
from requests.auth import HTTPBasicAuth

f = open("results-roles.txt", "a")

objGetRoleRequest = requests.get("https://elasticsearch.example.com:9200/_opendistro/_security/api/roles", auth=HTTPBasicAuth('something', 'something'), verify=False)
dictRoleInfo = objGetRoleRequest.json()

for item in dictRoleInfo.items():
    if item[1].get('reserved') is False:
        print(item)
        print("\n")
        # Copy the role definition, minus the attributes we don't want to set
        dictRoleDefinition = dict(item[1])
        dictRoleDefinition.pop('reserved')
        dictRoleDefinition.pop('static')
        dictRoleDefinition.pop('hidden')
        r = requests.put(f"https://opensearch.example.com:9200/_plugins/_security/api/roles/{item[0]}", json=dictRoleDefinition, auth=HTTPBasicAuth('something', 'something'), verify=False)
        print(r.json())
        if r.status_code == 200:
            print(f"{item[0]}\t{r.status_code}\t{r.json()}\n")
            f.write(f"{item[0]}\t{r.status_code}\t{r.json()}\n")
        else:
            print(f"HTTP Error: {r.status_code} on web call")
            print(f"{item[0]}\t{r.status_code}\t{r.json()}\n")
            f.write(f"{item[0]}\t{r.status_code}\t{r.json()}\n")
f.close()
One of the trickier bits of migrating from ElasticSearch to OpenSearch has been the local users — most of our users are authenticated via OAUTH, but programmatic access is done with local user accounts. Fortunately, you appear to be able to get the user password hash from the .opendistro_security API if you authenticate using an SSL cert.
This means the CN of the certificate being used must be registered in the elasticsearch.yml as an admin DN:
plugins.security.authcz.admin_dn:
- 'CN=admin,O=LJRTest,ST=Ohio,C=US'
- 'CN=ljradmin,O=LJRTest,ST=Ohio,C=US'
Provided the certificate is an admin_dn, the account can be used to search the .opendistro_security index and return local user info — including hashes. Information within the document is base64 encoded, so the value needs to be decoded before you’ve got legible user information. Once the user record has been obtained, the information can be used to PUT details to the OpenSearch API and create a matching user.
import json
import requests
import base64
from requests.auth import HTTPBasicAuth

clientCrt = "./certs/ljr-mgr.pem"
clientKey = "./certs/ljr-mgr.key"
strOSAdminUser = 'something'
strOSAdminPass = 'something'

r = requests.get("https://elasticsearch.example.com:9200/.opendistro_security/_search?pretty", verify=False, cert=(clientCrt, clientKey))
if r.status_code == 200:
    dictResult = r.json()
    for item in dictResult.get('hits').get('hits'):
        if item.get('_id') == "internalusers":
            # The internalusers document is base64-encoded JSON
            strInternalUsersB64 = item.get('_source').get('internalusers')
            strUserJSON = base64.b64decode(strInternalUsersB64).decode("utf-8")
            dictUserInfo = json.loads(strUserJSON)
            for tupleUserRecord in dictUserInfo.items():
                strUserName = tupleUserRecord[0]
                dictUserRecord = tupleUserRecord[1]
                if dictUserRecord.get('reserved') == False:
                    dictUserDetails = {
                        "hash": dictUserRecord.get('hash'),
                        "opendistro_security_roles": dictUserRecord.get('opendistro_security_roles'),
                        "backend_roles": dictUserRecord.get('backend_roles'),
                        "attributes": dictUserRecord.get('attributes')
                    }
                    if dictUserRecord.get('description') is not None:
                        dictUserDetails["description"] = dictUserRecord.get('description')
                    reqCreateUser = requests.put(f'https://opensearch.example.com:9200/_plugins/_security/api/internalusers/{strUserName}', json=dictUserDetails, auth=HTTPBasicAuth(strOSAdminUser, strOSAdminPass), verify=False)
                    print(reqCreateUser.text)
else:
    print(r.status_code)