Category: System Administration

(Not) Finding the Rygel Log File

We’ve spent a lot of time trying to get a log file from the rygel server. Setting the log level in the config file didn’t seem to do anything useful, and we couldn’t find a log file anywhere. The only output we were able to get came from running the binary from the command line. Where is that log file?!? Hey, there isn’t one. All of the log level settings control what’s written to STDOUT and STDERR. You can either modify the unit file to tee the output off to a file or run rygel from the command line.

To get debugging output, use

G_MESSAGES_DEBUG=all rygel -g 5

To tee the output off to a file,
rygel -g *:5 2>&1 | tee -a /path/to/rygel.log
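
If you’d rather not babysit a terminal, the logging can be pushed into the systemd unit instead. A minimal sketch of a drop-in, assuming rygel runs as a service named rygel.service and systemd is new enough to support the append: output target:

# Hypothetical drop-in: /etc/systemd/system/rygel.service.d/logging.conf
[Service]
Environment=G_MESSAGES_DEBUG=all
StandardOutput=append:/var/log/rygel.log
StandardError=append:/var/log/rygel.log

Run systemctl daemon-reload and restart the service to pick up the change.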

 

Brinqa Remediation – mDNS

Some systems were found to be responding to mDNS requests (5353/udp). The Linux hosts were running the avahi-daemon, which provides this service. As the auto-discovery service is not used for service identification, the avahi-daemon was disabled.

Confirm response is seen on 5353/udp prior to change:

nmap -P0 -p 5353 -sU hostname.example.net

SSH to the host identified as responding to mDNS requests. Disable the avahi-daemon so it does not start at boot, then stop the running service:

systemctl disable avahi-daemon
systemctl stop avahi-daemon
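
On distributions where avahi is socket-activated, the companion socket unit can start the daemon back up on demand, so it may be worth disabling that as well (the unit name assumes the stock packaging):

systemctl disable --now avahi-daemon.socket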

 

Verify that 5353/udp is no longer open by repeating the nmap command.

Fin.

Useful Bash Commands

Viewing Log Files

Tailing the File

When the same file name is reused as logs are rotated (i.e., app.log is renamed to app.yyyymmdd.log and a new app.log is created), use the -F flag to follow the file name instead of the file descriptor:

tail -F /var/log/app.log

Tailing with Filtering

When you are looking for something specific in the log file, it often helps to run the log output through grep. This example watches a sendmail log for communication with the host 10.5.5.5:

tail -F /var/log/maillog | grep "10.5.5.5"

Handling Log Files with Date Specific Naming

I alias out commands for viewing commonly read log files. This is easy enough when the current log file is always /var/log/application/content.log, but some active log files have date components in the file name. As an example, our PostgreSQL servers include the short day-of-week string in the log file name. Use command substitution to pull the date-specific element from the date executable; here, I tail a file named postgresql-Tue.log on Tuesday. Since the log rotates to a new file name each day, tail -F doesn’t buy you anything here: you’ll still need to ctrl-c the tail and restart it for the next day.

tail -f /pgdata/log/postgresql-$(date +%a).log
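
One way to avoid retyping the date-specific name is to wrap the tail in a small shell function rather than an alias, so the $(date +%a) piece is re-evaluated every time you run it. A minimal sketch (the function name is made up):

# Re-evaluates the day-of-week each time it is invoked
pgtail() {
    tail -f "/pgdata/log/postgresql-$(date +%a).log"
}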

Apache HTTPD and DER Encoded Certificate

We are in the process of updating one of the web servers at work to a newer OS, along with newer Apache HTTPD and PHP iterations. We ran into a snag just setting up the SSL web site: we couldn’t get HTTPD started with our Venafi certificate.

[Fri Jan 28 14:35:05.092086 2022] [ssl:emerg] [pid 57739:tid 139948816931136] AH02561: Failed to configure certificate hostname.example.com:443:0, check /path/to/certs/production/server.crt

[Fri Jan 28 14:35:05.092103 2022] [ssl:emerg] [pid 57739:tid 139948816931136] SSL Library Error: error:0909006C:PEM routines:get_name:no start line (Expecting: CERTIFICATE) — Bad file contents or format – or even just a forgotten SSLCertificateKeyFile?

[Fri Jan 28 14:35:05.092115 2022] [ssl:emerg] [pid 57739:tid 139948816931136] SSL Library Error: error:140AD009:SSL routines:SSL_CTX_use_certificate_file:PEM lib

The certificate was DER encoded – that’s not what I use, but it was working on the old server.

I think something changed between httpd-2.4.6-97 and httpd-2.4.37-43 that stopped DER encoded certificates from working. Rather than figure out some way to coerce HTTPD into using a DER file I don’t particularly care to keep, I just used a quick command to print the base-64 version of the certificate, copied the header, footer, and everything in between, and made a base-64 encoded certificate file.

openssl x509 -inform DER -in server.crt | openssl x509 -text
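
If you’d rather write the PEM file out directly instead of copying it from the terminal, the same conversion can go straight to disk (the output file name is just a placeholder):

openssl x509 -inform DER -in server.crt -out server.pem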

And, voila, we’ve got a web server.

 

Squid Custom Error

We’ve been having a challenge with Anya getting her school work completed. Part of the problem is the school’s own fault — they provide a site where kids are encouraged to read, but don’t provide any way to ensure this reading is done after classwork has been completed. But, even if that site didn’t exist … the Internet has all sorts of fun ways you can find to waste some time.

So her computer now routes through my proxy server. I’d set up a squid server so *I* could use the Internet unfettered whilst VPN’d in to work. It’s really annoying to get told you’re a naughty hacker every time you want to see some code example on StackOverflow!

While I didn’t really care about the default messages for my own use (nor did I actually block anything where it would matter), I want Anya to be able to differentiate between “the site didn’t load because of a technical problem” and “the site didn’t load because you are not allowed to be using it right now.” So I customized the Squid error message for access denied. This can be done quickly by editing /usr/share/squid/errors/en-us/ERR_ACCESS_DENIED (make a backup of your version, and you may need to put your customized file back in place after upgrading squid in the future).
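
For reference, a minimal sketch of what the replacement page can look like; the wording is entirely up to you, and %U is (I believe) the Squid error-page macro that expands to the requested URL:

<html><head><title>Not right now</title></head>
<body>
<h1>This site is blocked right now</h1>
<p>The proxy is working fine; you just aren't allowed to visit %U at the moment.</p>
</body></html>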

 

Bash – Spaces, Quotes, and String Replacement

Had to figure out how to do string replacement (Scott wanted to convert WMA files to similarly named MP3 files) and pass a single parameter that has spaces into a shell script.

${original_string/thing_to_find/replacement_text} does the string replacement. And "$@" (quoted) expands to the full parameter set without word splitting, so an argument containing spaces arrives in the script intact. This wouldn’t work as written if we were trying to pass in more than one parameter (well, it *would* … I’d just have to handle the parameter expansion in the script myself).
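
Putting the two together, a minimal sketch of the conversion script (using ffmpeg as the converter is my assumption; any WMA-to-MP3 tool would slot in the same way):

#!/bin/bash
# Hedged sketch: convert one WMA file, passed as a single (possibly space-containing)
# argument, to an MP3 with the same base name. Quoting keeps the file name intact.
input="$1"
output="${input/.wma/.mp3}"
ffmpeg -i "$input" "$output"

Invoke it with the file name quoted, e.g. ./wma2mp3.sh "Some Song Name.wma"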

 

SSO In Apache HTTPD – OAuth2

PingID is another external authentication source that looks to be replacing ADFS at work in the not-too-distant future. Unfortunately, I’ve not been able to get anyone to set up the “other side” of this authentication method … so the documentation is untested. There is an Apache Integration Kit available from PingID (https://www.pingidentity.com/en/resources/downloads/pingfederate.html). Documentation for setup is located at https://docs.pingidentity.com/bundle/pingfederate-apache-linux-ik/page/kxu1563994990311.html

Alternately, you can use OAuth2 through Apache HTTPD to authenticate users against PingID. To set up OAuth, you’ll need the mod_auth_openidc module (this is also available from the RedHat dnf repository). You’ll also need the client ID and secret that make up the OAuth2 client credentials. The full set of configuration parameters used in /etc/httpd/conf.d/auth_openidc.conf (or added to individual site-httpd.conf files) can be found at https://github.com/zmartzone/mod_auth_openidc/blob/master/auth_openidc.conf

As I am not able to register to use PingID, I am using an alternate OAuth2 provider for authentication. The general idea should be the same for PingID: get the metadata URL, client ID, and secret added to the OIDC configuration.

Setting up Google OAuth Client:

Register OAuth on Google Cloud Platform (https://console.cloud.google.com/) – under “APIs & Services”, select “OAuth Consent Screen”. Build a testing app – you can use URLs that don’t go anywhere interesting, but if you want to publish the app for real usage, you’ll need real stuff.

Under “APIs & Services”, select “Credentials”. Select “Create Credentials” and select “OAuth Client ID”.

Select the application type “Web application” and provide a name for the connection

You don’t need any authorized JavaScript origins. Add the authorized redirect URI(s) appropriate for your host. In this case, the internal URI is my docker host on an off port (7443); the generally used URI is my reverse proxy server. I’ve had redirect URI mismatch errors when the authorized URIs don’t include both the version with and the version without the trailing slash, so add each URI both ways. Click “Create” to complete the operation.
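
As an example, the authorized redirect URIs for the setup described below would look something like this (each listed both with and without the trailing slash):

https://www.rushworth.us/protected
https://www.rushworth.us/protected/
https://docker.rushworth.us:7443/protected
https://docker.rushworth.us:7443/protected/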

You’ll see a client ID and secret – stash those as we’ll need to drop them into the openidc config file. Click “OK” and we’re ready to set up the web server.

Setting Up Apache HTTPD to use mod_auth_openidc

Clone the mod_auth_openidc repo (https://github.com/zmartzone/mod_auth_openidc.git) – I made one change to the Dockerfile. I’ve seen general guidance that using ENV to set DEBIAN_FRONTEND to noninteractive is not ideal, so I replaced that line with the transient form of the directive:

ARG DEBIAN_FRONTEND=noninteractive

I also changed the Dockerfile line that creates index.php to:

RUN echo "<html><head><title>Sample OAUTH Site</title><head><body><?php print $_SERVER['OIDC_CLAIM_email'] ; ?><pre><?php print_r(array_map(\"htmlentities\", apache_request_headers())); ?></pre><a href=\"/protected/?logout=https%3A%2F%2Fwww.rushworth.us%2Floggedout.html\">Logout</a></body></html>" > /var/www/html/protected/index.php

Build an image:

docker build -t openidc:latest .

Create an openidc.conf file on your file system. We’ll bind this file into the container so our config is in place instead of the default one. In my example, I have created “/opt/openidc.conf”. The file content is included below (although you’ll need to use your own client ID, secret, and hostname). I’ve added a few claims so we have access to the name and email address (the email address is the logon ID).

Then run a container using the image. My sandbox is fronted by a reverse proxy, so the port used doesn’t have to be well known.

docker run --name openidc -p 7443:443 -v /opt/openidc.conf:/etc/apache2/conf-available/openidc.conf -it openidc /bin/bash -c "source /etc/apache2/envvars && valgrind --leak-check=full /usr/sbin/apache2 -X"

* In my case, the docker host is not publicly available, so I’ve also added the following lines to the reverse proxy at www.rushworth.us:

ProxyPass /protected https://docker.rushworth.us:7443/protected
ProxyPassReverse /protected https://docker.rushworth.us:7443/protected

Access https://www.rushworth.us/protected/index.php (I haven’t published my app for Google’s review, so it’s locked down to use by registered accounts only … at this time, that’s only my ID. I can register others too.) You’ll be bounced over to Google to provide authentication, then handed back to my web server.

We can then use the OIDC_CLAIM_email value ($_SERVER['OIDC_CLAIM_email']) to continue with in-application authorization steps (if needed).
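
For example, a hedged sketch of an in-application check against that claim (the allow list and its contents are hypothetical):

<?php
// Hedged sketch: gate the rest of the application on a hypothetical allow list.
$allowed = array('someone@example.com');
if (!in_array($_SERVER['OIDC_CLAIM_email'], $allowed)) {
    http_response_code(403);
    exit('Not authorized');
}
?>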

openidc.conf content:

LogLevel auth_openidc:debug

LoadModule auth_openidc_module /usr/lib/apache2/modules/mod_auth_openidc.so

OIDCSSLValidateServer On

OIDCProviderMetadataURL https://accounts.google.com/.well-known/openid-configuration
OIDCClientID uuid-thing.apps.googleusercontent.com
OIDCClientSecret uuid-thingU4W

OIDCCryptoPassphrase S0m3S3cr3tPhrA53
OIDCRedirectURI https://www.rushworth.us/protected
OIDCAuthNHeader X-LJR-AuthedUser
OIDCScope "openid email profile"

<Location /protected>
     AuthType openid-connect
     Require valid-user
</Location>

OIDCOAuthSSLValidateServer On
OIDCOAuthRemoteUserClaim Username

SSO In Apache HTTPD — ADFS

Active Directory Federated Services (ADFS) can be used by servers inside or outside of the company network. This makes it an especially attractive authentication option for third party companies as no B2B connectivity is required to just authenticate the user base. Many third-party vendors are starting to support ADFS authentication in their out-of-the-box solution (in which case they should be able to provide config documentation), but anything hosted on Apache HTTPD can be configured using these directions:

This configuration uses the https://github.com/UNINETT/mod_auth_mellon module; I’ve built this from the repo. Once mod_auth_mellon is installed, create a directory for the configuration:

mkdir /etc/httpd/mellon

Then cd into the directory and run the config script:

/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh urn:samplesite:site.example.com "https://site.example.com/auth/endpoint/"

 

You will now have three files in the config directory: an XML file along with a cert/key pair. You’ll also need the FederationMetadata.xml from the IT group that manages ADFS; place it in the same configuration directory.

Now configure the module – e.g. a file /etc/httpd/conf.d/20-mellon.conf – with the following:

MellonCacheSize 100
MellonLockFile /var/run/mod_auth_mellon.lock
MellonPostTTL 900
MellonPostSize 1073741824
MellonPostCount 100
MellonPostDirectory "/var/cache/mod_auth_mellon_postdata"

To authenticate users through the ADFS directory, add the following to your site config

MellonEnable "auth"
Require valid-user
AuthType "Mellon"
MellonVariable "cookie" 
MellonSPPrivateKeyFile /etc/httpd/mellon/urn_samplesite_site.example.com.key
MellonSPCertFile /etc/httpd/mellon/urn_samplesite_site.example.com.cert
MellonSPMetadataFile /etc/httpd/mellon/urn_samplesite_site.example.com.xml
MellonIdPMetadataFile /etc/httpd/mellon/FederationMetadata.xml
MellonMergeEnvVars On ":"
MellonEndpointPath /auth/endpoint

 

Provide the XML file and certificate to the IT team that manages ADFS to configure the relying party trust.
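
Once the trust is in place and a user authenticates, mod_auth_mellon exposes the SAML attributes to the application as environment variables: MELLON_NAME_ID for the NameID, plus one MELLON_<attribute> variable per claim released by ADFS. A minimal sketch of reading one in the application (the exact attribute names depend on the claim rules the ADFS team configures):

<?php
// Hedged sketch: the NameID released by ADFS; additional claims show up as MELLON_<attribute>.
print $_SERVER['MELLON_NAME_ID'];
?>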