The SELinux saga continues — need to set a context on the datadir folder
chcon -Rt mysqld_db_t /opt/mariadb
chcon -Ru system_u /opt/mariadb
I had a horrendous time trying to get the Samba share on our new server working. It worked insomuch as I could map a drive to the share … but I couldn’t actually see any files. Increasing the log level (smb.conf)
log level = 10 passdb:5 auth:5
showed that, yeah, I was getting a lot of access denied errors.
[2019/12/14 23:04:53.249959, 10, pid=17854, effective(0, 0), real(0, 0)] ../../source3/smbd/open.c:5438(create_file_unixpath)
create_file_unixpath: NT_STATUS_ACCESS_DENIED
[2019/12/14 23:04:53.249982, 10, pid=17854, effective(0, 0), real(0, 0)] ../../source3/smbd/open.c:5716(create_file_default)
create_file: NT_STATUS_ACCESS_DENIED
[2019/12/14 23:04:53.250012, 3, pid=17854, effective(0, 0), real(0, 0), class=smb2] ../../source3/smbd/smb2_server.c:3254(smbd_smb2_request_error_ex)
smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] || at ../../source3/smbd/smb2_create.c:296
[2019/12/14 23:04:53.250038, 10, pid=17854, effective(0, 0), real(0, 0), class=smb2] ../../source3/smbd/smb2_server.c:3142(smbd_smb2_request_done_ex)
smbd_smb2_request_done_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] body[8] dyn[yes:1] at ../../source3/smbd/smb2_server.c:3304
Many, many iterations of samba configs later, I wondered if SELinux was causing a problem. Temporarily disabling SELinux allowed files to be seen in the mapped drive … so that was the problem. I needed to tweak the SELinux settings to allow Samba to actually share files.
semanage fcontext -a -t samba_share_t "/data(/.*)?"
restorecon -R /data
The semanage command only records the labeling rule; restorecon applies it to files that already exist. And:
setsebool -P samba_export_all_rw=1
A few system updates ago, PHP fell over completely because of some multi-processing module. The quick fix was to change the multi-processing module and avoid having to figure out what changed and how to use php-fpm. Part of moving my VMs to the new server, though, is cleaning up anything I’ve patched together as a quick fix. And, supposedly, php-fpm is a lot faster than the old-school Apache handler. Switching was a lot less involved than I had expected.
Install php-fpm:
dnf install php-fpm
Edit 00-mpm.conf
My quick fix was to switch to a non-default multi-processing module. That change is reverted to re-enable the ‘event’ module
vim /etc/httpd/conf.modules.d/00-mpm.conf
Configure Apache PHP Module
Verify the socket name used in /etc/php-fpm.d/ — Fedora is configured from /etc/php-fpm.d/www.conf with a socket at /var/run/php-fpm/www.sock
cp /etc/httpd/conf.modules.d/15-php.conf /etc/httpd/conf.modules.d/15-php.conf.orig
vi /etc/httpd/conf.modules.d/15-php.conf
# Handle files with .php extension using PHP interpreter
# Proxy declaration
<Proxy "unix:/var/run/php-fpm/www.sock|fcgi://php-fpm">
ProxySet disablereuse=off
</Proxy>
# Redirect to the proxy
<FilesMatch \.php$>
SetHandler proxy:fcgi://php-fpm
</FilesMatch>
#
# Allow php to handle Multiviews
#
AddType text/html .php
#
# Add index.php to the list of files that will be served as directory
# indexes.
#
DirectoryIndex index.php
Enable php-fpm to auto-start, start php-fpm, and restart Apache
systemctl enable php-fpm
systemctl start php-fpm
systemctl restart httpd
Voila — phpinfo() confirms that I am using FPM/FastCGI
We’ll see if this actually does anything to improve performance!
Instead of trying to map individual ports over to guest OS’s, I am just routing traffic to the VM bridge from the host.
Testing to ensure it works:
systemctl start firewalld
firewall-cmd --direct --passthrough ipv4 -I FORWARD -i br5 -j ACCEPT
firewall-cmd --direct --passthrough ipv4 -I FORWARD -o br5 -j ACCEPT
Runtime direct rules take effect immediately; running firewall-cmd --reload at this point would discard them, since a reload restores the permanent configuration.
Permanent setup:
systemctl enable firewalld
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i br5 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -o br5 -j ACCEPT
firewall-cmd --reload
Then I just added a static route for the network defined on br5 to the VM host.
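On the other LAN hosts, that static route is a single command; the next-hop address below is a hypothetical placeholder for the VM host's LAN IP:

```
# Route the bridge network (10.1.2.0/24, from the br5 definition) via the VM host.
# 192.168.1.50 is a placeholder for the VM host's actual LAN address.
ip route add 10.1.2.0/24 via 192.168.1.50
```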
We finally got a new server, and I’m starting to migrate our servers to the new box. We currently have a Windows virtualization platform (Hyper-V) — Windows Data Center edition was supposed to provide unlimited licenses for standard servers running on the host, so it seemed like a great deal. Except “all of the Windows servers” turned out to be, well, one. So we decided to use Fedora on the host. Worst case, that would mean re-installing a few servers. But I wanted to try converting the existing Hyper-V VMs.
Install libvirt and associated packages:
dnf -y install bridge-utils libvirt virt-install qemu-kvm virt-top libguestfs-tools qemu-img virt-manager
Start libvirtd and set it to auto-start on boot:
systemctl start libvirtd
systemctl enable libvirtd
Create an XML file with the definition for a new bridge:
[root@localhost ~]# cat br5.xml
<network>
<name>br5</name>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='br5' stp='on' delay='0'/>
<ip address='10.1.2.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.1.2.200' end='10.1.2.250'/>
</dhcp>
</ip>
</network>
Build a new bridge from this definition and set it to auto-start on boot:
[root@localhost ~]# virsh net-define br5.xml
Network br5 defined from br5.xml
[root@localhost ~]# virsh net-autostart br5
Network br5 marked as autostarted
Verify the network is running and set to auto-start
[root@localhost ~]# virsh net-list --all
 Name   State    Autostart   Persistent
----------------------------------------
 br5    active   yes         yes
View the IP address associated with the bridge:
[root@localhost ~]# ip addr show dev br5
5: br5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:33:3f:0c brd ff:ff:ff:ff:ff:ff
inet 10.1.2.1/24 brd 10.1.2.255 scope global br5
valid_lft forever preferred_lft forever
Copy the VHDX from Hyper-V to the Linux host and convert it to a qcow2 image:
qemu-img convert -O qcow2 fedora02.vhdx fedora02.qcow2
If needed, run virt-sysprep to clean up system SSH host keys, persistent network MAC configuration, and user accounts.
virt-sysprep -a fedora02.qcow2
When finished, use virt-manager to create a host by importing an existing HDD. Provided the drive type remains the same (SATA, in my case), the server boots right up.
Netgear provides instructions for using TFTP to write firmware to a basically bricked router (it boots into a recovery mode, indicated by a flashing power light). The instructions are, unfortunately, specific to Windows. To use a Linux computer to recover the router:
(1) Plug your computer into the router & unplug everything else, as in the instructions. Hard-code an IP address. Then verify that the router shows up in your arp table:
arp -a
If the router does not appear, add it — you’ll need to get the device MAC address from the sticker on the back of the device.
arp -s 192.168.1.1 ??-??-??-??-??-??
(2) If you don’t already have a TFTP client, install one. Once you have a client, follow the instructions to get the router into recovery mode. On the Linux computer, run “tftp 192.168.1.1”
You’ll be in a TFTP console. Type binary and hit enter to set the transfer mode to binary. Then use put /path/to/file.name to upload the firmware file to the device. Wait and proceed with device setup.
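The same console session can be scripted as a heredoc instead of typing the commands interactively (the firmware path is left as the placeholder from above):

```
tftp 192.168.1.1 <<EOF
binary
put /path/to/file.name
quit
EOF
```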
We occasionally have to re-home our shell scripts, which means updating any static path values used within scripts. It’s quick enough to build a sed script to convert /old/server/path to /new/server/path, but it’s still extra work.
The dirname command works to provide a dynamic path value, provided you use the fully qualified path to run the script … but it fails spectacularly when someone runs ./scriptFile.sh and you’re trying to use that path in, say, EXTRA_JAVA_OPTS. The “path” is just . — and Java doesn’t have any idea what to do with “-Xbootclasspath/a:./more/path/goes/here.jar”
Voila, realpath gives you the fully qualified file path for /new/server/path/scriptFile.sh, ./scriptFile.sh, or even bash scriptFile.sh … and the dirname of a realpath is the fully qualified path where scriptFile.sh resides:
#!/bin/bash
DIRNAME=$(dirname "$(realpath "$0")")
echo ${DIRNAME}
Hopefully next time we’ve got to re-home our batch jobs, it will be a simple scp & sed the old crontab content to use the new paths.
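As a sanity check, a throwaway copy of the snippet can be invoked all three ways and should print the same fully qualified directory each time (the /tmp path here is just for the demonstration):

```shell
# Create a scratch copy of the script above (path is an example)
mkdir -p /tmp/realpath-demo
cat > /tmp/realpath-demo/whereami.sh <<'EOF'
#!/bin/bash
DIRNAME=$(dirname "$(realpath "$0")")
echo "${DIRNAME}"
EOF
chmod +x /tmp/realpath-demo/whereami.sh

# All three invocation styles resolve to the same directory
/tmp/realpath-demo/whereami.sh
( cd /tmp/realpath-demo && ./whereami.sh )
bash /tmp/realpath-demo/whereami.sh
```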
One of the web servers at work uses a refspec in the “git pull” command to map the remote development branch to the local remote-tracking master branch. This is fairly confusing (and it looks like the dev server is using the master branch unless you dig into how the pull is performed), but I can see how this prevents someone from accidentally typing something like “git checkout master” and really messing up the development environment. I can also see a dozen ways someone can issue what is a completely reasonable git command 99% of the time and really mess up the development environment.
While it is simple enough to just checkout the development branch, doing so does open us up to the possibility that someone will erroneously deliver the production code to the development server and halt all testing. While you cannot create shell aliases for multi-word commands (more accurately, only the first word of a simple command is checked for alias expansion … so a multi-word alias will never match), you can define a function to intercept git commands and avoid running unwanted commands:
function git() {
case $* in
"checkout master" ) command echo "This is a dev server, do not checkout the master branch!" ;;
"pull origin master" ) command echo "This is a dev server, do not pull the master branch" ;;
* ) command git "$@" ;;
esac
}
Or define the desired commands and avoid running any others:
function git(){
if echo "$@" | grep -Eq '^checkout uat$'; then
command git "$@"
elif echo "$@" | grep -Eq '^pull .+ uat$'; then
command git "$@"
else
echo "The command $@ needs to be whitelisted before it can be run"
fi
}
Either approach mitigates the risk of someone incorrectly using the master branch on the development server.
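Either wrapper can be exercised without touching a repository, since blocked commands never reach the real git. A minimal re-declaration of the first wrapper shows the intercept:

```shell
# Minimal version of the wrapper above: intercept one dangerous command
function git() {
  case $* in
    "checkout master" ) echo "This is a dev server, do not checkout the master branch!" ;;
    * ) command git "$@" ;;
  esac
}

# Prints the warning instead of switching branches
git checkout master
```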
I needed to send email messages from a PHP form, and the web server at work uses Postfix. So … I’m getting Postfix set up to relay mail for the first time in a decade or two 🙂 I thought I’d just have to edit /etc/postfix/main.cf and add “relayhost = [something.example.com]”.
Nope. The service fails to start with nothing particularly indicative — just a [FAILED] status from the init script. Attempting to start Postfix outside of the init script is far more informative:
[lisa@564240601ac2 init.d]# /usr/sbin/postfix start
postfix: fatal: parameter inet_interfaces: no local interface found for ::1
Turns out I’ve got to edit /etc/postfix/main.cf and tell it to use IPv4 only:
# Enable IPv4, and IPv6 if supported
inet_protocols = ipv4
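Putting both changes together, the relevant /etc/postfix/main.cf fragment ends up as:

```
relayhost = [something.example.com]
# Enable IPv4, and IPv6 if supported
inet_protocols = ipv4
```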
I finally put together a script that gathers some basic information (hostname & SAN’s) and creates a certificate signed against my CA. I’ve got a base myssl.cnf file that ends with
[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
The script appends all of the alternate names to the myssl.cnf file.
#!/bin/bash
RED_DARK='\033[38;5;196m'
GREEN_DARK='\033[38;5;35m'
BLUE_DARK='\033[38;5;57m'
NC='\033[0m' # Reset
function getInput {
echo -e "${BLUE_DARK}Please input the short hostname you wish to use (e.g. server123):${NC}"
read HOST
echo -e "${BLUE_DARK}Please input the domain name you wish to use with this hostname (e.g. rushworth.us):${NC}"
read DOMAIN
echo -e "${GREEN_DARK}Please enter any SAN values for this certificate, separated by spaces (must be fully qualified):${NC}"
read SANS
FQHOST="${HOST}.${DOMAIN}"
echo -e "Short hostname: $HOST"
echo -e "Fully qualified hostname: $FQHOST"
echo -e "SAN: $SANS"
echo -e "${RED_DARK}Is this correct? (Y/N):${NC}"
read boolCorrect
if [ "$boolCorrect" == 'Y' ] || [ "$boolCorrect" == 'y' ]
then
mkdir $HOST
echo $HOST
cp myssl.cnf "./$HOST/myssl.cnf"
cd "./$HOST"
echo "The following SANs will be used on this certificate: "
echo "DNS.1 = ${FQHOST}"
echo "DNS.1 = ${FQHOST}" >> ./myssl.cnf
echo "DNS.2 = ${HOST}"
echo "DNS.2 = ${HOST}" >> ./myssl.cnf
if [ -n "$SANS" ]
then
SANARRAY=( $SANS )
iSANCounter=2
for SANITEM in "${SANARRAY[@]}" ; do
let iSANCounter=iSANCounter+1
echo "DNS.${iSANCounter} = ${SANITEM}"
echo "DNS.${iSANCounter} = ${SANITEM}" >> ./myssl.cnf
done
fi
export strCertKeyPassword=Wh1t2v2rP144w9rd
export strPFXPassword=123abc456
openssl genrsa -passout env:strCertKeyPassword -aes256 -out $FQHOST.passwd.key 2048
openssl req -new -key $FQHOST.passwd.key -passin env:strCertKeyPassword -config ./myssl.cnf -reqexts req_ext -out $FQHOST.csr -subj "/C=US/ST=Ohio/L=Cleveland/O=Rushworth/OU=Home/CN=$FQHOST"
openssl x509 -req -in $FQHOST.csr -passin env:strCertKeyPassword -extensions req_ext -extfile ./myssl.cnf -out $FQHOST.cer -days 365 -CA /ca/ca.cer -CAkey /ca/ca.key -CAcreateserial -sha256
openssl rsa -in $FQHOST.passwd.key -out $FQHOST.key -passin env:strCertKeyPassword
openssl pkcs12 -export -out $FQHOST.pfx -inkey $FQHOST.key -in $FQHOST.cer -passout env:strPFXPassword
else
getInput
fi
}
getInput
There’s an encrypted private key and a non-encrypted private key. Because I have some Windows servers — Exchange and Active Directory — I create a PFX file too.
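To confirm the SANs actually landed in an issued certificate, openssl can print the extension. This sketch generates a throwaway self-signed certificate (reusing the example names from above) rather than the script's output, but the same x509 inspection works on any .cer file:

```shell
# Throwaway self-signed certificate with SAN entries (example names only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=server123.rushworth.us" \
  -addext "subjectAltName=DNS:server123.rushworth.us,DNS:server123" \
  -keyout /tmp/demo.key -out /tmp/demo.cer 2>/dev/null

# Print the Subject Alternative Name extension to verify the DNS entries
openssl x509 -in /tmp/demo.cer -noout -text | grep -A1 'Subject Alternative Name'
```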