Category: System Administration

External Access to libvirt VMs

Instead of trying to map individual ports over to guest OSes, I am just routing traffic to the VM bridge from the host.

Testing to ensure it works:

systemctl start firewalld
firewall-cmd --direct --passthrough ipv4 -I FORWARD -i br5 -j ACCEPT
firewall-cmd --direct --passthrough ipv4 -I FORWARD -o br5 -j ACCEPT

Permanent setup:

systemctl enable firewalld
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i br5 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -o br5 -j ACCEPT
firewall-cmd --reload

Then I just added a static route for the network defined on br5, with the VM host as the next hop.
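
For example, on another Linux box on the LAN, the route would look something like this (a sketch: it assumes the br5 network is 10.1.2.0/24 and that 192.168.1.50 is the VM host's LAN address — substitute your own values):

ip route add 10.1.2.0/24 via 192.168.1.50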

Migrating from Hyper-V to libvirt

We finally got a new server, and I’m starting to migrate our servers to the new box. We currently have a Windows virtualization platform (Hyper-V) — Windows Data Center edition was supposed to provide unlimited licenses for standard servers running on the host, so it seemed like a great deal. Except “all of the Windows servers” turned out to be, well, one. So we decided to use Fedora on the host. Worst case, that would mean re-installing a few servers. But I wanted to try converting the existing Hyper-V VMs.

Install libvirt and associated packages:

dnf -y install bridge-utils libvirt virt-install qemu-kvm virt-top libguestfs-tools qemu-img virt-manager

Start libvirtd and set it to auto-start on boot:

systemctl start libvirtd
systemctl enable libvirtd

Create an XML file with the definition for a new bridge:

[root@localhost ~]# cat br5.xml

<network>
  <name>br5</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='br5' stp='on' delay='0'/>
  <ip address='10.1.2.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.1.2.200' end='10.1.2.250'/>
    </dhcp>
  </ip>
</network>

Build a new bridge from this definition, start it, and set it to auto-start on boot:

[root@localhost ~]# virsh net-define br5.xml
Network br5 defined from br5.xml

[root@localhost ~]# virsh net-start br5
Network br5 started

[root@localhost ~]# virsh net-autostart br5
Network br5 marked as autostarted

Verify the network is running and set to auto-start:

[root@localhost ~]# virsh net-list --all
 Name   State    Autostart   Persistent
----------------------------------------
 br5    active   yes         yes

View the IP address associated with the bridge:

[root@localhost ~]# ip addr show dev br5
5: br5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:33:3f:0c brd ff:ff:ff:ff:ff:ff
inet 10.1.2.1/24 brd 10.1.2.255 scope global br5
valid_lft forever preferred_lft forever

Copy the VHDX from Hyper-V to the Linux host and convert it to a qcow2 image:

qemu-img convert -O qcow2 fedora02.vhdx fedora02.qcow2
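
A quick sanity check on the converted image (optional; this just reads the qcow2 header and reports the format and size):

qemu-img info fedora02.qcow2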

If needed, run virt-sysprep to clean up SSH host keys and persistent network MAC configuration, and to remove user accounts:

virt-sysprep -a fedora02.qcow2

When finished, use virt-manager to create a new VM by importing the existing disk image. Provided the drive type remains the same (SATA, in my case), the server boots right up.
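
If you prefer the command line, virt-install can do the same import. A sketch — the memory, vCPU count, and OS variant here are illustrative values, not requirements; the network is the br5 bridge defined above:

virt-install --name fedora02 --memory 4096 --vcpus 2 \
    --disk path=fedora02.qcow2,bus=sata --import \
    --network network=br5 --os-variant generic --noautoconsole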

Recovering a bricked Netgear router

Netgear provides instructions for using TFTP to write firmware to a basically bricked router (it boots into a recovery mode, indicated by a flashing power light). The instructions are, unfortunately, specific to Windows. To use a Linux computer to recover the router:

(1) Plug your computer into the router & unplug everything else, as in the instructions. Hard-code an IP address on your NIC (anything else in the router’s 192.168.1.0/24 subnet). Then verify that the router shows up in your arp table:

arp -a

If the router does not appear, add it — you’ll need to get the device MAC address from the sticker on the back of the device.

arp -s 192.168.1.1 ??-??-??-??-??-??

(2) If you don’t already have a TFTP client, install one. Once you have a client, follow the instructions to get the router into recovery mode. On the Linux computer, run “tftp 192.168.1.1”

You’ll be in a TFTP console. Type “binary” and hit enter to set the transfer mode to binary. Then use “put /path/to/file.name” to upload the firmware file to the device. Wait for the transfer to complete and the router to flash the firmware and reboot, then proceed with device setup.
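
The whole exchange looks something like this (the firmware filename and transfer statistics are illustrative; use the file you downloaded from Netgear):

tftp 192.168.1.1
tftp> binary
tftp> put ./router-firmware.chk
Sent 28966542 bytes in 29.7 seconds
tftp> quit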

 

Shell Script: Path To Script

We occasionally have to re-home our shell scripts, which means updating any static path values used within scripts. It’s quick enough to build a sed script to convert /old/server/path to /new/server/path, but it’s still extra work.

The dirname command works to provide a dynamic path value, provided you use the fully qualified path to run the script … but it fails spectacularly when someone runs ./scriptFile.sh and you’re trying to use that path in, say, EXTRA_JAVA_OPTS. The “path” is just . — and Java doesn’t have any idea what to do with “-Xbootclasspath/a:./more/path/goes/here.jar”.

Voila, realpath gives you the fully qualified file path for /new/server/path/scriptFile.sh, ./scriptFile.sh, or even bash scriptFile.sh … and the dirname of a realpath is the fully qualified path where scriptFile.sh resides:

#!/bin/bash
# Fully qualified directory containing this script, no matter how it was invoked
DIRNAME=$(dirname "$(realpath "$0")")
echo "${DIRNAME}"

Hopefully next time we’ve got to re-home our batch jobs, it will be a simple scp & sed the old crontab content to use the new paths.
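
Something like this should do it (a sketch; /old/server/path and /new/server/path stand in for the real locations):

crontab -l | sed 's|/old/server/path|/new/server/path|g' | crontab -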

Preventing erroneous use of the master branch on development servers

One of the web servers at work uses a refspec in the “git pull” command to map the remote development branch to the local remote-tracking master branch. This is fairly confusing (and it looks like the dev server is using the master branch unless you dig into how the pull is performed), but I can see how this prevents someone from accidentally typing something like “git checkout master” and really messing up the development environment. I can also see a dozen ways someone can issue what is a completely reasonable git command 99% of the time and really mess up the development environment.

While it is simple enough to just check out the development branch, doing so does open us up to the possibility that someone will erroneously deliver the production code to the development server and halt all testing. While you cannot create shell aliases for multi-word commands (or, more accurately, alias expansion only checks the first word of a simple command … so a multi-word command will never match), you can define a function to intercept git commands and avoid running unwanted ones:

function git() {
     case $* in
         "checkout master" ) echo "This is a dev server, do not checkout the master branch!" ;;
         "pull origin master" ) echo "This is a dev server, do not pull the master branch" ;;
         * ) command git "$@" ;;
     esac
}

Or define the desired commands and avoid running any others:

function git() {
     if echo "$@" | grep -Eq '^checkout uat$'; then
          command git "$@"
     elif echo "$@" | grep -Eq '^pull .+ uat$'; then
          command git "$@"
     else
          echo "The command git $* needs to be whitelisted before it can be run"
     fi
}
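
With the function defined in the shell profile, disallowed commands are intercepted while whitelisted ones pass through to the real git. A sample session against the whitelist version (output illustrative):

$ git checkout master
The command git checkout master needs to be whitelisted before it can be run
$ git checkout uat
Switched to branch 'uat'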

Either approach mitigates the risk of someone incorrectly using the master branch on the development server.

Postfix IPv6 Loopback Failure

I needed to send email messages from a PHP form, and the web server at work uses Postfix. So … I’m getting Postfix set up to relay mail for the first time in a decade or two 🙂 I thought I’d just have to edit /etc/postfix/main.cf and add “relayhost = [something.example.com]”.

Nope. The service fails to start with nothing particularly indicative — just a [FAILED] status from the init script. Attempting to start Postfix outside of the init script is far more informative:

[lisa@564240601ac2 init.d]# /usr/sbin/postfix start
postfix: fatal: parameter inet_interfaces: no local interface found for ::1

Turns out I’ve got to edit /etc/postfix/main.cf and tell it to use IPv4 only:

# Use IPv4 only; there is no usable IPv6 loopback here
inet_protocols = ipv4
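
The same change can be made non-interactively with postconf, which rewrites main.cf for you; then start the service as before:

postconf -e 'inet_protocols = ipv4'
/usr/sbin/postfix start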

 

Certificate Generation Script

I finally put together a script that gathers some basic information (hostname & SANs) and creates a certificate signed against my CA. I’ve got a base myssl.cnf file that ends with:

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]

The script appends all of the alternate names to the myssl.cnf file.

#!/bin/bash

RED_DARK='\033[38;5;196m'
GREEN_DARK='\033[38;5;35m'
BLUE_DARK='\033[38;5;57m'
NC='\033[0m' # Reset

function getInput {
        echo -e "${BLUE_DARK}Please input the short hostname you wish to use (e.g. server123):${NC}"
        read HOST

        echo -e "${BLUE_DARK}Please input the domain name you wish to use with this hostname (e.g. rushworth.us):${NC}"
        read DOMAIN

        echo -e "${GREEN_DARK}Please enter any SAN values for this certificate, separated by spaces (must be fully qualified):${NC}"
        read SANS

        FQHOST="${HOST}.${DOMAIN}"

        echo -e "Short hostname: $HOST"
        echo -e "Fully qualified hostname: $FQHOST"
        echo -e "SAN: $SANS"

        echo -e "${RED_DARK}Is this correct? (Y/N):${NC}"
        read boolCorrect

        if [ "$boolCorrect" == 'Y' ] || [ "$boolCorrect" == 'y' ]
        then
                mkdir $HOST
                echo $HOST
                cp myssl.cnf "./$HOST/myssl.cnf"

                cd "./$HOST"

                echo "The following SANs will be used on this certificate: "
                echo "DNS.1 = ${FQHOST}"
                echo "DNS.1 = ${FQHOST}" >> ./myssl.cnf
                echo "DNS.2 = ${HOST}"
                echo "DNS.2 = ${HOST}" >> ./myssl.cnf

                if [ -n "$SANS" ]
                then
                        SANARRAY=( $SANS )
                        iSANCounter=2
                        for SANITEM in "${SANARRAY[@]}" ; do
                                let iSANCounter=iSANCounter+1
                                echo "DNS.${iSANCounter} = ${SANITEM}"
                                echo "DNS.${iSANCounter} = ${SANITEM}" >> ./myssl.cnf
                        done
                fi
                export strCertKeyPassword=Wh1t2v2rP144w9rd
                export strPFXPassword=123abc456
                openssl genrsa -passout env:strCertKeyPassword -aes256 -out $FQHOST.passwd.key 2048
                openssl req -new -key $FQHOST.passwd.key -passin env:strCertKeyPassword -config ./myssl.cnf -reqexts req_ext -out $FQHOST.csr -subj "/C=US/ST=Ohio/L=Cleveland/O=Rushworth/OU=Home/CN=$FQHOST"
                openssl x509 -req -in $FQHOST.csr -passin env:strCertKeyPassword -extensions req_ext -extfile ./myssl.cnf -out $FQHOST.cer -days 365 -CA /ca/ca.cer -CAkey /ca/ca.key -CAcreateserial -sha256
                openssl rsa -in $FQHOST.passwd.key -out $FQHOST.key -passin env:strCertKeyPassword
                openssl pkcs12 -export -out $FQHOST.pfx -inkey $FQHOST.key -in $FQHOST.cer -passout env:strPFXPassword

        else
                getInput
        fi
}

getInput

There’s an encrypted private key and a non-encrypted private key. Because I have some Windows servers — Exchange and Active Directory — I create a PFX file too.
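
To double-check what got signed (optional; this just decodes and prints the certificate, including the SAN list):

openssl x509 -in $FQHOST.cer -noout -text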

 

GitLab – Using the Docker Executor for Testing

Setting up gitlab-runner to use a Docker executor: you need Docker running on the gitlab-runner host. In my sandbox, I have GitLab running as a Docker container. Instead of installing Docker in Docker, I have mounted the host Docker socket to the GitLab container. You’ll need to add the --privileged flag, and since I’m using Windows … my mount path is funky. But it works.

docker run --detach --hostname gitlab.rushworth.us --publish 443:443 --publish 80:80 --publish 22:22 --name gitlab -v //var/run/docker.sock:/var/run/docker.sock --privileged gitlab/gitlab-ee:latest

Register the runner using “gitlab-runner register”. I always specify the image in my CI YAML file, so the default image is immaterial … but I’ve encountered groups with an image that mirrors the production servers who set that image as the default.

Edit /etc/gitlab-runner/config.toml and change “privileged = false” to true.
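
For reference, the resulting runner definition in config.toml ends up looking roughly like this (the name, URL, token, and default image are illustrative placeholders; the privileged flag is the line that matters):

[[runners]]
  name = "runner-docker"
  url = "https://gitlab.rushworth.us/"
  token = "TOKEN_FROM_REGISTRATION"
  executor = "docker"
  [runners.docker]
    image = "centos:7"
    privileged = true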

Start the runner (gitlab-runner start). In the GitLab Admin Area, navigate to Overview => Runners and select the one we just created. When a project is updated, tags can be used to select an appropriate runner. Because most of my testing is done with the shell executor, the runner which uses the shell executor has no tags and the runner which uses the Docker executor is tagged with “runner-docker”. You can require all jobs to include a tag to select the appropriate runner (which avoids someone accidentally forgetting a tag and having their project processed through the wrong runner).

You’ll also need an image. You can use base images from the Docker Hub registry or create your own. You can add components in the before_script or use a Dockerfile to build an image from the parent image.

Now we’re ready to use the Docker executor! Create your CI YAML file.

If you are not using the default image, start with “image: <the image you want>”.

We don’t want phpunit in the running image, but I use it for testing. Thus, I need a before_script component to install the phpunit package.

If you’ve used a tag to restrict what is run in your Docker-executor based runner, add the appropriate tag. Include the tester command line.

.gitlab-ci.yml:

image: gitlab.rushworth.us:4567/lisa/ljtestproject-dockerexecutor

stages:
  - test

before_script:
  # Install dependencies
  - bash ci/docker_InstallReqs.sh

test_job:
  stage: test
  tags:
    - runner-docker
  script:
    - phpunit --configuration phpunit_myapp.xml

docker_InstallReqs.sh:

#!/bin/bash
yum install -y php-phpunit-PHPUnit

Now when you commit changes to the repository, the Docker-executor based runner will be used for the CI/CD pipeline. A transient Docker container will be created with the image, your before_script will be executed, and then the test script will be run within the container.

 

GitLab – Using the built-in Docker Registry

GitLab has a built-in Docker registry that you can use for projects. With the Omnibus install (or a container based on the official Docker image), enabling the registry is as simple as adding a config line to your gitlab.rb. This assumes you have an SSL key pair at /etc/gitlab/ssl named with the fully qualified hostname, using .crt for the public key and .key for the private key.

registry_external_url 'https://gitlab.example.com:4567'
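
After editing gitlab.rb, apply the change with the standard Omnibus reconfigure. Since my GitLab runs in a container named gitlab (per the docker run above), I run it through docker exec:

docker exec -it gitlab gitlab-ctl reconfigure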

Then just tag an image with the project’s repository URL:

docker tag ossautomation/cent68php56 gitlab.example.com:4567/lisa/ljtestproject-dockerexecutor

Log in and push the image:

D:\git\ljtestproject-dockerexecutor>docker login gitlab.example.com:4567
Username: lisa
Password:
Login Succeeded

D:\git\ljtestproject-dockerexecutor>docker push gitlab.example.com:4567/lisa/ljtestproject-dockerexecutor
The push refers to repository [gitlab.example.com:4567/lisa/ljtestproject-dockerexecutor]
45c3e2f5d139: Pushing [=> ] 33.31MB/1.619GB

GitLab SSH Deployment Setup

Preliminary stuff – before setting up SSH deployment in your pipeline, you’ll need a user on the target box with permission to write to the files being published. You will need a public/private key pair.

On the target server, the project needs to be cloned into the deployment directory. The public key will need to be added to the deployment user’s authorized_keys file (or authorized_keys2 on older versions of Linux) so the private key can be used for authentication.
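
A sketch of the key setup, run from a workstation (deployuser is a placeholder; targetserver.example.com is the target from the variable table below):

ssh-keygen -t rsa -b 4096 -f ./gitlab_deploy_key -N ''
ssh-copy-id -i ./gitlab_deploy_key.pub deployuser@targetserver.example.com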

To set up your GitLab project for SSH-based deployment, you need to add some variables to the project. In the project, navigate to Settings ==> CI/CD

Expand the “Variables” section. You will need to add the following key/value variable pairs:

Key                  Value
SSH_KNOWN_HOSTS      Output of ssh-keyscan targetserver.example.com
SSH_PRIVATE_KEY      Content of your private key
DEPLOYMENT_HOST      Target hostname, e.g. targetserver.example.com
DEPLOYMENT_USER      Username on the target server
DEPLOYMENT_PATH      Path to which the project will be deployed on the target server

Save the variables.

I am managing both a production and development deployment within the pipeline, so I’ve got prod and dev specific variables. We use the same username for prod and dev; but the hostname, path, and target server public key are different.

If your repository is publicly readable, this is sufficient. If you have a private repository, you’ll need a way to authenticate and fetch the data. In this example, I am using a deployment token. Under Settings ==> Repository, expand the “Deployment Tokens” section and create a deployment token. On my target servers, the remote is added as https://TokenUser:TokenSecret@gitlab.example.com/path/to/project.git instead of just https://gitlab.example.com/path/to/project.git
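
On a target server where the project is already cloned, switching the remote over to the token looks like this (TokenUser and TokenSecret are the values GitLab generated):

git remote set-url origin https://TokenUser:TokenSecret@gitlab.example.com/path/to/project.git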

Once you have defined these variables within the project, use the variables in your CI/CD YAML. In this example, I am deploying PHP code to a web server. Changes to the development branch are deployed to the dev server, and changes to the master branch are deployed to the production server.

In the before_script, I set up the key-based authentication by adding the private key to my runner environment and adding the prod and dev target server’s public key to the runner environment.

- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
- echo "$SSH_KNOWN_HOSTS_DEV" > ~/.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS_PROD" >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts

In the deployment component, username and host variables are used to connect to the target server via SSH. The commands run over that SSH session change directory into the deployment target path and use “git pull” to fetch and merge the updated code. This ensures the proper branch is pulled to the production and down-level environments.

production-deployment:
  stage: deploy
  script:
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_HOST_PROD "cd '$DEPLOYMENT_PATH_PROD'; git pull origin master"
  only:
    - master

development-deployment:
  stage: deploy
  script:
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_HOST_DEV "cd '$DEPLOYMENT_PATH_DEV'; git pull origin development"
  only:
    - development

Now when I make changes to the project code, the pipeline kicks off. Assuming the tests still pass, the deployment runs. If you click on the deployment component, you can see what changes were pulled to the target server. And, yes, the updated files are on my target server.