Month: August 2019

Upgrading Oracle SQL Developer

I’ve been using an Oracle database more in my new position … which means I’ve got the Oracle SQL Developer tool installed on my computer. My first upgrade was available yesterday … and it didn’t work. Not like threw an error, but like double click on the executable and nothing happens. It silently exits.

Turns out there’s something in appdata that needs to be cleared. I don’t run multiple versions of SQL Developer, so I could just blow away “%userprofile%\appdata\roaming\SQL Developer” and “%userprofile%\appdata\roaming\sqldeveloper” to clear whatever needs to be cleared. Click the icon and the program finally runs.
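
For the CLI-inclined, clearing both directories from a cmd prompt looks something like this (assuming, as above, that you don't run multiple SQL Developer versions and don't mind losing saved connections and preferences along with whatever was causing the hang):

rd /s /q "%userprofile%\appdata\roaming\SQL Developer"
rd /s /q "%userprofile%\appdata\roaming\sqldeveloper"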

Setting up gitlab-runner

CLI on the GitLab server:

# Set up the GitLab Repo
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh | bash

# Install package
yum install gitlab-runner

# verify runner is executable
ls -l `which gitlab-runner`
# If needed, flag it executable – shouldn’t be a problem with RPM installations, but it’s been a problem for me with manual installs
#chmod ugo+x /usr/local/bin/gitlab-runner

# Register a runner
gitlab-runner register

# use URL & token from http://<GITLABSERVER>/admin/runners
# add tags, if you want to use tags to assign runner
# executor: shell (or docker, if you want to use Docker executors; the shell executor is the simplest case, so I am starting there)
# A scripted, non-interactive registration is sketched after this block

# start the runner
gitlab-runner start
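
The register command above walks through its prompts interactively. If you are scripting the runner setup, the same values can be supplied as flags; this is a sketch with placeholder URL, token, description, and tags:

# Non-interactive registration (placeholder values; use the real URL/token from the admin Runners page)
gitlab-runner register --non-interactive \
  --url "http://gitlab.example.com/" \
  --registration-token "TOKEN_FROM_ADMIN_RUNNERS_PAGE" \
  --description "shell-runner" \
  --executor "shell" \
  --tag-list "shell"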

On the GitLab Web GUI:

Admin section => Overview => Runners. Click pencil to edit the runner and uncheck “Lock to current projects”, and (unless you want to use tagging) check “Run untagged jobs”
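
If you would rather script those toggles too, the runners API can set the same flags; a sketch with a placeholder admin token and runner ID:

# Unlock the runner and allow untagged jobs via the API (placeholder token and runner ID)
curl --request PUT --header "PRIVATE-TOKEN: <ADMIN_TOKEN>" \
     "http://<GITLABSERVER>/api/v4/runners/<RUNNER_ID>" \
     --form "locked=false" --form "run_untagged=true"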

** I was getting an error in every pipeline job saying the git command was not found.

For most other commands, you can append to the path in the before_script section of your .gitlab-ci.yml

before_script:
  - export PATH=$PATH:/opt/whatever/odd/path/bin

But that doesn’t work in this case because we never get that far: the runner needs git just to fetch the project, so the job fails before it ever reads the before_script. Git, on my system, came bundled as part of the GitLab package, so I just created a symlink into a “normal” binary location:

root@gitlab:~# which git
/opt/gitlab/embedded/bin/git
root@gitlab:~# ln -s /opt/gitlab/embedded/bin/git /usr/bin/git

And we’ve got successful test execution.


On Greenland

Trump wants to buy Greenland … which, yeah, it’s been suggested before. Way before, like 1946. And it wasn’t well received by the Danes at the time, so not exactly a stellar precedent there. But Greenland isn’t a colony in the 1800s style. A decade ago, the Act on Greenland Self-Government was passed, which put Greenland’s parliament on par with the Danish parliament. Now, foreign policy and international agreements are still under Danish control, and there have been some power struggles in the intervening decade. But I don’t think anyone could buy Greenland from Denmark.

Software Flow Control and vim

In the early 90’s, one of the things I liked about Microsoft’s ecosystem was that they developed a standard for keyboard shortcuts. In most applications, developed by Microsoft or not, you could hit ctrl-p to print or ctrl-x to exit. Or ctrl-s to save. It’s quite convenient when I’m using Windows applications, but hitting ctrl-s to save without really thinking about it hangs vim. Hangs like “go into another shell and kill vim & that ssh session”. This is because ctrl-s, in Linux, means XOFF — the software flow control command that means “hi, I’m a thing from 1968 and my buffer is getting full. chill out for a bit and let me catch up”. Recovery is simple enough, send XON — “hi, that thing from 1968 again, and I’m all caught up. send me some more stuff”. That’s ctrl-q.

But there aren’t many slow anythings involved in computing these days, which means XON/XOFF isn’t the most useful of features for most people (* if you’ve got real serial devices attached … you may not be “most people” here). Instead of remembering that ctrl-q gets vim back without killing it, just disable START/STOP flow control. The thing is, it’s not really vim that’s using flow control — it’s the terminal settings — so the “fix” isn’t something you do in vim. Put it in your ~/.bashrc or ~/.bash_profile (or globally, in something like /etc/profile.d/disableFlowControl.sh):

# Disable XON/XOFF flow control so ctrl-s doesn’t hang vim
stty -ixon

You can add -ixoff to keep ctrl-q from meaning XON too … but I don’t bother since “start sending me data” doesn’t seem to hang anything.
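
One caveat: if that profile also gets sourced by non-interactive sessions (scp or sftp, for example), stty will complain that standard input is not a terminal. A common guard, assuming a bash-style shell, is:

# Only adjust terminal settings in interactive shells
if [[ $- == *i* ]]; then
    stty -ixon
fi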

Git: Listing Conflicting Files

To list the unmerged files — where you’ve got merge conflicts to resolve:

git diff --name-only --diff-filter=U

Filters are:

  • Added (A)
  • Copied (C)
  • Deleted (D)
  • Modified (M)
  • Renamed (R)
  • Type changed (T)
  • Unmerged (U)
  • Unknown (X)
  • pairing Broken (B)

(use lower case letters to exclude)
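
As an example of the lowercase exclusion, this lists the files touched by the most recent commit while leaving out anything that was deleted:

# Files changed in the last commit, excluding deletions
git diff --name-only --diff-filter=d HEAD~1 HEAD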

Renaming a Branch in Git

I finally had a situation where I needed to rename a branch in git. When I was the only one involved in a development effort (or even looking at it!), it didn’t really matter if I typo’d something. Exchange and Exchagne … I know what I meant. But working under a more formal development process, I started naming my branch after the issue ID. And managed to typo the first one. Sigh!

# Check out the incorrectly named branch

git checkout OSSA166

# Rename it with the correct name

git branch -m OSSA163

# See what you’ve got — the local one is correct now, but the remote is still incorrectly named

git branch -a

* OSSA163
master
remotes/origin/HEAD -> origin/master
remotes/origin/OSSA166
remotes/origin/master
remotes/origin/uat

# Push a change to rename the remote one too

git push origin :OSSA166 OSSA163

Total 0 (delta 0), reused 0 (delta 0)

To ssh://git.example.com/path/to/my/repo.git

 - [deleted]         OSSA166
* [new branch]      OSSA163 -> OSSA163

# And see what you’ve got again

git branch -a

* OSSA163
master
remotes/origin/HEAD -> origin/master
remotes/origin/OSSA163
remotes/origin/master
remotes/origin/uat
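
One follow-up worth noting: renaming the local branch doesn’t update its upstream tracking reference, which still points at the remote branch we just deleted. Re-pointing it is a one-liner (branch names match the example above):

# Point the renamed local branch at the renamed remote branch
git branch -u origin/OSSA163 OSSA163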


Docker Desktop for Windows – Bind Mounts

I’ve been trying to set up a Docker container running an older CentOS, Apache, and PHP version as a sandbox for work. This would allow me to update code on my local computer, test changes, and then pull the changes to the development server for UAT testing. Setting up the base container was easy enough — installed a VM, tar’d off the system, and imported it as a Docker image. There’s a lot of optimization that could/should be done, but I was aiming for proof of concept at this stage.
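
For reference, the VM-to-image conversion was along these lines; file and image names here are placeholders, and the excludes keep virtual filesystems out of the tarball:

# On the CentOS VM: tar up the root filesystem, skipping virtual/volatile paths
tar --numeric-owner -cf /tmp/centos-base.tar -C / \
    --exclude=proc --exclude=sys --exclude=dev --exclude=tmp .

# On the Docker host: import the tarball as a base image
docker import /tmp/centos-base.tar centos-php-sandbox:base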

I am using bind mounts for the website configuration and code — the website conf file in conf.d, the SSL certificates, and the vhtml folder which houses the web code. This means I can tweak the site config & code in my IDE, reload Apache in Docker, and validate my changes. It worked great until I connected to the company VPN. Attempting to access the mounted data just hangs. Nothing. Drop the VPN, and the files are there again.
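
For context, those three bind mounts get passed at container launch, something like this; the host paths, container paths, and image name are placeholders standing in for my actual setup:

# Bind-mount the site config, SSL certs, and web code into the container
docker run -d --name php-sandbox \
    -p 80:80 -p 443:443 \
    -v "D:/docker/siteconf:/etc/httpd/conf.d" \
    -v "D:/docker/certs:/etc/httpd/certs" \
    -v "D:/docker/vhtml:/var/www/vhtml" \
    centos-php-sandbox:base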

There were two problems. First, the default VPN configuration does not allow access to local network resources, and it seems the Docker NAT is a local network resource. We use Cisco AnyConnect; in the client settings, I checked “Allow local (LAN) access when using VPN (if configured)”. Note the if configured — the server-side settings need to allow use of local resources when connected via VPN. Fortunately, people with WiFi printers had complained about having to disconnect the VPN every time they wanted to print something, and accessing local resources is permitted in our profile.

Unfortunately, I still couldn’t access files on my mount points. The second problem: Docker Desktop shares out my drive, and the Docker VM mounts that CIFS share over the network using my domain credentials … credentials from an Active Directory domain which is most certainly not registered in the VPN DNS servers.

[root@5542506m1a5e /]# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/QMCCTMGPBHQFW66ARPWHSQMWQL:/var/lib/docker/overlay2/l/IQ2YIH47ZXTN55PGH3BWUKFPTT,upperdir=/var/lib/docker/overlay2/d072c94532976a4196174751c57359139501739001e7b9d50de59041c768a307/diff,workdir=/var/lib/docker/overlay2/d072c94532976a4196174751c57359139501739001e7b9d50de59041c768a307/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
...
//10.0.75.1/D on /etc/httpd/certs type cifs (rw,relatime,vers=3.02,sec=ntlmsspi,cache=strict,username=myuid,domain=mydomain,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0755,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,nobrl,mfsymlinks,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
...
tmpfs on /sys/firmware type tmpfs (ro,relatime)

To use the share when connected via the VPN, I needed to use the credentials of a local account instead. Beyond creating a local administrator-level account, you may need to add read/write permissions for that new account on your %userprofile% directory — inheritance is generally disabled there, and only the individual user has access to the folder.

Once there’s a local account set up to work, you’ve got to tell Docker to use it. In the settings, select “Shared Drives”. Use “Reset credentials” to open a prompt for the logon credentials that will be used to mount the shared volume.


Start the Docker container, VPN into the company network, and I’ve got a fully functional sandbox in a Docker container.