I needed to copy all of the files under a directory from a docker container — and there is a quick command line that will list the fully qualified path/filename for everything under the current directory:
find $PWD
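This works because `find` echoes each path exactly as you rooted the search, so an absolute starting point yields absolute results. A minimal sketch (the /tmp/findpwd-demo scratch tree is just for illustration):

```shell
# Build a small scratch tree to demonstrate
mkdir -p /tmp/findpwd-demo/subdir
touch /tmp/findpwd-demo/a.txt /tmp/findpwd-demo/subdir/b.txt

# Rooting the search at $PWD (absolute) makes every result fully qualified
cd /tmp/findpwd-demo
find "$PWD" -type f
```

The same listing, captured via docker exec, gives you ready-made source paths to feed into docker cp.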
Since this is the fifth time this month that I’ve spun up some CentOS image and been stymied by the inability to install new packages … I’m going to write down the sed commands that magic the default yum repository configuration to something that’s still functional.
cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
(Sorry, Anya … after today, I’ll try to not post anything about computers for three days!) Linux restricts non-root users from opening ports below 1024. It’s generally a good idea not to run your services as root. Which means, unfortunately, we end up running a lot of services on nonstandard ports (so frequently that 1389 and 1636 are quasi-standard ports for LDAP and LDAPS, and 8080 and 8443 for HTTP and HTTPS). But having to remember to add the nonstandard port to a web URL is an annoyance for users — I’ve seen a lot of people fix this by adding a load-balanced VIP or NGINX proxy in front of the service to handle port translations. But there is a quick and easy way to handle port translation without any additional equipment. Most Linux hosts have firewalld running, and you can tell the firewall to forward the port for you. In this example, I’m letting my Kibana users access my web service using https://kibana.example.com without needing to append the :5601:
firewall-cmd --permanent --zone=public --add-forward-port=port=443:proto=tcp:toport=5601
firewall-cmd --reload
(A --permanent rule isn’t applied to the running firewall until you reload.)
Should you decide against the port forwarding, the same command with --remove-forward-port deregisters the rule:
firewall-cmd --permanent --zone=public --remove-forward-port=port=443:proto=tcp:toport=5601
firewall-cmd --reload
We picked up a really nice color laser printer — a Dell 1350CN. It was really easy to add it to my Windows computer — download driver, install, voila there’s a printer. We found instructions for using a Xerox Phaser 6000 driver. It worked perfectly on Scott’s old laptop, but we weren’t able to install the RPM on his new laptop — it insisted that a dependency wasn’t found: libstdc++.so.6 CXXABI_1.3.1
Except, checking the file, CXXABI_1.3.1 is absolutely in there:
2022-09-17 13:04:19 [lisa@fc36 ~/]# strings /usr/lib64/libstdc++.so.6 | grep CXXABI
CXXABI_1.3
CXXABI_1.3.1
CXXABI_1.3.2
CXXABI_1.3.3
CXXABI_1.3.4
CXXABI_1.3.5
CXXABI_1.3.6
CXXABI_1.3.7
CXXABI_1.3.8
CXXABI_1.3.9
CXXABI_1.3.10
CXXABI_1.3.11
CXXABI_1.3.12
CXXABI_1.3.13
CXXABI_TM_1
CXXABI_FLOAT128
We’ve tried using the foo2hbpl package with the Dell 1355 driver to no avail. It would install, but we weren’t able to print. So we returned to the Xerox package.
Turns out the driver package we were trying to use is a 32-bit driver (even though the download says 32 and 64 bit). From a 32-bit perspective, we really didn’t have libstdc++ — a quick dnf install libstdc++.i686 installed the library along with some friends.
Xerox’s rpm installed without error … but, attempting to print, just yielded an error saying that the filter failed. I had Scott use ldd to test one of the filters (any of the files within /usr/lib/cups/filter/Xerox_Phaser_6000_6010/ — it indicated the “libcups.so.2” could not be found. We also needed to install the 32-bit cups-libs.i686 package. Finally, he’s able to print from Fedora 36 to the Dell 1350cn!
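ldd makes this kind of failure easy to spot: any shared library the loader can’t resolve is printed as “not found”. A quick sketch, using /bin/ls as a stand-in for the printer filter binary:

```shell
# List a binary's shared-library dependencies;
# anything the loader cannot resolve is flagged "not found"
ldd /bin/ls

# Run against the 32-bit filter, the telltale line would look something like:
#   libcups.so.2 => not found
```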
You can use dmidecode to list all sorts of information about the system — there is a list of device types that you can use with the "-t" option:
Type Information
────────────────────────────────────────────
0 BIOS
1 System
2 Baseboard
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply
40 Additional Information
41 Onboard Devices Extended Information
42 Management Controller Host Interface
For example, listing the system slot information (type 9):
[lisa@fedora ~/]# dmidecode -t 9
…
Handle 0x0024, DMI type 9, 17 bytes
System Slot Information
Designation: Slot6
Type: 32-bit PCI
Current Usage: In Use
Length: Short
ID: 6
Characteristics:
3.3 V is provided
Opening is shared
PME signal is supported
Bus Address: 0000:0a:02.0
The “Bus Address” value corresponds to information from lspci:
[lisa@fedora ~/]# lspci | grep "0a:02.0"
0a:02.0 Multimedia video controller: Conexant Systems, Inc. CX23418 Single-Chip MPEG-2 Encoder with Integrated Analog Video/Broadcast Audio Decoder
Bidirectional backwards compatibility was introduced in 2017 – which means my experience where you needed to upgrade the broker first and then the clients is no longer true. Rejoice!
Two CentOS docker containers were provisioned as follows:
docker run -dit --name=kafka1 -p 9092:9092 centos:latest
docker run -dit --name=kafka2 -p 9093:9092 -p 9000:9000 centos:latest
# Shell into each container and do the following:
sed -i -e "s|mirrorlist=|#mirrorlist=|g" /etc/yum.repos.d/CentOS-*
sed -i -e "s|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g" /etc/yum.repos.d/CentOS-*
# Get IPs and hostnames into /etc/hosts
172.17.0.2 40c2222cfea0
172.17.0.3 2923addbcb6d
# Update installed packages & install required tools
dnf update
dnf install -y passwd vim net-tools wget git unzip
# Add a kafka user, make a kafka folder, and give the kafka user ownership of the kafka folder
useradd kafka
passwd kafka
usermod -aG wheel kafka
mkdir /kafka
chown kafka:kafka /kafka
# Install Kafka
su - kafka
cd /kafka
wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz
tar vxzf kafka_2.12-2.5.0.tgz
rm kafka_2.12-2.5.0.tgz
ln -s /kafka/kafka_2.12-2.5.0 /kafka/kafka
# Configure zookeeper
vi /kafka/kafka/config/zookeeper.properties
dataDir=/kafka/zookeeperdata
server.1=172.17.0.2:2888:3888
# Start Zookeeper on the first server
screen -S zookeeper /kafka/kafka/bin/zookeeper-server-start.sh /kafka/kafka/config/zookeeper.properties
# Configure the cluster
vi /kafka/kafka/config/server.properties
broker.id=1        # unique number per cluster node
listeners=PLAINTEXT://:9092
zookeeper.connect=172.17.0.2:2181
# Start Kafka
screen -S kafka /kafka/kafka/bin/kafka-server-start.sh /kafka/kafka/config/server.properties
# Edit producer.properties on a server
vi /kafka/kafka/config/producer.properties
bootstrap.servers=172.17.0.2:9092,172.17.0.3:9092
# Create test topic
/kafka/kafka/bin/kafka-topics.sh --create --zookeeper 172.17.0.2:2181 --replication-factor 2 --partitions 1 --topic ljrTest
# Post messages to the topic
/kafka/kafka/bin/kafka-console-producer.sh --broker-list 172.17.0.2:9092 --producer.config /kafka/kafka/config/producer.properties --topic ljrTest
# Retrieve messages from topic
/kafka/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.17.0.2:9092 --topic ljrTest --from-beginning
/kafka/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.17.0.3:9092 --topic ljrTest --from-beginning
Voila, a functional Kafka sandbox cluster.
Now we’ll install the cluster manager, CMAK:
cd /kafka
git clone --depth 1 --branch 3.0.0.6 https://github.com/yahoo/CMAK.git
cd CMAK
vi conf/application.conf
cmak.zkhosts="40c2222cfea0:2181"
# CMAK requires java > 1.8 … so getting 11 set up
cd /usr/lib/jvm
wget https://cdn.azul.com/zulu/bin/zulu11.58.23-ca-jdk11.0.16.1-linux_x64.zip
unzip zulu11.58.23-ca-jdk11.0.16.1-linux_x64.zip
mv zulu11.58.23-ca-jdk11.0.16.1-linux_x64 zulu-11
PATH=/usr/lib/jvm/zulu-11/bin:$PATH
# Build CMAK from the repo directory
cd /kafka/CMAK
./sbt -java-home /usr/lib/jvm/zulu-11 clean dist
cp /kafka/CMAK/target/universal/cmak-3.0.0.6.zip /kafka
cd /kafka
unzip cmak-3.0.0.6.zip
cd cmak-3.0.0.6
screen -S CMAK
bin/cmak -java-home /usr/lib/jvm/zulu-11 -Dconfig.file=/kafka/cmak-3.0.0.6/conf/application.conf -Dhttp.port=9000
Access it at http://cmak_host:9000
# Back up the Kafka installation (excluding log files)
tar cvfzp /kafka/kafka-2.5.0.tar.gz --exclude logs /kafka/ws_npm_kafka/kafka_2.12-2.5.0
# Get newest Kafka version installed
# From another host where you can download the file, transfer it to the kafka server
scp kafka_2.12-3.2.3.tgz list@kafka1:/tmp/
# Back on the Kafka server — copy the tgz file into the Kafka directory
mv /tmp/kafka_2.12-3.2.3.tgz /kafka/kafka
# Verify Kafka data is stored outside of the install directory:
[kafka@40c2222cfea0 config]$ grep log.dir server.properties
log.dirs=/tmp/kafka-logs
# Verify zookeeper data is stored outside of the install directory:
[kafka@40c2222cfea0 config]$ grep dataDir zookeeper.properties
dataDir=/kafka/zookeeperdata
# Get the new version of Kafka – start with the zookeeper(s) then do the other nodes
cd /kafka
wget https://downloads.apache.org/kafka/3.2.3/kafka_2.12-3.2.3.tgz
tar vxfz /kafka/kafka_2.12-3.2.3.tgz
# Copy config from old iteration to new
cp /kafka/kafka_2.12-2.5.0/config/* /kafka/kafka_2.12-3.2.3/config/
# Edit server.properties and add a configuration line to force the inter-broker protocol version to the currently running Kafka version
# This ensures your cluster is using the “old” version to communicate and you can, if needed, revert to the previous version
vi /kafka/kafka/config/server.properties
inter.broker.protocol.version=2.5.0
# Restart each Kafka server – waiting until it has come online before restarting the next one – with the new binaries
# Stop kafka
systemctl stop kafka
# Move symlink to new folder
unlink /kafka/kafka
ln -s /kafka/kafka_2.12-3.2.3 /kafka/kafka
# start kafka
systemctl start kafka
# Or, to watch it run,
/kafka/kafka/bin/kafka-server-start.sh /kafka/kafka/config/server.properties
# Finally, ensure you’ve still got ‘stuff’
/kafka/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.17.0.3:9092 --topic ljrTest --from-beginning
# And verify the version has updated
[kafka@40c2222cfea0 bin]$ ./kafka-topics.sh --version
3.2.3 (Commit:50029d3ed8ba576f)
# Until this point, we can just roll back to the old folder & revert to the previous version of Kafka … that’s our backout plan.
# Once everything has been confirmed to be working, bump the inter-broker protocol version to the new version & restart Kafka
vi /kafka/kafka/config/server.properties
inter.broker.protocol.version=3.2
I am using an NGINX container which is based on Debian 11 — following the vouch-proxy build instructions failed spectacularly on the first step, reporting that “package embed is not in GOROOT”. It appears that Debian package installation gets you go 1.15 — and ‘embed’ wasn’t added until 1.16. So … that’s not great.
As a note to myself — here are the additional packages I install to the base container:
apt-get update
apt-get upgrade
apt-get install vim wget net-tools procps git make gcc g++
To manually install golang on Debian:
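Roughly, it’s the upstream tarball method from go.dev (the 1.19.1 version number below is just an example — substitute whatever release is current):

```shell
# Download a binary release (1.19.1 is an example version; see https://go.dev/dl)
wget https://go.dev/dl/go1.19.1.linux-amd64.tar.gz

# Remove any previous install and unpack to /usr/local
rm -rf /usr/local/go
tar -C /usr/local -xzf go1.19.1.linux-amd64.tar.gz

# Put the go toolchain on PATH and confirm the version
export PATH=$PATH:/usr/local/go/bin
go version
```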
Now I am able to run their shell script to build the vouch-proxy binary:
I’m writing it down this time — after completing the steps to set up xrdp (installed, configured, running, firewall port open), we get prompted for credentials … good so far!
And then get stuck on a black screen. This is because the user we’re trying to log into is already logged into the machine. Log out locally, and the user is able to log into the remote desktop connection. Conversely, attempting to log in locally once the remote desktop connection is established just hangs on a black screen too.
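Before connecting, it’s easy to check whether anyone already holds a console session — `who` lists the active logins and their terminals:

```shell
# List active login sessions; a local graphical login shows up with a
# tty/seat entry, which is what blocks the incoming xrdp session
who
```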
I’ve seen a number of walkthroughs detailing how to convert an Aironet Wireless Access Point that’s using the lightweight firmware (the firmware which relies on something like a CAPWAP server to provide configuration so there’s not much in the way of local config options) to the autonomous firmware (one with local config & a management GUI). A few people encounter issues because downloading firmware requires a TACACS agreement — great if you’re a network engineer at a company, not great if you’ve bought a single access point somewhere.
While “google it and find someone who has posted the file … then verify the MD5 sum checks out” is an answer, a lot of the newer firmwares appear to have a major bug where any attempt to commit changes yields a 404 error. ap3g2-k9w7-tar.153-3.JF12.tar, ap3g2-k9w7-tar.153-3.JF15.tar, ap3g2-k9w7-tar.153-3.JPI4.tar — all very buggy. While it may be possible to use the CLI to “copy ru star” and write the running config into the startup config … that’s going to be difficult to explain to someone else. Something else odd: the built-in Cisco account shows up as a ‘read only’ user — maybe that’s normal, and the GUI just labels it read-only even though it actually has management permission?
What I’ve realized, in our attempt to convert into a fully functional autonomous firmware, is that the specific version referenced in one of the walkthroughs (ap3g2-k9w7-tar.153-3.JH.tar) is a deliberate selection — it’s a security update firmware release. Which means it’s available for download for anyone with a Cisco account that’s OK for encryption download (i.e. not residing in one of those countries to which American companies are not allowed to ‘export’ good encryption stuff) even if you don’t have a TACACS account.
Luckily, the JH iteration of the firmware doesn’t have the 404 error on committing changes. The Cisco account is still showing up as read-only, but we were able to make our own read-write user & implement changes.