
Fedora 39: Load Balancing Across Two Network Connections

I think this is one of those things that people don’t normally do at home, and the folks who configure this in enterprises know what they’re doing and don’t need guidance on how to do basic network things. But … we wanted to have two network cards in our server so high-bandwidth activities like backups and TV recording don’t create contention. When I was a server admin, I’d set up link aggregation — bonding, teaming — and it just magically worked. We’d put in a port request to get the new port turned up, note it was going to be a teamed interface, do our OS config, and everything was fine. What the network guys did? I had no idea. Well, now I do!

On the switch — a Cisco 2960-S in this case — you need to create an EtherChannel and assign the ports to that channel. Telnet’ing to the switch, you first need to elevate your privileges, since the session starts at level 1:

wc2906s01>show priv
Current privilege level is 1

Once you’ve entered privilege level 15, go into config term. Create the port-channel interface and assign it a number (I used 1, but 1 through 6 are options). Then go into each interface and add it to the port channel group you just created (again, 1). I set the mode to “on” because I doubt our server is going to negotiate PAgP and I didn’t want to get into setting up LACP.

enable 15
config term

interface Port-channel 1

interface GigabitEthernet1/0/13
channel-group 1 mode on

interface GigabitEthernet1/0/14
channel-group 1 mode on

! src-mac is the default, can change to something else
! e.g. src-dst-mac would be set using
! port-channel load-balance src-dst-mac
end

Done! Using show etherchannel summary confirms that this worked:

wc2906s01>show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators: 1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         -          Gi1/0/13(P) Gi1/0/14(P)
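
If you do change the load-balance method, the switch will also tell you what it is currently hashing on:

show etherchannel load-balance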

Then you can configure a network bond in Fedora and add the physical interfaces. Since we’re using KVM/QEMU, there is a vmbridge bridge that contains the bond, and the bond joins the two physical interfaces named enp10s2 and enp0s25.

# VM Bridge configuration

[lisa@fedora39 /etc/NetworkManager/system-connections/]# cat vmbridge.nmconnection
[connection]
id=vmbridge
uuid=b2bca190-827b-4aa4-a4f5-95752525e5e5
type=bridge
interface-name=vmbridge
metered=2
timestamp=1708742580

[ethernet]

[bridge]
multicast-snooping=false
priority=1
stp=false

[ipv4]
address1=10.1.2.3/24,10.5.5.1
dns=10.1.2.200;10.1.2.199;
dns-search=example.com;
may-fail=false
method=manual

[ipv6]
addr-gen-mode=stable-privacy
method=disabled

[proxy]


# Bond configuration — master is the vmbridge, and the round robin load balancing option is used.
[lisa@fedora39 /etc/NetworkManager/system-connections/]# cat bond0.nmconnection
[connection]
id=bond0
uuid=15556a5e-55c5-4505-a5d5-a5c547b5155b
type=bond
interface-name=bond0
master=vmbridge
metered=2
slave-type=bridge
timestamp=1708742580

[bond]
downdelay=0
miimon=1
mode=balance-rr
updelay=0

[bridge-port]

# Finally, two network interfaces that are mastered by bond0
[lisa@fedora39 /etc/NetworkManager/system-connections/]# cat enp0s25.nmconnection
[connection]
id=enp0s25
uuid=159535a5-65e5-45f5-a505-a53555958525
type=ethernet
interface-name=enp0s25
master=bond0
metered=2
slave-type=bond
timestamp=1708733538

[ethernet]
auto-negotiate=true
mac-address=55:65:D5:15:A5:25
wake-on-lan=32768

[lisa@fedora39 /etc/NetworkManager/system-connections/]# cat enp10s2.nmconnection
[connection]
id=enp10s2
uuid=158525f5-f5d5-4515-9525-55e515c585b5
type=ethernet
interface-name=enp10s2
master=bond0
metered=2
slave-type=bond
timestamp=1708733538

[ethernet]
auto-negotiate=true
mac-address=55:35:25:D5:45:B5
wake-on-lan=32768

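As an aside, the same bridge / bond / slave stack can be built with nmcli instead of hand-writing the keyfiles. This is only a rough sketch using the names and addresses from the files above, not something I ran against this server:

nmcli connection add type bridge con-name vmbridge ifname vmbridge \
    bridge.stp no ipv4.method manual ipv4.addresses 10.1.2.3/24 \
    ipv4.gateway 10.5.5.1 ipv4.dns "10.1.2.200 10.1.2.199" \
    ipv4.dns-search example.com ipv6.method disabled
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=balance-rr,miimon=1" master vmbridge slave-type bridge
nmcli connection add type ethernet con-name enp0s25 ifname enp0s25 master bond0 slave-type bond
nmcli connection add type ethernet con-name enp10s2 ifname enp10s2 master bond0 slave-type bond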

Restart NetworkManager to bring everything online.
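On this box that is just the usual systemctl restart (nmcli connection reload followed by bringing the profiles back up should also work, but a plain restart is simple):

systemctl restart NetworkManager

Voila: two network interfaces joined together and connected to the switch. Check out the bond file under /proc/net/bonding to verify this side is working.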

[lisa@fedora39 ~/]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.7.5-200.fc39.x86_64

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp0s25
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 55:65:d5:15:a5:25
Slave queue ID: 0

Slave Interface: enp10s2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 55:35:35:d5:45:b5
Slave queue ID: 0

Fedora: Finding Build Parameters for RPM

There have been a few times I’ve needed to make a custom build of an application — to enable some feature that the default build does not include or to use a newer version than is available in the package repository — and I always thought it would be really useful to know what the build parameters were. Turns out you can find out how Fedora packages were built.

Go to https://koji.fedoraproject.org and search for the package.

Locate ‘yours’ (the right version of the application and the right version of Fedora) and click on the package name.

Scroll down to the “Logs” section and click on the “build.log” for the proper architecture.

Here, you will see the entire log for building the RPM, but part of that is building the application from source. You’ll be able to find the configure and make parameters used in the build. As an example, I was trying to determine if Gerbera was built with the REUSEADDR flag (it was) and whether libupnp’s disable-blocking-tcp-connections option was used:

https://kojipkgs.fedoraproject.org//packages/gerbera/2.0.0/1.fc39/data/logs/x86_64/build.log

In my particular case, I then had to find the libupnp package and see how that was built; it was built with blocking TCP connections enabled. Reusing the parameters from the RPM allows me to build packages that land files in “the right place” (or, rather, the place used in the Fedora package) and include any features they’ve included.
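
If you’d rather not scroll through the whole log in the browser, pulling the build.log down and grepping for the configure/cmake invocation works too. A quick sketch using the Gerbera log above (the grep pattern is just my guess at what’s worth matching):

curl -sO https://kojipkgs.fedoraproject.org//packages/gerbera/2.0.0/1.fc39/data/logs/x86_64/build.log
grep -nE '/usr/bin/cmake|\./configure' build.log | head -n 20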

Samba – Address family not supported by protocol

After upgrading to Fedora 39, we started having problems with Samba falling over on startup. The server has IPv6 disabled, and (evidently) something is not happy about that. I guess we could enable IPv6, but we don’t really need it.

Adding the following two lines to the [global] section of the smb.conf file and restarting Samba sorted it:

bind interfaces only = yes
interfaces = lo eth0

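To double-check that the new settings actually loaded, testparm will dump the effective configuration, and then the service just gets restarted (the unit is smb.service, per the journal entries below):

testparm -s 2>/dev/null | grep -E 'bind interfaces only|interfaces ='
systemctl restart smb

For reference, this is what the failure looked like in the journal: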

Feb 11 06:26:01 systemd[1]: Started smb.service - Samba SMB Daemon.
Feb 11 06:26:01 smbd[1109]: [2024/02/11 06:26:01.285076, 0] ../../source3/smbd/server.c:1091(smbd_open_one_socket)
Feb 11 06:26:01 smbd[1109]: smbd_open_one_socket: open_socket_in failed: Address family not supported by protocol
Feb 11 06:26:01 smbd[1109]: [2024/02/11 06:26:01.290022, 0] ../../source3/smbd/server.c:1091(smbd_open_one_socket)
Feb 11 06:26:01 smbd[1109]: smbd_open_one_socket: open_socket_in failed: Address family not supported by protocol
Feb 11 08:01:43 systemd[1]: Stopping smb.service - Samba SMB Daemon...
Feb 11 08:01:43 systemd[1]: smb.service: Deactivated successfully.
Feb 11 08:01:43 systemd[1]: Stopped smb.service - Samba SMB Daemon.

Updating Fedora — System Boots to Grub Error After Update

If you film the boot sequence and look frame by frame, you’ll see that it very briefly flashes a TPM error:

error: ../../grub-core/commands/efi/tpm.c:150:unknown TPM error.


From what I’ve been able to glean, this secure boot stuff works off of signatures. Microsoft has signatures in BIOS. Everyone else kind of inserts their keys on the fly … so you can run out of space to save these keys and be unable to boot. To work around this, every time an update gets us over the limit, we go into the secure boot DBX management menu and reset the “Forbidden Signatures” to the factory default. That leaves 13 keys instead of 373, and the OS is able to do its “thing” and boot.


And I’m actually writing this down this time because I spent a lot of time researching it the last time Scott’s laptop failed to boot and dumped out to a grub menu. This time, I kinda knew what we had done and why, but I had lost a lot of the details.

Mounting DD Raw Image File

And a final note from my disaster recovery adventure: I had to use ddrescue to copy as much data from a corrupted drive as possible (ddrescue /dev/sdb /mnt/vms/rescue/backup.raw --try-again --force --verbose).
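
Worth noting: ddrescue is normally given a mapfile as a third argument so an interrupted run can resume and the bad areas can be retried on later passes. Something along these lines, with the mapfile path being whatever makes sense:

# first pass, recording progress in a mapfile
ddrescue --verbose /dev/sdb /mnt/vms/rescue/backup.raw /mnt/vms/rescue/backup.map
# follow-up passes that go back and retry the bad sectors
ddrescue --retry-passes=3 --verbose /dev/sdb /mnt/vms/rescue/backup.raw /mnt/vms/rescue/backup.map

Once I had the image, what do you do with it? Fortunately, you can mount a dd image file and copy data from it.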

# Mounting DD image
2023-04-17 23:54:01 [root@fedora /]# kpartx -l backup.raw
loop0p1 : 0 716800 /dev/loop0 2048
loop0p2 : 0 438835200 /dev/loop0 718848
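# (kpartx -l only lists the mappings; kpartx -a backup.raw is what actually creates
#  the /dev/mapper/loop0pN devices so they can be mounted)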

2023-04-17 23:55:08 [root@fedora /]# mount /dev/mapper/loop0p2 /mnt/recovery/ -o loop,ro
mount: /mnt/recovery: cannot mount /dev/loop1 read-only.
       dmesg(1) may have more information after failed mount system call.

2023-04-17 23:55:10 [root@fedora /]# mount /dev/mapper/loop0p2 /mnt/recovery/ -o loop,ro,norecovery

2023-04-18 00:01:03 [root@fedora /]# ll /mnt/recovery/
total 205G
drwxr-xr-x  2 root root  213 Jul 14  2021 .
drwxr-xr-x. 8 root root  123 Apr 17 22:38 ..
-rw-r--r--. 1 root root 127G Apr 17 20:35 ExchangeServer.qcow2
-rw-r--r--. 1 qemu qemu  10G Apr 17 21:42 Fedora.qcow2
-rw-r--r--. 1 qemu qemu  15G Apr 17 14:05 FedoraVarMountPoint.qcow2
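
Once everything worth saving is copied off, teardown is just unmounting and removing the partition mappings kpartx created:

umount /mnt/recovery
kpartx -d backup.raw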


Mounting a QCOW File

We had a power outage on Monday that took out the drive that holds our VMs. There are backups, but the backup drive copies had superblock errors and all sorts of issues. To recover our data, I learned all sorts of new things, one of which is that you can mount a QCOW file and copy data out. First, you have to connect a network block device to the file. Once it is connected, you can use fdisk to list the partitions on the drive and mount those partitions. In this example, I had a partition called nbd0p1 that I mounted to /mnt/data_recovery.

# load the nbd kernel module (max_part caps how many partitions get mapped per device)
modprobe nbd max_part=2
# attach the qcow2 file to the /dev/nbd0 network block device
qemu-nbd --connect=/dev/nbd0 /path/to/server_file.qcow2
# list the partitions, then mount the one holding the data
fdisk /dev/nbd0 -l
mount /dev/nbd0p1 /mnt/data_recovery

Once you are done, unmount it and disconnect from the network block device.

umount /mnt/data_recovery
qemu-nbd --disconnect /dev/nbd0
rmmod nbd