The following formula returns just the substring found before the first dash in the data in cell A2:
=LEFT(A2, FIND("-", A2) - 1)
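The same extraction is easy to sketch in Python for anyone scripting outside the spreadsheet (the helper name here is mine, not part of the formula):

```python
def before_first_dash(value):
    # Equivalent of =LEFT(A2, FIND("-", A2) - 1): everything before the first dash.
    # Note: FIND raises #VALUE! when no dash exists; split just returns the whole string.
    return value.split("-", 1)[0]

print(before_first_dash("ABC-123-XYZ"))  # → ABC
```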
Ingredients
¼ cup unsalted butter, softened
¼ teaspoon salt
3 ¼ cups confectioners’ sugar
⅓ cup sweetened condensed milk
½ teaspoon peppermint extract (taste as you add it, don’t overdo it!)
food coloring, optional
Instructions
I am working with a new application that doesn’t seem to like it when a person has multiple roles assigned to them … however, I first need to prove that this is the problem. Luckily, your browser receives the SAML response, and you can actually see the Role entitlements that are being sent. I just need to parse them out of the big 80 meg file that a simple “go here and log on” generates!
To gather data to be parsed, open the browser’s Dev Tools for the tab. Click the settings gear icon and select “Persist Logs”. Reproduce the scenario: navigate to the site and log in. Then save the dev tools session as a HAR file. The following Python script will analyze the file, extract any SAML response tokens, and print them in a human-readable format.
################################################################################
# This script reads a HAR file, identifies HTTP requests and responses containing
# SAML tokens, and decodes "SAMLResponse" values.
#
# The decoded SAML assertions are printed out for inspection in a readable format.
#
# Usage:
# - Update the str_har_file_path with your HAR file
################################################################################
# Editable Variables
str_har_file_path = 'SumoLogin.har'
# Imports
import json
import base64
import urllib.parse
from xml.dom.minidom import parseString
################################################################################
# This function decodes SAML responses found within the HAR capture
# Args:
# saml_response_encoded(str): URL encoded, base-64 encoded SAML response
# Returns:
# string: decoded string
################################################################################
def decode_saml_response(saml_response_encoded):
    url_decoded = urllib.parse.unquote(saml_response_encoded)
    base64_decoded = base64.b64decode(url_decoded).decode('utf-8')
    return base64_decoded
################################################################################
# This function finds and decodes SAML tokens from HAR entries.
#
# Args:
# entries(list): A list of HTTP request and response entries from a HAR file.
#
# Returns:
# list: List of decoded SAML assertion response strings.
################################################################################
def find_saml_tokens(entries):
    saml_tokens = []
    for entry in entries:
        request = entry['request']
        response = entry['response']
        if request['method'] == 'POST':
            request_body = request.get('postData', {}).get('text', '')
            if 'SAMLResponse=' in request_body:
                saml_response_encoded = request_body.split('SAMLResponse=')[1].split('&')[0]
                saml_tokens.append(decode_saml_response(saml_response_encoded))
        response_body = response.get('content', {}).get('text', '')
        if response.get('content', {}).get('encoding') == 'base64':
            response_body = base64.b64decode(response_body).decode('utf-8', errors='ignore')
        if 'SAMLResponse=' in response_body:
            saml_response_encoded = response_body.split('SAMLResponse=')[1].split('&')[0]
            saml_tokens.append(decode_saml_response(saml_response_encoded))
    return saml_tokens
################################################################################
# This function formats an XML string across multiple lines with
# hierarchical indentation
#
# Args:
# xml_string (str): The XML string to be pretty-printed.
#
# Returns:
# str: A pretty-printed version of the XML string.
################################################################################
def pretty_print_xml(xml_string):
    dom = parseString(xml_string)
    return dom.toprettyxml(indent=" ")
# Load HAR file with UTF-8 encoding
with open(str_har_file_path, 'r', encoding='utf-8') as file:
    har_data = json.load(file)

entries = har_data['log']['entries']
saml_tokens = find_saml_tokens(entries)
for token in saml_tokens:
    print("Decoded SAML Token:")
    print(pretty_print_xml(token))
    print('-' * 80)
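A quick way to sanity-check the decode logic without a full HAR capture is to round-trip a synthetic token; the XML below is a made-up stand-in, not a real assertion:

```python
import base64
import urllib.parse

# Hypothetical stand-in for a real SAML assertion
xml = '<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"/>'

# Encode the way an IdP would for an HTTP-POST binding: base64, then URL-encode
encoded = urllib.parse.quote(base64.b64encode(xml.encode('utf-8')).decode('ascii'))

# Decode with the same two steps decode_saml_response uses
decoded = base64.b64decode(urllib.parse.unquote(encoded)).decode('utf-8')
print(decoded == xml)  # → True
```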
Not that anyone hosts their own Exchange server anymore … but we had a pretty strange issue pop up. Exchange has been, for a dozen years, configured to use the system DNS servers. The system can still use DNS just fine … but the Exchange transport failed to query DNS and just queued messages.
PS C:\scripts> Get-Queue -Identity "EXCHANGE01\3" | Format-List *
DeliveryType : SmtpDeliveryToMailbox
NextHopDomain : mailbox database 1440585757
TlsDomain :
NextHopConnector : 1cdb1e55-a129-46bc-84ef-2ddae27b808c
Status : Retry
MessageCount : 7
LastError : 451 4.4.0 DNS query failed. The error was: DNS query failed with error ErrorRetry
RetryCount : 2
LastRetryTime : 1/4/2025 12:20:04 AM
NextRetryTime : 1/4/2025 12:25:04 AM
DeferredMessageCount : 0
LockedMessageCount : 0
MessageCountsPerPriority : {0, 0, 0, 0}
DeferredMessageCountsPerPriority : {0, 7, 0, 0}
RiskLevel : Normal
OutboundIPPool : 0
NextHopCategory : Internal
IncomingRate : 0
OutgoingRate : 0
Velocity : 0
QueueIdentity : EXCHANGE01\3
PriorityDescriptions : {High, Normal, Low, None}
Identity : EXCHANGE01\3
IsValid : True
ObjectState : New
Yup, still configured to use the SYSTEM’s DNS:
PS C:\scripts> Get-TransportService | Select-Object Name, *DNS*
Name : EXCHANGE01
ExternalDNSAdapterEnabled : True
ExternalDNSAdapterGuid : 2fdebb30-c710-49c9-89fb-61455aa09f62
ExternalDNSProtocolOption : Any
ExternalDNSServers : {}
InternalDNSAdapterEnabled : True
InternalDNSAdapterGuid : 2fdebb30-c710-49c9-89fb-61455aa09f62
InternalDNSProtocolOption : Any
InternalDNSServers : {}
DnsLogMaxAge : 7.00:00:00
DnsLogMaxDirectorySize : 200 MB (209,715,200 bytes)
DnsLogMaxFileSize : 10 MB (10,485,760 bytes)
DnsLogPath :
DnsLogEnabled : True
I had to hard-code the DNS servers to the transport and restart the service:
PS C:\scripts> Set-TransportService EXCHANGE01 -InternalDNSServers 10.5.5.85,10.5.5.55,10.5.5.1
PS C:\scripts> Set-TransportService EXCHANGE01 -ExternalDNSServers 10.5.5.85,10.5.5.55,10.5.5.1
PS C:\scripts> Restart-Service MSExchangeTransport
WARNING: Waiting for service 'Microsoft Exchange Transport (MSExchangeTransport)' to stop...
WARNING: Waiting for service 'Microsoft Exchange Transport (MSExchangeTransport)' to start...
PS C:\scripts> Get-TransportService | Select-Object Name, InternalDNSServers, ExternalDNSServers
Name InternalDNSServers ExternalDNSServers
---- ------------------ ------------------
EXCHANGE01 {10.5.5.1, 10.5.5.55, 10.5.5.85} {10.5.5.85, 10.5.5.55, 10.5.5.1}
Voilà, messages started popping into my mailbox.
Ever since we upgraded to Fedora 41, we have been having horrible problems with our Exchange server. It will drop off the network for half an hour at a time. I cannot even ping the VM from the physical server. Some network captures show there’s no response to the ARP request.
Evidently, the VM configuration contains a machine type that doesn’t automatically update. We are using PC-Q35 as the chipset … and 4.1 was the current version when we built our VMs. That version, however, has since been deprecated, which you can see by asking virsh what capabilities it has:
2025-01-02 23:17:26 [lisa@linux01 /var/log/libvirt/qemu/]# virsh capabilities | grep pc-q35
<machine maxCpus='288' deprecated='yes'>pc-q35-5.2</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-4.2</machine>
<machine maxCpus='255' deprecated='yes'>pc-q35-2.7</machine>
<machine maxCpus='4096'>pc-q35-9.1</machine>
<machine canonical='pc-q35-9.1' maxCpus='4096'>q35</machine>
<machine maxCpus='288'>pc-q35-7.1</machine>
<machine maxCpus='1024'>pc-q35-8.1</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-6.1</machine>
<machine maxCpus='255' deprecated='yes'>pc-q35-2.4</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-2.10</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-5.1</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-2.9</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-3.1</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-4.1</machine>
<machine maxCpus='255' deprecated='yes'>pc-q35-2.6</machine>
<machine maxCpus='4096'>pc-q35-9.0</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-2.12</machine>
<machine maxCpus='288'>pc-q35-7.0</machine>
<machine maxCpus='288'>pc-q35-8.0</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-6.0</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-4.0.1</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-5.0</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-2.8</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-3.0</machine>
<machine maxCpus='288'>pc-q35-7.2</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-4.0</machine>
<machine maxCpus='1024'>pc-q35-8.2</machine>
<machine maxCpus='288'>pc-q35-6.2</machine>
<machine maxCpus='255' deprecated='yes'>pc-q35-2.5</machine>
<machine maxCpus='288' deprecated='yes'>pc-q35-2.11</machine>
Or filtering out the deprecated ones …
2025-01-02 23:16:50 [lisa@linux01 /var/log/libvirt/qemu/]# virsh capabilities | grep pc-q35 | grep -v "deprecated='yes'"
<machine maxCpus='4096'>pc-q35-9.1</machine>
<machine canonical='pc-q35-9.1' maxCpus='4096'>q35</machine>
<machine maxCpus='288'>pc-q35-7.1</machine>
<machine maxCpus='1024'>pc-q35-8.1</machine>
<machine maxCpus='4096'>pc-q35-9.0</machine>
<machine maxCpus='288'>pc-q35-7.0</machine>
<machine maxCpus='288'>pc-q35-8.0</machine>
<machine maxCpus='288'>pc-q35-7.2</machine>
<machine maxCpus='1024'>pc-q35-8.2</machine>
<machine maxCpus='288'>pc-q35-6.2</machine>
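If you’d rather script the selection than eyeball grep output, the capabilities XML can be parsed directly. A sketch, assuming you’ve captured the output of virsh capabilities into a string (the trimmed-down sample document below is mine):

```python
import xml.etree.ElementTree as ET

def newest_q35(capabilities_xml):
    """Return the newest non-deprecated pc-q35 machine type in the XML."""
    root = ET.fromstring(capabilities_xml)
    candidates = set()
    for machine in root.iter('machine'):
        name = machine.text or ''
        if name.startswith('pc-q35-') and machine.get('deprecated') != 'yes':
            candidates.add(name)
    # Compare by numeric version suffix, so pc-q35-9.1 beats pc-q35-8.2
    return max(candidates, key=lambda n: tuple(int(p) for p in n.split('-')[-1].split('.')))

# Trimmed-down example capabilities document
caps = """<capabilities><guest><arch name='x86_64'>
  <machine maxCpus='288' deprecated='yes'>pc-q35-4.1</machine>
  <machine maxCpus='1024'>pc-q35-8.2</machine>
  <machine maxCpus='4096'>pc-q35-9.1</machine>
</arch></guest></capabilities>"""
print(newest_q35(caps))  # → pc-q35-9.1
```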
So I shut down my Exchange server again (again, again), ran “virsh edit exchange01”, and changed
<os>
  <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
  <boot dev='hd'/>
</os>
to
<os>
  <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
  <boot dev='hd'/>
</os>
And started my VM. It took about an hour to boot, and it absolutely hogged the physical server’s disk resources; it was the top listing in iotop -o.
But then … all of the VMs dropped off of iotop. My attempt to log into the server via the console had gone through and was waiting for me. My web mail, which had failed to load all day, loaded. And messages that had been queued for delivery had all come through.
The load on our physical server dropped from 30 to 1. Everything became responsive. And Exchange has been online for a good thirty minutes now.
This script is an example of using the Sumo Logic API to retrieve collector details. This particular script looks for Linux servers and validates that each collector has the desired log sources defined. Those that do not contain all desired sources are flagged for further investigation.
import requests
from requests.auth import HTTPBasicAuth
import pandas as pd
from config import access_id, access_key # Import your credentials from config.py
# Base URL for Sumo Logic API
base_url = 'https://api.sumologic.com/api/v1'
def get_all_collectors():
    """Retrieve all collectors with pagination support."""
    collectors = []
    limit = 1000  # Adjust as needed; check API docs for max limit
    offset = 0
    while True:
        url = f'{base_url}/collectors?limit={limit}&offset={offset}'
        response = requests.get(url, auth=HTTPBasicAuth(access_id, access_key))
        if response.status_code == 200:
            result = response.json()
            collectors.extend(result.get('collectors', []))
            if len(result.get('collectors', [])) < limit:
                break  # Fewer results than the limit means this is the last page
            offset += limit
        else:
            print('Error fetching collectors:', response.status_code, response.text)
            break
    return collectors
def get_sources(collector_id):
    """Retrieve sources for a specific collector."""
    url = f'{base_url}/collectors/{collector_id}/sources'
    response = requests.get(url, auth=HTTPBasicAuth(access_id, access_key))
    if response.status_code == 200:
        sources = response.json().get('sources', [])
        # print(f"Log Sources for collector {collector_id}: {sources}")
        return sources
    else:
        print(f'Error fetching sources for collector {collector_id}:', response.status_code, response.text)
        return []
def check_required_logs(sources):
    """Check if the required logs are present in the sources."""
    required_logs = {
        '_security_events': False,
        '_linux_system_events': False,
        'cron_logs': False,
        'dnf_rpm_logs': False
    }
    for source in sources:
        if source['sourceType'] == 'LocalFile':
            name = source.get('name', '')
            for key in required_logs.keys():
                if name.endswith(key):
                    required_logs[key] = True
    # Determine missing logs
    missing_logs = {log: "MISSING" if not present else "" for log, present in required_logs.items()}
    return missing_logs
# Main execution
if __name__ == "__main__":
    collectors = get_all_collectors()
    report_data = []
    for collector in collectors:
        # Check if the collector's osName is 'Linux'
        if collector.get('osName') == 'Linux':
            collector_id = collector['id']
            collector_name = collector['name']
            print(f"Checking Linux Collector: ID: {collector_id}, Name: {collector_name}")
            sources = get_sources(collector_id)
            missing_logs = check_required_logs(sources)
            if any(missing_logs.values()):
                report_entry = {
                    "Collector Name": collector_name,
                    "_security_events": missing_logs['_security_events'],
                    "_linux_system_events": missing_logs['_linux_system_events'],
                    "cron_logs": missing_logs['cron_logs'],
                    "dnf_rpm_logs": missing_logs['dnf_rpm_logs']
                }
                # print(f"Missing logs for collector {collector_name}: {report_entry}")
                report_data.append(report_entry)

    # Create a DataFrame and write to Excel
    df = pd.DataFrame(report_data, columns=[
        "Collector Name", "_security_events", "_linux_system_events", "cron_logs", "dnf_rpm_logs"
    ])

    # Generate the filename with current date and time
    if not df.empty:
        timestamp = pd.Timestamp.now().strftime("%Y%m%d-%H%M")
        output_file = f"{timestamp}-missing_logs_report.xlsx"
        df.to_excel(output_file, index=False)
        print(f"\nData written to {output_file}")
    else:
        print("\nAll collectors have the required logs.")
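The suffix-matching logic can be sanity-checked in isolation; here is a standalone copy of the check run against mocked source records (the source names are invented):

```python
REQUIRED = ['_security_events', '_linux_system_events', 'cron_logs', 'dnf_rpm_logs']

def missing_for(sources):
    # Mirrors check_required_logs: a LocalFile source counts toward a
    # required log when its name ends with that suffix
    found = {key: False for key in REQUIRED}
    for source in sources:
        if source.get('sourceType') == 'LocalFile':
            for key in REQUIRED:
                if source.get('name', '').endswith(key):
                    found[key] = True
    return [key for key, present in found.items() if not present]

# Mocked collector sources -- only two of the four required logs defined
sample = [
    {'sourceType': 'LocalFile', 'name': 'linux03_security_events'},
    {'sourceType': 'LocalFile', 'name': 'linux03_cron_logs'},
]
print(missing_for(sample))  # → ['_linux_system_events', 'dnf_rpm_logs']
```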
We upgraded all of our internal servers to Fedora 41 after a power outage yesterday, and had a number of issues to resolve: the liblockdev legacy config reverted so OpenHAB could no longer use USB serial devices, the physical server was swapping 11GB of data even though it had 81GB of memory free, and our Gerbera installation requires libspdlog.so.1.12, which was updated to version 1.14 with the Fedora upgrade.
The last issue was more challenging to figure out because evidently DNF is now DNF5 and instead of throwing an error like “hey, new version dude! Use the new syntax” when you use an old command to list what is installed … it just says “No matching packages to list”. Like there are no packages installed? Since I’m using bash, openssh, etc … that’s not true.
Luckily, the new syntax works just fine: dnf repoquery --installed
Also:
dnf5 repoquery --available
dnf5 repoquery --userinstalled
With 81 GB available, there’s no good reason to be using 11GB of swap!
2024-12-30 09:11:16 [root@FPP01 /var/named/chroot/etc/]# free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        44Gi        58Gi       1.8Mi        23Gi        81Gi
Swap:           11Gi        11Gi       121Mi
Memory Stats
2024-12-30 09:11:22 [root@FPP01 /var/named/chroot/etc/]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
 r  b     swpd     free    buff    cache    si   so    bi   bo    in cs us sy id wa st gu
 0  1 12458476 43626964  710656 42012692  1354 1895 51616 8431 16514  6  1  2 88  7  0  3
How to see what is using swap
2024-12-30 09:11:45 [root@FPP01 /var/named/chroot/etc/]# smem -rs swap
  PID User     Command                          Swap      USS      PSS      RSS
 2903 qemu     /usr/bin/qemu-system-x86_64   4821840  3638976  3643669  3675164
 3579 qemu     /usr/bin/qemu-system-x86_64   2282316  6237632  6242508  6275260
 3418 qemu     /usr/bin/qemu-system-x86_64   2182844  2063528  2068041  2098292
 3331 qemu     /usr/bin/qemu-system-x86_64   1398728  7078176  7082951  7115368
 3940 qemu     /usr/bin/qemu-system-x86_64   1020944  4258144  4262757  4294080
 3622 qemu     /usr/bin/qemu-system-x86_64    525272  7942284  7947159  7979876
25088 qemu     /usr/bin/qemu-system-x86_64    160456  8298900  8305130  8342252
 2563 root     /usr/bin/python3 -Es /usr/s     11696     1332     4050    10872
 2174 squid    (squid-1) --kid squid-1 --f      6944     4312     5200    10832
 1329 root     /sbin/mount.ntfs-3g /dev/sd      5444    29636    29642    30224
24593 root     /usr/sbin/smbd --foreground      4940    16004    19394    31712
 2686 root     /usr/sbin/libvirtd --timeou      4172    28704    30096    37964
 2159 root     /usr/sbin/squid --foregroun      3340      152      763     4532
 5454 root     /usr/sbin/smbd --foreground      3180      212      496     3552
 2134 root     /usr/sbin/smbd --foreground      3008      208      598     4368
 2157 root     /usr/sbin/smbd --foreground      2992      136      245     1504
 2156 root     /usr/sbin/smbd --foreground      2912      212      304     1648
17963 root     /usr/sbin/smbd --foreground      2880      480     1603     8964
 1631 named    /usr/sbin/named -u named -c      2820    60696    60896    63892
 1424 polkitd  /usr/lib/polkit-1/polkitd -      2700      704      913     3864
 4271 root     /usr/sbin/smbd --foreground      2644     1996     3106     8408
    1 root     /usr/lib/systemd/systemd --      2220     4680     5826     9512
 2766 root     /usr/sbin/virtlogd               1972      112      873     4548
30736 root     /usr/sbin/smbd --foreground      1864      884     3861    15756
31077 root     /usr/sbin/smbd --foreground      1844     1044     4189    16368
 2453 root     /usr/lib/systemd/systemd --      1656      824     1707     4588
 1446 root     /usr/sbin/NetworkManager --      1636     4748     5593    10348
 1413 dbus     dbus-broker --log 4 --contr      1288      964     1072     2000
21904 root     sshd-session: root@pts/9         1028      644     1287     5412
 1402 dbus     /usr/bin/dbus-broker-launch       968      456      571     1872
21900 root     sshd-session: root [priv]         848      488     1911     8588
Voilà! Well, install dnf-utils and then …
[lisa@linux03 lisa]# repoquery --list gerbera
Last metadata expiration check: 0:00:48 ago on Mon 27 Dec 2024 12:04:05 PM EST.
/etc/gerbera
/etc/gerbera/config.xml
/etc/gerbera/gerbera.db
/etc/gerbera/gerbera.html
/etc/logrotate.d
/etc/logrotate.d/gerbera
/usr/bin/gerbera
/usr/lib/.build-id
/usr/lib/.build-id/8e
/usr/lib/.build-id/8e/cba8f3a7f9db93d01a462f31a8270f1c8ff975
/usr/lib/systemd/system/gerbera.service
/usr/lib/sysusers.d/gerbera.conf
/usr/share/doc/gerbera
/usr/share/doc/gerbera/AUTHORS
/usr/share/doc/gerbera/CONTRIBUTING.md
/usr/share/doc/gerbera/ChangeLog.md
/usr/share/licenses/gerbera
/usr/share/licenses/gerbera/LICENSE.md
/usr/share/man/man1/gerbera.1.gz
/var/log/gerbera
A single-line command to retrieve the secrets from a namespace and decode the values:
k8shost:~ # kubectl get secret ca-secret -n mynamespace -o json | jq -r '.data | to_entries[] | "\(.key): \(.value | @base64d)"'
ACCESS_SECRET: X7aB-52p-p2y
API_USER: PM_USER
BASE_URL: https://apiserver.example.com/api/
COMPONENT_ID: 955_18
CPU_MEM_ID: 955_17
INTERFACE_ID: 955_16
INVENTORY_ID: 955_5
RAW_ID: 955_19
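For hosts without jq, the same decode is easy in Python; a sketch (the sample secret below is fabricated for illustration):

```python
import base64
import json

def decode_secret_data(secret_json):
    """Decode every value in a Kubernetes secret's .data map."""
    data = json.loads(secret_json).get('data', {})
    return {key: base64.b64decode(value).decode('utf-8') for key, value in data.items()}

# Simulated `kubectl get secret ... -o json` output (made-up values)
secret = json.dumps({'data': {'API_USER': base64.b64encode(b'PM_USER').decode('ascii')}})
print(decode_secret_data(secret))  # → {'API_USER': 'PM_USER'}
```

Feed it real data by reading the kubectl JSON from stdin, e.g. `kubectl get secret ca-secret -n mynamespace -o json | python3 decode_secret.py` with `sys.stdin.read()` as the input.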