Microsoft Word – Custom Table of Contents to Omit Heading 1

I know I could sort this by adjusting which “style” I use when I write documents … but I like Heading 1 as the title, Heading 2 as a section title, and so on. To create a table of contents below the title line that doesn’t begin with the title itself and nest everything else under that one top item, create a custom table of contents:

From the References ribbon, select Table of Contents > Custom Table of Contents, then select “Options”

And simply re-map the styles to TOC levels – Heading 1 doesn’t get a level. Heading 2 becomes 1, Heading 3 becomes 2, and Heading 4 becomes 3.

Voila – now I’ve got a bunch of “top level” items from Heading 2.
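
If you prefer to skip the dialog, the same mapping can be made directly in the TOC field code (Alt+F9 toggles field code view); a minimal sketch using the \t switch, which maps style names to TOC levels:

{ TOC \t "Heading 2,1,Heading 3,2,Heading 4,3" }

Select the field and press F9 to rebuild the table after editing.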


Sumo Logic: Creating Roles via API

This script creates very basic roles with no extra capabilities and restricts each role to viewing only the indicated source category’s data.

################################################################################
# This script reads an Excel file containing role data, then uses the Sumo Logic
# API to create roles based on the data. It checks each row for a role name and
# uses the source category to set data filters. The script requires a config.py
# file with access credentials.
################################################################################
import pandas as pd
import requests
import json
from config import access_id, access_key  # Import credentials from config.py

# Path to Excel file
excel_file_path = 'NewRoles.xlsx'

# Base URL for Sumo Logic API
base_url = 'https://api.sumologic.com/api/v1'

################################################################################
# Function to create a new role using the Sumo Logic API.
# 
# Args:
#     role_name (str): The name of the role to create.
#     role_description (str): The description of the role.
#     source_category (str): The source category to restrict the role to.
# 
# Returns:
#     None. Prints the status of the API call.
################################################################################
def create_role(role_name, role_description, source_category):
    
    url = f'{base_url}/roles'

    # Role payload
    data_filter = f'_sourceCategory={source_category}'
    payload = {
        'name': role_name,
        'description': role_description,
        'logAnalyticsDataFilter': data_filter,
        'auditDataFilter': data_filter,
        'securityDataFilter': data_filter
    }

    # Headers for the request
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }

    # Debugging line
    print(f"Attempting to create role: '{role_name}' with description: '{role_description}' and filter: '{data_filter}'")

    # Make the POST request to create a new role
    response = requests.post(url, auth=(access_id, access_key), headers=headers, data=json.dumps(payload))

    # Check the response
    if response.status_code == 201:
        print(f'Role {role_name} created successfully.')
    else:
        print(f'Failed to create role {role_name}. Status Code: {response.status_code}')
        print('Response:', response.text)

################################################################################
# Reads an Excel file and processes each row to extract role information and
# create roles using the Sumo Logic API.
# 
# Args:
#     file_path (str): The path to the Excel file containing role data.
# 
# Returns:
#     None. Processes the file and attempts to create roles based on the data.
################################################################################
def process_excel(file_path):
    # Load the spreadsheet
    df = pd.read_excel(file_path, engine='openpyxl')

    # Print column names to help debug and find correct ones
    print("Columns found in Excel:", df.columns)

    # Iterate over each row in the DataFrame
    for index, row in df.iterrows():
        role_name = row['Role Name']  # Column containing the role name
        source_category = row['Source Category']  # Column containing the source category to which the role is restricted

        # Only create a role if the role name is not null
        if pd.notnull(role_name):
            role_description = f'Provides access to source category {source_category}'
            create_role(role_name, role_description, source_category)


# Process the Excel file
process_excel(excel_file_path)
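
The config.py that the script imports just needs to define the two credential values; a minimal sketch, with placeholder values rather than real credentials:

# config.py -- Sumo Logic API credentials (placeholder values)
access_id = 'your-access-id'
access_key = 'your-access-key'

The spreadsheet itself just needs “Role Name” and “Source Category” columns; rows with an empty role name are skipped.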

Old-Fashioned Butter Mint Recipe

Ingredients

¼ cup unsalted butter, softened
¼ teaspoon salt
3 ¼ cups confectioners’ sugar
⅓ cup sweetened condensed milk
½ teaspoon peppermint extract (taste as you add it, don’t overdo it!)
food coloring, optional

Instructions

  • In the bowl of a stand mixer fitted with the paddle attachment, combine the butter and salt and beat for 1 minute on medium-high speed.
  • Add the 3 ¼ cups confectioners’ sugar, the condensed milk, and the peppermint extract, and beat on medium-low speed until a dough forms. If the dough seems wet, add additional confectioners’ sugar until it comes together (I use 3 ½ cups total). The dough will be crumbly but will come together when pinched and squeezed into a ball.
  • Taste the dough. If you want a more intense mint flavor, add additional mint extract to taste.
  • Remove dough from the mixer, separate it into 1 to 4 smaller balls, and add one ball back into the mixer. Add the food coloring of your choice to the ball by squirting the droplets on top of the dough (careful when you turn on the mixer), and paddle on low speed until coloring is well-blended.
  • After the dough has been colored, either wrap it with plastic wrap and place it in an airtight container in the refrigerator to be rolled out later or roll it immediately.
  • To shape the butter mints, place a golf-ball sized amount of dough in your hands and roll dough into long thin cylinders about 1 centimeter wide. Place cylinders on countertop and with a pizza cutter (or knife – be careful of your counter), slice cylinders into bite-sized pieces, approximately 1 centimeter long.
  • Store mints in an airtight container in the refrigerator where they will keep for many weeks.

Parsing HAR File

I am working with a new application that doesn’t seem to like it when a person has multiple roles assigned to them … but first I need to prove that’s the problem. Luckily, your browser receives the SAML response, so you can actually see the Role entitlements being sent. I just need to parse them out of the big 80 meg file that a simple “go here and log on” generates!

To gather data to be parsed, open the Dev Tools for the browser tab. Click the settings gear icon and select “Persist Logs”. Reproduce the scenario – navigate to the site, log in. Then save the dev tools session as a HAR file. The following Python script will analyze the file, extract any SAML response tokens, and print them in a human-readable format.

################################################################################
# This script reads a HAR file, identifies HTTP requests and responses containing
# SAML tokens, and decodes "SAMLResponse" values.
#
# The decoded SAML assertions are printed out for inspection in a readable format.
#
# Usage:
# - Update the str_har_file_path with your HAR file
################################################################################
# Editable Variables
str_har_file_path = 'SumoLogin.har'

# Imports
import json
import base64
import urllib.parse
from xml.dom.minidom import parseString

################################################################################
#  This function decodes SAML responses found within the HAR capture
# Args: 
#   saml_response_encoded(str): URL encoded, base-64 encoded SAML response
# Returns:
#   string: decoded string
################################################################################
def decode_saml_response(saml_response_encoded):
    url_decoded = urllib.parse.unquote(saml_response_encoded)
    base64_decoded = base64.b64decode(url_decoded).decode('utf-8')
    return base64_decoded

################################################################################
#  This function finds and decodes SAML tokens from HAR entries.
#
# Args:
#   entries(list): A list of HTTP request and response entries from a HAR file.
#
# Returns:
#   list: List of decoded SAML assertion response strings.
################################################################################
def find_saml_tokens(entries):
    saml_tokens = []
    for entry in entries:
        request = entry['request']
        response = entry['response']
        
        if request['method'] == 'POST':
            request_body = request.get('postData', {}).get('text', '')
            
            if 'SAMLResponse=' in request_body:
                saml_response_encoded = request_body.split('SAMLResponse=')[1].split('&')[0]
                saml_tokens.append(decode_saml_response(saml_response_encoded))
        
        response_body = response.get('content', {}).get('text', '')
        
        if response.get('content', {}).get('encoding') == 'base64':
            response_body = base64.b64decode(response_body).decode('utf-8', errors='ignore')
        
        if 'SAMLResponse=' in response_body:
            saml_response_encoded = response_body.split('SAMLResponse=')[1].split('&')[0]
            saml_tokens.append(decode_saml_response(saml_response_encoded))
    
    return saml_tokens

################################################################################
#  This function converts an XML string into a formatted, multi-line string
# with hierarchical indentation
#
# Args:
#   xml_string (str): The XML string to be pretty-printed.
#
# Returns:
#   str: A pretty-printed version of the XML string.
################################################################################
def pretty_print_xml(xml_string):
    dom = parseString(xml_string)
    return dom.toprettyxml(indent="  ")

# Load HAR file with UTF-8 encoding
with open(str_har_file_path, 'r', encoding='utf-8') as file:
    har_data = json.load(file)

entries = har_data['log']['entries']

saml_tokens = find_saml_tokens(entries)
for token in saml_tokens:
    print("Decoded SAML Token:")
    print(pretty_print_xml(token))
    print('-' * 80)
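
Since the whole point is to check the role entitlements being sent, it can help to pull just the attribute values out of each decoded assertion rather than reading the full XML. A minimal sketch that could be appended to the script above; the attribute name 'Role' is an assumption, so check the pretty-printed output for whatever your IdP actually calls it:

def list_attribute_values(xml_string, attribute_name='Role'):
    # Collect the values of a single SAML attribute from a decoded assertion.
    # 'Role' is a hypothetical attribute name -- substitute the name your IdP
    # actually sends (visible in the pretty-printed output above).
    dom = parseString(xml_string)
    values = []
    for attribute in dom.getElementsByTagNameNS('*', 'Attribute'):
        if attribute.getAttribute('Name') == attribute_name:
            for value in attribute.getElementsByTagNameNS('*', 'AttributeValue'):
                values.append(value.firstChild.nodeValue if value.firstChild else '')
    return values

for token in saml_tokens:
    print('Attribute values:', list_attribute_values(token))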

Exchange 2013 DNS Oddity

Not that anyone hosts their own Exchange server anymore … but we had a pretty strange issue pop up. Exchange has been, for a dozen years, configured to use the system DNS servers. The system can still use DNS just fine … but the Exchange transport failed to query DNS and just queued messages.

PS C:\scripts> Get-Queue -Identity "EXCHANGE01\3" | Format-List *

DeliveryType                     : SmtpDeliveryToMailbox
NextHopDomain                    : mailbox database 1440585757
TlsDomain                        :
NextHopConnector                 : 1cdb1e55-a129-46bc-84ef-2ddae27b808c
Status                           : Retry
MessageCount                     : 7
LastError                        : 451 4.4.0 DNS query failed. The error was: DNS query failed with error ErrorRetry
RetryCount                       : 2
LastRetryTime                    : 1/4/2025 12:20:04 AM
NextRetryTime                    : 1/4/2025 12:25:04 AM
DeferredMessageCount             : 0
LockedMessageCount               : 0
MessageCountsPerPriority         : {0, 0, 0, 0}
DeferredMessageCountsPerPriority : {0, 7, 0, 0}
RiskLevel                        : Normal
OutboundIPPool                   : 0
NextHopCategory                  : Internal
IncomingRate                     : 0
OutgoingRate                     : 0
Velocity                         : 0
QueueIdentity                    : EXCHANGE01\3
PriorityDescriptions             : {High, Normal, Low, None}
Identity                         : EXCHANGE01\3
IsValid                          : True
ObjectState                      : New

Yup, still configured to use the SYSTEM’s DNS:

PS C:\scripts> Get-TransportService | Select-Object Name, *DNS*

Name                      : EXCHANGE01
ExternalDNSAdapterEnabled : True
ExternalDNSAdapterGuid    : 2fdebb30-c710-49c9-89fb-61455aa09f62
ExternalDNSProtocolOption : Any
ExternalDNSServers        : {}
InternalDNSAdapterEnabled : True
InternalDNSAdapterGuid    : 2fdebb30-c710-49c9-89fb-61455aa09f62
InternalDNSProtocolOption : Any
InternalDNSServers        : {}
DnsLogMaxAge              : 7.00:00:00
DnsLogMaxDirectorySize    : 200 MB (209,715,200 bytes)
DnsLogMaxFileSize         : 10 MB (10,485,760 bytes)
DnsLogPath                :
DnsLogEnabled             : True


I had to hard-code the DNS servers to the transport and restart the service:

PS C:\scripts> Set-TransportService EXCHANGE01 -InternalDNSServers 10.5.5.85,10.5.5.55,10.5.5.1
PS C:\scripts> Set-TransportService EXCHANGE01 -ExternalDNSServers 10.5.5.85,10.5.5.55,10.5.5.1

PS C:\scripts> Restart-Service MSExchangeTransport
WARNING: Waiting for service 'Microsoft Exchange Transport (MSExchangeTransport)' to stop...
WARNING: Waiting for service 'Microsoft Exchange Transport (MSExchangeTransport)' to start...

PS C:\scripts> Get-TransportService | Select-Object Name, InternalDNSServers, ExternalDNSServers

Name       InternalDNSServers               ExternalDNSServers
----       ------------------               ------------------
EXCHANGE01 {10.5.5.1, 10.5.5.55, 10.5.5.85} {10.5.5.85, 10.5.5.55, 10.5.5.1}


Voila, messages started popping into my mailbox.
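
If you want to confirm ahead of time that the servers you are about to hard-code actually answer queries, a quick out-of-band check is easy enough. A minimal sketch using the dnspython package (pip install dnspython); the server IPs are the ones from this post, and example.com is just a placeholder test name:

import dns.resolver  # from the dnspython package

resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver config
for server in ['10.5.5.85', '10.5.5.55', '10.5.5.1']:
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve('example.com', 'A', lifetime=5)
        print(f'{server}: OK, {len(answer)} record(s)')
    except Exception as e:
        print(f'{server}: FAILED - {e}')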

Fedora 41, KVM, QEMU, and the Really (REALLY!) Bad Performance

Ever since we upgraded to Fedora 41, we have been having horrible problems with our Exchange server. It will drop off the network for half an hour at a time. I cannot even ping the VM from the physical server. Some network captures show there’s no response to the ARP request.

Evidently, the VM configuration contains a machine type that doesn’t automatically update. We are using PC-Q35 as the chipset … and 4.1 was the current version when we built our VMs. That version, however, has since been deprecated, which you can see by asking virsh what capabilities it has:


2025-01-02 23:17:26 [lisa@linux01 /var/log/libvirt/qemu/]# virsh capabilities | grep pc-q35
      <machine maxCpus='288' deprecated='yes'>pc-q35-5.2</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-4.2</machine>
      <machine maxCpus='255' deprecated='yes'>pc-q35-2.7</machine>
      <machine maxCpus='4096'>pc-q35-9.1</machine>
      <machine canonical='pc-q35-9.1' maxCpus='4096'>q35</machine>
      <machine maxCpus='288'>pc-q35-7.1</machine>
      <machine maxCpus='1024'>pc-q35-8.1</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-6.1</machine>
      <machine maxCpus='255' deprecated='yes'>pc-q35-2.4</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-2.10</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-5.1</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-2.9</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-3.1</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-4.1</machine>
      <machine maxCpus='255' deprecated='yes'>pc-q35-2.6</machine>
      <machine maxCpus='4096'>pc-q35-9.0</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-2.12</machine>
      <machine maxCpus='288'>pc-q35-7.0</machine>
      <machine maxCpus='288'>pc-q35-8.0</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-6.0</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-4.0.1</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-5.0</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-2.8</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-3.0</machine>
      <machine maxCpus='288'>pc-q35-7.2</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-4.0</machine>
      <machine maxCpus='1024'>pc-q35-8.2</machine>
      <machine maxCpus='288'>pc-q35-6.2</machine>
      <machine maxCpus='255' deprecated='yes'>pc-q35-2.5</machine>
      <machine maxCpus='288' deprecated='yes'>pc-q35-2.11</machine>

Or filtering out the deprecated ones …

2025-01-02 23:16:50 [lisa@linux01 /var/log/libvirt/qemu/]# virsh capabilities | grep pc-q35 | grep -v "deprecated='yes'"
      <machine maxCpus='4096'>pc-q35-9.1</machine>
      <machine canonical='pc-q35-9.1' maxCpus='4096'>q35</machine>
      <machine maxCpus='288'>pc-q35-7.1</machine>
      <machine maxCpus='1024'>pc-q35-8.1</machine>
      <machine maxCpus='4096'>pc-q35-9.0</machine>
      <machine maxCpus='288'>pc-q35-7.0</machine>
      <machine maxCpus='288'>pc-q35-8.0</machine>
      <machine maxCpus='288'>pc-q35-7.2</machine>
      <machine maxCpus='1024'>pc-q35-8.2</machine>
      <machine maxCpus='288'>pc-q35-6.2</machine>

So I shut down my Exchange server again (again, again), used “virsh edit exchange01”, and changed

  <os>
    <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
    <boot dev='hd'/>
  </os>

to

  <os>
    <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
    <boot dev='hd'/>
  </os>

And started my VM. It took about an hour to boot, and it absolutely hogged the physical server’s disk resources; it was the top listing in iotop -o.

But then … all of the VMs dropped off of iotop. The console login I had attempted earlier had finally gone through and was sitting there waiting for me. My web mail, which had failed to load all day, finally loaded my e-mail. And messages that had been queued for delivery had all come through.

The load on our physical server dropped from 30 to 1. Everything became responsive. And Exchange has been online for a good thirty minutes now.
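
Since any VM built a few releases back can be pinned to a machine type that has since been deprecated, it is worth checking every domain at once rather than finding them one outage at a time. A minimal sketch that shells out to virsh (assumes it runs as a user with libvirt access; the crude string parsing is just for illustration):

import subprocess

# Machine types the current QEMU still supports (not flagged deprecated)
caps = subprocess.run(['virsh', 'capabilities'], capture_output=True, text=True).stdout
supported = set()
for line in caps.splitlines():
    if '<machine' in line and "deprecated='yes'" not in line:
        supported.add(line.split('>')[1].split('<')[0])

# Compare each defined domain's machine type against that list
domains = subprocess.run(['virsh', 'list', '--all', '--name'],
                         capture_output=True, text=True).stdout.split()
for domain in domains:
    xml = subprocess.run(['virsh', 'dumpxml', domain],
                         capture_output=True, text=True).stdout
    for line in xml.splitlines():
        if '<type ' in line and 'machine=' in line:
            machine = line.split("machine='")[1].split("'")[0]
            flag = '' if machine in supported else '   <-- deprecated or unknown'
            print(f'{domain}: {machine}{flag}')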

Sumo Logic: Validating Collector Data Sources via API

This script is an example of using the Sumo Logic API to retrieve collector details. This particular script looks for Linux servers and validates that each collector has the desired log sources defined. Those that do not contain all of the desired sources are flagged for further investigation.

import requests
from requests.auth import HTTPBasicAuth
import pandas as pd
from config import access_id, access_key  # Import your credentials from config.py

# Base URL for Sumo Logic API
base_url = 'https://api.sumologic.com/api/v1'

def get_all_collectors():
    """Retrieve all collectors with pagination support."""
    collectors = []
    limit = 1000  # Adjust as needed; check API docs for max limit
    offset = 0

    while True:
        url = f'{base_url}/collectors?limit={limit}&offset={offset}'
        response = requests.get(url, auth=HTTPBasicAuth(access_id, access_key))
        if response.status_code == 200:
            result = response.json()
            collectors.extend(result.get('collectors', []))
            if len(result.get('collectors', [])) < limit:
                break  # Exit the loop if we received fewer than the limit, meaning it's the last page
            offset += limit
        else:
            print('Error fetching collectors:', response.status_code, response.text)
            break

    return collectors

def get_sources(collector_id):
    """Retrieve sources for a specific collector."""
    url = f'{base_url}/collectors/{collector_id}/sources'
    response = requests.get(url, auth=HTTPBasicAuth(access_id, access_key))
    if response.status_code == 200:
        sources = response.json().get('sources', [])
        # print(f"Log Sources for collector {collector_id}: {sources}")
        return sources
    else:
        print(f'Error fetching sources for collector {collector_id}:', response.status_code, response.text)
        return []

def check_required_logs(sources):
    """Check if the required logs are present in the sources."""
    required_logs = {
        '_security_events': False,
        '_linux_system_events': False,
        'cron_logs': False,
        'dnf_rpm_logs': False
    }

    for source in sources:
        if source['sourceType'] == 'LocalFile':
            name = source.get('name', '')
            for key in required_logs.keys():
                if name.endswith(key):
                    required_logs[key] = True

    # Determine missing logs
    missing_logs = {log: "MISSING" if not present else "" for log, present in required_logs.items()}
    return missing_logs

# Main execution
if __name__ == "__main__":
    collectors = get_all_collectors()
    report_data = []

    for collector in collectors:
        # Check if the collector's osName is 'Linux'
        if collector.get('osName') == 'Linux':
            collector_id = collector['id']
            collector_name = collector['name']
            print(f"Checking Linux Collector: ID: {collector_id}, Name: {collector_name}")

            sources = get_sources(collector_id)
            missing_logs = check_required_logs(sources)
            if any(missing_logs.values()):
                report_entry = {
                    "Collector Name": collector_name,
                    "_security_events": missing_logs['_security_events'],
                    "_linux_system_events": missing_logs['_linux_system_events'],
                    "cron_logs": missing_logs['cron_logs'],
                    "dnf_rpm_logs": missing_logs['dnf_rpm_logs']
                }
                # print(f"Missing logs for collector {collector_name}: {report_entry}")
                report_data.append(report_entry)

    # Create a DataFrame and write to Excel
    df = pd.DataFrame(report_data, columns=[
        "Collector Name", "_security_events", "_linux_system_events", "cron_logs", "dnf_rpm_logs"
    ])

    # Generate the filename with current date and time
    if not df.empty:
        timestamp = pd.Timestamp.now().strftime("%Y%m%d-%H%M")
        output_file = f"{timestamp}-missing_logs_report.xlsx"
        df.to_excel(output_file, index=False)
        print(f"\nData written to {output_file}")
    else:
        print("\nAll collectors have the required logs.")
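
The set of sources being checked is driven entirely by the required_logs dict in check_required_logs; each key is matched against the end of a local-file source's name. To validate additional sources, just add keys. A sketch with one hypothetical extra entry:

required_logs = {
    '_security_events': False,
    '_linux_system_events': False,
    'cron_logs': False,
    'dnf_rpm_logs': False,
    'httpd_access_logs': False   # hypothetical additional source name
}

The report columns at the bottom of the script would need the same key added.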

Fedora 41 – Using DNF to List Installed Packages

We upgraded all of our internal servers to Fedora 41 after a power outage yesterday and had a number of issues to resolve: the liblockdev legacy config reverted so OpenHAB could no longer use USB serial devices, the physical server was swapping 11GB of data even though it had 81GB of memory free, and our Gerbera installation requires libspdlog.so.1.12, which was updated to version 1.14 with the Fedora upgrade.

The last issue was more challenging to figure out because, evidently, DNF is now DNF5 … and instead of throwing an error like “hey, new version dude! Use the new syntax” when you use an old command to list what is installed, it just says “No matching packages to list”. Like there are no packages installed? Since I’m using bash, openssh, etc. … that’s not true.

Luckily, the new syntax works just fine: dnf repoquery --installed

Also:

dnf5 repoquery --available
dnf5 repoquery --userinstalled

Fedora 41 — A LOT of Swapping

With 81GB available, there’s no good reason to be using 11GB of swap!

2024-12-30 09:11:16 [root@FPP01 /var/named/chroot/etc/]# free -h
               total        used        free      shared  buff/cache   available
Mem:           125Gi        44Gi        58Gi       1.8Mi        23Gi        81Gi
Swap:           11Gi        11Gi       121Mi

Memory Stats

2024-12-30 09:11:22 [root@FPP01 /var/named/chroot/etc/]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
 r  b     swpd     free   buff    cache   si   so    bi    bo    in    cs us sy id wa st gu
 0  1 12458476 43626964 710656 42012692 1354 1895 51616  8431 16514     6  1  2 88  7  0  3

How to see what is using swap

2024-12-30 09:11:45 [root@FPP01 /var/named/chroot/etc/]# smem -rs swap
PID User     Command                         Swap      USS      PSS      RSS
2903 qemu     /usr/bin/qemu-system-x86_64  4821840  3638976  3643669  3675164
3579 qemu     /usr/bin/qemu-system-x86_64  2282316  6237632  6242508  6275260
3418 qemu     /usr/bin/qemu-system-x86_64  2182844  2063528  2068041  2098292
3331 qemu     /usr/bin/qemu-system-x86_64  1398728  7078176  7082951  7115368
3940 qemu     /usr/bin/qemu-system-x86_64  1020944  4258144  4262757  4294080
3622 qemu     /usr/bin/qemu-system-x86_64   525272  7942284  7947159  7979876
25088 qemu     /usr/bin/qemu-system-x86_64   160456  8298900  8305130  8342252
2563 root     /usr/bin/python3 -Es /usr/s    11696     1332     4050    10872
2174 squid    (squid-1) --kid squid-1 --f     6944     4312     5200    10832
1329 root     /sbin/mount.ntfs-3g /dev/sd     5444    29636    29642    30224
24593 root     /usr/sbin/smbd --foreground     4940    16004    19394    31712
2686 root     /usr/sbin/libvirtd --timeou     4172    28704    30096    37964
2159 root     /usr/sbin/squid --foregroun     3340      152      763     4532
5454 root     /usr/sbin/smbd --foreground     3180      212      496     3552
2134 root     /usr/sbin/smbd --foreground     3008      208      598     4368
2157 root     /usr/sbin/smbd --foreground     2992      136      245     1504
2156 root     /usr/sbin/smbd --foreground     2912      212      304     1648
17963 root     /usr/sbin/smbd --foreground     2880      480     1603     8964
1631 named    /usr/sbin/named -u named -c     2820    60696    60896    63892
1424 polkitd  /usr/lib/polkit-1/polkitd -     2700      704      913     3864
4271 root     /usr/sbin/smbd --foreground     2644     1996     3106     8408
1 root     /usr/lib/systemd/systemd --     2220     4680     5826     9512
2766 root     /usr/sbin/virtlogd              1972      112      873     4548
30736 root     /usr/sbin/smbd --foreground     1864      884     3861    15756
31077 root     /usr/sbin/smbd --foreground     1844     1044     4189    16368
2453 root     /usr/lib/systemd/systemd --     1656      824     1707     4588
1446 root     /usr/sbin/NetworkManager --     1636     4748     5593    10348
1413 dbus     dbus-broker --log 4 --contr     1288      964     1072     2000
21904 root     sshd-session: root@pts/9        1028      644     1287     5412
1402 dbus     /usr/bin/dbus-broker-launch      968      456      571     1872
21900 root     sshd-session: root [priv]        848      488     1911     8588