Category: Technology

Basic Security Or Paranoia

We have Amazon’s smart speakers, so I don’t know if this is true for Google or Apple digital assistants. But the Alexa series of speakers has a default wake word and several non-default options you can elect to use instead. Never use the default — that’s a good general security maxim. We had other factors in our wake word decision – a friend of Scott’s has a daughter whose name is quite close to Alexa, and I foresaw the speaker going crazy whenever they spoke about her. But the fact is, from day 0 of the device … I expected advertisers to incorporate “Alexa, give me more info on product XYZ” into their ads. Aaaand now we have South Park season 21’s first episode.

This is just goofy stuff – maybe words you don’t want replaying at inopportune moments, maybe an alarm way too early in the morning for you. Remember TV commercials that asked kids to hold the telephone handset up to the screen and then played DTMF tones to dial the order hotline? Alexa, call 800-###-####. Hell, they could order Amazon products on your credit card. Something like ShopSafe (a unique card number with a low limit that actually rejects purchases over that limit) can be tied to your account. It’s extra work to keep updating the card on your account, but I’d rather Alexa buy $12 of something I didn’t want than $250. Beyond that, our speakers do not have unfettered access to my credit card – there’s a PIN required to make purchases. I’m sure that won’t stop your kid who overhears the code from using it, but it prevents television programs, radio shows, and party-goers from buying random junk as a joke.

Checking Supported TLS Versions and Ciphers

There have been a number of SSL/TLS vulnerabilities (and there are deprecated ciphers that should be unavailable, especially when transiting particularly sensitive information). On Linux distributions, nmap includes a script that enumerates the supported SSL/TLS versions and, per version, the supported ciphers.

[lisa@linuxbox ~]# nmap -P0 -p 25 --script +ssl-enum-ciphers myhost.domain.ccTLD

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-13 11:36 EDT
Nmap scan report for myhost.domain.ccTLD (#.#.#.#)
Host is up (0.00012s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
25/tcp open smtp
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.1:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
|_ least strength: A

Nmap done: 1 IP address (1 host up) scanned in 144.67 seconds
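The same script works against any TLS-wrapped service; just change the port. To check an HTTPS server, for example (the hostname here is a placeholder):

nmap -Pn -p 443 --script +ssl-enum-ciphers www.example.com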

Security Standards For Financial Information

A long time ago, processors of credit card information didn’t have any standards. And they’d lose your data. People didn’t like that, and some type of regulation had to be put on the industry. The credit card processors got together and wrote their own regulations – the PCI standards. They were a lot more concerned with the regulation’s impact on profitability than a government regulator would have been. The PCI standards were fairly effective.

And now one of the credit bureaus has lost a huge amount of personal data – including Social Security numbers and account numbers that I don’t understand being stored as anything other than a one-way hash in the first place. But the bigger question is how these credit bureaus are able to operate with standards that are less strict than the industry-association generated PCI standards. My guess is that there will be a credit bureau industry association writing security standards in the next week or so. If there isn’t an industry association forming to ensure my Social Security number and account numbers aren’t stored in clear text on web-accessible servers at credit bureaus … I should hope the government would intervene and mandate a certain level of security.

Equifax Hack

First of all, saying half the population of the United States has had their personal information stolen might be accurate, but it’s the friendlier marketing spin. 2016 numbers had 249,485,228 adults in the United States, so that’s 57% of people over 18 who have had their personal data stolen. And there are people with no credit history. It’s a bit of a thing when you first want to rent a flat or get a credit card … you have no credit history, and can’t get credit until you have one. Last I read, it was something like 14% of adults who have no credit record — meaning Equifax gave up information on 66% of the credit-having population.

Leaving aside the marketing spin on numbers, though, why the hell is a credit bureau storing my personal information in a retrievable format instead of a one-way hash? Performance, I assume … so I guess my question really is why were a couple of clock cycles considered more important than the security of my data? Some of the data is probably maintained in clear text because they use heuristic matching to link incoming data to entities. I’m guessing my info comes in with a name, address, creditor name, and account number. And they’ve got to be able to match up the thirty different iterations of my address to ingest the data. But there’s no reason for the account number to be stored unhashed – store the last two or three digits in a new column for display (Your XYZ account ending in ###). And there’s sure as hell no reason for the SSN to be stored unhashed – even if they’d have to store the full one hashed and the last four in another hash because some data doesn’t come in with full SSNs.
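As a back-of-the-napkin illustration of what I mean (a minimal sketch in Perl, my assumption of how such a scheme could look rather than anything a bureau actually does): hash the full number with a per-record salt for matching, and keep only the trailing digits for display.

use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

my $strAccountNumber = '1234567890123456';        # hypothetical account number
my $strSalt          = 'per-record-random-salt';  # in practice, generated and stored per record

# What gets stored: a one-way hash for matching, plus a couple of digits for display
my $strAccountHash = sha256_hex( $strSalt . $strAccountNumber );
my $strLastDigits  = substr( $strAccountNumber, -3 );

print "Stored hash:    $strAccountHash\n";
print "Display string: Your XYZ account ending in $strLastDigits\n";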

ZoneMinder After Upgrade

We recently updated from ZoneMinder 1.30 to 1.34 – easy as can be, ran the DB update script and everything came right online. Except … our home automation system hasn’t been able to access the system. OpenHAB reports that the bridge is offline. And we’re getting 404 errors in all of the /zm/api calls in access_log.

Turns out the API was offline because when the new package came down … there was a zoneminder.conf.rpmnew in the Apache conf.d directory. Can’t even say I found this intentionally – I wanted to check the Apache config file to see if it had anything about the api directory, did a directory listing, and said oooooh!

[lisa@fedora01 conf.d]# ll zone*
-rw-r--r-- 1 root root 1990 Jul 29 18:13 zoneminder.conf
-rw-r--r-- 1 root root 1990 Aug 28 22:34 zoneminder.conf.rpmnew

They’ve changed a few of the sub-directories and added components to the config. Once I renamed zoneminder.conf to zoneminder.conf.old, copied zoneminder.conf.rpmnew to zoneminder.conf, repeated the few config tweaks we had made for the original installation, and restarted Apache … voila, we can fetch /zm/api/host/getVersion.json and get values back. So if you’re getting odd 404 errors and CakePHP “/zm/api” not-found errors, maybe you forgot to merge the changes from the rpmnew file into your config.
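In short, the recovery was roughly this (assuming the stock Fedora Apache layout and systemd; re-apply your own local tweaks before restarting):

cd /etc/httpd/conf.d
mv zoneminder.conf zoneminder.conf.old
cp zoneminder.conf.rpmnew zoneminder.conf
# re-apply local tweaks from zoneminder.conf.old, then:
systemctl restart httpd
curl http://localhost/zm/api/host/getVersion.json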

Cleaning Up Old OpenHAB Persistence Tables

So my husband asked for a program that would go out to the OpenHAB persistence database and identify all of the item tables that are no longer associated with active items. If you rename or delete an item from OpenHAB, the associated data is retained in the persistence database. Might be a good thing – maybe you wanted that data. But if it’s useless fluff … well, no need to keep the state changes from a door sensor that’s no longer around.

Wrote the code, and asked him how many days old he wanted the last update to be before the item table got dropped … and he told me this was a useless way to do it: maybe something really hadn’t updated in six months or three years, and age of last update is no way to identify tables to be removed. Which, yeah, then why ask for it!? So then I needed to write something that takes a list of items from OpenHAB and identifies everything in the items table that does not appear in the OpenHAB list, so those tables can be deleted. But I figured I’d post the original code too in case anyone else could use it. Both are in Perl, and neither is particularly well-written Perl. I trust the data and don’t want to protect against injection attacks.
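For reference, the first script reads the item list from ./openhabItemList.txt, one item name per line. One way to generate that file (an assumption on my part that your OpenHAB 2 REST API is reachable; openhab-host is a placeholder, and the grep needs GNU grep for -P):

curl -s http://openhab-host:8080/rest/items | grep -oP '"name":"\K[^"]+' | sort > ./openhabItemList.txt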

Drop tables for items that no longer appear in OpenHAB:

use strict;
use DBI;

my %strItemsFromOpenHAB = ();
open(INPUT,"./openhabItemList.txt") or die "Cannot open ./openhabItemList.txt: $!";
while(<INPUT>){
        chomp();
        my $strCurrentItem = $_;
        $strItemsFromOpenHAB{$strCurrentItem}++;
}
close INPUT;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        my $strItemID = $row[0];
        my $strItemName = $row[1];
        if(! $strItemsFromOpenHAB{$strItemName} ){              # If the current item name is not in the list of items from OpenHAB
#               print "DELETE FROM items where ItemID = $strItemID\n";
                print "DROP TABLE Item$strItemID;  # $strItemName \n";
        }
}
$sth->finish();

$dbh->disconnect();

 

Identify tables that have not been updated in iTooOldInDays days:

use strict;
use DBI;
use Date::Parse;
use Time::Local;

my $iTooOldInDays = 365;

my $iCurrentEpochTime = time();

my @strItems = ();
my $iItems = 0;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM Items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        $strItems[$iItems++] = $row[0];
}
$sth->finish();

for(my $i = 0; $i < $iItems; $i++){
        my $strTableName = 'Item' . $strItems[$i];
        my $sth = $dbh->prepare("SELECT * FROM $strTableName ORDER BY Time DESC LIMIT 1");
        $sth->execute();
        while (my @row = $sth->fetchrow_array) {
                my $strUpdateTime = $row[0];
                my @strDateTimeBreakout = split(/ /,$strUpdateTime);
                my $strDate = $strDateTimeBreakout[0];
                my $strTime = $strDateTimeBreakout[1];

                my @strDateBreakout = split(/-/,$strDate);
                my @strTimeBreakout = split(/:/,$strTime);

                my $iUpdateEpochTime = timelocal($strTimeBreakout[2],$strTimeBreakout[1],$strTimeBreakout[0], $strDateBreakout[2],$strDateBreakout[1]-1,$strDateBreakout[0]);
                my $iTableAge = $iCurrentEpochTime - $iUpdateEpochTime;

                if($iTableAge > ($iTooOldInDays * 86400) ){
                        print "$strTableName last updated $strUpdateTime - $iUpdateEpochTime\n";
                }
        }
        $sth->finish();
}

$dbh->disconnect();

Configuring and Using RPZ

I realized today that, while I had written about why response policy zones are useful, I never indicated how to configure one! So here’s a quick document outlining how to set it up in ISC BIND. In your named.conf file, add a response policy to your options section:

        response-policy {
                zone "rpz";
        };
Then add the correspondingly named zone at the end of the file. For purposes of testing, I also added windstream.com as a forward-only zone so I could perform a network capture and see what exactly transpires when a name in the RPZ is resolved.
zone "rpz" {
      type master;
      file "rpz.db";
      allow-query { none; };
      allow-transfer { none; };
};
zone "windstream.com" {
    type forward;
    forward only;
    forwarders { 8.8.8.8; };
};
Then you just have to create an rpz.db in the directory where you store your named zone files:
$TTL 60
$ORIGIN rpz.
@            IN    SOA  localhost. root.localhost.  (
                          2   ; serial
                          3H  ; refresh
                          1H  ; retry
                          1W  ; expiry
                          1H) ; minimum
                  IN    NS    localhost.

www.windstream.com    CNAME    www.yahoo.com.
Restarted named and ran “rndc flush” to avoid serving cached content instead of the RPZ host data. Then ran a few tests and confirmed that the resolution configured in the rpz zone is what gets returned:
[lisa@fedora02 named]# dig +short www.windstream.com @localhost
www.yahoo.com.
atsv2-fp.wg1.b.yahoo.com.
98.139.183.24
98.138.252.30
98.139.180.149
98.138.253.109
[lisa@fedora02 named]# dig +short dell905.windstream.com @localhost
ns4.windstream.com.
173.186.244.139
[lisa@fedora02 named]# dig +short www.google.com @localhost
216.58.218.228
In this process, I learnt something interesting about ISC’s implementation of RPZ: it still performs the query and then overrides the results. Seems an odd waste of cycles, but the query for www.windstream.com went out to 8.8.8.8 and the answer was then replaced with Yahoo’s address from the rpz zone. Looking up a windstream.com host that isn’t in my RPZ, I saw another query go out to 8.8.8.8, which was expected. Querying something in neither the forward zone nor the rpz zone produces no traffic to 8.8.8.8 (because it follows my normal forwarding, which is to our ISP’s DNS).
I was curious if this meant RPZ could not be used to publish a hostname that does not actually exist upstream – but attempting to resolve such a hostname (I added abadhost.windstream.com with the same CNAME to Yahoo and reloaded my zone) worked just fine.

[root@fedora02 ~]# dig abadhost.windstream.com @localhost

; <<>> DiG 9.11.1-P2-RedHat-9.11.1-2.P2.fc26 <<>> abadhost.windstream.com @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8382
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1aa34751c5df7f78857a921259a8706fb5e1741a46eb5352 (good)
;; QUESTION SECTION:
;abadhost.windstream.com. IN A

;; ANSWER SECTION:
abadhost.windstream.com. 5 IN CNAME www.yahoo.com.
www.yahoo.com. 1800 IN CNAME atsv2-fp.wg1.b.yahoo.com.
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.180.149
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.253.109
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.183.24
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.252.30

;; AUTHORITY SECTION:
wg1.b.yahoo.com. 172800 IN NS yf3.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf4.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf1.yahoo.com.
wg1.b.yahoo.com. 172800 IN NS yf2.yahoo.com.

;; ADDITIONAL SECTION:
yf1.yahoo.com. 86400 IN A 68.142.254.15
yf2.yahoo.com. 86400 IN A 68.180.130.15

;; Query time: 1204 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Aug 31 16:24:15 EDT 2017
;; MSG SIZE rcvd: 315

But a query still goes out to the upstream name server, and a ‘no such name’ result comes back. Odd.
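A side note for anyone using RPZ for blocking rather than redirection: the standard RPZ special targets cover that too. My understanding (worth verifying against the ISC documentation) is that a CNAME to the root name yields NXDOMAIN and a CNAME to “*.” yields NODATA, so entries along these lines could be added to rpz.db (hostnames are placeholders):

badhost.example.com      CNAME   .      ; respond NXDOMAIN
otherhost.example.com    CNAME   *.     ; respond NODATA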

DMARC and DKIM

Microsoft’s latest security newsletter included the fact that more than 90% of Fortune 500 companies have not fully implemented DMARC. Wow — that’s something I do at home! Worse still, the Fortune 500 company for which I work is in that 90% … a fact I hope to rectify this week. SPF is just a DNS TXT record that indicates the source IPs expected to be sending email from your domain, and there are lots of SPF record generators online.
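For example, a minimal SPF record allowing your MX hosts plus one additional sending server (the IP here is a placeholder) might look like:

domain.tld    TXT    v=spf1 mx ip4:203.0.113.5 -all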

DKIM is a little more involved, but it’s a lot easier now that packages for DKIM are available in Linux distro repositories. You still *can* build it from source, but it’s easier to install the OpenDKIM package.

Once the package is installed, generate the key(s) to be used with your domain(s).

cd /etc/opendkim/keys/
openssl genrsa -out dkim.private 2048
openssl rsa -in dkim.private -out dkim.public -pubout -outform PEM
# secure private key file
chown opendkim:opendkim dkim.private
chmod go-r dkim.private

Decide on the selector you are using — I use ‘mail’ as my selector. At work, I use ‘2017Q3Key’ — this allows us to change to a new key without in-transit mail being impacted. Old mail was sent with the 2017Q2 selector and *that* public key is in DNS. New mail comes across with 2017Q3 and uses the new DNS record to verify. I do *not* share these keys – anyone else sending mail from our domain needs to generate their own key (or I make one for them), use their own unique selector, and I will create the DNS records for their selector. When marketing engages a third party to send e-mails on our behalf, we have a 2017VendorName selector too.
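If you’d rather not run openssl by hand, the opendkim-genkey utility that ships with the package can generate a key pair and a ready-to-publish TXT record for a given selector; roughly (domain and selector are placeholders):

opendkim-genkey -D /etc/opendkim/keys/ -d domain.tld -s 2017Q3Key -b 2048
# writes 2017Q3Key.private and 2017Q3Key.txt (the DNS TXT record to publish)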

Edit /etc/opendkim.conf. The Socket line is not strictly necessary – I just tend to avoid default ports as a habit, and since it’s bound to localhost it’s not such a big deal.

Mode sv
Socket   inet:8895@localhost
Selector mail
KeyFile /etc/opendkim/keys/dkim.private
KeyTable /etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
InternalHosts refile:/etc/opendkim/TrustedHosts

There’s a config option, “SendReports” — it’s a boolean that indicates if you want your system to send failure reports when the sender indicates they want such reports and provides a reporting address. Especially for testing purposes, I recommend indicating your domain wants reports — it is helpful in case you’ve got something configured not quite right and are failing delivery on some messages. As such, I configured my installation to send reports. It’s additional overhead only in cases where verification fails; I don’t see all that many failures, and it isn’t a lot of extra load. Since I know my installation will send detailed failure information, I can use my domain when testing new implementations.
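The corresponding line in /etc/opendkim.conf is just:

SendReports yes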

Once you have the base configuration set, edit /etc/opendkim/SigningTable and add your domain(s) and the appropriate selector

*@rushworth.us mail._domainkey.rushworth.us
*@lisa.rushworth.us mail._domainkey.lisa.rushworth.us
*@scott.rushworth.us mail._domainkey.scott.rushworth.us
*@anya.rushworth.us mail._domainkey.anya.rushworth.us

Edit /etc/opendkim/KeyTable and map each key name from the SigningTable to a domain, selector, and key file. The middle field of each entry is the selector that goes into the signature, so it needs to match the selector published in DNS (mail, in my case).

mail._domainkey.rushworth.us rushworth.us:mail:/etc/opendkim/keys/dkim.private
mail._domainkey.lisa.rushworth.us lisa.rushworth.us:mail:/etc/opendkim/keys/lisa.dkim.private
mail._domainkey.scott.rushworth.us scott.rushworth.us:mail:/etc/opendkim/keys/scott.dkim.private
mail._domainkey.anya.rushworth.us anya.rushworth.us:mail:/etc/opendkim/keys/anya.dkim.private

Edit /etc/opendkim/TrustedHosts and add the internal IPs (addresses or subnets) that relay your domain’s mail through the server.
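Mine just has the loopback addresses plus the LAN range that relays through this box, something like (the subnet is a placeholder):

127.0.0.1
::1
192.0.2.0/24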

Create DNS TXT records – the part after p= is the base64 content of the public key file for that selector (strip the BEGIN/END PUBLIC KEY lines and the line breaks). When you are first setting up DKIM, use t=y (yes, we are just testing this). Once you confirm everything is functional, you can change to t=n (nope, really pay attention to our DKIM signature and policy). The policy is an individual preference. I use ‘all’ (all mail from my domain will be signed) and “o=-” (again, all mail from my domain will be signed). You can use “o=~” (some mail from my domain is signed, some isn’t … who knows) and “dkim=unknown” (again, some is signed). You can use “dkim=discardable” (don’t just consider the message as more likely to be spam if it is not signed … you can outright drop the message). As a business, I don’t use that, *just in case*. Something crazy happens – the dkim service falls over, your key gets mangled – and receiving parties can start dropping your messages. Using “dkim=all” means they are more apt to quarantine them as spam, but someone can go and get the messages. And hopefully notice something odd is happening.

mail._domainkey.domain.tld  TXT k=rsa;t=y;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzTnpc7tHfyH1zgT3Jx/JHmGSz8WCy1jvzu5QsYvDBmimKEHRY4Kz4mya5bOYsDQuJ/sz+BJo6xDwsUXCuyEkykIlgqP+7E9oK2EcW0dZms87SGmNEnNBN5iTe0pdzk1lXx2js3QdOWswO+cmA9F1Z8OzSR+2u79huugPFBHl79zFvOEHbigrmeHEfo0KHWpeNomf/xKx+wyYr1n3R5gS+28CeC3abSyKgmaYYRLoZsjrCLbEM0m2YPJRKd1ZGOObBMa4PZWj7pT07ISEjoNnXQ27BtcL/QjKKeLkbJ0UGEOSdPEJKuEpAUvYU9lA5hbtzrqiwdlPxWYocDVPrcqAHwIDAQAB
_adsp._domainkey.domain.tld TXT dkim=all
_domainkey.rushworth.us TXT t=y;o=-;r=dkim@lisa.rushworth.us
_ssp._domainkey.rushworth.us TXT t=y;dkim=all

 

Edit /etc/mail/sendmail.mc (using the port defined in /etc/opendkim.conf):

INPUT_MAIL_FILTER(`dkim-filter', `S=inet:8895@localhost')

Build sendmail.cf from sendmail.mc (on Fedora/RHEL, running make from /etc/mail does this) and verify that you’ve got the dkim-filter line:

Xdkim-filter, S=inet:8895@localhost

Start opendkim, then restart sendmail. Now test it — inbound mail should have *their* DKIM signatures verified, outbound mail should be signed with the appropriate key.
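On a systemd distro, starting and restarting those services is along the lines of:

systemctl enable --now opendkim
systemctl restart sendmail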

Once you have verified your DKIM is functioning properly — well, first of all you can update your DNS records to remove testing mode. Then create your DMARC record:

_dmarc.rushworth.us     v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:dmarc-rua@lisa.rushworth.us!10m; ruf=mailto:dmarc-ruf@lisa.rushworth.us!10m; rf=afrf; pct=100; ri=172800
Again, you don’t need to use quarantine — ‘reject’ recommends that failing mail be dropped, and ‘none’ recommends no action (good for testing). The rua (aggregate reporting email address) and ruf (address to receive failing samples for analysis) should be in your domain.
You could add either or both of “adkim=s” and “aspf=s” to require strict DKIM or SPF alignment. I use relaxed (the default, so it does not need to be specified in the TXT record).
If you want the reports delivered to an address outside of your domain, that domain needs to publish a DNS record authorizing receipt of the reports:
     rushworth.us._report._dmarc.lisa.rushworth.us     v=DMARC1

Sendmail VirtUserTable

Some mail systems support sub-addressing (e.g. user+ignoredstring@example.com), but Exchange is not one of them. Even if/when it gets supported, it’s really easy to figure out the real e-mail address in that sub-address. Instead, we use sendmail’s virtusertable to map entire subdomains (e.g. @lisa.example.com) over to our primary e-mail addresses. If an address becomes compromised, we can blacklist that particular something@subdomain.rushworth.us address in the access table.

Virtual Domain Aliases

These aliases allow changes to be made to intended recipient addresses.  There are two files required for an address to be aliased.  A “VIRTUSER_DOMAIN_FILE” entry in sendmail.mc specifies the file listing the domains to be included for aliasing.  For us, this is /etc/mail/virtuser-domains.  This is a text file containing the name of each domain to be virtualized for aliasing, one domain per line.  Please note, the domains included herein need only be the recipient domains, not the domains to which aliases are mapped.  E.g. our virtuser-domains file contains just:

example.com

And yet we can alias test.addy@example.com to someotheraddy@example.net … it is only the recipient’s domain that needs to be listed in virtuser-domains.

Aliases for the virtual domains are contained in /etc/mail/virtusertable.  The left-hand entry is the recipient address and the right-hand entry is what that recipient will be translated to.  Left-hand entries can be an email address (testaddy@example.com) or a domain (@lisa.example.com)

Right-hand entries can be an alternate address.  If the address should remain the same, an exclamation point can be used:

myfakeaddress@example.com        external.email@example.net
myaddress@example.com            !

The right-hand entry can also be an action, like error which will return an error code

compromised.address@lisa.example.com            error:nouser User unknown

 

To commit changes to the virtusertable:

makemap hash /etc/mail/virtusertable.db < /etc/mail/virtusertable

 

Testing Virtual Aliases:

You can test the results of the virtual address space aliasing using sendmail -bt.  From within the new prompt (a greater-than sign on a blank line) type 3,0 followed by the address you would like to test.  E.g.:

[uid@NEOHTWNLX821 ~]# sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> 3,0 llanders@example.com
canonify           input: llanders @ example . com
Canonify2          input: llanders < @ example . com >
Canonify2        returns: llanders < @ example . com . >
canonify         returns: llanders < @ example . com . >
parse              input: llanders < @ example . com . >
Parse0             input: llanders < @ example . com . >
Parse0           returns: llanders < @ example . com . >
ParseLocal         input: llanders < @ example . com . >
ParseLocal       returns: llanders < @ example . com . >
Parse1             input: llanders < @ example . com . >
Recurse            input: llanders @ example . net
canonify           input: llanders @ example . net
Canonify2          input: llanders < @ example . net >
Canonify2        returns: llanders < @ example . net . >
canonify         returns: llanders < @ example . net . >
parse              input: llanders < @ example . net . >
Parse0             input: llanders < @ example . net . >
Parse0           returns: llanders < @ example . net . >
ParseLocal         input: llanders < @ example . net . >
ParseLocal       returns: llanders < @ example . net . >
Parse1             input: llanders < @ example . net . >
Mailertable        input: < example . net > llanders < @ example . net . >
Mailertable        input: example . < com > llanders < @ example . net . >
Mailertable      returns: llanders < @ example . net . >
Mailertable      returns: llanders < @ example . net . >
MailerToTriple     input: < > llanders < @ example . net . >
MailerToTriple   returns: llanders < @ example . net . >
Parse1           returns: $# esmtp $@ example . net . $: llanders < @ example . net . >
parse            returns: $# esmtp $@ example . net . $: llanders < @ example . net . >
Recurse          returns: $# esmtp $@ example . net . $: llanders < @ example . net . >
Parse1           returns: $# esmtp $@ example . net . $: llanders < @ example . net . >
parse            returns: $# esmtp $@ example . net . $: llanders < @ example . net . >

Use ctrl-d to exit the test.

Sendmail Mailertable

Mailertable (/etc/mail/mailertable)

Routing information for external delivery. Functionally, these are like the SMTP Connectors within Exchange. The mailertable entries can override everything, including smarthost definitions. This is required for internal mail routing – our sendmail servers should not transmit email for @windstream.com to the public MX records but rather to the destination we intend. We also use mailertable entries to force B2B communication over internal secured channels.

If a server is unable to deliver mail to a specific domain (e.g. one of our public IP addresses gets blacklisted), a mailertable entry can be used to direct all mail destined for the domain through one of our servers still able to make delivery.
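For instance, an entry along these lines would push everything for the problem domain through another of our relays (hostnames here are placeholders):

problemdomain.com relay:[relay2.ourdomain.com]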

The file contains two columns, domains and actions. Domains can be ends-with substring matches:
.anythingfromthisdomain.com

Will match @thishost.anythingfromthisdomain.com as well as @thathost.anythingfromthisdomain.com. Domains can also be a full match of the right-hand side of the email address:
justthisemaildomain.com

Which will match @justthisemaildomain.com. The most specific match wins, not just the first match in the file. So if your file contains the following:
.mysampledomain.com relay:[10.10.10.10]
thishost.mysampledomain.com relay:[20.20.20.20]

Mail destined for thishost.mysampledomain.com will be sent to 20.20.20.20

Actions contain both a mailer and a host. The mailer can redirect messages to local users:
.egforlocaldelivery.com local:username

Or it can force an error response:
Baddomain.com error:5.3.0:Unknown User

Our use of the mailertable, though, is to redirect mail destined for the domain:
windstream.com relay:[twnexchinbound.windstream.com]
newacquisition.com relay:[theirinternalhost.theirdomain.com]

In these cases, the square brackets around the destination override the MX record. To reroute a domain’s delivery destination, then, it is imperative that the host be enclosed in square brackets.

To commit changes to the file, either use “make” from within /etc/mail to commit all changes or the following command to commit just the changes to mailertable:
makemap hash /etc/mail/mailertable < /etc/mail/mailertable