Category: Technology

PHP: Windows Authentication to MS SQL Database

I’ve encountered several people now who have followed “the directions” to allow their IIS-hosted PHP code to authenticate to an MS SQL server using Windows authentication … only to get an error indicating some unexpected ID is unable to log into the SQL server.

Create your application pool and add an identity. Turn off fastcgi.impersonate in your php.ini file. Create web site, use custom application pool … FAIL.
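For reference, turning impersonation off is a single directive in php.ini:

fastcgi.impersonate = 0

Checking the site’s anonymous authentication settings shows what is actually going on: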

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="IUSR" />
    </authentication>
  </security>
</system.webServer>

The web site still doesn’t pick up the user from the application pool. In IIS Manager, open the site’s Authentication feature, click Anonymous Authentication, then click “Edit” over in the Actions pane. Change it to use the application pool identity here too (why wouldn’t it automatically do so when an identity is provided?? no idea!).
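If you’d rather skip the GUI, something along these lines should make the same change from the command line (using the site name from the example above; adjust to yours):

%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End" /section:anonymousAuthentication /userName:"" /commit:apphost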

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="" />
    </authentication>
  </security>
</system.webServer>

I’ve always seen the null string in userName, although I’ve read that the attribute may be omitted entirely. Once the site is actually using the pool identity, PHP can authenticate to the SQL server using Windows authentication.
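For completeness, the PHP side needs no credentials at all once the pool identity is in place – with Microsoft’s sqlsrv driver, omitting UID and PWD from the connection options makes the driver use Windows authentication as whatever identity the PHP process runs under. A minimal sketch (the server and database names are made up):

<?php
// Hypothetical server and database names -- replace with your own.
$strServer = "sqlserver.example.com";
$arrConnection = array("Database" => "MyAppDB");        // no UID/PWD => Windows authentication

$conn = sqlsrv_connect($strServer, $arrConnection);
if ($conn === false) {
        // sqlsrv_errors() names the ID that actually hit the SQL server -- handy for debugging
        die(print_r(sqlsrv_errors(), true));
}
echo "Connected as the application pool identity.\n";
sqlsrv_close($conn);
?>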

Facebook’s Offensive Advertising Profiles

As a programmer, I assumed Facebook used some sort of statistical analysis to generate advertising categories based on user input rather than employing a marketing group. A statistical analysis of the phrases being typed is *generally* an accurate reflection of what people are discussing, although I’ve encountered situations where their code does not appropriately weight adjectives (FB thought I was a Trump supporter because “incompetent”, “misogynist”, “unqualified”, etc. didn’t clue them into my real beliefs). But I don’t think the listings causing an uproar this week were factually wrong.
 
Sure, the market segment name is offensive; but computers don’t natively identify human offense. I used to manage the spam filtering platform for a large company (back before hourly anti-spam definition updates were a thing). It is impossible to write every iteration of every potentially offensive string out there. We would get e-mails for \/|@GR@! As such, there isn’t a simple list of word combinations that shouldn’t appear in your marketing profiles. It would be quite limiting to avoid ‘kill’ or ‘hate’ in profiles too — a group of people who hate vegetables is a viable target market. Or those who make killer mods to their car.
 
FB’s failing, from a development standpoint, is not having a sufficiently robust set of heuristic principles against which target demos are analysed for non-publication. They may have assumed the list would be self-pruning: no company is going to buy ads to target “kill all women”, and any advertising string that receives under some threshold of buys in a delta-time gets dropped. Lazy, but I’m a lazy programmer and could *totally* see myself going down that path – and spinning it as the most efficient mechanism at that. To me, this is the difference between a computer science major and an information sciences major. Computer science is about perfecting the algorithm to build categories from user input and optimizing the results by mining purchase data to determine which categories are worth retaining. Information science teaches you to consider the business impact of customers seeing the categories which emerge from user input.
 
There are ad demos for all sorts of other offensive groups, so it isn’t like the algorithm unfairly targeted a specific group. Facebook makes money from selling advertisements to companies based on what FB users talk about. It isn’t a specific attempt to profit by advertising to hate groups; it’s an attempt to profit by dynamically creating marketing demographic categories and sorting people into their bins.
This isn’t limited to Facebook, either – in any scenario where it is possible to make money but costs nothing to create entries for sale … someone will write an algorithm to create passive income. Why WOULDN’T they? You can sell shirts on Amazon. Amazon’s Marketplace Web Service allows resellers to automate product listings. Write some custom code to insert random (adjectives | nouns | verbs) into a template string, then throw together a PNG of the text superimposed on a product. Have a production facility with an API to order, make the product once it has been ordered, and you’ve got passive income. And people did. I’m sure some were wary programmers – a sufficiently paranoid person might even have a human approve the new list of phrases. Someone less paranoid might make a banned word list (or even source the words from a dictionary and check the definitions against the banned list too). But a poorly conceived implementation will just glom words together and assume something stupid or offensive just won’t sell. Works that way sometimes. Other times, bad publicity sinks the company.
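Just to show how little code the lazy version takes – a rough sketch in PHP (the word lists and the banned-word check are entirely made up, not anyone’s actual listing generator), glomming words together and assuming anything that passes a naive filter is fine to sell:

<?php
// Rough illustration only -- hypothetical word lists.
$arrAdjectives = array("killer", "happy", "grumpy");
$arrNouns      = array("grandma", "programmer", "vegetable");
$arrVerbs      = array("loves", "hates", "fixes");
$arrBanned     = array("kill");         // substring matching also blocks the harmless "killer mods" market

$strPhrase = ucfirst($arrAdjectives[array_rand($arrAdjectives)]) . " " .
             $arrNouns[array_rand($arrNouns)] . " " .
             $arrVerbs[array_rand($arrVerbs)] . " you";

foreach ($arrBanned as $strWord) {
        if (stripos($strPhrase, $strWord) !== false) {
                exit("Rejected: $strPhrase\n");         // a paranoid shop would queue this for human review instead
        }
}
echo "New product listing: $strPhrase\n";               // otherwise, straight onto the marketplace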
 
The only thing that really offends me about this story is that unpleasant people are partaking in unpleasant conversations. Which isn’t news, nor is it really FB’s fault beyond creating a platform to facilitate the discussion. Possibly some unpleasant companies are targeting their ads to these individuals … although that’s not entirely FB’s fault either. Buy an ad in Breitbart and you can target a bunch of white supremacists too. Not creating a marketing demographic for them doesn’t make the belief disappear. 

Basic Security Or Paranoia

We have Amazon’s smart speakers, so I don’t know if this is true for Google or Apple digital assistants; but the Alexa series of speakers has a default wake word and several non-default options you can elect to use instead. Never use the default – that’s a good general security maxim. We had other factors in our wake word decision: a friend of Scott’s has a daughter whose name is quite close to Alexa, and I foresaw the speaker going crazy whenever they spoke of her. But the fact is, from day 0 of the device … I expected advertisers to incorporate “Alexa, give me more info on product XYZ” in their ads. Aaaand now we have South Park season 21’s first episode.

This is just goofy stuff – maybe words you don’t want replaying at inopportune moments, maybe an alarm way too early in the morning for you. Remember TV commercials that asked kids to hold the telephone handset up to the screen and then played DTMF to ring the order hotline? Alexa, call 800-###-####. Hell, they could order Amazon products on your credit card. Something like ShopSafe (a unique card number with a low limit that actually rejects purchases over that limit) can be tied to your account. It’s extra work to keep updating the card on your account, but I’d rather Alexa buy $12 of something I didn’t want than $250. Beyond that, our speakers do not have unfettered access to my credit card – there’s a PIN required to make purchases. I’m sure that won’t stop your kid who overhears the code from using it, but it prevents television programs, radio shows, and party-goers from buying random junk as a joke.

Checking Supported TLS Versions and Ciphers

There have been a number of SSL/TLS vulnerabilities (and there are deprecated ciphers that should be unavailable, especially when transiting particularly sensitive information). On Linux distributions, nmap includes a script, ssl-enum-ciphers, that enumerates the SSL/TLS versions a server offers and, per version, the supported ciphers.

[lisa@linuxbox ~]# nmap -P0 -p 25 --script +ssl-enum-ciphers myhost.domain.ccTLD

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-13 11:36 EDT
Nmap scan report for myhost.domain.ccTLD (#.#.#.#)
Host is up (0.00012s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
25/tcp open smtp
| ssl-enum-ciphers:
|   TLSv1.0:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.1:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CCM_8 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CCM (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CCM_8 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CCM (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CCM_8 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CCM (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CCM_8 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CCM (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|_  least strength: A

Nmap done: 1 IP address (1 host up) scanned in 144.67 seconds
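The same scan works against any TLS-wrapped service – checking a web server, for instance, is just a matter of pointing the script at the HTTPS port instead:

nmap -P0 -p 443 --script +ssl-enum-ciphers myhost.domain.ccTLD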

Security Standards For Financial Information

A long time ago, processors of credit card information didn’t have any standards. And they’d lose your data. People didn’t like that, and some type of regulation had to be put on the industry. The card companies got together and formed an initiative to write their own regulations – PCI. Being self-imposed, the rules were a lot more concerned with their impact on profitability than government regulation would have been. And the PCI standards have been fairly effective.

And now one of the credit bureaus has lost a huge amount of personal data – including Social Security numbers and account numbers that, I don’t understand why, were stored as anything other than a one-way hash in the first place. But the bigger question is how these credit bureaus are able to operate under standards less strict than the industry-association-generated PCI standards. My guess is that there will be a credit bureau industry association writing security standards in the next week or so. If there isn’t an industry association forming to ensure my Social Security number and account numbers aren’t stored in clear text on web-accessible servers at credit bureaus … I should hope the government would intervene and mandate a certain level of security.

Equifax Hack

First of all, saying half the population of the United States has had their personal information stolen might be accurate, but it’s the marketing-friendly spin. The initially reported 143 million affected people, against the 2016 count of 249,485,228 adults in the United States, works out to 57% of people over 18 having had their personal data stolen. Now there are people with no credit history. It’s a bit of a thing when you first want to rent a flat or get a credit card … you have no credit history, and can’t get credit until you have one. Last I read, something like 14% of adults have no credit record – meaning Equifax gave up information on 66% of the credit-having population.

Leaving aside the marketing spin on numbers, though, why the hell is a credit bureau storing my personal information in a retrievable format instead of a one-way hash? Performance, I assume … so I guess my question really is why were a couple of clock cycles considered more important than the security of my data? Some of the data is probably maintained in clear text because they use heuristic matching to link incoming data to entities. I’m guessing my info comes in with a name, address, creditor name, and account number. And they’ve got to be able to match up the thirty different iterations of my address to ingest the data. But there’s no reason for the account number to be stored unhashed – store the last two or three digits in a new column for display (Your XYZ account ending in ###). And there’s sure as hell no reason for the SSN to be stored unhashed – even if they’d have to store the full one hashed and the last four in another hash because some data doesn’t come in with full SSNs.
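None of that is exotic, either – here’s a rough sketch of the sort of hashed storage I mean (the column names, the pepper, and the choice of SHA-256 are my assumptions, not anything a credit bureau has documented):

<?php
// Rough sketch of one-way-hashed SSN / account number storage -- names and pepper are hypothetical.
const PEPPER = 'site-wide-secret-kept-outside-the-database';

function hashIdentifier($strValue) {
        // one-way hash; incoming records get matched by hashing the incoming value and comparing
        return hash('sha256', PEPPER . preg_replace('/\D/', '', $strValue));
}

$strSSN     = '123-45-6789';
$strAccount = '4111111111111111';

$arrRecord = array(
        'ssn_hash'       => hashIdentifier($strSSN),
        'ssn_last4_hash' => hashIdentifier(substr(preg_replace('/\D/', '', $strSSN), -4)),  // for feeds without full SSNs
        'account_hash'   => hashIdentifier($strAccount),
        'account_last3'  => substr($strAccount, -3),     // display only: "Your XYZ account ending in 111"
);
print_r($arrRecord);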

ZoneMinder After Upgrade

We recently updated from ZoneMinder 1.30 to 1.34 – easy as can be: ran the DB update script and everything came right online. Except … our home automation system could no longer access it. OpenHAB reported that the bridge was offline, and we were getting 404 errors for all of the /zm/api calls in access_log.

Turns out the API was offline because when the new package came down … there was a zoneminder.conf.rpmnew in the Apache conf.d directory. Can’t even say I found this intentionally – I wanted to check the Apache config file to see if it had anything about the api directory, did a directory listing, and said oooooh!

[lisa@fedora01 conf.d]# ll zone*
-rw-r--r-- 1 root root 1990 Jul 29 18:13 zoneminder.conf
-rw-r--r-- 1 root root 1990 Aug 28 22:34 zoneminder.conf.rpmnew

They’ve changed a few of the sub-directories and added components to the config. Once I renamed zoneminder.conf to zoneminder.conf.old, copied zoneminder.conf.rpmnew to zoneminder.conf, repeated a few config tweaks we had made for the original installation, and restarted Apache … voila, we could fetch /zm/api/host/getVersion.json and get values back. So if you’re getting odd 404 errors and CakePHP “/zm/api” not found errors, maybe you forgot to update your config with the changes from the rpmnew file.
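For anyone in the same spot, the fix boiled down to something like this (stock Fedora/RPM paths assumed):

cd /etc/httpd/conf.d
mv zoneminder.conf zoneminder.conf.old
cp zoneminder.conf.rpmnew zoneminder.conf
# re-apply any local tweaks from zoneminder.conf.old, then
systemctl restart httpd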

Cleaning Up Old OpenHAB Persistence Tables

So my husband asked for a program that would go out to the OpenHAB persistence database and identify all of the item tables that are no longer associated with active items. If you rename or delete an item from OpenHAB, the associated data is retained in the persistence database. Might be a good thing – maybe you wanted that data. But if it’s useless fluff … well, no need to keep the state changes from a door sensor that’s no longer around.

Wrote the code, and asked him how many days old he wanted the last update to be before the item table got dropped … and he told me this was a useless way to do it – maybe something legitimately hasn’t been updated in six months or three years, and the age of the last update is no way to identify tables to be removed. Which, yeah, then why ask for it!? So then I needed to write something that takes a list of items from OpenHAB and identifies everything in the items table that does not appear in the OpenHAB list, so those tables can be dropped. But I figured I’d post the original code too in case anyone else can use it. Both are in Perl, and neither is particularly well-written Perl. I trust the data and didn’t bother protecting against injection attacks.

Drop tables for items that no longer appear in OpenHAB:

use strict;
use DBI;

my %strItemsFromOpenHAB = ();
open(INPUT,"./openhabItemList.txt");
while(<INPUT>){
        chomp();
        my $strCurrentItem = $_;
        $strItemsFromOpenHAB{$strCurrentItem}++;
}
close INPUT;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        my $strItemID = $row[0];
        my $strItemName = $row[1];
        if(! $strItemsFromOpenHAB{$strItemName} ){              # If the current item name is not in the list of items from OpenHAB
#               print "DELETE FROM items where ItemID = $strItemID\n";
                print "DROP TABLE Item$strItemID;  # $strItemName \n";
        }
}
$sth->finish();

$dbh->disconnect();

 

Identify tables that have not been updated in iTooOldInDays days:

use strict;
use DBI;
use Date::Parse;
use Time::Local;

my $iTooOldInDays = 365;

my $iCurrentEpochTime = time();

my @strItems = ();
my $iItems = 0;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM Items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        $strItems[$iItems++] = $row[0];
}
$sth->finish();

for(my $i = 0; $i < $iItems; $i++){
        my $strTableName = 'Item' . $strItems[$i];
        my $sth = $dbh->prepare("SELECT * FROM $strTableName ORDER BY Time DESC LIMIT 1");
        $sth->execute();
        while (my @row = $sth->fetchrow_array) {
                my $strUpdateTime = $row[0];
                my @strDateTimeBreakout = split(/ /,$strUpdateTime);
                my $strDate = $strDateTimeBreakout[0];
                my $strTime = $strDateTimeBreakout[1];

                my @strDateBreakout = split(/-/,$strDate);
                my @strTimeBreakout = split(/:/,$strTime);

                my $iUpdateEpochTime = timelocal($strTimeBreakout[2],$strTimeBreakout[1],$strTimeBreakout[0], $strDateBreakout[2],$strDateBreakout[1]-1,$strDateBreakout[0]);
                my $iTableAge = $iCurrentEpochTime - $iUpdateEpochTime;

                if($iTableAge > ($iTooOldInDays * 86400) ){
                        print "$strTableName last updated $strUpdateTime - $iUpdateEpochTime\n";
                }
        }
        $sth->finish();
}

$dbh->disconnect();

Configuring and Using RPZ

I realized today that, while I had written about why response policy zones are useful, I never indicated how to configure one! So here’s a quick document outlining how to set it up in ISC BIND. In your named.conf file, add a response policy to your options section:

        response-policy {
                zone "rpz";
        };
Then add the correspondingly named zone at the end of the file. For testing purposes, I also added windstream.com as a forward-only zone so I could perform a network capture and see exactly what transpires when a name in the RPZ is resolved.
zone "rpz" {
      type master;
      file "rpz.db";
      allow-query { none; };
      allow-transfer { none; };
};
zone "windstream.com" {
    type forward;
    forward only;
    forwarders { 8.8.8.8; };
};
Then you just have to create an rpz.db in the directory where your named zone files are stored:
$TTL 60
$ORIGIN rpz.
@            IN    SOA  localhost. root.localhost.  (
                          2   ; serial
                          3H  ; refresh
                          1H  ; retry
                          1W  ; expiry
                          1H) ; minimum
                  IN    NS    localhost.

www.windstream.com    CNAME    www.yahoo.com.
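Although I’m redirecting the name with a CNAME here, the RPZ convention can also force negative answers – a CNAME to the root (.) returns NXDOMAIN, and a CNAME to *. returns NODATA. So blocking a host outright would look something like this (badhost is a made-up name for illustration):

badhost.windstream.com    CNAME    .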
Restarted named and ran “rndc flush” to avoid serving cached content instead of the RPZ host data, then ran a few tests and confirmed that the resolution configured in the rpz zone is what gets returned:
[lisa@fedora02 named]# dig +short www.windstream.com @localhost
www.yahoo.com.
atsv2-fp.wg1.b.yahoo.com.
98.139.183.24
98.138.252.30
98.139.180.149
98.138.253.109
[lisa@fedora02 named]# dig +short dell905.windstream.com @localhost
ns4.windstream.com.
173.186.244.139
[lisa@fedora02 named]# dig +short www.google.com @localhost
216.58.218.228
In this process, I learnt something interesting about ISC’s implementation of RPZ: it still performs the query and then overrides the results. An odd waste of cycles, but the answer was subsequently rewritten to Yahoo’s address from the rpz zone. Looking up a windstream.com host that isn’t in my RPZ, I saw another query go out to 8.8.8.8, which was expected. Querying something in neither the forward zone nor the rpz zone generated no traffic to 8.8.8.8 (because it follows my normal forwarding, which is to our ISP’s DNS).
I was curious if this meant RPZ could not be used to publish a hostname locally that doesn’t exist in the real zone – but attempting to resolve such a host (I added abadhost.windstream.com with the same CNAME to Yahoo and reloaded the zone) worked just fine.

[root@fedora02 ~]# dig abadhost.windstream.com @localhost

; <<>> DiG 9.11.1-P2-RedHat-9.11.1-2.P2.fc26 <<>> abadhost.windstream.com @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8382
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1aa34751c5df7f78857a921259a8706fb5e1741a46eb5352 (good)
;; QUESTION SECTION:
;abadhost.windstream.com. IN A

;; ANSWER SECTION:
abadhost.windstream.com. 5 IN CNAME www.yahoo.com.
www.yahoo.com. 1800 IN CNAME atsv2-fp.wg1.b.yahoo.com.
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.180.149
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.253.109
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.183.24
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.252.30

;; AUTHORITY SECTION:
wg1.b.yahoo.com. 172800 IN NS yf3.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf4.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf1.yahoo.com.
wg1.b.yahoo.com. 172800 IN NS yf2.yahoo.com.

;; ADDITIONAL SECTION:
yf1.yahoo.com. 86400 IN A 68.142.254.15
yf2.yahoo.com. 86400 IN A 68.180.130.15

;; Query time: 1204 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Aug 31 16:24:15 EDT 2017
;; MSG SIZE rcvd: 315

But a query still goes out to the name server, a ‘no such name’ result comes back from upstream, and the RPZ answer is served anyway. Odd.