Tag: Security

Oracle Password Expiry – Sandbox Server

Oracle 11g seems to ship with password expiry enabled — which is a very good thing for production systems. I’ve even written some code to maintain our system account password (scripts grab the password from a not-clear-text storage facility anyway, so it wasn’t a big deal to add an n-1 password column, move the current stashed password into it, change the account password, and stash the updated password in the current-password location … now my system ID password is updated by a monthly cron job, and no one actually knows the password {although anyone could find it, so I would run the password cycle script when individuals leave the group}). But I’m a lot lazier about this stuff in my sandbox. Proof-of-concept code has clear-text passwords. But the server is bound to localhost & there’s no real data in, well, anything.

I started seeing lines in my error log indicating the password would expire. Aaaand that’s how I learned that password expiry was enabled by default now.

[Sat Apr 18 07:42:59 2020] [error] [client 127.0.0.1] PHP Warning: oci_connect(): OCI_SUCCESS_WITH_INFO: ORA-28002: the password will expire within 7 days in /var/www/vhtml/…/file.php on line 191, referer: …

I’m going to disable password expiry because it’s a sandbox. For a real system, obviously, this may not be a stellar idea.

select USERNAME, ACCOUNT_STATUS, PROFILE from dba_users where USERNAME = 'SampleUser';

 

USERNAME       ACCOUNT_STATUS    PROFILE
-------------  ----------------  --------
SampleUser     EXPIRED(GRACE)    DEFAULT

 

Note the account status “EXPIRED(GRACE)” — that’s why I am getting the error shown above. Grab the profile name (it’s a sandbox, so 99% sure it’s going to be ‘DEFAULT’) and alter that profile to set an unlimited password lifetime:

alter profile <profile_name> limit password_life_time UNLIMITED;

Except that didn’t actually stop the error. Turns out you’ve still got to change the password once the account has been flagged as expired (or let the password expire and then unlock the account … but I was looking at the log because I’m debugging something, and I wanted the error to stop *right now*).

alter user SampleUser identified by "N3W_P@s5_w0rD";

 

The end of password changes?

I knew Microsoft was publishing recommendations against forced password expiry, but it was still surprising to see this banner in my Azure admin portal. It would be nice if their message were clearer on the nuances here — especially that enabling MFA (preferably not SMS-based MFA, which is just asking for someone important’s number to get hijacked) is an important component of this recommendation.

In one of my first jobs, I was a sys admin for call-center systems. As such, I interacted with a lot of the call center management and staff … and, when you know someone in IT, you ping them when the proper support route isn’t as responsive as you’d like. Which is to say I did a good bit of end-user support as well. The number of people whose password was written on a post-it note under the keyboard astonished me. This particular call center didn’t have floating seating, but two or three people would share a cube because they worked different schedules. If I’ve got to come up with a new thing to remember every 90 days … well, that’s how you end up with Winter19, Spring20, Summer20, Autumn20 or Maggie12, Maggie13, Maggie14 passwords. Which then get posted under the keyboard so I can remember that I’m up to “14” now. Couple that with the overhead of supporting password resets for everyone who didn’t write it down and then forgot the password. I’d long been a proponent of longer password expiry intervals coupled with increased complexity requirements. Maybe !Maggie-19? is good for all of 2019. It’s nice to see a major IT vendor starting to recognize the real-world impact of IT policies.

PowerShell Credentials

I’ve only recently started using PowerShell to perform automated batch functions, and storing passwords in plain text is a huge problem. In other languages, I encrypt the password with a string stored elsewhere, then use that elsewhere-stored string in conjunction with the cipher text to retrieve the password. There’s a really easy way to accomplish this in PowerShell, although it does not split the credential into two separate pieces that force an attacker to obtain something from two places {i.e. reading a file on disk is good enough}. Then again, if they’re already reading my code and on my server, the attacker could easily repeat the algorithmic process of retrieving the other string … which is a long way of saying the PowerShell approach may seem less secure but, effectively, isn’t much less secure.
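For what it’s worth, that split-secret idea can be approximated in PowerShell too: convertfrom-securestring and convertto-securestring accept a -key parameter (a 16-, 24-, or 32-byte AES key) that you can stash somewhere other than the cipher-text file. A rough sketch, with purely illustrative paths and key handling:

# generate a 32 byte AES key and store it somewhere other than the password file
$key = new-object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)
[System.IO.File]::WriteAllBytes('c:\keys\pass.key', $key)
# encrypt the password with that key instead of the default user/machine scoping
read-host -assecurestring | convertfrom-securestring -key $key | out-file -filepath c:\temp\pass_keyed.txt
# later, read both pieces back to rebuild the secure string
$key = [System.IO.File]::ReadAllBytes('c:\keys\pass.key')
$strPassword = get-content -path c:\temp\pass_keyed.txt | convertto-securestring -key $key

What follows is the simpler, no-separate-key version.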

The first step is to store the password into a file. Use “read-host -AsSecureString” to grab the password from user input, pipe that to convertfrom-securestring to turn it into a big ugly jumble of text, then pipe that out to a file:

read-host -assecurestring | convertfrom-securestring | out-file -filepath c:\temp\pass.txt

View the content of the file and you’ll see … a big long hex thing
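That is, just read it back with get-content (same path as above):

get-content -path c:\temp\pass.txt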

Great, I’ve got a bunch of rubbish in a file. How do I use that in a script? Set a variable to the content of the file piped through convertto-securestring, and then use that password to create a new PSCredential object:

$strPassword = get-content -path c:\temp\pass.txt | convertto-securestring
$cred = new-object -typename PSCredential -argumentlist 'UserID',$strPassword

Voila, $cred is a credential that you can use in other commands.
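For example, anything with a -credential parameter will take it (the computer name below is just a placeholder):

invoke-command -computername SomeServer -credential $cred -scriptblock { get-service }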

Home Security Drone

We’ve conceptualized home security drones for some time: autonomous programming instructs the drones to return to a charging station when their batteries run low, and the video feeds back to a platform that knows what the area should look like and alerts on abnormalities.

The idea of a drone patrol is interesting to me because optimizing the ‘random walk’ algorithm to best suit the implementation is challenging. The algorithm would need to be modified to account for areas that other drones recently visited and allow weighting for ease of ingress (i.e. it’s not likely someone will scale a cliff wall to infiltrate your property. A lot of ‘intrusions’ will come through the driveway). Bonus points for a speaker system that would have the drone direct visitors to the appropriate entrance (please follow me to the front door) — a personal desire because delivery people seem to believe both our garage and our kitchen patio are the front door.

This is a great security solution when it’s unique, but were the idea to be widely adopted … it would suck as a home security implementation. Why? Drones with video feeds sound like a great way to deter trespassing. But drones have practical limitations. Home break-ins would be performed during storms. Or heavy snowfall. Or …

What if the drone charging base had wheels? During adverse weather, the drone could convert itself into an autonomous land vehicle. I’d probably include an additional battery in the base, as a wheeled vehicle traversing land would use more energy. There would be places a wheeled vehicle could not travel, but the converted drone would still be able to cover some of the property, generally the area closest to the structures.

Spectre & Meltdown

The academic whitepapers for both of these vulnerabilities can be found at https://spectreattack.com/ — or El Reg’s article and their other article provide a good summary for those not inclined to slog through technical nuances. There’s a lot of talk about chip manufacturers’ stock drops and vendor patches … but I don’t see anyone asking how bad this is on hosted platforms. Can I sign up for a free Azure trial and start accessing data on your instance? Even if they isolate free trial accounts (and accounts given to students through University relationships), is a potential trove of data worth a few hundred bucks to a hacker? Companies run web storefronts that process credit card info, so there’s potentially profit to be made. Hell, is the data worth a few million to some state-sponsored entity or someone getting into industrial espionage? I’m really curious if MS uses the same Azure farms for their hosted Exchange and SharePoint services.

While Meltdown has patches (not such a big deal if your use cases are GPU-intensive games, but does a company want a 30% performance hit on business process servers, automated build and testing machines, data mining servers?), Spectre patches turn IT security into TSA regulations. We can make a patch to mitigate the last exploit that occurred. Great for everyone else, but it doesn’t help anyone who experienced that last exploit. Or the people about to get hit with the next exploit.

I wonder if Azure and AWS are going to give customers a 5-30% discount after they apply the performance-reducing patch. If I agreed to pay x$ for y processing capacity, and now they’re supplying 0.87y … why wouldn’t I pay 0.87x$?

Nothing Is New

I keep seeing articles hyping the anonymity of bitcoin-type “currency”. That’s not a new concept in value stores. Non-registered bearer bonds allowed untraceable fund transfers. As bearer instruments are not illegal in the United States, such bonds can still be issued. The holder cannot get any tax exemptions on interest paid for the bond, but you can transact business using bearer bonds. And just like bitcoin-type currencies … you’re screwed if someone takes it. Registered bonds provide legal recourse – bearer bonds and bitcoin, not so much. If no one wants to pay a couple hundred thousand dollars for your bitcoin, you have little bits on disk. It’s like an anonymous stock — it’s worth whatever people are currently willing to pay for it.

As a data storage technique – distributed across the world, redundant, but ultimately meaningless in its sub-components to anyone who happens to have a snippet – it’s intriguing. But as a non-dodgy way of transacting business, it’s just silly.

Apple FaceID

The irony of facial recognition — the idea is that you trade some degree of privacy for enhanced security. There are 10,000 four-digit codes – a 1-in-10,000 chance of any specific guess unlocking your device. Apple touted a one-in-a-million chance of someone else’s face unlocking your phone.

So you trade your privacy for this one in a million super secure lock. Aaaaand a Vietnamese security firm can hack the phone with a mask. Not even a *good* mask (like I take a couple of your pictures, available online, synthesize them into a 3d image and print a realistic mask).

This feat wasn’t accomplished with millions of dollars of hardware. It took them a week and 150$ (plus equipment, but a 3d printer isn’t as expensive as you’d think).

Boyd v. United States or Riley v. California provide fourth amendment protection for phone content … but that only means the police need a warrant. Fourth amendment, check. Fifth amendment … Commonwealth of Virginia v. Baust or United States v. Kirschner say that while you cannot be compelled to reveal a passcode to allow police to access your phone (that’s testimonial) … a fingerprint is not testimonial, it is documentary. And it can be compelled. As with a lot of security, one can ask why I care. If I’m not doing anything wrong, then who cares if the police peruse my phone? But if I’m not protesting, why do I care if peaceful assembly is being restricted? I’m not publishing the Paradise Papers, so why do I care if freedom of the press is being restricted? Like Martin Niemöller and the Nazis – by the time they get around to harming you, there’s no one left to care.

Security Standards For Financial Information

A long time ago, processors of credit card information didn’t have any standards. And they’d lose your data. People didn’t like that, and some type of regulation had to be put on the industry. The credit card processors got together and formed an initiative to write their own regulations – PCI. They were a lot more concerned with the regulations’ impact on profitability than the government would have been. The PCI standards were fairly effective.

And now one of the credit bureaus has lost a huge amount of personal data – including social security numbers and account numbers, which I don’t understand being stored as anything other than a one-way hash in the first place. But the bigger question is how these credit bureaus are able to operate with standards less strict than the industry-association-generated PCI standards. My guess is that there will be a credit bureau industry association writing security standards in the next week or so. If there isn’t an industry association forming to ensure my social security number and account numbers aren’t stored in clear text on web-accessible servers at credit bureaus … I should hope the government would intervene and mandate a certain level of security.

Equifax Hack

First of all, saying half the population of the United States has had their personal information stolen might be accurate, but it’s the marketing-friendly spin. 2016 numbers had 249,485,228 adults in the United States, which makes it 57% of people over 18 who have had their personal data stolen. And then there are people with no credit history. It’s a bit of a thing when you first want to rent a flat or get a credit card … you have no credit history, and can’t get credit until you have one. Last I read, something like 14% of adults have no credit record — meaning Equifax gave up information on 66% of the credit-having population.

Leaving aside the marketing spin on numbers, though, why the hell is a credit bureau storing my personal information in a retrievable format instead of a one-way hash? Performance, I assume … so I guess my question really is why were a couple of clock cycles considered more important than the security of my data? Some of the data is probably maintained in clear text because they use heuristic matching to link incoming data to entities. I’m guessing my info comes in with a name, address, creditor name, and account number. And they’ve got to be able to match up the thirty different iterations of my address to ingest the data. But there’s no reason for the account number to be stored unhashed – store the last two or three digits in a new column for display (Your XYZ account ending in ###). And there’s sure as hell no reason for the SSN to be stored unhashed – even if they’d have to store the full one hashed and the last four in another hash because some data doesn’t come in with full SSNs.

Smart Home (In)Security

I’ve seen a lot of articles recently about hacked IoT devices (and now one about a malicious company disrupting a customer’s service in retaliation for poor reviews and possibly abusive calls to technical support). I certainly don’t think *everything* needs to be connected to the Internet. If you want to write messages on toast remotely, whatever … but beyond gimmicks, there are certainly products where the Internet offers no real advantage. But a lot of articles disparage the idea of a smart home based on goofy products.

There are devices that are more convenient than their ‘dumb’ counterparts. Locks that unlock when you are nearby. Garage lights that come on when the door is unlocked or opened. And if that was the extent of home automation, I guess you could still call it a silly fad.

But there are a LOT of connected devices that save resources: exterior lighting that illuminates as you near your house. Motion detectors controlling light switches and bulbs so you (or the kids) cannot forget to turn out the lights. An outlet that turns OFF to eliminate draw when appliances are in ‘standby’ mode saved us about 50$/year just on the television/receiver. Moisture sensors controlling a sprinkler system so the grass is only watered when there is actual need. Water flow sensors that can alert you to unusual usage (e.g. when the water filter system gasket goes and it starts dumping water through the thing 24×7).

And some prevent real damage to your home or person. If your house uses combustion for heat, configure the carbon monoxide sensor to shut off the HVAC system when CO levels are too high. Leak sensors shut off the water main when a leak is detected (and turn off appliances in the wet area if there’s potential for shorting).

The major security problem with any IoT device, smart home systems included, is that you’ve connected private resources to the Internet. With all the hackers, punks, and downright malicious people out there. And from a privacy standpoint, you are providing information that can be mined to enhance marketing profiles — very carefully read the privacy policies of any company whose platform you will be using. Maybe a ‘smart’ coffee machine sounds good to you — but are they collecting (and potentially selling to third parties) information about how many cups of coffee you brew and the times of day you brew them? Whether you care is a personal decision, but it’s something that should be considered just the same.

When each individual device has its own platform, the privacy and security risks grow. A great number of these devices don’t need to be connected to the Internet directly, but rather to a relay point (hub). From a business perspective, this is a boon … since you have a Trane furnace (big money, not apt to be replaced yearly), you should also buy these other products that we sell and pay the monthly recurring fee to use our Nexia platform for all of your other smart devices. Or since you have a Samsung TV with a built-in hub … you should not only buy these other Samsung products, but hook all of your other smart ‘things’ up to SmartThings. And in a year or two when you’re shopping for a new TV … wait, you need one with a SmartThings hub or you’re going to have to port your existing configuration to a new vendor. Instant customer loyalty.

For an individual, the single relay point reduces risk (it’s not one of a dozen companies that need to be compromised to affect me, just this one) and confusion (I only have to keep track of one company’s privacy policy). *But* it also gives one company a lot more information. The device type is often indicative, but most people also name the devices according to location (e.g. bedroom light, garage light, front door). Using SmartThings, Samsung knew when we went to bed and woke up, that we ate breakfast before brushing teeth (motion in hallway, motion in kitchen, water usage, power draw on appliances, motion in hallway, motion in bathroom, water usage) or showering (power draw on hot water tank, increased water usage). Which rooms we frequented (motion), when we watched TV (not what we watched, but when), when we left the house (no motion, presence change). How often we wash laundry (power draw on washer, water usage) and dishes (power draw on dishwasher, water usage). Temperature in the house (as reported from multi-sensor devices or from a smart thermostat), and whether we change settings for day/night. How often we drive a car (garage door open/closed with presence change, or speed of location change on presence), how much time we spend away from home. How often we have overnight guests (motion in the guest bedroom at night).

And, yeah, the profile they glean is a guess. I might open the garage door when mowing grass. Or I might have rooms with no motion sensors for which they cannot account. But they have a LOT of data on which to base their guesses, and no one selling targeted advertising profiles claims to be 100% accurate. Facebook’s algorithm, for quite some time, had me listed as a right-leaning Trump supporter. I finally tired of seeing campaign ads on their site and manually updated my advertising profile. Point is, one company has a lot of data from which to build fairly good targeted profiles. How much of our house is actually used? A lot of bedrooms that rarely see motion get a ‘downsizing specialist’ real estate flyer; all rooms constantly showing motion get a flyer about finding a larger home to give you all some space. If the HVAC system is connected, they could create a target group of “people who could use additional insulation or sealing in their house” (outdoor temp for the location vs indoor temp vs energy draw).

In some ways, it’s cool that a company might be able to look at my life and determine a need of which I am not even aware. Didn’t realize how much of our energy bill was HVAC – wow, tightening the house and adding insulation will save how much?! But it’s also potentially offensive: yeah, we could use a bigger house for all of these people. We could also use a bigger pay cheque, what of it? Yeah, the kids moved out … but this is our house, and why would you tell me I should be leaving? And generally invasive — information that doesn’t really cause harm, but they’ve got no reason to know it either.

What articles highlighting the insecurity of IoT devices seem to miss is that the relay point can reside on your local network with no Internet access. We personally use OpenHAB – which enables our home automation to function completely inside our local network. You trust the developers (or don’t, ours is open source … you can read the whole thing if you don’t want to trust developers), but you own the data and what is done with it.

You don’t need an expensive dedicated server to host your own home automation controller – a Raspberry Pi will do. What you do need is technical knowledge and a good bit of time (or hire someone to do it for you, in which case you need money and someone else’s time). But the end result is the same — physical presence is required to compromise the system. Since physical presence will also let you bump locks, smash windows, cut power, flick light switches, open doors … you’re not worse off than before.