Ransomware

My company ran a ransomware response exercise recently – and, honestly, every ransomware response I’ve seen has been some iteration of “walk through backups until we find good files”. Maybe use something like SharePoint versioning to help identify a good target date (although that date may be different for different files … who knows!). But why wouldn’t you attempt proactive identification of compromised files?

The basis of ransomware is that it encrypts your data and you get the password after paying so-and-so a bitcoin or three. Non-government virus authors (i.e. those who aren’t trying to slow down Iran’s centrifuges) are generally interested in creating mayhem, and there’s not a lot of disincentive to creating mayhem while making a couple of bucks on the side. I don’t anticipate ransomware becoming less prevalent; in fact I anticipate seeing it in vigilante hacking: EntityX gets their files back after they publicly donate 100k to an organisation antithetical to their cause.

Since it’s probably not going away, it seems worthwhile to identify the malicious data scrambling immediately. Reverting to yesterday’s backups sucks, but not as much as finding that your daily backups have aged out and you’re stuck with the monthly backup from 01 Nov as your last “good” data set. It would also be good to merge whatever your last good backup is with the files that weren’t encrypted, so the only ‘stuff’ that reverts is a worthless scramble of data anyway. Sure, someone may have worked on a file this morning, and it sucks for them to find their work back-rev’d to last night … but again, that’s better than everyone having to reproduce their last two and a half months of work.

Promptly identifying the attack: There are routine processes that read changed files – Windows Search indexing, antivirus scanners, SharePoint indexing. Running against the Windows Search index log on every computer in the organisation is logistically challenging. Not impossible, but not ideal either. A central log for enterprise AV software or the SharePoint indexing log, however, can be parsed from the data centre. Scrape the log files for “unable to read this encrypted file” events. From there, a myriad of actions can be taken. Alert the file owner and have them confirm the file should be encrypted. Alert the IT staff when more than X encrypted files are identified in a unit of time. Check the create time-stamp and alert the file owner for any files that were created before they were first encountered as encrypted.
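As a rough illustration of the log-scraping piece, here’s a minimal sketch. The log path, the line format, and the alert threshold are all assumptions – a real AV or indexing product will have its own wording, so the regex would need to match whatever your scanner actually writes.

```python
#!/usr/bin/env python3
"""Sketch: scan a central AV/indexing log for "unable to read encrypted file"
events and flag a possible ransomware outbreak when too many show up per hour.
LOG_FILE, the line format, and THRESHOLD_PER_HOUR are assumptions."""

import re
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/var/log/av/scanner.log")   # assumed location of the central scanner log
THRESHOLD_PER_HOUR = 25                       # tune to your environment

# Assumed log line:
# "2024-11-01 14:02:13 WARN unable to read encrypted file: \\server\share\doc.xlsx"
PATTERN = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2} .*unable to read encrypted file: (?P<path>.+)$"
)

def scan(log_file: Path):
    hits_per_hour = Counter()   # "YYYY-MM-DD HH" -> count of encrypted-file events
    encrypted_paths = []
    for line in log_file.read_text(errors="replace").splitlines():
        m = PATTERN.match(line)
        if not m:
            continue
        hits_per_hour[m.group("ts")] += 1   # bucketed by hour
        encrypted_paths.append(m.group("path"))
    return hits_per_hour, encrypted_paths

if __name__ == "__main__":
    per_hour, paths = scan(LOG_FILE)
    for hour, count in sorted(per_hour.items()):
        if count > THRESHOLD_PER_HOUR:
            print(f"ALERT: {count} newly encrypted files seen during hour {hour}")
    # Dump the full list so it can feed a scoped restore job later
    Path("encrypted_files.txt").write_text("\n".join(paths))
```

The list it writes out is exactly the scope you want for the restore step below.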

Restoring only scrambled files: Since you have a list of encrypted files, you have a scope for the restore job. Instead of restoring everything in place (because who has 2x the storage space to restore to an alternate location?!), restore just the files recently identified as encrypted – to an alternate location or in place. Ideally you’ve gotten user input on the encrypted files and can omit any the user indicated they encrypted intentionally.
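A minimal sketch of that scoped restore, assuming the list produced above plus a hypothetical backup mirror path – a real job would drive your backup product’s own CLI rather than a straight file copy:

```python
#!/usr/bin/env python3
"""Sketch: restore only the files flagged as encrypted, copying them from a
backup mirror. SHARE_ROOT, BACKUP_ROOT, RESTORE_ROOT and the two input lists
are assumptions for illustration."""

import shutil
from pathlib import PureWindowsPath, Path

SHARE_ROOT = PureWindowsPath(r"\\server\share")           # live file share (assumed)
BACKUP_ROOT = Path(r"\\backupsrv\share_mirror")           # last good backup copy (assumed)
RESTORE_ROOT = Path(r"\\server\restored")                 # alternate location; point at the share to restore in place

encrypted = set(Path("encrypted_files.txt").read_text().splitlines())
# Files the owners confirmed they encrypted on purpose -- leave these alone
user_confirmed = set(Path("user_confirmed.txt").read_text().splitlines())

for item in sorted(encrypted - user_confirmed):
    rel = PureWindowsPath(item).relative_to(SHARE_ROOT)   # path relative to the share
    src = BACKUP_ROOT.joinpath(*rel.parts)
    dst = RESTORE_ROOT.joinpath(*rel.parts)
    if not src.exists():
        print(f"no backup copy found for {rel}")
        continue
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                                 # preserves timestamps
    print(f"restored {rel}")
```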
