The Cyber Security Expert | Some analysis of the DHS and FBI Russian hacking IOCs

Some analysis of the DHS and FBI Russian hacking IOCs

As 2016 closed, the US took the unusual step of releasing a report attributing a series of hacks, and subsequent leaks of stolen information, to Russian intelligence services (RIS in the vernacular of spooks). The report consisted of some technical analysis and indicators of compromise (IOCs) intended for use by cyber defenders.

You can find the Joint Analysis Report (JAR) and IOCs here. I’m not going to comment in depth on the attribution beyond noting that i) I think the open-source evidence that this was a Russian-intelligence-backed operation is pretty strong on its own, and I have no reason to doubt the published assessment of the US agencies, and ii) this release did not add much to the attribution case, and hence did not serve well to silence critics or reassure doubters.

On to the IOCs! 

The IOC list consists mainly of domains, lots of IP addresses, and hashes of malicious software. We provide a security monitoring and protection service for websites (built on OSSEC and modsecurity – both fantastic tools. Get in touch if you want to know more!), and as part of this we collect data and report on suspicious events. The JAR explicitly says that website scanning, and potentially hacking, is something this threat group is associated with. To quote:

“Some traffic that may appear legitimate is actually malicious, such as vulnerability scanning or browsing of legitimate public facing services (e.g., HTTP, HTTPS, FTP). Connections from these IPs may be performing vulnerability scans attempting to identify websites that are vulnerable to cross-site scripting (XSS) or Structured Query Language (SQL) injection attacks. If scanning identified vulnerable sites, attempts to exploit the vulnerabilities may be experienced.”

This seemed relevant to the data we collect, and indeed one of our customers is in a sector mentioned in the JAR as being targeted. So we decided to see what use we could make of the IOCs.

TL;DR – Not much.

The longer version: first, a couple of caveats. We only store security or otherwise unusual events. That can cover a lot of things – malformed or unusual user agents, HTTP errors, vulnerability scanning, actual attacks, and so on – but not normal browsing. So if someone browsed normally from one of the IOC IP addresses we wouldn’t have spotted it. Secondly, we’re a small company and as such have a relatively small client list; however, as noted above, at least one client falls into the at-risk category.

Moving on. There were 876 IP addresses released. A quick search across our database (approximately six months’ worth of collection) produced 288 unique IP address matches. That is to say, over six months 288 IP addresses on the naughty list visited one of the websites we monitor and did something ‘naughty’ at least once. This seemed pretty exciting, but our enthusiasm was soon dampened.
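The cross-match itself is simple set intersection. Here is a minimal sketch of the idea; the file name, file format (one IP per line), and in-memory event list are all hypothetical simplifications of what our monitoring actually stores.

```python
# Hypothetical sketch: match IOC IPs against source IPs seen in our
# collected security events. File name and format are assumptions.

def load_ip_set(path):
    """Read one IP address per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def match_events(ioc_ips, event_source_ips):
    """Return the unique IOC IPs that appear in our event data."""
    return ioc_ips & set(event_source_ips)

# Usage (hypothetical):
# ioc_ips = load_ip_set("jar_ioc_ips.txt")       # the 876 released addresses
# matches = match_events(ioc_ips, event_ips)     # unique hits in our data
```

Deduplicating both sides up front (sets, not lists) is what gives you the "unique matches" figure rather than a raw hit count.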

Others had criticised the IOCs for containing Tor nodes, so we decided to check: of the 288 matches, ~190 turned out to be Tor exit nodes. Given that Tor is widely used for malicious activity (and legitimate traffic too – I’m not anti-Tor), and that any user can pop out of any Tor exit node at any particular time, in-depth analysis of these IP addresses seemed fruitless.
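The Tor check is another set operation: partition the matches into exit nodes and everything else. This is a sketch under the assumption that you have a list of exit-node addresses covering the relevant period (the Tor Project publishes current exit addresses; for historical matching you would want an archived list).

```python
# Sketch: split matched IOC IPs into Tor exit nodes and the remainder.
# The exit-node list itself is assumed to come from elsewhere.

def split_tor(matched_ips, tor_exit_ips):
    """Partition matched IPs into (tor_exits, other_suspects)."""
    tor = matched_ips & tor_exit_ips
    other = matched_ips - tor_exit_ips
    return tor, other

# tor, suspects = split_tor(matches, tor_exits)
# In our data this left fewer than 100 non-Tor suspects out of 288.
```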

That left us with fewer than a hundred suspect IP addresses. Our high-threat website was certainly the most targeted of the sites we monitor, but it is also probably the highest profile. Our analysis showed nothing that stood out as indicative of targeted or persistent scanning. Many of the IP addresses were flagged for incurring multiple 404s (which can be indicative of malicious activity), but the pattern seemed largely random. Many were blocked simply because of their user agent – a known bot or other automated scraping tool. We also saw a slight peak in activity around early October, but the data set is really too small to draw any strong conclusions from that.
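The triage above boils down to simple per-IP and per-week aggregation. A minimal sketch follows; the event-record shape (ip, status, timestamp) is a hypothetical simplification of the fields our monitoring records.

```python
# Sketch of the triage: count 404s per source IP and bucket activity
# by ISO week to look for peaks. Event format is a hypothetical
# (ip, http_status, datetime) tuple.
from collections import Counter

def count_404s(events):
    """Count 404 responses per source IP. Repeated 404s can indicate
    forced browsing or scanning, but are also common random noise."""
    c = Counter()
    for ip, status, _ts in events:
        if status == 404:
            c[ip] += 1
    return c

def weekly_activity(events):
    """Bucket event counts by (ISO year, ISO week) to spot peaks."""
    c = Counter()
    for _ip, _status, ts in events:
        year, week, _day = ts.isocalendar()
        c[(year, week)] += 1
    return c
```

With counts this small, a "peak" of a few extra events in one week is well within noise, which is why we drew no strong conclusion from the early-October bump.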

Overall the problem is the lack of context. IOCs like this are not terribly useful without additional supporting information. What sort of traffic is associated with this IP, and in what time frame? Also, is it a Tor node or not?

That’s not to say the IOCs are completely useless, just that our use case was not terribly productive. Run against historical records of network activity they might well produce more usable results, though I would still expect false positives (and this is supported by comments I have seen from other analysts). And of course the release contains more than just the IP addresses – those were simply all that was relevant in our case.

I have some sympathy with US-CERT, having been on the other side of the fence and tried to release this kind of information. Getting the classifications down to allow release is not easy, and often means the context and other supporting information are lost. Overall, though, I would not grade this release very highly – simple steps like stripping out the Tor nodes (or leaving them in but flagging them as such) would have simplified analysis, and some effort to give context and a time frame would have made the whole lot much more usable.

As always, if you have questions please get in touch! Find us on Twitter, or use the contact form.

I originally published this post on LinkedIn. Why not check out my posts on there too?

Thanks for reading!

Rob