Sunday, August 13, 2017

The need for dump analysis in Cyber Threat Intelligence (CTI)

Over the last year, there have been numerous dumps of stolen classified data posted on the Internet for all to see.  The damage from these dumps to the US intelligence community has obviously been huge.  In this post, we won’t discuss the actual damage of the dumps to the intelligence community (many others have already pontificated on that).  Instead, this post will focus on the need for CTI analysts to perform analysis of the dumps.

For the first time, CTI analysts have a view of what appears to be a relatively complete nation state toolset in the Shadow Brokers dumps and insight into tool development and computer network exploitation (CNE) tool requirements in the Vault 7 dumps.  These are game changers for CTI analysts. We define threat as the intersection between intent, opportunity, and capability.  These tools and documents highlight the capabilities of an APT adversary. Whether you believe the US intelligence services have the intent to attack your network, it is likely (almost certain) that other nation state attackers have developed similar capabilities.  Analyzing the data you have available (Shadow Brokers and Vault 7) can help shed light on what you don’t have available (every other nation state attacker’s toolset in a single dump).

Note: We understand that this is a sensitive topic. When classified data is released, it is still considered classified until declassified by a classification authority.  There is no evidence that any classification authorities have declassified the data in the Shadow Brokers or Vault 7 dumps.  It is likely that they remain classified to this day.  The advice in this article may put those with security clearances at odds with the advice of their security officers.  Please proceed with care.

Read the full post on the Rendition Infosec blog.

Saturday, August 5, 2017

Software plugins/extensions should be part of your threat model

Over the last few months we’ve seen multiple cases of warnings about plugins and extensions for various software packages threatening the security of users.  We’ve recently seen the Copyfish and Web Developer Chrome plugins compromised and used to push malware to users.

While Chrome is likely safe and should probably not be considered a threat, perhaps your plugins should be.  Plugins are developed by potentially malicious third parties. Even if your plugin developers are not themselves malicious, they have security concerns just like everyone else.  And make no mistake about it: when understanding software supply chain issues, their security is your security.

Read the full story here.

Wednesday, August 2, 2017

An important consideration for “bug bounty” programs

The US DoJ recently released guidance on running vulnerability disclosure programs (aka bug bounties).  The document is nothing earth shattering, but does provide some free advice to organizations considering such programs.

Rendition’s advice to organizations considering a bug bounty program? Think VERY carefully about how it will impact your monitoring and detection strategies. People looking for bugs will create noise in your network – a lot of it.  And the noise will look like attacks, because technically they ARE attacks. How will you separate this non-malicious attack traffic from real attack traffic you should be concerned about?
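
One practical way to handle that separation is to register researchers' source addresses when they enroll in the program and tag their alerts at triage time.  A minimal sketch in Python follows; the alert format, field names, and researcher ranges here are hypothetical, and a real SIEM integration would obviously be more involved.

```python
import ipaddress

# Source ranges registered by enrolled bounty researchers (hypothetical values).
RESEARCHER_RANGES = [ipaddress.ip_network(n)
                     for n in ("198.51.100.0/24", "203.0.113.10/32")]

def tag_alert(alert):
    """Route an alert to the 'bounty' queue if its source IP falls in a
    registered researcher range; everything else goes to 'review'."""
    src = ipaddress.ip_address(alert["src"])
    alert["queue"] = ("bounty" if any(src in net for net in RESEARCHER_RANGES)
                      else "review")
    return alert

alerts = [{"src": "198.51.100.7"}, {"src": "192.0.2.99"}]
tagged = [tag_alert(a) for a in alerts]
```

Tagged traffic still gets reviewed, of course; an attacker could deliberately source attacks from a researcher's network.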

Read the full post here.

Wednesday, July 12, 2017

Honestly evaluating the Kaspersky debate

So far, Rendition has posted on the Kaspersky debate twice.  In the first post, Rendition educated the public on why a software audit would not address the fears raised by the Senate.  The second post explained the damage that any antivirus software could perform in a network if its operation were taken over by a foreign government.  The second post is about more than just Kaspersky - as Rendition made clear in the post, it could apply to any antivirus software.

Bloomberg reports previously unknown Kaspersky involvement with Russian government
Yesterday, Bloomberg wrote an article claiming that Kaspersky is far more deeply involved with Russian intelligence than was publicly known.  At Rendition, we think parts of that reporting were careless, especially the interpretation of the words "active countermeasures."  "Active countermeasures" is not an industry standard term, a pet peeve of Rendition's founder Jake Williams, who has spoken on the topic at various industry events.  Bloomberg took the phrase "active countermeasures" to mean the following:
"Active countermeasures is a term of art among security professionals, often referring to hacking the hackers, or shutting down their computers with malware or other tricks."
We know of no such standard definition for "active countermeasures."  Even if Bloomberg got this definition from an infosec expert, any expert worth quoting would have told Bloomberg that their definition was one of many and not "generally accepted" by the community.  That this wasn't reported makes the whole article reek of bias - where there's smoke, there's usually fire.

Kaspersky responds to Bloomberg
Eugene Kaspersky posted a retort that addresses the Bloomberg article point by point. Kaspersky calls out some of the obvious problems with the article, including talking around the point made above.  But in his response, Kaspersky says something that is misleading if not outright false, and we think that needs to be addressed as well.

Read the full story here.

Tuesday, July 11, 2017

Is antivirus software part of your threat model? Maybe it should be...

Recently we learned that the US Senate was pushing to add language to the National Defense Authorization Act (NDAA) that would prohibit the purchase and use of Kaspersky software anywhere in the DoD. This is almost certainly a political move and CyberScoop’s Patrick Howell O’Neill did a great job of covering this story already from a political angle. It is entirely possible that the Senate’s statements about the NDAA are just political messages meant to rattle the sabers.

But should antivirus be part of your threat model? Perhaps it should. As Tavis Ormandy has shown over the last year, antivirus software is often full of security vulnerabilities. This is especially concerning because antivirus runs with elevated privileges, and those elevated privileges are exactly what make antivirus software so dangerous.

In considering this debate, it is important to consider the types of threats that antivirus software could pose if the vendor were subject to “influence” from a government. Obviously we are talking about this because of Kaspersky and the NDAA, but it is important to note that any antivirus company could be subject to the same attacks. The risk is not limited to antivirus companies that could be influenced – any software manufacturer with automatic updates could be used as an attack platform by a government. If one were hacked by an APT group (most likely a nation state), its customers would also be vulnerable (whether the software in question is antivirus or something else).

Read the full post here.

Tuesday, June 13, 2017

CRASHOVERRIDE guidance from NCCIC is confusing at best

After reviewing the awesome Dragos Inc report on CRASHOVERRIDE, Rendition analysts received a similar alert from US-CERT and NCCIC.  After reviewing the guidance from NCCIC, we were less than thrilled.  The second recommendation from NCCIC (take measures to avoid watering hole attacks) is impossible by definition.  In a watering hole attack, the attacker first compromises a legitimate remote site that your users already visit, then uses it to compromise your network.  Unlike phishing, the victim is never tricked into visiting a rogue site, so there is frankly no way for an organization to "avoid" one.  Unfortunately, the fact that this "mission impossible" is set as recommendation #2 means that many will stop trying to implement anything further down the list, assuming that the rest may also be impossible by definition.

Read the full post here.

Monday, June 12, 2017

CRASHOVERRIDE – monitor your IT networks (and OT too)

Last week Rendition Infosec founder Jake Williams contributed an article for next month’s issue of Power Grid International magazine.  The article highlights the need for utilities to monitor their IT networks in order to protect their OT networks from compromise.  Today’s release of the excellent CRASHOVERRIDE report by Dragos Inc only reinforces the points Williams made in his article.

A simple Shodan search will show many ICS devices directly connected to the Internet; the organizations running them obviously aren’t following best practices in the first place. Monitoring would certainly help these organizations detect threats as well, but they honestly have bigger problems that start with segmenting their networks.

For those utilities that have already segmented IT from OT (operational technology), monitoring the IT network is absolutely critical.  Most attackers enter the OT network from the IT side of the network through phishing emails or other commodity exploits.  They then noisily stumble through the network looking for the bridge between IT and OT.  Even if the networks are completely airgapped (few truly are in our experience), attackers will eventually find a way to get malware to the OT side.  But along the way, attackers usually make a ridiculous amount of noise trying to find the places where the IT and OT networks are joined.
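
That noise is detectable with even crude analytics.  As a minimal sketch (the flow format and threshold here are made-up values, not a production detection), flagging internal sources that fan out to an unusual number of distinct destinations will catch a lot of this stumbling:

```python
from collections import defaultdict

def noisy_sources(flows, host_threshold=50):
    """Given (src, dst) flow tuples from the IT network, return sources that
    touched more distinct destinations than host_threshold - crude detection
    of an attacker sweeping the network looking for the IT/OT bridge."""
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    return {src for src, dsts in fanout.items() if len(dsts) > host_threshold}
```

In practice you would baseline per-host fan-out first; vulnerability scanners and monitoring systems legitimately exceed any static threshold.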

Read the full story here.

Monday, May 22, 2017

The problems of PUA (Potentially Unwanted Alerts)

Recently we had a client call us about a problem on their network.  Rendition Infosec runs a 24×7 security monitoring service and had a client call about an antivirus alert for PUA (potentially unwanted application).  This class of alert is often difficult to tune out since attackers and administrators often use the same software tools.

Frequent examples of this are netcat (nc.exe) and psexec from SysInternals.  These tools are like the infamous “dual use technology” we hear so much about when sanctioning oppressive regimes.  When we receive an alert like this, we most frequently find that the alert can be attributed to the activity of a systems administrator.  However, there is a possibility that the alert represents the activities of an attacker.
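
One way to cut down on the back-and-forth is to enrich PUA alerts with what you already know about your administrators.  A minimal sketch, assuming a hypothetical alert format and made-up admin account and host inventories:

```python
# Known admin accounts and jump boxes (hypothetical inventory values).
ADMIN_USERS = {"jdoe_admin", "backup_svc"}
ADMIN_HOSTS = {"10.0.5.10", "10.0.5.11"}

def triage_pua(alert):
    """Return 'likely-admin' when a PUA alert (e.g. for psexec or nc.exe)
    originates from a known admin account on a known admin host; anything
    else gets escalated for analyst review."""
    if alert["user"] in ADMIN_USERS and alert["host"] in ADMIN_HOSTS:
        return "likely-admin"
    return "escalate"
```

Even the "likely-admin" bucket deserves periodic sampling - attackers love to operate from compromised admin accounts.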

Monday, May 8, 2017

Petition for Microsoft to disclose data about MS17-010

Rendition Infosec is sponsoring a petition asking Microsoft to disclose telemetry data around MS17-010. We've highlighted a number of reasons why we feel this is important for the security community as a whole.

It is almost certain that Microsoft has data around how these vulnerabilities were exploited by attackers. Revealing this data will help us better understand decisions made in the vulnerability equities process. It will also enhance understanding about how likely it is that vulnerabilities discovered by APT attackers are independently rediscovered by other attack groups. Finally, it will help policy makers assess whether the exploits reportedly stolen (and subsequently released) by Shadow Brokers were likely used to exploit other targets before being released to the general public. If you work in infosec, think computer security is a good thing to have, and/or believe in transparency, please consider signing our petition, linked below:

Thursday, April 27, 2017

Observations from the latest Internet-wide DOUBLEPULSAR scan

I've posted some notes from the latest Rendition Infosec Internet wide scans for DOUBLEPULSAR. Despite some reports to the contrary, it's not getting any better. In fact, it's a bit worse than earlier this week despite the uninstallation scripts moving around the Internet (note that Rendition Infosec does NOT recommend using these tools).

You can read the rest of the story here.

Monday, April 24, 2017

DOUBLEPULSAR (NSA malware) infects more than 3% of machines with SMB exposed to the Internet

After reading some early articles mentioning that DOUBLEPULSAR (reportedly NSA malware) infections were widespread on the Internet, our folks at Rendition Infosec thought the numbers might be inflated due to poorly implemented scans.  After performing some of our own scans, we are confident that these numbers are not inflated and at least 3% of the machines with TCP port 445 exposed to the Internet are infected with DOUBLEPULSAR.
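
For those curious how such scans work: public write-ups of DOUBLEPULSAR describe a detection "ping" built from an SMB Trans2 SESSION_SETUP request, where the Multiplex ID in the response reportedly comes back as 0x41 on a clean host and 0x51 on an infected one.  A sketch of just the classification step (building and sending the actual SMB request is omitted here):

```python
import struct

def doublepulsar_mid(smb_header: bytes) -> str:
    """Classify a Trans2 SESSION_SETUP response by its Multiplex ID, found at
    offset 30-31 (little-endian) of the 32-byte SMB1 header.  Public write-ups
    report 0x41 = normal echo, 0x51 = DOUBLEPULSAR ping reply."""
    (mid,) = struct.unpack_from("<H", smb_header, 30)
    if mid == 0x51:
        return "infected"
    if mid == 0x41:
        return "clean"
    return "unknown"
```
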

Read the rest of the story here:

Friday, April 21, 2017

A "Digital Geneva Convention" won't be a reality without reliable cyber attribution

Microsoft released their idea of a “Digital Geneva Convention” to help normalize behavior on the cyber battlefield.  The document, linked here, is generally well written and makes the case for an agreement of its type.

While the idea of regulating the cyber domain is not a bad one, the proposal depends on attribution, a field that is sorely lacking in reliability and repeatability.  I've outlined some of those problems here.

Tuesday, April 18, 2017

Business impact of the Shadow Brokers dump of Windows exploits

The Shadow Brokers have dumped their cache of exploits for Windows systems (supposedly stolen from NSA).  Although some were originally reported as zero-day exploits, this has since been proven incorrect by recent Microsoft patches.  However, there's still plenty of business impact.  In what I'm sure will be the first of many posts on this topic, I'm focusing on the problem of Windows Server 2003, which continues to be widely deployed.

Read the full post, complete with recommendations for businesses here.

Sunday, April 9, 2017

Russia “crosses the Rubicon” with newest Shadow Brokers dump

Russia is likely using the latest Shadow Brokers release to attempt to control the news cycle and take coverage away from the Syria conflict. Yesterday, in a political rant using broken English, the Shadow Brokers released the password for the encrypted zip file they seeded last year (link).

This release gives threat intelligence teams unprecedented insight into the capabilities of the Equation Group hackers. The dump appears to contain only Linux and Unix tools and exploits, so organizations running only Windows don’t need to react to tools in this release (though they should check their available netflow and firewall logs for evidence they have communicated with the redirection hosts posted here). For organizations running Linux and/or Unix, it should be noted that most of the exploits target older software versions. However, the dump is still significant for threat intelligence professionals. Because Equation Group is likely typical of other nation state hacking groups, the dump offers unprecedented insight into the capabilities and targets of an Advanced Persistent Threat (APT) actor.
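
Checking your logs against the posted redirection hosts doesn't require fancy tooling.  A minimal sketch, assuming whitespace-separated flow records with the destination IP in the second column (adjust the parsing for your own log format):

```python
def match_iocs(flow_log_lines, ioc_ips):
    """Return log lines whose destination IP appears in the IOC set of
    published redirection hosts.  Assumes 'src dst ...' whitespace-separated
    records; real logs will need format-specific parsing."""
    hits = []
    for line in flow_log_lines:
        fields = line.split()
        if len(fields) >= 2 and fields[1] in ioc_ips:
            hits.append(line)
    return hits
```
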
Read the rest of our analysis here.

Thursday, March 23, 2017

Ins and outs of Cyber Threat Intelligence

I was recently interviewed for an article explaining what CTI is. The article is published, but I think my full answers provide a bit more context.  You can read the article here.  If you want the full, unpublished answers I gave, check it out on my corporate blog.

Thursday, March 16, 2017

TicketBleed - making the case for packet capture

You may have heard about the TicketBleed vulnerability that plagued some F5 devices.  Though it's been a while, I figured it was worth the time to put out this case study on why TicketBleed makes the case for packet capture at strategic locations.

I personally don't think a bug impacting so few devices (it requires a non-default configuration) is worthy of naming, but hey, to each their own.  If you are an F5 customer, you certainly care about the bug and I don't blame you a bit.  Otherwise, meh.

But I think the #TicketBleed vulnerability is a good example of why you need full network traffic capture.  I covered the HeartBleed vulnerability pretty extensively for SANS and dealt with the fallout in customer environments. One of the questions people asked me consistently was "how can I tell if I've been exploited?"  Unfortunately, successful exploitation would not create any logs.  Netflow is also useless for detecting HeartBleed exploitation.  The same is likely true for TicketBleed.

But with packet capture, HeartBleed (and now TicketBleed) can be detected.  But where should your taps for packet capture be located?  Most would say after traffic is decrypted (e.g. after a VPN concentrator).  I would normally agree with this advice, but vulnerabilities like TicketBleed and HeartBleed show that this isn't always the right answer.  There are exceptions to every rule.  In order to execute the discovery Course of Action (CoA), you need some amount of packet capture.
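
With full packet capture, detection reduces to pulling the heartbeat messages out of the capture and comparing the declared payload length against the bytes actually sent - the classic HeartBleed trigger declares far more payload than it carries.  A sketch of that check for heartbeats visible in the clear (extracting the records from the pcap is omitted, and the field layout follows the TLS heartbeat message format: 1-byte type, 2-byte big-endian length, then payload and padding):

```python
import struct

def parse_heartbeat(record: bytes):
    """Parse a TLS heartbeat message: 1-byte type, 2-byte big-endian declared
    payload length, then payload + padding.  Returns (type, declared, actual)."""
    msg_type = record[0]
    (declared,) = struct.unpack_from(">H", record, 1)
    actual = len(record) - 3  # bytes actually present after the 3-byte header
    return msg_type, declared, actual

def heartbleed_suspect(request: bytes) -> bool:
    """A heartbeat request that declares more payload than it actually sends
    is the classic HeartBleed trigger."""
    _, declared, actual = parse_heartbeat(request)
    return declared > actual
```

A benign heartbeat carries at least as many bytes as it declares (plus padding), so only malformed requests trip the check.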

Any time a serious vulnerability is discovered, there is always a question about whether you may have been exploited before you were patched.  This is especially concerning for those who have nation states as part of their threat model.  If you haven't considered a packet capture or other network security monitoring, please talk to us at Rendition Infosec and we'll be happy to help you plan for success in your next investigation.

Wednesday, February 22, 2017

Hacking the soda lobby - a few thoughts

I saw a wild story in the NYT about the use of restricted sale spyware to spy on proponents of the soda tax in Mexico.  I regularly talk to clients at Rendition Infosec who say "We wouldn't be targeted, who would hack us?"  I always respond that if you have something that lets you do business better than someone else, that data is valuable. And someone might target you to get it.

But it's not just intellectual property that directly supports business.  If you are influencing public policy, you might also be targeted. There's obviously a lot of money involved in public policy.  That's what seems to be happening in Mexico.  Proponents of the soda tax have been exploited using malware that is supposedly only sold to governments.

There seem to be three possibilities here, all clearly disturbing.  The first is that the proponents of the soda tax are being hacked by a government using the government-only tools.  The second possibility is that the government-only tools have been sold to a non-government entity.  The third possibility is that the spyware really has only been sold to governments, but the tools were leaked or stolen by an outsider.

Can you really keep hacking tools private?
I'll leave the first two possibilities for your imagination.  I'd like to look at the third possibility.  If the tools were stolen or leaked, that would be extremely disturbing, but probably not unprecedented.  There are certainly suspicions that Harold Martin, an NSA insider, was the source of the Shadow Brokers tool leaks.  In the case of Shadow Brokers, there is believed to be only one source for the hacking tools (supposedly this is NSA).  In the Mexico case, it isn't clear how many different governments have access to the commercial spyware.  But understand that each legitimate customer of those tools is likely a nation state hacking target.  Think about that the next time you hear your government talking about public policy for its hacking tools, exploits, and backdoors.  Or how about the FBI wanting backdoors to get into your iPhone? Given the Shadow Brokers leaks, losing any backdoor isn't outside the realm of possibilities.  Given time and daylight, we may find that the source of the cyber attacks targeting the Mexican soda tax proponents was a government that legitimately purchased the tools, but fell victim to hacking themselves.

Wednesday, February 15, 2017

HP printer security FUD highlights everything that's wrong with infosec

I'll start this post by saying I half expect to get a cease and desist over it.  If that happens, know that HP doesn't stand by their marketing and is using lawyers to silence those who would challenge them.

HP kicked off RSA with a big push on printer security.  I stopped by the booth, where they had a few people standing around (customers and employees).  I asked (seriously, not trolling) what the sales pitch was, saying I'd like to hear the technical side: "I'm an infosec consultant, what real world examples of printer exploitation can HP share with me that will get my clients serious about printer security?"  The obvious sales lady passed me off to one of the two engineers at the booth.  The other engineer was already talking to someone, and I overheard him tell that visitor:
You have to take print security seriously. Did you know that Stuxnet attacked the print spooler?
Now this is some USDA Grade A FUD.  First, Stuxnet didn't attack printers at all.  The print spooler? Sure. On Windows. If a tag line for your printer security campaign is a vulnerability patched in 2010 (MS10-061) on something that isn't even your platform, well, bravo.  I'd like to induct this salesperson/engineer into the snake oil hall of fame (or ninth circle of hell, I'm actually not picky here).

Before I could interrupt "Mr. Stuxnet," my engineer started talking to me.  I asked him the same question I asked the "sales only" person at the booth.  He said:
Did you hear about the tens of thousands of printers hacked in the last few weeks? It's been all over the news.
Um, yeah (he's talking about this), but those were printers directly connected to the Internet. I'm not concerned about that, or about Weev spewing garbage to my printers.  I asked about the narrative they seemed to be suggesting, where the printer is used to compromise the rest of the network.  Can your printer really be used to compromise the rest of the network? Have you ever seen it?  Do you have a single case, even without naming the victim, that you can point to?

*Note: I don't need anyone to cite a talk at *CON where someone did a PoC.  I'm talking real world.

It turns out the answer is no.  The HP engineer says that they have responded to dozens of networks (later throws out the number 60) and says that in two of them they've found printer firmware that "had a different checksum from ours."  He won't commit to saying it was modified, just repeats that "well, the checksum was different." I'm intrigued, tell me more.  He says that they suspect it would have been nation state since these networks were known to have been hacked by nation state actors in the past.  Okay, cool. Since you found this "modified" firmware, you certainly reverse engineered it, right?  Well, yes they did, but "unfortunately HP Labs is very secretive" and is "unlikely to share anything they find."  And this was like four years ago and I haven't heard anything back, so let's move on.

Wait, WUT?! You found firmware that wasn't yours loaded onto printers four years ago, sent it off for reversing, and you got nothing back?  This is the sort of thing that literally screams "take my money" and you have no information on it.  Okay, I think I see what you found: nothing.

When pressed for more examples of printer exploits, we got stuff like "Do you know what can be done with PCL and PostScript?" Yes actually, which is why you're really not scaring me.  Again, show me examples of this happening in the wild. Do you have document hashes in VirusTotal that show demonstrations of attackers doing this stuff?  My clients want to focus on real threats, not hypothetical ones.

Luckily I didn't pay attention to the video playing in the HP booth while I was there.  When I came back to my hotel room I actually watched this vile piece of filth.

WARNING: you'll never get this six minutes of your life back.

This video features tag lines like "nothing is safe" and "There are hundreds of millions of business printers in the world.  Only 2% of them are safe."  This is FUD to the nth degree.  The fact that Christian Slater, who stars in Mr. Robot, is hacking the printers is designed to give it a more realistic bent.  But it's not realistic.  How did Slater get on the wireless network to pwn these printers?  And were they configured with default credentials?  And how did he just happen to have firmware for these specific models?  And why, FFS, were the printers not checking a digital signature on the firmware?  So many questions...

And this all made me mad until Twitter user Jason Testart told me that HP was sending electronic marketing material to executives (likely CIOs) to get them to invest in printer security.  Then I was just furious.

According to Jason, that thing in the middle is a cell phone screen that plays marketing material (presumably the Christian Slater video?).  In today's digital age, it is environmentally irresponsible to create such waste in advertising materials.  Cell phone screens?  Really?  I'm sure this is flashy and I'm sure people look at it before throwing it away.  But I'm a little outraged at the waste - especially when used to spew such deceptive FUD.

Real Security
Securing your printing environments isn't hard.  If you need help, contact me at Rendition Infosec and I'll be happy to help you.  But I'll save you all sorts of money up front and tell you that it involves basics like not connecting your printer directly to the Internet, not exposing unnecessary services (turn off LDAP on your multi function device if you don't use it), and changing default credentials on print devices that have a web interface.  Turn off wifi if you don't use it (you shouldn't be using wifi for your printer, honestly).  Then back this all up with network monitoring.  Find your printer beaconing to China?  Isolate it from the rest of the network, then call someone who can reverse engineer the printer implant.  While you're at it, hunt the rest of your network.  The attackers didn't just compromise the printer.
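
The "printer beaconing" check is easy to approximate from flow data: printers should essentially never initiate connections to public addresses.  A minimal sketch with a hypothetical printer inventory (a real deployment would pull the inventory from asset management and read real netflow records):

```python
import ipaddress

# Inventory of printer IPs on the network (hypothetical values).
PRINTER_IPS = {"10.0.20.31", "10.0.20.32"}

def printer_beacons(flows):
    """Return (src, dst) flows where a known printer initiates a connection
    to a public (non-private) address - behavior printers should never show."""
    hits = []
    for src, dst in flows:
        if src in PRINTER_IPS and not ipaddress.ip_address(dst).is_private:
            hits.append((src, dst))
    return hits
```
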

Closing Thoughts
In closing, things like this make me sad to be in infosec.  I know that I'm not making much of a dent here raging on how dumb this printer security push is (in the overall scheme of things).  But please don't make my efforts in vain.  Your CIO probably got a cardboard encrusted cell phone screen from HP.  Talk to them.  Tell your CIO this is FUD. Tell them you probably need to invest money in basic infosec blocking and tackling.  Then feel good about changing the world for the better.

Thursday, February 9, 2017

Some Q&A about the law of Cyber Warfare

I am not a lawyer. I don't even play one on TV. But a group of lawyers did get together to try to decode how the laws of many nations apply to cyber warfare.  The Tallinn 2.0 manual is the output of this effort.

Some of the lawyers in this group got together yesterday for a panel to answer questions.  They tackle questions such as "is this an act of war?" and discuss the law of peacetime cyber operations.  The text is dense and probably requires a law degree to completely comprehend, but the Q&A session linked here was pretty good for background noise while driving.

Wednesday, February 8, 2017

Get ready for more mandatory training!

If you work at DHS (and honestly, probably anywhere in .gov) you should brace yourself for more mandatory training.  H.R. 666 (yes, I immediately noticed the irony of that number) has passed the House and is heading to the Senate.  It would require that Homeland Security develop awareness programs (read: more mandatory annual training) to help users spot insiders.  The bill of course does much more than that.

I'm particularly impressed by these two items.  If the legislation passes the Senate, (G) will ensure that by law the Insider Threat Program is informed about current technology and trends.  This is much better than relying on your adviser's BS in IT from ten years ago to steer decisions (sadly, this isn't a hypothetical).

I also like section (H) where metrics are required.  As we regularly tell Rendition Infosec customers, metrics are critically important to ensuring program success.  Of course some customers hate metrics, and we get it.  They aren't sexy (downright boring in many cases), but they are critical.  Effectiveness for an insider program will be difficult to measure, so it will be really interesting to see what metrics DHS selects for the program. 

Friday, February 3, 2017

Why you should care about CVE-2017-0016 (new SMBv3 0-day)

I've seen a few people talking on social media about how CVE-2017-0016 is just not a big deal.  They correctly point out that it can't trigger Remote Code Execution (RCE) and can only be used for Denial of Service (DoS).  Both of these are correct.  More than a few people have made the mistake of saying something to the effect of "if you have SMB listening on the Internet, you have all sorts of other problems."

But those making the latter statement don't understand the vulnerability.  Unlike many previous SMB vulnerabilities, such as MS08-067 (used by Conficker), which required a host to be listening on SMB ports (TCP 139 and TCP 445) to be exploited, this vulnerability requires only that the vulnerable host be able to talk to an attacker on those ports.  So SMB need not be exposed to the Internet in the traditional sense for a host to be exploited.  If an attacker can get a user (or an automated process) to visit a malicious link over SMB, the exploit will succeed and the machine will crash.

What do you need to do?
Microsoft has not yet made a patch available.  However, there is a publicly available PoC script so attackers can cause mischief today with no work on their part.  You need to ensure that your networks don't allow TCP 139 or TCP 445 outbound.  Due to the SMB worms of the past, most residential ISPs block TCP 139 and TCP 445.  Most business ISP connections do not.

There are tons of reasons not to allow TCP 139 and TCP 445 outbound - too numerous to repeat here.  If you want to test your network, I set up a test before Badlock last year.  The instructions for running it are here.  You really should make sure you block SMB outbound from your network; anything less is a ticking time bomb.  When we audit small business networks at Rendition Infosec, we see SMB allowed outbound with surprising regularity.
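
If you'd rather script the check yourself, a simple outbound connection test against a host you control will tell you whether SMB ports can leave your network.  A minimal sketch (the test hostname below is a placeholder, not a real service):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt an outbound TCP connection.  True means the port was reachable,
    i.e. your egress filtering did NOT block it.  Run only against a host
    you control and that is listening on the ports you care about."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example egress check for the SMB ports (hypothetical test host):
# for port in (139, 445):
#     status = "reachable" if port_reachable("egress-test.example.com", port) else "blocked"
#     print(port, status)
```
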

Saturday, January 28, 2017

Cyber attackers cause safety issues

Update: According to @cybergibbons, who checked with the hotel in the story, the story was fake.  That's disturbing, so we won't be using this as a cautionary tale.  Locking people in their rooms seemed far fetched, but locking people out of their rooms is still a huge (and believable) risk.  I'm going to leave this post up for the time being. We've seen backdoors left behind by cyber extortionists before, and we think it's wise to segment networks.  For those reasons alone, we think the post (though originally based on fiction) is worth keeping up.

Original post
I read a story today about a cyber attack causing safety issues, or more specifically a threat to human life. The attackers took over the key management system at a hotel with 180 guests and locked guests out of their rooms. Supposedly, even the guests in their rooms couldn't get out.  This is an obvious safety issue for guests.

The hotel paid a ransom in Bitcoin to restore service.  The attackers only asked for 1,500 EUR, but honestly could have probably gotten far more given the seriousness of the mayhem they were causing the hotel.  

A more important note is that the attackers left a backdoor in the hotel's system and tried to come back.  This wasn't the first time the hotel had been attacked - it has been hit at least twice before, though the article offers no details about the previous attacks.  The hotel management also noted that they've been in contact with other hotels that have had similar ransom situations.

There are some interesting takeaways here. First, if you need an example of a cyber ransom attack causing a possible threat to human life, here you have it.  I'll certainly be holding this one in my back pocket for future discussions with Rendition Infosec clients.  The possible liability here should be obvious (it's enormous).

Perhaps a more important takeaway is that the attackers planted a backdoor in the hotel's systems.  I don't disagree with the idea of paying a ransom. Do what you have to do to ensure safety.  People locked in their rooms are a fire hazard.  People who can't get into their rooms to get life saving medication are also obviously at risk.  So paying the ransom is the right thing to do.  But after paying the ransom, organizations should aggressively hunt for attackers on the network.  Machines that have been compromised cannot be reliably cleaned of all backdoors and malware.  Best practices require that the systems be rebuilt (not restored from backup).

After rebuilding the computer systems the hotel decoupled some of its systems from its core network.  This is very similar to best practices in ICS (industrial control systems) networks where IT (information technology) networks are separated from OT (operational technology) networks. There's really no reason for the hotel's key control system to be on the corporate network in the first place.  Only bad things can happen from this extra connectivity.  It's worth noting that "decoupled" could mean any number of things.  We can only hope that the system is truly separated from the corporate network.

Finally, the article says the hotel will be replacing electronic keys with regular keys.  This creates a whole new threat model, but the good news is that real keys never get demagnetized (as a traveler, I hate that).  The hotel will have to evaluate whether replacing digital keys with physical keys is best for the safety of its guests.  But this is a good reminder that maybe we shouldn't connect everything to the Internet (yes, I'm looking at you, IoT).

This is a great educational story that could have ended very poorly.  Instead, the hotel responded quickly and took steps to keep it from happening again.  They pulled victory from the jaws of defeat.  I'm sure there are some who will say the hotel was wrong for paying the ransom, that paying encourages the attackers to target other victims.  But it's unlikely that those making these claims have ever faced such a situation.

Thursday, January 26, 2017

Witchcraft as a Service (WaaS)

I read this hilarious Motherboard article the other night about a witch who claims she can use magic to drive out computer viruses.  Sure, this article is humorous (at least to infosec people reading this blog).  But I got to thinking that there are many products and services being seriously marketed in infosec that are no more effective than magic.  In fact, on more than one occasion, a vendor has described a process to me as "nearly magic."  Um, no. You've lost my attention.  Go sell your magic beans to someone else.  I don't need your beanstalk screwing up my security architecture.

I'm not sure which security firm (see what I did there?) will be the first to offer WaaS, but whoever it is would be wise not to take the advice of the real witch.

The witch says she "called in earth, air, fire, and water" to aid in clearing a virus.  But every A+ technician working the bench at Geek Squad knows that at least three of those things are bad for computers.  Something is already a little fishy about this technique.

Looking for the root cause? Feel "a snag"
However, she claims to be able to find where a virus got in.  It's where she feels "a snag."  I doubt I'll be using witchcraft in any Rendition Infosec incident response in the future. But the next time a lawyer asks me if I've explored every option, I'll point them to this article (all the while hoping they don't call in a witch or a psychic to uncover the logs that rolled over 6 months ago).

A little further into the interview, the witch is asked about demons infecting computers.  I'm pretty sure the interviewer meant to say "daemons" instead of "demons" and everyone just got confused...

Let's get serious for a minute
This has been fun (for me at least), but let's seriously talk about the logical fallacy she uses to deal with those who discount her work.  She essentially says "before you can challenge me, you must first read The Spiral Dance" (paraphrasing here).  She discounts the naysayers, saying "they say incredibly stupid things," and appears to presume it is their lack of knowledge that causes them to question her magic.  This is an ad hominem dismissal (sometimes called the "courtier's reply"): rather than addressing the merits of the argument, you attack the arguer's credentials and dismiss it.

Unfortunately, I see this approach used in infosec far too often.  Infosec professionals may assume that those they are arguing with lack the knowledge to "see the light," as it were.  Sometimes this is true, but I caution you to use this (hopefully) humorous example to learn about logical fallacies so you can avoid them in your own work.

Wednesday, January 25, 2017

Kaspersky head of computer incident investigations arrested for treason

There's some shocking news out of Russia this morning that the head of computer incident investigations at Kaspersky Labs was arrested for treason.  According to this article, he was arrested in December and his arrest may be linked to another arrest in the FSB around the same time.

Update: Forbes is reporting that the charges stem back to an investigation into the deputy head of the FSB's information security center (CDC) Sergei Mikhailov.  Moscow Times reports that the arrest was related to taking money from foreign sources and also draws a connection to Mikhailov and the CDC.

According to this source there were changes to Russia's treason laws in 2012 that include the following definition for treason:
providing financial, technical, advisory or other assistance to a foreign state or international organization . . . directed against Russia's security, including its constitutional order, sovereignty, and territorial integrity
It's worth noting that the definition above is very broad and could easily cover publishing (or attempting to publish) information about a Russian state sponsored hacking group.  That would definitely be "technical assistance" and would definitely aid "a foreign state."  Also, it's pretty easy to see how it would hurt "Russia's security."  All the elements are met in a hypothetical scenario where a security researcher in Russia is charged with treason for "outing" a Russian state sponsored hacking group.

Where is Russia on this?
The GRIZZLY STEPPE report made it clear that Russian state actors were involved in attempting to manipulate the US elections.  The arrests happened in December shortly after the elections, but it would be a logical leap (post hoc reasoning) to assume that the timing of the arrest must be connected to the elections.  Still, this is an obvious connection that many will jump to (including several members of the press who have called me today).  The FSB is smart enough to know this connection will be assumed. If they wanted to get out in front of it, they could.  But they haven't.  I assess that whether or not the timing is connected, the FSB is comfortable with people assuming that it is (or at least raising the question).

Keep up the good fight
In any case, today I thank my lucky stars that I perform incident response in the United States where the government doesn't overtly try to suppress my freedoms.  That's not to say I don't have a healthy fear of our government when it comes to publishing security information (more on that in a later post).  But I seriously doubt that the US government would charge treason for investigating an incident involving our own network exploitation assets.  On the contrary, I feel pretty confident Russia could.

For those living and working under oppressive regimes, keep up the good fight.  But also remember that no incident response report or conference talk is worth jail time (or worse).

Also, to the GReAT researchers at Kaspersky Lab (I love your work), I hope this incident doesn't in any way tarnish your reputation. The actions of one individual should not be a measure of the group.

Tuesday, January 24, 2017

The Yahoo sale delay is good news for infosec

You may have heard that the sale of Yahoo to Verizon is being delayed.  This is obviously bad news for Yahoo.  But honestly, it's probably great news for infosec.

At Rendition Infosec, we've worked a fair number of breaches over the years involving newly acquired organizations.  In every case, the acquiring organization failed to perform good due diligence on the purchased organization.  They certainly did a financial audit, but failed to perform a security audit.  The price they paid to acquire the organization was in every case too high, since it was calculated without knowing about an ongoing breach.

Such is the case with Yahoo, only the deal isn't complete yet.  You can bet that Verizon will pay less for Yahoo than originally planned, if the deal goes through at all.

However, this isn't the case with most acquisitions.  In most cases, the purchase is complete before the breach is discovered.  And unfortunately, the purchasing organization is left holding the bag in these cases.  They paid more for an organization than it was worth and likely have buyer's remorse.  For smaller acquisitions, they might also spend more on incident response, breach notification, and reputation damage than they paid for the organization in the first place.

Then there's the very real concern that the smaller organization is being used as a compromise vector for the acquiring organization (which likely has better security).  We've seen evidence strongly suggesting this has happened in at least one case (and circumstantial evidence for other cases).

Given the importance of cyber security in today's marketplace, M&A teams would be wise to use threat hunting from external teams as a resource.  The cost of threat hunting, while not cheap, is far cheaper than making a bad purchase.  We anticipate that contracts can be structured such that if compromise is found, the acquired organization pays the bills for threat hunting.  Even if this isn't the case, the cost is cheap insurance for the acquiring organization.

If I'm reading the tea leaves correctly, this means more threat hunting jobs by external teams.  It should go without saying, but internal teams are certainly not what you want doing threat hunting for this purpose.  All in all, this is great for infosec, particularly firms that specialize in DFIR.  I'd be remiss not to mention to you that Rendition Infosec provides these services using our own internally developed proprietary hunting software.  But whether you use us or someone else, don't acquire another organization without the due diligence of threat hunting.

Saturday, January 21, 2017

On the importance of picking good leaders for infosec

Over the last several years, I've noticed the CIO role increasingly going to people with business experience rather than IT backgrounds.  Likewise, the CISO position is often given to those lacking any hard skills in security.

I'm not trying to reignite the hard vs soft skills debate, but I do think it's worth discussing how important security understanding is to being a CISO. One organization I ultimately opted not to work with hired their "CISO" from within their app development ranks. You can't convince me you're serious about security if this is your plan.  If you want to put someone on a career development track to eventually be CISO, fine.  But an application development manager is no more qualified to be a CISO on day one than I am qualified to take over control of a 747 mid-flight.

At Rendition Infosec we regularly meet with infosec leaders - some have a wealth of domain experience and others unfortunately do not.  We're happy to help organizations of all shapes and sizes with directors from all backgrounds.  However, we know that we have better outcomes when dealing with leaders who really understand the infosec problem space.

Some will argue that a great leader can be a great CISO by surrounding themselves with good people.  I think this is fundamentally wrong. Honestly, I doubt this works really well in any field where you lack general domain knowledge.  You could surround yourself with great electricians, but you'd still be a bad chief electrician if you didn't understand panel load or Ohm's law.  And sorry if I offend any electricians in saying this, but information security is way more complicated. In other words, more domain knowledge is needed.

Armchair Experts
But infosec lends itself to armchair experts. It's in our very nature to overestimate our capabilities, especially on topics we find partly familiar. For better or worse, we all use smart phones and computers these days.  So perhaps we feel like we already have more domain knowledge here than, say, pipe fitting. But this is deceptive, since the problems of infosec are hugely complex and require massive amounts of domain knowledge.  My ex-father-in-law was a doctor - a legitimately smart man.  He had no problem paying a plumber to fix a leaky faucet or even a painter to paint a wall. But when faced with a computer problem, he couldn't get over the idea that he was paying someone to do something he could fix himself.  He couldn't understand the value because he overestimated his domain knowledge for cyber security.

Is this really a problem?
You bet it is.  A great example of this is Giuliani. Say what you will about the man, he has a lot of leadership experience. He's also reasonably adept at talking to the press.  So I was a little shocked to read this MarketWatch article where he was totally confused about cause and effect on a topic as simple as Y2K. As far as Giuliani was concerned (as of January 2017), we digitized systems to deal with Y2K.

Obviously this is wrong, and hideously so. But it highlights the type of messaging that can occur when leaders overestimate their domain knowledge in infosec. So what, we're never going to have another Y2K, you say? True. But we will continue to face complex security challenges, and others will assume you know what you're talking about.  Think I'm wrong? MarketWatch didn't question Giuliani's response at all and continued giving him airtime.
* The Giuliani reference isn't meant to be political in any way.  It just happens to be the best high profile example I can muster of leaders thinking they know more than they actually do about infosec.

Just get the checklist done
I regularly meet with infosec "leaders" (and I use that term lightly) who proudly present me with their list of things they will do to "finish securing our networks for good."  Whoa - no professional believes they will simply check a few boxes and be done.  That's insane. 100% delusional.  Unfortunately, directors and executives eat this stuff up and are convinced that soon they will be "done solving the information security challenge."

Infosec is a domain that requires professionals at the helm.  I don't want someone with an MBA doing hip surgery because he "understands business and has a hip he uses every day, so you know, he's qualified."  We need to stress the importance of understanding the security domain to our leaders.  If you already have a leader who doesn't understand the domain, you have two fundamental choices - either educate them or get a new job.  Sadly, the latter usually works best.

Thursday, January 12, 2017

Shadow Brokers - Russian thoughts?

Who are the Shadow Brokers?  Are they nation state?  If so, are they Russian government or Russian government sponsored?

The timing of the latest releases certainly makes that seem likely.  Along with the release of the GRIZZLY STEPPE report detailing Russian hacking, a number of Russian "diplomats" (probably spies) were kicked out of the US.  Apparently we also took their summer vacation home.

One week later, the Shadow Brokers released a dump including a file listing of Windows tools supposedly stolen from US intelligence agencies.  They also posted screenshots detailed here and here.  It's hard not to see this as a retaliation for the US expelling the Russian diplomats.  If it's not a retaliation, make no mistake about it: the Shadow Brokers knew that analysts would likely come to this conclusion.

But then in the early morning of January 12th, 2017 the Shadow Brokers dumped 61 Windows binaries (.dll, .exe, and .sys files).  They claim they only dumped the 58 tools that were detected by Kaspersky AV, but the dump contained 61 files.  A little anonymous birdie told me that Kaspersky only detects 43 of these files as of mid-day on the 12th. I don't like Russian software on my machines so I can't confirm whether or not that's true.

Shadow Brokers "final message"

So why dump the actual files themselves?  I think that since the dump of the filenames on Sunday there's been a lot of behind the scenes diplomatic talks and Russia decided the US wasn't taking them seriously.  In this case, releasing 61 files is a good way to be taken seriously, while holding back a huge cache of files.  "Feel some pain, but know we can hurt you again and again and again."

Of course, I could totally be wrong about this, but it sure is fun to watch what appear to be two countries' intelligence agencies battle it out in public.

Wednesday, January 11, 2017

Blind recruiting on LinkedIn - advice to recruiters

This message I got on LinkedIn today is a great example of how NOT to recruit infosec candidates.  Nothing about this message says "I actually looked at your profile."  Everything about this says "I never looked at your profile, but some word on there matches a search term."

Want to get serious about recruiting? Talk salary. Talk benefits. Tell me something that says "I read your profile."  Be careful about targeting business owners. Think before you just send.  Maybe ask yourself:
What am I offering that would make this job attractive to a business owner?  So attractive they'd leave the business and come work for me? 
If you can't answer those questions, think again about even sending your offer.  When I make successful sales at Rendition Infosec, I research the organization I'm selling to.  Honestly, infosec/cyber recruiters need to start doing the same.

Tuesday, January 10, 2017

Novel malware sandbox evasion

I was working on a piece of malware for a Rendition Infosec client recently and noticed a novel malware sandbox evasion.  Malware often tries to determine if it's in a sandbox and if so, performs different functions than when it is on an endpoint system.

This particular malware enters a loop and tries to connect to Google.  If the malware connects successfully, it goes on and does bad things.  If not, it sleeps and tries again.  And again. And again. That's good news for sandbox evasion: until the malware successfully connects to Google, you won't see anything bad.  For this (and other) reasons, this malware had really low detection and had no trouble bypassing antivirus on the client's system.
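To make the behavior concrete, here's a minimal defender-education sketch of that sleep-and-retry gate in Python. The hostname, timing, and function names are my own illustrative assumptions, not details taken from the actual sample.

```python
import socket
import time

def internet_is_reachable(host="www.google.com", port=80, timeout=5):
    # Single connectivity probe: can we open a TCP connection at all?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_internet(probe=internet_is_reachable, delay=60, max_tries=None):
    # Sleep-and-retry gate: loop until the probe succeeds.  Until it
    # does, no malicious behavior would ever execute - which is exactly
    # why an isolated sandbox sees nothing interesting.
    tries = 0
    while max_tries is None or tries < max_tries:
        if probe():
            return True
        tries += 1
        time.sleep(delay)
    return False

# Demo with an injected probe (no network needed):
flaky = iter([False, True])
print(wait_for_internet(probe=lambda: next(flaky), delay=0))
```

Injecting the probe function also shows why analysts use tools like FakeDNS: if the sandbox can make the probe succeed, the gate opens and the real behavior runs.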

The attacker knows, however, that tools like FakeDNS and a simple HTTP server could easily trick the malware into thinking it was on the Internet.  So here the attacker reads the data returned and checks that the first four bytes of the response are "<!do".  This string is the start of the "<!doctype html>" tag found at the beginning of the Google homepage (and many other sites).  I checked a few sandbox programs that try to mimic the Internet, and most of them just serve up an HTML page without the "<!doctype html>" tag.  I'd recommend adding this tag to your sandbox's canned responses if your sandbox is configurable.
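The check itself is tiny, which is what makes it so effective against canned sandbox pages. A sketch of the logic as described above (the sample pages are made up for illustration):

```python
def passes_doctype_check(response_body: bytes) -> bool:
    # The sample inspects the first four bytes of the returned page and
    # only proceeds if they are "<!do" - the start of the
    # "<!doctype html>" tag served by google.com and many real sites.
    return response_body[:4] == b"<!do"

# A canned page like many internet-simulation sandboxes serve:
sandbox_page = b"<html><head></head><body>OK</body></html>"
# A page mimicking a real site:
realistic_page = b"<!doctype html><html><head></head><body>OK</body></html>"

print(passes_doctype_check(sandbox_page))    # False - malware stays dormant
print(passes_doctype_check(realistic_page))  # True - malware proceeds
```

The fix on the sandbox side is equally small: prepend "<!doctype html>" to whatever page your internet simulator serves.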

This is a great time to remind everyone that sandboxes are useful tools but are no replacement for a good reverse engineer.  If you don't have a dedicated reverse engineering staff but would like to have the capability at your disposal, talk to us at Rendition Infosec and we can get you up and running on a retainer quickly.  Once you have a reverse engineering capability at your disposal, it's pretty amazing how much you'll actually use it.

Monday, January 9, 2017

More finds from the Shadow Brokers dump

Yesterday, I blogged about the Shadow Brokers dump and some takeaways.  I wanted to introduce another potential takeaway.  One of the lines in this screenshot published by Shadow Brokers says psp_avoidance.  What is Psp_Avoidance?  Is someone looking to avoid the PlayStation Portable?  Paint Shop Pro?  Doubtful...

I downloaded the screenshots published by the Shadow Brokers (which oddly don't include this one).  However, the dump does include the output of the find command run across the toolset.  Searching the directory listing output for the string "psp" turns up a number of XML files (among Python files and others).  Note the output below.
We have no idea what a pspFPs is, but what we see here seems to indicate that psp is a security product.  We also get some idea of which antivirus products are of interest to the group the Shadow Brokers stole the tools from.
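The triage here is nothing fancy - just case-insensitive string matching over the published file listing. A quick sketch (the paths shown are hypothetical stand-ins, not actual entries from the dump):

```python
import re

def grep_listing(lines, pattern="psp"):
    # Case-insensitive search over find(1)-style listing lines,
    # as used here to spot psp-related files in the dump.
    rx = re.compile(pattern, re.IGNORECASE)
    return [ln for ln in lines if rx.search(ln)]

listing = [
    "./windows/Resources/example/pspFPs.xml",     # hypothetical paths,
    "./windows/Resources/example/psp_avoid.py",   # not from the dump
    "./windows/Resources/example/readme.txt",
]
for hit in grep_listing(listing):
    print(hit)
```

The same one-liner approach works for any other string of interest (product names, exploit code names, and so on) in the listing.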

This additional find command output seems to support the idea that psp is nomenclature for a security product.
A few Google searches later, this one with the obvious terms "psp computer network operations", we get back as the fifth result this wonderful page from ManTech.  It details the ACTP CNO Programmer Course.  The course documentation indicates that PSP is an acronym for "Personal Security Product."

Thanks ManTech!

So, circling back around, what is Psp_Avoidance?  Obviously, we don't know - but if the acronym is correct, it would seem to be software built to evade personal security products, which directory listings suggest (as does ManTech) are antivirus programs.

Should you run antivirus products? Sure. At Rendition Infosec we tell customers that operating without AV is like driving a car with no airbags. But this dump suggests that advanced attackers have mitigations for antivirus products - a sobering reality for organizations without defense in depth.  Bottom line: AV is valuable, but the new dump casts a shadow on the effectiveness of antivirus against APT attackers.

Sunday, January 8, 2017

Implications of the newest Shadow Brokers offerings

Shadow Brokers are at it again, this time offering apparent Windows exploits and toolkits.  The timing of this does not seem coincidental. If Shadow Brokers are to be believed, they've been holding the tools for some time and just now releasing the Windows toolkits.  Previously, they have released other tool sets, but nothing that operated against or exploited Windows.

The Tools
What of the tools?  There's little specific information about the tools, but I've included some images here from Twitter.  In the past I've embedded tweets, but when accounts get suspended, the tweets are no longer available.  It's at least plausible that this account will be suspended...

This screenshot shows the price for individual components.  Most interesting perhaps is the fact that the exploits contain a possible SMB zero day exploit.  For the price requested, one would hope it is a zero day.  The price is far too high for an exploit for a known vulnerability.

This screenshot shows a number of names of apparent tools in the dump.  Of particular interest are the version numbers.  Note that most of the tools have apparently been through multiple revisions, adding apparent legitimacy to the claim that these exploits are real.  Though another screenshot hints at a possible zero day SMB exploit, there's no indication of which exploit names involve SMB (or any other target service). 

The exploits named "touch" in the screenshot do, however, seem to offer some ideas of services that might be interesting.  Of particular interest is WorldClientTouch - suggesting that perhaps one of the code-named exploits works against MDaemon's web based email client?

Finally, this screenshot seems to show some information about the tools available.  Some capabilities like "GetAdmin" and "PasswordDump" seem rather obviously needed capabilities.  

However, the listed plugin "EventLogEdit" is significant for digital forensics and incident response (DFIR) professionals investigating APT cases.  While we understand that event logs can be cleared and event logging stopped, surgically editing event logs is usually considered to be a very advanced capability (if possible at all).  We've seen rootkit code over the years (some of it published on a now defunct site) that supported this feature, but it often made the system unstable in the process.

Knowing that some attackers apparently have the ability to edit event logs can be a game changer for an investigation.  If Shadow Brokers release this code to the world (as they've done previously), it will undermine the reliability of event logs in forensic investigations.  CyberArk recently claimed that event logs might be subject to tampering, though it doesn't appear that they were discussing the Shadow Brokers capability specifically.

The Timing
So what do we make of the timing? It's hard to believe that the timing is purely coincidental and has nothing to do with the US intelligence release about the Russian hacking of the DNC.

The theory that immediately comes to mind is that Shadow Brokers are Russian or Russian operatives and the release of the Windows toolkit is retaliation for the report.  Unlike previous dumps, this dump goes a bit further, showing screenshots of the GUI tools and execution of some scripts.  However, it is important to note that no tools are offered for proof of the dump this time. Only screenshots and descriptions of the tools are offered.

An alternative theory is that Shadow Brokers are not Russian and are timing this release to shift the blame to Russia.  There's unfortunately no way to test this theory.

Finally, there's the theory that the timing of this latest Shadow Brokers release has nothing to do with the intelligence community report.  This seems the least likely.  Shadow Brokers must have known that people would make this analytic leap, so even if they scheduled this release some time ago, the decision to go ahead given the release of the report on Russian hacking was done with the understanding that connections would be made.

Regardless of your feelings on timing or what we know of the tools themselves, this is certainly an interesting development.

Friday, January 6, 2017

Putting science in CTI with GRIZZLY STEPPE usage

At Rendition Infosec, we're trying to put some science into Cyber Threat Intelligence (CTI). We're really interested in how customers are using the GRIZZLY STEPPE (Russian APT) report. We'll publish a report on the responses (in aggregate) in the coming week. Please make your voice heard and contribute to the community's understanding of how this data was used in the real world.

Wednesday, January 4, 2017

The JAR did more harm than good

The Joint Analysis Report (JAR) on GRIZZLY STEPPE did far more harm than good.  I've had numerous Rendition Infosec clients question me on what the indicators mean and whether they should be concerned.

Concerned about Russian hackers in your network? Not based on those indicators (most of them).

Concerned about the competence of government cyber analysts (or lack thereof)? Yeah, definitely.

There are 876 IP addresses in the GRIZZLY STEPPE IOCs.  There are several from Amazon EC2, and absent a date of when those IPs were actively used by Russian hackers, they are useless.  Less than useless.

My favorite IP address in the report, though, has to be the one that resolves to a Microsoft Dr. Watson crash reporting domain.  This makes it clear that nobody competent vetted the report.  Either that or someone at NCCIC has it out for Dr. Watson.

What's an indicator anyway?
These indicators aren't indicators.  To be more than data, an indicator has to indicate something.  These fall well short of that.  The report thankfully doesn't recommend blocking the IPs, but it also fails to say how hopelessly under-vetted they are.
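Even a minimal vetting pass would have caught most of these problems before publication. Here's a sketch of what that might look like; the field names, the injectable resolver, and the infrastructure patterns are all my own assumptions, not anything NCCIC actually provides:

```python
import ipaddress

def vet_ip_indicator(ioc, resolve=lambda ip: None):
    # Return a list of problems that should block deployment of the IOC.
    # `ioc` is a dict like {"ip": ..., "last_seen": ..., "context": ...};
    # `resolve` is an injectable reverse-DNS lookup (so this runs offline).
    problems = []
    ipaddress.ip_address(ioc["ip"])  # raises ValueError on malformed input
    if not ioc.get("last_seen"):
        problems.append("no date of activity: matches cannot be scoped in time")
    if not ioc.get("context"):
        problems.append("no description of what the IP actually indicates")
    host = resolve(ioc["ip"]) or ""
    if any(s in host for s in ("amazonaws.com", "torproject", "microsoft.com")):
        problems.append("resolves to shared/legitimate infrastructure: " + host)
    return problems

# A bare IP with no context, resolving to cloud infrastructure:
print(vet_ip_indicator({"ip": "192.0.2.10"},
                       resolve=lambda ip: "ec2-192-0-2-10.compute-1.amazonaws.com"))
```

An IP that fails every check here is exactly the kind of "indicator" that generates false positives instead of detections.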

My recommended action for these indicators is to ignore them until they have been better vetted.  NCCIC honestly owes a large number of network operators and incident response teams a formal apology for the time they wasted responding to this farce of a report.  The IOCs have triggered countless false positives in my customers' networks.  Even Rob Graham noted that he had two of the IPs in his browser DNS cache.  Worse, the report teaches corporate leadership to ignore future reports from NCCIC.  One day they'll have something useful to share, and based on this clown show, nobody will be left paying attention.

Monday, January 2, 2017

What should CTI teams be telling leadership?

In this post I want to address a problem that many CTI (Cyber Threat Intelligence) teams encounter on a fairly regular basis.  CTI teams rarely deliver good news.  After all, they are delivering information about cyber threats. The news is rarely great, and in less enlightened cultures, it really isn't what leadership wants to hear.  At Rendition Infosec, we are regularly asked to sugar coat reports to make them more palatable to leaders.  Now I'm not one for FUD (fear, uncertainty, and doubt), but I'm also not one for ignoring the truth.  And often, unfortunately, that truth is "we need help."  So in this post I'd like to address the question of whether it's better to tell leaders what they want to hear or what they need to hear.  To help illustrate the point, I'll use a CIA review of the book "What Stalin Knew" that I came across recently.  If you haven't read this review already, you should.

Tell them what they want to hear
Telling leaders what they want to hear is usually the easiest solution in the short term, but it can cause real problems in the long term.  "We're doing great on security and don't have anything to worry about" is all fine and good until you have a security incident and have to explain why you were wrong (or deceitful).  This approach can also increase liability if you are a contractor.  For internal employees, bear in mind that there are often sacrificial lambs brought to slaughter for every major security incident.  If your message is consistently "we're fine, don't worry," you may be that lamb.

Tell them what they need to hear
As pointed out in the book, this can get you killed while those who sugar coat the truth (or simply omit annoying facts) may prosper in your place.  Now you aren't likely to be killed for telling the truth, but you may not be promoted and might be marginalized in your existing position. If you are a contractor, you might not be invited to return.  But the good news is that this approach reduces liability and you'll probably sleep better at night doing it this way.

Take a blended approach
I personally think this is the best approach.  Executives and information technology professionals suffer from intelligence fatigue.  They need actionable intelligence to make decisions and operate effectively, but too much non-actionable information isn't a good thing.  At Rendition, we'll happily provide full details of all intelligence available as well as all recommendations for fix actions.  But we really prefer to focus on the top three to five threats and the top five to ten remediation actions.  We find that beyond those numbers, we're really over-saturating executives and exceeding the ability of IT organizations to act on the remediations presented. We work carefully with organizations to track their progress in actioning the intelligence provided, and then present the next most pressing threats and remediations.

What's the best approach?
What are your thoughts on the approach that CTI teams should take?  Continue the conversation on Peerlyst, leave a comment here, or hit me up on Twitter.

Cross posted from Peerlyst.

Sunday, January 1, 2017

Russian election hacking sanctions

I'm not touching on all the indicators released by the government in their report (yet).  I have lots of opinions on that, stay tuned.  What I really want to talk about are the sanctions and how ridiculously short sighted they are.  In my opinion, the sanctions were a publicity stunt designed to make people who don't know any better think that the administration was doing something significant.

Now I'll admit that declaring 35 Russian operatives in the US persona non grata (PNG) IS significant.  It takes time to find and train new embassy operatives (if you believe what you see on The Americans), and this will impact Russian intelligence for some time to come.  But expect that Russia will declare some of our "diplomats" PNG in retaliation; to not do so would nearly be an admission that the Obama administration was right about the hacking.  So expect this to be a zero-sum game.

But what about the sanctions on the people and companies involved in the hacking?  This is where things get ridiculous.  Few of the individuals named own property in the US, and honestly, if they do, I'm fine with it being seized under the sanctions.  But the idea that this will impact the three Russian companies named in the announcement is ridiculous.

First, they don't do business in the US.  Second, one of the companies listed (Zor Security) has reportedly been closed, and sanctioning a closed business is lunacy.  Several people have noted that the company still shows as active on the Russian business registry, but the owner claims to have shuttered the business some time ago.  In any case, she wasn't doing business in the US.  Unless Zor Security has assets in the US, the sanction is a publicity stunt.

The Department of the Treasury is using powers from the newly expanded Executive Order 13694.  If you haven't read the original order, you should start there before reading the just-issued amendment.  The real problem here is that the language is so broad that if Russia were to adopt the same language, it could sanction huge numbers of NSA and DoD contractors and government personnel.

Whole sections of DoD contractors are probably researching zero days, writing malware, and planning and executing cyber operations as I write this.  What makes them different from the people and organizations sanctioned by the US Government?  Maybe the types of cyber operations they engage in.  I think the intent was to limit the scope of the Executive Order to only certain types of hacking, but read on and you'll see it's pretty clear they missed the mark.

The definitions in (A) and (B) are pretty broad.  Any takers on a bet that US contractors haven't performed or materially contributed to one of these operations against a foreign government?

The problem with (C) is that cyber operations often have unintended consequences.  Intent and impact don't always align.  In an investigation I was involved in at Rendition Infosec, a database server suffered an apparent denial of service attack.  After examining the logs, we found the attacker had been there for months, sporadically executing queries against the database (to gain intelligence and/or trade secrets).  The attacker executed a query that used an inner join to create a sub-table and select from it.  The tables involved were huge, and the query exhausted available memory.  While the query was syntactically correct, it caused the server to stop responding to requests.  We can all agree the DBMS should have been more resilient to memory pressure, but that's opinion; I'm talking about reality.  The database was serving an ERP application, so the outage had significant financial impact for the organization.
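The incident query itself can't be reproduced here, but a minimal sketch (all table names and data are hypothetical, using SQLite purely for illustration) shows the mechanism: a syntactically valid join on a non-unique column multiplies rows, and at production scale that kind of blow-up can exhaust server memory even though nothing about the query is "wrong."

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
cur.execute("CREATE TABLE line_items (order_id INTEGER, sku TEXT)")

# 1,000 orders spread across 100 customers; 10 line items per order.
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 100) for i in range(1000)])
cur.executemany("INSERT INTO line_items VALUES (?, ?)",
                [(i % 1000, f"sku-{i}") for i in range(10000)])

# Joining on the unique order id: each order matches only its own 10 items.
good = cur.execute(
    "SELECT COUNT(*) FROM orders o "
    "JOIN line_items li ON o.id = li.order_id"
).fetchone()[0]

# Joining on a non-unique value: every order matches every line item that
# shares the value, so the intermediate result grows multiplicatively.
bad = cur.execute(
    "SELECT COUNT(*) FROM orders o "
    "JOIN line_items li ON o.customer_id = li.order_id % 100"
).fetchone()[0]

print(good, bad)  # 10000 100000
```

Here the "bad" join is 10x larger than the sane one on toy data; with tables of millions of rows, materializing that intermediate result is exactly the sort of thing that can knock a DBMS over without any intent to disrupt it.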

The intent was not to cause "significant disruption," but the impact certainly was.  The powers granted in the Executive Order make no distinction between intent and impact, and that could be an issue.  Even if you don't care because "F%$k Russia," remember that they (and others) may choose to judge US citizens by these same standards.

Regarding (D), it's official US policy not to use hacking to steal trade secrets for the financial benefit of US companies.  But given the number of classified Executive Orders released in the Snowden leaks, is it really unrealistic to believe this might be happening?  Whether or not you think it is, isn't it realistic that another government might think so and start sanctioning US contractors providing material support to cyber operations?

Finally, let's not be myopic about (E).  There are plenty of reports of the CIA tampering with elections.  I'll let you form your own opinion here, but I'm pretty sure that if the CIA (or whoever) is still tampering with foreign elections, they are using intelligence gained from cyber operations to do it.  That squarely fits the definition in (E).

I'm all for responsible sanctions, but the language used here does not consider potential blowback to US citizens.  Since it's largely accepted that China was responsible for the OPM hack, maybe I should be more concerned with China than Russia.  And speaking of which, where are the sanctions for Chinese companies providing "material support" to hacking operations?

Make no mistake about it, this Executive Order sends a powerful message.  Unfortunately, that message is "here's a road map for how to hurt us more than we can hurt you."  Think I'm wrong?  If you sell NSA an 0-day (or just give it away because you're a patriot), you would almost certainly fall squarely in the definitions of this EO.  Think this publicity stunt makes the US stronger?  Think again...