Saturday, April 26, 2014

Heartbleed Eradication Gone Wild

It’s well known that the Heartbleed bug is a threat to anyone who depends on TLS (a.k.a. SSL, HTTPS) to protect user credentials, session tokens, and any sensitive data available on a public website (or other server endpoints). See the heartbleed.com site for details.  Briefly, malicious clients can coax memory out of the TLS server. This includes not just data from individual client sessions, which would be limited to transient symmetric keys, but also any server-side configuration resident in memory. Worst of all, it can include the private key of the server itself, allowing attackers to impersonate well-known sites.
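To make the mechanics concrete, here is a toy Python sketch of the missing bounds check. It is a model, not an exploit and not the actual OpenSSL code: the heartbeat handler echoed back as many bytes as the request claimed to contain, trusting the length field rather than the actual payload size.

```python
# Toy model of the Heartbleed flaw -- not an exploit, not OpenSSL code.
# The vulnerable heartbeat handler returned as many bytes as the request
# *claimed* to contain, trusting the length field over the payload size.
SERVER_MEMORY = b"...session tokens...config...private key material..."

def buggy_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    # Conceptually, the request payload sits next to other server data.
    buffer = payload + SERVER_MEMORY
    # Missing check: claimed_length is never compared to len(payload),
    # so a short payload with a large claimed length reads past it.
    return buffer[:claimed_length]

# A 3-byte payload that claims to be 48 bytes leaks 45 bytes of "memory".
print(buggy_heartbeat(b"hat", 48))
```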

The weight of the server-side implications can’t be dismissed.  However, a stolen private key is only useful if the attacker can also insert themselves into the middle of transactions, which makes the threat something short of the doomsday that has been reported.  Then again, given what I've heard about the state of ISPs, maybe it isn’t such an exaggeration.  Whatever the case, the largest attack surface, and the largest threat, is that individuals’ credentials, session tokens, and sensitive data can be exposed on vulnerable sites.

Eradication should involve patching server endpoints, testing, revoking the certificates tied to keys that existed prior to the patch, and generating and installing new private keys.
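As a rough illustration of the re-key step, here is a minimal Python sketch, assuming the standard openssl command-line tool is on the host; the file names and subject below are placeholders. It generates a fresh key and a CSR to hand to your CA, after which the old certificate is revoked through the CA's normal process.

```python
# Minimal sketch of the re-key step after patching. Assumes the standard
# openssl CLI is installed; file names and the subject are placeholders.
import subprocess

def generate_new_key_and_csr(key_path="server.key.new",
                             csr_path="server.csr.new",
                             subject="/CN=www.example.com"):
    # Generate a brand-new 2048-bit RSA key (never reuse the pre-patch key).
    subprocess.run(["openssl", "genrsa", "-out", key_path, "2048"], check=True)
    # Create a CSR for the new key; submit it to the CA, revoke the old cert.
    subprocess.run(["openssl", "req", "-new", "-key", key_path,
                    "-out", csr_path, "-subj", subject], check=True)

if __name__ == "__main__":
    generate_new_key_and_csr()
```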

Once this is done, the doomsday panic is over.  However, the panic has widely extended to an effort to totally eradicate the vulnerable OpenSSL libraries wherever they are found.

What’s not talked about as much (in the general public, anyway) are the client-side implications of this threat, referred to as “Reverse Heartbleed.” The threat extends beyond a third-party attacker’s client extracting data from the server: when the client itself depends on a vulnerable OpenSSL version to establish a TLS session, a malicious or compromised server can extract data from the client using the same method the attacking client uses against the server.

What does this really mean to end-users? OpenSSL is embedded in a wide range of client software, but that range is still limited when you consider the colossal number of clients in use. Internet Explorer, Chrome, Safari, and Firefox are not vulnerable for the vast majority of people. However, Chrome on Android uses OpenSSL. Even so, that doesn’t mean any and all Android users are affected, because they have to be running the right (should say wrong, I suppose) version of OpenSSL. Reverse Heartbleed matters to only some end-users.

Of course, clients don’t have to involve end-users. OpenSSL could be used in client software on the backend, in a B2B scenario for example.
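As a first pass on the client side, a stack that links OpenSSL usually reports the version it was built against. Here is a minimal Python sketch using the standard ssl module, which exposes the linked OpenSSL version, to flag the known-vulnerable 1.0.1 through 1.0.1f range. Note that many distros backported the fix without changing the version, so a hit here is a prompt to check the vendor advisory, not a verdict.

```python
# First-pass check of the OpenSSL version this runtime is linked against.
# Heartbleed affects OpenSSL 1.0.1 through 1.0.1f (fixed in 1.0.1g).
# Caveat: distros often backport the fix without bumping the version,
# so treat a match as a reason to check the vendor advisory, not proof.
import ssl

def looks_vulnerable() -> bool:
    major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
    # The 1.0.1 base release has patch == 0; letters a..f map to 1..6, g to 7.
    return (major, minor, fix) == (1, 0, 1) and patch < 7

if __name__ == "__main__":
    print(ssl.OPENSSL_VERSION)
    print("possibly vulnerable" if looks_vulnerable() else "outside the vulnerable range")
```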

The threat to clients is covered by The Register here: http://www.theregister.co.uk/2014/04/10/many_clientside_vulns_in_heartbleed_says_sans/.

VPN is an example that sometimes involves end-user interaction and sometimes doesn’t. However, let’s not panic too much over this. (The news about Heartbleed VPN attacks centers on vulnerable servers, not clients.) VPN clients aren’t used to browse the net, after all; you most likely know exactly where your VPN clients are connecting. It’s possible, but unlikely, that your employees are connecting to VPN server endpoints outside of your control. And if you’re concerned about employees connecting in from home, you don’t really need to be, provided you have taken care to secure your own VPN server endpoints (including layered security, like monitoring and endpoint hardening).

Again, there is the threat of an attacker in the middle, which is a considerable one when your employees connect via café and hotel Wi-Fi. However, it should be less panic-inducing than the server-side threat. We accept this risk on some level whenever we allow our employees to surf on their company-owned laptops while connected to untrusted networks (web servers, etc.). Accordingly, this threat falls into a category we've already worked out how to manage.

Should you worry about VPN clients? It depends.

Other possible clients include a myriad of internal applications: database connectors, internal node-to-node communication, appliances, automated patching, and so on. Consider a client that connects to databases; it is very unlikely to be talking to any network outside of your control. It is most likely connecting your reporting software to the data warehouse, or an Apache server to an application's database. As with the VPN clients, the threat depends on whether there are connections to networks outside your control.

Should you worry about these clients? It depends.
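If you want a rough inventory of which internal binaries even link OpenSSL, something like the following Linux-only Python sketch, which shells out to the standard ldd tool, is usually enough to prioritize. The binary list is a placeholder for whatever your CMDB or package manager says you actually run, and statically linked OpenSSL won't show up this way.

```python
# Rough inventory sketch (Linux): run ldd against a list of binaries and
# report which ones dynamically link an OpenSSL 1.0.x libssl. The binary
# list is a placeholder; statically linked OpenSSL will not show up here.
import subprocess

BINARIES = ["/usr/sbin/httpd", "/usr/bin/psql", "/usr/sbin/openvpn"]  # placeholders

def links_libssl_10x(path: str) -> bool:
    try:
        result = subprocess.run(["ldd", path], capture_output=True, text=True, check=True)
    except (OSError, subprocess.CalledProcessError):
        return False  # not a dynamic executable, or not readable
    return "libssl.so.1.0" in result.stdout

if __name__ == "__main__":
    for binary in BINARIES:
        if links_libssl_10x(binary):
            print(f"{binary}: links libssl 1.0.x -- verify the exact patch level")
```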

I say make damn sure that your server endpoints are taken care of, including detecting anyone messing with them laterally (via some other attack vector, for example). From there, address the client threat the way you would any other vulnerability that doesn't get as much press and hype.

In all cases of client-side concern, the vast majority of clients most likely have very predictable connections to things that you control. Yes, these vulnerable clients could be used as a means to move laterally within your network, but that is true of many, many vulnerabilities that arise. In other words, this threat shouldn't be treated the same as the panic-in-the-streets threat that Heartbleed inspires for publicly facing servers.

While I can understand the aversion to having a nuanced response to this threat, I am left wondering if there isn’t a different kind of exploitation possible. Immediately eradicating the wrong versions of OpenSSL wherever they hide in your organization will be an extremely expensive operation. Let’s call this the nuclear response to Heartbleed. Could it be that the security org is overreacting or, worse, (consciously or otherwise) exploiting the hype to justify past and future expenses? 

I have an aversion to all of this cyber-warrior chatter (see RSA Conference keynotes) for this very reason. Little boys playing with guns always want bigger guns, after all.

An often-repeated axiom of risk is that you don’t spend a million dollars to address the risk of losing a million dollars; otherwise, you have realized your risk after all.

Those involved in the nuclear response are enjoying total support from executives who have read article after article about this apparently doomsday-level threat. However, I wonder if, when the mushroom clouds disperse, we won’t have some questions to answer about how smart it was to wage total war in a situation like this.

Reverse Heartbleed is mostly an ordinary vulnerability and should be handled according to pre-April-1st-2014 practices.

Without a sober approach to risk, including a prioritized response, we have effectively thrown out risk management and have rewound the clock over ten years in our approach to vulnerabilities.

This won’t be the last time we see a bug like this.  Have the rules changed because of Heartbleed?  I hope not.

Will we respond to the next one the same way or will we learn from this experience, including objectively measuring our response to Heartbleed to uncover mistakes, overreach, and excess spending? What if we have to respond to several like this a year? Will we go into the red and justify it by throwing around intangible threats like reputation?

Perhaps it’s easy for me to pontificate when I’m not the CISO with his job on the line. The nuanced response, after all, might leave the blind spot that winds up being the hole that gets you a pink slip. However, with access to top-notch SMEs, intelligence feeds, quality consultants, and the like, a measured response should be possible. I say this crisis is pretty much over once the server endpoints and some fraction of the client threats are addressed. After that, it’s routine vulnerability management.

Tuesday, April 15, 2014

No Inside

There is a persistent notion, one I have always found dubious, that we can hold ourselves to a different standard when an application is planned for internal deployment only.  This internal standard apparently applies to every aspect of the quality of our work, from usability to security.

I liken this mentality to the manager who acts one way when in a meeting with his direct reports and another when his boss is present.  It reveals a lack of character and integrity.  One should apply the same standard no matter the context.  If anything, this makes things a lot less complex.  There's no need to work on two distinct behaviors when one will do.

When applied to usability, we accept less-than-optimal experiences.  I suppose this is something like cooking for just the immediate family versus for a dinner party.  (Note that I'm the primary cook for my family.) We don't need some fancy pan sauce unless guests are coming over.  If a user interface is painful to use, well, just deal with it and don't be a whiner.  However, this isn't like cooking for the family; our employees have the option to leave, after all.  We should care about the experience for a number of reasons, including productivity but also perception.  What signals do we send if we don't care about quality internally?  When new internal systems are released with dead links and clunky interfaces, we're acting as if we don't care, and when we shrug and say "deal with it," we're acting as if we're running a Soviet bread line rather than a company that cares.

When applied to security, we also accept less than we know we can do.  We'll take time to design it the way we know it should be done.  However, we negotiate with ourselves as deadlines approach and trade optimal security for good enough.  I don't know how many times I've heard, "you do realize that this is behind the firewalls, right?"  (Note that most of our operational attention to firewalls applies to layers 2 and 3, while most of the threats today are on layer 7-- firewalls, shmire-walls.  Besides, a lot goes on behind the firewalls, including insider threats and compromised workstations.)

Why should an app's position relative to the firewalls change the equation at all?  I suppose we think good enough saves us time and money.  However, I'm certain that kicking better designs down the road stunts our growth and leaves us ill-prepared for the day we can't negotiate our way out of it.  We fail to make investments that we could use later, both in the technology and in the competencies of our workforce.

BYOD is here, whether official or not.  I realized this when I saw executives make the switch from BlackBerry to smartphones and tablets.  When I sat in the room with an exec taking notes on her iPad, I wanted to ask how she kept IP safe, but I bit my tongue.  Like it or not, it's here.  What this means is that our notion of any behind-the-firewall boundary is eroding... and fast.  Of course, these boundaries were already soft, since many of us can be off the corporate network, using our laptops to do much more than being internal permits.

It's best to assume that there is no inside, and not just from a security perspective.  If we are to fully commit to what is meant by Cloud Computing, anything we build in IT should carry the long-term possibility that it could be sold to others.  Any IT service could become a business service.

In practice, this means that we should always build quality inside and out.  Our user experiences should be more than just adequate; they should be pretty damn good.  We should align with standards when they're available to address cross-industry interop.  We should avoid proprietary security controls on the back end so that there's no need to refactor anything should the posture of the application become external or commercial.  We should stop seeing quality, especially security, as a tax and start seeing it as an investment.  We should build each app as if it's externally facing-- fully exposed to the expectations of the outside world, whether the threat is a usability critic or a bad actor.

(Note that this doesn't mean that I'll be making pan sauces for the kids every weekday.  Weekends?  Maybe.)
