Saturday, April 26, 2014

Heartbleed Eradication Gone Wild

It’s well known that the Heartbleed bug is a threat to those who depend on TLS (a.k.a. SSL, HTTPS) to protect user credentials, session tokens, and any sensitive data available on a public website (or other server endpoints).  See the heartbleed.com site for details.  Briefly, malicious clients can coax memory contents out of the TLS server.  The leakage is not limited to individual client sessions and their transient symmetric keys; it can include any server-side configuration resident in memory.  Worst of all, it can include the private key of the server itself, allowing malicious sites to impersonate well-known sites.

The weight of the server-side implications can’t be dismissed.  However, the usefulness of stolen private keys requires the attacker to find a way to insert themselves into the middle of transactions, making the threat something short of the doomsday that has been reported.  Then again, given what I've heard about the state of ISPs, maybe this isn’t such an exaggeration.  Whatever the case, the largest attack surface, and the largest threat, is that credentials, session tokens, and sensitive data from individuals can be exposed on vulnerable sites.

Eradication should involve patching server endpoints, testing, revoking the certificates for keys that existed prior to patching, and installing new private keys with freshly issued certificates.
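
To make the "patching" step concrete: the widely deployed vulnerable releases were OpenSSL 1.0.1 through 1.0.1f (fixed in 1.0.1g).  Here is a minimal Python sketch, not a substitute for a real scanner, that flags a host whose runtime links one of those versions.  Keep in mind that many distributions backport fixes without changing the version string, so a match means "investigate," not "confirmed vulnerable."

```python
# Minimal sketch: flag an OpenSSL build in the Heartbleed range (1.0.1 - 1.0.1f).
# Caveat: distros often backport the fix without bumping the version string.
import re
import ssl

VULNERABLE_RANGE = re.compile(r"OpenSSL 1\.0\.1[a-f]?(\s|$)")

def looks_vulnerable(version_string: str = ssl.OPENSSL_VERSION) -> bool:
    """Return True if the linked OpenSSL version matches the vulnerable range."""
    return bool(VULNERABLE_RANGE.search(version_string))

if __name__ == "__main__":
    verdict = "investigate further" if looks_vulnerable() else "looks ok"
    print(f"{ssl.OPENSSL_VERSION}: {verdict}")
```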

Once this is done, the doomsday panic is over.  However, the panic has widely extended to an effort to totally eradicate the vulnerable OpenSSL libraries wherever they are found.

What’s not talked about as much (in the general public, anyway) is the client-side implication of this threat, which has been referred to as “Reverse Heartbleed.”  The threat extends beyond a third-party client extracting data from a server: when the client itself depends on a vulnerable OpenSSL version to establish a TLS session, a malicious or compromised server can extract data from the client using the same method the third-party client uses against the server.

What does this really mean to end-users?  The set of clients that depend on OpenSSL reaches far and wide, but it is still limited when you consider the colossal number of clients in use.  Internet Explorer, Chrome, Safari, and Firefox are not vulnerable for the vast majority of people.  However, Chrome on Android uses OpenSSL.  Even so, this does not mean that all Android users are affected, because they would have to have the right (should say wrong, I suppose) version of OpenSSL.  Reverse Heartbleed matters to only some end-users.

Of course, clients don’t have to involve end-users. OpenSSL could be used in client software on the backend, in a B2B scenario for example.

The threat to clients is covered by The Register here: http://www.theregister.co.uk/2014/04/10/many_clientside_vulns_in_heartbleed_says_sans/.

An example that sometimes includes end-user interaction and sometimes doesn’t is VPN.  However, let’s not panic too much over this.  (The news about Heartbleed VPN attacks centers on vulnerable servers, not clients.)  VPN clients aren’t used to browse the net, after all; you most likely know where your VPN clients are connecting.  It’s possible, but unlikely, that your employees are connecting to VPN server endpoints outside of your control.  If you’re concerned about your employees connecting in from home, you don’t really need to be, provided you have taken care to secure your own VPN server endpoints (including layered security, like monitoring and endpoint hardening).

Again, there is the threat of an attacker in the middle, which is actually a considerable threat when your employees connect via cafe and hotel Wi-Fi.  However, this threat should be less panic-inducing than the server-side threat.  We already accept it on some level when we allow our employees to surf the web on their company-owned laptops while connected to untrusted networks (reaching arbitrary web servers and the like).  Accordingly, this threat falls into a category we've already considered how to manage.

Should you worry about VPN clients? It depends.

Other possible clients include a myriad of internal applications: database connectors, internal node-to-node communication, appliances, automated patching, and so on.  Consider a client that connects to databases: it is very unlikely to be connecting to any network outside of your control.  It is most likely connecting your reporting software to the data warehouse, or an Apache server to the application’s databases.  As with the VPN client concerns, the threat depends on whether there are connections to networks outside your control.

Should you worry about these clients? It depends.

I say make damn sure that your server endpoints are taken care of, including detecting anyone messing with them via a lateral threat (some other attack vector, for example).  From there, address the client threat the way you would any other vulnerability that doesn't get as much press and hype.

In all cases of client-side concerns, the vast majority of clients most likely have very predictable connections to things that you control.  Yes, there is the possibility that these vulnerable clients could be used as a means to move laterally within your network, but that is true of many, many vulnerabilities that arise.  In other words, this threat shouldn't be treated the same as the panic-in-the-streets threat that Heartbleed inspires for publicly facing servers.

While I can understand the aversion to having a nuanced response to this threat, I am left wondering if there isn’t a different kind of exploitation possible. Immediately eradicating the wrong versions of OpenSSL wherever they hide in your organization will be an extremely expensive operation. Let’s call this the nuclear response to Heartbleed. Could it be that the security org is overreacting or, worse, (consciously or otherwise) exploiting the hype to justify past and future expenses? 

I have an aversion to all of this cyber-warrior chatter (see RSA Conference keynotes) for this very reason. Little boys playing with guns always want bigger guns, after all.

An often-repeated axiom of risk is that you don’t spend a million dollars to address the risk of losing a million dollars; otherwise you have realized your risk after all.

Those involved in the nuclear response are enjoying total support from executives who have read article after article about this apparently doomsday-level threat.  However, I wonder whether, when the mushroom clouds disperse, we won’t have some questions to answer about how smart it was to wage total war in a situation like this.

Reverse Heartbleed is mostly an ordinary vulnerability and should be handled according to pre-April-7th-2014 (pre-disclosure) practices.

Without a sober approach to risk, including a prioritized response, we have effectively thrown out risk management and have rewound the clock over ten years in our approach to vulnerabilities.

This won’t be the last time we see a bug like this.  Have the rules changed because of Heartbleed?  I hope not.

Will we respond to the next one the same way or will we learn from this experience, including objectively measuring our response to Heartbleed to uncover mistakes, overreach, and excess spending? What if we have to respond to several like this a year? Will we go into the red and justify it by throwing around intangible threats like reputation?

Perhaps it’s easy for me to pontificate when I’m not the CISO with his job on the line.  The nuanced response, after all, might leave the blind spot that winds up being the hole that gets you a pink slip.  However, with access to top-notch SMEs, intelligence feeds, quality consultants, and so on, a measured response should be possible.  I say this crisis is pretty much over once the server endpoints and some fraction of the client threats are addressed.  After that, it’s routine vulnerability management.

Tuesday, April 15, 2014

No Inside

There is a persistent notion, one I have always found dubious, that we can hold ourselves to different standards when an application is planned for internal deployment only.  This internal standard apparently applies to every aspect of the quality of our work, from usability to security.

I liken this mentality to the manager who acts one way when in a meeting with his direct reports and another when his boss is present.  It reveals a lack of character and integrity.  One should apply the same standard no matter the context.  If anything, this makes things a lot less complex.  There's no need to work on two distinct behaviors when one will do.

When applied to usability, we accept less-than-optimal experiences.  I suppose this is something like cooking for just the immediate family versus for a dinner party.  (Note that I'm the primary cook for my family.)  We don't need some fancy pan sauce unless guests are coming over.  If a user interface is painful to use, well, just deal with it and don't be a whiner.  However, this isn't like cooking for the family; our employees have the option to leave, after all.  We should care about the experience for a number of reasons, including productivity but also perception.  What signals do we send if we don't care about quality internally?  When new internal systems are released with dead links and clunky interfaces, and we shrug and say "deal with it," we're acting as if we're running a Soviet bread line rather than a company that cares.

When applied to security, we also accept less than we know we can do.  We'll take time to design it the way we know it should be done.  However, we negotiate with ourselves as deadlines approach and trade optimal security for good enough.  I don't know how many times I've heard, "you do realize that this is behind the firewalls, right?"  (Note that most of our operational attention to firewalls applies to layers 3 and 4, while most of today's threats arrive at layer 7-- firewalls, shmire-walls.  Besides, a lot goes on behind the firewalls, including insider threats and compromised workstations.)

Why should where an app is in relation to firewalls change the equation at all?  I suppose we think good enough saves us time and money.  However, I'm certain that kicking better designs down the road stunts our growth and leaves us ill prepared for when we can't negotiate our way out of it.  We fail to make investments that we could use later, both in the technology and the competencies of our workforce.

BYOD is here, whether official or not.  I realized this when I saw executives make the switch from BlackBerry to smartphones and tablets.  When I sat in the room with an exec taking notes on her iPad, I wanted to ask how she kept IP safe, but I bit my tongue.  Like it or not, it's here.  What this means is that our notion of any behind-the-firewall boundary is eroding... and fast.  Of course, these boundaries were already soft, since many of us can be off the corporate network using our laptops to do much more than being on the internal network permits.

It's best to assume that there is no inside.  This isn't just from a security perspective.  If we are to fully commit to what is meant by Cloud Computing, anything we build in IT should have the long-term possibility that it could be sold to others.  All IT services could become a business service.

In practice, this means that we should always build quality inside and out.  Our user experiences should be more than just adequate; they should be pretty damn good.  We should align with standards when they're available to address cross-industry interop.  We should avoid proprietary security controls on the back-end so that there's no need to refactor anything should the posture of the application become external or commercial.  We should stop seeing quality, especially security, as a tax and start seeing it as an investment.  We should build each app as if it's externally facing-- fully exposed to the expectations of the outside world, whether the threat is a usability critic or a bad actor.

(Note that this doesn't mean that I'll be making pan sauces for the kids every weekday.  Weekends?  Maybe.)

Monday, March 24, 2014

The Walled Garden Has No Walls

If the contest is between walled garden and border-less security, I fall firmly in the latter camp.  Every year I'm reminded of this contest at the RSA Conference, where it seems that 90% of the attention falls in the walled garden camp.  The companies in this space have the big booths in the middle with all of the best schwag.  These are the peddlers of what Gunnar Peterson (@oneraindrop) calls magic pizza box.  The mentality is akin to defense manufacturing where bigger and more exotic (and expensive) tools get all of the attention and funding (and boy do they dream of the industry maturing to the level of the military industrial complex).  If the firewall is not enough, we need to escalate the complexity and sophistication of this toolset.  The firewall needs to be layer 7 and malware aware (great).  If the threat arrives over ports 80 and 443, beyond packets, whole conversations and packages must be inspected for their sanity.  We must have a TSA Checkpoint for anything allowed on these ports.  Strip-search those protocols with gloves!  We need virtual warriors pulling people aside and forcing them to give up their malicious intent.  We need intel akin to NSA and CIA to play with the bad guys on their own turf.

That's all very fun and intriguing to think about... and I believe the mindset is here to stay.  However, I say we need to build our software and systems as if they're exposed to the chaos of the internet.  We need to stop pretending that something else will protect the business and IT from themselves.  The evolution toward cloud business services has already put many on this path.  Whether or not your software is aimed toward the cloud (sooner or later you'll likely have to face this anyway), it must be built in the same manner as cloud applications.  When the defenses turn out to fall short (and they will), we need to be ready with secure code no matter where it is intended to live.

Anyone who has worked in security architecture for any length of time is familiar with the conversation.  In response to your assertion of an appropriately secure design, the reply is, "you do realize that these are behind the firewall, right?"  A quick bite of the tongue gets me through this as I remind myself that the person asking is in the majority, even among technical experts in IT.  To be honest, if my phone had a dope-slap button, I'd use it.

Example applications are COTS (commercial off-the-shelf) products, customized COTS, or home-grown applications intended to support internal business functions only.  I'll refer to COTS, but any of these examples apply.

My approach to designing for security with any application or service is a simple recipe where I focus first on Authentication, Authorization, Audit Logging, and Encryption (AAA + encryption).  Of course there's more to security architecture than this, but these are the foundational ingredients for integration. For some business cases, you might add transactional integrity, but where I work that's a small percent of engagements.

COTS applications have been permitted to play by different rules when behind the firewall.  Perhaps on some level, this has allowed us to put off the investment that would be required to make them safe to use.  We have permitted these applications to have limited options for integration.  Many of these applications will simply assume that they'll be wired to LDAP or Microsoft Active Directory.  Add a user to the COTS_App_Users group and they can simply authenticate the way they do elsewhere and start using the tool.  Some might use proprietary integrations, like CA's SiteMinder or Oracle's OAM.  This checks off authentication and coarse-grained authorization (either they are a permitted user of the app or not).
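
As an illustration of that pattern, here is a hedged Python sketch (using the ldap3 library) of coarse-grained authorization as a group-membership check.  The hostname, base DN, and group DN are hypothetical placeholders, not anything from a real deployment.

```python
# Hedged sketch: "add them to COTS_App_Users and they're in" as an LDAP bind
# (authentication) plus a memberOf check (coarse-grained authorization).
# Hostname, DNs, and group name below are hypothetical placeholders.
from ldap3 import Connection, Server

SERVER = Server("ldaps://ad.example.internal")
BASE_DN = "dc=example,dc=internal"
APP_GROUP = "cn=COTS_App_Users,ou=Groups,dc=example,dc=internal"

def is_permitted_user(username: str, password: str) -> bool:
    """Authenticate the user, then check the app-entitlement group."""
    user_dn = f"cn={username},ou=Users,{BASE_DN}"
    conn = Connection(SERVER, user=user_dn, password=password)
    if not conn.bind():                      # authentication failed
        return False
    conn.search(BASE_DN,                     # coarse-grained authorization
                f"(&(cn={username})(memberOf={APP_GROUP}))",
                attributes=["cn"])
    permitted = len(conn.entries) > 0
    conn.unbind()
    return permitted
```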

Depending on the platform, the audit logging might be standard Windows events or whatever the application decided to produce.  The standard audit log expectations should be authentication attempts (pass or fail) and CRUDE operations (create, read, update, delete, execute).  Typically an application will also make authorization decisions in the context of a transaction.  For example, an application might have to implement logic that ensures that the authenticated identity and the data it touches actually go together.
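
For what that baseline looks like in practice, here is a minimal sketch of structured audit events covering exactly those two expectations, authentication attempts and CRUDE operations.  The field names are illustrative, not a standard.

```python
# Minimal sketch: structured audit events for authentication attempts and
# CRUDE (create, read, update, delete, execute) operations.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(message)s")
audit = logging.getLogger("audit")

def log_auth(identity: str, success: bool, source_ip: str) -> None:
    audit.info(json.dumps({"event": "authentication",
                           "identity": identity,
                           "outcome": "pass" if success else "fail",
                           "source_ip": source_ip}))

def log_crude(identity: str, action: str, resource: str) -> None:
    # action is one of: create, read, update, delete, execute
    audit.info(json.dumps({"event": "crude",
                           "identity": identity,
                           "action": action,
                           "resource": resource}))

log_auth("jdoe", True, "10.0.0.15")
log_crude("jdoe", "read", "orders/12345")
```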

(An ideal solution would have the application playing only the role of Policy Enforcement Point, deferring the authorization decision to a Policy Decision Point.  The simple reason is that this means that what is ultimately highly specialized business logic is handled by security practitioners-- by those who accept their role in the defense of the business's assets.  This concept is often called Externalized Authorization Management (EAM) and a great example of the architecture is codified in XACML.)
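
To make the PEP/PDP split concrete, here is an illustrative sketch of an application acting only as the enforcement point and deferring the decision to an external PDP.  The endpoint URL and the request/response shape are hypothetical; a real deployment would follow the request/response profile (for example, the XACML JSON profile) of its chosen authorization product.

```python
# Illustrative sketch: the app is only a Policy Enforcement Point (PEP);
# the decision comes from an external Policy Decision Point (PDP).
# The URL and JSON shape are hypothetical placeholders.
import requests

PDP_URL = "https://pdp.example.internal/authorize"

def is_authorized(subject: str, action: str, resource: str) -> bool:
    """Ask the PDP for a decision and enforce whatever it returns."""
    decision_request = {
        "subject": {"id": subject},
        "action": {"id": action},
        "resource": {"id": resource},
    }
    response = requests.post(PDP_URL, json=decision_request, timeout=5)
    response.raise_for_status()
    return response.json().get("decision") == "Permit"

# In the application, enforcement stays a one-liner:
#   if not is_authorized(current_user, "read", f"claims/{claim_id}"):
#       deny_the_request()
```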

Finally, encryption can address the impact of a failure to protect data by other means.  In transit, it ensures that data cannot be read off the wire by other hosts on the same network.  At rest, it can address a failure to decommission hardware safely (the most common implementation of checklist security, as far as I can tell).  This usually means encrypting a cluster, a drive, a partition, a database, or a table.  It can also mean encrypting individual fields in a dataset, which is common for Identity Provider (IdP) implementations.  It might also include things like credit card numbers, social security numbers, and anything else considered an especially sensitive part of a record.
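
As a sketch of that field-level variant (the one that matters for IdP data, card numbers, and the like), here is a hedged Python example using the `cryptography` package's Fernet recipe.  Key management, which is the hard part, is waved away here and would come from an HSM or key management service in a real system.

```python
# Hedged sketch: field-level encryption of only the sensitive parts of a
# record, using the `cryptography` package's Fernet recipe. Key management
# (HSM/KMS, rotation) is out of scope here and is where the real work lives.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetched from a key manager
fernet = Fernet(key)

SENSITIVE_FIELDS = {"ssn", "card_number"}

def protect(record: dict) -> dict:
    """Encrypt only the sensitive fields; leave the rest queryable."""
    return {k: fernet.encrypt(v.encode()).decode() if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

def reveal(record: dict) -> dict:
    return {k: fernet.decrypt(v.encode()).decode() if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

row = protect({"name": "Jane Doe", "ssn": "078-05-1120", "card_number": "4111111111111111"})
```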

Usually encryption gets the most resistance from the walled garden camp because it complicates architecture and management (an apt concern from an operations standpoint). However, I have found that when the subject is integration in a cloud setting, as with a B2B SaaS, I get little resistance to granular encryption.  Everyone seems to agree that the data could fall into the wrong hands; that either the Cloud Service Provider (CSP) or another tenant might find their way to the data unexpectedly and that there should be some measure to address this.

Unfortunately, when the service is to be deployed within the walled garden, any perception of an especially high bar on all of these subjects tends to be discarded as we negotiate with ourselves on the way to releasing the service.

Betting the farm on defenses that few understand and that will fail over time is not only less safe; it's short-sighted from a security perspective and, most importantly, from a business perspective.

The vendor who sells to big enterprise customers that own and run their own datacenters is failing to prepare for a future in which fewer and fewer companies own their own datacenters (as cloud becomes a list of commodity IT services).  And those who do hold on to their datacenters are likely to sell excess capacity as a CSP eventually.  This will mean that they, too, will want everything in their virtual stack and beyond to have protections that assume they are, or could be, exposed to all of the threats implied by being directly on the internet.

The business that builds software for in-house use, or to be sold to others to install, is failing to accept that the perimeters and boundaries are already dissolving with the rise of BYOD.  How will its employees connect untrusted mobile devices to its industry-specialized time tracking software, for example?  Beyond BYOD, how can that custom IT service management tool be extended for use by consumer tenants?  How will your authentication handle identities from multiple organizations, not just the one reflected in your primary identity store?  How will your approach to authorization ensure that data can be protected in new contexts?  How can whatever consumes your data be made aware of, and honor, the policies for that data?  What if your software succeeds in ways that you never imagined in the beginning?

It is in everyone's best interest to assume that the walled garden does not exist.  From there we need to make the investments in design, process, people, and infrastructure to support the increased exposure of systems.  I agree with Chris Hoff (@beaker) when he asserts that this is not a matter of working without perimeters but of working with many more small, fine-grained perimeters.  Moving to the EAM approach and getting serious about encryption (and other crypto, like X.509) to whatever level is justified can be the foundation for these smaller perimeters.  Supporting richer claims (the calling system plus the on-behalf-of user, rather than simple system-to-system service credentials) is also essential, even on the back end.
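
For the claims point, here is one hedged way to picture it: the token presented on a backend call carries both the end user the request is on behalf of and the system doing the calling, rather than a bare service credential.  The claim names follow the OAuth 2.0 token-exchange convention ("act" for the acting party); the issuer, audience, and signing key are hypothetical placeholders.

```python
# Hedged sketch: a backend token carrying the on-behalf-of user ("sub") plus
# the acting system ("act"), instead of a bare system-to-system credential.
# Issuer, audience, and signing key are hypothetical placeholders.
import time
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-key"

def mint_backend_token(end_user: str, calling_system: str) -> str:
    claims = {
        "iss": "https://idp.example.internal",
        "aud": "reporting-api",
        "sub": end_user,                      # who the call is on behalf of
        "act": {"sub": calling_system},       # which system is acting
        "exp": int(time.time()) + 300,
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = mint_backend_token("jdoe@example.com", "system:billing-batch")
```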

Thursday, March 6, 2014

Contrarian in Depth

It was announced this week that the CIO of Target resigned following the well-publicized breach of credit card data.  Target was just one victim of many, but it sat at the center of the story due to its scale.  Scale should be an advantage, since it means defenses are well funded, but that was obviously not enough.  I assert that it is because they were far too predictable and conventional.

To understand the predictability of decisions made in the defense of corporations, I must first describe the climate of these corporations and how they are organized.  (Bear with me.)

As with a community, township, county, state, or nation, a company is a collection of individuals.  How a company behaves is the collection of decisions made by those individuals.  What propels a company is the actions taken, again, by those individuals.

Companies themselves break down into groupings that have distinct accountability and that govern themselves, as much as permitted, according to best practices for handling this accountability and all of the responsibilities therein.  The leaders of these groupings are the middle management.

The accountability assigned to these groupings allows the leaders above them to minimize the accountability placed directly upon themselves.  This is especially important in very large companies.

When the accountability of a particular group is large and unwieldy, the group is further pared down into additional groups.

Thus layers and layers of middle management are born.  Those who aspire to move upward in these layers are driven to demonstrate their ability to manage individuals and then groups beneath them.  When you cannot move upward, you build beneath.  Now we have the empire builders.  Each middle manager is permitted by their superiors to grow groups beneath them, because those superiors are also driven to demonstrate their ability to manage complex groups.  The ambitions of the managers below are harvested for the benefit of those above.

The experienced empire builder signs up for the right amount of responsibility within their org.  These responsibilities must be important enough to raise attention but not so important as to be dangerous to the survival of the empire.

This is the climate in which IT security lives just as any other IT function.

I joined an IT security group just as security was being recognized as a distinct and essential function within a large corporation.  I've seen it grow from a small collection of practitioners and thinkers to a complex organization with its own breakdown of distinct functions.


The result is predictable, and what's predictable is easier to understand from the outside.  An attacker can make assumptions that are likely to be correct.  The attacker can assume that the target has chosen tools that are common.  The careful middle manager does not take many leaps from what are perceived as common best practices.  If he is asked to choose a behavioral anti-malware solution, for example, he will choose whatever is perceived as the best by the industry.

The attacker can assume a well-funded company has FireEye or perhaps Palo Alto Wildfire.  Of course assumptions aren't necessary when the tool chosen is apparent from LinkedIn or can be easily pried from a boastful salesperson.

There are many actions that could be taken to address this.  LinkedIn can be monitored.  The company can subscribe to services that watch for activity that targets the corporation.  Additional solutions can be employed that fill in gaps or otherwise augment the limitations of the primary solutions.  Hell, if the budget permits it, double up the top-of-the-line solutions employed: don't choose the one best solution, choose the two best.

However, could it be a good move to go the other way and choose the less-than-obvious solutions?  A cautious practice of this might be to employ a solution perceived as the best but also one that is perceived as emerging. More radical, and perhaps contrarian, would be to choose a couple that are emerging.

Of course this approach would have the added benefit of embracing innovation in the industry.  Customers with complex requirements are fertilizer for companies with emerging ideas.

One could go further and decide to build solutions from scratch or from collections of available open source tools.  However, this is something most companies would have a hard time handling because they do not have core competencies in software (unless they are in the security industry).  What's worse, it's much more challenging to build confidence upwards, with senior leaders.  It's hard enough simply to get their attention, much less convince them that a home-grown idea is the best choice.  It's wiser, it appears, to go instead with the shorthand of sales brochures from large security firms whose top executives give speeches at conferences.  It's wiser to check the Gartner Magic Quadrant... the senior leaders will do this, after all.

What if Target had taken an unconventional approach while rolling out their point-of-sale (POS) systems?  The Target POS systems that were attacked were actually new implementations, very recently rolled out to the stores.  What if they had chosen something not just slightly customized, as was the reported case, but radically different?  What if, instead of a Windows variant, they had rolled out OpenBSD or some obscure embedded operating system?  (The common wisdom is that they should have had MFA for the leaked credentials, or should have segmented their systems better and used privileged access management, but this is the walled garden approach, which is essentially wishing that the perimeter/firewall convention could live on in perpetuity.)

The answer is that had they made a more contrarian move, they would not have been compromised (not this time, anyway).  The attacker could make broad assumptions about the retail industry, and those assumptions paid off.

Target very likely could not even have conceived of this kind of approach because of the friction across organizations.  Who would manage the OS if it was unfamiliar?  How would we harden it?  How would we deploy it?  Who would integrate common card readers with this system?  Would our compliance and vulnerability monitoring tools even provide coverage for a system so uncommon in corporate environments?  How would we patch it?  For that matter, who would design this?  The security org?  That's not what they do.  Who would champion it?  The CISO?  That would seem too assertive for an org that is essentially a cross-cutting concern.  The empire builders and their empires make the unconventional very difficult.

I had a conversation with a security SME from a local medical device manufacturer.  I had briefly spoken to a group about the *Internet of Things (IoT)*, and it had him thinking about his company.  So far, the IoT conversation has mostly been about what seems like silly nonsense: smart fridges and smart light switches.  In his industry, however, the IoT involves important tools that save lives or improve them greatly.  Increasingly these devices have an address and, with that, complex integration challenges.  Of course they also have major security challenges.

He bounced an idea off me that involved employing encryption in such a way that it is more difficult for attackers to unwind.  I embraced it for the same reasons I describe above.  I think they should invent to solve their problems.  Further, they should not settle on just one approach, but evolve it and perhaps employ different approaches for different devices (to limit the attack surface should one security invention be unwound).  I would normally advise people NOT to invent in the complex area of encryption, because this specialized area requires rare talent.  However, firms like Cryptography Research could aid in the design and vet the solutions prior to deployment.

I assume that even within this device manufacturing company, the same organizational barriers exist.  This will very likely destroy any chance for this idea to be realized.  Security folks aren't easily embraced as visionaries in industries that aren't focused on security.  It would take a rare talent to push this forward.  Unfortunately, the first step this person must take is to understand these barriers and call attention to them as he presents ideas.


Listening for and understanding this organizational friction is in the best interest of those who are ultimately accountable for security.  As the resignation of the Target CIO demonstrates, common best practices are not enough.  Contrarian and even radical ideas must come to the forefront to defend against the increasingly effective adversary.  Not doing what everyone else is doing is key to defense.  Of course, this cannot simply be security-by-obscurity, but well thought out defenses.

Simply building out a bigger security empire is not enough and will probably only make matters worse.  This is what's likely to happen now that Target has a CISO.

Contrarian in depth means that much more thoughtful and innovative remedies must be prescribed and expected from the security org.
