Monday, February 22, 2016

A Risk Based Datasource

I am often consulted on how, from an architecture perspective, we might raise the bar on handling sensitive data. In this case, let's say the data is social security numbers (SSNs). In other words, how can we handle SSNs consistently within an application and then promote the patterns and any reusable implementations to the enterprise?

The heart of the matter is most often improving the handling of sensitive data between the application and the datasource (let's assume it's a back-end database).

I think there are a number of ways that this can be approached. I propose a very simple solution: use the tools you already have.

First, I need to rant about how various vendors sell database security features and the failures of many to understand attack patterns.

No, full disk encryption isn't a silver bullet. If you're seeking a solution that addresses smash-and-grab theft or any mishandling of the hardware, then you have one. Definitely do this when there's sensitive data on a drive. However, this is probably the least of your worries, since your data centers are in distinct locations with elevated physical security and processes for decommissioning hardware (true, right?).

So does it solve data at-rest encryption requirements? Well, maybe enough to pass an audit. But what have you really done to limit access to data? The DBAs can see it. The OS administrators can see it. Anyone with an account can see it. Are all of these people properly provisioned? Is it required that they are able to see it to fulfill their business function?

Proprietary databases with baked-in encryption aren't all that much better. It's possible that you could keep your OS administrators out with solutions that tie decryption to external trust authorities (e.g., Vormetric), but you still have the DBAs. Say we trust both groups enough and decide that addresses the risk. You still have all the service accounts (applications connected to the database) and maybe the occasional business reports user. Does the database happily decrypt the data so long as the caller is authenticated? It probably does, so this doesn't do so much after all.

I saw an Oracle presentation a while back and spotted a lie in their diagrams. They talked up all of these features to address security concerns, but every diagram showed people interacting with databases. So that covers the occasional business reports accounts. Great. That still leaves 99.9% of the traffic coming from those service accounts. The honest picture, in many cases, would show people interacting with databases by way of a big, sloppy, crappy application, with a service account between it and the database on one side and the unwashed masses on the other.

Why does this matter? Because the real users of this data don't know the service account credentials, or rather haven't proven, from the database's perspective, that they deserve access. They have proven only that they can get into the application that uses the service account. In other words, the database defers authorization to data, including sensitive data, to that clumsy beast your company coded in-house with developers who have little appreciation of security (whether you can change this is another blog post). So, no, you can't say Bob logged into that app and therefore this query coming from service account MyBigAppUser is legitimate for Bob to execute. Sorry, Oracle, but this is the real problem that needs to be addressed.

So what can we do without paying more to the companies with the big booths at the RSA Conference? Or, more to the point, what can we do without writing database drivers or going completely off the rails trying to come up with something we imagine is perfect?

The solution is to dislodge the developers from their stoop. Force them to put on gloves when they handle sensitive data. Make them use a toothbrush to keep things clean. Make them use a special datasource when the risk of accessing data is higher.

How? Make it so that the service account, MyBigAppUser, can't see ANYTHING that is sensitive. Use views if you have that option: create a view so that if the application asks for sensitive data, it gets back nonsense, nothing at all, or an error (an in-app honeypot?).
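To make the idea concrete, here's a minimal sketch in Python using SQLite. The table, columns, and mask value are all hypothetical. In a real RDBMS you would also REVOKE the base table from MyBigAppUser and GRANT only the view, which SQLite's single-user model can't demonstrate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT,
        ssn  TEXT            -- the sensitive column
    );
    INSERT INTO customer VALUES (1, 'Bob', '123-45-6789');

    -- The view the ordinary service account is granted instead of the table.
    -- Sensitive columns are masked, so a compromised MyBigAppUser sees nonsense.
    CREATE VIEW customer_public AS
        SELECT id, name, 'XXX-XX-XXXX' AS ssn FROM customer;
""")

row = conn.execute("SELECT name, ssn FROM customer_public WHERE id = 1").fetchone()
print(row)  # ('Bob', 'XXX-XX-XXXX')
```

The application keeps querying what looks like the same schema; it simply never receives the real SSN through this datasource.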

So how do they get at sensitive data? They use a second datasource with a separate service account. Let's call it MyBigAppSecureUser. This user can see sensitive data. The data is encrypted at-rest and decrypted when it's queried, but every access is audited in a well-monitored, secure logging store (centralized, hard to corrupt). It could also have limits, like never returning more than one record because no use case would demand more... or maybe 10, or 20 only. Whatever the case, limit it. You could also insist on additional arguments that let us be reasonably sure the data belongs to Bob. Do it transparently, derived from a verified authentication token, and call it SQL augmentation. Review, and review again, the smaller code set that touches this datasource. Push dynamic testing at it, turned up to brute force and fuzzing. Have security analysts pen test it over and over, each time a change is released.
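A rough sketch of what such a restricted datasource wrapper might look like, again in Python with SQLite. The class, the row limit, the owner column, and the verify_token helper are all illustrative assumptions; a real implementation would validate a signed session token and ship the audit record to the centralized logging store.

```python
import sqlite3, json, logging, time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("secure-datasource-audit")  # would ship to a central, tamper-resistant store

MAX_ROWS = 10  # no legitimate use case needs bulk reads of sensitive data

def verify_token(token):
    # Placeholder: a real implementation validates a signed authentication token.
    return token["sub"]

class SecureDatasource:
    """Hypothetical wrapper standing in for the MyBigAppSecureUser connection."""
    def __init__(self, conn):
        self._conn = conn

    def query_for_subject(self, sql, params, auth_token):
        # "SQL augmentation": derive the subject from a verified token and
        # force it into the WHERE clause so Bob can only read Bob's rows.
        subject = verify_token(auth_token)
        augmented = sql + " AND owner = ?"
        rows = self._conn.execute(augmented, params + (subject,)).fetchmany(MAX_ROWS + 1)
        if len(rows) > MAX_ROWS:
            raise RuntimeError("row limit exceeded for sensitive datasource")
        # Transparent audit record for every sensitive access.
        audit.info(json.dumps({"ts": time.time(), "subject": subject, "sql": augmented}))
        return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (owner TEXT, ssn TEXT)")
conn.execute("INSERT INTO secrets VALUES ('bob', '123-45-6789'), ('alice', '987-65-4321')")

ds = SecureDatasource(conn)
rows = ds.query_for_subject("SELECT ssn FROM secrets WHERE 1=1", (), {"sub": "bob"})
print(rows)  # [('123-45-6789',)]
```

Note that Bob's token never lets him touch Alice's row, no matter what the application's query says.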

Now the developer has to groan and think about handling sensitive data. They have to go to the special datasource, the special data layer and ask for this special data. When they do this, you also tell them to create an audit log from the application or you provide a way to do it transparently.

When the attacker comes along with a SQL injection that works, maybe they wind up hitting the non-sensitive datasource. Hopefully this alone raises alarms. If they hit the sensitive datasource, alarms definitely ring, both from activity in the app and in the database. Spotting the anomaly is now easier, since we no longer mingle queries that involve non-sensitive data with those that involve sensitive data.

If queries to the sensitive datasource come from a compromised application server, this can be detected in the logging and app monitoring: the queries will lack correlation to front-end activity. Be sure to watch for this, and explain the threat to the log management team who will implement the correlation, monitoring, and alerting.
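The correlation check itself can be sketched in a few lines. The event shapes and field names below are made up for illustration; the point is simply that sensitive-datasource queries with no matching front-end request are the anomaly worth alerting on.

```python
# Hypothetical log events; the field names are assumptions for illustration.
frontend_requests = {"req-101", "req-102"}   # request IDs observed at the web tier

db_audit_events = [
    {"request_id": "req-101", "sql": "SELECT ssn FROM secrets WHERE ..."},
    {"request_id": None,      "sql": "SELECT ssn FROM secrets WHERE ..."},  # no front-end match
]

# Sensitive queries with no corresponding front-end request are the anomaly:
# they suggest queries issued directly from a compromised app server.
orphans = [e for e in db_audit_events if e["request_id"] not in frontend_requests]

for e in orphans:
    print("ALERT: uncorrelated sensitive query:", e["sql"])
```

In practice the log management team would run this join continuously in the SIEM rather than in application code.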

Another bonus: encrypt the sensitive data at-rest and leave the rest unencrypted. Now you're encrypting only data that is known to be sensitive. If sensitive data leaks out the other side, your Data Loss Prevention tooling, running in discovery mode, can catch it.

Is it perfect? No. Is it elegant? Elegant enough, until you finally decide to dismantle your monolithic app in favor of a service design. To push it further without a full-on rewrite, you could make sensitive data functionality a whole separate app, a nanoservice... but that's another post.

(Note that this post was drafted a couple of years ago.  However, I still find it very relevant and hope that others do as well.)

Saturday, April 26, 2014

Heartbleed Eradication Gone Wild

It’s well known that the Heartbleed bug is a threat to those who depend on TLS (a.k.a. SSL, HTTPS) to protect user credentials, session tokens, and any sensitive data available on a public website (or server endpoints). See the site for details. Briefly, malicious clients can coax memory/data from the TLS server. This includes not just individual client sessions, which would be limited to transient symmetric keys, but also any server-side configuration resident in memory. Worst of all, it can include the private key of the server itself, allowing malicious sites to impersonate well-known sites.

The weight of the server-side implications can’t be dismissed.  However, the usefulness of stolen private keys requires the attacker to find a way to insert themselves in the middle of transactions, making the threat something short of the doomsday that has been reported.  Given what I've heard about the state of ISPs, maybe this isn’t such an exaggeration.  Whatever the case, the largest attack surface and the largest threat is that credentials, session tokens, and sensitive data from individuals can be exposed on vulnerable sites.

Eradication should involve patching server endpoints, testing, revoking keys that existed prior to eradication, and installing new private keys.

Once this is done, the doomsday panic is over.  However, the panic has widely extended to an effort to totally eradicate the vulnerable OpenSSL libraries wherever they are found.

What’s not talked about as much (in the general public, anyway) is the client-side implication of this threat, which has been referred to as “Reverse Heartbleed.” The threat extends beyond a third-party attacker's client extracting data from the server: when the client itself depends on a vulnerable OpenSSL version to establish a TLS session, a malicious or compromised server can extract data from the client using the same methods that the attacking client uses against the server.

What does this really mean to end-users? The number of clients that depend on OpenSSL reaches far and wide but is limited when you consider the colossal numbers of clients in use. Internet Explorer, Chrome, Safari, and Firefox are not vulnerable for the vast majority of people. However, Chrome on Android uses OpenSSL. Even so, this will not mean that any and all Android users are affected because they will have to have the right (should say wrong, I suppose) version of OpenSSL. Reverse Heartbleed matters to only some end-users.

Of course, clients don’t have to involve end-users. OpenSSL could be used in client software on the backend, in a B2B scenario for example.

The threat to clients is covered by The Register here:

An example that can sometimes involve end-user interaction and sometimes not is VPN. However, let’s not panic too much over this. (The news about Heartbleed VPN attacks centers on vulnerable servers, not clients.) VPN clients aren’t used to browse the net, after all. Most likely, we know where your VPN clients are connecting. It’s possible, but unlikely, that your employees are connecting to VPN server endpoints outside of your control. If you’re concerned about your employees connecting in from home, you don’t really need to be if you have taken care to secure your own VPN server endpoints (including layered security, like monitoring and endpoint hardening).

Again, there is the threat of an attacker being in the middle, which is actually a considerable threat when your employees connect via cafe and hotel WiFi. However, this threat should be less panic-inducing than the server-side threat. We accept this threat on some level when we allow our employees to surf on their company-owned laptops while connected to untrusted networks (web servers, etc.). Accordingly, this threat falls into a category we've already considered how to manage.

Should you worry about VPN clients? It depends.

Other possible clients include a myriad of internal applications. These clients could be database connectors, internal node-to-node traffic, appliances, automated patching, etc. Consider a client that connects to databases: it is very unlikely that these are connecting to any networks outside of your control. They are most likely connecting your reporting software to the data warehouse, or an Apache server connecting an application to its databases. Similar to the VPN client concerns, the threat depends on whether there are connections to networks outside your control.

Should you worry about these clients? It depends.

I say make damn sure that your server endpoint is taken care of including detecting anyone messing with it from a lateral threat (some other attack vector, for example). From there, address the client threat the way you would any other vulnerability that doesn't get as much press and hype.

In all cases of client-side concerns, most likely the vast majority of clients have very predictable connections to things that you control. Yes, there is the possibility that these vulnerable clients could be used as a means to move laterally within your network, but that threat is true of many, many vulnerabilities that arise. In other words, this threat shouldn't be treated as the same as the panic-in-the-streets threat that Heartbleed inspires for publicly facing servers.

While I can understand the aversion to having a nuanced response to this threat, I am left wondering if there isn’t a different kind of exploitation possible. Immediately eradicating the wrong versions of OpenSSL wherever they hide in your organization will be an extremely expensive operation. Let’s call this the nuclear response to Heartbleed. Could it be that the security org is overreacting or, worse, (consciously or otherwise) exploiting the hype to justify past and future expenses? 

I have an aversion to all of this cyber-warrior chatter (see RSA Conference keynotes) for this very reason. Little boys playing with guns always want bigger guns, after all.

An often repeated axiom of risk is that you don’t spend a million dollars to address the risk of losing a million dollars– otherwise you have realized your risk after all.

Those involved in the nuclear response are enjoying total support from executives who have read article after article about this apparently doomsday-level threat. However, I wonder if when the mushroom clouds disperse we won’t have some questions to answer about how smart it is to use total war in situations like this.

Reverse Heartbleed is mostly an ordinary vulnerability and should be handled according to pre-April-1st-2014 practices.

Without a sober approach to risk, including a prioritized response, we have effectively thrown out risk management and have rewound the clock over ten years in our approach to vulnerabilities.

This won’t be the last time we see a bug like this.  Have the rules changed because of Heartbleed?  I hope not.

Will we respond to the next one the same way or will we learn from this experience, including objectively measuring our response to Heartbleed to uncover mistakes, overreach, and excess spending? What if we have to respond to several like this a year? Will we go into the red and justify it by throwing around intangible threats like reputation?

Perhaps it’s easy for me to pontificate when I’m not the CISO with his job on the line. The nuanced response, after all, might leave the blind-spot that winds up being the hole that gets you a pink slip. However, with access to top-notch SMEs, intelligence feeds, quality consultants, etc. a measured response should be possible. I say this crisis is pretty much over when the server endpoints and some fraction of the client threats are addressed. After that, it’s routine vulnerability management.

Tuesday, April 15, 2014

No Inside

There is a persistent notion, one I have always found dubious, that we can hold ourselves to different standards when an application is planned for internal deployment only. This internal standard apparently applies to the quality of our work, from usability to security.

I liken this mentality to the manager who acts one way when in a meeting with his direct reports and another when his boss is present.  It reveals a lack of character and integrity.  One should apply the same standard no matter the context.  If anything, this makes things a lot less complex.  There's no need to work on two distinct behaviors when one will do.

When applied to usability, we accept less-than-optimal experiences. I suppose this is something like cooking for just the immediate family versus for a dinner party. (Note that I'm the primary cook for my family.) We don't need some fancy pan sauce unless guests are coming over. If a user interface is painful to use, well, just deal with it and don't be a whiner. However, this isn't like cooking for the family. Our employees have the option to leave, after all. We should care about the experience for a number of reasons, including productivity but also perception. What signals do we send if we don't care about quality internally? When new internal systems are released with dead links and clunky interfaces, we're acting as if we don't care, and when we shrug and say "deal with it," we're acting as if we're running a Soviet bread line rather than a company that cares.

When applied to security, we also accept less than we know we can do. We'll take time to design it the way we know it should be done. However, we negotiate with ourselves as deadlines approach and trade optimal security for good enough. I don't know how many times I've heard "you do realize that this is behind the firewalls, right?" (Note that most of our operational attention to firewalls applies to layers 2 and 3, while most of today's threats are at layer 7-- firewalls, shmire-walls. Besides, a lot goes on behind the firewalls, including insider threats and compromised workstations.)

Why should where an app is in relation to firewalls change the equation at all?  I suppose we think good enough saves us time and money.  However, I'm certain that kicking better designs down the road stunts our growth and leaves us ill prepared for when we can't negotiate our way out of it.  We fail to make investments that we could use later, both in the technology and the competencies of our workforce.

BYOD is here, whether official or not.  I realized this when I saw executives make the switch from Blackberry to smart phones and tablets.  When I sat in the room with an exec taking notes on her iPad, I wanted to ask how she kept IP safe, but I bit my tongue.  Like it or not, it's here.  What this means is that our notion that there is any behind-the-firewall boundary is eroding... and fast.  Of course, these boundaries were already soft since many of us can be off the corporate network using our laptops to do much more than being internal permits.

It's best to assume that there is no inside.  This isn't just from a security perspective.  If we are to fully commit to what is meant by Cloud Computing, anything we build in IT should have the long-term possibility that it could be sold to others.  All IT services could become a business service.

In practice, this means that we should always build quality inside and out.  Our user experiences should be more than just adequate, they should be pretty damn good.  We should align with standards when they're available to address cross-industry interop.  We should avoid proprietary security controls on the back-end so that there's no need to refactor anything should the posture of the application become external or commercial.  We should stop seeing quality, especially security, as a tax and start seeing it as an investment.  We should build each app as if it's externally facing-- fully exposed to the expectations of the outside world whether the threat is a usability critic or a bad actor.

(Note that this doesn't mean that I'll be making pan sauces for the kids every weekday.  Weekends?  Maybe.)

Monday, March 24, 2014

The Walled Garden Has No Walls

If the contest is between walled garden and border-less security, I fall firmly in the latter camp.  Every year I'm reminded of this contest at the RSA Conference, where it seems that 90% of the attention falls in the walled garden camp.  The companies in this space have the big booths in the middle with all of the best schwag.  These are the peddlers of what Gunnar Peterson (@oneraindrop) calls magic pizza box.  The mentality is akin to defense manufacturing where bigger and more exotic (and expensive) tools get all of the attention and funding (and boy do they dream of the industry maturing to the level of the military industrial complex).  If the firewall is not enough, we need to escalate the complexity and sophistication of this toolset.  The firewall needs to be layer 7 and malware aware (great).  If the threat arrives over ports 80 and 443, beyond packets, whole conversations and packages must be inspected for their sanity.  We must have a TSA Checkpoint for anything allowed on these ports.  Strip-search those protocols with gloves!  We need virtual warriors pulling people aside and forcing them to give up their malicious intent.  We need intel akin to NSA and CIA to play with the bad guys on their own turf.

That's all very fun and intriguing to think about... and I believe the mindset is here to stay. However, I say we need to build our software and systems as if they're exposed to the chaos of the internet. We need to stop pretending that something else will protect the business and IT from themselves. The evolution toward cloud business services has already put many on this path. Whether or not your software is aimed toward the cloud (sooner or later you'll likely have to face this anyway), it must be built in the same manner as cloud applications. When the defenses turn out to fall short-- and they will-- we need to be ready with secure code no matter where it is intended to live.

Anyone who has worked in security architecture for any length of time is familiar with the conversation. In response to your assertion of an appropriately secure design, the response is, "you do realize that these are behind the firewall, right?" A quick bite of the tongue gets me through this as I remind myself that the person asking is in the majority, even among technical experts in IT. To be honest, if my phone had a dope-slap button, I'd use it.

Example applications are COTS (commercial off-the-shelf), customized COTS, or home-grown applications intended to support internal business functions only. I'll refer to COTS, but any of these examples apply.

My approach to designing for security with any application or service is a simple recipe where I focus first on Authentication, Authorization, Audit Logging, and Encryption (AAA + encryption).  Of course there's more to security architecture than this, but these are the foundational ingredients for integration. For some business cases, you might add transactional integrity, but where I work that's a small percent of engagements.

COTS applications have been permitted to play by different rules when behind the firewall. Perhaps on some level, this has allowed us to put off the investment that would be required to make them safe to use. We have permitted these applications to have limited options for integration. Many of these applications will simply assume that they'll be wired to LDAP or Microsoft Active Directory. Add a user to the COTS_App_Users group and they can simply authenticate the way they do elsewhere and start using the tool. Some might use proprietary integrations, like CA's SiteMinder or Oracle's OAM. This checks off authentication and coarse-grained authorization (either they are a permitted app user or not).

Depending on the platform, the audit logging might be standard Windows events or whatever the application decided to produce.  The standard audit log expectations should be authentication attempts (pass or fail) and CRUDE (create, read, update, delete, execute).  Typically an application will make authorization decisions in context for a transaction.  For example, an application might have to implement logic that ensures that the authenticated identity and the data ultimately go together.
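As a rough illustration of those baseline expectations, a structured audit event covering authentication attempts and CRUDE actions might look like the sketch below. The field names and format are assumptions for illustration, not any standard.

```python
import json, logging, datetime

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

# Baseline events: authentication attempts plus CRUDE actions.
CRUDE = {"create", "read", "update", "delete", "execute"}

def audit_event(identity, action, resource, success=True):
    """Emit one structured audit record (shape is illustrative only)."""
    if action not in CRUDE | {"authn"}:
        raise ValueError("unexpected audit action: " + action)
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "identity": identity,
        "action": action,
        "resource": resource,
        "success": success,
    }
    log.info(json.dumps(record))  # in practice: forwarded to central log management
    return record

read_evt = audit_event("bob", "read", "customer/42")
failed_authn = audit_event("mallory", "authn", "login", success=False)
```

Keeping the record structured (rather than free-text) is what makes the later correlation and alerting work tractable.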

(An ideal solution would have the application playing only the role of Policy Enforcement Point, deferring the authorization decision to a Policy Decision Point.  The simple reason is that this means that what is ultimately highly specialized business logic is handled by security practitioners-- by those who accept their role in the defense of the business's assets.  This concept is often called Externalized Authorization Management (EAM) and a great example of the architecture is codified in XACML.)
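A toy sketch of that PEP/PDP split might look like the following. The policies and attribute names are invented for illustration; a real deployment would express the policies in something like XACML and host the PDP as a separate, centrally managed service.

```python
# The application is only a Policy Enforcement Point (PEP); the authorization
# decision is made by a separate Policy Decision Point (PDP).

def pdp_decide(subject, action, resource):
    """PDP: centrally managed rules, XACML-style in spirit."""
    policies = [
        lambda s, a, r: a == "read" and r["owner"] == s,  # owners may read their own records
        lambda s, a, r: s == "auditor" and a == "read",   # auditors may read anything
    ]
    return "Permit" if any(p(subject, action, resource) for p in policies) else "Deny"

def pep_fetch_record(subject, record):
    """PEP: the app enforces whatever the PDP decides, nothing more."""
    if pdp_decide(subject, "read", record) != "Permit":
        raise PermissionError("denied by PDP")
    return record["data"]

record = {"owner": "bob", "data": "bob's claims"}
print(pep_fetch_record("bob", record))    # owner reading own record: permitted
print(pdp_decide("eve", "read", record))  # someone else's record: Deny
```

The payoff is that the in-context rule ("does this identity go with this data?") lives with security practitioners, not scattered through application code.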

Finally, encryption can address the impact of a failure to protect data by other means.  In-transit, it assures that data cannot be extracted from hosts on the same network.  At-rest, it can address a failure to decommission hardware safely (the most common implementation for checklist security as far as I can tell).  This usually means encrypting a cluster, a drive, a partition, a database or a table.  It can also mean encrypting individual fields in a dataset, which is common for Identity Provider (IdP) implementations.  It might also include things like credit card numbers, social security numbers, and anything else that's considered an especially sensitive part of a record.
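Field-level granularity can be sketched like this: only the sensitive field is transformed, the rest of the record stays in the clear. To keep the example self-contained, the cipher below is a toy HMAC-SHA256 keystream; a real system would use a vetted construction (e.g., AES-GCM from a maintained crypto library) with keys held in a key management service.

```python
import hmac, hashlib, os

KEY = os.urandom(32)  # in practice: fetched from a key management service

def _keystream(key, nonce, length):
    # Toy keystream built from HMAC-SHA256 in counter mode -- illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(plaintext):
    nonce = os.urandom(16)
    data = plaintext.encode()
    return nonce + bytes(a ^ b for a, b in zip(data, _keystream(KEY, nonce, len(data))))

def decrypt_field(blob):
    nonce, data = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(data, _keystream(KEY, nonce, len(data)))).decode()

# Only the SSN is encrypted; the rest of the record is stored in the clear.
record = {"name": "Bob", "ssn": encrypt_field("123-45-6789")}
print(record["name"], decrypt_field(record["ssn"]))
```

Encrypting only the fields known to be sensitive is what keeps the operational overhead proportional to the actual risk.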

Usually encryption gets the most resistance from the walled garden camp because it complicates architecture and management (an apt concern from an operations standpoint). However, I have found that when the subject is integration in a cloud setting, as with a B2B SaaS, I get little resistance to granular encryption.  Everyone seems to agree that the data could fall into the wrong hands; that either the Cloud Service Provider (CSP) or another tenant might find their way to the data unexpectedly and that there should be some measure to address this.

Unfortunately, when the service is to be deployed behind the walled garden any perception of an especially high bar on all of these subjects tends to be discarded as we negotiate with ourselves on the way to releasing a service.

Not only is it less safe to bet the farm on what few likely understand and what will fail over time; it's also short-sighted from a security perspective and, most importantly, from a business perspective.

The vendor who sells to big enterprise customers who own and run their own datacenters is failing to prepare for a future in which fewer and fewer people own their own datacenters (as cloud becomes a list of commodity IT services). And those who do hold on to their datacenters are likely to sell excess capacity as CSPs eventually. This will mean that they, too, will want everything in their virtual stack and beyond to have protections that assume they are, or could be, exposed to all of the threats implied by being directly on the internet.

The business that builds software for in-house use or to be sold to others to install is failing to accept that the perimeters and boundaries are already dissolving with the rise of BYOD.  How will its employees connect untrusted mobile devices to its industry-specialized time tracking software, for example? Beyond BYOD, how can that custom IT service management tool, for example, be extended for use by consumer tenants?  How will the authentication of your system handle identities from multiple organizations and not just the one reflected in your primary identity store?  How will your approach to authorization ensure that data can be protected in new contexts?  How can what wants to use your data be aware of and honor the policies for that data?  What if your software succeeds in ways that you never imagined in the beginning?

It is in the best interest of all to assume that the walled garden does not exist. From there we need to make the investments in design, process, people, and infrastructure to support the increased exposure of systems. I agree with Chris Hoff (@beaker) when he asserts that this is not a matter of working without perimeters but of working with many more small, fine-grained perimeters. Moving to the EAM approach and getting serious about encryption (and other crypto, like X.509) to whatever level is justified can be the foundation for these smaller perimeters. Supporting multiple user claims-- system and on-behalf-of users rather than simple system-to-system service credentials-- is also essential, even on the backend.

Thursday, March 6, 2014

Contrarian in Depth

It was announced this week that the CIO of Target resigned following the well-publicized breach of credit card data. Target was just one victim of many, but at the center of the story due to their scale. Scale should be an advantage, as it means defenses are well funded, but this was obviously not enough. I assert that it is because they were far too predictable and conventional.

To understand the predictability of decisions made in the defense of corporations, I must first describe the climate of these corporations and how they are organized.  (Bear with me.)

As with a community, township, county, state, or nation, what comprises a company is a collection of individuals. How a company behaves is a collection of decisions made by these individuals. What propels a company is the actions taken, again, by these individuals.

Companies themselves break down into groupings that have distinct accountability and govern themselves, as much as permitted, according to best practices to handle this accountability and all of the responsibilities therein. The leaders of these groupings are the middle management.

The accountability assigned to these groups allows the leaders above them to minimize the accountability placed directly upon themselves. This is especially important in very large companies.

When the accountability of a particular group is large and unwieldy, these groupings are further pared down into additional groups.

Thus layers and layers of middle management are born. Those who aspire to move upward in these layers of groups are driven to demonstrate their ability to manage individuals and then groups beneath them. When you cannot move upward, you build beneath. Now we have the empire builders. Each middle manager is permitted to grow groups beneath them because their superiors are also driven to demonstrate their ability to manage complex groups. The ambitions of middle managers are harvested for their superiors' benefit.

The experienced empire builder signs up for the right amount of responsibility within their org.  These responsibilities must be important enough to raise attention but not so important as to be dangerous to the survival of the empire.

This is the climate in which IT security lives just as any other IT function.

I joined an IT security group just as security was being recognized as a distinct and essential function within a large corporation.  I've seen it grow from a small collection of practitioners and thinkers to a complex organization with its own breakdown of distinct functions.

It's predictable and what's predictable is easier to understand from the outside.  An attacker can make assumptions that are likely to be correct.  The attacker can assume that the target has chosen tools that are common.  The careful middle manager does not take many leaps from what is perceived as common best practices.  If he is asked to choose an anti-behavioral-malware solution, for example, he will choose what's perceived as the best by the industry.

The attacker can assume a well-funded company has FireEye or perhaps Palo Alto Wildfire.  Of course assumptions aren't necessary when the tool chosen is apparent from LinkedIn or can be easily pried from a boastful salesperson.

There are many actions that could be taken to address this.  LinkedIn can be monitored.  The company can subscribe to services that watch for activity that targets the corporation.  Additional solutions can be employed that fill in gaps or otherwise augment the limitations of the primary solutions.  Hell, if the budget permits it, double up the top-of-the-line solutions employed: don't choose the one best solution, choose the two best.

However, could it be a good move to go the other way and choose the less-than-obvious solutions?  A cautious practice of this might be to employ a solution perceived as the best but also one that is perceived as emerging. More radical, and perhaps contrarian, would be to choose a couple that are emerging.

Of course this approach would have the added benefit of embracing innovation in the industry.  Customers with complex requirements are fertilizer for companies with emerging ideas.

One could go further and decide to build solutions from scratch or from collections of available open source tools. However, this is something that most companies would have a hard time handling because they do not have core competencies in software (unless they are in the security industry). What's worse, it's much more challenging to build confidence upwards, with senior leaders. It's hard enough to simply get their attention, much less convince them that a home-grown idea is the best choice. It's wiser, it appears, to instead go with the short-hand of sales brochures from large security firms whose top executives give speeches at conferences. It's wiser to check the Gartner Magic Quadrant... the senior leaders will do this, after all.

What if Target had taken an unconventional approach while rolling out their point-of-sale (POS) systems?  The Target POS systems that were attacked were actually new implementations, very recently rolled out to the stores.  What if they had chosen something not just slightly customized, as was reportedly the case, but radically different?  What if, instead of a Windows variant, they had rolled out OpenBSD or some obscure embedded operating system?  (The common wisdom is that they should have had MFA for the leaked credentials, or should have segmented their systems better and used privileged access management, but this is the walled garden approach, which is essentially wishing that the perimeter/firewall convention could live on in perpetuity.)

The answer is that had they made a more contrarian move they would not have been compromised (not this time).  The attacker could make broad assumptions about the retail industry and these assumptions paid off.

Target very likely could not have even conceived of this kind of approach because of the friction across organizations. Who would manage the OS if it were unfamiliar?  How would we harden it?  How would we deploy it?  Who would integrate common card readers with this system?  Would our compliance and vulnerability monitoring tools even provide coverage for a system so uncommon in corporate environments?  How would we patch it?  For that matter, who would design this?  The security org?  That's not what they do.  Who would champion it?  The CISO?  That would seem too assertive for an org that is essentially a cross-cutting concern.  The empire builders and their empires make the unconventional very difficult.

I had a conversation with a security SME from a local medical device manufacturer.  I had briefly spoken to a group about the *Internet of Things (IoT)*, and this had him thinking about his company.  So far the IoT has mostly meant what seems like silly nonsense: smart fridges and smart light switches.  In his industry, however, the IoT involves important tools that save lives or greatly improve them.  Increasingly these devices have an address and, with that, complex integration challenges.  Of course they also have major security challenges.

He bounced an idea off me that involved employing encryption in such a way that it's more difficult for attackers to unwind.  I embraced it for the same reasons I describe above.  I think they should invent to solve their problems.  Further, they should not settle on just one approach, but evolve it and perhaps employ different approaches for different devices (to limit the attack surface should one security invention be unwound).  I would normally advise people NOT to invent in the complex area of encryption because this specialized area requires rare talent.  However, firms like Cryptography Research could aid in the design and vet the solutions prior to deployment.

I assume that even within this device manufacturing company, the same organizational barriers exist.  This will very likely destroy any chance for this idea to be realized.  Security folks aren't easily embraced as visionaries in industries that aren't focused on security.  It would take a rare talent to push this forward.  Unfortunately, the first step this person must take is to understand these barriers and call attention to them as he presents ideas.

Listening for and understanding this organizational friction is in the best interest of those who are ultimately accountable for security.  As the resignation of the Target CIO demonstrates, common best practices are not enough.  Contrarian and even radical ideas must come to the forefront to defend against the increasingly effective adversary.  Not doing what everyone else is doing is key to defense.  Of course, this cannot simply be security-by-obscurity, but well thought out defenses.

Simply building out a bigger security empire is not enough and will probably only make matters worse.  This is what's likely to happen now that Target has a CISO.

Contrarian in depth means that much more thoughtful and innovative remedies must be prescribed and expected from the security org.

Thursday, November 14, 2013

It's the Singer not the Song

Security often finds itself chasing after changes in software architecture and even development methodology. While one would hope that the naivety so familiar from the early days of the internet was just a matter of kicking off something entirely new, we seem to find ourselves persistently backing into security after the sprint toward something new. Security has matured, and with it whole new organizations have sprouted, but I see no evidence that this means security will not continue to have to chase after change, and at an increasing pace, it seems. Worse, a more mature security practice is proving to be a hindrance to innovation and a barrier to adoption for better software models.

To understand where we’re going, we have to understand where we’ve been. And so, before going into the latest challenges, I’ll spend some time explaining how the DMZ matured, at least from my own experience in practice.

The typical tiered Java application has a web server, an application server, a database, and maybe some remote services.

The web server is there to accept requests from the outside world. It is often used to serve static content to avoid pushing this burden onto the critical back-end systems. It establishes the connection to the application server on the requestor’s behalf. It can play the role of load balancer, choosing the least busy or the next available application server. Finally, this is often where the end-user’s session is validated. The web server has a plugin that can talk to the policy server where sessions are established and validated. The session comes in from the end-user’s client, the policy server validates it and extracts the user principal (who the end-user is), and the user principal is passed back to the application server.

The application server is there to deliver the business functionality (and value, we hope). This is where the difficult stuff happens and where most of the developer’s work goes. Often the business logic is executed here. (When it’s not, it’s executed in a business rules engine that organizes critical logic in one, manageable space.) Calls out to data sources happen here. Usually there is at least one data source used by the application server that persists information about the end-user and anything that will be useful to execute the business function. The application server has a trusted connection with the data source, usually a shared secret (username and password, or service credentials). The application server is normally two or more servers with a large amount of CPU (horsepower) and RAM (memory).

The application server often contains more than it should. This is where the most man-power is usually engaged and, accordingly, it’s the default place where gaps are solved. Is the business rules engine not delivering the logic needed? Patch it up with some code in the application layer. Does the service layer fail to orchestrate and aggregate as we’d like? Fix it here. Does the access control out front fail to be granular enough to ensure safety while executing our business functionality? Solve it here. Do we get more information than is appropriate for this end-user from the data or services layers? The application becomes the authorization decision point (policy decision point, or PDP, in XACML). In short, the application layer is overloaded. This creates a snowball effect as an application lives on through the years. This is where we solve all of these problems, so it must be the right place to solve them. Applications grow and so do the teams that support them.
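A minimal sketch of how these patches look in code, using Python and invented role and field names for illustration: the application itself ends up making authorization decisions and filtering data, rather than delegating to a dedicated PDP.

```python
# Hypothetical in-application authorization: the app layer acting as its
# own policy decision point (PDP) because the access control out front
# is not granular enough.

ROLE_PERMISSIONS = {
    "teller":  {"view_balance"},
    "manager": {"view_balance", "approve_loan"},
}

def is_permitted(role, action):
    """Return True if the given role may perform the given action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def filter_fields(record, role):
    """Strip fields the caller should not see -- the 'we get more
    information than is appropriate' gap, solved in the app layer."""
    if not is_permitted(role, "view_balance"):
        record = {k: v for k, v in record.items() if k != "balance"}
    return record
```

Each such patch is local and expedient, which is exactly why the application layer snowballs: every gap in the rules engine, the services, or the access control out front gets expressed as another function like these.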

The data source is where the application stores data it needs to support the business functionality and bring value to the end-user. This is often a database. More specifically, it tends to be a relational database. Databases are tuned to retrieve data, correlate, sort, and save. There is a lot of potential for logic in many of the common databases, but I’ve found that most often there’s a tendency to avoid this in favor of something out in front, like the application server, business rules engines, or services. From what I can tell, this is mostly because of the available resources for application development, but also because database administrators (DBAs) have a more mature technology, older personnel, and well-defined processes. There is a tendency to resist introducing more to this layer that could degrade performance, introduce manageability concerns, complicate troubleshooting, and generally make things more complicated.

This completes the picture of a typical tiered application. I mention other systems, like services and business rules engines, but for the sake of brevity I’ll keep most of the attention on web servers, application servers, and data sources. It’s important to know that additional servers and services participate.

Between these layers, we have introduced additional security at the firewall level. In short, we let the layers talk only the minimum necessary. The web server can talk to the internet, but only through ports 80 (HTTP) or 443 (HTTPS, encrypted HTTP transport). On the inside, the web server talks only to the application ports using the expected protocols (HTTP, AJP, etc.). The web server also talks to the policy server for the session concerns mentioned earlier.

The application server is open to requests from the web server. It is allowed to talk to the data source using only the ports intended for the application and using expected protocols. The application server can also talk to services, often in another domain, even external services from other businesses. Of course, the application server must also talk to the business rules engine, which is often in the same zone. In some cases, conversation with the policy server is also necessary to execute fine-grained authorization. The policy server is usually located in the application zone, mostly because there is no need for it to ever be available directly to the internet. In cases where it does need to be available, as with some single sign-on (SSO) scenarios and federated authentication, the traffic is either driven through a web server layer or is made available by way of a load balancer appliance, when one is used.

The load balancer requires some attention as well. This is in front of most large applications. It serves the purpose of balancing transactions across resources, accelerating encryption in-transit (TLS/HTTPS), and increasingly plays a role as a firewall for the application stack (layer 7). Many load balancers are now introducing integration with policy servers, essentially taking this responsibility from the web servers. Further, they can act as a kind of security token service (STS), translating tokens from the external domain into tokens that can be used on the backend.

All of these conversations that are necessary to make a site work properly are codified in policy. The rules between layers are described as distinct zones, horizontal isolation. A whitelist of conversations is written, approved, and all agree to follow these rules (until they don’t, which is common). This is the law of application security behind the scenes (most end-users don’t know it, see it, or care).
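The codified policy amounts to a whitelist lookup. A toy sketch in Python (the zone names and ports are illustrative, not drawn from any real policy):

```python
# Toy model of a zone-to-zone conversation whitelist: each entry names a
# source zone, a destination zone, and the ports permitted between them.
ALLOWED = {
    ("internet", "web"): {80, 443},  # only HTTP/HTTPS from outside
    ("web", "app"):      {8080},     # web tier to the app tier
    ("app", "data"):     {1521},     # app tier to the database
}

def conversation_allowed(src_zone, dst_zone, port):
    """True only if this exact conversation is on the whitelist."""
    return port in ALLOWED.get((src_zone, dst_zone), set())
```

Note the default-deny posture: any conversation not explicitly listed is refused, which is the "minimum necessary" rule between zones expressed as data.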

What I have so far described is a typical 3-tiered application. It has been the bread and butter for the last decade and more. This was the “how to build it right” model all of this time. However, with the proliferation of internet-facing services and now the trend toward discrete REST APIs (application programming interfaces; basically terse, less formal services leveraging HTTP protocols more directly), this law seems out of place.

The process of defining APIs for a business means decomposing business functionality into discrete, consumable pieces. As I’ve already described, most of our business functionality is in the application. REST is definitely not just another way to build applications. It discards the application. In its place, client-side code retrieves what is needed and assembles it into a cohesive user experience.

The laws of application security for the old tiered model were adapted to how applications were already being built. That web server in the presentation layer was there to serve static content and route traffic. Without the calls out to the policy server, it’s really most often just a dumb reverse proxy. Because it doesn’t present the same attack surface, we might argue that it plays a defensive role for the application server, which contains more sensitive credentials and handles more sensitive data. But we know that this layer has been proven vulnerable, with exploits that allowed attackers to control and pivot from these presentation servers. One popular flavor of Apache included a simple header injection that essentially enabled a port scan, allowing attackers to map out the attack surface of the presentation layer, including connections into other zones. The load balancer appliance added a double layer of indirection, but was brought in to solve engineering problems more than security, as much as the vendors now seem to want you to believe in the security pedigree of their devices.

Behind this presentation layer, again, are highly sensitive servers and services that are often poorly secured. Encryption in-transit terminates at the presentation appliance. Everything behind it is often unencrypted. Access controls, again, are handled in the web server through its relationship with the policy server. Everything behind it is node-to-node trust, at best. Often access controls don’t exist between backend layers. It’s assumed that if a server can make a request, it’s a request we can trust: if the servers have been permitted to have a conversation with one another, it must be legit. Essentially this means that firewall rules between tiered zones are used as access controls. However, make one mistake in the app layer, like SQL injection, and your trusted connection from the app zone opens a pivot into the data layer.
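That one mistake is worth making concrete. A sketch in Python with sqlite3 (the table and column names are invented for illustration): the first query builds SQL by concatenating user input and can be subverted; the second is parameterized, so the driver treats the input as data rather than SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '111-22-3333')")
conn.execute("INSERT INTO users VALUES ('bob',   '444-55-6666')")

def lookup_vulnerable(name):
    # DANGEROUS: user input concatenated into the SQL string.  A name
    # like "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT ssn FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver binds the value, so the injection
    # string simply matches no rows.
    return conn.execute(
        "SELECT ssn FROM users WHERE name = ?", (name,)).fetchall()
```

Firewall rules between zones do nothing against the first function: the injected query travels over the very connection the zone whitelist approved.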

These old laws in new contexts, applied to new models of architecture, are not bound to be successful. Once again the security team finds itself chasing after how applications are actually built, and from this defining how they are to be secured. However, this time there is much more awareness of security across organizations and even in the public. This strengthens the hand of the security team, which now has influence over application architecture. However, the security team might not appreciate when applications are truly different from what they are familiar with. They are applying their application security laws to models that aren’t tiered in the same way.

What are the rules when an application is built from a language compiled into the web server itself? Should it require another web server acting as a reverse proxy in front of it? Perhaps, but what if there’s an appliance, like a load balancer? What about the backend? If this app uses a limited local store for end-user state and services to interact with persisted user data, should it require a separate database, isolated by horizontal zone? What if the application is one small, discrete business function amongst many, many of its kind? Should they all be tiered to maintain status quo DMZ design? Should we concern ourselves with this extra overhead when we can barely make the case that it makes anything more secure? Are we adding safety or are we making our diagrams work? Are we safeguarding our assets or the status quo? What does a firewall access control mean when resources that have always been buried in the backend are now found in the front-end?

These old laws weren’t all that effective in the first place. The attention to the servers (or nodes) and their placement distracts from the real goal which is to safely handle the data. Had attention been on data all along, adoption of new software models would meet less resistance because the data would remain the same and so too would the controls and most of the capabilities that support its safe handling.

Regardless, I’ve seen ruthless enforcement where there seemed to be no regard for architecture that did not fit the model. And so a fit was forced, drawing a horizontal line between layers that had no reason to be separated. The diagrams worked and the laws were applied, but the architecture was broken. Increased overhead and expense were apparently seen as a necessary price. The hidden costs of arbitrary complexity were realized later, under fire. The root cause will never quite be understood because it never is.

Written with StackEdit.

Wednesday, November 6, 2013

Developer Games

The typical developer focuses on business functionality, complex integrations, making sites work and look great, and so on. Those of us in the business or elsewhere in the technology org see a lot of the mistakes they make. We call their babies ugly.

We infosec folks find ourselves apoplectic about developers’ ignorance of the security domain, like when they start talking about firewalls while you are trying to explain why their SQL injection flaw is a big deal. We’re frustrated when, year after year, we have the same conversations with the same developers.

But don’t fool yourself into thinking that developers are incompetent or even that they don’t care. I am a developer (mostly in my spare time these days). I know too well the pressure developers are under. They have to contend with impossible deadlines, ridiculous politics, poor technology decisions made elsewhere (often by execs over a game of golf), an IDE that corporate approves rather than the one that they like, and last minute requirement changes. What they love mostly is solving challenging problems.

The developer employed as an attacker is a whole different matter. I can only speculate, since I have not practiced the dark arts outside of ethical hacking on behalf of the business, with approvals.

We know from what we see in attack patterns that there is a tendency to go after low hanging fruit. This means that the developer-as-attacker is often focused on the weakness du jour. With this kind of focus, a good developer can really excel. I can just imagine the amusement they get from plowing through poor defenses. Good defenses are probably even more fun.

The infosec review process is often seen as yet another distraction, further dividing focus from the developer’s point-of-view. For that matter, the secure SDLC adds further complexity to a process that many developers already view as an imposition from irrelevant wonks who care about the wrong things.

We can’t estimate the threat by judging our adversaries by our own day-to-day. This isn’t because the attacker is far superior, it’s because they are far more focused.

Introducing your developers (and the org) to the process of attack helps them understand defense. Business would benefit from allowing their developers and other technology folks to turn their focus away from the day-to-day. Beyond having developers tinker with WebGoat, a red-team/blue-team exercise (gaming, if you want to look like a hip leader) would surely satisfy a developer’s intellectual curiosity while also strengthening their understanding of defense. Developers will see that infosec is not irrelevant, if only because they don’t want to face the embarrassment of being on the losing end of the game. This would also foster a less abstract sense of ownership and even accountability.
