Thursday, November 14, 2013

It's the Singer not the Song

Security often finds itself chasing after changes in software architecture and even development methodology. One would hope that the naiveté so familiar from the early days of the internet was just the cost of kicking off something entirely new, but we seem to find ourselves persistently backing into security after each sprint toward the next new thing. Security has matured, and with it whole new organizations have sprouted, but I see no evidence that this means security will stop chasing after change, and at an increasing pace, it seems. Worse, a more mature security practice is proving to be a hindrance to innovation and a barrier to adoption for better software models.

To understand where we’re going, we have to understand where we’ve been. So before going into the latest challenges, I’ll spend some time explaining how the DMZ matured, at least from my experience in practice.

The typical tiered Java application has a web server, an application server, a database, and maybe some remote services.

The web server is there to accept requests from the outside world. It often serves static content as well, keeping that burden off the critical systems in the back-end. It establishes the connection to the application server on the requestor’s behalf. It can play the role of load balancer, choosing the least busy or the next available application server. Finally, this is often where the end-user’s session is validated. The web server has a plugin that can talk to the policy server where sessions are established and validated. The session comes in from the end-user’s client, the policy server validates it and extracts the user principal (who the end-user is), and the user principal is passed on to the application server.
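As a minimal sketch of that last hand-off, assuming a SiteMinder-style plugin that injects the validated user principal as a request header (the header name varies by product and is an assumption here), the application side might pick it up with a servlet filter:

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Application-side sketch: trust the principal header set by the web
// server's policy-server plugin. "SM_USER" is an assumption; the header
// name varies by product.
public class PrincipalFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        String principal = httpReq.getHeader("SM_USER");
        if (principal == null || principal.isEmpty()) {
            // The web server should have blocked unauthenticated traffic;
            // fail closed anyway.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // Make the principal available to the rest of the application.
        req.setAttribute("user.principal", principal);
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```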

The application server is there to deliver the business functionality (and value, we hope). This is where the difficult stuff happens and where most of the developer’s work goes. Often the business logic is executed here. (When it’s not, it’s executed in a business rules engine that organizes critical logic in one, manageable space.) Calls out to data sources happen here. Usually there is at least one data source used by the application server that persists information about the end-user and anything else that will be useful to execute the business function. The application server has a trusted connection with the data source, usually a shared secret (username and password, or service credentials). In practice, the application server is normally two or more servers with a large amount of CPU (horsepower) and RAM (memory).
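A hedged sketch of what that trusted connection typically looks like in Java: the container holds the shared secret, and the application looks up the pooled data source through JNDI (the JNDI name below is illustrative):

```java
import java.sql.Connection;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// The application never handles the database password directly; the
// container owns the shared secret and hands out pooled connections.
// "java:comp/env/jdbc/AppDS" is an illustrative JNDI name.
public class AppDataSource {

    private final DataSource ds;

    public AppDataSource() throws NamingException {
        ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/AppDS");
    }

    public Connection getConnection() throws SQLException {
        // Credentials are supplied by the container, not the code.
        return ds.getConnection();
    }
}
```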

The application server often contains more than it should. This is where the most man-power is usually engaged and, accordingly, it’s the default place where gaps are solved. Is the business rules engine not delivering the logic needed? Patch it up with some code in the application layer. Does the service layer fail to orchestrate and aggregate as we’d like? Fix it here. Does the access control out front fail to be granular enough to ensure safety while executing our business functionality? Solve it here. Do we get more information than is appropriate for this end-user from the data or services layers? The application becomes the authorization decision point (policy decision point, or PDP, in XACML terms). In short, the application layer is overloaded. This creates a snowball effect as an application lives on through the years: this is where we solve everything, so it must be the right place to solve these complex problems. Applications grow, and so do the teams that support them.
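To make the overloading concrete, here is a hypothetical sketch (every name invented) of the kind of ad-hoc authorization and patched-in rules that accumulate in the application layer:

```java
// Hypothetical sketch (all names invented) of the application layer
// acting as the authorization decision point because the surrounding
// layers fall short. Checks like these accumulate method by method.
public class TransferService {

    interface UserDirectory { boolean hasRole(String principal, String role); }
    interface AccountDao   { boolean isOwner(String principal, String account); }
    interface Ledger       { void move(String from, String to, long amountCents); }

    private final UserDirectory users;
    private final AccountDao accounts;
    private final Ledger ledger;

    public TransferService(UserDirectory users, AccountDao accounts, Ledger ledger) {
        this.users = users;
        this.accounts = accounts;
        this.ledger = ledger;
    }

    public void transfer(String principal, String from, String to, long amountCents) {
        // Gap 1: the front-end access control isn't granular enough,
        // so the role check is repeated here.
        if (!users.hasRole(principal, "TRANSFER")) {
            throw new SecurityException("not permitted");
        }
        // Gap 2: the data layer returns more than this user should see,
        // so ownership is filtered in application code.
        if (!accounts.isOwner(principal, from)) {
            throw new SecurityException("not your account");
        }
        // Gap 3: the rules engine can't express this limit, so it is
        // patched in here instead.
        if (amountCents > 1_000_000L) {
            throw new SecurityException("over per-transfer limit");
        }
        ledger.move(from, to, amountCents);
    }
}
```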

The data source is where the application stores data it needs to support the business functionality and bring value to the end-user. This is often a database; more specifically, it tends to be a relational database. Databases are tuned to retrieve, correlate, sort, and save data. There is a lot of potential for logic in many of the common databases, but I’ve found that most often there’s a tendency to avoid this in favor of something out in front, like the application server, business rules engines, or services. From what I can tell, this is mostly because of the available resources for application development, but also because database administrators (DBAs) have a more mature technology, more senior personnel, and well-defined processes. There is a tendency to resist introducing anything to this layer that could degrade performance, introduce manageability concerns, complicate troubleshooting, and generally make things more complicated.

This completes the picture of a typical tiered application. I mention other systems, like services and business rules engines, but for the sake of brevity I’ll keep most of the attention on web servers, application servers, and data sources. It’s important to know that additional servers and services participate.

Between these layers, we have introduced additional security at the firewall level. In short, we let the layers talk only to the minimum necessary. The web server can talk to the internet, but only through port 80 (HTTP) or 443 (HTTPS, encrypted HTTP transport). On the inside, the web server talks only to the application ports using the expected protocols (HTTP, AJP, etc.). The web server also talks to the policy server for the session concerns mentioned earlier.

The application server is open to requests from the web server. It is allowed to talk to the data source using only the ports intended for the application and using expected protocols. The application server can also talk to services, often in another domain, even external services from other businesses. Of course, the application server must also talk to the business rules engine, which is often in the same zone. In some cases, conversation with the policy server is also necessary to execute fine-grained authorization. The policy server is usually located in the application zone, mostly because there is no need for it to ever be available directly to the internet. In cases where it does need to be available, as with some single sign-on (SSO) scenarios and federated authentication, the traffic is either driven through a web server layer or is made available by way of a load balancer appliance, when one is used.

The load balancer requires some attention as well. It sits in front of most large applications. It serves the purpose of balancing transactions across resources, accelerating encryption in transit (TLS/HTTPS), and increasingly plays a role as a firewall for the application stack (layer 7). Many load balancers are now introducing integration with policy servers, essentially taking this responsibility from the web servers. Further, they can act as a kind of security token service (STS), translating tokens from the external domain into tokens that can be used on the back-end.

All of these conversations that are necessary to make a site work properly are codified in policy. The rules between layers are described as distinct zones: horizontal isolation. A whitelist of conversations is written and approved, and everyone agrees to follow these rules (until they don’t, which is common). This is the law of application security behind the scenes (most end-users don’t know it, see it, or care).
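Concretely, such a whitelist often boils down to something like the following (the zones, ports, and protocols here are illustrative, not a real policy):

```
internet  -> web zone    : 80/tcp (HTTP), 443/tcp (HTTPS)
web zone  -> app zone    : 8080/tcp (HTTP), 8009/tcp (AJP)
web zone  -> policy srv  : session validation traffic only
app zone  -> data zone   : 1521/tcp (the database listener)
app zone  -> services    : approved service endpoints only
anything else            : denied
```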

What I have so far described is a typical 3-tiered application. It has been bread and butter for the last decade and more. It was the “how to build it right” model all this time. However, with the proliferation of internet-facing services and now the trend toward discrete REST APIs (application programming interfaces; basically terse, less formal services that leverage HTTP more directly), this law seems out of place.

The process of defining APIs for a business means decomposing business functionality into discrete, consumable pieces. As I’ve already described, most of our business functionality is in the application. REST is definitely not just another way to build applications. It discards the application. In its place, client-side code retrieves what is needed and assembles it into a cohesive user experience.
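A minimal JAX-RS sketch of what one of those discrete, consumable pieces might look like (the path and the JSON shape are invented for illustration):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// One small, discrete business function exposed directly over HTTP.
// No "application" assembles pages here; client-side code composes
// resources like this into a user experience.
@Path("/accounts/{id}/balance")
public class BalanceResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String balance(@PathParam("id") String accountId) {
        // A real resource would consult the persistence layer here.
        return "{\"accountId\":\"" + accountId + "\",\"balanceCents\":0}";
    }
}
```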

The laws of application security for the old tiered model were adapted to how applications were already being built. That web server in the presentation layer was there to serve static content and route traffic. Without the calls out to the policy server, it’s really most often just a dumb reverse proxy. Because it doesn’t present the same attack surface, we might argue that it plays a defensive role for the application server, which contains more sensitive credentials and handles more sensitive data. But we know that this layer has been proven vulnerable, with exploits that allowed attackers to control and pivot from these presentation servers. One popular flavor of Apache included a simple header injection that essentially enabled a port scan, allowing attackers to map out the attack surface of the presentation layer, including connections into other zones. The load balancer appliance added a double layer of indirection, but it was brought in to solve engineering problems more than security, however much the vendors now want you to believe in the security pedigree of their devices.

Behind this presentation layer, again, are highly sensitive servers and services that are often poorly secured. Encryption in transit terminates at the presentation appliance; everything behind it is often unencrypted. Access controls, again, are handled in the web server through its relationship with the policy server. Everything behind it is node-to-node trust, at best. Often access controls don’t exist between back-end layers at all. It’s assumed that if a server can make a request, the request can be trusted: if the servers have been permitted to have a conversation with one another, it must be legitimate. Essentially this means that firewall rules between tiered zones are used as access controls. However, make one mistake in the app layer, like SQL injection, and your trusted connection from the app zone opens a pivot into the data layer.
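Sketched in Java (table and column names illustrative), the classic mistake and its fix: concatenating untrusted input into SQL hands an attacker the app zone’s trusted database connection, while a parameterized query keeps the input as data:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class OrderLookup {

    // Vulnerable: untrusted input concatenated into the query. A crafted
    // customerId rides the app zone's trusted connection straight into
    // the data layer; zone firewall rules do nothing to stop it.
    public List<String> findOrdersUnsafe(Connection c, String customerId)
            throws SQLException {
        String sql = "SELECT order_id FROM orders WHERE customer_id = '"
                + customerId + "'"; // e.g. customerId = "x' OR '1'='1"
        try (Statement stmt = c.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return collect(rs);
        }
    }

    // Safer: a parameterized query keeps the input as data, not SQL.
    public List<String> findOrders(Connection c, String customerId)
            throws SQLException {
        try (PreparedStatement stmt = c.prepareStatement(
                "SELECT order_id FROM orders WHERE customer_id = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return collect(rs);
            }
        }
    }

    private List<String> collect(ResultSet rs) throws SQLException {
        List<String> ids = new ArrayList<>();
        while (rs.next()) {
            ids.add(rs.getString("order_id"));
        }
        return ids;
    }
}
```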

These old laws, applied in new contexts to new models of architecture, are not bound to be successful. Once again the security team finds itself chasing after how applications are actually built, and from this defining how they are to be secured. However, this time there is much more awareness of security across organizations and even in the public. This strengthens the hand of the security team, which now has influence over application architecture. However, the security team might not appreciate when applications are truly different from what they are familiar with. They are applying their application security laws to models that aren’t tiered in the same way.

What are the rules when an application is built from a language compiled into the web server itself? Should it require another web server acting as a reverse proxy in front of it? Perhaps, but what if there’s an appliance, like a load balancer? What about the back-end? If this app uses a limited local store for end-user state and services to interact with persisted user data, should it require a separate database, isolated by horizontal zone? What if the application is one small, discrete business function amongst many, many of its kind? Should they all be tiered to maintain status quo DMZ design? Should we concern ourselves with this extra overhead when we can barely make the case that it makes anything more secure? Are we adding safety, or are we making our diagrams work? Are we safeguarding our assets or the status quo? What does a firewall access control mean when resources that have always been buried in the back-end are now found in the front-end?

These old laws weren’t all that effective in the first place. The attention to the servers (or nodes) and their placement distracts from the real goal, which is to safely handle the data. Had attention been on the data all along, adoption of new software models would meet less resistance, because the data would remain the same and so too would the controls and most of the capabilities that support its safe handling.

Regardless, I’ve seen ruthless enforcement where there seemed to be no regard for architecture that did not fit the model. And so a fit was forced, drawing a horizontal line between layers that had no reason to be separated. The diagrams worked and the laws were applied, but the architecture was broken. Increased overhead and expense were apparently seen as a necessary price. The hidden costs of arbitrary complexity were realized later, under fire. The root cause will never quite be understood because it never is.


Wednesday, November 6, 2013

Developer Games

The typical developer focuses on business functionality, complex integrations, making sites work and look great, and so on. Those of us in the business or elsewhere in the technology org see a lot of the mistakes they make. We call their babies ugly.

We infosec folks find ourselves apoplectic about developers’ ignorance of the security domain, like when they start talking about firewalls while you are trying to explain why their SQL injection flaw is a big deal. We’re frustrated when, year after year, we have the same conversations with the same developers.

But don’t fool yourself into thinking that developers are incompetent or even that they don’t care. I am a developer (mostly in my spare time these days). I know too well the pressure developers are under. They have to contend with impossible deadlines, ridiculous politics, poor technology decisions made elsewhere (often by execs over a game of golf), an IDE that corporate approves rather than the one they like, and last-minute requirement changes. What they mostly love is solving challenging problems.

The developer employed as an attacker is a whole different matter. I can only speculate, since I have not practiced the dark arts outside of ethical hacking on behalf of the business, with approvals.

We know from what we see in attack patterns that there is a tendency to go after low-hanging fruit. This means that the developer-as-attacker is often focused on the weakness du jour. With this kind of focus, a good developer can really excel. I can just imagine the amusement they get from plowing through poor defenses. Good defenses are probably even more fun.

The infosec review process is often seen as yet another distraction, further dividing focus, from the developer’s point of view. For that matter, the secure SDLC adds further complexity to a process that many developers already view as an imposition from irrelevant wonks who care about the wrong things.

We can’t estimate the threat by judging our adversaries by our own day-to-day. This isn’t because the attacker is far superior, it’s because they are far more focused.

Introducing your developers (and the org) to the process of attack helps them understand defense. Businesses would benefit from allowing their developers and other technology folks to turn their focus away from the day-to-day. Beyond having developers tinker with WebGoat, a red-team/blue-team exercise (gaming, if you want to look like a hip leader) would surely satisfy a developer’s intellectual curiosity while also strengthening their understanding of defense. Developers will see that infosec is not irrelevant, if only because they don’t want to face the embarrassment of being on the losing end of the game. This would also foster a less abstract sense of ownership and even accountability.

