The perimeter and, with it, boundaries and domains are coming under a lot of scrutiny because of cloud initiatives in the enterprise. "The perimeter is dead," cry the loudmouthed analyst/guru types. "The perimeter has changed," cry more reasonable and informed people. Chris Hoff was the first I heard say that it's now many perimeters on many objects rather than one big one, which is the POV I most agree with. Whatever the case, we need to think about what the perimeter is, what it has meant, and what it has implied about how we build processes and services before we can understand how it will change and how we can permit it to evolve safely.
Since the nineties, the perimeter and the DMZ have been the first line of defense and continue to play an enormous role in defending enterprise assets. I recall the days when, after installing your ethernet cards, throwing cables, locating the specialized software, and plugging in, your machine would be directly on the internet. In short, the attack surface that grew from this might be seen as the incubator for the hacking industry we're now aware of even in the popular (and worthless) press, like Newsweek and Time.
This informal perimeter was also a problem from the inside, as people stood up ftp, irc, and nntp (news) servers for warez trading and worse. The insider threat was born, although it wasn't clear to the intrepid warez trader that he was doing anything wrong.
Then came the hardened perimeter and an internal world, as defined by NAT, that was increasingly walled off from the external world. An elite, credentialed group of network geeks was on defense. Some of us worried about how much they knew about internal activities, and so, mostly through FUD, the accidental or naive internal threat withered. Most of us, however, rejoiced that we could now worry less about what could be done internally.
We were, and still are, in the egg era of business computing: a hard exterior around a soft, vulnerable core. We place our trust in that core and imagine it trustworthy even though it's increasingly clear that it's dangerous.
Inside the egg, creativity and productivity flourished. Distributed computing was the revolution that would tear down the constraints of big iron. We wouldn't wait for some centralized, soviet-style committee to solve problems; we'd solve them ourselves. No official business-case problem-solving group would lease space on the big computer for us; we'd do it on our own. Heck, we'd even build a server out of hopped-up gaming boards and do it on the cheap. FreeBSD and Linux made this viable. x86 Solaris made it enterprisey, if you had to. It was going to be a great world.
Then came the wet blankets: IT security. Is it patched? Are you using access controls? Are you using encryption between nodes? Are you employing encryption at rest? Are you rotating your passwords? Are you considering the life-cycle of your service credentials? Are you considering the life-cycle of data? And on and on. But, but, but it's behind the firewall... it's internal! Are you admitting that your network guys can't do their jobs? Can't you just buy some tool to solve the problem? Short-cuts we didn't even know we were taking are now coming back to us, part of a death-by-a-thousand-cuts story.
So why isn't distributed computing seen as a failure even though it largely seems to have been one? You could certainly argue that it wasn't a failure in that it led to innovations we can't imagine not having today. There was certainly an upside, and the distributed computing revolutionary took as much of it as he could. He also had a long runway to get away from the downside. Let operations create a Superfund to clean up those messes.
The VP of the next revolution has moved on to cloud computing. The same revolutionary spirit that fueled distributed computing is now driving this. We want to do it ourselves and we don't want to wait. We'll go to the best provider of a service, build some ourselves, and integrate what couldn't be integrated before.
But can we take the same short-cuts we took with distributed computing? Can we get away with any of them? Cutting to the chase: can the revolutionary wipe his hands clean and claim success fast enough, before his errors come beating down his door? I doubt it.
What I find most dubious is that the perimeter mentality is infecting the designs of cloud initiatives. This is obvious in how people seem to imagine identity will be handled in this space. If the data is to be set free to integrate with cloud services, can you really leverage even cloud-friendly solutions like SAML the very same way we have done thus far? Does this mean that every service provider will eventually know everything about every constituent, or even any potential constituent, in order to line them up with their data and to have that data ready... just in case? Will formal partnerships and legal agreements (and legal threats) force us to act any more responsibly toward data stewardship than we do with distributed systems? Will every player have to be large enough to shoulder this responsibility in order to defend themselves when held accountable for mistakes? Will we push data to every corner of the cloud, eventually creating an amorphous cloud data store that could never be governed? Will we simply resign ourselves to the Newsweek moronism that people don't care about privacy, so they'll accept (because they have to) that their data cannot possibly be governed?
This is the backdrop to just about everything on my mind today. How can I get the right principles into the heads of these revolutionaries so that they can do what they want to do without wreaking havoc? Are there tools that can help? Maybe. Are there standards we should adopt or extend? Yes. Do the right people know what these are? Not really. Might we need to create new ones? Yes. Should we do it alone? Certainly not. Can we make it so that the downside is known and felt by those who take the upside? I hope so.