Thursday, November 14, 2013

It's the Singer not the Song

Security often finds itself chasing after changes in software architecture and even development methodology. One would hope that the naivety so familiar from the early days of the internet was just the growing pains of kicking off something entirely new, yet we persistently find ourselves backing into security after the sprint toward the next new thing. Security has matured, and whole new organizations have sprouted with it, but I see no evidence that this means security will stop chasing change, and the pace only seems to be increasing. Worse, a more mature security practice is proving to be a hindrance to innovation and a barrier to the adoption of better software models.

To understand where we’re going, we have to understand where we’ve been. And so, before going into the latest challenges, I’ll spend some time explaining how the DMZ matured, at least as I’ve seen it in practice.

The typical tiered Java application has a web server, an application server, a database, and maybe some remote services.

The web server is there to accept requests from the outside world. It is often used to serve static content to keep that burden off the critical systems in the back-end. It establishes the connection to the application server on the requestor’s behalf. It can play the role of load balancer, choosing the least busy or the next available application server. Finally, this is often where the end-user’s session is validated. The web server has a plugin that can talk to the policy server where sessions are established and validated. The session comes in from the end-user’s client, the policy server validates it and extracts the user principal (who the end-user is), and the user principal is passed on to the application server.
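
To make that flow concrete, here is a minimal sketch (in Python, with hypothetical endpoints and header names, not any specific vendor's API) of what the web tier effectively does: validate the inbound session with the policy server, extract the principal, and forward the request to an application server with that principal attached.

```python
# Illustrative sketch only; the policy-server URL, header names, and
# app-server hosts below are made up for the example.
import requests

POLICY_SERVER = "https://policy.internal.example/validate"
APP_SERVERS = ["https://app1.internal.example", "https://app2.internal.example"]

def handle_request(session_cookie: str, path: str) -> requests.Response:
    # 1. Validate the end-user's session with the policy server.
    resp = requests.post(POLICY_SERVER, json={"token": session_cookie}, timeout=2)
    resp.raise_for_status()
    principal = resp.json()["principal"]          # who the end-user is

    # 2. Pick an application server (crude stand-in for load balancing).
    app = APP_SERVERS[hash(session_cookie) % len(APP_SERVERS)]

    # 3. Forward the request with the validated principal attached.
    return requests.get(f"{app}{path}", headers={"X-User-Principal": principal})
```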

The application server is there to deliver the business functionality (and value, we hope). This is where the difficult stuff happens and where most of the developer’s work goes. Often the business logic is executed here. (When it’s not, it’s executed in a business rules engine that organizes critical logic in one manageable space.) Calls out to data sources happen here. Usually there is at least one data source used by the application server that persists information about the end-user and anything else that will be useful to execute the business function. The application server has a trusted connection with the data source, usually a shared secret (username and password, or service credentials). The application server is normally two or more servers with a large amount of CPU (horsepower) and RAM (memory).
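
The important detail is that the tier-to-tier trust rests on a service credential held by the application server, not on the end-user's identity. A minimal sketch, assuming a PostgreSQL data source and hypothetical credential names purely for illustration:

```python
# Illustrative sketch; the host, database, and environment-variable names
# are assumptions, and PostgreSQL is only an example data source.
import os
import psycopg2

def get_connection():
    return psycopg2.connect(
        host="db.internal.example",
        dbname="appdata",
        user=os.environ["APP_DB_USER"],        # shared service credential,
        password=os.environ["APP_DB_PASSWORD"],  # not the end-user's identity
    )
```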

The application server often contains more than it should. This is where the most man-power is usually engaged and, accordingly, it’s the default place where gaps are closed. Is the business rules engine not delivering the logic needed? Patch it up with some code in the application layer. Does the service layer fail to orchestrate and aggregate as we’d like? Fix it here. Does the access control out front fail to be granular enough to ensure safety while executing our business functionality? Solve it here. Do we get more information than is appropriate for this end-user from the data or services layers? The application becomes the authorization decision point (policy decision point, or PDP, in XACML terms). In short, the application layer is overloaded. This creates a snowball effect as an application lives on through the years: this is where we’ve solved everything so far, so it must be the right place to solve the next complex problem. Applications grow and so do the teams that support them.
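
Here is a rough sketch of that "application as PDP" pattern, with made-up role and field names: the layers behind us hand over more than they should, and the application quietly becomes the place where the decision gets made.

```python
# Illustrative sketch of authorization logic creeping into the app layer;
# the role names and record fields are hypothetical.
def authorize_view(principal: dict, records: list[dict]) -> list[dict]:
    """Trim what the data layer happily returned down to what this
    end-user is actually allowed to see."""
    if "admin" in principal["roles"]:
        return records
    return [
        r for r in records
        if r["owner"] == principal["id"] and not r.get("restricted")
    ]
```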

The data source is where the application stores the data it needs to support the business functionality and bring value to the end-user. This is often a database; more specifically, it tends to be a relational database. Databases are tuned to retrieve, correlate, sort, and save data. There is a lot of potential for logic in many of the common databases, but I’ve found that most often there’s a tendency to avoid this in favor of something out in front, like the application server, business rules engine, or services. From what I can tell, this is mostly because of where application development resources sit, but also because database administrators (DBAs) have a more mature technology, longer-tenured personnel, and well-defined processes. There is a tendency to resist introducing anything to this layer that could degrade performance, introduce manageability concerns, complicate troubleshooting, or generally make things harder to change.

This completes the picture of a typical tiered application. I mention other systems, like services and business rules engines, but for the sake of brevity I’ll keep most of the attention on web servers, application servers, and data sources. It’s important to know that additional servers and services participate.

Between these layers, we have introduced additional security at the firewall level. In short, we let the layers talk only to the minimum necessary. The web server can talk to the internet, but only on ports 80 (HTTP) or 443 (HTTPS, encrypted HTTP transport). On the inside, the web server talks only to the application ports using the expected protocols (HTTP, AJP, etc.). The web server also talks to the policy server for the session concerns mentioned earlier.

The application server is open to requests from the web server. It is allowed to talk to the data source using only the ports intended for the application and using expected protocols. The application server can also talk to services, often in another domain, even external services from other businesses. Of course, the application server must also talk to the business rules engine, which is often in the same zone. In some cases, conversation with the policy server is also necessary to execute fine-grained authorization. The policy server is usually located in the application zone, mostly because there is no need for it to ever be available directly to the internet. In cases where it does need to be available, as with some single sign-on (SSO) scenarios and federated authentication, the traffic is either driven through a web server layer or is made available by way of a load balancer appliance, when one is used.

The load balancer requires some attention as well. It sits in front of most large applications. It serves the purpose of balancing transactions across resources, accelerating encryption in-transit (TLS/HTTPS), and increasingly plays the role of a firewall for the application stack (layer 7). Many load balancers now offer integration with policy servers, essentially taking this responsibility from the web servers. Further, they can act as a kind of security token service (STS), translating tokens from the external domain into tokens that can be used on the backend.
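
The token translation piece is easy to picture as an exchange at the edge. A minimal sketch, with a hypothetical STS endpoint and token fields (not any particular product's API):

```python
# Illustrative sketch of edge token translation (an STS-style exchange);
# the endpoint and parameter names are assumptions for the example.
import requests

def translate_token(external_token: str) -> str:
    """Exchange a token from the external domain for one the backend trusts."""
    resp = requests.post(
        "https://sts.internal.example/token-exchange",
        data={"subject_token": external_token, "audience": "backend"},
        timeout=2,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```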

All of these conversations that are necessary to make a site work properly are codified in policy. The rules between layers are described as distinct zones: horizontal isolation. A whitelist of conversations is written, approved, and everyone agrees to follow these rules (until they don’t, which is common). This is the law of application security behind the scenes (most end-users don’t know it, see it, or care).
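
Reduced to its essence, that whitelist is just a table of permitted flows. A sketch of the "law" as data, with zone names, ports, and protocols that are hypothetical examples rather than a real policy:

```python
# Illustrative sketch of the zone-to-zone whitelist; the entries are made up.
ALLOWED_FLOWS = {
    ("internet", "web_zone"):    [(80, "http"), (443, "https")],
    ("web_zone", "app_zone"):    [(8009, "ajp"), (8080, "http")],
    ("app_zone", "data_zone"):   [(5432, "postgres")],
    ("app_zone", "policy_zone"): [(443, "https")],
}

def is_allowed(src_zone: str, dst_zone: str, port: int, proto: str) -> bool:
    """Anything not explicitly listed is denied."""
    return (port, proto) in ALLOWED_FLOWS.get((src_zone, dst_zone), [])
```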

What I have so far described is a typical 3-tiered application. It has been the bread and butter for the last decade and more. This was the “how to build it right” model all this time. However, with the proliferation of internet-facing services and now the trend toward discrete REST APIs (application programming interfaces; basically terse, less formal services that leverage HTTP more directly), this law seems out of place.

The process of defining APIs for a business means decomposing business functionality into discrete, consumable pieces. As I’ve already described, most of our business functionality is in the application. REST is definitely not just another way to build applications; it discards the application tier. In its place, client-side code retrieves what is needed and assembles it into a cohesive user experience.
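
To picture the difference: nothing server-side renders a page anymore; the client composes the experience from discrete resources. A sketch with hypothetical endpoint paths (in practice this assembly happens in browser-side code, but the shape of the calls is the same):

```python
# Illustrative sketch; the API host and paths are made up for the example.
import requests

API = "https://api.example.com"

def build_dashboard(token: str, user_id: str) -> dict:
    headers = {"Authorization": f"Bearer {token}"}
    profile = requests.get(f"{API}/users/{user_id}", headers=headers).json()
    orders = requests.get(f"{API}/users/{user_id}/orders", headers=headers).json()
    offers = requests.get(f"{API}/offers", params={"segment": profile["segment"]},
                          headers=headers).json()
    # The "application" is now whatever the client chooses to assemble.
    return {"profile": profile, "recent_orders": orders[:5], "offers": offers}
```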

The laws of application security for the old tiered model were adapted to how applications were already being built. That web server in the presentation layer was there to serve static content and route traffic. Without the calls out to the policy server, it’s really most often just a dumb reverse proxy. Because it doesn’t present the same attack surface, we might argue that it plays a defensive role for the application server, which holds more sensitive credentials and handles more sensitive data. But we know that this layer has been proven vulnerable, with exploits that allowed attackers to control and pivot from these presentation servers. One popular flavor of Apache included a simple header injection that essentially enabled a port scan, allowing attackers to map out the attack surface of the presentation layer, including connections into other zones. The load balancer appliance added another layer of indirection, but it was brought in to solve engineering problems more than security ones, as much as the vendors now seem to want you to believe in the security pedigree of their devices.

Behind this presentation layer, again, are highly sensitive servers and services that are often poorly secured. Encryption in-transit terminates at the presentation appliance; everything behind it is often unencrypted. Access controls, again, are handled in the web server through its relationship with the policy server. Everything behind it is node-to-node trust, at best. Often access controls don’t exist between backend layers at all. It’s assumed that if a server can make a request, it’s a request we can trust: if the servers have been permitted to have a conversation with one another, it must be legitimate. Essentially this means that firewall rules between tiered zones are used as access controls. However, make one mistake in the app layer, like SQL injection, and your trusted connection from the app zone opens a pivot into the data layer.
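
A sketch of that anti-pattern, with a made-up network range and a stand-in data-layer call, just to show how thin "firewall rules as access control" really is:

```python
# Illustrative sketch of node-to-node trust; the zone range and the
# run_query stand-in are hypothetical.
import ipaddress

APP_ZONE = ipaddress.ip_network("10.20.0.0/24")   # hosts the firewall already permits

def run_query(sql: str):
    """Stand-in for the data layer."""
    print("executing:", sql)

def handle_backend_request(source_ip: str, sql: str):
    # The only "access control": is the caller inside a permitted zone?
    if ipaddress.ip_address(source_ip) not in APP_ZONE:
        raise PermissionError("not from the app zone")
    # No per-request identity and no check of *what* is being asked for;
    # any request from a trusted node runs as-is, which is exactly the
    # trust a SQL-injection pivot rides on.
    return run_query(sql)
```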

These old laws in new contexts, applied to new models of architecture, are not bound to be successful. Once again the security team finds itself chasing after how applications are actually built, and from this defining how they are to be secured. However, this time there is much more awareness of security across organizations and even in the public. This strengthens the hand of the security team, which now has influence over application architecture. However, the security team might not appreciate when applications are truly different from what they are familiar with. They are applying their application security laws to models that aren’t tiered in the same way.

What are the rules when an application is built from a language compiled into the web server itself? Should it require another web server acting as a reverse proxy in front of it? Perhaps, but what if there’s an appliance, like a load balancer? What about the backend? If this app uses a limited local store for end-user state and services to interact with persisted user data, should it require a separate database, isolated by horizontal zone? What if the application is one small, discrete business function amongst many, many of its kind? Should they all be tiered to maintain status quo DMZ design? Should we concern ourselves with this extra overhead when we can barely make the case that it makes anything more secure? Are we adding safety or are we making our diagrams work? Are we safeguarding our assets or the status quo? What does a firewall access control mean when resources that have always been buried in the backend are now found in the front-end?

These old laws weren’t all that effective in the first place. The attention to the servers (or nodes) and their placement distracts from the real goal, which is to safely handle the data. Had attention been on data all along, adoption of new software models would meet less resistance, because the data would remain the same and so too would the controls and most of the capabilities that support its safe handling.

Regardless, I’ve seen ruthless enforcement where there seemed to be no regard for architecture that did not fit the model. And so a fit was forced, drawing a horizontal line between layers that had no reason to be separated. The diagrams worked and the laws were applied, but the architecture was broken. Increased overhead and expense were apparently seen as a necessary price. The hidden costs of arbitrary complexity were realized later, under fire. The root cause will never quite be understood because it never is.

Written with StackEdit.

Wednesday, November 6, 2013

Developer Games

The typical developer focuses on business functionality, complex integrations, making sites work and look great, and so on. Those of us in the business or elsewhere in the technology org see a lot of the mistakes they make. We call their babies ugly.

We infosec folks find ourselves apoplectic about developers’ ignorance of the security domain, like when they start talking about firewalls while you are trying to explain why their SQL injection flaw is a big deal. We’re frustrated when, year after year, we have the same conversations with the same developers.

But don’t fool yourself into thinking that developers are incompetent or even that they don’t care. I am a developer (mostly in my spare time these days). I know too well the pressure developers are under. They have to contend with impossible deadlines, ridiculous politics, poor technology decisions made elsewhere (often by execs over a game of golf), an IDE that corporate approves rather than the one they like, and last-minute requirement changes. What they mostly love is solving challenging problems.

The developer employed as an attacker is a whole different matter. I can only speculate, since I have not practiced the dark arts outside of ethical hacking on behalf of the business, with approvals.

We know from what we see in attack patterns that there is a tendency to go after low hanging fruit. This means that the developer-as-attacker is often focused on the weakness du jour. With this kind of focus, a good developer can really excel. I can just imagine the amusement they get from plowing through poor defenses. Good defenses are probably even more fun.

The infosec review process is often seen as yet another distraction, further dividing focus from the developer’s point-of-view. For that matter, the secure SDLC adds further complexity to a process that many developers already view as an imposition from irrelevant wonks who care about the wrong things.

We can’t estimate the threat by judging our adversaries by our own day-to-day. This isn’t because the attacker is far superior, it’s because they are far more focused.

Introducing your developers (and the org) to the process of attack helps them understand defense. Businesses would benefit from allowing their developers and other technology folks to turn their focus away from the day-to-day. Beyond having developers tinker with WebGoat, a red-team/blue-team exercise (gaming, if you want to look like a hip leader) would surely satisfy a developer’s intellectual curiosity while also strengthening their understanding of defense. Developers will see that infosec is not irrelevant, if only because they don’t want to face the embarrassment of being on the losing end of the game. This would also foster a less abstract sense of ownership and even accountability.

Written with StackEdit.

Monday, August 12, 2013

IT Meandering

Aside from pizza slinging, post-teen line cookery, and video production forays, I have worked in information technology most of my career.

I bought my first computer in 1983. I saved up money that I made delivering the morning and evening paper in my home town of International Falls, Minnesota.

I've seen the evolution of personal computing from no relevance outside specific industries (very few early on) to foundational for most businesses.

I'm not old enough to have used punch cards and just missed hobbyist kit computers.

I started on an Atari 800 XL, purchased from the Sears catalog.  Actually, I first had a Timex Sinclair.  Too small and weird.  I sent it back.  Then came the Coleco Adam.  Also too weird.  I sent it back.  By the time I bought the Atari, I had saved up about $1500.  I got the floppy drive, which was the Indus brand since it was faster than the Atari one (apparently I needed the speed).

In the early nineties, I first experienced the internet on a first generation Mac a friend brought to his dorm from home. He had a 2400 baud modem and dialed into the University of Minnesota's network. From there, he used the shell to get to IRC, news, and email applications. I was interested, but not enough to pull me away from the obsessions of a 17 year old set loose in a city. I recall what a big deal he thought it was, but I knew him enough already to know he was prone to exaggeration.

I also saw him interacting on a list of Bulletin Board Systems (BBS). This idea wasn't new to me. International Falls, for those who are unfamiliar, is quite remote. It's something more than a small town, but not by much. It's a working class, one industry mill town. To this day, it lags in technology.  I wanted to get on to a BBS, but it was impractical. I wanted to do more with my computing hobby. I had a 1200 baud modem but not much I could do with it.

A few years after I left in 1988, the owner of the local Radio Shack started a local ISP. This was not available to me while I lived there. ISPs were available but I could not afford them since they weren't local.

Kids who could afford it would dial BBS sites in larger locales, such as Duluth or Minneapolis. Some had accounts with Prodigy, an early online service, however the long-distance charges on top of the service fees kept most away. I knew two people who could connect their modems to anything remotely interesting. Everyone else would connect to each other, more or less creating ad-hoc, terminal-based, one-on-one chat sessions.

Some time after my early exposures to the internet, I finally bought a used computer and began to be pulled back into my old obsessions. At this point, I had spent a number of years giving myself what I still consider a proper University experience. I focused on literature, philosophy, art, history, and film. No course I ever took had anything to do with computer "science" (to paraphrase @nntaleb, if a discipline has the word science in it, as in social science, it usually means it's not a science). I had set my obsession with computers down so that I could live the life of a young man venturing far from the familiar.

Then computers came back into my life at a time when the internet was just taking off.

The computer was a Mac SE/30. I bought it off the same friend who had introduced me to the internet earlier. I also bought a modem and was frequently connecting to the University's network and getting familiar with Unix shells and tools. Frequently meant every couple of weeks... if that. (I'm certainly at the other end of the spectrum today, as are so many people who don't go a waking hour without being online.)

I started work in an office at the University called the American Indian Learning Resource Center (AILRC). The program's mission was to help reduce the Native American student drop-out rate, which was very bad at the time and still is. (No doubt owing to the culture shock of coming out of a reservation and into a city while surrounded by an alien middle-class culture.) I was quickly recognized for my technical aptitudes and set about solving problems from banal printer and network problems to program logistics, like contact databases and general communications.

Dabbling in Appletalk networks and then Apple's Hypercard brought me into an emerging technology called Gopher. How exciting that it was so revolutionary, but also a local phenomenon! I even beta tested the GopherVR browser, which collected online resources into 3D scenes. Boy was this going to be big!

This dabbling led me to the world wide web that was finally gaining traction. The NCSA Mosaic browser was installed and frequently run at a time when a good site meant a well organized outline with tasteful use of formatting to match what was already familiar in word processors. They really did feel like inferior word processing documents. They couldn't include pictures and the formatting was very primitive and grating to me as a liberal arts student, familiar with proper style and the power of good aesthetics. However, like Hypercards and Gopher, you could link to other documents!

Browsing the web was a new experience, but it was mostly a yawner for me. Then came Netscape Navigator with its integrated and optimized graphics. Now I got it. I talked my manager into creating a site for the AILRC. I created custom graphics and scanned program photographs for informational pages. I had each staff person, myself included, do a bio page with photos.

I had joined the WWW and brought the program with me. I mostly failed to realize how early my dabbling was. As I said, I had friends who were into technology much more deeply than I was. They seemed ahead, and they were. However, to the general public, I was riding the bleeding edge.

To create and manage these pages, I began teaching myself HTML and other languages. Of course, the fact that you could peek at HTML source was very helpful in getting me jump started. (I recall pondering the ethics. Was I stealing?) I learned editing in Unix shells, used Usenet News to connect with people with similar interests and similar skill levels. Eventually, I was a pro helping others and getting occasional jobs to kick-start web site initiatives both inside the University and in the private sector. Looking back, this has all of the elements and even habits of what I do to this day.

In 1995 I was finished with college and drifting from job to job, interest to interest. I stumbled on a 1968 Volvo 142 in Dinkytown (R.I.P.). I bought it for $300. It was literally in tatters internally. It had served as a way for an artist to haul paintings and a big dog. The exterior, however, was perfect in my eyes. I quickly decided that it needed a new engine, so I found a second 140 with a B20 engine and swapped engines over the summer. I documented every last detail of this job and found myself online every night, interacting with folks on an email listserv who also loved old Volvos and working on them.

By the time I was finished with the job, I had become fed up with the listserv owner. I had started documenting my Volvo obsession on my personal website (hosted at a local ISP called Visi). I decided that I wanted to create my own group for Volvo enthusiasts. I was familiar with news groups. There were one or two for Volvos, but I found that news group culture wasn't for me.

News groups had become a mix of noobs, warez fiends, perverts, and grouchy old veterans longing for a day that would never return, when news groups were filled with intelligent people and content rather than the unwashed masses. In short, it felt exclusive and fragmented.

I didn't share the dream to restore newsgroups to their former glory and perhaps knew, from my obsession with HTML and what had become of the WWW, that it would forever be left in the state it was in. I needed to be in the same place where all of this new stuff was happening.

I was starting to hear people chattering about it in the general public more than ever before. About 1997, I recall being in a restaurant with my girlfriend (and future wife) and hearing older people talking about sites that they found that were useful. Up to this time, I think people were aware, but it still seemed remote and geeky to most. Now ordinary people appeared to be finding it useful and even habit forming.

Within a couple of years, my mother and future in-laws would ask me to connect their modems to an ISP. Old people were connecting! By this time, I was an early DSL adopter and tossing wires to neighbors in my apartment building. I had a home office and my interests were attached to something that was about to explode on the scene.

I decided to build a Volvo enthusiast's forum using the format of news groups but hosted as a CGI that generated HTML. I named the site brickboard.com because fans of the 140 and 240 Volvos referred to their cars as bricks due to their blocky physique, and because I was geeky enough to know about bulletin boards (mostly a thing of the past by then). I had finally achieved my dream of having a BBS... sorta.

The site grew fairly popular. I was online before Volvo Cars was. There was an audience for sure. To keep it going, I had to learn different skills, including troubleshooting and even customer support. Once the site got too busy and burdened, I would optimize as best I could, but eventually decided that I needed a dedicated server. Up until this time, I had used shared servers (the kind Rackspace built their company on). I built servers to host it and colocated them in the Visi Minneapolis datacenter. I was a full-on geek now! I terminaled into servers, in a rack, at a datacenter.  Oh boy. Later, I hosted them out of my basement (with the advancement of DSL).

When it came time to consider another server, I decided to try Amazon EC2 in 2010 and never looked back. Essentially, this was similar to the shared server experience earlier, however I had more control and would not have to be at the mercy of the bad code from my neighbors. I actually left the shared services because of a spate of compromises and the recognition of the limits of my ability to defend my site from them. The model was clearly broken. Amazon added up nicely for me since I could get back to a model of hosting remotely while keeping a similar level of control that physical servers gave me.

What has become a career for me started out with obsessions and hobbies. I stumbled into something that was in its infancy, namely personal computing. Computing was newly available to those who would tinker and eventually to those who would benefit from utility never before imagined. I recall subscribing to Compute! magazine and other home computer rags and reading about the emerging companies, like Apple and then Microsoft.

I saw that what I was doing was considered low-brow to a whole different level of technologists. I saw the uncertainty of the market and how business adoption was virtually zero. What these kids were into was irrelevant to most.

I had an aunt who would tease me about it and rib me about using my time as a youngster better, like chasing girls. I didn't care. It was exciting and I had a sense that it was going to be a big deal. (Of course, I was a preteen, when everything that matters to you is a big deal.)

As I started working in offices, I found endless interest in using technology to solve problems, thankfully, from my management (I have had the good fortune to have many great bosses). I was still tinkering with technology, but I was, in hindsight, solving business problem after business problem.

I've found that my meandering path has served me well. Getting a University experience, rather than a vo-tech-like experience, has helped foster my inquisitive nature and built soft skills that are immeasurably valuable.

The accessibility of technology today has opened up so much more potential than has been realized and I hope that bored preteens know this. I can only imagine what I would have done had I been able to connect to the wider world the way I can now.

Monday, August 5, 2013

Big Egg in the Sky

The perimeter and, with it, boundaries and domains are coming under a lot of scrutiny because of cloud initiatives in the enterprise.  "The perimeter is dead," cry the loudmouthed analyst/guru types.  "The perimeter is changed," cry more reasonable and informed people.  Chris Hoff was the first I heard say that it's now many perimeters on many objects rather than one big one, which is the POV I most agree with.  Whatever the case, we need to think about what the perimeter is, what it has meant, and what it has implied about how we build processes and services before we can understand how it will change and how we can permit it to evolve safely.

Since the nineties, the perimeter, and the DMZ, have been the first line of defense and continue to play an enormous role in the defense of enterprise assets. I recall the days when, after installing your ethernet cards, throwing cables, locating the specialized software, and plugging in, your machine would be directly on the internet. In short, the attack surface that grew from this might be seen as the incubator for the industry of hacking that we now are aware of even in the popular (and worthless) press, like Newsweek and Time.

This informal perimeter was also a problem from the inside, as people stood up ftp, irc, and nntp (news) servers for warez trading and worse. The insider threat was born, although it wasn't clear to the intrepid warez trader that he was doing anything wrong.

Then came the hardened perimeter and an internal world, as defined by NAT, that was increasingly walled off from the external world. An elite, credentialed group of network geeks were on the defense. Some of us worried about how much they knew about internal activities, and so mostly through FUD the accidental or naive internal threat withered. However, most of us rejoiced that we could now worry less about what could be done internally.

We were and still are in the egg era of business computing. It has a hard exterior and a soft, vulnerable middle (the core). We trust the core and imagine it to be trustworthy even though it's increasingly clear that it's dangerous.

Inside the egg, creativity and productivity flourished. Distributed computing was the revolution that would tear down the constraints of big iron. We wouldn't wait for some centralized, soviet-style committee to solve problems. We'd solve them ourselves. No official business-case problem-solving group needed to lease space on the big computer; we'd do it on our own. Heck, we'd even build a server out of hopped-up gaming boards and do it on the cheap. FreeBSD and Linux made this viable. X86 Solaris made it enterprisey, if you had to. It was going to be a great world.

Then came the wet blankets: IT security. Is it patched? Are you using access controls? Are you using encryption between nodes? Are you employing encryption at-rest? Are you rotating your passwords? Are you considering the life-cycle of your service credentials? Are you considering the life-cycle of data? And on and on. But, but, but it's behind the firewall... it's internal! Are you admitting that your network guys can't do their jobs? Can't you just buy some tool to solve the problem? Short-cuts that we didn't even know we were taking are now coming back and are seen as part of a death-by-a-thousand-cuts story.

So why isn't distributed computing seen as a failure even though it largely seems to have been one? You could certainly argue that it wasn't a failure in that it led to innovations that we can't imagine not having today. There was certainly an upside and the distributed computing revolutionary took as much of it as he could. He also had a long runway to get away from the downside. Let operations create a super-fund to clean up those messes.

The VP of the next revolution has moved on to cloud computing. The same revolutionary spirit that fueled distributed computing is now driving this. We want to do it ourselves and we don't want to wait. We'll go to the best provider of a service, build some ourselves, and integrate what couldn't be integrated before.

But can we take the same short-cuts we took with distributed computing? Can we get away with any of these short-cuts? Cutting to the chase, can the revolutionary wipe his hands clean, claiming success fast enough before his errors come beating down his door? I doubt it.

What I find most dubious is that the perimeter mentality is infecting the designs of cloud initiatives. This is obvious in how people seem to imagine identity will be handled in this space. If the data is to be set free to integrate with cloud services, can you really leverage even cloud-friendly solutions like SAML the very same way we have done thus far? Does this mean that every service provider will eventually know everything about every constituent or even any potential constituent in order to line them up with their data and in order to have their data ready... just in case? Will formal partnerships and legal agreements (and legal threats) force us to act any more responsibly toward data stewardship than we are with distributed systems? Will every player have to be large enough to take this responsibility in order to defend themselves when asked to be accountable for mistakes? Will we push data to every corner of the cloud, eventually creating an amorphous cloud data store which could never be governed? Will we simply resign ourselves to the Newsweek moronism that people don't care about privacy, so they'll accept (because they have to) that their data cannot possibly be governed?

This is the back-drop to just about everything on my mind today. How can I get the right principles in the heads of these revolutionaries so that they can do what they want to do without wreaking havoc? Are there tools that can help? Maybe. Are there standards we should adopt or extend? Yes. Do the right people know what these are? Not really. Might we need to create new ones? Yes. Should we do it alone? Certainly not. Can we make it so that the downside is known and felt by those who take the upside? I hope so.

Monday, July 15, 2013

B2B and B2C Are Not Dead

B2B and B2C are dead. This is a declaration I heard at the Cloud Identity Summit 2013. A provocative statement, yes, but I am certain that this is wrong.

The speaker was, I suspect, attempting to find a novel way to describe the exciting and anxiety-inducing inevitability of rethinking security perimeters. If revolutionary technologies and channels depend on setting data free, then anyone with a pulse in identity needs to either brace themselves for change or be bold enough to try to get ahead of the challenge.

However, this notion is nothing more than a novelty. The statement reminds me of the efforts I've seen utterly wasted on attempting to make batch processes real-time (or quasi-real-time). I actually had to argue with someone who intended to turn a batch process, with 80,000 rows of data transferred daily, into a web service that required a distinct call for each row of data. It was to be called every night at midnight, 80,000 times until finished.

Congratulations, innovator, you just increased the size of transactions (dramatically), slowed down the process, and increased complexity for all parties. I didn't think it would be productive then and I don't think it will prove productive now to attempt to tinker with batch back-ends. Maybe later, but it's not an essential, or smart, tactical move to get where we want to be. (If the records were to flow in all day and there was a business case that would benefit from real-time, then the idea would have made sense.)
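
The arithmetic makes the point by itself. A back-of-the-envelope sketch, where the per-row payload, per-call envelope overhead, and per-call latency are assumed numbers chosen only to illustrate the shape of the problem, not measurements of any real system:

```python
# Rough, assumed figures; only the relative comparison matters.
ROWS = 80_000
ROW_BYTES = 200            # assumed payload per row
ENVELOPE_BYTES = 1_500     # assumed per-call HTTP/SOAP overhead
PER_CALL_LATENCY_S = 0.05  # assumed round-trip time per call

batch_transfer_mb = ROWS * ROW_BYTES / 1_000_000                       # ~16 MB, one transfer
per_row_transfer_mb = ROWS * (ROW_BYTES + ENVELOPE_BYTES) / 1_000_000  # ~136 MB
serial_runtime_hours = ROWS * PER_CALL_LATENCY_S / 3600                # ~1.1 hours of round-trips

print(batch_transfer_mb, per_row_transfer_mb, serial_runtime_hours)
```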

What proves this declaration wrong? There is a river of data flowing through the back-end. There is data arriving from and being pushed to partners, customers, regulatory agencies, banking platforms, researchers, etc. Despite the dominance of attention received by the large portals we've been building for over a decade now (the revolution that was once du jour and is now passé), this river has flowed on with ever-increasing current.

The message coming from many sources is that a perimeter-less approach to security is the future of handling IT data. I prefer Chris Hoff's assertion that it's not no perimeter, it's many perimeters. Whatever the case, the perimeter will move and migrate.

There will be an internal or core perimeter that remains for a long time to come. Behind it will be mainframes (declared dead 20 years ago by gurus of the day) and other core business that makes no sense to move. The corporate datacenter won't go away, although it will likely become smaller.

The challenge with the perimeter as a moving target is the implications for the handling of ownership and responsibility. The good news is that we haven't done much to address this in the current state, so there's not much to port. The bad news is that we have buried how we've done it across the IT landscape, specifically in large portals... but that's another blog entry.

Monday, July 8, 2013

Build a Network but Build it Better

When Google ventured into email, I recall many people wondering if it was an empty "me too" move since Hotmail had already been well established. And what did it have to do with search anyway?

Of course, it makes enormous sense when you understand that Google is not actually about search and that it's about monetizing high quality data aggregation. Sending emails tells Google a lot about you: what you like, what you don't, what you want, and who you know.

Of course it is the latter that makes Google, and companies like it, ingenious. Who you know is your network. Using Google to engage your network feeds its colossal network, building insights about people that have yet to be imagined.

One can imagine the product pitch within Google being rather dull. "Let's do what all these other companies are doing and figure out how to monetize it later." Of course, the DotCom bubble had burst, leaving the notion that "if you build it, they will come" a very unfunny joke.

I doubt, however, that the pitch was all that difficult in Google because they certainly must understand the value of their network. In other words, they don't look for transactional qualities to get their heads around return on investment (of course there is an opaque quality to what is actually transparent).

When making product decisions, the network building company simply needs to ask how it can build the network. If it fails to build the network, it failed. If it fails to make money directly, it pays its way building the network where it hasn't been before.

In hindsight, it's hard to imagine Android working out so well without gmail. With phones, you further extend the network several different ways. For instance, the apps you download and use provide insight into the relationships you have with other companies. As with the Chrome browser, Google will have insight into these relationships app or no app. And so the network and the consumer inference potential grows.

Modern networking does not have to be transactional, per se. The payment is less tangible than money but no less powerful. The currency is potential rather than kinetic: potential monetization. Maybe it will never actually make money, but will it create customer satisfaction or loyalty?

What's going on in the heads of the customer or potential customer? Have you asked the right questions to ensure that you understand? Can you derive or infer customer actions from the information that they have volunteered?

What about privacy?

I have to pull back on the giddy notion of limitless inference about consumer minds. Privacy matters. Despite what some journalists insist, it matters regardless of age.

To the latter, have you created an environment where they are willing to volunteer this data? Do they trust your network?

I am always advocating for empowering the customer. Corporations, especially large ones, have a tendency to make decisions without involving the customer. They'll partner with others to share your data however it makes sense to them. They might have some notion of your consent in legalese, but they won't really ask for customer consent. The customer doesn't really have an alternative. They might even imagine that regulation is actually the customer's will being expressed by proxy. Damn government! Of course, empowered customers must avoid delegating authority that they could easily manage themselves.

But how easy is it? If we enable consent beyond EULAs and click-throughs, could we dramatically ramp up participation on the network? We can build technical systems that allow customers to express consent. Of course, it's actually more important that customers are able to revoke their consent. Would a customer behave differently if, when they revoke their consent, they can be certain that their data is removed from the network? I suspect that the answer is yes. (There's a good job for the government: ensure that revocations are actually being honored.)
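
What would that look like technically? A minimal sketch, with hypothetical field names and a made-up purge hook, of consent as a first-class record where revocation is an action with teeth rather than a buried preference flag:

```python
# Illustrative sketch only; the Consent fields and the purge callback are
# assumptions, not any real consent-management API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Consent:
    customer_id: str
    purpose: str                      # e.g. "share-with-partner-x"
    granted_at: datetime
    revoked_at: datetime | None = None

def revoke(consent: Consent, purge) -> Consent:
    """Record the revocation and trigger removal of the data it covered."""
    consent.revoked_at = datetime.now(timezone.utc)
    purge(consent.customer_id, consent.purpose)   # remove the data from the network
    return consent
```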

This empowers the customer, without a doubt. It also implies a transfer of responsibility, or more of it, to the customer. A shallow example of this is the "poison pill" functionality in smart phones. If you lose your phone, the carrier and/or the phone manufacturer have given you a way to pull the emergency brake and reduce the risk that your data will be misused. But you have to take action. You are responsible for acting when the conditions demand it: you've lost your phone. The carrier also benefits in that their services will not be stolen and abused.

What might be more interesting about building consent into the fabric of the network is that it will very likely have enormous influence on the behavior of the stewards of data. If the owner of the network makes a decision that adversely affects the perception of the customer, there is the threat that they will lose data and thus cause injury to the network. If such a situation involves enormous numbers of network participants, the impact could be significant.  It's always positive to have skin in the game, even when you imagine that it's not necessary.


Network envy is still alive and well in business. When executed poorly, I imagine the pitch being "it's like Facetime, but for doctors" or "it's like Instagram, but for patients." This is easy to get your head around but it's completely shallow. Beneath such efforts needs to be a thorough understanding of what human (beyond social) networks are, what they mean to the customer, and what they could mean to the business.

Monday, June 24, 2013

Revolutionizing How Business Interacts with IT

Virtualization technologies have allowed IT to streamline delivery of IT services to business. We can establish templates to support common requests and automate deployments. But is this all that we can do with IT virtualization?

What if we took the automation further and reduced the delivery of an entire technology stack to a few minutes and even seconds?

What if we allowed customers to order like they'd order a book from Amazon?

What if we allowed customers to offer their own services in the same marketplace as the basic offerings, again, like Amazon?

What if we allowed them the freedom to run as fast as they'd like while making clear which responsibilities transfer to them?

What if we provided services, with high levels of automation and self-service, that help them take on these new responsibilities with ease?

What if we were completely transparent about expenses and provided fine-grained measurement of services rendered and consumed?

What if customers could easily employ APIs to expand and contract their IT services consumption as load increases and decreases?

What if the business passed a similarly fluid use of IT services to their customers?

Now all we need is a name for this. The name needs to act as short-hand for all of the above.

Monday, June 17, 2013

Passion, Wisdom, and Humility

Guy Kawasaki, giving a speech about innovation in Minnesota recently, repeatedly spoke of the destructive combination of arrogance and stupidity. I had to wonder if some, in an audience filled with business and IT leaders, found themselves relieved to be arrogant and intelligent.

This recipe for success might seem to work for them, but it doesn't hold up. First off, intelligence is not enough. What value is intelligence if it can't be articulated? What value is it when it is articulated but no one can stand the message?

What's required is wisdom. Arrogance erodes wisdom. And no wise person, in my estimation, will ever be arrogant-- not openly.

The recipe of arrogance and intelligence is as useless as arrogance and stupidity. Failing to understand this and, worse, failing to see a difference between intelligence and wisdom is just plain stupid.

What about passion? Is it enough to be passionate about what you do? What if your passion is used to drive destructive outcomes? Not wise.

Arrogance and stupidity are catalysts to one another (whichever leads the dance is dominant). The outcome is destructive. Arrogance and wisdom are antithetical. The outcome is also destructive; unless, of course, the practitioner feels so full of wisdom that it doesn't matter what quantity is eroded by arrogance (surely an idiot at heart).

What about passion and humility? Passion without humility is almost always destructive. Humility without passion is simply good showmanship. Passion and humility are catalysts to one another with a positive outcome (whichever leads the dance is made whole by the closeness of the other). Passion and stupidity are, of course, destructive... and common. Arrogance, passion, and stupidity is a lethal recipe.

With this, I'm left with some explanation of the irrational experiences that I occasionally have to put up with professionally. What's most maddening is that it is always the least rational person that dresses up their destructive habits in rationality. The Socratic method is the weapon of choice, or some wild interpretation of it anyway. This is, no doubt, the arrogant and intelligent recipe for (apparent) success.

What lurks behind these episodes is often transparent to all who are too kind to speak of it: poor past decisions, failures masked in diversion, incomplete work, unrealized vision, lack of understanding, incomplete knowledge, fear of incompetence, etc. I see ambitious people who seem to spend the bulk of their energy on the hard work of holding all of this at bay. Of course, the simple alternative is to admit when something didn't work or became an outright failure. But this is not the path. And once you start down the wrong one, it's nearly impossible to change direction.

Why should we care? I have seen this behavior fuel territorialism that erects barriers between organizations that really must work more closely for mutual benefit. I've seen it block attempts to measure quality, essentially rigging the results so that problems are obscured allowing claims of success where failure is the eventual outcome (institutional ADD aids in this deception since time most often erases the path to accountability).

We should care because it deeply affects performance. Often these people with destructive tendencies are in positions of power and/or influence. They are most often the passionate defenders of the status quo and work against true innovation. While defending the status quo is a perfectly rational position in a company that has success, it is often contrary to the values espoused by leadership. A company that doesn't innovate can only ride on past success so long before it is passed up. Chances are comfortable margins will shrink and inefficiencies will stand out plain as day over time.

We should at least encourage defense of the status quo with transparency about the positions being held; leave the challenge to those who want change to criticize and offer cogent alternatives (they can otherwise be charlatans). But the legion of the arrogant and intelligent often cannot articulate a true defense of the status quo. They tap it only as a source of power to keep up their charade. They tell senior management that there is no need to improve what has already served us well or dress up the old as something new (apparently the true meaning of big data). They'll use wise sounding phrases like "don't let the perfect be the enemy of the good" when what's being defended is not actually good enough but is instead fractured and brittle-- and, this, they know too well.

Again, why should we care? Because, if we're worth our pay grade, we're ambitious and crave innovation. We aim to build and rebuild entire markets. We can't do that with illusions about what we know and what we've accomplished already.

Nassim Taleb wrote “true humility is when you can surprise yourself more than others; the rest is either shyness or good marketing.” In other words, you have to be keenly aware of what you don't know-- what you aren't.

If we're not careful, we'll be building skyscrapers with gravel foundations. It is dangerous to operate from illusions.

Let's call the spades spades and let them either adapt or bring their con elsewhere. Let's be passionate, wise, and humble. Otherwise let's change our mission to: enrich the current stewards while congratulating them for the accomplishments of the past.
Written with StackEdit.

Sunday, June 9, 2013

Cyber Barkers

Who do you trust? Who can you trust? Could it be that major events that you've heard about on the news regarding major breaches have more, much more to the story than you could ever know? Could it be that the RSA breach involved a Chinese national hired by RSA itself? Could the Israelis have been the hired assassins in the clean-up that followed in the days after the discovery of Stuxnet?

There's a sector of the security industry that slings intrigue to sell products. Anytime I'm on a call with their bigshots, I imagine them pacing around in their walnut paneled dens, wearing smoking jackets, and swirling a cognac.

They have seen ugly things. Uglier than you can even imagine; uglier than they are allowed to reveal. You think you know something, kid?

They are connected. They were talking with three letter agencies just this morning about this very subject... the one we're talking about that's so scary... which agency exactly cannot be revealed.

They have people in Russia, right now in fact, trying to infiltrate a hacking ring that's targeting your industry. There's a lot of indication that your industry is about to be in the cross-hairs much more than it has been so far. Trust me, one of my people abroad... and she happens to be drop-dead beautiful... which helps her get information... I'll tell you stories over a beer sometime... tells me that this is going to hit hard by next year.  Take note... and cover. You've got a bumpy ride ahead.

These are most likely just the stories of sad, pot-bellied guys who eat too much on the road. The worst trouble they run into is probably with their expense reports. But they'd like you to buy it and, most of all, to believe that you need their services to keep your operations top-notch.

The plot of the best Pakula movie is their back-drop. Their wares? Murky.

They'd like you to believe that once you hire them, you'll have briefings not unlike the president gets from the CIA (a much more insidious source of snake oil... but I digress).

But they are more the "like it never happened" guys of CYBER security.  I have no doubt they can get your basement, so to speak, free of horrible smells after an unfortunate back-flow.  (Some seem to be little more than an overly hyped RSS feed.)  But do they mean anything at all to your security program? Can they provide more than having one of your employees join Infraguard? I doubt it.

Why should I be bothered by all of this amusement?

These guys take our eye off the ball. They allow us to check a box on a list and are more of a good luck charm than a true, practical solution.  They validate the narrow, tool buying activities-- the side that treats security like a pathology that requires medication rather than a discipline that requires vigilance.  They make the easier work look more interesting than it actually is.

The true challenge is in software and data security. It's in the architecture. It's fixing the mistakes we've made and embedding security into the day-to-day of every layer of the stack. It's in understanding our responsibilities for the data we're entrusted to handle.

It's time we stop spending so much time on the intrigue while pretending it's real and valuable work. Let's spend our time and money on something more than pop entertainment and innate impulses, like little boys playing guns in the backyard.  Getting serious will be much more difficult and, to many, a lot more dull.  We'll know we're on the right track when it feels more like a challenging university class than a video game.

Monday, June 3, 2013

Application Architecture Gravity

Portals have grown from glorified static pages to mostly functional customer interfaces. On the way, they've grown from business satellite features to Jupiter sized objects that contain their own planetary system. Everything threatens to be pulled into their orbit and, worse, to disappear into their atmosphere. If you're not careful, your innovative idea will become a comet burning up in the atmosphere of the giant portal.

Although everyone knows how destructive their gravitational force can be, the giant portals are the path of least resistance, the ticket to expediency. Forget all architectural burdens, we'll have to get to those later... and when will that be?

This has already created barriers to RIA and mobile adoption. Tacking mobile views onto large portals and shoe-horning in terse payload formats, like JSON, is a quick fix. Long term, however, this will prove unsustainable. In some regard, this is extending the tangle of poor architectural decisions into the scattering of internet end-user devices.

It's easy to understand that, aside from the intoxicating effects of expediency, these decisions are being made because of the following:

  • The giant portal is where the data is.
  • The giant portal is where the developers are.
  • MVC design already prepared us for multiple rendering strategies... right?
  • If we tack on to the giant portal's domain, we can tap their authentication.

What's less obvious, and not fully acknowledged, is that the giant portals are the primary external facing security interface. Beyond authentication and identity, it's the home of complex authorization decision logic and where identity attributes are pushed/copied to make it all happen.

We already have many indications of the trouble to come with the incompatibility of portal authentication strategy with services generally and mobile particularly. The identity, authentication, and authorization aspects of how big companies do IT today are on a collision course with where they want to go: cloud or, more to the point, IT as a commodity. If cloud turns everything IT into a service, how can we live with a gaping hole for services security?

The architectural principle du jour is: Never make assumptions about how people will use your API. Beyond loose coupling, this paves the way for a future where anyone can dream up a new user experience or a new business function from an aggregation of existing ones. Ideally, anyone can bring a new business function to the portfolio with little expense and time. If the API Economy is where we're going, can we tack it onto the huge portals with their nineties heritage?

I say no. But we also can't avoid the influence of the big portals. Additionally, we can't ignore the gems that lie inside: complex business logic and authorization decision logic.

At the very least, and in the spirit of expediency, we must begin aggressive efforts to abstract away the surface of the portals. Behind this abstraction and under the surface will be a mining operation, intentionally separate from innovation efforts.
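
One way to picture that abstraction: a thin facade that exposes the portal's buried gems (entitlements, authorization decisions) as discrete resources, so new experiences never touch the portal's surface directly. A minimal sketch with hypothetical endpoints and payload shapes:

```python
# Illustrative sketch of a facade over the legacy portal; the host, path,
# and response fields are assumptions for the example.
import requests

PORTAL = "https://portal.internal.example"

def get_entitlements(principal: str) -> list[str]:
    """Expose the portal's buried authorization logic as a discrete resource."""
    resp = requests.get(
        f"{PORTAL}/legacy/entitlements", params={"user": principal}, timeout=3
    )
    resp.raise_for_status()
    return resp.json()["entitlements"]
```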

Anyone who says that the work we did in portals is not important will be as wrong as those were who declared the mainframe dead back in the nineties.

It is important that we contain the stifling influence of past application architecture on what comes next, and build a new architecture that truly enables innovation.

Written with StackEdit.

Thursday, May 30, 2013

ChromeBook: a Commodity Endpoint for Commoditized IT

I've recently acquired a cheap, no frills Samsung ChromeBook (first gen).
Key observations:
  • The CPU is weak. Fine.
  • Boot time is awesome.
  • Battery life is awesome.
  • The charger is a toy and I wish it was a micro USB instead.
  • Very light and portable (I wanted this to do what people do with a MacBook Air: carry it in my bag to do non-job-related work, which includes email, coding, and blogging.)
  • I'm not left wanting much more for what I was after.
For those unfamiliar with this new OS, I recommend researching how they built it from the ground up. Everything from the way it boots to the way it handles disk is very different.

The attention to security is impressive. It updates itself. It verifies boots (if someone compromises my session and corrupts the OS, a new one will be fetched, essentially self-healing). Writes to disk are encrypted per user, keyed from my authentication. Writing to disk is not taken for granted, making that attack surface much smaller. It has ASLR and DEP.

For more:
http://www.chromium.org/chromium-os/chromiumos-design-docs/security-overview
http://www.chromium.org/Home/chromium-security/core-principles
http://www.chromium.org/Home/chromium-security/brag-sheet

So I have to put up with less power and heft? Well, there is the new Pixel which is a serious machine. But even if you want to go cheap, like I did, think about what you're not wasting your resources on: bloatware, stuff you never use built into the OS, antivirus, updates, patches, etc.

The ChromeBook appeals to me for its utilitarian foundations. It's like the original Mac, in some regard. It aims to serve you and what you want to do with a computer. It's elegant, but doesn't steal the show. Nothing is wasted on the OS insisting on being noticed (as an example, the Mac OS X fisheye dock irked me so much that I never bought into the OS and moved away from Macs altogether). It's predictable; you know where everything is (true with old Macs, not true with new... also true with FreeBSD, which I love).

The best part is because it has a solid foundation of simplicity and utility, you don't need all the extra crap that modern OSes tend to have. I've started thinking of Windows and OS X as washed up celebrities who need constant care and attention; agents, handlers, therapy, medicine, press. For them, it's not about the art, it's about bloated ego and self-delusion. The hollowed out principles have turned the audience motivation to morbid curiosity more than respect and admiration. They were once beautiful and elegant. Now they are just a burden to everyone and their missteps are mostly amusing.

I find myself asking why ChromeBook isn't the future of workstations. (Yes, I know that this is like a NetBook or even a dumb terminal. Everything that's old is new again eventually.) In most companies you could put a good number of employees on one of these and get all of the work done.

In large, security-sensitive companies, you could combine this with Citrix Receiver or other similar virtualization solutions and handle even protected data and highly sensitive admin processes. In fact, it might be a very sound approach to have privileged users carry a ChromeBook around to log into their jump hosts. It's harder to attack ring 0 on one of these, after all (the attack that brings all BYOD into question... TPM to the rescue?). Best of all, weaning people off of their wayward beast OSes will dramatically reduce total cost of ownership.

A ChromeBook strategy could be thought of as a gray area between BYOD and traditional "controlled" workstations. It moves the enterprise in the direction of BYOD, while practicing caution for highly sensitive business functions. It's safer than allowing your firewall admins to use BYOD to get to their consoles. It's cheaper than insisting on the traditional workstation.

Until Microsoft realizes that the OS is a commodity and that a bare bones OS is all that is needed to get to Office 365 and until Citrix embraces a commodity device with Chromium-like principles (CitrixBook); ChromeBooks (and workstation versions) are very appealing.

-written on a ChromeBook
