Wednesday, April 11, 2007

Complexity and the Data Center

I just finished rereading a science book that has been tremendously influential on how I now think about software development, data center management, and how people interact in general. Complexity: The Emerging Science at the Edge of Order and Chaos, by M. Mitchell Waldrop, was originally published in 1992, but remains today the quintessential popular tome on the science of complex systems. (Hint: READ THIS BOOK!)

John Holland (as told in Waldrop's history) defined complex systems as having the following traits:
  • Each complex system is a network of many "agents" acting in parallel
  • Each complex system has many levels of organization, with agents at any one level serving as the building blocks for agents at a higher level
  • Complex systems are constantly revising and rearranging their building blocks as they gain experience
  • All complex adaptive systems anticipate the future (though this anticipation is usually mechanical and not conscious)
  • Complex adaptive systems have many niches, each of which can be exploited by an agent adapted to fill that niche

Now, I don't know about you, but this sounds like enterprise computing to me. It could be servers, network components, software service networks, supply chain systems, the entire data center, the entire IT operation and organization, etc. What we are all building here is self-organizing...we may think we have control, but we are all acting as agents in response to the actions and conditions imposed by all those other agents out there.
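To make the "agents acting in parallel" idea a bit more concrete, here is a minimal toy sketch of my own (not anything from Waldrop or Holland, and the server "agents" are purely hypothetical): each agent adapts only to the load it sees on its neighbors, yet the population as a whole settles into a balanced state with no central controller in charge.

    # Toy sketch: data-center-flavored "agents" that adapt to each other.
    # No scheduler or controller exists; order emerges from local adaptation.
    import random

    class ServerAgent:
        def __init__(self, name):
            self.name = name
            self.load = random.uniform(0.2, 0.9)  # current utilization, 0..1

        def step(self, neighbors):
            # Adapt locally: drift toward the neighborhood's average load,
            # while the environment keeps nudging things unpredictably.
            avg = sum(n.load for n in neighbors) / len(neighbors)
            self.load += 0.5 * (avg - self.load)
            self.load += random.uniform(-0.05, 0.05)
            self.load = min(max(self.load, 0.0), 1.0)

    agents = [ServerAgent(f"srv{i}") for i in range(10)]
    for tick in range(20):
        for a in agents:
            a.step([n for n in agents if n is not a])

    # After a few ticks the load is spread fairly evenly -- nobody was "in charge".
    print({a.name: round(a.load, 2) for a in agents})

It is deliberately simplistic, but it captures the flavor of Holland's traits: many agents in parallel, building-block behavior at one level producing structure at the next, and constant revision as conditions change.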

A good point about viewing IT as a complex system can be found in Johna Till Johnson's Network World article, "Complexity, crisis and corporate nets". Johna's article articulates a basic concept that I am still struggling to verbalize regarding the current and future evolution of data centers. We are all working hard to adapt to our environments by building architectures, organizations and processes that are resistant to failure. Unfortunately, the entire "ecosystem" is bound to fail from time to time, and there is no way to predict how or when. The best you can do is prepare for the worst.

One of the key reasons that I find Service Level Automation so interesting is that it provides a key "gene" for the increasingly complex IT landscape: the ability to "evolve" and "heal" at the physical infrastructure level. Combine this with good, resilient software architectures (e.g. SOA and BPM) and solid feedback loops (e.g. BAM, SNMP, JMX, etc.) and your job as the human "DNA" gets easier. And, as the dynamic and automated nature of these systems gets more sophisticated, our IT environments become more and more self-organizing, learning new ways to optimize themselves (often with human help) even as the environment they are adapting to constantly changes.
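To ground what that "gene" amounts to, here is a minimal sketch of the feedback loop at its core. The function names, thresholds, and scaling actions are hypothetical (they stand in for whatever your monitoring and provisioning layer actually exposes, whether the signal comes from BAM, SNMP, JMX, or something else); this is an illustration of the loop, not any particular vendor's API.

    # Sketch of a service-level feedback loop: measure, compare to target, act.
    import time

    SLA_RESPONSE_MS = 500          # assumed service-level target
    CHECK_INTERVAL_SECONDS = 60

    def measure_response_time_ms():
        """Stand-in for a real feedback source (BAM dashboard, SNMP poll, JMX query)."""
        raise NotImplementedError("wire this to your monitoring system")

    def provision_server():
        """Stand-in for whatever actually allocates capacity in your environment."""
        print("scaling up: adding a server to the pool")

    def decommission_server():
        print("scaling down: returning a server to the free pool")

    def control_loop():
        while True:
            observed = measure_response_time_ms()
            if observed > SLA_RESPONSE_MS:
                provision_server()            # service level breached: grow
            elif observed < 0.5 * SLA_RESPONSE_MS:
                decommission_server()         # comfortably under target: shrink
            time.sleep(CHECK_INTERVAL_SECONDS)

The interesting part isn't the loop itself; it's that once many of these loops run against a shared pool of physical resources, the infrastructure starts to behave like one more adaptive agent in the larger system.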

In the end, I like to think that no matter how many boneheaded decisions corporate IT makes, no matter how many lousy standards or products are introduced to the "ecosystem", the entire system will adjust and continually attempt to correct for our weaknesses. Despite the rise and fall of individual agents (companies, technologies, people, etc.), the system will continually work to serve us better...at least until that unpredictable catastrophic failure tears it all down and we start fresh.
