So just what is converged infrastructure all about anyway? Ever since the term entered the tech lexicon a few years ago, that question has kept coming up. The answer is simple, but its implications are far-reaching.
For years, IT shops built out best-of-breed infrastructures based on the latest and greatest gear that came down the pike, which was about every year or so. At first, this seemed like the best and, frankly, only approach that made sense, unless you wanted to get trapped by the dreaded "vendor lock-in," where a single vendor supplied all of your hardware, platform, and networking needs.
Even when lock-in did happen, it was more or less fine for a while. But over time, as demand for new services and applications began to outpace IT's ability to deliver them within any meaningful timeframe, it became apparent that the old model of buying and provisioning new hardware and software every time some group within the organization needed more capability (the model still employed in most data centers today) was badly in need of an overhaul.
At first, big businesses thought the answer was to outsource the entire problem to a third-party vendor who would take over their operations soup to nuts, but that turned out to be only a partial solution. These providers, while more efficient thanks to economies of scale, still faced the same basic problem as their clients: it was very difficult to get hardware and applications to scale.
This is when some smart folks got together and decided that a solution from the mainframe days, virtualization, might just be the answer. They were right, but again only to a point. Virtualization by itself does not scale either, and it can create just as many problems as it solves: you still need to build out and provision servers, networks, and data center management tools on a more or less a la carte basis every time a request comes in for more compute cycles or a new application.
What converged infrastructure (CI) does is take virtualization and the cloud, where applications and compute power are consumed on demand, and add that same scale to the hardware and provisioning side of the equation. Where cloud and virtualization are about getting better utilization rates out of existing resources and leveraging the one-to-many model by pushing apps out to end users as shared services, CI is about supplying IT with a plug-and-play infrastructure that arrives factory optimized to run in your environment right out of the box.
Not only does this improve performance from a resource standpoint (people, power, and pipe), it drives cost out of the data center by letting system admins spin services and applications up and down as needed in a matter of minutes or hours, compared to the days, weeks, or months required under the old model.
This gives the business, which IT is supposed to be enabling with all this technology anyway, the flexibility to respond to new opportunities faster, and it lets IT do yet more with less, which has been its standing order since about the middle of the last decade.
Done correctly, CI can lower costs inside the data center through better resource utilization and allocation, increase the business's top-line opportunities by letting people do their jobs better in the anywhere/anytime fashion they now demand, and improve bottom-line performance by cutting costs in what many in the business world view as the largest cost center of all: the IT department.