With the seemingly endless list of daily IT tasks and projects, it’s difficult to step back and ask a hypothetical question: How would we build out data center infrastructures if we could completely start over? My vision includes the complete abstraction of the underlying complexity of IT components. Rather than thinking of servers, hard disks, networks, and switches, we’d think of the data center itself as a seamless pool of capacity. It would be self-managing, highly virtualized, and completely automated. We’d think of computing capacity much like we do electricity: we typically wouldn’t care about the details, as long as there’s sufficient capacity and reliability to meet our needs. But is this just the IT version of a pipe dream?
Let’s look “back” at today. Virtualization has clearly delivered on the promises of increasing server hardware utilization, reducing data center costs, and supporting business processes. Abstraction layers are typically used for server hardware (processing and memory), storage, and networking. Hardware and software vendors have recognized the long-term potential of application-focused automation. This approach allows systems administrators, developers, and even end-users to define their requirements (perhaps with a little help from capacity planning tools), and then simply request the resources they need. Data center administrators would be removed from tasks such as reconfiguring disks, allocating storage, managing network switches, and worrying about individual server hosts. Rather, they’d focus on the data center based on its overall capacity requirements.
Organizations can deploy disaster recovery and high-availability features as required by using a consistent approach across workloads. Furthermore, they can adapt to business requirements more quickly than ever before by rolling out new virtual servers in a matter of minutes.
So, what’s left to do? Data center automation approaches are definitely big steps in the right direction, but there’s also plenty of room for improvement. The goal should be to manage workloads at as high a level as possible. For example, systems administrators should typically be working with developers and end users to determine capacity requirements for their applications and services. Ideally, they’d spend far less time worrying about disk arrays, networking configurations, and CPU/memory allocations.
In order to get to this goal, organizations will need to invest in virtualization management solutions that are:
- Centralized: The vast majority of data center administrators rely on many different tools to fully manage their infrastructure. They might use a remote desktop client to log in to configure servers, remote command line tools to automate some operations, and third-party utilities for managing storage and networking devices. It’s a lot to learn, remember, and coordinate. While larger organizations tend to have specialists in each area, their time would still be better spent working at a higher level than the “nuts and bolts” of the infrastructure.
- Integrated: It’s one thing to manage VMs across the host server pool in isolation. It’s another entirely to be able to coordinate all of the necessary changes to storage arrays, virtual and physical network switches, and VM configuration options. All of these aspects are required to ensure optimal VM management that keeps security, compliance, and management needs in mind.
- Self-Managing: Ideally, IT professionals would focus on what needs to be done, rather than how it should happen. If, for example, a new deployment requires three web application servers (in a load-balanced configuration), a middle-tier data access server, a firewall, and a highly available database cluster, data center administrators should be able to order off a menu and have the necessary resources provisioned and configured automatically.
- Energy Efficient: While electricity itself can present a significant cost, overhead related to cooling, power continuity, physical space, and labor can all add to the total bill. Cutting-edge data centers focus on overall efficiency, which should lead to an increased return on investment. This can be measured using various methods, including power usage effectiveness (PUE). Automated solutions can also power down servers when they’re not needed, and migrate workloads to dynamically balance load.
- Cloud-Enabled: Cutting-edge data centers should be able to scale quickly and efficiently. Some organizations will choose to purchase and manage their own infrastructure assets to support their users and customers, and others will leverage partners to provide permanent or temporary capacity. This is the key to a cloud-based infrastructure, whether it’s public, private, or a hybrid: leveraging hardware resources on a commodity basis, rather than through a specialized approach. The key is for workloads to be portable and secure, regardless of where they’re running.
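The “order off a menu” idea in the self-managing point above boils down to a declarative deployment specification: administrators state what they need, and the platform works out the how. Here is a minimal sketch in Python of the example deployment described earlier; all of the names (`Tier`, `DeploymentSpec`, `provision`) are hypothetical stand-ins for whatever a real automation platform would provide:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One tier of the requested deployment (illustrative, not a real API)."""
    role: str
    count: int = 1
    options: dict = field(default_factory=dict)

@dataclass
class DeploymentSpec:
    """Declarative request: describes what is needed, not how to build it."""
    name: str
    tiers: list

def provision(spec: DeploymentSpec) -> list:
    """Toy provisioner: expands the spec into named VM requests.
    A real platform would also configure storage, networking, load
    balancing, and high availability behind the scenes."""
    vms = []
    for tier in spec.tiers:
        for i in range(tier.count):
            vms.append(f"{spec.name}-{tier.role}-{i + 1}")
    return vms

# The example from the text: three load-balanced web servers, a
# middle-tier data access server, a firewall, and a two-node DB cluster.
spec = DeploymentSpec(
    name="orders",
    tiers=[
        Tier("web", count=3, options={"load_balanced": True}),
        Tier("data-access"),
        Tier("firewall"),
        Tier("db", count=2, options={"highly_available": True}),
    ],
)
print(provision(spec))
```

The point of the sketch is the separation of concerns: the spec captures the requirements conversation between administrators and developers, while everything below `provision` belongs to the automation layer.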
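On the energy-efficiency point, PUE is simple to state: total facility energy divided by the energy consumed by the IT equipment itself, so a value of 1.0 would mean every watt goes to computing. A quick sketch (the kWh figures in the example are made up for illustration):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    IT equipment energy. Lower is better; 1.0 is the theoretical floor."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,800 kWh while its IT gear consumes 1,000 kWh has a
# PUE of 1.8: 0.8 kWh of overhead (cooling, power distribution, lighting)
# for every kWh of useful compute.
print(pue(1800, 1000))  # 1.8
```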
Now, back to the future. Assuming that an organization has these areas covered, what comes next? IT organizations’ goals should focus on improving upon their existing automation and virtualization initiatives to reduce capital expenditures and increase operational efficiency. The process usually involves identifying areas of high cost or manual administration, automating them, and repeating (as desired).
Perhaps someday our data center infrastructure will be telling us what to do to keep it happy. Until then, I think most of us can rest easy knowing that we’ve got plenty of work to do to improve data center automation. It’s nice to feel needed.