The availability of new technology and approaches often marks the beginning of a new set of challenges for IT professionals. The real trick is getting the benefits of improvements while staying within the harsh constraints of budgets, human resources, time, expertise, and business requirements. Studies have shown that storage-related costs are often the largest component of a virtualized data center environment. Capacity needs are always growing, and data always seems to find a way to fill (or exceed) the available storage. Performance bottlenecks are common as applications request ever-increasing IOPS and throughput. And the potential cost of data loss grows as more eggs are placed in each basket. Mix in the issue of limited budgets, and it can seem like an insurmountable challenge.
Assuming that you can’t simply ignore your existing investments and buy into a completely new storage architecture, let’s look at some ways in which you can improve your environment.
First and foremost, I think it's important for data center administrators to collect and question their storage requirements. All too often, every workload is deployed with the same performance, availability, and capacity assumptions. In most environments, storage is sliced up, combined, and served to developers, end users, and customers based on their stated requirements. Ideally, everyone would prefer Tier 1, enterprise-grade, lightning-fast storage. But what are the real requirements? They'll obviously differ, and organizations should purchase and deploy resources accordingly. Figure 1 shows an example of storage tiers and their service levels (in general terms).
Figure 1: Defining Storage Tiers and their Service Levels (Example)
Organizations can take the same approach for other areas, including network performance, backup and disaster recovery, and priority for CPU and memory resources. The key is to provide only what’s really required, and the goal is to reduce costs while still meeting users’ requirements.
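The tiering idea above can be sketched in a few lines of code: given a workload's stated requirements, pick the cheapest tier that satisfies them rather than defaulting everything to Tier 1. The tier names, IOPS figures, availability targets, and costs below are illustrative placeholders, not from any vendor catalog.

```python
# Hypothetical sketch: match a workload's stated requirements to the
# least expensive storage tier that still satisfies them. All numbers
# here are invented for illustration.

TIERS = [
    {"name": "Tier 3", "iops": 500,   "availability": 0.99,   "cost_per_gb": 0.05},
    {"name": "Tier 2", "iops": 5000,  "availability": 0.999,  "cost_per_gb": 0.15},
    {"name": "Tier 1", "iops": 50000, "availability": 0.9999, "cost_per_gb": 0.40},
]

def pick_tier(required_iops, required_availability):
    """Return the lowest-cost tier meeting both requirements, or None."""
    for tier in sorted(TIERS, key=lambda t: t["cost_per_gb"]):
        if tier["iops"] >= required_iops and tier["availability"] >= required_availability:
            return tier["name"]
    return None

# A development file share rarely needs Tier 1 performance:
print(pick_tier(200, 0.99))      # dev workload
print(pick_tier(20000, 0.9999))  # production database
```

The payoff is that workloads with modest requirements land on inexpensive tiers automatically, and only the workloads that truly need it consume the premium storage.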
Data center administrators can also make the never-ending requests for more and better storage a shared struggle with the rest of the business. As a first step, IT staff can present basic reports that show how much storage each department or workgroup is using (and its associated costs). Taking this a step further would mean implementing charge-backs, where IT "bills" its internal customers for the resources they use. IT could present varying SLA levels (as shown above in Figure 1), each with its own costs and performance characteristics.
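A basic charge-back report is just consumption multiplied by a per-tier rate. The sketch below shows the idea; the department names, usage figures, and $/GB-month rates are invented for the example.

```python
# Hypothetical charge-back sketch: bill each department for the storage
# it consumes on each tier. Rates and usage are illustrative only.

RATES = {"Tier 1": 0.40, "Tier 2": 0.15, "Tier 3": 0.05}  # $/GB-month

usage = {
    "Engineering": {"Tier 1": 500, "Tier 2": 2000},  # GB consumed per tier
    "Marketing":   {"Tier 3": 3000},
}

def monthly_bill(dept_usage):
    """Sum the cost across tiers for one department (GB * $/GB-month)."""
    return sum(gb * RATES[tier] for tier, gb in dept_usage.items())

for dept, used in usage.items():
    print(f"{dept}: ${monthly_bill(used):,.2f}")
```

Even if no money actually changes hands, putting a dollar figure next to each department's consumption tends to make the "everything on Tier 1" requests far less common.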
Finally, it’s important to keep in mind that organizations don’t have to do it all themselves. Cloud-based partners can provide additional capacity on a pay-as-you-go basis. The numerous benefits include low (or no) capital expenses, and minimal hardware management overhead. Cloud-based storage can also be used for disaster recovery.
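The capital-expense argument for pay-as-you-go capacity can be made concrete with back-of-the-envelope arithmetic: amortize an upfront array purchase over its useful life and compare it with paying only for the gigabytes actually stored. Every price in this sketch is a hypothetical placeholder, not a quote from any provider.

```python
# Illustrative cost comparison: on-premises capacity (capex amortized
# over useful life, plus ongoing opex) vs. pay-as-you-go cloud storage.
# All dollar figures are invented for the example.

def on_prem_monthly(capex, useful_life_months, monthly_opex):
    """Amortize an upfront array purchase and add recurring costs."""
    return capex / useful_life_months + monthly_opex

def cloud_monthly(gb_stored, rate_per_gb_month):
    """Pay only for what is actually stored this month."""
    return gb_stored * rate_per_gb_month

# 10 TB of overflow capacity:
print(on_prem_monthly(capex=30000, useful_life_months=36, monthly_opex=200))
print(cloud_monthly(gb_stored=10000, rate_per_gb_month=0.03))
```

The comparison shifts, of course, as utilization and growth rates change; the point is simply that overflow and disaster-recovery capacity don't always justify another hardware purchase.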
The storage problem isn't going to go away, but there are many approaches, tools, and technologies available to help manage it. In this post, I presented some readily available methods for phasing in new storage-related features without throwing out existing investments.