The enigmatic world of quantum mechanics notwithstanding, it’s difficult to create something from nothing. That’s especially true if you’re an IT professional who has been tasked with meeting storage needs. Yet that’s what many data center administrators are asked to do just about every day. Requests pour in for more storage space, higher performance, and improved data protection. Oh, and the budget: right around zero dollars and no cents. So how can administrators meet increasing needs in this seemingly no-win battle? The key lies in increased efficiency: finding ways to make a given set of storage resources stretch further while limiting new costs.
Fortunately, there are many storage-related approaches that are aimed at increasing storage efficiency. Let’s look at some technical features and management approaches that can help administrators manage the data center data deluge:
- Thin provisioning: When it comes to requesting storage, users’ appetites are almost unlimited. Rather than committing large amounts of physical storage based on their requests, IT departments can provision virtual storage that allocates actual capacity only when it’s needed. Capacity can then be managed as a whole on central, shared storage arrays, thereby reducing the “slack” caused by unused disk space. You can also phase in thinly provisioned storage without users ever noticing (always a plus from an approvals standpoint).
- Data Deduplication: Block- and file-level deduplication features can help increase the efficiency of raw storage by eliminating redundant copies of data. Depending on the type of data, organizations can often see huge savings in physical storage requirements. In many cases, any performance overhead is offset by reduced I/O operations.
- Data Compression: Many types of data can be highly compressed to save overall storage space. Implementing efficient data compression algorithms within a storage array or file server can help you “magically” add more capacity.
- Availability, Scalability, and Performance: As more critical infrastructure components come to rely on shared storage, downtime and lapses in availability can have a tremendous impact. One way to reduce single points of failure is to create multiple, independent network paths to storage (often referred to as “multi-pathing”). Features such as NIC teaming (for load-balancing and fail-over) and multiple switched routes can help reduce causes of downtime and can increase performance.
- Replication: Creating and synchronizing copies between sites and servers can be helpful both for performance (keeping data closer to users throughout the world) and for disaster recovery (maintaining a “warm” standby site for fail-over).
- Hierarchical Storage Management: Do you remember that Tier 1 storage that was originally designed to store important company data files? Well, most of those files are at least five years old, and they can safely be moved to lower-cost, lower-performance hardware. Automated storage management software can move seldom-requested data to other tiers of storage, while still making sure that it’s available on demand, if needed.
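To make the thin-provisioning idea concrete: on most Linux filesystems, a sparse file behaves exactly this way, with the apparent size committed up front but physical blocks allocated only as data is written. Here is a minimal Python sketch (the file name and sizes are illustrative, and the `st_blocks` accounting assumes a POSIX filesystem that supports sparse files):

```python
import os
import tempfile

def create_thin_volume(path, virtual_size):
    """Create a sparse file: the apparent size is virtual_size, but no
    physical blocks are allocated until data is actually written."""
    with open(path, "wb") as f:
        f.truncate(virtual_size)  # extends the file without writing data

def allocated_bytes(path):
    """Physical bytes actually consumed on disk (POSIX st_blocks is
    counted in 512-byte units)."""
    return os.stat(path).st_blocks * 512

# Provision a 1 GiB "volume" that initially consumes almost no disk space.
vol = os.path.join(tempfile.mkdtemp(), "thin.img")
create_thin_volume(vol, 1 << 30)
print(os.path.getsize(vol))   # apparent size: 1073741824
print(allocated_bytes(vol))   # physical usage: near zero until written
```

A real storage array applies the same principle at the LUN level rather than the file level, which is why the "slack" can be reclaimed and pooled across many consumers.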
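The block-level deduplication described above boils down to hashing fixed-size chunks and storing each unique chunk once. A minimal sketch of that idea (the 4 KiB block size and function names are illustrative, not any particular array's implementation):

```python
import hashlib

def dedup_store(data, block_size=4096):
    """Split data into fixed-size blocks, keep one copy of each unique
    block, and return the pieces needed to reconstruct the original."""
    blocks = {}   # SHA-256 digest -> block contents (stored once)
    recipe = []   # ordered digests describing the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        blocks.setdefault(digest, block)  # store only if unseen
        recipe.append(digest)
    return blocks, recipe

def dedup_restore(blocks, recipe):
    """Rebuild the original byte stream from the unique-block store."""
    return b"".join(blocks[d] for d in recipe)

# Ten identical 4 KiB blocks deduplicate down to one stored copy.
data = b"A" * 4096 * 10
blocks, recipe = dedup_store(data)
print(len(recipe), len(blocks))  # 10 references, 1 unique block
```

Production implementations add reference counting, variable-size chunking, and collision handling, but the space savings come from exactly this one-copy-per-unique-block structure.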
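The compression bullet is easy to demonstrate with Python's standard `zlib` module; repetitive data such as logs compresses dramatically (the sample log line below is made up for illustration):

```python
import zlib

def compression_ratio(data):
    """Return original size divided by compressed size."""
    compressed = zlib.compress(data, level=9)
    return len(data) / len(compressed)

# Highly repetitive data (like log files) compresses extremely well.
log_line = b"2024-01-01 INFO request handled in 12ms\n"
sample = log_line * 1000
print(round(compression_ratio(sample), 1))
```

Already-compressed formats (JPEG, video, encrypted data) will show ratios near 1.0, which is why arrays typically sample data before deciding whether to compress it.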
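The Hierarchical Storage Management policy ("demote anything not touched in N days to a cheaper tier") can be sketched in a few lines. This is a toy age-based migration, assuming two directories stand in for the storage tiers; real HSM software also leaves a stub behind so demoted data stays available on demand:

```python
import os
import shutil
import tempfile
import time

def demote_cold_files(tier1, tier2, max_age_days):
    """Move files not modified within max_age_days from tier1 to tier2."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for entry in os.scandir(tier1):
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            shutil.move(entry.path, os.path.join(tier2, entry.name))
            moved.append(entry.name)
    return moved

# Demo: one fresh file, one file backdated to look over a year old.
tier1, tier2 = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(tier1, "fresh.dat"), "w").close()
stale = os.path.join(tier1, "stale.dat")
open(stale, "w").close()
os.utime(stale, (time.time() - 400 * 86400,) * 2)  # backdate mtime/atime
moved = demote_cold_files(tier1, tier2, max_age_days=180)
print(moved)  # ['stale.dat']
```

A production policy engine would run this on a schedule, consult access time rather than just modification time, and migrate in the other direction when cold data heats up again.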
Keep in mind that not all of these features require relatively expensive, high-end storage arrays. For example, Microsoft’s Windows Server 2012 platform includes implementations of most of the above features, which ship “in the box” with the operating system itself.
On that note, it’s sometimes possible to repurpose existing hardware. If you want to migrate from direct-attached storage, it’s possible that the “old” disks will meet the performance requirements for building new storage arrays. Combine those cost savings with more efficient use of storage space, and the solution could pay for itself (at least partially). It’s not exactly creating something from nothing, but these approaches can really help meet business needs without breaking the budget (which may or may not exist).