Storage Management: Shelf life

In a world where all budgets are being slashed, and IT budgets are being slashed harder than most, IT directors are finding themselves with a near-impossible problem. Forget e-business, or the constraints on new projects; the big crunch issue, more often than not, is storage, which, according to Gartner and the Meta Group, now accounts for as much as 50% of many IT budgets.

As Chris Atkins, storage marketing manager at Sun Microsystems, notes, the IT director can’t turn off the growth in the company’s demand for storage, and he or she can’t call a halt to the need for more and more time-sensitive information to be accessed by ever more members of staff.

“Service level agreements with user departments just keep going up, and on top of everything else, since the events of 11 September, backup and retrieval of data has become even more critical as part of the business continuity plan – yet budgets are down, so the IT director is getting squeezed from every side,” he comments.

This pressure is finding its way back to the storage industry, which is having to radically rethink its act. In an ideal world, from a storage vendor’s point of view, you would sell the customer a unique, proprietary solution which they would continually need more of as the years went by.

Obviously, in this ideal world, the last thing vendors want is a determined push for open standards and interoperability between different vendors’ equipment.

“In many ways, the storage industry today is where the client-server world was 15 years ago, when it was dominated by proprietary vendors. Now open systems are the rule. In the ’80s, storage was only 15% of the budget; now it is 50%, and it is in many ways the least efficient element of the budget. User organisations are now very aware of the inherent inefficiencies in current storage solutions, and the industry is under pressure to deliver far more cost-effective solutions,” Atkins says.

He argues that a standards-based industry allows innovation to move forward rapidly by enabling many companies to contribute new ideas. The storage industry has two standard-setting bodies, the Storage Networking Industry Association and the Fibre Channel Industry Association, and the two are due to merge into one. Until recently, though, although all the major players paid lip service to these associations, foot-dragging and “clever boxing” kept the standards effort moving forward at a snail’s pace. There are signs now that vendors are positioning themselves to be front runners in a new-look, open-standards storage sector.

The key concepts in this change are virtualisation and storage management.

While the sector has continually driven down the cost per megabyte of storage over the last decade, effective management of an enterprise’s total storage resources remains notoriously complex.

The habits of a few decades of buying direct-attached storage (DAS), where servers are bought already configured with their own disk drives, have left enterprises with information storage scattered like confetti across their organisations.

Atkins points out that in this kind of environment it is typically all but impossible for an enterprise IT director to pull out utilisation figures, for example. Getting a sense of which servers are near capacity as far as storage is concerned, and which have scarcely tapped theirs, is extremely difficult. Even worse, there is no simple way of allowing an application running on server A to grab additional DAS storage from server B. This kind of storage infrastructure fails just about every conceivable efficiency test, and there is obvious room for improvement, beginning with a move away from DAS towards a more centralised, network-based strategy. This tends to mean migrating towards a Storage Area Network (SAN) based infrastructure.
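To make the utilisation problem concrete, here is a minimal sketch of the kind of report that scattered DAS makes hard to produce. The server names, figures and 90% threshold are invented for illustration; a real site would pull the numbers from an agent or monitoring poll on each server rather than a static table.

```python
# Hypothetical per-server figures: (used_gb, capacity_gb).
SERVERS = {
    "server-a": (470, 500),  # nearly full
    "server-b": (60, 500),   # scarcely tapped
    "server-c": (250, 500),
}

NEAR_CAPACITY = 0.90  # flag anything at 90% used or above

for name, (used, capacity) in sorted(SERVERS.items()):
    utilisation = used / capacity
    flag = "  <- NEAR CAPACITY" if utilisation >= NEAR_CAPACITY else ""
    print(f"{name}: {used}/{capacity} GB ({utilisation:.0%}){flag}")
```

The point of the sketch is what is missing: even with the figures in hand, nothing here lets server-a borrow server-b’s spare blocks, which is exactly the gap centralised, network-based storage is meant to close.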

Where organisations have made a determined effort to buy centralised storage, typically by buying top-of-the-range dedicated storage platforms from a vendor like EMC, they have found themselves trapped on a proprietary path. Worse, this approach runs into the usual head-on collision when the organisation acquires a company with a similar policy based on a different vendor’s kit. At that point, the “one vendor only” policy either ends up getting derailed or the user organisation is faced with a painful and expensive choice as to whose kit to throw away and whose to keep.

This kind of wasteful inefficiency may have been bearable in profitable times, but now the storage industry has had to “show willing” and come up with a better approach. SANs and the complementary technology, Network Attached Storage (NAS), are one answer to the DAS problem, in that they separate the server from the storage and make the latter an independent resource that can be shared between multiple servers. This eases the load-balancing and utilisation-monitoring issues, but the more complete answer lies with “virtualisation”, which aims to weld all three storage types, DAS, SANs and NAS, into a logical whole that can scale with the organisation’s needs.

At its root, virtualisation attempts to weld the company’s disparate storage resources into a logical whole, using storage management software to present a single overall view of them. It also provides the tools to implement backup and archiving automatically across that whole, perhaps using a specialist framework management system such as CA’s Unicenter, IBM’s Tivoli or HP’s OpenView to handle each of the discrete elements within it.
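The mechanics are easiest to see in miniature. The sketch below, with invented class and pool names, shows the essential trick: a virtual volume keeps a map from logical block numbers to whichever physical pool happens to have space, so the application sees one seamless address space regardless of where the blocks physically live.

```python
class PhysicalPool:
    """One physical storage resource: a DAS disk, a SAN array, etc."""
    def __init__(self, name, capacity_blocks):
        self.name = name
        self.free = list(range(capacity_blocks))  # free block numbers

class VirtualVolume:
    """Presents one logical address space over several physical pools."""
    def __init__(self, pools):
        self.pools = pools
        self.mapping = {}   # logical block -> (pool name, physical block)
        self.next_logical = 0

    def allocate(self, n_blocks):
        """Grow the volume, drawing blocks from any pool with space."""
        for _ in range(n_blocks):
            pool = next((p for p in self.pools if p.free), None)
            if pool is None:
                raise RuntimeError("all pools exhausted")
            self.mapping[self.next_logical] = (pool.name, pool.free.pop(0))
            self.next_logical += 1

pools = [PhysicalPool("das-server-b", 4), PhysicalPool("san-array-1", 8)]
volume = VirtualVolume(pools)
volume.allocate(6)  # spans both pools; the application never notices
for logical, (pool, physical) in sorted(volume.mapping.items()):
    print(f"logical block {logical} -> {pool}, physical block {physical}")
```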

“The industry still has some way to go, both on the interoperability front, and in bringing intelligence into the storage infrastructure,” Atkins says. Ideally, an application should be able to tell the organisation’s storage infrastructure that it needs more storage and then be assigned that additional resource automatically. We are not there yet.

Geoff Cole, sales support manager for IBM storage networking solutions in EMEA, argues that virtualisation has been around, in one sense at least, for as long as RAID arrays. “Where you have a RAID array striping data across six disks simultaneously, you have to present the server with a logical, or ‘virtual’, image that looks as if all the data is on one physical disk. Virtual storage management advances this concept hugely by presenting logical views of all the company’s storage,” he says.
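Cole’s analogy rests on simple address arithmetic. In the sketch below (a plain striping scheme with no parity; the six-disk figure comes from his example, the rest is illustrative), a logical block number is translated into a disk and an offset, which is exactly the mapping the array hides from the server.

```python
N_DISKS = 6  # disks the array stripes data across, as in Cole's example

def locate(logical_block):
    """Translate a logical block number into (disk, offset on that disk)."""
    return logical_block % N_DISKS, logical_block // N_DISKS

for block in range(8):
    disk, offset = locate(block)
    print(f"logical block {block} -> disk {disk}, offset {offset}")
```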

However, Cole warns that if one takes the usual IT-world product life cycle, which moves from massive hype, through the trough of disillusionment, to the subsequent slow crawl towards the plateau of reality, storage virtualisation currently sits squarely at phase one, where hype is king. “The clear message we are hearing from our customers is that they want to consolidate their distributed storage infrastructure, reduce costs and manage the total cost of ownership of storage more effectively. We have seen a very strong trend in storage over the last two to three years to move more and more storage onto the network, using SANs and NAS, to make it more readily accessible, easier to manage, and easier to share. Currently around 40% of all storage investments are SAN or NAS related, and the analysts are predicting that by 2005 this figure will have grown to 70%.”

A different, but related, approach to management and cost containment concerns the policies companies adopt on data. With the size of e-mail attachments burgeoning, the idea that absolutely every last bit and byte should be held on disk is certain to escalate costs out of sight.

Both Atkins and Cole reckon that better automation techniques have resurrected the old ’80s concept of hierarchical storage management (HSM). The basic idea is that material for which users can tolerate, say, a 15-second retrieval lag is pushed off disk to near-line tape as a matter of policy.

The system also archives information that has not been accessed for a user-determined period of time. So anything that is not looked at for six months, say, is automatically moved from disk to tape. As Cole says, too many companies are incurring substantial storage bills by keeping e-mail traffic on disk instead of moving it off to a near-line tape retrieval system and from there to a compressed archive.
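A rough sketch of that ageing policy in code might look like the following. The /data path, the six-month window and the migrate stub are all placeholders; a real HSM would copy the data to tape, leave a stub file behind, and recall the contents transparently on the next access.

```python
import os
import time

ROOT = "/data"            # placeholder: disk area under HSM control
WINDOW = 182 * 24 * 3600  # roughly six months, in seconds

def migrate_to_tape(path):
    # Placeholder: a real HSM copies the file to near-line tape and
    # leaves a stub behind so the data can be recalled on demand.
    print(f"would migrate {path} to near-line tape")

now = time.time()
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if now - os.stat(path).st_atime > WINDOW:  # untouched too long
            migrate_to_tape(path)
```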

Adam Sharp, acting marketing director at SAN switch manufacturer Brocade, foresees big changes over the next six months in the interoperability of storage-related components from various vendors. “You will see some of the big-name vendors undertaking to support SANs involving equipment from other vendors. Solution providers like IBM Global Services and EDS are launching their own SAN solutions centres, and their customers will insist on interoperability,” he says. Going forward, all vendors will come to see interoperability as a revenue-generating opportunity.
