
Survival of the fittest

From STORAGE Magazine Vol 12, Issue 04 - September/October 2012

THE ADVENT OF THE CLOUD IS CHANGING THE DISASTER RECOVERY LANDSCAPE: A HARD TO JUSTIFY 'NICE-TO-HAVE' IS INCREASINGLY VIEWED AS AN AFFORDABLE NECESSITY. BUT IS THIS NECESSARILY A GOOD THING, AND WHAT IMPACT MIGHT CLOUD SERVICES AND OTHER EMERGING TECHNOLOGIES HAVE ON PLANNING FOR DISASTERS? STORAGE MAGAZINE EDITOR DAVID TYLER REPORTS

Until fairly recently, disaster recovery usually meant one of two things, depending on who you were: for a large enterprise it meant huge capital investments, and for everyone else it meant backing up only the most critical data (usually to tape) and storing it offsite. But the growth in cloud services is proving a game changer for the storage industry as well as for end user organisations. Are services such as Dropbox and Google Docs confusing users about what a good DR plan should involve?

Stéphane Estevez, Sr. Product Marketing Manager EMEA & APAC at Quantum, believes that there is much confusion around cloud as a backup medium, and that it should not be seen as a magic bullet that turns disaster recovery into a process where you simply 'press a button'. He comments: "Cloud storage and cloud backup solutions have similarities; however, they can deliver very different results. You can easily use a cloud storage service such as Amazon Glacier to protect your primary data, and this can be useful in the event of an outage. It is not an effective disaster recovery strategy on its own, though, as primary data alone is not enough to recover your full set-up. What about your applications, licences, user rights and configuration files, for example? It also does not protect you against human error - deleted or overwritten data - unless you upload your backups into the cloud."
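
Estevez's distinction is worth making concrete: parking primary files in a cloud store is not the same as shipping versioned backup sets there. Purely as an illustration (not a Quantum recommendation; the vault name, file path and use of the boto3 library are our assumptions), the following Python sketch pushes a locally created backup archive into an Amazon Glacier vault:

import boto3

def upload_backup_to_glacier(backup_path, vault_name="dr-backups"):
    """Upload a locally created backup archive to a Glacier vault; return its archive ID."""
    glacier = boto3.client("glacier")
    with open(backup_path, "rb") as archive:
        response = glacier.upload_archive(
            vaultName=vault_name,                      # hypothetical vault name
            archiveDescription="Backup set: " + backup_path,
            body=archive,
        )
    # Record the archive ID somewhere durable: Glacier retrievals are slow and
    # there is no convenient listing, so losing the ID effectively loses the backup.
    return response["archiveId"]

print(upload_backup_to_glacier("/backups/fileserver-2012-09-30.tar.gz"))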

In Quantum's view, although cloud-based solutions can be cheap, backing up can mean sending huge amounts of data to the cloud, which requires enough bandwidth to move that data in a reasonable time. Hence bandwidth (for both upload and restore) can become an issue, as it directly affects recovery time. When choosing a cloud disaster recovery solution, restore-time SLAs, contract limits and capacity all need to be taken into consideration.

Bill Hobbib, VP of marketing at ExaGrid, argues that the focus should be more on the process and the planning for disaster recovery than on any one specific technology approach (see boxout '7 steps to protect your data'): "While your disaster recovery plan should cover the protection of your hardware, applications and your data, your data is typically the most valuable asset for any organisation, because without it most companies would be out of business. Today, most of this data is stored on file servers or in virtual machines in the form of Exchange, SQL and other application-created data, or regular file-system files. So all your data should be backed up regularly, with a disaster recovery and backup plan that meets your company's requirements."

Syncsort's Peter Eicher believes that, for small businesses at least, services like Google Drive, Carbonite, Dropbox and so on represent a quantum leap in ease of use. However, he goes on: "For a larger business, it's a dramatically different story. For starters, cloud services are generally file-oriented and they are not application-aware. Once your organisation is large enough to be running email servers and databases, you need application-aware disaster recovery. Cloud services generally don't understand system data or application concepts. This alone is a deal breaker."

Eicher is also concerned about the difference between backup and restore in a cloud environment: "While cloud backups can be very efficient, cloud restores are another matter. It's quick and easy to restore a few deleted vacation photos or some PowerPoint slides, but restoring something like a 500 GB data volume over a thin pipe and off of a generally slow, massively shared disk system, sitting somewhere in the ether, will give you enough waiting time to go on another vacation and take some more pictures. This is simply unacceptable for a larger business, or even a small business that generates big data."
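
A rough back-of-the-envelope calculation (our illustration, not Syncsort's figures) puts numbers on that waiting time. It ignores protocol overhead, deduplication rehydration and contention on the provider's shared storage, all of which only make matters worse:

def restore_hours(data_gb, link_mbps):
    """Hours needed to pull data_gb gigabytes over a link_mbps megabit-per-second pipe."""
    data_bits = data_gb * 8 * 1024 ** 3        # gigabytes -> bits
    return data_bits / (link_mbps * 1_000_000) / 3600

for mbps in (10, 100, 1000):
    print("500 GB over %4d Mbit/s = about %.1f hours" % (mbps, restore_hours(500, mbps)))

That works out at roughly 119 hours at 10 Mbit/s, 12 hours at 100 Mbit/s, and still more than an hour even on a dedicated gigabit link - before any application recovery work begins.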

Ash Ashutosh, founder and CEO of Actifio, shares these concerns about the capability of the cloud to cater for end users' high expectations of disaster recovery in an increasingly 24/7 world. He argues that legacy data management systems were designed predominantly for private, non-shared, dedicated use and do not scale for cloud service providers. As a result, their complexity and cost prevent them from delivering on the economic promise of cloud backup and disaster recovery services.

"Each legacy point tool is a vertically integrated application, performing the same four basic operations: copy, store, move and restore, each independently copying, storing and moving redundant data and restoring overlapping information on a myriad of disks and tapes," explains Ashutosh. "While production data is growing linearly at an average of 8%, copy data is growing exponentially due to the legacy practice of deploying point tools each time a new business requirement emerges. Independent copies are made for backup, disaster recovery, snapshot, and business continuity tools and by several other homegrown or proprietary tools for test and development, compliance, analytics and archival usage."

The sheer preponderance of different 'copy points' across different sub-systems will be a major concern for cloud-based DR, according to Actifio. IDC finds that most companies are already spending heavily on hardware to support copy data, and predicts that by 2013 the total cost of supporting copy data will surpass that of supporting the production data it is intended to protect. Ashutosh goes on to explain why this is a problem: "Based on current practices, it's not uncommon for a single piece of data to be backed up anywhere between 13 and 120 times, wasting storage space and costing organisations time and money in managing such a complex environment.

"The increase in storage footprint, cost and management requirements has far exceeded the cost of primary storage, driving many providers to seek new ways to ensure more efficient data protection and availability options."
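
The arithmetic behind that claim is simple to sketch. Taking a hypothetical 10 TB of production data (our assumption, not an Actifio or IDC figure) and the 13-to-120 copy range quoted above:

primary_tb = 10                                # hypothetical production data set
for copies in (13, 120):
    print("%3d independent copies of %d TB = %d TB of copy data"
          % (copies, primary_tb, copies * primary_tb))

Even the low end means buying, powering and managing 130 TB to protect 10 TB; the high end is 1.2 PB, before the 8% annual growth in the primary data itself is taken into account.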

Let's go back to Quantum's Stéphane Estevez for the last word: "Ultimately, not all applications are virtualised, and businesses may not wish to replace their backup tool with a new one that uses a native format, so the common approach is still hybrid. For example, mixing tape and vaulting, using cloud or implementing co-location are all potential strategies. Depending on how critical your data is, or the data protection methods you use, you may need to use a number of strategies (e.g. snapshots for non-critical virtual machines, backup on tape for long-term retention). But whatever the approach, just think about what it takes to restore, and make sure you test your disaster recovery.

"We have not yet reached disaster recovery at the 'press of a button', but backup-as-a-service and disaster-recovery-as-a-service solutions will get us there." ST
