How to curtail the rising cost of disaster recovery by moving to the cloud

If a disaster struck your company, could you recover? Do you have a place to store your data so it’s safe and accessible, and do you have a way to recover it after a disaster without bankrupting the company?

Investing in redundant infrastructure and hiring specialized staff to protect yourself is hard to justify in today’s business climate, especially when the rising cost of disaster recovery pushes other critical projects to the back burner. But the answer may be in the cloud, says Ram Shanmugam, senior director of product management for Recovery Services at SunGard Availability Services.

“Recovery in the cloud is offering customers reliable and cost-effective options to increase application availability,” says Shanmugam. “It’s no longer a matter of do you need higher application availability but how can you do it effectively and efficiently compared to traditional recovery models.”

Smart Business spoke with Shanmugam about the advantages of outsourcing disaster recovery to the cloud.

Why is the cloud advantageous?

Organizations require consistent and reliable availability of their recovery infrastructure to match the business value of their full range of applications and data, which range from mission-critical to less critical. Disruptions and outages of mission-critical applications do the most damage to organizations, both financially and in terms of degraded quality of service, lost reputation and lost competitive advantage.

To design and implement a recovery plan, the IT organization must determine the recovery point objective (RPO) and recovery time objective (RTO) for each mission-critical application. The RPO is the maximum amount of data loss, measured in time, that the company is willing to sustain after a disaster; the RTO is the maximum acceptable downtime, and it establishes the timeline and priority for restoring critical business processes and applications. Finally, to meet the RPO and RTO requirements, the IT organization must invest in space, capital equipment and software, hire experienced staff to replicate or back up data, and then try to ensure recovery by executing rigorous testing protocols.
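As an illustrative sketch only (the application names, intervals and helper function here are hypothetical, not part of any SunGard offering), an IT organization can sanity-check whether a proposed backup schedule and measured restore time satisfy each application's RPO and RTO:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    rpo_minutes: int               # max tolerable data loss, measured in time
    rto_minutes: int               # max tolerable downtime
    backup_interval_minutes: int   # how often data is replicated or backed up
    est_restore_minutes: int       # restore time measured during testing

def check_recovery_plan(apps):
    """Flag applications whose current plan cannot meet its RPO or RTO."""
    failures = []
    for app in apps:
        # Worst-case data loss is one full backup interval.
        if app.backup_interval_minutes > app.rpo_minutes:
            failures.append((app.name, "RPO"))
        if app.est_restore_minutes > app.rto_minutes:
            failures.append((app.name, "RTO"))
    return failures

apps = [
    App("order-processing", rpo_minutes=15, rto_minutes=60,
        backup_interval_minutes=5, est_restore_minutes=45),
    App("reporting", rpo_minutes=60, rto_minutes=240,
        backup_interval_minutes=120, est_restore_minutes=90),
]
print(check_recovery_plan(apps))  # reporting's 2-hour backups violate its 1-hour RPO
```

A check like this makes the cloud trade-off concrete: if an in-house backup interval cannot be shortened affordably, a pay-as-you-use replication service may be the only way to close the RPO gap.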

In contrast, cloud-based recovery offers a reliable and affordable alternative for achieving RPO and RTO requirements and ensuring higher availability for mission-critical applications. Cloud-based recovery solutions offer access to low-cost or pay-as-you-use recovery infrastructure, which can be provisioned on demand to recover mission-critical applications in the wake of failure events, with sufficient security and guaranteed performance.

What should executives consider before outsourcing disaster recovery to the cloud?

* Cost savings. This is a significant driver.

* RPO/RTO. Companies often forgo their RPO/RTO requirements because in-house solutions are cost prohibitive. The cloud offers the ability to significantly improve application availability in a cost-effective manner.

* Reliability. The ROI of a recovery environment is in the reliability of its performance at the time of disaster. Compared to in-house solutions, managed cloud solutions offer higher reliability in recovering mission-critical applications after failure events, with sufficient security and guaranteed performance.

* Skilled resources. In-house recovery solutions require investment in talent to support the infrastructure. In contrast, the cloud eliminates the need for that investment, freeing up resources to focus on value creation.

Does migrating to the cloud create a loss of flexibility?

No. In fact, the cloud allows IT organizations to optimize their investment and resources by offering configurable options to meet the individual availability objectives of each application or business process.

IT organizations also have the flexibility to customize a cost-effective hybrid recovery environment by integrating cloud with dedicated internal infrastructure to support availability of large, complex applications and business processes.

What should CIOs consider when evaluating prospective partners?

Ask these questions to evaluate potential cloud partners when considering cloud-based recovery options.

* Does it offer meaningful service level guarantees for recovery of mission-critical applications? Can it reliably recover mission-critical applications in the wake of failure?

* Does it support heterogeneous computing platforms (e.g. Windows, Linux) and hybrid architectures that meet the recovery needs of the entire IT portfolio?

* Does the staff have hands-on disaster recovery experience? Has it recovered from a disaster? Does it understand the entire disaster recovery lifecycle? Can it provide audit-ready test reports?

* Can the partner support a broad portfolio of RPO/RTO requirements in its cloud solution? Does it provide options for high availability, as well as less critical applications, in a heterogeneous environment?

* What is the range of options supported for moving data to the cloud? Does it use monitoring and automation tools to ensure rapid and effective response to failures?

* Can the cloud partner handle your current and future needs? Can it expand and contract on demand, handle sudden growth or support large amounts of application data?

* Can clients pay as they go?

Is data in the cloud secure?

A cloud partner should offer multiple levels of security and service options to fit your needs. Companies concerned that some data are too sensitive for the cloud, despite its security, can place that data in a private cloud while selecting a shared cloud for everything else.

One size doesn’t fit all, so a cloud partner should offer a range of private, hybrid and physical environments to make sure your data is secure and can be recovered after a disaster.

Ram Shanmugam is the senior director of product management for Recovery Services at SunGard Availability Services. Reach him at [email protected]

How to avoid business interruption with a pragmatic approach to disaster recovery

Kerwin Myers, Senior Director of Product Management, SunGard Availability Services

There’s been no shortage of threats to business continuity over the past decade: terror attacks, hurricanes and tornadoes are constant reminders of the need for a well-honed disaster recovery plan. But the era has also featured two devastating recessions and a rash of corporate downsizings, leaving executives with fewer resources to guarantee the timely restoration of critical business operations and databases.

“The events of the past decade have taught us that businesses must deal with disaster recovery in a pragmatic way,” says Kerwin Myers, senior director of product management at SunGard Availability Services. “Otherwise, companies may make critical mistakes and may never recover from a disaster. Conversely, overspending on disaster recovery can also pose a threat to your company.”

Smart Business spoke with Myers about how to ensure business continuity by taking a realistic approach to disaster recovery.

How have the events of the last decade impacted disaster recovery?

Executives seldom thought about disaster recovery before Sept. 11. Now, they recognize the need for planning, but restoring a network and recovering data require professionals with specialized skills, and the rigorous process often takes a back seat to day-to-day operations. Additionally, companies must now adhere to governmental regulations and industry mandates designed to ensure organizations develop business continuity plans.

What are the typical pain points that organizations face when recovering from a disaster?

Recovering from a disaster hinges on accurate and current disaster recovery procedures; many organizations fail to recover, or take longer to recover, because their procedures are neither. Production IT environments change daily, and each change can affect recovery. An effective change management practice that updates recovery procedures and recovery configurations alongside day-to-day production changes is therefore a crucial component of successful restoration.

In many cases, organizations depend on the same staff for production and disaster recovery. This requires production staff to be redeployed to restore critical applications and data during a disaster. But the event may prevent workers from reaching the facility, or inflict personal hardships or injuries that keep them from working.

Last, IT professionals spend most of their time maintaining and updating applications, so restoration efforts may be hampered by a lack of familiarity with restoration practices.

What are the key planning elements to help ensure a seamless recovery?

Recovery plans should be customized to individual businesses but should include these critical steps to ensure effective recovery.

* Create specific and sequential recovery processes and procedures. Employees need clear procedures to restore critical IT services.

* Establish priorities. Some mission-critical applications and technical functions must be restored immediately to minimize financial loss. Consider cost/performance trade-offs, estimated recovery times and business needs when establishing post-event priorities.

* Close skill gaps. Staff members must take on specific roles and duties during recovery, but there’s no time for training once disaster strikes. Inventory the required skills to execute the plan and close gaps through training or by contracting with external providers.

* Keep plans current. IT organizations must ensure production changes are being replicated in recovery configurations and procedures.
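The prioritization step above can be sketched in code. The inventory, tiers and timings below are hypothetical, but the idea holds: derive an explicit restoration sequence from each application's RTO and dependencies ahead of time rather than improvising one during an event.

```python
# Hypothetical application inventory: (name, rto_hours, depends_on)
inventory = [
    ("payroll",        24, ["database"]),
    ("database",        2, []),
    ("customer-portal", 4, ["database"]),
    ("reporting",      48, ["database"]),
]

def recovery_sequence(inventory):
    """Order applications by urgency (tightest RTO first), ensuring each
    app's dependencies are restored before the app itself.
    Assumes the dependency graph has no cycles."""
    restored, sequence = set(), []
    pending = sorted(inventory, key=lambda app: app[1])  # tightest RTO first
    while pending:
        for app in pending:
            name, _, deps = app
            if all(d in restored for d in deps):
                restored.add(name)
                sequence.append(name)
                pending.remove(app)
                break
    return sequence

print(recovery_sequence(inventory))
# database first (2-hour RTO), then customer-portal, payroll, reporting
```

Writing the sequence down this way also exposes skill gaps early: each step in the ordered list can be assigned an owner and a trained backup before disaster strikes.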

What mistakes may impede or prevent a complete recovery?

An outdated recovery plan can stymie recovery. On a quarterly or semiannual basis, companies need to reconcile the plan with the changing technical configuration and update procedures and priorities to align with business requirements; recovery may fail if the plan’s elements aren’t tested and refined.

Should all data be recovered in the same way?

Most data centers are a collection of new and legacy systems and applications from multiple vendors, which means all data can’t be recovered in the same way. For example, data from critical tier-one applications may be replicated on servers in other locations, which is expensive, but the investment practically eliminates down time after a disaster.

Applications that run in the cloud can be accessed from any location, and the provider assumes responsibility for disaster planning and recovery. Tier-two apps can run on separate servers and be restored from tape backups or a virtualized environment.

How are virtualization and cloud-based solutions impacting backup and recovery processes?

The emergence of the cloud and virtualization has created new rapid recovery options at a better price point. Applications that run on Web-based platforms can be supported by third-party providers with hundreds of servers, so recovery can be as simple as switching to another site. The best providers take a holistic approach by considering the interdependency between legacy and Web-based applications and offer a comprehensive solution.

What should an IT manager look for in an outsourced disaster recovery service provider?

Beyond price and equipment, an IT manager should evaluate the following criteria.

* Experience and expertise in processes and procedures.

* Commitment and conviction backed by guarantees and SLAs.

* Track record. Has the firm been tested by a real disaster? Was the recovery successful?

* Testing and audits. A provider should conduct hundreds of tests and audits each year, so ask to review its documentation before committing.

Kerwin Myers is a senior director of product management for SunGard Availability Services. Reach him at [email protected]

How to pick a colocation provider to meet your business’s IT needs

The relentless pace of business automation and Internet commerce has led to a staggering increase in the amount of data that businesses need to store. And that growth has created a corresponding need for businesses to expand their IT capabilities.

However, a direct investment distracts from your core business and can cost up to $10 million for buildout and $5,000 per square foot for operational overhead. That’s why many companies are opting to outsource their IT through colocation.

“Colocation is all about economies of scale, focusing on your core competencies as a business and letting someone else handle the data center aspect,” says Joe Sullivan, senior director, colocation product management with SunGard Availability Services. “From a financial perspective, it allows you to take a large cash outlay or capital expense and convert that into an operating expense.”

Smart Business spoke with Sullivan about how to decide if colocation is right for you, and what to look for in a colocation provider.

Why are companies considering colocation?

The No. 1 reason companies consider colocation is to avoid making IT a core competency of their business. The second major reason is to gain the economies of scale you get from sharing a facility with others. Instead of having to build and maintain your own facility, you can leverage another company to do it for you and share those costs with other customers that you might not even know.

Some companies already have their own data center, but their current facility can’t keep up with their business’s growth. Adding a second site to handle the growth is one reason companies consider colocation. Disaster recovery planning is another reason. Companies may need an alternate location to protect against infrastructure downtime, either from natural catastrophes or hardware failures.

What are the key factors to consider when picking a colocation provider?

There are five key factors. First is how much power a customer needs, not only today but in the future. The colocation business is driven by power costs, along with the cooling needed to remove the heat that power generates. That drives a large portion of the cost structure and facility capacity, so it’s the No. 1 thing providers ask customers.

The second factor is environment size. Do you need a cabinet, two cabinets, or do you need a cage to store your data? Think of cabinets and cages as storage lockers and apartments. You might only need a locker, or you might need an apartment, depending on how much you have. Basically, a colocation provider rents highly powered, highly cooled and fully redundant storage lockers and apartments.

Geographic location is important, as well. Many customers like to be close to their facilities. They want to touch and feel their data and make all the changes themselves. Others want their provider to be as far away from their main facility as possible, because their main concern is disaster recovery. Those customers ask about colocation facilities in St. Louis, Denver, Phoenix and Dallas, because those sites don’t have as much natural disaster activity.

Fourth is connectivity. You may call it bandwidth, telecom or fiber, but it’s all connectivity. This is important because you are going to need to get your data out of those storage lockers or apartments to somewhere — either back to your facility or to your customers through the Internet.

Last, what services do you need on top of colocation? Some companies just provide space, power and connectivity. Others provide services such as data backup, storage, security monitoring and intrusion detection on your servers, and services such as cloud applications.

You need to make a decision up front on whether you may want those services at some point, because if you are outsourcing your colocation, you may end up outsourcing other services, as well. If that’s the case, you want to make sure you choose a provider that has the capability to do that. The services you need on top of colocation today or in the future should be a big factor in your choice of provider.

What questions should you ask a potential provider to determine if it can meet your needs?

What facilities in the geographies we are interested in have the space and power we need? That’s the first question. It doesn’t matter that a provider has 55 locations if only three of them are in the geography you want and none of them has the space and power available to meet your needs.

Next, get into resiliency questions. Typically, customers look for companies that have fully redundant power systems: at every point in the path, from the utility company, through two feeds into the building, to two power plants, one system should be able to fail over to the other. If anything in that chain fails, fully redundant systems keep you running.

That is a large differentiator between providers. Some are fully redundant and some have single points of failure. The customer needs to determine whether those single points of failure are acceptable for the applications they are running and for the price discount they would receive.

Then you get to pricing. Not all prices are created equal in colocation. You might not be getting the same thing, even if it sounds the same. You might have two providers, one who advertises a cabinet for $1,000 and the other for $1,500. It may seem like the $1,000 cabinet is the better deal, but the $1,500 cabinet might give you three times the power density, which would make it a better deal.
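That comparison is easy to make concrete by normalizing each quote by delivered power. A minimal sketch, using the prices from the example above and assuming, hypothetically, 2 kW versus 6 kW of power density:

```python
def price_per_kw(monthly_price, power_kw):
    """Normalize a cabinet quote by the power it actually delivers."""
    return monthly_price / power_kw

# Hypothetical quotes: same cabinet footprint, different power density.
cheap = price_per_kw(1000, 2.0)  # $1,000 cabinet at 2 kW
dense = price_per_kw(1500, 6.0)  # $1,500 cabinet at 6 kW (3x the power)

print(f"${cheap:.0f}/kW vs ${dense:.0f}/kW")  # $500/kW vs $250/kW
```

On a per-kilowatt basis the $1,500 cabinet costs half as much, which is why quotes should always be compared on delivered power, not sticker price.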

You should ensure transparency in pricing from a provider, and always make sure you understand what’s included and what’s not included.

Joe Sullivan is senior director, colocation product management with SunGard Availability Services. Reach him at (303) 942-2937 or [email protected]