A growing number of businesses are using cloud computing to access resources over the Internet, store data and run applications. However, in abandoning traditional on-premises computing and data storage for a cloud-based solution, many companies fear what will happen if the remote data center housing the cloud suffers an outage of its own.

“As Amazon’s recent outage at its Dublin data center showed, it is possible, though unlikely, that a data center might go down,” says Indu Kodukula, executive vice president, products, and chief technology officer of SunGard Availability Services. “That is a risk that companies can mitigate with a managed multisite availability solution.”

Smart Business spoke with Kodukula about how managed multisite availability is changing what’s possible in the cloud, and how your business could benefit.

What is managed multisite in a cloud environment, and why is it important?

If you look at managed multisite availability, each term essentially defines what the service is. ‘Multisite’ is the next logical evolution of our cloud platform. Instead of having one, we now have multiple sites where the cloud is available. That allows the cloud to be geographically redundant.

No matter how unlikely, a cloud infrastructure that is housed in a single data center has the potential to be the victim of either natural or man-made disasters. To provide a better level of availability, a cloud provider needs to be able to keep services and customer environments up and running, even in the event of an entire site disaster.

‘Availability’ refers to the fact that most applications can tolerate only a certain amount of downtime, an amount directly related to the business value of the application. Most cloud users run development and test environments in the cloud. To ensure high availability in a production setting, a cloud environment should be built from the ground up to run production applications and customer environments, which have a higher availability threshold than development and test environments. A multisite cloud environment provides each application with a level of availability commensurate with its business value.

The last aspect is ‘managed.’ In contrast to the many cloud service providers that essentially provide a DIY service, a business should look for a provider that builds information technology (IT) management into the environment from day one.

Why is it important to have a cloud environment with IT capabilities?

If you are looking for a cloud environment for production that provides all the capabilities and processes expected of IT — change management, security, operations control, the ability to resolve problems and issues — those should all be part of the managed services provided on top of the cloud environment. That gives companies a tremendous level of comfort: they can trust the production environment and get the level of availability they need.

That is very different from the DIY model that many cloud providers offer, in which you could be left to fend for yourself.

Is cloud computing for everyone?

There are several points that companies typically walk through when deciding to use the cloud. The No. 1 reason companies want to use the cloud for their applications is to align their spending with business value. Enterprise IT has become increasingly capital intensive. Companies don’t know up front what business return they will receive from a capital investment in enterprise IT, but they make the investment anyway and hope that it all works out.

Using the cloud is fundamentally different: you pay only for the compute resources you use and the data you store, you don’t have hardware to buy or install and, in a managed environment, you don’t need internal resources to manage your IT. The service provider takes responsibility for maintaining the software, servers and applications.

Companies using the cloud for enterprise IT can therefore keep investments in line with business value, then put more capital into infrastructure and resources as the application’s growth justifies it and the business becomes more successful.

However, there are concerns. The moment something moves outside your firewall, you no longer own it, so you have to decide what to keep in house and what to move to the cloud. Others worry about the performance and availability of data in the cloud. Multisite availability is most useful for applications that can tolerate only about four hours of downtime a year, that need geographic redundancy, or that keep the business up and running when you don’t want the internal responsibility of operating the application yourself.
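As a back-of-the-envelope illustration (mine, not from the interview), here is the arithmetic behind that four-hour threshold, sketched in Python; the four-hour budget is simply the example figure mentioned above:

    # Availability percentage implied by a yearly downtime budget.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def availability(max_downtime_hours_per_year: float) -> float:
        return 100.0 * (1 - max_downtime_hours_per_year / HOURS_PER_YEAR)

    # Four hours of downtime a year works out to roughly 99.95% uptime.
    print(f"{availability(4):.2f}%")  # -> 99.95%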

How can businesses get started?

The first step is a virtualization assessment. Then decide which applications and processes to virtualize. Next, take each virtualized application and decide whether to keep it in house or move it outside your firewall.

Look for a cloud service provider that will guide you through the process, helping you understand and decide which applications should stay in house, either because they are not ready to be virtualized or because they are too tightly tied to the business, and which can be moved safely. The goal is a roadmap for moving applications to the provider’s data center.

What applications are good fits for the cloud?

If you have an application that supports your business and is growing so strongly that it will need 10 times more resources next year than it does today, the elasticity the cloud offers is a great option. If the application also uses modern technology, which is easier to virtualize, that combination makes it compelling to move the application to the cloud.

The business argument for moving older technology, such as ERP systems, to the cloud is far less compelling.

Indu Kodukula is executive vice president, products, and chief technology officer with SunGard Availability Services. Reach him at indu.kodukula@sungard.com.


If a disaster struck your company, could you recover? Do you have a place to store your data so it’s safe and accessible, and do you have a way to recover it after a disaster without bankrupting the company?

Investing in redundant infrastructure and hiring specialized staff to protect yourself is hard to justify in today’s business climate, especially when the rising cost of disaster recovery pushes other critical projects to the back burner. But the answer may be in the cloud, says Ram Shanmugam, senior director of product management for Recovery Services at SunGard Availability Services.

“Recovery in the cloud is offering customers reliable and cost-effective options to increase application availability,” says Shanmugam. “It’s no longer a matter of do you need higher application availability but how can you do it effectively and efficiently compared to traditional recovery models.”

Smart Business spoke with Shanmugam about the advantages of outsourcing disaster recovery to the cloud.

Why is the cloud advantageous?

Organizations require consistent and reliable availability of their recovery infrastructure to match the business value of their full range of applications and data, from mission-critical to less critical. Disruptions and outages of mission-critical applications do the most damage to organizations, financially and in terms of quality of service, reputation and competitive advantage. To design and implement a recovery plan, the IT organization must determine the recovery point objective (RPO) and recovery time objective (RTO) for each mission-critical application. The RPO is the amount of data loss the company is willing to sustain after a disaster, measured as the time between the last recovery point and the disruption; the RTO is the maximum tolerable downtime, and it establishes the timeline and priority for restoring critical business processes and applications.

Finally, to meet the RPO and RTO requirements, the IT organization must invest in space, capital equipment and software, and hire experienced staff to replicate or back up data, then try to ensure recovery by executing rigorous testing protocols.
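To make those two objectives concrete, here is a minimal sketch in Python (an illustration with hypothetical numbers, not a SunGard tool): in the worst case, data loss equals one full backup interval, so the interval must fit inside the RPO, and the estimated restore time must fit inside the RTO.

    # Hypothetical check of a backup schedule against RPO/RTO targets.
    def meets_objectives(rpo_minutes, rto_minutes,
                         backup_interval_minutes, estimated_restore_minutes):
        # Worst-case data loss is one full backup interval;
        # worst-case downtime is the estimated restore time.
        return (backup_interval_minutes <= rpo_minutes
                and estimated_restore_minutes <= rto_minutes)

    # Example: a 15-minute RPO and a 4-hour RTO. Hourly backups miss
    # the RPO even though the restore estimate fits the RTO.
    print(meets_objectives(rpo_minutes=15, rto_minutes=240,
                           backup_interval_minutes=60,
                           estimated_restore_minutes=180))  # False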

In contrast, cloud-based recovery offers a reliable and affordable alternative for achieving RPO and RTO requirements and ensuring higher availability for mission-critical applications. Cloud-based recovery solutions offer access to low-cost or pay-as-you-use recovery infrastructure, which can be provisioned on demand to recover mission-critical applications in the wake of failure events, with sufficient security and guaranteed performance.

What should executives consider before outsourcing disaster recovery to the cloud?

  • Cost savings. Moving from dedicated in-house recovery infrastructure to pay-as-you-use cloud resources is a significant driver.
  • RPO/RTO. Companies often forsake their RPO/RTO requirements because in-house solutions are cost prohibitive. The cloud offers the ability to significantly improve application availability in a cost-effective manner.
  • Reliability. The ROI of a recovery environment lies in how reliably it performs at the time of disaster. Compared with in-house solutions, managed cloud solutions offer higher reliability in recovering mission-critical applications after failure events, with sufficient security and guaranteed performance.
  • Skilled resources. In-house recovery solutions require investment in talent to support the infrastructure. In contrast, the cloud eliminates the need for that investment, freeing up resources to focus on value creation.

Does migrating to the cloud create a loss of flexibility?

No. In fact, the cloud allows IT organizations to optimize their investment and resources by offering configurable options to meet the individual availability objectives of each application or business process.

IT organizations also have the flexibility to customize a cost-effective hybrid recovery environment by integrating cloud with dedicated internal infrastructure to support availability of large, complex applications and business processes.

What should CIOs consider when evaluating prospective partners?

Ask these questions to evaluate potential cloud partners when considering cloud-based recovery options.

  • Does it offer meaningful service level guarantees for recovery of mission-critical applications? Can it reliably recover mission-critical applications in the wake of failure?
  • Does it support heterogeneous computing platforms (e.g., Windows, Linux) and hybrid architectures that meet the recovery needs of the entire IT portfolio?
  • Does the staff have hands-on disaster recovery experience? Has it recovered from a disaster? Does it understand the entire disaster recovery lifecycle? Can it provide audit-ready test reports?
  • Can the partner support a broad portfolio of RPO/RTO requirements in its cloud solution? Does it provide options for high availability, as well as less critical applications, in a heterogeneous environment?
  • What is the range of options supported for moving data to the cloud? Does it use monitoring and automation tools to ensure rapid and effective response to failures?
  • Can the cloud partner handle your current and future needs? Can it expand and contract on demand, handle sudden growth or support large amounts of application data?
  • Can clients pay as they go?

Is data in the cloud secure?

A cloud partner should offer multiple levels of security and service options to fit your needs. Those concerned that some data are too sensitive for the cloud, despite its security, can keep that data in a private cloud while selecting a shared cloud for everything else.

One size doesn’t fit all, so a cloud partner should offer a range of private, hybrid and physical environments to make sure your data is secure and can be recovered after a disaster.

Ram Shanmugam is the senior director of product management for Recovery Services at SunGard Availability Services. Reach him at ramanan.shanmugam@sungard.com.


There’s been no shortage of threats to business continuity over the past decade; terror attacks, hurricanes and tornadoes are constant reminders of the need for a well-honed disaster recovery plan. But the era has also featured two devastating recessions and a rash of corporate downsizings, leaving executives with fewer resources to guarantee the timely restoration of critical business operations and databases.

“The events of the past decade have taught us that businesses must deal with disaster recovery in a pragmatic way,” says Kerwin Myers, senior director of product management at SunGard Availability Services. “Otherwise, companies may make critical mistakes and may never recover from a disaster. Conversely, overspending on disaster recovery can also pose a threat to your company.”

Smart Business spoke with Myers about how to ensure business continuity by taking a realistic approach to disaster recovery.

How have the events of the last decade impacted disaster recovery?

Executives seldom thought about disaster recovery before Sept. 11. Now, they recognize the need for planning, but restoring a network and recovering data require professionals with specialized skills, and the rigorous process often takes a back seat to day-to-day operations. Additionally, companies must now adhere to governmental regulations and industry mandates designed to ensure organizations develop business continuity plans.

What are the typical pain points that organizations face when recovering from a disaster?

Recovering from a disaster hinges on accurate, current disaster recovery procedures; many organizations fail to recover, or take longer to recover, because their procedures are neither. Production IT environments change daily, and those changes affect recovery. An effective change management practice that updates recovery procedures and recovery configurations alongside day-to-day production changes is therefore a crucial component of successful restoration.
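As a simple illustration of the discipline this implies (a hypothetical sketch, not a SunGard tool), a recovery team might regularly diff the production configuration against the recovery configuration and flag drift:

    # Flag production settings that are missing or different in the
    # recovery environment; the settings shown are invented examples.
    def config_drift(production, recovery):
        return {key: (production[key], recovery.get(key))
                for key in production
                if recovery.get(key) != production[key]}

    prod = {"db_version": "12.4", "app_servers": 8, "tls": "1.2"}
    dr = {"db_version": "12.1", "app_servers": 8}

    print(config_drift(prod, dr))
    # {'db_version': ('12.4', '12.1'), 'tls': ('1.2', None)}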

In many cases, organizations depend on the same staff for production and disaster recovery, which means production staff must be redeployed to restore critical applications and data during a disaster. But the event may prevent workers from reaching the facility, or inflict personal hardships or injuries that keep them from working.

Last, IT professionals spend most of their time maintaining and updating applications, so restoration efforts may be hampered by a lack of familiarity with restoration practices.

What are the key planning elements to help ensure a seamless recovery?

Recovery plans should be customized to individual businesses but should include these critical steps to ensure effective recovery.

  • Create specific, sequential recovery processes and procedures. Employees need clear procedures to restore critical IT services.
  • Establish priorities. Some mission-critical applications and technical functions must be restored immediately to minimize financial loss. Consider cost/performance trade-offs, estimated recovery times and business needs when establishing post-event priorities.
  • Close skill gaps. Staff members must take on specific roles and duties during recovery, but there’s no time for training once disaster strikes. Inventory the required skills to execute the plan and close gaps through training or by contracting with external providers.
  • Keep recovery in sync with production. IT organizations must ensure production changes are replicated in recovery configurations and procedures.

What mistakes may impede or prevent a complete recovery?

An outdated recovery plan can stymie recovery. Companies need to reconcile the plan with the changing technical configuration and update procedures and priorities to align with business requirements on a quarterly or semiannual basis; recovery may fail if the plan elements aren’t tested and refined.

Should all data be recovered in the same way?

Most data centers are a collection of new and legacy systems and applications from multiple vendors, which means all data can’t be recovered in the same way. For example, data from critical tier-one applications may be replicated on servers in other locations, which is expensive, but the investment practically eliminates downtime after a disaster.

Applications that run in the cloud can be accessed from any location, and the provider assumes responsibility for disaster planning and recovery. Tier-two applications might run on separate servers and be restored from tape backups or a virtualized environment.
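One way to picture that tiering (an illustrative sketch; the tier names, strategies and recovery times are hypothetical examples, not SunGard categories):

    # Hypothetical mapping of application tiers to recovery strategies.
    RECOVERY_TIERS = {
        "tier-1": {"strategy": "live replication to a second site",
                   "typical_rto": "minutes",
                   "relative_cost": "high"},
        "tier-2": {"strategy": "restore from tape backup or virtualized image",
                   "typical_rto": "hours to days",
                   "relative_cost": "moderate"},
    }

    for tier, plan in RECOVERY_TIERS.items():
        print(f"{tier}: {plan['strategy']} (RTO ~ {plan['typical_rto']})")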

How are virtualization and cloud-based solutions impacting backup and recovery processes?

The emergence of the cloud and virtualization has created new rapid recovery options at a better price point. Applications that run on Web-based platforms can be supported by third-party providers with hundreds of servers, so recovery can be as simple as switching to another site. The best providers take a holistic approach by considering the interdependency between legacy and Web-based applications and offer a comprehensive solution.

What should an IT manager look for in an outsourced disaster recovery service provider?

Beyond price and equipment, an IT manager should evaluate the following criteria.

  • Experience and expertise in processes and procedures.
  • Commitment and conviction backed by guarantees and SLAs.
  • Track record. Has the firm been tested by a real disaster? Was the recovery successful?
  • Testing and audits. A provider should conduct hundreds of tests and audits each year, so ask to review its documentation before committing.

Kerwin Myers is a senior director of product management for SunGard Availability Services. Reach him at kerwin.myers@sungard.com.
