There’s been no shortage of threats to business continuity over the past decade: terror attacks, hurricanes and tornadoes are constant reminders of the need for a well-honed disaster recovery plan. But the era has also featured two devastating recessions and a rash of corporate downsizings, leaving executives with fewer resources to guarantee the timely restoration of critical business operations and databases.

“The events of the past decade have taught us that businesses must deal with disaster recovery in a pragmatic way,” says Kerwin Myers, senior director of product management at SunGard Availability Services. “Otherwise, companies may make critical mistakes and may never recover from a disaster. Conversely, overspending on disaster recovery can also pose a threat to your company.”

Smart Business spoke with Myers about how to ensure business continuity by taking a realistic approach to disaster recovery.

How have the events of the last decade impacted disaster recovery?

Executives seldom thought about disaster recovery before Sept. 11. Now, they recognize the need for planning, but restoring a network and recovering data require professionals with specialized skills, and the rigorous process often takes a back seat to day-to-day operations. Additionally, companies must now adhere to governmental regulations and industry mandates designed to ensure organizations develop business continuity plans.

What are the typical pain points that organizations face when recovering from a disaster?

Recovering from a disaster hinges on accurate, current disaster recovery procedures; many organizations fail to recover, or take longer to recover, because those procedures have fallen out of date. Production IT environments change constantly, and day-to-day changes can affect recovery. An effective change management practice that includes a process for updating recovery procedures and recovery configurations is therefore a crucial component of successful restoration.
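
To make that concrete, here is a minimal sketch of an automated drift check, the kind of tooling a change management process might include. This is not SunGard’s method; the host names, versions and inventory format are illustrative assumptions.

```python
# Hypothetical drift check: compare a production inventory against the
# inventory documented in the disaster recovery plan and flag anything
# out of sync. Host names and versions are illustrative assumptions.

production = {
    "db01": {"app": "orders", "version": "4.2"},
    "web01": {"app": "storefront", "version": "7.1"},
    "web02": {"app": "storefront", "version": "7.1"},
}

dr_plan = {
    "db01": {"app": "orders", "version": "4.1"},   # stale version
    "web01": {"app": "storefront", "version": "7.1"},
    # web02 was added to production but never added to the DR plan
}

def find_drift(prod: dict, plan: dict) -> list[str]:
    """Return human-readable discrepancies between the two inventories."""
    issues = []
    for host, config in prod.items():
        if host not in plan:
            issues.append(f"{host}: in production but missing from the DR plan")
        elif plan[host] != config:
            issues.append(f"{host}: DR plan documents {plan[host]}, "
                          f"production runs {config}")
    issues += [f"{host}: in the DR plan but retired from production"
               for host in plan if host not in prod]
    return issues

for issue in find_drift(production, dr_plan):
    print(issue)
```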

In many cases, organizations depend on the same staff for production and disaster recovery, which means production staff must be redeployed to restore critical applications and data during a disaster. But the event itself may prevent workers from reaching the facility, or inflict personal hardships or injuries that keep them from working.

Last, IT professionals spend most of their time maintaining and updating applications, so restoration efforts may be hampered by a lack of familiarity with recovery practices.

What are the key planning elements to help ensure a seamless recovery?

Recovery plans should be customized to individual businesses but should include these critical steps to ensure effective recovery.

* Create specific and sequential recovery processes and procedures. Employees need clear procedures to restore critical IT services.

* Establish priorities. Some mission-critical applications and technical functions must be restored immediately to minimize financial loss. Consider cost/performance trade-offs, estimated recovery times and business needs when establishing post-event priorities.

* Close skill gaps. Staff members must take on specific roles and duties during recovery, but there’s no time for training once disaster strikes. Inventory the required skills to execute the plan and close gaps through training or by contracting with external providers.

* Replicate production changes. IT organizations must ensure changes in the production environment are reflected in recovery configurations and procedures.

What mistakes may impede or prevent a complete recovery?

An outdated recovery plan can stymie recovery. On a quarterly or semiannual basis, companies need to reconcile the plan with the changing technical configuration and update procedures and priorities to align with business requirements; recovery may fail if plan elements aren’t tested and refined.

Should all data be recovered in the same way?

Most data centers are a collection of new and legacy systems and applications from multiple vendors, which means all data can’t be recovered in the same way. For example, data from critical tier-one applications may be replicated on servers in other locations, which is expensive, but the investment practically eliminates downtime after a disaster.

Applications that run in the cloud can be accessed from any location, and the provider assumes responsibility for disaster planning and recovery. Tier-two applications might run on separate servers and be restored from tape backups or in a virtualized environment.
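
As an illustration of this tiered approach, a recovery plan might record each tier’s method and target recovery time objective (RTO) in a simple structure. The tier names, methods and RTO figures below are assumptions for illustration, not SunGard’s recommendations.

```python
# Illustrative recovery-tier matrix: maps application tiers to a recovery
# method and a target recovery time objective (RTO). All values are assumed.

RECOVERY_TIERS = {
    "tier-1": {  # mission-critical, e.g. order processing
        "method": "live replication to servers at a second site",
        "rto_hours": 0.25,
    },
    "tier-2": {  # important, but tolerant of short outages
        "method": "restore into a virtualized environment",
        "rto_hours": 8,
    },
    "tier-3": {  # archival and batch workloads
        "method": "restore from tape backup",
        "rto_hours": 72,
    },
}

def recovery_plan_for(app: str, tier: str) -> str:
    """Describe how a given application would be restored."""
    plan = RECOVERY_TIERS[tier]
    return f"{app}: {plan['method']} (target RTO: {plan['rto_hours']} hours)"

print(recovery_plan_for("order-processing", "tier-1"))
print(recovery_plan_for("reporting", "tier-3"))
```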

How are virtualization and cloud-based solutions impacting backup and recovery processes?

The emergence of the cloud and virtualization has created new rapid recovery options at a better price point. Applications that run on Web-based platforms can be supported by third-party providers with hundreds of servers, so recovery can be as simple as switching to another site. The best providers take a holistic approach by considering the interdependency between legacy and Web-based applications and offer a comprehensive solution.

What should an IT manager look for in an outsourced disaster recovery service provider?

Beyond price and equipment, an IT manager should evaluate the following criteria.

* Experience and expertise in processes and procedures.

* Commitment and conviction backed by guarantees and service-level agreements (SLAs).

* Track record. Has the firm been tested by a real disaster? Was the recovery successful?

* Testing and audits. A provider should conduct hundreds of tests and audits each year, so ask to review its documentation before committing.

Kerwin Myers is a senior director of product management for SunGard Availability Services. Reach him at kerwin.myers@sungard.com.


The relentless pace of business automation and Internet commerce has led to a staggering increase in the amount of data that businesses need to store. And that growth has created a corresponding need for businesses to expand their IT capabilities.

However, a direct investment in data center capacity distracts from your core business and can cost up to $10 million for buildout and $5,000 per square foot for operational overhead. That’s why many companies are opting to outsource their IT through colocation.

“Colocation is all about economies of scale, focusing on your core competencies as a business and letting someone else handle the data center aspect,” says Joe Sullivan, senior director, colocation product management with SunGard Availability Services. “From a financial perspective, it allows you to take a large cash outlay or capital expense and convert that into an operating expense.”

Smart Business spoke with Sullivan about how to decide if colocation is right for you, and what to look for in a colocation provider.

Why are companies considering colocation?

The No. 1 reason companies consider colocation is to avoid making data center operations a core competency of their business. The second major reason is to gain the economies of scale you get from sharing a facility with others. Instead of having to build and maintain your own facility, you can leverage another company to do it for you and share those costs with other customers you might not even know.

Some companies already have their own data center, but their current facility can’t keep up with their business’s growth. Adding a second site to handle the growth is one reason companies consider colocation. Disaster recovery planning is another reason. Companies may need an alternate location to protect against infrastructure downtime, either from natural catastrophes or hardware failures.

What are the key factors to consider when picking a colocation provider?

There are five key factors. First is how much power a customer needs, not only today but in the future. The colocation business is built on power costs, along with the cost of the cooling needed to remove the heat that power generates. Power drives a large portion of the cost structure and facility capacity, so it’s the No. 1 thing providers ask customers about.
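
A rough back-of-the-envelope sketch shows why power dominates the cost structure: every watt drawn by IT equipment must also be cooled, so facilities commonly scale the IT load by an overhead factor such as PUE (power usage effectiveness). The load, rate and PUE figures here are assumed for illustration.

```python
# Back-of-the-envelope power cost estimate. The electricity rate and PUE
# (power usage effectiveness) figures are illustrative assumptions.

it_load_kw = 20          # power drawn by the customer's IT equipment
pue = 1.6                # facility overhead: cooling, lighting, losses
rate_per_kwh = 0.10      # assumed electricity rate in dollars
hours_per_month = 730    # average hours in a month

total_kw = it_load_kw * pue
monthly_cost = total_kw * hours_per_month * rate_per_kwh
print(f"Facility draw: {total_kw:.0f} kW, ~${monthly_cost:,.0f}/month")
# Facility draw: 32 kW, ~$2,336/month
```

The point is not the exact figures but that every kilowatt a customer requests carries a cooling multiple, which is why power is the first question providers ask.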

The second factor is environment size. Do you need a cabinet, two cabinets, or a cage to house your equipment? Think of cabinets and cages as storage lockers and apartments: you might only need a locker, or you might need an apartment, depending on how much you have. Essentially, a colocation provider rents highly powered, highly cooled and fully redundant storage lockers and apartments.

Geographic location is important, as well. Many customers like to be close to their facilities. They want to touch and feel their data and make all the changes themselves. Others want their provider to be as far away from their main facility as possible, because their main concern is disaster recovery. Those customers ask about colocation facilities in St. Louis, Denver, Phoenix and Dallas, because those sites don’t have as much natural disaster activity.

Fourth is connectivity. You may call it bandwidth, telecom or fiber, but it’s all connectivity. This is important because you are going to need to get your data out of those storage lockers or apartments to somewhere — either back to your facility or to your customers through the Internet.

Last, what services do you need on top of colocation? Some providers offer only space, power and connectivity. Others add data backup, storage, security monitoring and intrusion detection on your servers, and even services such as cloud applications.

Decide up front whether you may want those services at some point, because if you are outsourcing your colocation, you may end up outsourcing other services as well. If that’s the case, make sure you choose a provider with the capability to deliver them. The services you need on top of colocation, today or in the future, should be a big factor in your choice of provider.

What questions should you ask a potential provider to determine if it can meet your needs?

The first question: which of your facilities in the geographies we are interested in have the space and power we need? A provider’s 55 locations won’t matter if only three are in the geography you want and none of them has space and power available to meet your needs.

Next, get into resiliency questions. Typically, customers look for companies with fully redundant power systems: at every point in the chain, from the utility feed, through two feeds into the building, to two power plants, one system should be able to fail over to the other. That way, if anything in the chain fails, a redundant system takes over.

That is a large differentiator between providers: some are fully redundant, and some have single points of failure. The customer needs to determine whether those single points of failure are acceptable for the applications they are running and worth the price discount they would get.
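
A simplified availability calculation shows what is at stake. If each power path is up 99.9 percent of the time (an assumed figure) and the two paths fail independently (rarely strictly true in practice), the redundant pair is down only when both fail at once:

```python
# Simplified availability comparison: a single power path versus two
# independent, redundant paths. The 99.9% path availability is an assumption,
# and real paths are rarely fully independent.

path_availability = 0.999                     # each path is up 99.9% of the time
single = path_availability
redundant = 1 - (1 - path_availability) ** 2  # down only if both paths fail

hours_per_year = 8766
print(f"Single path:    {(1 - single) * hours_per_year:.2f} hours down/year")
print(f"Redundant pair: {(1 - redundant) * hours_per_year:.4f} hours down/year")
# Single path:    8.77 hours down/year
# Redundant pair: 0.0088 hours down/year
```

Real-world paths share failure modes, so the improvement is smaller in practice, but the gap illustrates why a single point of failure usually comes with a discount.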

Then you get to pricing. Not all prices are created equal in colocation; you might not be getting the same thing even if it sounds the same. Say two providers each quote a cabinet, one for $1,000 and the other for $1,500. The $1,000 cabinet may seem like the better deal, but if the $1,500 cabinet gives you three times the power density, it’s actually the better buy.
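
One way to make that comparison concrete is to normalize each quote to price per kilowatt. The prices come from the example above; the power figures are assumptions for illustration.

```python
# Normalize cabinet quotes to price per kilowatt of usable power.
# The prices come from the example above; the kW figures are assumptions.

quotes = {
    "Provider A": {"monthly_price": 1000, "power_kw": 2.0},
    "Provider B": {"monthly_price": 1500, "power_kw": 6.0},  # 3x the density
}

for name, q in quotes.items():
    per_kw = q["monthly_price"] / q["power_kw"]
    print(f"{name}: ${q['monthly_price']}/month at {q['power_kw']} kW "
          f"= ${per_kw:.0f} per kW")
# Provider A: $1000/month at 2.0 kW = $500 per kW
# Provider B: $1500/month at 6.0 kW = $250 per kW
```

On these assumed figures, the pricier cabinet delivers power at half the cost per kilowatt.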

Insist on pricing transparency from a provider, and always make sure you understand what’s included and what’s not.

Joe Sullivan is senior director, colocation product management with SunGard Availability Services. Reach him at (303) 942-2937 or joe.sullivan@sungard.com.
