A data center is the infrastructure a business uses to house its IT assets — space, power, cooling, network connectivity, wiring, etc. Depending on the business’s size, it may be a spare closet, a dedicated building or space leased at a public data center.
“The data center itself is infrastructure and doesn’t generate revenue or create differentiated business value,” says Mike Tighe, executive director, Data Products at Comcast Business. “So, the CFO frequently says, ‘Rather than utilize precious capital to build or expand a data center, there are other options including great public data centers where we can lease space.’”
Smart Business spoke with Tighe about data center best practices, including network and bandwidth considerations.
Why are data centers so important today, and what’s in store for the future?
The function of a data center is to ensure availability of IT applications and data. If employees don’t have access, they can’t be as productive and, in some cases, the business can’t run. The trend to place IT assets — applications, servers and storage — in public data centers is rapidly evolving for businesses of all sizes, either as a main data center or as part of a business continuity strategy.
Over the next five years, the trend of renting rather than owning IT infrastructure will accelerate as businesses utilize cloud-based infrastructure and applications. This is not just because of better economics; the cloud also enables rapid deployment and the ability to scale applications, which drives better productivity.
When should you look at outsourcing a data center?
When IT becomes an important component of how you run your business, you have to ensure high availability. If, for example, you install specialized applications used for resource planning and creation of content, but the server starts going down because of power or network connectivity loss, it impacts your business’s ability to run.
Another factor is economic. As businesses make IT decisions, they may not have the capital to build or upgrade data centers, so they’ll look at alternatives.
What are some options to consider with public data centers?
By their very nature, there are more capabilities in a public data center because everyone is sharing the cost of the generator, the physical security monitoring, having multiple network providers, etc. However, some things to consider are:
- Physical security procedures.
- Redundancy of critical components.
- The ability to expand as your IT infrastructure requirements increase.
- Network for primary and backup connections. What providers have extended their network into the data center to provide connectivity and ensure access?
- Location. Regional events including loss of power and natural disasters dictate that the backup site be located far enough from the main data center so as not to be affected by a single incident. Hurricane Sandy certainly brought home the point that a redundant data center far enough inland on a separate power grid helps ensure application availability.
How can companies build the right network?
Strong network connectivity becomes more important as IT assets are put into public data centers. Know how fast your company’s bandwidth requirements are growing, and your network’s ability to scale for future requirements. On average, over the past decade, a business’s bandwidth requirements have grown around 50 percent per year. Look at network technologies that can cost-effectively scale — from 10 megabits per second, an average site requirement, to 1 gigabit per second, for example. Ethernet technology, which local-area networks are built on, is one solution that businesses are leveraging for their networks.
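To see how quickly a 50 percent annual growth rate compounds, here is a minimal sketch. The 10 Mbps starting point is the average site requirement mentioned above; treating the growth rate as steady every year is an assumption for illustration, not a forecast.

```python
def projected_bandwidth_mbps(start_mbps, annual_growth, years):
    """Compound a starting bandwidth requirement forward by `years`."""
    return start_mbps * (1 + annual_growth) ** years

# At 50 percent annual growth, a 10 Mbps site crosses 1 Gbps in under 12 years.
for year in range(0, 13, 3):
    mbps = projected_bandwidth_mbps(10, 0.50, year)
    print(f"Year {year:2d}: {mbps:8,.0f} Mbps")
```

The compounding is the point: a network that can only scale in small fixed steps will be re-engineered repeatedly, while a technology that scales from 10 Mbps to 1 Gbps absorbs a decade of this growth.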
How do data center solutions impact a business’s bottom line?
With the economic downturn, use of company capital became a focus. Executives decided that the data center, while important, doesn’t produce any intrinsic value, and that they can lease space and preserve capital for projects that improve the bottom line. Companies can rent space by the square foot, rather than having to build another data center as IT needs expand.
Mike Tighe is executive director, Data Products at Comcast Business. Reach him at (215) 286-5276 or email@example.com.
Insights Telecommunications is brought to you by Comcast Business
Installing the redundancy measures necessary to make sure company data is available 24/7, regardless of calamity, is prohibitively expensive and requires a great deal of know-how, which is why many organizations outsource their data protection to companies that are specialized to guard it.
“We live in an age where data has a critical role in our lives on a daily basis. Losing access to that data, whether from being knocked offline or because of a catastrophe, can be terminally disruptive, so having backup systems in place is critical,” says Pervez Delawalla, president and CEO of Net2EZ.
Specialized data centers are dedicated buildings constructed to house server equipment that hold data — business or personal, critical or otherwise. They are designed for redundancy in physical functions, such as power and cooling, as well as network redundancy to keep data available to its customers. But what separates one from another?
Smart Business spoke with Delawalla about how to grade data centers to ensure you find one that offers the best protection for your most valuable commodity, your data.
What are the differences between data centers?
The biggest misconception is that all data centers are built the same, which leads many to ask the question, ‘Why would I pay more for one when I could get it cheaper down the street?’ The answer lies partly in Tier rating.
What is Tier rating?
Tiers represent the availability of your data based on the probabilities of system failures in a given year. Tier 1 guarantees 99.67 percent data availability in a year. Tier 4 is 99.995 percent availability. These percentages are based on the life expectancy of equipment such as power and cooling systems and distribution panels.
So that 99.67 percent represented by Tier 1 equates to, in any given year, 29 hours that systems could be offline and data inaccessible. While that might not sound like much, if you’re doing the volume of online business Amazon does, you can’t afford that. In instances where customers are trying to get to your site nearly every minute of the day, it needs to be up all the time to accommodate them, so you need the maximum level of redundancy for protection.
Tier 4 data centers, on the other hand, guarantee a maximum of roughly 26 minutes offline in any given year. The percentage differences, measured in tenths of a point, may seem negligible, but they account for a big difference when your data is affected.
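The arithmetic behind these figures is straightforward: the allowed annual downtime is simply the unavailable fraction of a year. A quick sketch using the two Tier percentages cited above (exact guarantees vary by facility and contract):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def annual_downtime_minutes(availability_pct):
    """Minutes per year a system at this availability may be offline."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# Tier availability figures cited above.
for tier, pct in [("Tier 1", 99.67), ("Tier 4", 99.995)]:
    minutes = annual_downtime_minutes(pct)
    print(f"{tier}: {pct}% available -> {minutes:,.0f} minutes "
          f"({minutes / 60:.1f} hours) of downtime per year")
```

Running this reproduces the roughly 29 hours for Tier 1; at 99.995 percent availability, Tier 4 works out to about 26 minutes of potential downtime per year.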
How reliable is a Tier rating?
Data centers can have their Tier rating certified by a third party. Certification bodies include the Uptime Institute, as well as more traditional auditing firms such as Deloitte and Ernst & Young, which have technology arms capable of making an assessment. There’s also SSAE 16 certification for service organizations, which is used for reporting on controls.
How can companies ensure they have the highest level of data protection?
There are different methods for achieving redundancy. For instance, you could employ multiple Tier 1 data centers that fail over to each other, but that can be expensive. It might make more sense to use two Tier 4 data centers, one of which serves as a geographic redundancy — it should be located a great distance from your main office and your primary data center to guard against failure caused by natural disasters, such as earthquakes.
What else should companies ask?
Make sure you’re aware of a data center’s redundancy for its network — the physical fiber that comes through the building — and how it interconnects with the rest of the network and Internet exchange points.
Also consider the support environment. Not all centers have 24/7 on-site engineering support to take care of the back of the house, such as the generators. While customers often overlook it, it’s critically important to have someone physically monitoring those systems and on hand to react to any major outages or prolonged system failures. Similarly, it’s great to have engineering and technical support on the server and router side of it to work directly with customers.
Pervez Delawalla is president and CEO at Net2EZ. Reach him at (310) 426-6700 or firstname.lastname@example.org.
Insights Technology is brought to you by Net2EZ
Voice over Internet Protocol (VoIP) is changing the way businesses communicate. By converging traditional voice and data services on a single platform, VoIP lowers operating costs and provides greater efficiencies than traditional phone systems.
A good VoIP provider can build a customized system to meet your needs and is willing to let you test new features to see if they make sense for your business.
“In the world of VoIP it is easy to try something on a trial basis to see if it will work for your organization,” says Alex Desberg, sales and marketing director at Ohio.net. “If your provider is unwilling to let you kick the tires without a long-term contract you might want to look at finding a new provider.”
Smart Business spoke with Desberg about VoIP, the importance of gathering employee feedback and the dangers of choosing flash over function.
How are companies wasting time and money using traditional phone systems?
We often hear clients utter the phrase, ‘We would like one throat to choke,’ meaning it would be nice to have one service provider handle everything. When you have multiple providers for services that work together like the Internet and phone, a lot of time can be wasted trying to track down the right person if a problem arises. We’ve also found that many organizations have taken on the responsibility of managing their phone system themselves. Because they are not experts in the field they tend to Band-Aid problems rather than having a telephone professional properly address options for improved customer service and long-term efficiencies.
How should a phone system serve a company and its customers?
A phone system must be a conduit of communication. It should be designed to deliver the customer to the solution they need. Any complication, ranging from difficulty dialing the number to being unable to speak to the person they are seeking, adversely impacts a customer’s experience. Whether it is a retail customer, a professional services company or a manufacturer, the idea is there should be one-call closure. Hosted phone systems have the ability to deliver the customer directly to what they need, if engineered properly.
Why should the correct personnel make decisions about hosted phone systems?
The role of managing phone systems is falling on two people these days: the office manager and the IT professional. The office manager doesn’t necessarily know much about technology, but they know how the business operates. IT professionals know how data works and what type of technologies work for a business, but they might not know how to apply phone technologies. By interfacing with office managers and IT personnel, we can quickly learn what works best for the organization and which features should be added to a new phone system.
If your customers are not reaching the right people on a regular basis it’s important to investigate. For example, I was with a prospective customer the other day and we talked for an hour about the advantages of changing their phone system. After the meeting I asked the receptionist about her thoughts. I discovered that if a customer called on the company’s second line, all of the lines rang busy. Nobody at the top level realized this because they never solicited information from their employees. Often businesses have meetings about cash flow and other financial principles, but they forget about discussing operations.
How important a role does technology play in communications?
It’s not necessarily the technology that’s important, it’s the function. You could have the most feature-rich, complicated phone system on the face of the earth, but if it doesn’t serve the needs of your customer then it has no value. There are a lot of bells and whistles and whiz-bang technologies out there that might not help your business. When choosing the right technology, start with what the company needs. It’s important that your provider uses a consumer-centric approach. After all, it’s impossible to tell a business what they should buy without knowing what they need.
What is sustainable IT? As many companies deploy sustainability strategies aimed at improving energy efficiency, preserving natural resources and lowering operating costs, IT is becoming a major part of this initiative.
“Over the past few years, many companies have focused on LEED Certification and sustainable design methods to achieve these objectives,” says Rich Garrison, Senior Principal at Alfa Tech. “Now, IT organizations are continually pressed to deliver more by way of applications and content, while being asked to lower capital and expense costs. This is combined with the fact that for many organizations, IT is one of the largest consumers of energy and natural resources to operate data centers, labs and office environments.”
Smart Business spoke with Garrison about the impacts of sustainable IT in today’s business world.
How does sustainable IT work?
Information technology leaders are turning to sustainability-focused initiatives to reduce costs and align with corporate sustainability strategies. Sustainable IT is simply the process of planning, designing, and implementing technologies that improve efficiency and reduce environmental impact.
How are virtualization and cloud technology impacting sustainable IT?
Sustainable IT examples include the wide adoption of virtualization technologies intended to reduce the number of physical servers and increase the utilization of these hardware assets. The next level beyond virtualization is the adoption of private or public cloud service offerings, which allows companies to utilize computing hardware, often hosted by third-party service providers. The primary objective is to have ‘just in time’ capacity, improved reliability and more predictable costs. While cloud is not for everyone, the adoption rate is high and on the radar for most IT professionals.
What role can Wi-Fi play with sustainable IT?
Another technology having a significant impact is wireless, or Wi-Fi. In today’s workplace, employees have an average of three wireless devices each. With the adoption of smartphones, tablets and laptops, some analysts predict conventional desktop workstations will be obsolete within the next five years. This adoption of wireless devices in the workplace, combined with the evolution of a more collaborative workspace, means there’s a demand for more reliable wireless networks with adequate bandwidth.
With new or remodeled facilities, it’s important to weigh the impact of architectural considerations on the performance of wireless technologies. The selection of materials and the building’s physical layout can significantly affect the wireless network’s performance. Predictive tools can help design a wireless solution during the building project’s design phase and eliminate potential issues in advance. As companies adopt wireless solutions, it also creates an opportunity to reduce and, in some cases, eliminate traditional structured cabling systems. From a sustainability perspective, this has major advantages when eliminating the use of copper material.
How are building monitoring and automation systems influencing sustainable IT?
As facilities organizations focus on building management systems (BMS) and building automation systems (BAS) to optimize the use of lighting, HVAC and other energy-consuming resources, these BMS and BAS solutions are becoming more advanced and sophisticated. The products are becoming more network-enabled and require a more advanced and reliable network infrastructure to support them.
For many companies, building management and control systems have become critical applications requiring the same level of support as traditional business applications. As part of the IT sustainability strategy, IT organizations also are leveraging these systems to monitor and trend their consumption of power and energy efficiency. There’s a significant need to engage IT professionals earlier when designing buildings or data center facilities to provide input or solutions for a well-architected network capable of supporting these building systems and applications.
While other IT-related sustainability initiatives can be considered, virtualization, wireless and BMS are greatly impacting both IT organizations and facility planning.
Rich Garrison is Senior Principal at Alfa Tech. Reach him at (408) 487-1209 or email@example.com.
Insights Technology is brought to you by Alfa Tech
Instead of merely maintaining technology, IT service providers are increasingly being asked to become technology drivers and bring innovation and new product ideas to their clients.
Cliff Justice, author of the report, “The Death of Outsourcing,” was recently interviewed by CIO.com and stated that there was a shift around 2006 to 2007 from outsourcing as a commodity focused on price to a service that’s value-oriented.
“We’re clearly seeing this shift,” says Deen Ferrell, business development executive at Cal Net Technology Group. “Clients want an insourcing partner today. Insourcing requires a broader talent pool, one that offers skill sets in all areas where technology touches the organization. Ongoing research passes critical intelligence of emerging technologies to the field so it can be applied to benefit the client. The focus is on best practices that better integrate technology platforms into a working strategy that drives profit while reducing redundancy and cost.
“The good news is that this shift represents a stronger commitment from the service provider sector. Providers are now expected to add value beyond ground-level support,” Ferrell says.
Smart Business spoke with Ferrell about the trend toward IT providers who help move business goals forward and the benefits to businesses from advancements such as cloud computing and unified communications.
Why is the shift toward insourcing occurring?
Mom-and-pop shops aren’t getting the job done because of a lack of depth and bandwidth — the proverbial ‘can’t see the forest for the trees.’ Providers get so focused on dealing with immediate issues that they can’t step back and think strategically.
A successful insourcing partner impacts all areas where technology touches the organization, as well as providing standard IT support and maintenance that allows companies to maintain core efficiencies.
What has been the impact on information security?
Retaining and securing sensitive information is a critical component of IT services. In a global marketplace where information is king, an ongoing managed security strategy can give organizations peace of mind related to risk management, security assessment, compliance issues and gap analysis.
What is meant by unified communications?
A unified communications strategy allows information to flow seamlessly through an organization by using tools such as voice over Internet protocol (VoIP), video conferencing, mobility solutions such as iPhone, iPad and tablet integration, and call center functionality such as call recording and reporting.
How can cloud consulting benefit companies?
For cloud solutions to deliver on their promise of reducing cost and risk while improving competitive advantage, they must be viable, supportable and secure. Vendor research and management, with an understanding of hosted offerings such as Office 365, infrastructure as a service (IaaS) and software as a service (SaaS), are critical to helping the organization realize cloud potential while avoiding pitfalls.
How does insourcing promote innovation?
An innovative environment is one where workflow automation and collaborative computing free up valuable time and provide on-demand access to critical information through dashboards, scheduling and customer portals.
Insourcing partners are providing supplemental chief information officer services such as documentation, change management and vendor relations support, which allow companies to cut waste, streamline processes and better position themselves competitively.
With the insourcing crowd becoming increasingly innovative, cost-conscious and competitive, it appears that the outsourcing model is on its way out.
Deen Ferrell is a business development executive at Cal Net Technology Group. Reach him at (818) 725-5062 or firstname.lastname@example.org.
For information on the benefits of insourcing to Cal Net, visit http://www.calnettech.com/ourservices_OngoingSupport_InsourceBenefits.php.
Insights Technology is brought to you by Cal Net Technology Group
In today’s world, few things change as quickly as technology. Add to this the fact that technology change is usually toward greater complexity, and it becomes easy to see why some executives throw up their hands in exasperation when attempting to manage technology. Technology, however, is a key driver in execution and in maintaining your company’s competitive advantage — it can’t be ignored or delegated.
“One of the keys to managing technology is to not lose sight of the fact that it is a means to an end, not an end itself,” says Kirk O’Hara, vice president, consulting services at Executive Career Services.
“Executives need to understand the essential purpose of technology in their business, be able to incorporate it into their strategic plan and know how to easily and efficiently adapt new technology into business systems and operations,” he says.
Smart Business spoke with O’Hara about what executives need to know about integrating technology into their companies.
What should executives understand about technology and using it to execute business functions?
Leveraging technology starts with an understanding of how it can be used as a strategic resource. Every strategic plan should have a section devoted to technology and its role in driving the mission. This means that the IT department needs to be integrated into the company’s mission and not seen as an ad hoc department to go to when there are problems. In this respect, IT can be seen as going through the same sort of transformation that human resources did a couple of decades ago. Prior to that, HR was typically called ‘personnel’ and was seen as a necessary evil to avoid problems. Today, HR is viewed as a valuable strategic partner and talent management is a major concern of most executives. It is time for IT to be elevated to the same position.
Most executives do not need to get into the details of how technology works, but they should be familiar with the basic input, throughput, output cycle. For example, what data need to be collected for the input of business systems such as accounting, inventory control and customer relationship management? Remember the IT adage ‘GIGO’ — garbage in, garbage out. Collecting the data necessary to run a business is essential to maintaining a strategic advantage.
How involved should executives be with a company’s technology?
Executives should be intricately involved in the output. What reports are needed to properly manage cash flow, maintain optimal inventory levels and keep an eye on customer relationships? Part of the value of technology is that it can spew out a tremendous amount of information. In this regard, it is easy for executives to request too many reports and get lost in the information overload. The same can be said of business unit leaders and departmental managers. Monthly and quarterly reports accumulate over time and may never be used to make business decisions. Executives may want to try this simple technique. Occasionally discontinue a report and see if anyone notices it is missing. If no one complains, it is a safe bet that the report isn’t necessary.
Should a company make sure it has the latest hardware and software?
Throughput considerations will typically involve matters of technology, such as hardware and software upgrades. While it may seem wise to always have the latest and greatest technology, this isn’t always the case. Software updates often have bugs and new hardware may have higher failure rates. Unless your company is very technology dependent, it may be wise to put off updates until they have proven themselves in the business world, and then only when it is clear that the upgrades will have material benefit.
Leveraging technology isn’t all about systems. Executives also need to be sure that they are using personal technology efficiently and effectively. Smartphones and tablets are quickly replacing laptop PCs. Text messaging is replacing voicemail and email is a ubiquitous part of everyone’s work life. In addition to ensuring that technology is used as a strategic resource for the company, executives need to be sure that their personal use of technology is efficient.
How much should a company rely on technology to do business?
Above all, executives should ensure that face-to-face communication isn’t lost in the crush of today’s workload. In-person meetings are essential when forming new teams, creating and nurturing new relationships, and discussing areas that are emotionally laden or where intended messages can be easily misinterpreted. Email notes have their advantages, to be sure. They allow for a wide distribution where everyone receives the same message, and they serve as historical records documenting what was said.
Too many managers, however, try to manage through email, and this is poor technique. In particular, some executives will rely on an email note to convey a difficult message, for example, to address a conflict. A good executive will never opt to use email when a personal conversation is indicated.
Technology has pervaded — some will say invaded — virtually every aspect of our professional lives. We don’t need to get tangled up by it, however, if we keep the focus on how it can be used as a strategic advantage and never allow it to replace interpersonal interaction.
Still having trouble getting your head around technology? Find an IT liaison who speaks your language. After all, they are people, too.
Kirk O’Hara is a vice president of consulting services at Executive Career Services. Reach him at email@example.com.
Many organizations have in-house IT staff that has been around for a long time. However, if the organization has not invested in employee skills, there is a tendency for complacency and stagnation, says Lou Rabon, Cal Net Technology Group’s information security practice manager.
“This stagnation comes in the form of believing that solutions the in-house IT people are providing are the best ones out there based on their experience,” Rabon says. “For loyal IT staff, their experience is usually only in one environment, and if no new education or experience has been acquired, then an element of risk is introduced into the organization. Not only will the organization be getting outdated and inadequate service and solutions, but the risk introduced may prove to be fatal to an organization’s data, as well.”
Smart Business spoke with Rabon about how to spot IT staff stagnation and what steps to take to address the problem.
How critical is the need to update IT skills?
Information technology experiences paradigm changes over very short periods of time. New, disruptive technologies are appearing all of the time, sometimes in as little as months. In information security, this trend is even faster, where minutes and seconds can separate effective solutions from completely inadequate, and expensive, defenses.
What are signs that IT staff might have stagnated?
If your IT person has been doing the same thing since 2007, you can be assured that there are going to be problems. Large and small companies should take stock and ask:
• Does current IT staff/policy favor convenience over security?
• Are there direct remote connections to machines because a virtual private network or remote access solution was considered too complicated or not possible?
• Are there passwords that are not complex or do not change?
• Do easy-to-remember — and therefore easily crackable — administrative passwords exist that have access to sensitive data?
• Is there a lack of visibility on the network?
• When problems occur, is root cause rarely determined and downtime frequent?
• Is there resistance to change?
• Are overly technical and confusing answers given when approached for advice or questions?
These are just some of the more obvious ways to determine if your current IT staff might need a knowledge refreshment or replacement. Unfortunately, most internal IT staff will believe everything is being done right, despite evidence to the contrary. This is what psychologists call the Dunning-Kruger effect, ‘in which unskilled individuals suffer from illusory superiority, mistakenly rating their ability much higher than average.’
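The password items on the checklist above are the easiest to automate. Below is a minimal, hypothetical sketch of a complexity check — length plus character-class rules — intended only to illustrate the idea; a real policy should follow your organization’s own standards and change-management process.

```python
import re

def is_complex(password, min_length=12):
    """Check length plus upper/lower/digit/symbol character classes.
    Illustrative only; real password policies vary by organization."""
    rules = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password) is not None,
        re.search(r"[a-z]", password) is not None,
        re.search(r"[0-9]", password) is not None,
        re.search(r"[^A-Za-z0-9]", password) is not None,
    ]
    return all(rules)

print(is_complex("admin123"))          # short, no upper case or symbol: fails
print(is_complex("T3leph0ne!Closet"))  # passes all five rules
```

A check like this can be run against administrative accounts during an audit to flag the easy-to-remember, easily crackable passwords the checklist warns about.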
What steps can be taken to address this problem?
The first might be to look at how staff is managed. Maybe the reporting structure should be changed. In many growing organizations, IT will typically be CFO-led. Ideally, IT staff should fall under a COO or, better yet, a dedicated CIO who can look at the big picture of where an organization is headed and drive this strategy.
Another option is training. Incompetence of any staff might be a failing of the organization itself to properly invest in its work force. Picking the right training can be a challenge, but there are a number of solutions. Vendor training is an option and can typically be obtained at a reasonable cost, especially if the organization has used one vendor’s technology over a long time and can leverage that loyalty for a reduced training cost. New vendors also can be looked at to displace existing technology, and they may throw in training as part of a purchased bundle. Many specialty organizations offer training such as CompTIA A+. For security, the SANS Institute has an excellent Security Essentials Boot Camp, which can start to embed some of the basic tenets of security for any staff working with sensitive information or information technology. Finally, continuing education at a local university, and even some of the free courses released by institutions such as Stanford, might be a good way to stimulate critical thinking and encourage the staff to refresh its skills.
Another solution, which could be the easiest, is to augment the staff with outside talent. Bringing in an outside consulting firm can give an internal IT department a kick in the pants. Personnel will respond differently to this, with some seeing it as a threat and others embracing the help. Both perceptions can be helpful. An outside firm will help you navigate the technology, but more importantly, a good outside firm will help you identify who in the organization you should keep and who should go.
What about outsourcing all IT work?
Some organizations are much better off going in this direction, depending on what internal resources are available. IT, in and of itself, is a business, and, if you’re a small to mid-sized company, you might want to ask yourself, ‘What business am I in?’ For those organizations that prefer to concentrate on their core competency, outsourcing is a great solution. Doing so can help dramatically reduce costs, increase efficiency and productivity, and increase the security posture of an organization. A good IT outsourcing company is continually investing in its team, and because it sees many different IT environments, it is in a unique position to see what works best and provide those best practices to its clients.
Risk in any organization must be managed and mitigated as much as possible. Continuing to employ or engage unskilled or inadequate IT resources introduces an unacceptable level of risk. Your first step is to take a hard look at your organization, and evaluate whether or not you need to invest in IT skills or bring in external resources to best manage the information assets of the organization.
Lou Rabon is information security practice manager for Cal Net Technology Group. Reach him at (818) 721-4414 or firstname.lastname@example.org.
When hiring a member of the IT team, weeding through all of the candidates out there is a tremendous challenge. Particularly if you are a smaller organization, it is likely that a non-technical person is doing the interviewing. In that case, it is very difficult to determine whether or not the person you are talking to actually knows their stuff. Even someone with a very technical background can be fooled by an impressive resume and a smooth talker.
“IT people are weird. I should know — I’m one of them,” says Zack Schuler, founder and CEO of Cal Net Technology Group. “They are the hardest to hire and even harder to retain, and are sometimes hard to fire, as many of them make themselves indispensable as they convince management that their skills are unique. Many of them have technical egos that are larger than life.
“At Cal Net, we have roughly 35 talented IT engineers that we had to hire, train and retain. And we’ve had to let some go over the years. We would like to think that we have this down to a science.”
Smart Business spoke with Schuler about the best process for hiring and retaining the right IT people.
Should IT people be interviewed differently than other potential hires?
As with any position, you should screen for personality traits. An egocentric IT person is the last person you want on your team. Some interviewers are naturally talented at sniffing this out; for others, I would recommend a personality profile. In my opinion, personality is more than 50 percent of what you should be screening for.
Another of the most important traits is good communication skills. We have all experienced the IT guy who wants to sit in a closet somewhere to minimize his contact with humans. If they do make end-user contact, it is usually a painful experience, as they will say the least amount possible so that they can head back to their cave. You should have the expectation that your IT person will be able to communicate as effectively as anyone else in the organization.
How should a company screen an IT person?
Start with, ‘Tell me about your IT environment at home.’ If they give you an answer along the lines of ‘I have three physical servers, running seven VMs for testing, and I’ve got my own mail server running Exchange, and I’m running VDI for my primary workstation,’ then that is a good first step. They view this as their ‘sandbox.’ If they respond, ‘I’ve got a laptop at home and I try to stay away from the computer as I get enough of it at work,’ then they probably aren’t a good technical fit. You want your IT folks to be passionate about technology, and most of them do their best research and learning at home, after hours.
The second easy way to screen is to have a short technical quiz that can be administered by anyone. Feel free to email me for our quiz.
Last, and perhaps the most time-consuming and difficult process, is to put them through a technical lab. We require that our new hires come in and build a network in an eight-hour time period. We have a point system that scores the candidate, as no one ever finishes the lab. This gives us an excellent assessment as to what they do know, and what it is that they need help with. Depending on what you are looking for, there are companies that will administer these sorts of labs for you. If you are testing on Microsoft infrastructure skills, we can administer this sort of lab.
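A point system like the one described can be as simple as a weighted task list. Here is a hypothetical sketch; the tasks and weights below are invented for illustration and are not Cal Net's actual lab:

```python
# Illustrative point system for a hands-on hiring lab. Tasks and point
# values are made up; no candidate is expected to complete every task.
TASKS = {
    "install server OS": 10,
    "join workstation to domain": 10,
    "configure DHCP and DNS": 15,
    "set up e-mail accounts": 15,
    "restore a file from backup": 20,
}

def score(completed):
    """Total points for the tasks a candidate finished in the time limit."""
    return sum(TASKS[t] for t in completed)

print(score(["install server OS", "configure DHCP and DNS"]))  # 25 of 70
```

Scoring partial completion, rather than pass/fail, is what lets the lab show both what a candidate knows and where they would need help.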
What are some of the challenges of retaining IT people?
In general, IT people are motivated by advancement and the quest for knowledge. In organizations where there isn’t any room to move up nor is there anything new to learn, IT people will stagnate and usually move on.
Good IT people are always looking to explore and learn the latest and greatest technologies. Just as they have a sandbox at home, they want to work for an organization that invests in IT and gives them an opportunity to learn.
Good IT people are also looking to move up the food chain. While some IT folks are motivated heavily by pay, many are more motivated by an increase in title and responsibility.
How can these challenges be overcome?
Quenching the IT person’s quest for knowledge isn’t always the easiest thing to do. There are two ways to attack this. First of all, if you hire someone who is a master of all of the technologies that you are currently running, you’ll get someone who can hit the ground running, but you will also get someone who becomes bored quickly. On the other hand, if you hire someone with like experience and aptitude, but not exact experience in the technologies you are running, you will give someone an opportunity to learn. You will obviously have to weigh the business risk in doing this — and while they are learning you may want to supplement their skills with a consultant — but it can be well worth it in the long run. In short, I recommend slightly ‘under-hiring’ for the position.
The second way to attack this is to give your IT person some latitude when it comes to decision-making. If they want to implement a new technology that is reasonable from a cost standpoint, and delivers business value, I would err on the side of letting them do it. Even small concessions can give your IT person a sense of worth and something new to learn.
Last, in terms of advancement, don’t ‘over-title’ a person. Don’t call your lone IT person ‘IT director’ right away. Create a career path: network administrator, senior network administrator, IT manager, IT director and so on. Even very large IT organizations should be using this model. Look for increases in responsibility along the way, along with small increases in pay. Thinking out a career path before you hire someone will go a long way in making sure that they hang around for a long time.
Zack Schuler is the founder and CEO of Cal Net Technology Group. Reach him at ZSchuler@CalNetTech.com.
Insights Technology is brought to you by Cal Net Technology Group
Network reliability is vital for any business. With so many systems and departments dependent on your company’s network, it’s essential that your systems are up 100 percent of the time.
As most of us already know, network outages can potentially cost a company thousands and, in some cases, millions of dollars. One way to prevent outages is by doing a proper network assessment and finding out where your network’s weaknesses are.
A company’s network architecture includes hardware, software, connectivity, communication protocols and the mode of transmission, such as wired or wireless. You need to assess your network architecture routinely to ensure that everything is current and in line with your ever-changing business model, says Mark Giles, wireless design engineer at PowerNet Global. If not, you’ll want to begin integrating changes to ensure your network is running efficiently. Conducting an assessment also allows you to see if your company’s security has been compromised, allowing you to fix any problems and prevent these breaches from happening in the future, he says.
“When you go through an assessment, you end up with good documentation and can find where your weak spots are,” says Giles. “A lot of companies have single points of failure, meaning there’s no redundancy if a portion of their network fails. A network assessment can help identify those single points of failure so that a plan can be put into place to fix these issues.”
Smart Business spoke with Giles about what you need to understand about a network architecture assessment and how to implement any changes.
What is involved in a network architecture assessment?
It starts with a site survey done by a network engineer or network consultant who will look at existing drawings and documentation of how the network is set up. Nine times out of 10, companies don’t have documentation or their documentation is outdated. This means the first step in the assessment will be to map out how the network is currently run.
Next, your network engineer or consultant will look at the current hardware and bandwidth utilization to see if your circuits are overloaded or your hardware is maxed out. Then they’ll review your routing to see if that’s being optimized and how your configuration is set up. This will help determine whether you need to upgrade or just optimize how traffic is flowing and configure your equipment accordingly.
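The utilization check described above boils down to simple arithmetic on interface byte counters. A minimal sketch, in which the function name and the counter source (e.g., SNMP interface octet counters) are illustrative assumptions rather than any specific product:

```python
def utilization_pct(bytes_start, bytes_end, interval_s, link_bps):
    """Percent utilization of a link over one polling interval.

    bytes_start / bytes_end: interface octet counters at the start and
    end of the interval (for example, from periodic SNMP polls).
    link_bps: the circuit's rated capacity in bits per second.
    """
    bits_moved = (bytes_end - bytes_start) * 8
    return 100.0 * bits_moved / (interval_s * link_bps)

# A 100 Mbps circuit that moved 3.75 GB in 5 minutes is saturated:
pct = utilization_pct(0, 3_750_000_000, 300, 100_000_000)
print(f"{pct:.0f}% utilized")
```

Trending these percentages over time is what tells the engineer whether a circuit is genuinely overloaded or just spiking.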
Often, it’s not that you need new hardware or circuits; it’s that your current equipment needs to be configured more efficiently. This is where many companies fail to do a proper network assessment. They will pay top dollar for hardware but go cheap on the person configuring and maintaining it. You could have the best hardware possible, but if the person configuring it has little to no experience, it will end up costing you more money in the long run.
What are some key items business leaders need to understand about their network architecture and implementing a plan?
A lot of it comes down to what your benefits will be and the costs associated with them to determine if it’s going to be worthwhile. If you’re going to be upgrading a piece of equipment, you need to understand why you are upgrading it and if the cost outweighs the benefit.
During implementation, you’re also watching for service-impacting changes while ensuring everything is designed well and your implementation plan is solid. You should ask whether the network changes are going to affect service or customers. That’s the big one people want to know: Who will be affected? Is there going to be an outage when you’re implementing or upgrading? How long will that outage be?
Who should be involved in the network architecture assessment and what are the costs?
Your network engineer or consultant should be the one doing the assessment and it should be conducted any time you’re coming into a new business environment or making changes. Then every six months to a year, depending on how rapidly your network is changing, you’ll want to go through it again. Check everything within your network and make sure the drawings and documentation are current. You’d be surprised how quickly things can change and become outdated.
The cost of an assessment depends on the size of your network and the accuracy of your documentation. It also depends on what you are looking to do. If you need new equipment, it might be more expensive than updating drawings. If you hire a consultant to run the assessment, the cost will typically range anywhere from $125 to $200 an hour.
How often should you implement changes to your network architecture and how should this be accomplished?
You should never stop making changes to your network; you should always try to improve it. According to Cisco’s network lifecycle model, you need to prepare your network, develop a plan to assess your company’s readiness to support any changes and create a detailed design to address any technical and business requirements. Then, implement any new technology; operate and maintain the most up-to-date network systems on a day-to-day basis; and optimize your network by making ongoing improvements to ensure that you have the most efficient network running.
Once you find you’re at the optimization stage, go back to step one. You need to go through this process continuously to make sure your network is up to date and running efficiently.
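The continuous cycle described above can be sketched as a simple loop; the phase names follow the lifecycle the text lays out (prepare, plan, design, implement, operate, optimize), with optimization feeding back into preparation:

```python
# The network lifecycle as a repeating cycle: the point is that there is
# no terminal state, only a return to the start.
PHASES = ["prepare", "plan", "design", "implement", "operate", "optimize"]

def next_phase(current):
    """Advance one step; after 'optimize', the cycle restarts at 'prepare'."""
    i = PHASES.index(current)
    return PHASES[(i + 1) % len(PHASES)]

print(next_phase("operate"))   # optimize
print(next_phase("optimize"))  # prepare
```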
Mark Giles is the wireless design engineer with PowerNet Global. Reach him at (866) 764-7329.
Insights Technology is brought to you by PowerNet Global
You may not want to think about it, but it’s bound to happen sooner or later: turnover in your IT department.
“Not a day goes by where we don’t receive an emergency phone call from a frantic executive with a story that we hear time and time again, ‘My IT guy has just quit, and he has all of our passwords, and we can’t do anything without him,’” says Zack Schuler, founder and CEO of Cal Net Technology Group.
Many companies don’t plan for this sort of exit, though it is inevitable for every company at some point or another. It is safe to say that no one stays with a company forever, and when IT people leave, it can be especially painful.
Smart Business spoke to Schuler about how to put the proper backups and protocol in place to keep operations running smoothly even after the departure of trusted IT personnel.
What protective measures can businesses take to be ready for the departure of a key IT person?
1) Insist that your IT folks provide you with the administrator credentials and all other passwords in their possession. There is nothing worse than an IT person leaving and not being forthcoming with password information. If you make this a requirement early, and ask for any changes often, you shouldn’t have an issue getting the information that you need. There are pieces of software you can buy to securely store your passwords and give two or more people access. The key here is making sure that there isn’t one person who has the ‘keys to the kingdom.’
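One way to guarantee that no single person holds the keys to the kingdom is to split each stored secret so that two people must cooperate to recover it. In practice you would use a commercial password vault with role-based access; the XOR secret-splitting below is only a minimal illustration of the underlying principle:

```python
import secrets

def split_secret(secret: bytes):
    """Split a secret into two shares; either share alone reveals nothing."""
    share_a = secrets.token_bytes(len(secret))          # pure random noise
    share_b = bytes(x ^ y for x, y in zip(share_a, secret))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the original secret."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

a, b = split_secret(b"admin-password")
assert recombine(a, b) == b"admin-password"
```

Give one share to each of two custodians and neither can act alone, which is exactly the property you want for administrator credentials.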
2) Your IT team should provide you with complete and comprehensive network and systems documentation. I could fill up this article with the list of everything that should be documented, but let’s leave it simple and say that everything related to IT that has a power cord should be documented. Also, it is not good enough to document it once and then walk away, but a routine and methodical process of having it updated, at least quarterly, is a critical step. IT changes quickly, so you always want to have up-to-date documentation.
For some companies, this will be hard to get. Many companies have asked this of their IT folks, and it hasn’t been produced. Why? Most of the time, the pushback from IT is, ‘I have other, more pressing issues that get brought to my attention every day, and documentation always gets put on the back burner.’ One tip we’ve used here is to ask the IT folks to come in on the weekend (and offer to pay them if they are hourly, which they likely are, or at least should be) in order to get documentation done, uninterrupted. It doesn’t take that long once they get into the groove. If IT still pushes back, hire a company to come in and do the documentation for you. You’ll get it done, and you’ll have the benefit of an audit of your IT person’s work.
Once this is done, and done well, if the IT person leaves, it is a lot easier to have someone jump into their shoes and take over quickly.
3) Do your best to ensure that your IT people are cross-trained to the fullest extent possible. If you put a serious cross-training program in place, it may save you in the long run. It also gives you the opportunity to feel like you are not tied to a ball and chain with any one IT person, and it makes them replaceable, if need be.
4) Develop a ‘lock out’ procedure. In the event that an IT person leaves, or is asked to leave, it is important to have a lock out procedure documented, and a plan in place to execute it. As soon as or just before the person is out the door, you should disable their user account and wipe their cell phone, if it is company property. Also, many times it is wise to have the user community reset their passwords, as, in some organizations, the IT guy had access to those as well. An exit agreement drafted by your attorney that lets them know that they are to give back any confidential information is advisable as well.
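Because a lock out happens under pressure, the procedure lends itself to a scripted checklist so no step is missed. A hypothetical sketch in which every action is a placeholder for a call into your own directory, e-mail or mobile-device-management tooling (none of these names refer to a real product):

```python
# Hypothetical lock-out runner: execute each step in order, log the result,
# and keep going even if one step fails so the rest still get done.
def run_lockout(user, actions, log):
    completed = []
    for name, action in actions:
        try:
            action(user)
            log.append(f"OK: {name} for {user}")
            completed.append(name)
        except Exception as exc:
            log.append(f"FAILED: {name} for {user}: {exc}")
    return completed

# Each lambda stands in for real tooling (AD disable, MDM wipe, etc.).
actions = [
    ("disable user account",  lambda u: None),
    ("wipe company phone",    lambda u: None),
    ("force password resets", lambda u: None),
]
log = []
run_lockout("departing.it.admin", actions, log)
```

The log doubles as the evidence your attorney may want that the exit procedure was actually followed.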
5) Hire an outside firm to be your backup. One of the duties that we fill for many of our clients is the role of backup IT provider. Most of our clients have an in-house IT staff, and we work with their staff on issues that they don’t have the skill sets to tackle themselves, or in areas where there is simply more demand than supply. Many of our clients hire us to help out, with the secondary benefit of being able to rely on us should an IT person quit or be let go. We are able to fill in for that person with minimal interruption because we’ve become familiar with the environment. Sometimes the company realizes that just part-time consulting work is all that they need, and other times we continue to work full time until they’ve backfilled us with a new resource, who we then train. Having a backup IT provider can be a very smart move.
It’s not always well received when the backup IT provider is brought to the table, as internal IT usually feels threatened. That being said, in almost every case, we work alongside that person well, and they get to understand our value. In many cases, we become the reason that the IT person is able to go on vacation, as we become his or her trusted resource. We want to become the IT person’s trusted resource, as well as the executives’ trusted resource, should the employment relationship go awry.
In short, protecting your IT environment means making sure that you have control over it. Nobody ever got fired for being prepared.
Zack Schuler is the founder and CEO of Cal Net Technology Group. Reach him at ZSchuler@CalNetTech.com.
Executives are jumping on the outsourcing bandwagon as cloud service providers promise unlimited scalability, reduced expenditures for hardware and IT staff, and the ability to offload software and routine maintenance at a moment’s notice.
In fact, Gartner analysts predict that 35 percent of enterprise IT expenditures will be managed outside the IT department’s budget by 2015.
But overzealous executives eager to jump to the cloud may encounter security issues down the road, as the security practices of the cloud service provider are often unclear — up to and including where the data is stored. A survey by Symantec shows that only 27 percent of companies have set procedures to approve cloud applications that use sensitive or confidential information.
“It’s easy to deploy data and applications to the cloud, but most executives don’t have a handle on the true risks associated with those decisions. So they fail to build the proper assurances into the procurement process,” says Brian Thomas, IT advisory services partner for Weaver.
Smart Business spoke with Thomas about the risks of outsourced computing services and why companies should seek an auditor’s assurance during the procurement process.
What are the specific risks associated with the cloud and outsourced computing?
Possible issues include data integrity, confidentiality, privacy and security, system availability and reliability, and data retention and ownership. But the threat level and mitigation strategies vary depending upon the importance and sensitivity of the data being processed by the cloud service provider.
It may not matter if you can’t access your sales prospects for a few hours when your hosted CRM application goes down, but business would come to a halt if your hosted e-mail or e-commerce system crashes. In that case, the provider’s server redundancy and service-level contract guarantees may be the most critical risks to address, while in other cases, the primary concerns may be security and privacy issues. Certainly, regulated companies need to pay particular attention to how the cloud service provider addresses their regulatory risks.
How can executives identify outsourcing risks?
When considering cloud computing project ideas, executives should ask a lot of questions. First, they must understand the nature of the cloud services being procured and the sensitive aspects of the systems being hosted or managed by the provider. After getting an understanding of the types of data and systems that will be exposed to the cloud, executives should ask ‘what if’ questions of their project teams. Such questions should be focused on general risk areas including data integrity, confidentiality, privacy and security, and system availability and reliability.
Executives should also get an understanding of their company’s exposure to risks related to data ownership and retention. Examples of questions to ask include, ‘What will happen if we lose connectivity to our cloud service provider for an extended period of time?’ And, ‘What happens if our cloud service provider is acquired by another company?’
How can executives use an outside audit to ensure the performance of service providers?
A third-party assessment by a qualified professional is the only way to know whether a cloud service provider has designed and implemented effective measures to identify and mitigate relevant risks, as self-reporting is inadequate and providers may simply tell you what you want to hear.
You can save money by having your auditor review a cloud service provider’s service organization controls (SOC) report. There are three reports available under the AICPA’s standards for service providers. SOC 1 is based on the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) and is best suited for companies that previously used SAS 70 for Sarbanes-Oxley or financial audit compliance. SOC 2 addresses the design and operating effectiveness of a service organization’s controls over the security, availability, processing integrity, confidentiality and privacy of a system. This may be more valuable for executives evaluating the controls a cloud service provider has in place to address risks beyond those relating to financial reporting.
SOC 3 involves the same scope as SOC 2; however, the report contains less detail and is intended for broader (marketing) audiences.
When are SOC 2 and SOC 3 appropriate?
Executives should request that their cloud service providers submit a SOC 2 report where applicable. The scope is generally best suited to address the concerns of users of cloud services. SOC 2 reports provide details of the procedures executed by the auditor to test the controls in place at the cloud service provider, and the results of those procedures.
If a cloud service provider only has a SOC 3 report available, that may be sufficient for getting comfortable while evaluating the service provider during the procurement process. However, executives responsible for the cloud services should request that the service provider submit a SOC 2 going forward to ensure that they can monitor the provider’s efforts to address any failed control activities.
Are there other certifications that can help mitigate risk when transitioning to the cloud?
If the provider cannot provide a SOC 2 report, see if they are certified as ISO 27001 compliant or if they have obtained assurance reports from a security firm addressing the ISO 27001 standard. If the provider processes, stores or transmits credit card information, it is required to meet the Payment Card Industry’s Data Security Standard (PCI DSS). Be careful when using these other forms of assurance. Their scope is generally narrower than SOC reports and may follow less rigorous quality assurance standards. However, in the proper context, they can be useful for executives attempting to get information about the activities performed at the cloud service provider.
Brian Thomas is an IT advisory services partner at Weaver. Reach him at (713) 850-8787 or email@example.com.
Insights Accounting is brought to you by Weaver
As a company grows, its information technology (IT) needs to grow with it. But some areas may be overlooked in the day-to-day hustle of getting the job done, says Timothy A. Heikkila, a principal with the Skoda Minotti Technology Partners Group.
“Companies should be considering options such as the cloud, looking at the security of their data and setting up a disaster recovery plan,” says Heikkila. “An outside advisor can help you ask the right questions and identify areas of concern.”
Smart Business spoke with Heikkila about what IT issues growing businesses should be concerned about and how to address those issues.
What is the first IT issue that growing businesses should look at?
As a business’s IT needs grow, companies need to consider whether cloud computing makes sense. If you aren’t familiar with cloud computing, it’s essentially remote access to applications and services via the Internet; it gives you secure access to all your applications and data from any network device.
Would it be cost effective to take your company’s e-mail to the cloud so that you don’t have to worry about maintaining data at your own location? When considering questions like these, companies should weigh the pros and cons of taking that step. For instance, do you already have a location for your servers in-house? Are you going to have remote offices? Do you have a large traveling sales force? For a single-location office, the cloud may not be a beneficial or cost-effective step, but for a company with multiple locations or a traveling sales force, it could make perfect sense to house your data at a central location in the cloud so that everyone shares access.
How can an outside technology expert help determine your needs in the cloud?
Outside expert advice is definitely recommended because the industry is changing so quickly that the types of questions you need to ask and the way to ask them are changing daily. For example, does the cloud provider have multiple Internet connections coming in to eliminate service interruption? What is the cloud’s capacity? How much is your business going to be able to grow at your current facility without shortchanging yourself?
Security is another important area to ask about. A lot of data centers that house this equipment are having SOC Reports prepared to make sure they have the proper controls in place that ensure their data is secure and not at risk of being breached.
What other technologies should growing businesses be aware of?
We’re seeing a lot of mobility with the evolution of the iPad and other tablets. A sales force can really take advantage of those devices by using them to take notes, share presentations, adjust quotations on the fly, get signed quotes, and close deals on the spot. It benefits the sales team because they can be connected to the office immediately, respond to e-mail and get instant answers as if they were sitting at their desks in their office.
One area of concern around these devices that a company needs to consider is security. Companies need to make sure that they have a policy in place that protects the company’s data in the mobile hands of the employees. For example, companies should be able to lock down or control the devices should they get lost. If a salesperson accidentally leaves an iPad somewhere, the company needs to be able to erase all of the data on that device so that it doesn’t get into the wrong hands.
Most e-mail servers have controls built into them that allow you to send a signal wirelessly to devices to erase the data, but if you don’t have an e-mail server with that capability, you have to get a third-party, add-on product that can erase it wirelessly. Companies need to have a plan in place to cover these new and growing concerns.
What should businesses think about when considering a disaster recovery plan?
Disaster recovery is another area that can help a business grow, or at least ensure that it is not set back. As technology grows more complex, having a disaster recovery plan is becoming more vital, and planning for failure has become almost as important as investing in technology to grow your business.
A disaster recovery plan starts with sitting down to figure out what disasters your company should plan for, prevent, or recover from. For example, if you are OK with a tornado coming through your building and you don’t think it’s worth the investment to plan for a second, off-site location to back up your data, then you don’t need to plan for that event.
But if you want to prepare for a virus attack against your mail server because it’s critical to get that server up and running again, it’s a complex process. Businesses need to sit down, figure out what they want to plan for and determine the most critical pieces of technology that need to be up and running again if something should fail. Once the company determines which pieces are critical, the next question is how quickly each needs to be running again. For example, if you need to have your e-mail fully functional within two hours, you will need to have a standby e-mail server already built and ready to go.
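The prioritization exercise described above amounts to ranking systems by how quickly each must be restored, often called a recovery time objective (RTO). A minimal sketch with invented example values:

```python
# Each system's recovery time objective, in hours. Values are illustrative:
# a 2-hour RTO implies a standby server already built and ready to go.
rto_hours = {
    "e-mail server": 2,
    "file server": 8,
    "intranet site": 48,
}

# Recovery work is ordered by RTO: the tightest deadline gets attention first.
recovery_order = sorted(rto_hours, key=rto_hours.get)
print(recovery_order)  # ['e-mail server', 'file server', 'intranet site']
```

Writing the RTOs down forces the conversation the text recommends: which systems truly justify the cost of standby hardware.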
Too many companies understand that something could happen, but they put the blinders on and think that it won’t actually happen to them. There are a lot of things they can’t control, though, and that they may not have thought about. This is another area in which an outside technology expert can help. That person will know all of the questions that go into building a disaster recovery plan and make sure that plan can be executed if needed.
Timothy A. Heikkila is a principal with the Skoda Minotti Technology Partners Group. Reach him at firstname.lastname@example.org
Last month’s ii2P Insights article described how small and medium-sized businesses (SMB) are facing a “perfect storm” in terms of balancing costs and customer intimacy. This month, according to Steve Carter, president and CEO of ii2P, SMBs that have decided to take action should follow some tried-and-true guidelines.
“By clearly understanding the objectives for your enterprise, you can make certain that your implementation of an end user or customer self-service platform actually becomes the end users’ preferred method of receiving support,” he says.
Smart Business spoke with Carter about implementing a self-service platform and the benefits of providing value to end users.
What should be the first step in implementing an effective self-service platform?
If there is a single misstep that hurts a company more than any other, it is failing to get the setup right at the start. Most of the time, executives deliver the mandate for someone to implement a self-service solution, thinking that they understand the issues. Nothing could be more detrimental than starting out with the wrong calibration.
Companies need to understand the real objectives of self-service. It is not just about trimming costs. It is about creating a true change in human behavior, one that drives a more intimate relationship between the customer and the company.
The objective should be to attract and retain solid, powerful end user participation with the value that you are trying to extend. The objective should be about developing a lasting platform for customer intimacy.
What would be the next step?
Once the fundamental objective is established for implementing an effective self-service platform, then it’s time to determine the true opportunity for your customers to help themselves. Another frequent error is thinking that self-service is limited to helping users ‘fix’ their own problems, such as ‘how to’ questions, or to ‘find’ something. While these are certainly common and often easy to incorporate, that’s not the limit of effective self-service.
Quantifying the true level zero (self-service) opportunity is going to be more expansive than you typically first believe. Credit your smarter customer for that.
What do business owners need to include in their self-service platform?
Customers, especially in this day and time, are looking for self-service interactions that yield more value and independence. It’s becoming more of an environment of, ‘I want to track this,’ or ‘I want to compare these two products,’ or ‘I want to manage the entire buying or fulfillment process on my own.’
Along with the fixes and the finds, it makes great sense to consolidate many of the functional interfaces that your users are using today. A great example is expanding the IT self-service site to also serve as the gateway to other business functions, such as human resources, or information review (relevant news feeds).
Tying your customer-facing self-service site to your fulfillment tracking (such as FedEx or UPS shipping), albeit seemingly insignificant, is huge when it comes to adding value to the self-shopper.
Finally, it’s important to find a way to collect measurements of customer experience with your self-service site transactions. This correlation is going to be the most valuable information you can harvest. It will help drive ongoing improvement to the site.
What are some of the best-suited and easy-to-implement aspects of end user self-service solutions?
Avoid making the site too cluttered, but at the same time, there are some relatively common-sense elements to include. Certainly, have a strong search engine tied into a well-maintained knowledge base of solutions specifically created for self-service. One horrific mistake many companies make is placing a massive technical knowledge base in front of general purpose users and telling them, ‘Have at it!’ I call that ‘where angels fear to tread,’ and nothing disenchants a user more than that. It is intimidating, and many times users won’t return once they experience that.
Bring any enabling technology to the site, such as self-service password reset technologies, or the ability to create a service ticket, or check the status of an existing one. Users don’t want to have to call someone to do those simple things. Make that available.
Allow users to submit requests for common services, or even new information. One caution here — someone needs to monitor and respond to those requests. If users ever sense that no one is minding the store, they will quickly lose confidence in the site and revert to labor-intensive methods. It’s hard to regain their confidence at that point.
What is the most important thing about implementing self-service?
This is big: Don’t succumb to building a ‘portal to nowhere.’ Standing up a self-service site as an afterthought or an also-thought will fail. There is a proverbial boneyard of customer self-service sites that have ended up there.
If you are not going to implement these three elements of a successful self-service platform — effective technology, solid business practices and committed managerial disciplines — save yourself the time and money and wait until you can.
Self-service is an investment to growing customer intimacy and loyalty. Done properly, it will change human behavior and deliver lasting benefits.
Steve Carter is president and CEO of ii2P. Reach him at (817) 442-9292 or email@example.com.
The “gap” between facilities and IT organizations has become an industry standard term over the years. While some companies are making strides to overcome this challenge, most struggle with this issue. So, what is the gap? Simply stated, it is when two departments don’t see eye to eye, and in many cases don’t work well together.
Over the past few years there has been a surge in the need for high-capacity and high-density data center facilities to meet the growing demands to store and manage information. This is being driven in large part by social networking, social media and cloud computing services growing at unprecedented rates. Data centers, unlike any other portion of a company’s real estate portfolio, require input and support from both facilities and IT management and staff.
“IT is in the business of managing information — how it flows at the application layer, how it is transported, processed and stored at the hardware layer, and how it is protected,” says Rich Garrison, senior principal of Alfa Tech. “That is done through a combination of server, storage and network infrastructure designed to deliver and manage information, which in today’s information age is the greatest asset of most companies. Facilities are all about managing the real estate portfolio, space, power and supporting infrastructure.”
Smart Business spoke with Garrison about how to create a more productive work environment in which these two departments can work more effectively together.
Why is there often a gap between facilities and IT?
The gap occurs because of several factors, most originating from the human element. First, IT and facilities speak different languages and often simply don’t understand each other’s needs and priorities. Another major contributor to the gap is that in most companies IT and facilities are two separate organizations with separate budgets, schedules and agendas with competing priorities.
Some companies have rolled up the two groups into one organization to help align them. The fundamental problem is getting those groups on the same page — or even to speak to one another in some cases. This leads to the more subtle interpersonal issues, like pride and ego, that often get in the way. It’s common for power struggles to occur over who is controlling what, allowing both sides to lose focus on what is really in the company’s best interest.
What are some consequences of the gap?
Employees become frustrated. They get tired of beating their heads against a wall, make poor decisions and often are forced to settle for solutions that really don’t meet the business’s needs with respect to capacity, reliability and scalability. IT has a history of asking for ‘more than they need’ when it comes to space, power and other facilities resources. This is often because long-term requirements are unknown, yet IT must be able to support whatever comes along. Some of these unknown factors may include changes in technology, mergers and acquisitions, and changes in the company’s products or services, to name a few. Facilities, on the other hand, are pressured to ensure that real estate assets are cost effective and operationally efficient. Therein lies the gap: a gap in priorities, business requirements, budgets and management support or direction.
At the end of the day, the company ends up suffering because it doesn’t get the right solution or it spends too much money getting a solution that meets the business’s needs. We have seen IT groups choose colocation simply so they can maintain control of the data center, not because it was the most cost-effective way to meet the company’s data center demands.
Today’s server, storage and network hardware platforms are forcing IT to understand more about power and cooling due to the significant increase in density in recent years. However, having IT staff responsible for planning or managing space, power and cooling is not always the best solution. They usually end up getting it wrong, which can result in unnecessary risks or even catastrophic failures of the data center facility itself by not understanding the underlying facilities infrastructure.
How can companies bridge the gap between facilities and IT?
In almost every instance where this gap is an issue, the companies lack a strategic plan for IT, facilities or both. When companies get serious about developing a formal data center strategy they get much closer to bridging this gap. One particular tool I’ve developed to help bridge the gap is the OPR (Owner Program Requirements) document. The purpose behind this document is to facilitate a process to get facilities and IT to stop thinking about technical solutions, take a step back and start thinking about the business requirements, corporate goals and objectives. It then looks at the functional requirements of both organizations necessary to meet these corporate objectives. Next is to define in their own language the supporting technical and operational needs of both organizations necessary to be successful. This collaborative approach to developing a strategy and plan has proven to be a successful method to begin to bridge this gap.
Getting the two organizations to collaborate and talk in their own languages while finding that common ground is the point of the OPR. It demystifies technology by defining the requirements in terms both IT and facilities groups can understand. For a new data center project, this can be expanded to include a set of design considerations and criteria, written in more technical language that designers and engineers need to understand.
When we see IT staff taking an active interest in understanding facility operations, and facilities staff taking an active interest in understanding IT requirements, the results have been positive, bringing about successful projects that deliver cost-effective solutions for the companies they work for.
Rich Garrison is a senior principal with Alfa Tech. Reach him at (408) 487-1209 or firstname.lastname@example.org.
The relentless pace of business automation and Internet commerce has led to a staggering increase in the amount of data that businesses need to store. And that growth has created a corresponding need for businesses to expand their IT capabilities.
However, a direct investment distracts from your core business and can cost up to $10 million for buildout and $5,000 per square foot for operational overhead. That’s why many companies are opting to outsource their IT through colocation.
“Colocation is all about economies of scale, focusing on your core competencies as a business and letting someone else handle the data center aspect,” says Joe Sullivan, senior director, colocation product management with SunGard Availability Services. “From a financial perspective, it allows you to take a large cash outlay or capital expense and convert that into an operating expense.”
Smart Business spoke with Sullivan about how to decide if colocation is right for you, and what to look for in a colocation provider.
Why are companies considering colocation?
The No. 1 reason companies consider colocation is to avoid making IT a core competency of their business. The second major reason is to gain the economies of scale you get from sharing a facility with others. Instead of having to build and maintain your own facility, you can leverage another company to do it for you and share those costs with other customers that you might not even know.
Some companies already have their own data center, but their current facility can’t keep up with their business’s growth. Adding a second site to handle the growth is one reason companies consider colocation. Disaster recovery planning is another reason. Companies may need an alternate location to protect against infrastructure downtime, either from natural catastrophes or hardware failures.
What are the key factors to consider when picking a colocation provider?
There are five key factors. First is how much power a customer needs, not only today, but in the future. The business is based on power costs, as well as the cooling needed to cool that power. That drives a large portion of the cost structure and facility capacity, so it’s the No. 1 thing providers ask customers.
The second factor is environment size. Do you need a cabinet, two cabinets, or do you need a cage to store your data? Think of cabinets and cages as storage lockers and apartments. You might only need a locker, or you might need an apartment, depending on how much you have. Basically, a colocation provider rents highly powered, highly cooled and fully redundant storage lockers and apartments.
Geographic location is important, as well. Many customers like to be close to their facilities. They want to touch and feel their data and make all the changes themselves. Others want their provider to be as far away from their main facility as possible, because their main concern is disaster recovery. Those customers ask about colocation facilities in St. Louis, Denver, Phoenix and Dallas, because those sites don’t have as much natural disaster activity.
Fourth is connectivity. You may call it bandwidth, telecom or fiber, but it’s all connectivity. This is important because you are going to need to get your data out of those storage lockers or apartments to somewhere — either back to your facility or to your customers through the Internet.
Last, what services do you need on top of colocation? Some companies just provide space, power and connectivity. Others provide services such as data backup, storage, security monitoring and intrusion detection on your servers, and services such as cloud applications.
You need to make a decision up front on whether you may want those services at some point, because if you are outsourcing your colocation, you may end up outsourcing other services, as well. If that’s the case, you want to make sure you choose a provider that has the capability to do that. The services you need on top of colocation today or in the future should be a big factor in your choice of provider.
What questions should you ask a potential provider to determine if it can meet your needs?
What facilities in the geographies we are interested in have the space and power that we need? That’s the first question. Because you’re not going to care if a provider has 55 locations and only three of them are in the geography you want and none of them have space and power available to meet your needs.
Next, get into resiliency questions. Typically, customers will look for companies that have fully redundant power systems. At every point in the process of the power coming from the utility company, through two feeds into the building, hitting two power plants, one system should be able to fail over to the other. In the event that anything in that chain fails, you have fully redundant systems.
That is a large differentiator between providers. Some are fully redundant and some have single points of failure. What the customer needs to determine is whether those single points of failure are acceptable for the applications they are running and for the price discount you would get.
Then you get to pricing. Not all prices are created equal in colocation. You might not be getting the same thing, even if it sounds the same. You might have two providers, one who advertises a cabinet for $1,000 and the other for $1,500. It may seem like the $1,000 cabinet is the better deal, but the $1,500 cabinet might give you three times the power density, which would make it a better deal.
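The comparison above comes down to normalizing each quote by its power density. As a rough sketch (the dollar figures follow the example above, but the kW densities are assumed for illustration, not quoted from any real provider):

```python
def price_per_kw(monthly_price, power_density_kw):
    """Normalize a cabinet quote to monthly cost per kW of usable power."""
    return monthly_price / power_density_kw

# Hypothetical quotes mirroring the example in the text:
# Provider A: $1,000/month cabinet, assumed 2 kW density.
# Provider B: $1,500/month cabinet with three times the power (6 kW).
quote_a = price_per_kw(1000, 2.0)  # 500.0 dollars per kW
quote_b = price_per_kw(1500, 6.0)  # 250.0 dollars per kW

# Despite the higher sticker price, Provider B is cheaper per kW.
print(f"A: ${quote_a:.0f}/kW, B: ${quote_b:.0f}/kW")
```

The same normalization applies to cages and larger footprints; the point is to compare quotes on cost per kW delivered, not per cabinet.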
You should ensure transparency in pricing from a provider, and always make sure you understand what’s included and what’s not included.
Joe Sullivan is senior director, colocation product management with SunGard Availability Services. Reach him at (303) 942-2937 or email@example.com.
Clinton Coleman was given a mission when he was named interim CEO at Bell Industries Inc. back in 2007. Bell Techlogix, a business segment of Bell that provides IT managed services, was missing out on a chance to cash in on the growing demand for its offerings. It was Coleman’s job to find a way to capture that market.
Segment sales had dropped from $90.3 million in 2004 to $75.6 million in 2006, causing leadership at Bell Techlogix to lose confidence in its core business and begin searching for other ways to make money.
“That usually is a recipe for not very good results if a management team is focusing on things that don’t build upon what you already do well,” Coleman says. “A lot of that had to do with some of the previous managers’ own personal interests rather than what really made sense for the company to be focusing on.”
Bell Industries was taking a hit, too, dropping from $136.2 million in 2004 revenue to $120.3 million in 2006.
Coleman knew there was demand for what Bell Techlogix did. He knew companies were looking for a better way to manage their IT services. The effort just wasn’t being made to capitalize on this opportunity.
The board of directors at Bell Industries agreed with Coleman and installed him as the man to make it happen at Bell Techlogix, which has 608 of the company’s 714 total employees.
“Bell Techlogix really needed to reinvigorate its strategy for growing IT managed services,” says Coleman, who is also CEO at Bell Industries. “It was something that Bell did well, and they had the core competencies to do it. But over time, they really lost their way with respect to sales and marketing execution. They had lost their sales-driven focus. We needed to reinvigorate that growth where there were some good opportunities.”
It wasn’t going to be easy as Coleman had to energize his team in the midst of a tumbling economy.
“But it was what it was and you can’t really change any of that,” Coleman says. “So we just had to deal with those issues.”
See who is with you
As the new guy in charge, Coleman had a captive audience. But if he didn’t move swiftly to give them something useful to latch onto, his audience would start to lose interest and fall back into their old routines.
“I really challenged the team to realize that we can be successful doing this,” Coleman says. “We needed to do things differently and we needed to make some changes. But it was always about building around those core capabilities that Bell Techlogix has and realizing that those can be successful.”
Coleman expressed confidence that the tough economy was only going to increase the demand for the company’s services.
“We were able to identify how our strategy of growing the company in some ways very much fit with the challenges our customers were having,” Coleman says. “We help companies reduce their IT support costs while also improving their service levels. That created some opportunity for us. They were facing challenges that required them to re-evaluate the long-held beliefs that they had about IT support.”
Coleman needed his team to pick up the pace and put in the work if he was going to take advantage of this great opportunity that was out there. He began by reaching out to the managers at Bell Techlogix.
“Getting the attitude right among the managers is absolutely the first thing you need to do,” Coleman says. “If you don’t have the right attitude amongst the managers while you’re trying to refocus a company’s strategic direction for growth, while also at the same time handling an economic downturn, it’s very easy for people to be pessimistic.”
It really comes down to showing your passion and commitment and finding out who is willing to stand up and follow you.
“Who is up to the challenge?” Coleman says. “Who do you want to be working through this with for the next year?”
To begin answering those questions, he moved the company’s monthly leadership meetings away from a monotonous rundown of the financial spreadsheet.
“That was a new experience for a lot of managers making our monthly business reviews into real business discussions,” Coleman says. “Not just running through the numbers, but real discussions about what we learned this month. … You allow the senior management team to all be hearing the same thing and be participating in the same discussion. Through that process, you realize that some people are more up for the task than others as far as how they respond to that discussion.”
There was a key question Coleman needed to answer before he could move forward: Did his team understand the difference between a goal and a strategy?
“We’re supposed to grow our sales by X amount this year,” Coleman says. “That’s not our strategy. That’s our goal. Are we able to talk about what we are doing? How do our results from one month to the next indicate we’re really doing a good job in this area, and how are we re-evaluating this other area and looking at things we can do differently?”
He needed to see who was willing to dig beyond the numbers. Who was willing to put in the time to figure out what the customer wants and then take those findings and figure out what the company needs to do in order to meet the needs of the customer?
“You need to make sure each one of the managers is in a position to carry forth the company’s vision and hold regular discussions with their direct reports,” Coleman says. “That really does flow from the top down. But that only happens if you have a broad group of managers that are all part of the same discussion of where we are trying to go as a company and what we are trying to achieve.”
The result of the initial group meetings was that some managers proved to have what Coleman was looking for while others did not.
“We weren’t painting everybody with the same brush,” Coleman says. “It was a real evaluation of putting managers in position to demonstrate how they could be part of the company’s success going forward.”
Coleman felt that lack of communication was a key reason why the company wasn’t making more money on its offerings.
“We’re not doing a good job of communicating with new customers and our existing customers about what we do and how what we do helps them with the challenges they are facing,” Coleman says. “There are other companies in the space, other midsized IT managed service providers, that have been able to have some success.”
One of the reasons companies fail to capitalize on new opportunities is that they become unwilling to analyze what they are doing and make changes to their plan.
“Every year, we would set a business plan,” Coleman says. “The important thing wasn’t that we set a business plan that remains frozen in place. … It was very important for us to always be listening to our customers and the managers and the people on the ground at Bell and the salespeople that were actually interfacing with customers every day. It was really listening to that feedback and making sure we were responding to it.”
Coleman needed to instill a mindset whereby the annual plan would be used as a guide that was fluid. It could be changed when circumstances called for it. It would not be set in stone and it would not be the sole factor used to determine whether business was good or bad.
“Companies will set a budget at the beginning of the year and for the rest of the year, you’re judged against that budget and that’s the primary measure of whether you’re doing well or not doing well,” Coleman says. “That can create a shortfall for a company. What you really need to be having is not a discussion of the budget numbers, but what’s going on behind that.”
There are a number of factors that contribute to the financials that appear on any company’s ledger. You need to talk about those factors.
“What you really need to focus on is how are the things that we’re doing every month either leading to or not leading to having success in that metric,” Coleman says. “That’s by looking at things beyond just a static budget that’s set in place and stays in place for the year. That can also lead to missing out on some of the bigger opportunities where you have more momentum than you realize and you need to have a reallocation of resources.”
When Coleman took over as interim CEO, he approached the job with a sense that he had a lot to learn. He wanted that attitude of curiosity and naiveté to give his people reason to open up and share with him their ideas to help the business grow.
If they were to be passionate about it, they couldn’t just do things because he told them to.
“It was a gradual process,” Coleman says. “I didn’t come in and pretend to be an expert on the company and the company’s business. It was one where we worked toward a more natural conclusion with the managers by using their input and building upon the realities of what’s going on in the company and what our strengths and weaknesses were rather than coming in and already having a predetermined idea of what we’re going to do and how we’re going to do it.”
You need to talk about outcomes if you’re going to move your company forward. It can’t just be a discussion of ideas that never lead to anything.
“This is how we’re going to know if we’re making progress or not,” Coleman says. “That was part of my job was to make it real for them. This whole idea of talking about the strategy to the company wasn’t just a theoretical discussion. Make it a real discussion: What are we doing this month to help us get toward those goals? It’s having an open and honest discussion about why we did or didn’t meet those goals and doing it with the entire management team so everybody is hearing the same message and everybody is part of the same discussion.”
It’s your job to make sure your people have that idea in the back of their mind about what the bigger goal is while they are dealing with the day-to-day.
“We’re trying to go and make sure all our daily efforts are working toward that vision,” Coleman says. “Remind everybody of what was the underlying strategic goal behind the metrics. What did we learn this month to indicate to us whether we’re being successful or not successful? What should we do differently next month to better achieve our strategic goals?”
In addition to addressing concerns, you also need to celebrate successes.
“In any particular quarter, one division may be having more success than the other,” Coleman says. “But by having the discussion together with the management team, it does help improve everybody’s overall morale and optimism if they are able to hear about and see the successes that other areas of the business might be having in that particular quarter toward the overall strategic vision.”
As Coleman looks at his company now, he sees progress. Sales for the Bell Techlogix segment grew from $62.9 million in 2008 to $66.1 million in 2009 and Coleman expects 2010 to show even more growth.
“Working through the economic downturn didn’t really help us in achieving those goals as quickly as we would have liked,” Coleman says. “But we’ve gotten to the point where we’ve rebuilt our entire commercial sales team. Last year, we had success in growing our IT manager service business at a pace it hasn’t grown in years and years. We’re doing a much better job of communicating with our customers and packaging our services in a way that customers are demanding it.”
How to reach: Bell Industries Inc., (866) 782-2355 or www.bellind.com
Clinton Coleman, CEO,
Bell Industries Inc.
Born: Cleveland, Ohio
Education: Double major in physics and economics, Vanderbilt University. I spent my junior year at the London School of Economics and Political Science. A lot of it was exposure to international culture and international ways of doing business. The makeup of classes is from all over the world. It was intellectually stimulating from that perspective, but it was also a lot of fun. I spent a lot of time backpacking around Europe and doing things that you only get a chance to do when you’re that age.
What was your very first job?
I was a busboy and also did the salad bar prep for Steak and Ale restaurants.
Who has had the biggest influence on you?
Rick Leaman, [former] head of mergers and acquisitions in the United States for UBS. The first job I had out of college, I was in the mergers and acquisitions group at UBS in New York. So I was working on Wall Street doing M&A investment banking work.
I worked with him on a number of deals. He took a lot of interest in me, and he allowed me to tag along with him at board meetings and negotiations with management teams at a bunch of different companies. He kind of took me under his wing and allowed me to get exposure to the corporate decision-making process at large companies in a way that is very difficult to replicate when you’re 22.