If your company’s computers are still using the last generation of network technology, it might be time to consider an upgrade — especially if you are planning to virtualize any of your processes or data.

“The previous standard for most companies has been traditional T1 lines, which were not cost effective and had limited bandwidth,” says Carlos F. Olortegui, manager of the Enterprise Metro Ethernet Division with Comcast Business Services. “Metro Ethernet technology is more cost effective, reliable, robust and scalable, and it allows you to adjust bandwidth measurements with ease.”

Smart Business spoke with Olortegui about how this technology could benefit businesses and what kind of return companies should expect on their investment.

Why is Metro E technology important and how can it impact a business?

Today, everyone from small and midsize businesses to enterprise-level and multinational corporations can use Metro E technology to improve their telecommunications.

For instance, a franchise processing point of sale (POS) transactions and replicating that data could use Metro E technology to measure and adjust its bandwidth as necessary.

One major benefit of Metro E is that the connectivity from the customer to the service provider is simplified. It’s just router to router. The main focus of Metro E is the Ethernet connectivity. It’s called Metro E because you are literally plugging in an Ethernet connection. The handoff from service provider to the customer is just an Ethernet plug — pure simplicity.

In the past, companies incurred a lot of capital and operating expenses for databases, hardware, larger UNIX servers, even Exchange servers for e-mail. Today, everyone uses e-mail, so the need for archiving and data warehousing is huge.

What is virtualization and how can it benefit businesses?

Virtualization is the process of contracting an amount of space on a large server that is housed by a provider and storing your data there. If you virtualize, you do not have to purchase all the computer hardware and manpower to handle your data and processes. You don’t have the large operating expense and headcount necessary to maintain the high-cost hardware and ensure uptime.

There are two scenarios in which companies can benefit from virtualization. The first is disaster recovery. The second is making a virtual version of your databases or e-mail, which are utilized on a daily basis; with Metro Ethernet, you have the ease of connectivity for transporting all of that information to a virtualization footprint.

Here is where your ROI comes into play, and where you get a bigger bang for your business dollar and for the products and services you sell. Other than payroll and real estate, the IT budget is the largest budget for most enterprise customers. If you can drop those operating and capital expenses, your ROI and profitability increase.

How can Metro E technology improve the virtualization process?

You need connectivity to that virtualization footprint. That’s where Metro E comes into play, because of its serviceability and its ability to provide bandwidth on demand. Companies can consult 30-, 60- or 90-day bandwidth utilization reports. If you need more bandwidth, it’s just a turn of the dial.

Metro E also provides much more bandwidth than traditional T1 lines can. If you are virtualizing your back-office environment, it is critical that you have no downtime, as these applications are considered ‘high availability.’ That is another advantage of Metro E — it is very stable.

How does the ability to adjust bandwidth impact businesses?

Let’s say you are a corporate entity that owns a chain of retail stores. You have peak seasons: different times of the year where you have huge mail distributions or promotions. Your business is very seasonal, so November and December are the peak sales months. There are a lot of promotions, and your website gets hit more at those times. With Metro E, you can adjust your bandwidth to be higher during those peak times, because you want to make sure people can access the website and that all their transactions are being replicated and archived correctly, hence making the customer experience a positive one.

When there is greater demand, the company simply notifies its provider, which increases the bandwidth. They can watch bandwidth utilization reports to see trends, so they can monitor their expenses. Business owners can see that they’re utilizing X amount at a certain time of the year and budget accordingly.
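As a rough illustration of the kind of trend review described above, here is a minimal Python sketch (the report format, figures and threshold are hypothetical) that summarizes monthly utilization samples and flags the months where a temporary bandwidth increase may be worth budgeting for:

    # Hypothetical sketch: summarize monthly bandwidth samples (in Mbps) to spot
    # the seasonal peaks worth budgeting for. The figures here are made up.
    from statistics import mean

    samples_by_month = {
        "Oct": [42, 55, 61, 48],
        "Nov": [88, 95, 102, 110],   # holiday promotions drive traffic up
        "Dec": [120, 115, 98, 105],
    }

    committed_mbps = 100  # currently contracted bandwidth

    for month, samples in samples_by_month.items():
        peak, avg = max(samples), mean(samples)
        note = "consider a temporary increase" if peak > committed_mbps else "ok"
        print(f"{month}: avg {avg:.0f} Mbps, peak {peak} Mbps -> {note}")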

It also ties into virtualization, because one of the main components of virtualization is on-demand storage.

What kind of cost reduction or ROI can businesses expect from using this technology?

First, businesses can expect lower capital expenses from not having to purchase all that computer hardware or enterprise server hardware for their back office databases and e-mail. Second, less manpower is needed because you have that virtual environment, so you have the reduction of overhead payroll. Third is the stability of Ethernet technology. You don’t have to utilize T1 lines or ‘leased-lines,’ those clear-channel point-to-point lines, which are very high in cost because you have to have a certain type of hardware that resides at the customer’s site. With Metro E, you have a simplified device on the back end of the service provider, which is lower cost equipment because it is plug-and-play Ethernet. Together, those three components can reduce expenses 20 to 40 percent.

Carlos F. Olortegui is manager of the Enterprise Metro Ethernet Division with Comcast Business Services. Reach him at (305) 770-5941 or carlos_olortegui@cable.comcast.com.

Published in Florida

Smart Business spoke to Bill Mathews of Hurricane Labs about what happens when the cloud fails, and how to not panic when it does.

I’ve written quite a bit on why I think businesses should cautiously embrace the cloud and see what happens. I promise it is not as terrible as a lot of folks are telling you, but it does have its faults.

Many folks seem to think that I sing the cloud’s praises and speak nothing of its many faults. This is patently untrue. As anyone who knows me will tell you, I pretty much dislike almost everything and find fault in nearly everything — the cloud is no different. A lot of applications in the cloud have many, many issues. For instance, Twitter, which runs in the “cloud,” has its own share of documented issues. From being over capacity (hello, fail whale) to simply being down, cloud failures do happen, and it is not a nirvana. Gmail has developed quite a reputation (unfairly, some would say) for being down. Here’s the challenge I have for you, though: Measure your network and application uptime against theirs. Let me know the results.

One question I am asked: what do you do when the cloud breaks?

Lots of prayer, if you’re into that sort of thing; then you really dig in. If it’s software or infrastructure as a service, you really have no choice but to wait; it’s not your code or your servers, so waiting it out is really the only option. I know that sounds terrible (and it is, believe me), but no one is more motivated to keep their systems up than these providers. Every minute they’re not up is a minute they’re not billing you, and they don’t want that. Economically speaking, it’s in their best interest to keep their stuff running. This may sound like common sense, but there are a lot of FUD-spreading folks out there basically claiming that Google and Amazon just throw things up there and put no thought into it. I’m not going to go so far as to say they never under-think anything, but chances are, if they’re putting something up for you to pay for, they’re going to want to make sure it is available as much as possible. Availability is a big issue in the cloud, and it should be.

My advice is to measure their uptime (the amount of time a given system is available) against yours and see what the difference is. If yours is significantly higher than theirs, congratulations, you’re better than some of the biggest tech companies in the world (and you should be proud). But if not, you should investigate a little further. If you’re not measuring your uptime, then we should have a separate conversation. The point is, don’t be dismissive. You might actually be able to increase your service and decrease your cost, and that sort of thing is truly rare.
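The comparison suggested above takes only a few lines once you have outage records; this Python sketch (with made-up numbers) shows the basic arithmetic:

    # Minimal sketch of the uptime comparison: compute availability over a year
    # from recorded outage durations. The outage figures below are made up.
    MINUTES_PER_YEAR = 365 * 24 * 60

    def availability_pct(outage_minutes):
        """Percent of the year a system was up, given outage durations in minutes."""
        downtime = sum(outage_minutes)
        return 100.0 * (MINUTES_PER_YEAR - downtime) / MINUTES_PER_YEAR

    ours = availability_pct([45, 120, 30])    # our own recorded outages
    theirs = availability_pct([15, 20])       # the provider's published outages
    print(f"ours: {ours:.3f}%  provider: {theirs:.3f}%")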

What improvements would I like to see from these sorts of providers?

Logs, logs and more logs. Let me know what’s going on with my instance of the application — a little more truth in monitoring. If something is down, let me know so I can work around it. Don’t make me find out by hitting refresh and waiting until you time out. Every cloud provider should have both a truthful status dashboard and an emergency broadcast Twitter account (one that maybe posts to Facebook and Google+ too, for good measure) for when there’s an outage. The guys over at 37signals do this very well with their Twitter account whenever Basecamp or related services are down or have other issues. It wins with their customers because they’re being up front and honest about it. We’ll be launching a few cloud-based services very soon and, believe me, this sort of approach will be baked in.

My overall point isn’t to be a giant cheerleader for the cloud — it doesn’t need me to do that — but to get smart and good people to lay down their fears and try something new. A lot of these folks can bring a lot to the various realms of cloud security and can help make massive improvements. Instead of saying “No, no, no,” I’m just looking for an “Okay, let’s try it out and see what happens.” Is that too much to ask?

Bill Mathews is Lead Geek of Hurricane Labs, an IT security services firm founded in 2004. He has nearly 20 years of experience in IT, 13 of that in information security, and has been interested in security ever since C3P0 told R2D2 to never trust a strange computer. He can be reached at @billford or @hurricanelabs on Twitter, and other musings can be read at http://blog.hurricanelabs.com.

Published in Cleveland

Smart Business spoke to Bill Mathews of Hurricane Labs about not letting an irrational fear of the new keep you from using cloud solutions for your IT dilemmas.

We have an expression at my company: “Everything in the cloud!” Basically it means, if you’re asking for infrastructure, have you considered the cloud? If not, why not? We tend to get very wrapped up in the security of things, so we shy away from putting anything out of our control, but lately we’ve come out of our shell a bit and moved some things to the cloud where it made sense. This is the story of those decisions and their reasoning.

Download site

We host a download site for our customers, which is basically a large (approaching 105 GB at this writing) software repository that houses the software we need to do our jobs for our customers. While cost was definitely a factor — the site costs a few dollars a month to host — the biggest issue was speed. When we hosted the download site ourselves it was slow, especially overseas. There was basically nothing we could do about that other than — you guessed it — everything in the cloud! Now customers are able to download things fast with little regard to their geographic location, and it’s been great.

Obviously our download site isn’t what you would call “confidential” or “private” information, so it was a pretty easy decision to move it to the cloud and be happier campers. Of course, we took all the necessary security precautions. For instance, publicly available links can be made to expire after a certain amount of time, which is great. An increase in customer happiness plus less infrastructure to purchase made the business owner in me very happy. Cloud: 1, Irrational Fear of the New: 0.
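Mathews doesn’t say which storage service backs the download site beyond mentioning later that the work was done on Amazon Web Services; assuming the files sit in Amazon S3, expiring links like the ones he describes are typically produced as presigned URLs, roughly like this (the bucket and object names are hypothetical):

    # Hedged sketch: generating a time-limited download link, assuming the download
    # site is backed by Amazon S3. The bucket and key names here are hypothetical.
    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-download-site", "Key": "installers/agent-1.2.3.tar.gz"},
        ExpiresIn=3600,  # the link stops working after one hour
    )
    print(url)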

Log storage

We need to be able to keep A LOT of log files around for various reasons. These can get very large very quickly; buying the infrastructure up front is expensive and it can be cumbersome to maintain. What should we do? That’s right, everything the cloud! This one was a little trickier because logs can contain very sensitive data, so we dug into our brains and came up with a pretty simple solution: encryption. Encryption is cheap and, if you do it right, it’s easy. I ended up writing a tool called “logsup” (Log Secure Upload) and basically it does exactly what it says. First we generate a private key (which stays on our site), compress the data, encrypt the data and then upload the encrypted data to our cloud storage. The cloud storage then implements the rest of our security. We never make the files “public” and we provide no other interface into it. Secondarily, logsup writes out a receipt for the log file so we can better keep track of what file is where. No system information or other identifying information is ever stored with the encrypted file. Really it’s a simple and novel solution to what, on the surface, appears to be a big problem.
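Mathews doesn’t publish logsup itself, but the compress, encrypt, upload and receipt flow he outlines could look roughly like the following sketch. The key handling, upload call and receipt format here are illustrative assumptions, not the actual tool:

    # Illustrative sketch of a compress -> encrypt -> upload -> receipt flow like the
    # one described for "logsup". Key handling, upload target and receipt format are
    # assumptions for illustration only, not Hurricane Labs' actual implementation.
    import gzip
    import hashlib
    import json
    import time
    from pathlib import Path
    from cryptography.fernet import Fernet  # symmetric key that never leaves our site

    def secure_upload(log_path, key, upload):
        data = Path(log_path).read_bytes()
        encrypted = Fernet(key).encrypt(gzip.compress(data))
        # Name the remote object by content hash so no system-identifying
        # information is stored alongside the encrypted file.
        remote_name = hashlib.sha256(encrypted).hexdigest() + ".log.enc"
        upload(remote_name, encrypted)  # e.g. a PUT to private cloud storage
        receipt = {
            "local_file": log_path,
            "remote_object": remote_name,
            "uploaded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        Path(log_path + ".receipt.json").write_text(json.dumps(receipt, indent=2))
        return receipt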

This one hasn’t gone into production yet but it will soon. It should cut our storage costs significantly and actually increase the security of our long-term stored log files. This is another instance of a practical application of old principles to supposedly new technologies. Encrypt early, encrypt often, I always say. Cloud: 2, Irrational Fear of the New: 0.

The moral of these two stories is that new technologies do not have to be scary. We didn’t have a large learning curve to implement this stuff (all done on Amazon Web Services, by the way), and while it did require a small coding effort, it was just that, a small effort. New things can be scary, but you should always be willing to give them a shot, with the appropriate amount of caution, of course. Security matters, performance matters; just make sure you’re worried about the right ones in the right order.

Will you be increasing the cloud’s score?

Bill Mathews is Lead Geek of Hurricane Labs, an IT security services firm founded in 2004. He has nearly 20 years of experience in IT, 13 of that in information security, and has been interested in security ever since C3P0 told R2D2 to never trust a strange computer. He is also not a cloud fanboy, but likes to apply new technology where it makes sense. He can be reached at @billford or @hurricanelabs on Twitter, and other musings can be read at http://blog.hurricanelabs.com.

Published in Cleveland

Smart Business spoke with Mike Landman, CEO of Ripple IT, about how business owners can ensure their company’s IT department is using the right backups.

Every business leader I talk to is certain their company has good backups. Well, pretty sure. Kind of sure? There's tapes, so there must be a backup, right?

When pressed, most business leaders find that they don’t really know the status of their backups.

I’ll grant you, backups are boring. Like insurance, flu shots and TPS reports. But once you’ve seen the face of someone that has lost their company data — or even thought they lost their data — the boredom ends quickly.

As a leader, you want to trust your IT guy, or your IT department, or your brother-in-law that handles your IT. They know technology, and this is their role. But there is a difference between delegation and abdication. And with backup, I think a leader needs to know what’s up.

Here are a few things you should know to keep on top of data protection:

Backups fail. Every piece of backup software can and does fail, more often than you might think. There are three things you can do about it:

1. At Ripple we decided that no single backup software is good enough to shoulder the responsibility for client backups. So we use two completely different software vendors and technologies for backup. The downside is, of course, that it costs more to implement and to manage. The upside is a nice reduction in risk of data loss from a failure or a software bug.

2. Get looped in. Have a chat with IT, and get a report every day (just like they do) of the status of backups; a minimal sketch of such a report follows this list. If everything is OK, you have spent 30 seconds over coffee getting reassurance that your company is safe. If not, you can help out with some positive support.

3. Set the tone. Troubleshooting backup failures is difficult and time-consuming, and it often happens without management even knowing there was a failure, because IT is nervous to tell leadership. So they work on it silently. But now that you are looped-in, you can help. Let them know that you know software fails sometimes, and that it’s a top priority to you that they have the time to get it fixed. Then let the rest of the company know that regular support will be a little slower while IT works on an issue that’s important to the company’s security. Those words mean a lot more when they come from leadership rather than from IT, and you will buy your IT team time to fix the problem, rather than shelving it because of daily IT fires.
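To make the daily report idea concrete, here is a minimal Python sketch (the job names and statuses are made up) that rolls the nightly results of two independent backup tools, in the spirit of Ripple’s two-vendor approach, into a summary a leader can scan in 30 seconds:

    # Hedged sketch: combine the nightly results of two independent backup tools
    # into one short status summary. The job data below is made up for illustration.
    jobs = [
        {"system": "vendor_a", "client": "fileserver01", "status": "success"},
        {"system": "vendor_a", "client": "mailserver01", "status": "failed"},
        {"system": "vendor_b", "client": "fileserver01", "status": "success"},
        {"system": "vendor_b", "client": "mailserver01", "status": "success"},
    ]

    failures = [j for j in jobs if j["status"] != "success"]
    print(f"Backup summary: {len(jobs) - len(failures)}/{len(jobs)} jobs succeeded")
    for j in failures:
        print(f"  ATTENTION: {j['system']} backup of {j['client']} failed")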

If your backup is not offsite, you are not safe. The kinds of events that require restoration from offsite are certainly more rare, but they are company killers if there’s no offsite backup. A fire, the cleaning crew sets off the sprinkler, natural disasters — they happen. This is what backup is for. The same day you ask IT to add you to the daily backup report, ask them how the company handles offsite backups. You might be surprised at the answer.

Some of your most valuable data is not on the server. The mantra of IT for as long as I can remember has been “if it’s not on the server, it’s not backed up.” While this has some measure of CYA for IT, it’s not a viable strategy. It makes your end-users (particularly your mobile ones) responsible for backups, and if you’re honest with yourself, you have probably had an important file (or 50) nowhere but your laptop. And if you’ve done it, you can bet every laptop user you have has done it too. Yes, there is an expense to backing up all of your laptops, but it’s nothing like the expense of watching your highest-paid employees scramble to recreate a presentation after having their laptop die. Unless you enjoy saying “I told you so” more than you enjoy having crisp, timely presentations from your road warriors — back up your laptops.

Backup is important enough for leadership to pay attention to. Just like you don’t have to be an accountant to keep an eye on your company’s cash, you don’t have to be an IT guy to keep an eye on your data.

Mike Landman is the founder and CEO of Ripple IT, an IT company that makes IT run smoothly for companies with fewer than 100 employees.

Published in Atlanta

When it comes to maximizing the performance, scalability and value of their IT infrastructure, corporations want the best of both worlds: the features, functionality and benefits of their business applications without the headache of managing and running them.

It has been a challenging proposition for enterprises to optimize their outsourced investment and the performance of their IT spend. Increasingly, however, these companies are turning to providers of colocation and managed services for an IT optimize solution that extends beyond the infrastructure to provide above platform-level support and services.

Smart Business spoke with Christian Teeft, vice president of engineering at Latisys, to help executives determine what to look for in an IT optimize solution, and how they can gain the comfort and assurance needed in turning over ownership and management of their IT infrastructure to a colocation and managed services provider.

What is the most significant difference between an IT off-load solution and an IT optimize solution?

The primary distinction between ‘off-load’ and ‘optimize’ has to do with the responsibilities around the compute layer of an IT infrastructure. With off-load, an IT organization is responsible for the procurement and support of the server hardware, as well as the hypervisor and operating system. With an IT optimize solution, the server hardware, hypervisor and operating system components become the responsibility of the service provider. The provider will capitalize the hardware purchases to leverage their economies of scale, utilize ‘Service Provider Licensing Agreements’ to provide the software in a cost effective and scalable fashion, and provide advanced around-the-clock support for both — all for a predictable monthly expense.

What kind of requirements do companies that are ready for an IT optimize solution have?

Given that these profiles build upon each other, the functional requirements from clients that are classified within the off-load and off-site profiles are similar. Organizations within each profile require highly robust platforms from which they can deploy the critical IT services needed to operate their businesses.

The primary distinction we see between optimize and other profiles is the need for an organization to further optimize their operation by:

  • Moving even more dollars from CapEx budgets to OpEx budgets by shifting hardware and OS/hypervisor software licensing responsibilities to the provider.
  • Optimizing head count through eliminating the need for IT staff to manage the tactical care and feeding of the network, hardware, hypervisor and operating systems.
  • Filling knowledge gaps for server hardware, hypervisor technology, and operating systems by leveraging the expertise and support of the service provider.

What are the key benefits of an IT optimize solution provided by a colocation/managed services provider?

Optimize enables an IT organization to focus on meaningful, strategically important IT initiatives and better utilize available IT budgets. This holistic approach allows the organization to spend time innovating the systems, processes and procedures that improve operational efficiencies and drive hard dollars to a company’s bottom line, which not only simplifies the conversation between the business and technology leaders, but also improves the overall reliability of the infrastructure.

How can an IT optimize solution ease the burden of the capital investment required to adopt the latest data center technologies?

IT optimize eliminates the capital burden altogether. This cost is shifted over to a predictable monthly expense that encompasses all costs for the technology — from initial acquisition to ongoing support. On top of the server, hypervisor and operating system costs, the optimize profile has the potential to eliminate all of the big-ticket items typically found within the SME infrastructure, including Storage Area Network (SAN) and data protection platforms.

Many firms are dabbling in virtual servers, but managed virtualization is different. How?

The big difference, aside from the subject matter expertise that comes along with the service provider offering (24-7-365 access to more than one expert), has to do with the way the costs are structured. Off the shelf, the hypervisor software often costs as much as the hardware. The optimize profile reduces the economic stair-step function associated with scaling out virtual machine infrastructure. This allows capacity to be managed based on actual resource utilization, as opposed to the need to avoid uncomfortable conversations with the CFO.

How can firms that are hesitant to turn over ownership and management of their IT infrastructure pick the right partner?

Many organizations think they give up strategic control of their operation if they don’t own the hardware.  This simply doesn’t have to be the case. Here is a short list of tips for picking the right partner:

  • Ensure there is rapport. IT professionals should want to engage your team members, inherently supporting the transfer of knowledge that makes the IT organization stronger.
  • The adoption of services should not be adversely disruptive. If you have to change every process and procedure in the book, it’s probably not a good fit.
  • Maintain control. Engaging a service provider isn’t about giving up control but changing the perspective from tactical to strategic.
  • Don’t make your decision based solely on the lowest bid. Consider all cost variables to understand the implications of selecting a particular partner.

Christian Teeft is vice president of engineering at Latisys. Reach him at christian.teeft@latisys.com.

Published in Orange County

As social media moves to the forefront of the information security industry, many bloggers and information systems analysts have been working around the clock to promote what should be understood about the problems social media may pose.

Smart Business spoke to Chris Crane, a project manager with Hurricane Labs, about the threats involved in using social media.

What are the inherent risks in using social media?

Social media encompasses all major forms of communication and ways to provide information, and it does so in an incredibly easy way. It is available for use by anyone, with extreme portability, and welcomed by all. This may not appear to be a problem to the average user, who finds its ability to make and keep connections a very handy tool, but what is missed beneath the surface are the doorways to intrusion that it carries along with it.

Attacks such as the Zeus Trojan or the evolving Koobface can easily be propagated via social networking sites. Information provided ‘at will’ can be gathered and used for social engineering purposes. I do not promote myself to be someone who can socially engineer information, but even I have learned about aspects of people’s lives and their jobs (remote user accounts that just happen to form ironic humor) that should never have reached the pages of regularly used social networking sites.

How can users protect themselves?

Social media exploitation will continue to pose threats to the IT community, but when an evolving threat presents itself, a good stance and the right mindset from an individual user’s perspective is a good starting point. That is the base on which to implement a solid policy that can be monitored and reacted to. From there, gather the information needed to re-evaluate the policies you want to enforce.

Here are some principles every user should be acquainted with to better secure themselves:

Self-censorship. Know what is being posted when it is posted, and be aware of any potential threats this information may pose to the user or to the user’s place of employment. This is in no way a means to destroy individuality. The user must be aware of the ease of access to anything that is posted via the Web. The information being spread, no matter the depth, can be used by anyone willing to spend the time gathering a personal database against the user or the company the user works for. For example, think of the security questions answered while setting up a personal e-mail account. Answering with the name of a favorite pet and then flooding a Facebook page with pictures and posts of ‘Socrates’ does not leave too much of a challenge to those interested, especially if the personal e-mail address you answered that question for is listed as a means of contact on a blog, Facebook, etc.

This may be thought of as a long shot towards affecting a company, but how many times does one recycle personal passwords? How often is personal e-mail used in the workplace as a work-around when accomplishing a task involving sensitive material?

Trust. Create a personal social networking cloud and understand the threats it may pose. These are the people who will be reading all of the data that is provided by the user. Outside of the information that will be shared out, these are the people who will be providing the information coming in. Not everyone has malicious intent, but everyone is vulnerable to malicious attacks. Common attacks on social media are intended to spread easily and quickly, so that by the time one is noticed as a threat, a significant amount of damage has been done. This means understanding what is being offered as a link, what the intent of a message is, and what may be offered as something beneficial but is in turn potentially harmful. Just because it comes from a picture of your mother doesn’t mean that it is necessarily her behind the wheel.

Become a super-user. Know what the application or site can offer. Know what can be done with the application or site to tailor it to provide what is intended. What social media offers is not something to be afraid of. Like all things, there needs to be a level of control, and these sites and applications provide the tools and configurations necessary to maintain a level of privacy. It is always a best practice to fully understand the capabilities of any application, website, or communications tool.

Training and understanding of the social media landscape should not be overlooked. It is something that will have to be dealt with as this landscape moves and reshapes itself. To quote a former instructor of mine, ‘They asked me what would be the No. 1 thing I would do to help secure their network. I told them: remove the users.’ As comical as that sounds, it holds truth. Hopefully educating everyone on social media security will allow for some ‘give’ to that statement.

Chris Crane is a project manager with Hurricane Labs. Reach him at (216) 923-1330, ext. 3.

Published in Cleveland

When it comes time to search for an IT consulting partner, there are a lot of areas that you should consider before selecting a firm. According to Zack Schuler, founder and CEO of Cal Net Technology Group, it takes a specific skill set to understand and address the technology issues that businesses face.

“Over the years, we’ve taken over from sub-standard providers, and I’ve seen some pretty bad work that our clients have paid a lot of money to get done,” he says.

Smart Business spoke to Schuler about how to choose the right IT partner for your business’s needs.

How can business leaders best approach the process of finding the right IT firm?

In my experience, there are six things to look for when selecting an IT consulting partner:

1. Years in business. I’ve seen a ton of ‘fly-by-night’ IT companies. They usually start with a very technical owner, who has difficulty hiring and managing good people, and are out of business within three years of getting started. When looking at years in business, it is important to see whether or not the company survived the last recessions. For example, if they started their business in the ’90s, they’ve been through the dot-com bubble, as well as the latest recession. If they survived one or both, that is a good sign. My recommendation: if they’ve been in business less than five years, I would steer clear.

2. References from your industry. Even though many of the IT systems are the same across industries, there are some industries that have their idiosyncrasies. For example, with accounting firms, an IT provider familiar with that industry would plan upgrade projects in November. Then, between the Christmas holiday and April 15th, they wouldn’t make any changes unless absolutely necessary. And while they might not be experts in tax accounting software, they have enough experience with the packages to know when to call the software vendor when they run into an issue. My recommendation: hire an IT firm who can provide references in your industry, and call those references.

3. Industry certifications. IT is one of those areas where you don’t need any sort of minimum certification to practice. It’s like hiring a contractor without a license, or a lawyer who hasn’t passed the Bar. Because of this, it is important to see whether the companies themselves have industry certifications. This requires their engineering team to hold individual certifications, among other things the company has to do. Also, check to be sure that their certifications are current. For example, they could have been a Microsoft Gold Certified partner two years ago, but haven’t qualified this year under the new requirements. My recommendation: look closely at industry certifications when selecting a partner, and make them prove those certifications are current.

4. Strategic IT consulting. Today, it’s relatively easy to find an IT provider who can patch your servers and workstations, update your anti-virus software, and fix your e-mail when it’s not working. These types of services have become somewhat commoditized simply because so many people can perform them. That being said, finding a company that can truly be a ‘strategic partner’ to your organization requires another set of skills entirely. This would be a company that can, with your input, write a full-scale strategic plan around technology. It would be able to manage any other vendor you’ve got who fills a technology role, as well as track your IT assets, forecast your upcoming expenses, etc. These are duties that typically involve an IT director or CIO, and you should expect that a firm you work with, no matter your company size, has these types of resources.

5. Number of employees. While even the smallest of IT organizations can have some very talented people, those talented people can’t know everything. It is hard to say exactly what the ideal headcount is. On the smaller end, somewhere between 15 and 20 people is a good number, assuming that they don’t cover too many disciplines, nor more than a county or maybe two. You want to make sure the IT provider has great ‘back-office’ support (i.e., an HR department that can hire quickly if they lose a key employee, a good accounting department, etc.) as well as field personnel who are ‘local’ to your place of business, and have redundancy. In other words, if you have a ‘subject matter expert’ on your account who knows a specific piece of technology, you want to make sure that the IT provider whom you partner with has multiple experts on that technology as redundancy. My recommendation: ask how many employees they’ve got, and then go to their office to see their place of business. It’s an easy step if you are going to trust them with your IT.

6. Hiring and retaining. The last and perhaps one of the most important aspects to inquire about is how they hire and retain their people. I would encourage you to read our August article, entitled ‘Your toughest hire.’ This article outlines how to hire a good IT person, and I feel as though IT providers should be placing these same standards upon themselves.

In terms of retaining, there is no harder employee to retain than an IT employee, and this can spell bad news for you if the company that you are partnering with is riddled with turnover. Every time an employee at your IT partner turns over, there is going to be some knowledge lost — often it is just the idiosyncrasies of your business, but sometimes it can be a lot. I think it is important for you to ask them, ‘How do you retain your people?’ An average salesperson might not know the answer to this, but any member of their management should have a good answer for you. My recommendation: inquire hard about hiring and retention processes.

Zack Schuler is founder and CEO of Cal Net Technology Group. Reach him at ZSchuler@CalNetTech.com.

Published in Los Angeles

A growing number of businesses are using cloud computing to access resources over the Internet, store data and run applications. However, in abandoning traditional on-premise computing and data storage for a cloud-based solution, many companies fear what will happen if the remote data center housing the cloud experiences its own crash.

“As Amazon’s recent outage at its Dublin data center showed, it is possible, though unlikely, that a data center might go down,” says Indu Kodukula, executive vice president, products, and chief technology officer of SunGard Availability Services. “That is a risk that companies are able to mitigate with a managed multisite availability solution.”

Smart Business spoke with Kodukula about how managed multisite availability is changing what’s possible in the cloud, and how your business could benefit.

What is managed multisite in a cloud environment, and why is it important?

If you look at managed multisite availability, each term essentially defines what the service is. ‘Multisite’ is the next logical evolution of our cloud platform. Instead of having one, we now have multiple sites where the cloud is available. That allows the cloud to be geographically redundant.

No matter how unlikely, a cloud infrastructure that is housed in a single data center has the potential to be the victim of either natural or man-made disasters. To provide a better level of availability, a cloud provider needs to be able to keep services and customer environments up and running, even in the event of an entire site disaster.

‘Availability’ fundamentally refers to the fact that most applications can only tolerate a certain amount of downtime that is directly related to the business value of the application. Most cloud developers use the cloud to run development and test environments. To ensure high availability in a production setting, a cloud environment should be built from the ground up to run production applications and customer environments, which have a higher availability threshold than development and test environments. A multiple-site cloud environment provides availability for an application that is commensurate with what’s appropriate.

The last aspect is ‘managed.’ In contrast to many cloud service providers that essentially provide DIY service, a business should find a provider that builds the environment for information technology (IT) from day one.

Why is it important to have a cloud environment with IT capabilities?

If you are looking for a cloud environment for production that provides all the capabilities and processes expected with IT — change management, security, operations control, the ability to resolve problems and issues — those are all part of the managed services that should be provided on top of the cloud environment. That means companies can have a tremendous level of comfort, trusting the production environment to deliver the level of availability they need.

That is very different than the DIY model that many cloud developers provide, in which you could be left to fend for yourself.

Is cloud computing for everyone?

There are several points that companies typically walk through when making the decision to use the cloud. The No. 1 reason that companies want to use the cloud for their applications is to align their spending with business value. Increasingly, enterprise IT has become very capital intensive. Companies don’t know up front what business return they would receive from a capital investment in enterprise IT, but they would make the investment anyway and hope that it all works out.

Using the cloud is fundamentally different, because you only pay for the data or compute resources that you use or store; you don’t have hardware to buy or install; and, in a managed environment, you don’t need internal resources to manage your IT. Here, the service provider takes responsibility for maintaining the software, servers and applications.

Therefore, companies utilizing the cloud for enterprise IT can make investments that are in line with the business value. Then, they can invest more capital into infrastructure and resources as the application supports it and as the business becomes more successful.

However, there are multiple concerns. The moment something moves outside your firewall, you don’t own it anymore, so you have to decide what to keep in house and what to move to the cloud. Others are concerned about performance and availability of data in the cloud. The multisite availability feature is most useful for applications that can tolerate only about four hours of downtime a year, that need geographic redundancy, or that are responsible for keeping the business up and running when you don’t want the internal responsibility of running the application yourself.
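For context, ‘about four hours of downtime a year’ corresponds to roughly 99.95 percent availability; the conversion is simple arithmetic, sketched here in Python:

    # Convert an availability percentage into allowed downtime per year, to put
    # "about four hours of downtime a year" (roughly 99.95 percent) in context.
    HOURS_PER_YEAR = 365 * 24  # 8,760

    def allowed_downtime_hours(availability_pct):
        return HOURS_PER_YEAR * (1 - availability_pct / 100)

    for pct in (99.9, 99.95, 99.99):
        print(f"{pct}% availability -> {allowed_downtime_hours(pct):.1f} hours of downtime per year")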

How can businesses get started?

The first step is to do a virtualization assessment. Then, decide which processes to virtualize. Next, take the virtualized application and decide what to keep in house and what to move outside your firewall.

Look for a cloud service provider that will guide you through the process, helping you understand and decide what applications should stay in house, either because they are not ready to be virtualized or they are too tied into business, and which applications can be moved safely. The goal is to create a roadmap for moving applications to the data center.

What applications are good fits for the cloud?

If you have an application that supports your business and has such strong growth that it will need 10 times more resources next year than it does today, the elasticity the cloud offers is a great option. If the application also uses modern technology, which is easier to virtualize, that combination makes it compelling to move that application to the cloud.

The business argument for moving older technology, like ERP, to the cloud is much less strong.

Indu Kodukula is executive vice president, products, and chief technology officer with SunGard Availability Services. Reach him at indu.kodukula@sungard.com.

Published in Philadelphia

What to look for in an enterprise cloud plan

Moving to the cloud. It’s what everyone seems to be talking about lately. It’s what everyone seems to be recommending. But, it’s not as easy as it sounds. Before migrating to a cloud environment, there are several facets you need to think through.

Smart Business learned more from David Feinglass, director of Solutions Engineering at Latisys, about the key components that make up a smart cloud migration plan.

What does a company need to think through before migrating to a cloud service provider?

Cloud migration is a big step for any organization and, potentially, a risky decision if not planned for properly. There are a few questions that need to be answered before you even think about migration. For instance, why does your business require elastic, on-demand computing? Are you making the move for server virtualization, consolidation, disaster recovery, storage, cost savings, capital expense reduction, etc.? This question rarely applies to an enterprise’s IT infrastructure as a whole — it is more often something that needs to be aligned for each application.

If cloud migration does make sense for an organization, what are the first steps?

Before taking on a migration to a cloud service provider, I always recommend organizations think through three main objectives.

1) Understand what type of migration makes the most sense for your organization:

Staged migration — This requires moving single departments and services over, one at a time, to keep day-to-day business as close to as-is as possible during the migration.

Forklift migration — This requires temporarily cutting out service during the migration process. It’s the quickest and least expensive option, but the scariest for most companies to take on because of that period of downtime.

Phased migration — This requires simultaneously running redundant systems during the physical transition. This is likely the most costly option, but for a company that demands no service interruptions, it may be an essential choice.

2) Understand exactly what you are going to be migrating:

From servers to networking, storage, replication and recovery services, licenses and more, it’s essential that you think through exactly what parts of your infrastructure are going to be a part of your cloud, and whether or not you want to own, rent or buy these capabilities.

3) Understand exactly how much internal and external IT support your migration will require:

It’s probably going to be more work than you think. So, make sure you understand the expertise needed to pull off a successful migration, and partner with the right tools and third-party resources in order to make it happen.

How do you put together the right migration plan for your organization?

There’s a different ‘right’ migration plan for every business. Size matters. Speed matters. But it is what you’re moving that often matters most. Looking at each of your applications and services to understand if they lend themselves to virtual machine migration portability and federated cloud environments is a good first step.

A great way to map VM migration portability is to put together a matrix. On one axis, identify all of your applications, such as Web (both external-facing and intranet sites), e-mail, CRM, analytics, billing, testing/staging, help desk, document management, etc. On the other, identify what type of service each application represents: Is it revenue generating, frontline support, analytics, back-office support, seasonal services, etc.?

Based on the type of service your applications fall under, you should be able to determine which applications are the best fit for portability and which aren’t really portable. Many applications will be judged on an individual-case basis for your organization, but typically the seasonal or one-time, project-based applications are the best fit, along with any application that isn’t both revenue generating and the only instance of that application.
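A minimal Python sketch of that kind of matrix follows; the applications, attributes and classification rule are hypothetical and would be tailored to each organization:

    # Hypothetical sketch of an application/service-type matrix used to flag
    # candidates for cloud migration. The entries and the rule are illustrative only.
    applications = {
        "corporate website":  {"service": "frontline support", "revenue": False, "seasonal": False},
        "billing":            {"service": "back office",       "revenue": True,  "seasonal": False},
        "testing/staging":    {"service": "one-time project",  "revenue": False, "seasonal": False},
        "holiday promo site": {"service": "seasonal",          "revenue": True,  "seasonal": True},
    }

    for name, a in applications.items():
        good_fit = a["seasonal"] or a["service"] == "one-time project" or not a["revenue"]
        print(f"{name}: {'good portability candidate' if good_fit else 'review individually'}")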

What should an organization look for in a cloud service provider?

You really need to do your homework and make sure the service provider is a good fit in terms of data center capabilities, security and support. Assess your workload history together and talk through possible solutions. Understand what their role is going to be in terms of strategy, development and the physical migration itself. And get a clear and conservative understanding of what your downtime, if any, will be during the move. The more their pitch sounds like they think migration is a ‘cookie-cutter solution,’ the more you need to be on guard. This move is a strategic decision for your company, and it’s essential you take the right steps now.

David Feinglass is director of Solutions Engineering at Latisys. Reach him at david-feinglass@latisys.com.

Published in Orange County

Codonics Inc. is combating drug labeling errors in the medical field with the Safe Label System. This technology increases syringe labeling compliance and improves patient safety in operating rooms and other areas where syringes are prepared.

President and CEO Peter Botten and his staff collaborated with doctors and medical professionals from Massachusetts General Hospital to create the safety-promoting system. By introducing barcode technology, the Safe Label System enables hospitals to more easily comply with industry regulations. Users simply scan a drug vial with the built-in barcode reader; the system checks the hospital’s formulary to ensure the drug is allowed to be used, then provides visual and audible confirmation of the drug name and concentration, allowing for real-time safety checks. A full-color label is automatically printed while the syringe is prepared, and accurate electronic documentation is recorded.
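Reduced to a purely illustrative sketch (the checks, data and function names below are assumptions, not Codonics’ implementation), the workflow looks something like this:

    # Illustrative-only sketch of the scan -> formulary check -> confirm -> print ->
    # record workflow described above; not Codonics' actual implementation.
    def prepare_syringe_label(barcode, formulary):
        drug = formulary.get(barcode)
        if drug is None:
            raise ValueError("Drug not in hospital formulary; do not prepare")
        announce(f"{drug['name']} {drug['concentration']}")  # audible/visual confirmation
        print_label(drug)                                    # full-color syringe label
        record_preparation(drug)                             # electronic documentation

    def announce(text): print("CONFIRM:", text)
    def print_label(drug): print("PRINT:", drug["name"], drug["concentration"])
    def record_preparation(drug): print("LOGGED:", drug["name"])

    # Hypothetical example use
    formulary = {"EXAMPLE-NDC-001": {"name": "Example Drug", "concentration": "10 mg/mL"}}
    prepare_syringe_label("EXAMPLE-NDC-001", formulary)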

When the Safe Label System is integrated with an anesthesia information management system and electronic health record, the system’s data can tie directly to the electronic patient record and includes additional safety features such as automatic identification of the medication and concentration of each syringe as well as warnings for expirations, patient allergies and adverse drug interactions.

Studies by Massachusetts General Hospital investigators showed that Codonics’ technology not only provides full compliance with regulations set by both The Joint Commission and the American Society of Anesthesiologists, but also improves the efficiency of the clinicians who use the system. Of the 1,090 syringes evaluated in the baseline study, 593, or 54.4 percent, were prepared by clinicians. Only 269 of those met the standards for proper labeling. In another study, conducted on 340 labels after the system was implemented, 100 percent of those prepared by clinicians were fully compliant with requirements.

Codonics’ innovative approach to patient safety and efficient hospital operations has made the labeling of medications both fast and accurate.

How to reach: Codonics Inc., (440) 243-1198 or www.codonics.com

Published in Akron/Canton