Computing roundtable: CIOs say the network is key to cloud success

Our expert panel highlighted concerns over network resilience as a barrier to uptake of hosted cloud computing services by enterprise IT departments

Ensuring that an enterprise's networks are resilient enough to support applications, data and services is a constant worry for CIOs and IT leaders, especially when these services and applications are operating from a hosted environment.

Cloud computing has presented companies with a new set of challenges in building an enterprise's infrastructure, not just with regard to the technology but also in terms of data protection, service level agreements (SLAs) and contracts.

Panellists at a Computing roundtable last month, sponsored by cloud provider Claranet, debated the benefits and pitfalls of placing mission-critical applications in a cloud environment.

While hosting services in the cloud is appealing because of the low cost of entry, one of the key concerns for the panel was the demand placed on bandwidth.

For example, Nathan Bishop, director of service delivery for home insurance and maintenance group HomeServe, raised the question of whether or not the "public internet is good enough" to support businesses using browser-based applications. "If you are accessing Microsoft 365, or any other true software-as-a-service (SaaS), you need to ensure the internet on your premises is capable.

"One of the problems we found in our early SaaS implementations was the degree of bandwidth contention we had in our public internet connectivity. People do not give it a great deal of thought," he said.

Starting small

Bishop explained that HomeServe has taken "baby steps" in placing applications into the cloud: for the past two years it has been implementing the typical entry-level SaaS applications for salesforce automation and CRM, and it is now considering moving some of its HR management systems to a SaaS model.

The lessons HomeServe learned highlighted the importance of downtime and latency when planning the use of cloud services. "When implementing SaaS, we initially had a number of issues with downtime and latency. We are only just beginning to solve these problems by introducing packet-shaping and low-latency bandwidth control solutions.

"A lot of people will buy a public cloud service because of the low cost of entry. When you are reliant on the public internet, you have to think about the network at your end, whether you can buy some services at the cloud end to improve performance, and make sure you have a virtual back-to-back connection," concluded Bishop.

However, the issue may not simply be that too much load is being placed on the bandwidth capabilities, but that the network architecture itself is compounding latency problems.

Mike Spink, founder of cloud computing consultancy firm Nephologic, warned attendees that bandwidth contention issues often occur as a result of a badly designed architecture.

He said CIOs should be wary of running a fat client model - one in which the application processes data on the local machine and repeatedly pulls it back across the network from the cloud provider. "If you are running a fat client model over the internet, it will not work very well - you are almost certainly going to have bandwidth and latency problems," explained Spink.

The problem with orchestrating the architectural design of a network when implementing a cloud strategy is that it is often not within the CIO's control: service providers usually manage the network design, so the CIO's role becomes more about managing contracts.

Jo Stanford, IT director for hotel and hospitality group De Vere, argued that when you operate in a cloud computing environment, a lot of the construction involved falls under the cloud provider's remit and a CIO's role largely becomes about good supplier management.

De Vere has 65 hotels and conference properties across the UK and uses cloud-based applications to manage its web platform and web-booking engines, which are managed by Claranet.

Stanford is also in the final stages of approving proposals to put her back-office operations in the cloud. “For us it is about contracts management. I have always been a great believer in outsourcing, and in my team I have more of what I call business relationship managers,” said Stanford.

“They work with suppliers to ensure they are sweating the asset for the business rather than just looking at it. It requires a different skill. I do not have a technical background - stick me in front of a server and I would be completely lost.”

Stanford recognises that in-house technical skills are still important for cloud architecture, and De Vere tackles this by employing specialists who can challenge the architecture with service providers. "I have people on my team who work closely on the website with Claranet. They have a close relationship, drill into the technical details, ask the right questions and make sure it is built correctly," she said.

“Everyone talks about fit for purpose, but I always talk about fit for our purpose. Fit for purpose is meaningless.”

Doug Legge, IT operations manager at housing development company Berkeley Group Holdings, agreed that technical skills are still an imperative, as a CIO “needs to understand the business well enough to architect the solutions in order to get a solution that is appropriate”.

Berkeley Group Holdings began looking at cloud options in 2003, when it started to feel the burden of hosting an ever-growing network at the group's headquarters in Cobham. Berkeley moved its entire network into twin datacentres in London linked via private fibre. Since then the group has virtualised about 90 per cent of its platform and is now provided with an aggregated Multiprotocol Label Switching (MPLS) network by Claranet.

Legge plans to move further into using cloud-based services because of the flexibility it offers. “We are looking to continue with consolidation. As a construction company we are not interested in whether we have MPLS routers or blade servers. We just want to provide a service,” he said.

“We want the ability to flex up and down with the business and be able to add 300 people to the network easily if we need to. We want to maintain the availability, reliability, accessibility and customisation you get from your own private network, but we also want someone to kick if it goes wrong.”

Panellists were keen to quiz the legal experts about how much influence and control an IT director can have over cloud contracts. They were told the secret is to plan early and use SLAs to spread risk.

Nigel Miller, partner at business law firm Fox Williams, was asked whether companies have any flexibility when negotiating cloud contracts with service providers. “When you get involved in a commodity-type service, you are reliant on standard terms and conditions, which may be difficult to negotiate,” he explained.

“But when you are dealing with substantive contracts, which are more complex, you must get into the contractual process early on. For some people the contract is the last thing they look at, when really it is easier to address the issues much earlier on.”

Stanford agreed with Miller and suggested that companies looking to hosted services should consider the contract from the beginning. “The data you put in the cloud is always your data, so you should try to integrate SLAs into the contract. While you may think it is going to be happy partners all the way along, this is the real world. In commercial terms, companies do go under,” she said.

A lesson learned

Mark O’Conor, partner and cloud specialist at law firm DLA Piper, agreed but said the ability to negotiate often depends on company size. “A bespoke contract is entirely a function of price. If it is a big enough deal, you will have a negotiation. If it is a small deal, you will take the terms and conditions as they come.”

Having a clear understanding of how a cloud service works means a company can avoid problems later on. Spink said lessons could be learned from the Amazon outage in April this year, when its servers went offline and many businesses experienced downtime.

For instance, Amazon offers availability zones at additional cost, which allow customers to spread their risk by building applications across multiple zones in separate datacentres, so that a failure at a single location does not take them offline. "Those companies that suffered from the outage typically had not taken advantage of this service. They were buying a commodity service at a low price," said Spink.

"There are two lessons here: the first is that you always get what you pay for in terms of SLAs; and the second is that the cloud is a fundamentally different architecture, with different risks compared with on-premises solutions."
