Ensuring that an enterprise’s networks are resilient enough to support its applications, data and services is a constant worry for CIOs and IT leaders, especially when those services operate from a hosted environment.
Cloud computing has presented companies with a new set of challenges in building an enterprise’s infrastructure, not just with regard to the technology but also in terms of data protection, service level agreements (SLAs) and contracts.
Panellists at a Computing roundtable last month, sponsored by cloud provider Claranet, debated the benefits and pitfalls of placing mission-critical applications in a cloud environment.
While hosting services in the cloud is appealing because of the low cost of entry, one of the key concerns for the panel was the demand placed on bandwidth.
For example, Nathan Bishop, director of service delivery for home insurance and maintenance group HomeServe, raised the question of whether the “public internet is good enough” to support businesses using browser-based applications. “If you are accessing Microsoft 365, or any other true software-as-a-service (SaaS), you need to ensure the internet on your premises is capable.
“One of the problems we found in our early SaaS implementations was the degree of bandwidth contention we had in our public internet connectivity. People do not give it a great deal of thought,” he said.
Bishop explained that HomeServe has taken “baby steps” when placing applications into the cloud, and for the past two years has been implementing the typical entry-level SaaS applications for sales force automation and CRM. It is also considering moving some of its HR management systems to a SaaS model.
The lessons HomeServe learned highlighted the importance of planning for downtime and latency when adopting cloud services. “When implementing SaaS, we initially had a number of issues with downtime and latency. We are only just beginning to solve these problems by introducing packet-shaping and low-latency bandwidth control solutions.
“A lot of people will buy a public cloud service because of the low cost of entry. When you are reliant on the public internet, you have to think about the network at your end, whether you can buy some services at the cloud end to improve performance, and make sure you have a virtual back-to-back connection,” concluded Bishop.
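Bishop’s advice about checking whether on-premises connectivity can support a SaaS service can be sanity-checked with a simple probe before committing to a provider. The sketch below (the hostname is illustrative and not taken from the discussion) measures TCP connect latency to a SaaS endpoint and summarises median latency and jitter, the two figures that matter most for browser-based applications:

```python
import socket
import statistics
import time


def tcp_connect_latency(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Return TCP connect times in milliseconds, one per sample."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Open and immediately close a connection; elapsed time approximates
        # round-trip latency plus connection setup overhead.
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    return times


def summarise(samples_ms: list[float]) -> dict:
    """Summarise latency samples: median (ms) and jitter (population stdev, ms)."""
    return {
        "median_ms": statistics.median(samples_ms),
        "jitter_ms": statistics.pstdev(samples_ms),
    }


if __name__ == "__main__":
    # Hypothetical endpoint - substitute the SaaS service actually in use.
    print(summarise(tcp_connect_latency("login.microsoftonline.com")))
```

Consistently high jitter on a probe like this is the symptom of the bandwidth contention Bishop describes, and a signal that packet-shaping or a dedicated connection to the cloud provider may be needed.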
However, the issue may not simply be that too much load is being placed on the bandwidth capabilities, but that the network architecture itself is compounding latency problems.
Mike Spink, founder of cloud computing consultancy firm Nephologic, warned attendees that bandwidth contention issues often occur as a result of a badly designed architecture.
He said CIOs should be wary of running a fat client model, in which a locally installed application pulls its data back over the internet from the cloud provider’s network. “If you are running a fat client model over the internet, it will not work very well - you are almost certainly going to have bandwidth and latency problems,” explained Spink.
The difficulty with orchestrating the architectural design of the network in a cloud strategy is that it is often outside a CIO’s control: service providers usually manage the network design, so the CIO’s role becomes more about managing contracts.
Jo Stanford, IT director for hotel and hospitality group De Vere, argued that when you operate in a cloud computing environment, a lot of the construction involved falls under the cloud provider’s remit and a CIO’s role largely becomes about good supplier management.
De Vere has 65 hotels and conference properties across the UK and uses cloud-based applications to manage its web platform and web-booking engines, which are managed by Claranet.