Hybrid cloud integration problems are being tackled by the data fabric approach, say NetApp and Phoenix

The big public cloud services like Azure, AWS and SoftLayer have got the message that in the era of hybrid cloud they have to open up their APIs, a Computing web seminar audience hears

Removing data from a public cloud platform has always been problematic, not least because of the sheer volumes of data that tend to be involved and the time, storage and bandwidth required to do so. Along with proprietary file formats and small print in SLAs, this is one of the issues that can lead to customers becoming locked in to a particular provider.

However, this is starting to change, said Mark Scaife, head of technical development at managed services provider Phoenix.

"The entire landscape is changing. Cloud's gone through the revolution phase and it's now in the evolution phase," he said during a Computing web seminar entitled Data portability - the missing piece in the hybrid cloud puzzle today.

As the cloud matures, customers will want more choice, he went on, describing how the current situation restricts data portability.

"For businesses that have got virtual workloads on Hyper-V, for example, it's relatively easy to move those workloads to Azure but not to move to AWS or to vCloud Air because the underlying disks are not compatible," Scaife (pictured) said.

"The big cloud providers are only just understanding that problem. They're just starting to put in solutions to allow you to migrate from one particular disk type over to the other public cloud provider's choice of disk. Also you've got new smaller companies like Eucalyptus that are enabling you to pull data back from Amazon and keep it in the Amazon format in your own private data centre."

Co-panellist Martin Warren, cloud solutions marketing manager EMEA at data management software company NetApp, agreed that customer demand is forcing a rethink among the "big four".

"Those big cloud providers are going to realise there is extra money to be made [by cooperating with the others]. APIs to allow integration will become available."

Referring to the OpenStack platform, of which NetApp is a contributing member, Warren said: "We're pulling out APIs that even some of our competitors can latch on to. You're going to see a much more collaborative vision for the future. There's room for everyone in the market. It's just about where the data is best stored, and where the compute is best done."

"OpenStack is driving open APIs into enterprise," confirmed Scaife. "The situation is already improving."

However, we are not there yet.

Computing research presented during the web seminar showed that issues of integration - getting platforms that were not designed to talk to each other to do just that - were the most problematic with private cloud, closely followed by authentication and security, connectivity and latency.

"Customers want choice. They want two public cloud offerings rather than one," said Scaife, explaining why it is important that the integration difficulties are solved.

The integration issue is being confronted by some co-location providers, said Warren. These providers will allow you to store data "next to" the cloud (i.e. in the co-lo data centre) rather than in it, with low-latency links to public cloud compute allowing processing of that data by the provider of choice.

"They can give you a link to multiple clouds. You can simply switch to another provider by pointing to it. The beauty is that the data doesn't move. The data remains in the co-location data centre," said Warren (pictured right).

It is unfair to expect public cloud providers to be able to tailor their offerings to the individual customer, he went on, but combining them with managed services providers like Phoenix can allow for cheap and almost infinite scalability together with fine-grained control and regulatory compliance.

"The trick is to use those wholesale services where it makes sense ... you can mix different types of service providers and different cost models and also link back to the private cloud too. It's all about choice."

"Among our customers, the hybrid cloud is the favoured approach," added Scaife, pointing to back-up as a common requirement that the hybrid model is well placed to meet.

"A lot of public providers don't provide back-up," he said. "A lot of customers don't think about latency, back-ups, sovereignty or compliance when they use the public cloud. PCI environments can't sit in the cloud."

Warren talked of the ideal hybrid cloud as being a "data fabric", a platform-agnostic approach.

"It's an interconnectivity between different service providers, and different vendors. A data fabric is like air traffic control. I can get on a plane in London or Timbuktu and it's the same language and the same set of rules. It's applying that type of principle. It's not something we own, but that's our vision and a lot of other vendors support us in that."

The Computing web seminar Data portability - the missing piece in the hybrid cloud puzzle will shortly be available to view on demand. Register now.