Latest Cloud Computing posts

The long-term future of 'the cloud'

17 Sep 2012

Figuratively speaking, “the cloud” does not have much of a future, because the term will become redundant and using it will sound dated. In the long term, public cloud will cease to be seen as a subset of the way information and communications technology (ICT) is delivered and will instead be integral to it. In fact, it might be the other way around; in the long term, running IT in-house will come to be seen as a quaint and unusual practice.

The majority of businesses will consume applications and services over wide area networks from what was once called the public cloud.  However, there will be a “long tail”, with more conservative organisations insisting that they can still run IT better than external service providers whose whole business model is built on IT. Some large organisations will also continue to invest in new in-house systems (often deployed as private cloud infrastructure).

Those organisations that fully embrace cloud services will no longer need the type of IT departments that most have today, which run servers and patch software. Instead they will have service delivery specialists who focus on making sure lines of business and their employees have access to the applications they need, and that the use and storage of data is secure and compliant; these will largely be business-focused rather than technology-focused roles.

This does not mean the end of the IT professional; those jobs will migrate from end user organisations to public cloud service delivery specialists. Here the true technologist will be in their element, working for organisations whose raison d’être is the delivery of high quality IT services. Whether it is the data centre, hardware/software infrastructure or applications, these professionals will be focused on delivering effective services that will drive the success of the cloud.

Of course, individual providers will come and go, but the direction of travel is clear: away from in-house and towards the cloud. This series of blogs has argued the case that public cloud service providers will succeed because in many cases they have the best platforms for the job: more secure, more available and more cost-efficient. Furthermore, the compliance challenges differ little from those that exist for the use of internal IT.

The four top use cases put forward for public cloud infrastructure services in an earlier post – as an application test bed, as a failover platform, for handling peak loads and for planning for the unexpected – will drive early adoption and increase confidence. However, as was pointed out in another post, the majority of consumption of public cloud platforms will be indirect, through the use of software as a service (SaaS).

This is the real point about cloud and information technology. Facebook and Twitter users do not think of themselves as IT users, they are just consuming applications that allow them to communicate with others. The same will be true of businesses; they will no longer need to think about IT but simply about applications. As was pointed out in another earlier post – “It’s the application, stupid”.
 
Originally posted at Lunacloud Compute & Storage Blog

Bob Tarzey, Analyst and Director, Quocirca.

Cloud Chains – Integrating beyond boundaries

13 Sep 2012

If cloud computing evolves as it should, the end result for organisations will be a mixed environment of internal and external IT platforms that stretches beyond their direct control into the value chain of suppliers and customers, and beyond to others providing services along a complex business-to-business (B2B) chain.

Historically, organisations have been able to exert a level of control through ownership of the IT stack from hardware through operating systems to applications, and have been able to ring-fence their systems through identifying where the responsibilities of their organisation ended, generally at a point defined by the use of a firewall.

However, more innovative organisations have found that, to be more competitive in their markets, they need to be able to exchange information in a more dynamic and open manner across these extended value chains. Such information flows still have to be secure and auditable – and this is where even the most innovative organisations begin to struggle.

In the B2B space, certain players have provided services for many years – vendors such as GXS and Sterling Commerce (now part of IBM) have provided managed services where data from one organisation could be transferred to another, anywhere on the planet, maintaining data fidelity and providing full auditability of what had been sent, at what time, to which organisation. Little did these vendors know that they were doing cloud computing years before the term came into common parlance.

As time went on, extra capabilities were added to their services – for example, hosting and managing catalogues of goods; creating the paperwork needed for the physical transfer of goods across geographic borders; and creating and managing auctions and reverse auctions of goods across a broad group of possible customers. The broader adoption of solid internet standards has made the reach of such vendors more inclusive – small and medium businesses (SMBs) do not need to install expensive software on their premises; they can just use web-based portals to deal with their customers and suppliers for the various requests for “X” (requests for information (RFIs), proposals (RFPs), quotes (RFQs), etc.), as well as catalogues, legal paperwork, straight-through order processing and so on. This all enables them to operate as true peers against their larger competitors in highly stressed markets.

However, is there still more that can be provided?

Certainly.  The advent of cloud services is changing the way technology can be provisioned.  As the take-up of infrastructure, platform and software as a service (I/P/SaaS) increases, organisations will have less need to worry about the hardware their applications run on, and they will not have to feel so constrained by what they already have in place when looking to bring in new functionality to support their needs.  This starts to drive organisations towards a more “functional” view of technology – out go the large, monolithic enterprise applications that we have all grown up with; in comes the “composite” application, built up from technical services as needed to meet the needs of a specific business process.

This requires some form of cloud service provider that can act as a broker, taking responsibility for managing the catalogue of technical services available to an organisation and providing the integration services that can bring these together on the fly – supporting not just the single organisation’s process needs but also high-fidelity information and data exchange throughout the value chain.  In Quocirca’s view, this will be best managed by those who already have a great deal of demonstrable domain expertise in dealing with highly mixed environments – and the B2B managed services vendors fit the bill nicely.
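To make the “composite application” idea a little more concrete, here is a minimal sketch of a business process assembled on the fly from catalogued technical services. Every name in it – the catalogue, the services, the broker – is invented purely for illustration; a real B2B managed service would expose its own (remote) APIs rather than local functions.

```python
# Hypothetical sketch of a "composite" application assembled from catalogued
# technical services via a broker. All names are invented for illustration;
# in practice each step would be a remote web-service call, not a local function.

from typing import Callable, Dict

CATALOGUE: Dict[str, Callable[[dict], dict]] = {}  # the broker's service catalogue

def register(name: str):
    """Register a technical service in the broker's catalogue."""
    def wrap(func: Callable[[dict], dict]) -> Callable[[dict], dict]:
        CATALOGUE[name] = func
        return func
    return wrap

@register("catalogue.lookup")
def lookup_item(ctx: dict) -> dict:
    ctx["unit_price"] = 9.99          # stand-in for a hosted catalogue lookup
    return ctx

@register("order.place")
def place_order(ctx: dict) -> dict:
    ctx["order_id"] = "PO-0001"       # stand-in for straight-through order processing
    return ctx

@register("customs.paperwork")
def customs_paperwork(ctx: dict) -> dict:
    ctx["customs_doc"] = "generated"  # stand-in for cross-border documentation
    return ctx

def run_process(steps: list, ctx: dict) -> dict:
    """Compose a business process on the fly from catalogued services."""
    for step in steps:
        ctx = CATALOGUE[step](ctx)
    return ctx

if __name__ == "__main__":
    result = run_process(
        ["catalogue.lookup", "order.place", "customs.paperwork"],
        {"sku": "WIDGET-42", "quantity": 100},
    )
    print(result)
```

The design point is simply that the process definition (the list of steps) is data, so the broker can recompose it as business needs change, without rebuilding a monolithic application.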

Quocirca recommends that organisations reviewing how they manage their B2B interactions look towards a managed service that provides tightly managed and audited exchanges of information, in any form required, between a mix of senders and receivers.  When selecting a provider, it is well worth considering how well it will be able to support your organisation in the coming years.  Make sure the right questions are asked about what extra services the provider expects to offer itself as time progresses – and how it proposes to manage the use of external services that impinge on its own.

If the vendor can show a clear roadmap that includes the embracing and integration of external services, then all well and good.  If not, Quocirca’s recommendation would be to look elsewhere.

Quocirca’s report, Maintaining the chain, written in conjunction with GXS, is freely available here.

It’s all in the detail – just what is cloud recovery all about?

11 Sep 2012

In a previous post, Quocirca discussed how the cloud can be used to provide levels of business continuity and disaster recovery to meet an organisation’s needs around its own business risk profile.

However, data can be stored in many formats, and the granularity of this storage can have an impact on how well an organisation can recover information, function or transactions. There are three basic levels to consider: files, storage images and applications.

First, at the file level, the most common data recovery need is the loss of a single file. A user may have deleted the file by mistake, may have over-written it or may just have mislaid it. The best way to recover such a file is to have a mirrored copy of the primary file store it resides in – provided that there is a degree of intelligence built in. Direct mirroring of all actions carried out on a file store not only ensures that all files saved are replicated – it also means that all files deleted or modified are reflected in the mirror as well. Therefore, a user deleting a file in one file store deletes it in all mirrors – and so is no better off when trying to recover it.

Phased mirroring can be implemented, but is not of much use. Here, a time delay is built into the mirroring, so that if a file is deleted or changed there is a grace period in which the user can change their mind before the action is reflected in the mirror. However, the delay also applies to file saves – and just how long should such a grace period be: a few seconds, minutes, hours, days?

A far better way is to build in basic versioning; here, a number of copies of the file can be kept as they are saved. This should reflect the importance of the data held within the file – information of lower importance may just have one earlier version stored, whereas more important project documents or information that may be required to feed into governance and compliance systems may have many more versions enabled.
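As a minimal sketch of the idea (the paths and retention count are arbitrary, and a real cloud backup service would implement this server-side as part of its storage platform), basic versioning can be as simple as keeping the last N copies of a file every time it is saved:

```python
# Minimal sketch of save-time versioning: keep the last N copies of a file
# so that an accidental delete or overwrite does not destroy the only copy.
# Paths and the retention count are illustrative only.

import shutil
from datetime import datetime, timezone
from pathlib import Path

def save_with_versions(src: Path, store: Path, keep: int = 5) -> Path:
    """Copy src into store, then prune all but the newest `keep` versions."""
    store.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    dest = store / f"{src.name}.{stamp}"
    shutil.copy2(src, dest)

    versions = sorted(store.glob(f"{src.name}.*"))
    for old in versions[:-keep]:          # drop the oldest versions beyond the limit
        old.unlink()
    return dest

# More important files would simply be saved with a higher `keep` value, e.g.:
# save_with_versions(Path("project_plan.docx"), Path("/mnt/backup/plans"), keep=10)
```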

Doing such version control within an organisation can rapidly give rise to massive growth in storage requirements, and many would struggle to put in place the infrastructure required to manage this. This is where cloud storage comes into its own, as it is easy to "thin provision" storage volumes and manage them dynamically, sharing the underlying costs of a mass storage platform between the cloud provider’s many customers. Another key benefit of using cloud storage in this way is that it provides an abstraction of the file from the user’s immediate environment (i.e. it is off-site) – the data is protected from device failure or even from site failure. Data replicated within an organisation’s own datacentre may not survive a catastrophic large-scale failure.

Beyond replicating files, the second level is the need to back up disk images. A user with a full-function device (such as a PC or laptop with installed software and data) can find themselves incapable of working should such a device fail. Rebuilding a machine can take a long time – and if the associated data has been lost, the cost to the organisation can be high.

Taking a full image of a device’s storage systems means that, on failure, the device can be rebuilt very rapidly – or a new device can be provisioned using the saved image and the employee can soon be working again. An image can also be mounted as a virtual device, giving the user access to a virtual desktop while a new physical device is provisioned. Again, the cloud provides a cost-effective means of enabling such functionality, without the customer organisation having to own all the underlying hardware, operating systems and software stacks that underpin it – and again with the benefit of the storage being off-site.
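For illustration only, here is a sketch of the raw capture step on a Linux machine using the standard dd tool; the device path and destination are hypothetical, and a real backup agent would also quiesce the file system, compress, encrypt and ship the image to off-site cloud storage on a schedule.

```python
# Sketch of capturing a full image of a device's storage with dd on Linux.
# The device path and destination are hypothetical; this shows only the raw
# capture step, not the scheduling, compression, encryption or cloud upload
# that a production backup service would add.

import subprocess
from datetime import date
from pathlib import Path

DEVICE = "/dev/sda"                                   # disk to image (hypothetical)
DEST = Path(f"/backups/laptop-{date.today()}.img")    # where the image lands

DEST.parent.mkdir(parents=True, exist_ok=True)
subprocess.run(
    ["dd", f"if={DEVICE}", f"of={DEST}", "bs=4M", "conv=sync,noerror"],
    check=True,
)
print(f"Image of {DEVICE} written to {DEST}")
```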

Some may think that using such image files negates the need for file mirrors. As all files are included in an image backup, it is possible for individual files to be recovered from it. However, imaging cannot realistically be continuous, so files can be lost between image creation times. To recover a file, the correct image has to be identified, mounted and opened, and the file system then interrogated to give the user the capability to recover the one file they are looking for – this is rarely the best way to do things, and it does not easily allow for versioning.
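To make that point concrete, here is a sketch of what single-file recovery from a raw image involves on a Linux host. The image name, mount point and file path are hypothetical, root privileges are required, and it assumes the image holds a single mountable file system – several manual steps just to get one document back.

```python
# Sketch of recovering one file from a previously captured disk image on Linux.
# The image name, mount point and target file are hypothetical; root privileges
# and a single mountable filesystem inside the image are assumed. It illustrates
# why image-level backups are a clumsy way to restore a single file.

import shutil
import subprocess
from pathlib import Path

IMAGE = Path("/backups/laptop-2012-09-10.img")    # the "correct" image must be found first
MOUNT_POINT = Path("/mnt/image-restore")
WANTED = Path("home/user/Documents/report.docx")  # path inside the image
DEST = Path("/tmp/report.docx")

MOUNT_POINT.mkdir(parents=True, exist_ok=True)

# Mount the image read-only via a loop device so the backup cannot be altered.
subprocess.run(["mount", "-o", "loop,ro", str(IMAGE), str(MOUNT_POINT)], check=True)
try:
    shutil.copy2(MOUNT_POINT / WANTED, DEST)      # interrogate the file system, copy the file out
finally:
    subprocess.run(["umount", str(MOUNT_POINT)], check=True)

print(f"Recovered {WANTED} from {IMAGE} to {DEST}")
```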

The third level is the need to keep full applications running. In the world of virtualisation, it is now possible to package a complete application up as a virtual machine – this includes everything from the operating system upwards in the stack, and may include an application server platform, middleware connectors, additional services the application depends on, and so on. Such virtual machines (VMs) should not include any live data, however, as that would mean the standby VM has to be kept synchronised with the live instance at all times. Data should be stored outside of the VM and mirrored separately. By creating backups of VMs, should anything happen to the live instance (e.g. a failure in the physical underpinnings, corruption of the image or whatever), a new instance of the image can be spun up rapidly to enable work to continue.
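A minimal sketch of that recovery pattern is shown below. The provider client is deliberately hypothetical – a stand-in for whichever IaaS API is actually in use – and the point is the flow: boot a fresh instance from the saved VM image, then re-attach the separately mirrored data volume, rather than restoring data baked into the image itself.

```python
# Sketch of recovering an application packaged as a VM image. CloudClient is a
# deliberately hypothetical stand-in for a real IaaS API; the pattern is what
# matters: launch a replacement from the backed-up image and re-attach the
# separately mirrored data volume.

class CloudClient:
    """Hypothetical IaaS client used only to illustrate the flow."""

    def launch_instance(self, image_id: str) -> str:
        print(f"launching new instance from image {image_id}")
        return "instance-002"

    def attach_volume(self, instance_id: str, volume_id: str) -> None:
        print(f"attaching mirrored data volume {volume_id} to {instance_id}")

    def repoint_service(self, name: str, instance_id: str) -> None:
        print(f"pointing service '{name}' at {instance_id}")

def recover_application(cloud: CloudClient) -> None:
    # 1. Spin up a replacement from the backed-up VM image (OS + application stack only).
    new_instance = cloud.launch_instance(image_id="crm-app-v1.4")
    # 2. Attach the live data, which is mirrored outside the VM image.
    cloud.attach_volume(new_instance, volume_id="crm-data-mirror")
    # 3. Direct users at the replacement instance.
    cloud.repoint_service("crm", new_instance)

if __name__ == "__main__":
    recover_application(CloudClient())
```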

Each of the three levels of granularity has its part to play in how an organisation should seek to ensure it has the best approach to business continuity and disaster recovery. Although all three could be carried out in-house, cloud computing brings technical and business benefits to the fore – from domain expertise in managing data, through economies of scale in providing large storage capabilities, to multi-level data management in which the provider’s own backup and restore policies build on your organisation’s own. For many organisations struggling to “do more with less”, the cloud is the only way to gain access to such levels of technical information assurance – it brings large-organisation capabilities into the reach of many mid-market and small and medium enterprise (SME) organisations.

In fact, such capabilities are increasingly available from specialist providers of business continuity and disaster recovery services, and many of these do not even run their own storage infrastructure. How? You’ve guessed it; they turn to other cloud service providers for the functionality itself.

Originally posted at Lunacloud Compute & Storage Blog

The cloud – business continuity at affordable pricing?

05 Sep 2012

Many organisations look to the cloud to provide some level of contingency against their own systems going down, be it off-site data backup, failover servers for business applications, or the use of high-availability servers and software. The level of disaster recovery (DR) and business continuity (BC) a given organisation chooses to put in place will vary according to its own risk appetite and budget.

The degree to which cloud services are suitable for providing a safety blanket will vary from one case to another.  So which level is right for your organisation?

The following use case scenarios provide some guidance, starting with the most basic level of data backup and moving to full business continuity.

1. Simple data backup – the cloud can act as an external storage system where files can be stored so that, if there is a problem with on-premise storage, individual files can be recovered, or images of specific machines can be restored to a device.  This can be very cost-effective – but as with similar on-premise solutions, there will be a level of downtime while the data is identified and restored to the live environment.  Also, large amounts of data will take a long time to recover over the internet – which is why Quocirca recommends that large restores be copied from the cloud to a local physical device, which is then couriered to the customer’s site and loaded onto the target storage system at local area network (LAN) speeds.  However, the service provider may be able to offer additional archiving services that could work well for compliance needs (as Quocirca points out in a previous blog post).

2. Secondary data storage.  The cloud can be used as the place where a mirror of existing data is kept.  Then, when there is a failure in an on-premise data storage device, systems can failover to use the data being stored in the cloud.  Although this may look as if it provides good levels of business continuity, organisations must bear in mind that providing data to on-premise applications from outside the data centre may lead to latency issues, and that the synchronisation of live data may not be as easy as first thought.

3. Primary data storage – no data is stored on-premise; it is instead held directly in the cloud. Although this should provide better data availability, given how the cloud provider architects its storage platform, the latency between the on-premise application and the data will generally make this a non-viable option.  However, data backup and restore within the provider’s environment can now be carried out at LAN speed.

4. Applications and data are held in the cloud, with data backup and restore being integrated.  This moves the application and data closer to each other, so that direct latency is no longer an issue.  As long as the application supports web-based access effectively, the user experience should be good.  Should the primary data storage be impacted, restores can be carried out at LAN speed, so the recovery time objective (RTO) is shortened.  However, this only provides data continuity – if the application goes down, the organisation will still be unable to carry out its business.

5. Applications being used as virtual machines with data being mirrored.  This is getting closer to real business continuity. By using applications that have been packaged as virtual machines, the failure of a single instance of the application can be rapidly fixed by just spinning up a new instance.  Data needs to be covered as well, and should be mirrored to a different storage environment so that there is a high level of data availability in place. Such an approach can lead to recovery times measured in a few minutes, and will be enough for many organisations.  This is also known as a “cold standby”, as the standby virtual machines are not running all the time.

6. Stand-by business continuity.  Here, the stand-by application virtual machine is permanently “spinning” (i.e. provisioned), but is not part of the live environment.  On the failure of the live image, pointers can be moved over to the stand-by image in a matter of seconds, using existing or mirrored data storage. This is also known as “hot standby”, as the virtual machines are ready to take over as soon as a failure occurs (a simple sketch of this switch-over follows after this list).

7. Full business continuity.  Here, everything is provisioned to at least an “N+1” level.  Multiple data storage silos are mirrored on a live basis and multiple live application virtual machines are maintained.  Workloads are balanced between the virtual machines, and two-level commit is used on data to ensure that any problem with the data itself is not mirrored across all the data stores at the same time.  This is the approach used by large organisations that have to be able to continue working through a systems failure – but it is outside the cost capabilities of the majority of other organisations.  Cloud computing can bring such a capability into the reach of more organisations through economies of scale.
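As promised in level 6, here is a minimal sketch of a hot-standby switch-over: a health check on the live endpoint and, on failure, a repointing of traffic to the already-running standby. The endpoints and the repointing mechanism are hypothetical; in practice the repointing would be a DNS update, a load-balancer change or similar, not a local dictionary.

```python
# Minimal sketch of a hot-standby switch-over (level 6 above): poll the live
# endpoint and, if it stops responding, move the service pointer to the
# standby, which is already provisioned and running. The URLs and the
# repointing step are hypothetical; real deployments would update DNS, a load
# balancer or a reverse proxy rather than an in-memory dictionary.

import time
import urllib.error
import urllib.request

ENDPOINTS = {
    "primary": "https://primary.example.com/health",
    "standby": "https://standby.example.com/health",
}
SERVICE_POINTER = {"orders-app": "primary"}   # stand-in for DNS / load-balancer config

def healthy(url: str, timeout: float = 3.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def monitor(poll_seconds: float = 10.0) -> None:
    while True:
        if SERVICE_POINTER["orders-app"] == "primary" and not healthy(ENDPOINTS["primary"]):
            SERVICE_POINTER["orders-app"] = "standby"   # seconds of downtime, not hours
            print("primary failed health check; traffic repointed to standby")
        time.sleep(poll_seconds)

# monitor()  # would run indefinitely; included only to illustrate the pattern
```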

Obviously, there are cost issues as the amount of cover increases through these levels.  This is why any organisation must first understand its corporate risk profile, building up a picture of exactly which business risks it cannot afford to carry and which it is capable of carrying.  Once a risk profile has been created, the right level of technical “insurance” can be sought from a cloud or hosting provider.  The cloud makes the costs less of an issue, as each level is offset across the number of organisations sharing the infrastructure.  Therefore, an organisation that has previously regarded business continuity as out of its reach and has settled for disaster recovery can now look to the cloud to create a more business-capable platform.

Originally posted at Lunacloud Compute & Storage Blog

Clive Longbottom, Service Director, Business Process Analysis, Quocirca

 

Compliance in the cloud

30 Aug 2012

Earlier in the year Quocirca was asked a surprising question along these lines: “If we use a cloud-based storage service and there is a leak of personal data, who is responsible, us or them?” Make no mistake, the answer is that, regardless of how and where data is stored, the responsibility for the security of any data lies with the organisation that owns it, not with its service providers.

In general terms, regulators are mainly concerned about personally identifiable data (PID). In the UK, the Data Protection Act (DPA) requires any company that processes PID to appoint a data controller to ensure the safe processing and storage of such data. The controller should indeed be wary of cloud-based storage services when it comes to compliance with the DPA and the EU Data Protection Directive, which is being updated this year.

As was pointed out in a previous Quocirca blog post, “The highly secure cloud”, this is not because cloud storage services are inherently less secure; indeed, in many cases such services are likely to be more secure than internally provisioned storage infrastructure. The danger comes from how such services are used. There are four main use cases which data controllers should be wary of:

1 – Storage provided as part of an infrastructure-as-a-service (IaaS) offering. Here the provider is simply providing a managed storage facility. As long as the provider is well selected, the base infrastructure should be more than secure enough; it will be how it is used that matters, and that is down to the buyer of the service. There are two caveats:

· The EU Data Protection Directive requires that personal data be processed within the physical boundaries of the EU (unless covered by a safe-harbour agreement).

· Some countries have far-reaching laws when it comes to the ability to request access to data, most notoriously the US Patriot Act. Safe harbour does not protect against this.

So the physical location of the storage facility used must be defined and guaranteed in the contract with the service provider.
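Contractual guarantees can also be backed up with a simple technical check. As one concrete illustration only (assuming, purely for the example, that the storage is Amazon S3 accessed via the boto3 SDK – the post itself names no provider), a data controller could periodically verify where each storage bucket actually lives:

```python
# Illustrative check of where cloud storage actually resides, assuming (purely
# for the example) Amazon S3 and the boto3 SDK; the provider choice and the
# allow-list are not drawn from the post. A contractual location guarantee is
# still required; this only verifies the current state.

import boto3

ALLOWED_PREFIXES = ("eu-",)   # EU regions only, per the Data Protection Directive

def check_bucket_locations() -> None:
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # us-east-1 is reported as None by this API call.
        region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
        status = "OK" if region.startswith(ALLOWED_PREFIXES) else "OUTSIDE ALLOWED REGIONS"
        print(f"{name}: {region} [{status}]")

if __name__ == "__main__":
    check_bucket_locations()
```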

2 – Backup-as-a-service. Here the provider takes a copy of your data and promises to restore it should the original be lost. This may be a short-term backup service or a long-term archiving service. The main difference here is that the provider is now responsible for selecting where the data is stored, so the service level agreement must again cover physical locations and state that the provider will not use primary or secondary locations that fall outside the compliance boundaries.

3 – Software-as-a-service (SaaS). Here a subscription is made to an on-demand application that will process and store data. Again, it must be understood where data will be stored and processed. Many of the big US-based providers (for example salesforce.com) have safe-harbour agreements with the EU, so it is OK for personal data to be processed and stored in their data centres outside the EU as part of a specific SaaS agreement.

4 – Consumer cloud storage services. These are the most insidious threat and open up a wild frontier, as they are often provided on a freemium basis. They are attractive to users who want to back up their own personal data and access it from multiple devices. However, if business data gets caught up in the mix, the data controller has lost control. Addressing this requires a mix of end-point security, mobile device management, data loss prevention and web access control that is beyond the scope of this article.

Well-provisioned cloud storage services are an inherently safe place to store data. However, data controllers need to understand how they are being used and have clear SLAs in place. If a provider fails to meet an SLA, the buyer can seek compensation, but by then it is too late; it is the data controller’s door that the enforcers of the DPA will come knocking on.

Bob Tarzey, Analyst and Director, Quocirca