Optimising cloud economics to future-proof the business

Separating compute and storage is one answer to retaining flexibility in business analytics

It's impossible to predict the future analytical needs of a business. The storage it provisions today may not be sufficient next year, particularly if workload requirements grow, say, five-fold. As data volumes explode, organisations are increasingly making significant investments in HDFS or S3 data lake infrastructure as a more cost-effective way to store "cold" data. Under such unpredictable circumstances, it's important that organisations safeguard these investments while still handling dynamic workloads and evolving data analytics requirements.

Given the time spent moving data in and out of various data stores, and the cost of provisioning storage and compute resources for worst-case scenarios, organisations should ensure their analytical database platform serves several purposes. It should provide freedom from the underlying infrastructure, maximise the organisation's investments in data lakes, and support its pursuit of cloud economics alongside on-premises and hybrid deployments, while also unifying the analytics that underpin all of this.

To optimise data analytics for the cloud, data should be placed on reliable, durable storage that is separated from compute, with an appropriate amount of local cache on the compute nodes, while still matching the performance requirements of existing workloads. This enables the organisation to provision the right amount of compute resources for its queries and the right amount of storage resources for its data, tying the costs of the data platform directly to business needs.
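As a rough, illustrative sketch of this independent sizing (the per-node query capacity, compression ratio and growth factor below are assumptions made for the sake of the example, not recommendations), compute can be sized from query concurrency while storage is sized from data volume:

    # Illustrative only: compute is sized from concurrency, storage from data volume.

    def compute_nodes_needed(peak_concurrent_queries, queries_per_node=10):
        # Assume each node comfortably sustains a fixed number of concurrent queries.
        return -(-peak_concurrent_queries // queries_per_node)  # ceiling division

    def storage_tb_needed(raw_data_tb, compression_ratio=3.0, growth_factor=1.5):
        # Compressed footprint plus headroom for expected growth.
        return raw_data_tb / compression_ratio * growth_factor

    print(compute_nodes_needed(45))   # 5 nodes for 45 concurrent queries
    print(storage_tb_needed(120))     # 60.0 TB for 120 TB of raw data

The point is that the two calculations are independent: adding data does not force the purchase of more compute, and a burst of queries does not force the purchase of more storage.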

Essentially, scaling the cluster up and down elastically to accommodate peaks and troughs in analytic workloads lowers infrastructure cost and maximises business value. By keeping cloud economics front of mind when adopting a separation of compute and storage architecture, an organisation will ultimately enjoy greater operational simplicity and workload isolation that meets both SLA and business stakeholder objectives.

Capitalising on the promise of cloud economics for analytics

Businesses across all industries are employing advanced analytics in a bid to transform the way they operate, grow and remain competitive. They can, however, enjoy greater flexibility and financial freedom if they embrace cloud economics and the separation of compute and storage.

Take the example of a medical equipment manufacturer bringing a smart medical device to market. The manufacturer runs a trial with key customer accounts before the product is formally launched and, by separating compute and storage, can optimise its resources to analyse the early-release testing data, giving it maximum reliability when it comes to the mass release.

A proven analytical database with a separation of compute and storage architecture can also strengthen retailers' ability to analyse seasonal sales patterns. While analytic workloads in retail settings vary with volume, they still require the best possible dashboard performance so that key stakeholders can access important sales metrics, regardless of the number of concurrent users.

What's more, the separation of compute and storage can also help streamline engagement analysis. Take, for example, a gaming company about to introduce a new game. Prior to launch, it must increase its analytical capacity to evaluate success based on customer insights. This cloud-optimised architecture would enable the company to deliver analytics in real time during critical stages of the launch - such as a tournament or promotional event - allowing modifications to be made that would make the game more appealing to its community.

Matching capacity to need

Matching capacity to workload offers significant cost savings. Compared with a traditional database provisioned for peak workload all year round, the separation of compute and storage can, for example, substantially reduce compute costs.
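As a back-of-the-envelope illustration (the node counts, hourly price and length of the peak period are assumed figures, not benchmarks), consider the difference between a cluster provisioned for peak demand all year and one that scales with demand:

    # Illustrative cost comparison: peak-provisioned vs elastically scaled compute.
    HOURS_PER_YEAR = 8760
    NODE_COST_PER_HOUR = 2.0      # assumed hourly price per compute node

    peak_nodes = 20               # cluster size needed during the busiest period
    baseline_nodes = 5            # cluster size needed for the rest of the year
    peak_hours = 8 * 7 * 24       # assume eight peak weeks per year

    always_peak = peak_nodes * HOURS_PER_YEAR * NODE_COST_PER_HOUR
    elastic = (baseline_nodes * (HOURS_PER_YEAR - peak_hours)
               + peak_nodes * peak_hours) * NODE_COST_PER_HOUR

    print(f"Provisioned for peak all year: ${always_peak:,.0f}")
    print(f"Scaled to demand:              ${elastic:,.0f}")

Under these assumed figures the elastic approach spends roughly a third of what the peak-provisioned cluster does, without any change to the data held in the data lake.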

The workload may of course be uniform, or only periodic. For short-term projects, the separation of compute and storage enables "hibernating", in which all compute nodes are shut down until needed; at that point a new cluster can be created, the database revived and the project resumed.
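Operationally, hibernation can be as simple as stopping the compute instances while the data remains in object storage. The sketch below uses the AWS boto3 SDK to stop and later restart a set of compute nodes; the instance IDs are placeholders and revive_database() is a hypothetical stand-in for the database-specific revive step, which varies by product:

    import boto3

    COMPUTE_NODES = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # placeholder IDs

    ec2 = boto3.client("ec2")

    def hibernate_cluster():
        # Stop the compute instances; the data in S3 is untouched and continues
        # to be billed only at object-storage rates.
        ec2.stop_instances(InstanceIds=COMPUTE_NODES)

    def revive_database():
        # Hypothetical placeholder: re-attach the cluster to its communal storage
        # location and restart the database service, using the product's own tooling.
        pass

    def revive_cluster():
        # Restart the compute instances, then run the database's revive procedure.
        ec2.start_instances(InstanceIds=COMPUTE_NODES)
        revive_database()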

Most organisations have some use cases with consistent compute requirements and others with variable compute needs. It's important that the right use cases are supported today in the knowledge that they may change in the future. Organisations should seek options that allow them to buy compute power when it's needed and to reduce storage costs in line with requirements. It's worth considering, too, that cloud deployments are likely to be part of a larger, hybrid strategy.

Businesses are under increasing pressure to understand how their data analytics platform can maximise cloud economics while delivering timely insights, particularly as the key drivers of cloud adoption are operational simplicity, just-in-time deployments and elastic scaling of resources. Cloud economics allows organisations to pay only for what they need, when they need it, so it's important to consider how to take advantage of this without being penalised for fluctuating requirements and dynamic workloads. By providing organisations with maximum flexibility, and enabling them to minimise resource burden and maximise business value, the separation of compute and storage may well be the answer.

Jeff Healey is senior director for Vertica at Micro Focus