Maintaining peak performance in a cloud migration

When migrating to cloud-based storage to support home working, it's important to keep an eye on latency and read delays

This year has seen a huge surge in the demand for cloud solutions. This has been driven by many non-remote teams having to shift to home working, which has required organisations that were still using local or on-premise storage to transition to the cloud. Additionally, teams that were already using cloud storage as a secondary or ancillary storage medium have been forced to scale up their cloud infrastructure and use it as their main storage medium.

Why cloud storage is attracting remote teams

Cloud storage has more appeal in a remote working context because of the reliability, security, and flexibility it offers teams, especially relative to on-premise storage solutions. To work remotely with on-premise storage, team members typically need virtual private networks (VPNs), which can greatly slow the pace at which teams work, while the infrastructure and applications also need in-house IT support to configure and oversee them continually.

Maintaining data security is another concern among remote teams, and one that cloud solutions address well. When properly configured, cloud environments provide security that can surpass an on-premise solution: most cloud storage providers offer 'immutable buckets', which means data cannot be deleted or overwritten, leaving crucial files less vulnerable to hackers or human error. All data can be backed up securely multiple times and is immediately accessible whenever needed.
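To illustrate what immutability can look like in practice, S3-compatible providers generally expose it as an object lock or retention setting. The following is only a minimal sketch using Python and boto3; the endpoint URL, bucket name, and 30-day retention period are placeholder assumptions, and the exact mechanism varies by provider.

    import boto3

    # Placeholder endpoint and bucket name; substitute your provider's values.
    s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")

    # Create a bucket with object lock enabled, then apply a default retention rule
    # so that objects cannot be deleted or overwritten for 30 days (assumed period).
    s3.create_bucket(Bucket="immutable-backups", ObjectLockEnabledForBucket=True)
    s3.put_object_lock_configuration(
        Bucket="immutable-backups",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )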

And lastly, a huge benefit of cloud storage for organisations (whether remote or not) is that, depending on the provider, they only pay for as much storage as they need, so they can easily scale up or down as their storage requirements change. The economies of scale offered by cloud providers also make for far more consistent performance.

Common cloud performance bugbears

However, migrating to the cloud also brings some performance challenges. Organisations making a cloud storage migration have to consider two things in particular about their new setup: latency and read delays.

Latency is the time it takes for data to reach its destination. Data cannot travel faster than the network allows, and it has to make a round trip to and from a cloud data centre; on top of that, the data centre has to process each request for data, retrieve it, and then send it back.

If the data centre architecture isn't efficient, or if your organisation is pushed into a pricing plan that puts you on lower-priority servers in a data centre, then you'll experience higher latency. If your organisation has to share many large files, or has lots of stakeholders accessing relevant information, high latency wastes time and causes unnecessary frustration.

Read delays cause similar problems, especially in teams where many people are collaborating on the same project simultaneously. A read delay is where a data centre takes additional time before a file written by one user can be read by other team members - this could be seconds, minutes, or hours. It's easy to see how read delays can grind business to a halt.
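One way to gauge read delays for yourself is to write a small test object to an S3-compatible bucket and time how long it takes to become readable. This is only a rough sketch, assuming Python with boto3; the endpoint and bucket name are placeholders.

    import time
    import boto3
    from botocore.exceptions import ClientError

    # Placeholder endpoint and bucket; substitute your provider's values.
    s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")
    BUCKET, KEY = "my-test-bucket", "read-delay-probe.txt"

    s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"probe")
    written = time.perf_counter()

    # Poll until the freshly written object can be read back.
    while True:
        try:
            s3.get_object(Bucket=BUCKET, Key=KEY)
            break
        except ClientError:
            time.sleep(0.1)

    print(f"Readable after ~{time.perf_counter() - written:.2f}s")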

Avoiding the pitfalls to maintain peak performance

To avoid the above performance issues, it first helps to be aware of them when you're planning a cloud migration. If you know what you're looking for, you can choose a provider whose capabilities meet your needs.

When it comes to addressing latency, make sure you're aware of how a prospective provider compares to others. Latency is measured as Time To First Byte (TTFB), expressed in milliseconds. Your TTFB with a data centre also limits your download and upload speeds, which can prove essential when it comes to sharing files or collaborating on live documents. For example, a TTFB of 80ms will cap your maximum download speed at 6.6Mb a second, whereas a TTFB of 20ms will cap your maximum download speed at 26.2Mb a second.
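Those figures are consistent with a single connection whose throughput is limited by a 64KB transfer window, where the cap is roughly the window size divided by the round-trip time. A minimal sketch of that back-of-the-envelope calculation, assuming the 64KB window:

    # Rough per-connection throughput cap as a function of latency, assuming a
    # single stream limited by a 64KB window (throughput ~ window size / RTT).
    WINDOW_BYTES = 64 * 1024  # assumed window size

    def max_throughput_mbps(ttfb_ms: float) -> float:
        """Upper bound on download speed (megabits per second) for a given TTFB."""
        rtt_s = ttfb_ms / 1000.0
        return (WINDOW_BYTES * 8) / rtt_s / 1_000_000

    for ttfb in (80, 20, 15):
        print(f"TTFB {ttfb}ms -> ~{max_throughput_mbps(ttfb):.1f} Mb/s")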

When looking at a provider, take a look at the average TTFB they offer customers. Given that the TTFB can fluctuate across the day, look at how stable that average is and investigate the typical range of TTFBs you can expect your team to experience across a normal workday. The very best providers can offer reliable TTFBs of 15ms or less.
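One way to investigate that range yourself is to sample TTFB against a test object at regular intervals across a working day. A rough sketch using Python's requests library; the URL is a placeholder, and here TTFB is approximated as the time until the first byte of the response body arrives.

    import time
    import requests

    ENDPOINT = "https://example-storage-provider.com/test-object"  # placeholder URL

    def measure_ttfb(url: str) -> float:
        """Return the approximate time to first byte for a GET request, in milliseconds."""
        start = time.perf_counter()
        with requests.get(url, stream=True) as response:
            next(response.iter_content(chunk_size=1), b"")  # wait for the first body byte
        return (time.perf_counter() - start) * 1000

    # Sample every 15 minutes across an eight-hour workday and report the spread.
    samples = []
    for _ in range(32):
        samples.append(measure_ttfb(ENDPOINT))
        time.sleep(15 * 60)
    print(f"min {min(samples):.0f}ms, max {max(samples):.0f}ms, "
          f"avg {sum(samples) / len(samples):.0f}ms")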

Being savvy also extends to tackling the issue of read delays. Ask a prospective cloud provider outright whether you can expect read delays on their service and, if so, how long they last. Make use of all the material you have at hand to help discover this - their website and case studies, as well as what their users (and ex-users) say about the read delays on their service.

Many providers are capable of offering very low TTFBs and zero read delays, but often only to customers who can successfully navigate complex and expensive pricing structures. By being savvy, a remote team can avoid being caught out in this way.

It's often said that you can't have something fast, good, and cheap all at the same time. However, if done right, your cloud storage infrastructure can be the exception to this rule.

David Friend is CEO and co-founder of Wasabi