Photobox Group CTO reveals how the company moved nine petabytes to the cloud in six months

Going too fast would mean damaging data centre infrastructure - but too slow and there would be bottlenecks at peak times

Photobox Group is among the world's largest image-hosting platforms - and may well be the largest of those based in Europe. With about a petabyte (PB) of storage being added every year, the need for speed, agility and scalability has recently meant dropping the data centre and embarking on an ambitious, large-scale move to the AWS cloud.

Until this January, all of Photobox's data was stored on racks in co-located data centres in Amsterdam and Paris - and the company's no-delete promise meant that storage was becoming unwieldy.

"If you think about the size and scale of the data we were dealing with - nine petabytes - we were adding about a petabyte a year: anything up to about 5-6 million photos a day," said Group CTO Richard Orme. "On peak days that really goes up, we could be adding a million photos an hour to our storage - and we don't delete anything.

"Part of our proposition is if that if you upload an image to us and build a product, you can reorder and reorder and reorder that product over time."

More and more time was being spent simply maintaining the storage layer, leaving little capacity for anything beyond 'keeping the lights on'. Innovation began to suffer, until the company decided that enough was enough.

"We hit a really crucial decision point about two years ago, where we decided that actually it was probably no longer in our interest to keep running those data centres.

"Fundamentally, three things have changed [from when we first set up out data centres]: we're good at data centre provision, but AWS is definitely better - as are Microsoft and Google; [secondly] pricing, and in particular bulk storage pricing across all the cloud providers, has got to a place where economically it became very viable for us to do it; and thirdly...we'd slowed down a bit too much on the digital and physical product innovation side. It was important for us to reinvest the spend that we were putting into the data centres into kickstarting the innovation back in those areas."

In the end the Group chose to go with AWS, partly due to the existing relationship between the firms: all of the Photobox brands' websites run on AWS, which helps them scale up at peak times. With as much as 60 per cent of annual revenue being made in November and December, scalability and reliability are critical.

We were pretty sure that if we were able to match the speeds that AWS felt they could ingress at, we'd actually end up melting a lot of the disks in the data centres

Moving nine petabytes anywhere is no small task. As well as mapping the project out from end to end with a team brought together for exactly that purpose, Orme had to consider physical logistics: even a dedicated fibre optic link would have struggled to shift that much data quickly enough, and the physical disks posed their own challenge.

"We evaluated with our network providers whether or not we would be able to move data out of the data centre at a fast enough rate; we evaluated with our hardware and then our storage providers how fast we could read data off the disks. There were some interesting conversations: we were pretty sure that if we were able to match the speeds that AWS felt they could ingress at, we'd actually end up melting a lot of the disks in the data centres."

Understandably, Orme and the team started to look at other solutions. Back in 2016, panellists at a Computing event agreed that nothing beats a van or truck for transporting petabytes of information - and nothing has changed in the intervening time. Enter AWS's Snowball Edge: physical storage devices that can each hold 100TB of data.

"[That] sounds like a lot, but we actually ended up using 90 of them over time!" said Orme. "We were working on two separate data centres and at each of the two data centres we had three sets of four Snowballs. We were either filling the Snowballs, or they were in transit or they were with AWS and they were egressing data. In the end we had 24 Snowballs on rotation."

The company also considered the AWS Snowmobile, which can carry up to 100PB of data at once, but the waiting time - plus the challenge of having to move information from two locations, not just one - meant that the Snowballs worked out to be the most efficient route.
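The numbers behind that choice stack up: 90 devices at 100TB each covers the nine petabytes, and two sites with three sets of four devices gives the 24 on rotation. A quick sketch of that arithmetic, assuming the roughly six-and-a-half-month window the project ended up taking:

```python
# Rotation arithmetic from the figures quoted in the article.
TB, PB = 10**12, 10**15

total_data = 9 * PB
snowball_capacity = 100 * TB
sites, stages, per_stage = 2, 3, 4     # filling / in transit / egressing

device_fills = total_data / snowball_capacity
in_rotation = sites * stages * per_stage
project_days = 6.5 * 30                # roughly six and a half months

print(f"Device fills needed: {device_fills:.0f}")   # -> 90
print(f"Devices in rotation: {in_rotation}")        # -> 24
print(f"Average rate: ~{total_data / project_days / TB:.0f} TB/day overall, "
      f"~{total_data / project_days / (2 * TB):.0f} TB/day per data centre")
```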

Time is money

Efficiency doesn't necessarily equal speed; the planning stage predicted that the data transfer would take as much as 12 months. However, when the work began, even that estimate started to look optimistic.

"Quite quickly, we found that there were some things that we hadn't picked up in the planning that were causing us a challenge. The first one was: 5.7 billion really small files. Images are quite big, but not in comparison to videos or large data files...

"We experienced some challenges in getting stuff into the Snowballs. They'd fill up pretty quickly and then they'd start to slow down a little bit. That was because the index on the database that runs the file system on the Snowball was just filling up; it never anticipated that the Snowball would take that many files."

AWS, luckily, was monitoring the move closely - the Group was, after all, moving Europe's largest consumer photo database to the cloud. Orme said: "For a period of a week and a half it felt like we were running the Snowball Edge roadmap; they just stopped everything to get us over the line."

That monitoring, with feedback from the Photobox team, saw AWS pushing patches in near-real time. One of these - bringing forward a compression feature that had been due to be rolled out months later - solved the small file challenge.
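The underlying problem - billions of tiny objects overwhelming per-file metadata - is a general one, and the usual mitigation is to batch small files into larger archives before transfer, so the receiving side tracks thousands of archives rather than billions of objects. The sketch below illustrates that general technique only; it is not the Snowball Edge feature AWS shipped, whose internals aren't public, and the paths and batch size are assumptions for the example:

```python
"""Batch many small files into larger tar archives ahead of a bulk transfer.
Illustrative only: not the AWS Snowball Edge feature described above; the
paths and batch size are assumptions made for the example."""
import tarfile
from pathlib import Path

BATCH_BYTES = 1 * 10**9             # target ~1 GB per archive (assumption)
SOURCE = Path("/data/photos")       # hypothetical source directory
DEST = Path("/staging/batches")     # hypothetical staging area


def write_archive(files, archive: Path, root: Path) -> None:
    # Plain tar, no recompression: JPEGs are already compressed, so the win
    # is fewer objects to track, not smaller ones.
    with tarfile.open(archive, "w") as tar:
        for f in files:
            tar.add(str(f), arcname=str(f.relative_to(root)))


def batch_small_files(source: Path, dest: Path) -> None:
    dest.mkdir(parents=True, exist_ok=True)
    batch, batch_bytes, batch_no = [], 0, 0
    for path in source.rglob("*"):
        if not path.is_file():
            continue
        batch.append(path)
        batch_bytes += path.stat().st_size
        if batch_bytes >= BATCH_BYTES:
            write_archive(batch, dest / f"batch-{batch_no:06d}.tar", source)
            batch, batch_bytes, batch_no = [], 0, batch_no + 1
    if batch:                        # flush the final partial batch
        write_archive(batch, dest / f"batch-{batch_no:06d}.tar", source)


if __name__ == "__main__":
    batch_small_files(SOURCE, DEST)
```

The trade-off is an extra unpack step at the destination, which is usually cheap compared with the per-object overhead saved on the way in.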

"Within two or three weeks we got to a point where we were suddenly pulling data out at a much faster rate than we'd believed possible, while still preserving the sanctity of the underlying discs. [It was] incredible stuff, right at the edge of what we could do, but incredibly motivating for the team, because they were just smashing through speed records every day."

We're making savings, our customer serving of photos is now radically faster than it was and our ability to ingest new photos is relatively limitless

These optimisations sped up the entire project, with the end result that 12 months turned into just six and a half. The data transfer that began in January ended in mid-July, and Photobox is celebrating.

"You don't often think of these things as being an opportunity to learn and improve, write new software and push the limits of what can be done - but in the end that's what it turned out to be...

"The net result that we get from having completed this project five months early is just incredible. We're making savings, our customer serving of photos is now radically faster than it was and our ability to ingest new photos is relatively limitless, compared to what it used to be. We don't have any worries going into peak this year."