Moore's Law can't help us now - big data processing and the drain on energy supplies

Data volumes are "sky-rocketing" as Moore's Law efficiencies are flatlining, says EPFL's Babak Falsafi

The volume of data passing through data centres is ballooning. According to some estimates, 100 times more data will be processed in 2020 than in 2010. This growth far outstrips the rate at which IT is becoming more energy efficient. If an energy crunch, higher prices or reduced performance are to be avoided, the server stack needs a complete overhaul along the lines of recent improvements to mobile devices.

That's according to Babak Falsafi, director of EcoCloud, a project of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland that works with VMware, HP, Microsoft and Oracle among others to improve the energy efficiency of cloud computing.

"IT efficiency is horrendous because until now vendors haven't had to pay any attention to it," Falsafi told Computing. Until 10 years ago, he explained, doubling the density of transistors every couple of years, in accordance with Moore's Law, also, by happy accident, meant that you got a near-doubling of energy efficiency "for free".

"But around 2005 we hit a wall. Energy efficiency of transistors used to come for free, but since then it has only been increasing very slowly. At the same time demand is sky-rocketing."

Falsafi draws a comparison between the world of mobile devices - where every aspect is specifically designed to avoid wasting battery power - and the data centre, where energy efficiency is an afterthought.

"Portables have specialised devices and services, software and hardware that for critical uses minimise the amount of energy required to work. But in data centres the software, hardware, cooling and substructure all work independently of each other. In the future they will need to be integrated," he said.

"One of the biggest problems is that the server market is horizontal and everyone is creating their own products and they have these well-defined interfaces that sit on top of each other," he said.

"But if you look at the big IT providers like Amazon, Google and Yahoo, they are building all their own services internally and integrating them to improve efficiency."

It is the practices of these giants that Falsafi's team and its many industry collaborators are seeking to emulate, by monitoring, integrating and optimising as many of the interrelated processes within the data centre as possible. By looking "from the algorithms to the plumbing", they are targeting efficiency gains "of an order of magnitude".

"We had a project called EuroCloud in collaboration with ARM in which we showcased technologies and designs, brought them to the server and then ran an entire software stack on the top and showed you can do the kind of things that Amazon, Facebook and Google do: using the same technology with 10 times more efficiency," he said.

In a separate collaboration with IBM, the group is developing a compact two-phase liquid cooling system that sits on top of a server CPU. This should allow blades to be made more compact (or more powerful) while cooling them more efficiently, with the added benefit that the waste heat could be piped away and reused elsewhere, for example to warm a swimming pool or homes, or to generate power.
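To get a feel for the scale of heat involved, the sketch below runs the numbers for a single rack. The rack power and the recoverable fraction are assumed figures for illustration only, not values from the EPFL-IBM project; the underlying point is simply that nearly all of the electrical power a server draws ends up as heat.

```python
# Back-of-the-envelope sketch of waste-heat reuse from liquid-cooled servers.
# Nearly all electrical power drawn by a server ends up as heat; the rack
# power and recoverable fraction here are assumptions for illustration only.

rack_power_kw = 10.0          # assumed continuous power draw of one rack
recoverable_fraction = 0.7    # assumed share captured by the liquid loop
                              # at a temperature useful for heating

recoverable_kw = rack_power_kw * recoverable_fraction
heat_per_day_kwh = recoverable_kw * 24

print(f"Recoverable heat: {recoverable_kw:.1f} kW continuous")
print(f"About {heat_per_day_kwh:.0f} kWh of heat per day from a single rack")
```

Multiplied across hundreds of racks, that steady stream of low-grade heat is what makes piping it to a pool or a district heating network attractive.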

EPFL is also working with the bank Credit Suisse to rationalise its data centres, developing battery-less sensors that monitor local conditions at thousands of points to allow fine-grained thermal balancing. Through such measures, energy consumption is expected to fall by about 50 per cent, while also saving space.
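The principle behind that fine-grained balancing can be sketched in a few lines: rather than cooling the whole room to satisfy its hottest spot, per-zone readings let cooling be directed only where it is needed. The zone names, thresholds and control logic below are hypothetical, included only to illustrate the idea, not the Credit Suisse deployment.

```python
# Hypothetical sketch of fine-grained thermal balancing: instead of cooling
# the whole room to the temperature of its hottest spot, per-zone sensor
# readings drive extra cooling only where it is needed. Zone names, readings
# and the threshold are invented for illustration.

from statistics import mean

TARGET_C = 27.0   # assumed upper limit for inlet air temperature per zone

# Readings from (battery-less) sensors, grouped by zone -- fabricated values
zone_readings = {
    "row-A": [24.1, 24.8, 25.0, 24.5],
    "row-B": [28.3, 27.9, 28.6, 27.5],   # local hot spot
    "row-C": [23.7, 24.0, 23.9, 24.2],
}

for zone, temps in zone_readings.items():
    avg = mean(temps)
    if avg > TARGET_C:
        # Increase cooling only in the zone that actually needs it
        print(f"{zone}: {avg:.1f} C - boost local cooling")
    else:
        # Leave (or dial back) cooling where there is headroom
        print(f"{zone}: {avg:.1f} C - OK, no extra cooling")
```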

"Thanks to our focus on energy efficiency and virtualisation, we haven't had to build a new data centre in Switzerland since 1995," said Marcel Ledergerber, VP data centre facilities design and planning at Credit Suisse.

Both Credit Suisse and EPFL are part of the GreenDataNet consortium, which is seeking to improve the energy efficiency of data centres and cloud computing.