Deep in the heart of Oregon, on the outskirts of a small town called Prineville, sits one of the world’s most sophisticated datacentres.
It doesn’t do much that could be called “mission critical”. It just records people’s mundane online conversations, conveys messages, registers their “likes”, their dislikes and what they plan to do at the weekend. However, the datacentre, which is owned by Facebook, is arguably one of the most advanced in the world.
Built at a cost of $210m (£130m) and completed in 2011, it doesn’t just run Facebook’s popular website, storing and serving data to as many as one billion people worldwide; it also holds the company’s enterprise software. This includes the servers that run Facebook’s financial software, which aren’t just secured electronically, but in the old-fashioned way – behind a locked “cage” in one small part of the datacentre.
Facebook’s Prineville datacentre was designed from the ground up to be as efficient as possible – not just in terms of serving users, but also energy efficiency.
Facebook is not the only high-profile web company seeking to push back the boundaries of datacentre design, deployment and management. Indeed, Facebook’s internet rivals, Google and eBay, have also developed datacentres across the world that are, if not strictly energy efficient, increasingly powered by renewable energy, such as wind and hydro.
Google, for example, signed an agreement with the Grand River Dam Authority for a supply of around 48 megawatts for its Oklahoma datacentre from the nearby Canadian Hills Wind Project, which will come online later this year.
In many respects, companies like Google, eBay and Facebook are lucky: they have the financial resources – and the business justification – to build datacentres to the gold standard. Other companies, with tighter margins and smaller profits, will typically have a legacy estate that they cannot quickly migrate from, and have to work with much tighter budgets too.
The real world
Indeed, in the wider world, some organisations are carrying decades of accumulated legacy hardware and applications. One of the key challenges many organisations face today before they can even begin to move forward is simply reorganising to become more agile.
For pharmaceuticals giant AstraZeneca, its first task – which it has almost completed – was one of datacentre and infrastructure consolidation. “We have grown, over the years, more tactically than strategically,” says Conor Breslin, head of technology services delivery.
As a result, the company’s IT infrastructure had grown partly to service business units in different regions, while also responding to demands from users for applications and computing power without necessarily counting the cost.
As a first step, about six years ago AstraZeneca sought to align its IT department by functional needs and started the long process of reorganising it into a “global service delivery” organisation, “responsible for all of our datacentres and multiple sites in key hubs”, says Breslin.
Business units are signed up to a global service management contract with the delivery organisation, helping to track costs on the one hand, while imposing high service levels for IT to achieve on the other.
All these moves, though, are prerequisites to give AstraZeneca the agility to be able to adapt and react more quickly.
Now, having consolidated and largely virtualised the company’s datacentre environment, Breslin’s attention is turning to cloud computing, including external cloud services, starting with email and collaboration. “On cloud, we tend to look at it in terms of five main factors to find the balance between performance, speed of delivery, flexibility, compliance and cost,” says Breslin.