While New York shut down earlier this week as Hurricane Sandy approached, many people were unable to leave the city or abandon their posts: emergency workers, of course, but also many of the staff that keep the city's datacentres running 24 hours a day, seven days a week.
"We had a two-metre storm surge through the lobby that's resulted in the basement being flooded," said Robert Miggins, senior vice-president for business development for cloud and hosting provider PEER1.
If that wasn't bad enough, the city was then hit by power cuts, leaving PEER1 dependent on its emergency generating capacity. At one point, the datacentre was just hours away from shutting down completely as the company ran perilously close to exhausting the fuel for its back-up generators.
"Obviously, if you get that much water around, the power companies turn off the facility power. So that meant we were running on generators," he said. At one point, fuel stocks dipped to such a low level that PEER1 had to start warning customers that it might have to shut down within just two hours – before staff managed to get hold of some more fuel to keep the datacentre going.
The drawback? It needed to be transported, by hand, up 18 flights of stairs. "We managed to get hold of a fuel truck, and it turned up, but we're on the 18th floor," said Miggins.
"But they didn't have any hoses to pump the fuel up that far and, in fact, the fuel truck was too big, so they had to go away and come back overnight. Overnight, the team was using jerry cans to manually haul diesel up 18 flights of stairs to top up the tanks."
It might have been a close call, but thanks to this emergency supply very few customers had their services interrupted by the chaos in New York – despite the datacentre being located in the path of the storm.
"We did have some customers we told to power down and they powered down. Then we let them know that we hoped to keep live so they could come back up again. We'd been posting regular information on our forums and we had been in constant telephone communication with customers that were affected," said Miggins.
Miggins acknowledged that it would have been entirely reasonable for the company to turn off the servers given the impact of Hurricane Sandy on the building, but the company decided to keep going nevertheless.
"I suppose at one level you could say it's covered by force majeure, so if I looked at our terms and conditions, it would be entirely reasonable for us to say that we're going to turn you off, and then we could go back to sleep until everything recovers then turn everybody back on," he said.
"But I think when you have an issue like this our people dig deepest, and their view is that if it happened to a company they worked with, they'd want to do everything humanly possible to keep it running. And, if it's humanly possible to carry cans of fuel up the stairs, they're going to do it 'til they drop," he said. Employees chose to man their posts, he added; they weren't ordered to come to work.
So what lessons have been learned from the crisis? Keep datacentres away from parts of buildings at risk of flooding – such as basements.
"In the future, as we build out our datacentres, we'll be looking to build datacentres that aren't in buildings that could be flooded," said Miggins.
However, with Hurricane Sandy a once-in-a-generation event, Miggins is no doubt hoping that he won't be called on to carry hundreds of gallons of diesel up 18 flights of stairs again in his lifetime.