Selling a negative is never as easy as pushing a positive. This is very much the case with the long-time Cinderella of the IT world, backup and recovery, where the chief benefit is the ability to maintain rather than enhance the bottom line. When it comes to funding, poor Cinderella always seems to lose out to her pushier siblings.
Adding to backup and recovery's image problem, there is no easy way to quantify the financial benefits of investing in it. There are too many uncertainties in predicting the future, analysing the probabilities of various possible calamities occurring and quantifying the potential cost of doing nothing to mitigate them.
Given these difficulties, when IT heads meet the board to discuss priorities, those advocating a more integrated approach to backup and recovery will struggle to make their voices heard.
The result is that backup and recovery tends to be tacked on as an afterthought to other projects whose financial benefits are easier to calculate and which seem to be driving the business towards a brave new tomorrow.
This second-class status means that over time a hotch-potch of disconnected backup and recovery solutions evolves, each focused on a particular application or platform, and each with its own dependencies, support requirements and deficiencies when it comes to supporting newer technologies, such as virtualisation, mobile and cloud.
And since backup and recovery solutions tend to remain in place for many years, this effectively condemns the organisation to a patchy and incomplete safeguard against unexpected events.
Tales of the unexpected
A recent Computing survey of 120 IT professionals at medium to large organisations sought to find out how those charged with keeping enterprise IT systems operational quantify risk and, more importantly, how they use that information to justify spending on backup and recovery.
Unsurprisingly, all claimed to plan for obvious risks such as fire and flood, along with power cuts, hardware and software failures, malware attacks and so on. When asked for specific instances, however, the responses were a lot less predictable.
From a pack of kippers left in a duct by a vengeful air-con supplier, to an operator who vomited over a rack of switches – bringing down an entire network – after finding a decomposing and very smelly hedgehog in the works, bad smells are unlikely to feature in many disaster recovery plans.
Nor is it easy to foresee complex, multi-layered scenarios, such as a dedicated business resiliency centre being cordoned off by the police as the result of an explosion nearby. The risk of human error is also notoriously hard to quantify: in one firm, although tapes had been religiously loaded into the backup library every night, the backup routine itself had been de-scheduled. This was not discovered until staff had to recover a crucial storage array and found the backups were months out of date.
In order for all eventualities – from acts of God to rotten hedgehogs – to be adequately covered without breaking the bank, an enterprise-wide audit of assets, categorisation by importance and then some sort of cost-benefit analysis over their long-term safeguarding would seem to be in order. But wouldn’t this be expensive?