That said, the statistics hide a multitude of scenarios in which human error and freak events play a starring role – as we will see.
Away from problems within the data centre itself, extreme weather was reported as causing data loss and downtime for a quarter of the respondents, and flooding affected 16 per cent. Planning for such events is less common than planning for IT failures, but more than half of respondents still include them in their DR strategy.
While more than half of the sample expressed confidence in their ability to recover lost data and bring mission-critical applications back online, others were less sure, with about a quarter feeling they could be better prepared and 19 per cent only confident of a partial recovery of data (Figure 3).
This uncertainty spreads to the time taken to recover lost data. About a quarter said that this might take days or even weeks (Figure 4). In business terms, that can only represent financial losses and missed opportunities.
Expect the unexpected
While most disruptions might appear to come out of the blue, the broad scenarios they represent can still be planned for. Some are relatively commonplace within the data centre – hard disk failure or database corruption, for example – but often the real danger lies in the long tail, both figuratively and, in rare instances, literally (in the case of rodents).
With the long tail in mind, the survey asked respondents about some of the more unusual causes of downtime that they had experienced. Some of these are listed below.