Storage virtualisation is a must-have for businesses competing in a 24/7 economy. But how can corporate IT providers guarantee both low-latency access to critical data and the necessary regulatory compliance?
Corporates consolidating their computing infrastructures through virtualisation programmes will need to work out how to manage exploding volumes of data while keeping critical information available at whichever datacentre needs it. Where a financial services company is moving terabytes of data around, managing that infrastructure for 24/7 availability is clearly critical.
Of course, an organisation running virtualised servers via a hypervisor may not want to centralise its data. But because of operational priorities, or “game-changing” legislation such as Sarbanes-Oxley, which demands information assurance across global operations, applications effectively have to be “swapped” between many different virtualised resources. The critical need is that data follows the application while risk is contained.
The drive for availability doesn’t stop there. In most market sectors, technology has deconstructed internal business operations and supply chains – through outsourcing and out-tasking – enabling new service offerings and billing models to be established. This is being hastened by the mass of data now made available from discrete transactions and tasks within those supply chains and business processes. And as desk-bound companies mobilise their workforces with the virtual desktop, how many key staff will need to access data remotely rather than locally, and how often? Is accessing data across a WAN going to be more risky?
If CIOs are driven towards ever-tighter latency demands, how can they manage the risks involved? In these circumstances, every organisation will need a strategic approach to storage virtualisation. CIOs will need to work with integration or managed service providers that can test both the potential risks and the likely benefits of their storage strategy.
Pushing data into the cloud could be an attractive route with an expert provider that does not federate the service, thereby minimising security risks. Could virtualised storage applications delivered through a secure datacentre strip out layers of complexity and avert SLA and service-demarcation issues too?
Nor should CIOs forget the ever-present shadow cast by disaster recovery needs.
Focused and adaptable risk management, including virtualised storage delivered via a managed service, could help businesses simplify, and thereby transform, the way they deal with business continuity. Storage virtualisation is essential - but not without focused and adaptable risk management.