In part one of this research we looked at how, as a consequence of their fundamental importance to the enterprise, mission-critical applications have tended to remain stubbornly attached to bare metal.
Applications such as ERP, financial suites and HR systems represent a sticking point for consolidation because, to those responsible for them, the risk of failure or a drop in performance outweighs any benefits that might accrue from virtualising and then consolidating them.
As well as cost, space and management savings, the advantages of virtualisation include the ability to integrate heterogeneous enterprise data onto a unified virtual infrastructure, allowing for high-performance interoperability and rapid data access. Such integration over high-speed connections, accompanied by low-latency memory paging and input/output (I/O) access, allows the rapid creation of high-performance analytics, business intelligence, data mining and compliance solutions.
So, a simplified, agile, high-performance infrastructure is the promise. However, worries over performance and scalability are the main technical barriers to consolidating enterprise applications, according to a Computing survey of 150 IT decision makers (see figure 1).
The premise of the virtual environment is that a hypervisor assumes control of the physical machine, allowing multiple operating systems to be run on a single physical infrastructure. Memory paging, network communications and storage I/O access are controlled by the hypervisor, but appear effectively native to each guest operating system (OS).
Early software-based hypervisors had to cope with a previous generation of CPU chips without virtualisation support, meaning that precious processing power was spent fooling the OS into believing it was running directly on hardware rather than on a virtual machine (VM). Processor-hungry applications often ran slowly or unreliably on these early platforms, leading many IT directors to conclude that VMs were simply incompatible with enterprise applications.
The reticence to virtualise mission-critical applications may be, in part, a hangover from that era. Times have changed. Modern Type 1 hypervisors (see below) take full advantage of the virtualisation technologies that chip makers such as Intel and AMD have built into their latest processors, minimising the overhead on processing power.
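Whether a given Linux server actually exposes these processor extensions can be checked before any consolidation work begins. As a rough sketch (Intel VT-x advertises the "vmx" CPU flag, AMD-V the "svm" flag; note that firmware settings can still disable a feature the chip supports):

```shell
#!/bin/sh
# Sketch: check whether the CPU advertises hardware virtualisation
# extensions. Intel VT-x shows the "vmx" flag; AMD-V shows "svm".
# Assumes a Linux host with /proc mounted; a BIOS/UEFI setting may
# still disable the feature even when the flag is present.
if grep -Eq '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    VT_SUPPORT=yes
else
    VT_SUPPORT=no
fi
echo "hardware virtualisation flags present: $VT_SUPPORT"
```

A "no" here means any hypervisor on the box would fall back to the slower, software-only techniques described above.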
Nevertheless, there will always be some difference in performance between a dedicated box and a virtualised platform, however slight, and the few specialised applications for which millisecond latency is a major issue, such as trading systems, may always run better on dedicated hardware.
For the remainder, though, there are signs that things are changing. The Computing survey found that even applications such as financial suites have been virtualised – at least partially – in about half of the organisations questioned.
Is it safe?
Security of virtualised platforms is also a hurdle for many. Again, the technical fears are largely historical.
In early systems, there was concern that with software-based (Type 2) hypervisors, which run on top of a host operating system, the host kernel could be breached through a weak spot in one guest OS. With hardware-assisted virtualisation and self-standing Type 1 hypervisors, which run directly on the hardware, that route is closed off.
Moreover, modern hardware-enhanced virtual servers have additional security features designed for large-scale virtualised environments.