When does an organisation’s trusty IT backbone become a legacy system? Vendors will try to convince you that it is as soon as a new version of their solution has been released, but for most organisations keeping everything bang up to date is impractical and pointless. However, nothing lasts for ever, and at the other end of the obsolescence scale the maxim “If it ain’t broke, don’t fix it” is often pushed to the nth degree, with systems not becoming “legacy” until they are actually at death’s door.
So legacy is partly in the eye of the beholder. That aside, when a system costs more to maintain than it saves in enabling efficiencies, when it becomes an impediment to rather than a facilitator of change, or when it turns into a security risk because of discontinued support – then perhaps it can legitimately be tagged “legacy”. By such definitions many – perhaps most – organisations are dependent on legacy systems of one kind or another.
Take banks. While they may have an outer shell of shiny customer-friendly apps and interfaces that give the impression of efficiency and modernity, beneath the surface most retail banks run on core systems that are decades old.
As another example, the servers that control the UK’s energy infrastructure date back decades.
“In the nuclear industry, the control systems are designed to work for the life of the plant. We therefore have in place technology that is 20-plus years old still being operated and maintained,” Hugh Boyes, cyber security lead at the Institution of Engineering and Technology, told Computing last year.
Sometimes, then, “legacy” systems are all part of the plan. More commonly, though, their existence is the result of organic growth, mergers and acquisitions, ongoing contracts and personnel changes.
Don’t bank on it
But back to banks. High street banks have been subjected to huge changes over the last two decades, with massive acquisitions, break-ups and the imposition of new regulatory regimes all occurring at the same time as online and mobile banking have taken off.
Operating internationally across multiple branches, they run ATMs, online banking and CRM on the back of pre-millennial mainframes, proprietary operating systems and Cobol code, all glued together with hard-coded middleware.
Modernising such a system is not easy. No wonder then that banks are often only spurred into action when a crisis occurs.
One such crisis occurred in 2012 with the RBS Group IT failure that locked millions of users out of their accounts for days. Over the weeks that followed a picture emerged of a fractured landscape, with key support services having been outsourced, leaving insufficient in-house knowledge of the group’s archaic core systems. When a patch failed on one server it started a chain reaction that took out large parts of the infrastructure.
It took many days to locate and fix this problem, but to unravel, rationalise and upgrade the spaghetti-like systems on which RBS depends will take many years – 2018 is just the latest estimate.