How Hiscox went from one release per year to becoming a DevOps champion

DevOps transforms insurer from a low IT performer into one that can deploy code in half an hour

The insurance industry is often painted as a laggard when it comes to the latest technology trends, and Hiscox, a firm with around 2,000 employees, some 250 of whom work in IT, was no exception.

Speaking at Computing's DevOps Summit 2016 today, James Waterfield, senior cloud specialist at Hiscox, explained that prior to achieving a DevOps culture, IT was unable to support the pace of change the business desired.

"The required pace of change is a pressure," said Waterfield. "The business wants change, but IT couldn't deliver change quickly enough. They also won't let you take systems down, due to the fear of the unknown."

"We're only just now finishing the Windows 2003 decommissioning process," added Jeremy McGee, DevOps consultant at the insurer. "IT works, why do we need to change it and spend money on it!"

Waterfield continued: "How do you support something running on COBOL when the only guy who knows it has left? We were managing old applications which were out of support [from the vendor], and it's hard to get people to agree who's responsible. The pace of change was slow."

The firm was interested in moving to more of a DevOps culture, but the change wasn't easy to instigate.

"How do you DevOps when you've got lots of pre-packaged applications?" asked McGee. "There was finger-pointing, people saying 'We can't install this, it must be infrastructure's fault!' We weren't doing very well. Puppet do a survey each year, and they sum up organisations into three camps: low IT performers, then medium and high."

Hiscox found itself in the first category, managing one software release per year, which might take a week to roll out.

"And there was the problem that because the business didn't want downtime, everyone tried to cram as many changes as they could into one maintenance window. So rolling that back if anything went wrong was very difficult," added Waterfield.

McGee added that the company kept its targets reasonable, aiming for improvements in the speed and quality of delivery rather than expecting to go from an also-ran to Olympic gold.

"We wanted to go from a low to medium performer," he said. "Let's not boil the ocean. So we started in the application side. The UK underwriting application was ancient, it needed shooting. So we bought a new packaged system, and because it was packaged, deploying it was straightforward. The vendor had already solved the problem of installing it easily; customer numbers two and three effectively paid for the automation of the deployment cycle, and we just rode on their coat-tails. Puppet stood up the infrastructure, IBM did orchestration and deployment, and we had a little bit of Powershell in there too."

He explained that when this was presented to the IT director, the director was delighted that the organisation now had the ability to deploy application code in half an hour.

"They looked at that and said it's gone from a week to 30 mins, that's great."

However, the dirty secret, McGee explained, was on the infrastructure side.

"We gave the business faster changes on the application side, so there were lots of smiles and high fives, but that doesn't help when you come to work on the infrastructure end.

Waterfield added that the solution was to be able to describe the entire infrastructure in code.

"We identified we needed to create the capability to describe the infrastructure in code. But how? On premises? Buy a solution? Create our own from scratch? Or use something which exists on a cloud platform?"

They ran a proof of concept using Azure and Amazon Web Services, and as Hiscox is a Microsoft shop, Azure came out on top.

"We quickly realised during testing that it allowed us to describe all of the infrastructure in code, from the networks, security certificates, databases, websites, to the DNS all done automatically. That really cut down deployment time for us.

"You can copy one application you've written and use the template for another one straight away. We can build as many environments as we like using the same bit of code," said Waterfield.

McGee said that the technology made it possible to involve Hiscox's server, network and security people in the message that developing faster is better.

"It also means that if it's not working you can make it go away faster," said McGee. "In practice we went on to create a dedicated project team, and combined with the application development group we created a team of eight to 10 people with that blend of skills, but with an ops focus, and adopted software engineering principles."

"We've all got a copy of the infrastructure described as code," said Waterfield. "We used to have so many different environments. We'd have UAT, dev, systest, production - an application could have six or seven different ones, although the record was 28! But when you describe it as code, you don't need it. You make your changes, deploy a development version, write a test, it goes green, and you push it up the pipeline. This way you have one to three at most builds running at once. It's a new way of thinking."

The other advantage, McGee said, was that these environments can be completely independent in the cloud, so they won't disrupt one another.

"You can play with the technology in isolation knowing that you won't affect the rest of the infrastructure," he explained.

"We ended up with a team who can do all this stuff, all with strong, cross-functional skills," continued McGee. "They have both technical and soft skills.

"We moved five applications to Azure in five months, including creating the base capability for more systems. It's repeatable, reliable, secure, cheaper, and self-documenting. Environments can be built in minutes, on-demand. It helps faster testing, and better quality systems," he concluded.

Computing's in-depth research into the latest DevOps tools and trends is available now.