Doing DevOps without measuring performance is like doing archery in the dark

If that sounds dangerous, it's because it is

Devs need DevOps - but there are many questions to face before beginning. Where do I start? What does good look like? What is bad practice? What's your favourite dinosaur?

The major failing that companies face when adopting DevOps practices is not measuring performance, argued Kris Saxton of Automation Logic, speaking at Computing's DevOps Live conference today.

"Many of our clients, when we join them, aren't really monitoring anything relating to DevOps performance. They usually have some fairly large, abstract goal relating to the thing they're trying to deliver - be it a programme or a service - but none of them are systematically measuring whether they're getting better at the act of delivery itself.

"To me that seems really odd. It would be like taking up a sport - or any activity that involves skill and practice - and not measuring performance to understand whether you're getting better or worse at it.

"Doing DevOps without measuring is a bit like doing archery in the dark. Not only are you less likely to hit the target, but after you take your shots, the darkness can last for several months in terms of understanding whether you got better or worse. You've got no way of correlating the actions that you took with the outcomes."

Saxton presented a set of capabilities that Automation Logic has found to be highly correlated with high-performing DevOps teams. "If you are doing all of these things well, it is likely you are doing DevOps well."

How do you take these measurements? Saxton recommends a mixture of hard metrics (velocity, lead time) and soft metrics (questionnaires and workshops to talk to devs). Soft metrics are no less relevant; there is a lot of ambiguity about what DevOps is and what it's trying to do, and this process "gives you something everyone can point at and agree on when you embark on some kind of improvement activity."
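Hard metrics like lead time can be computed directly from delivery records. A minimal sketch, assuming hypothetical deployment data (the timestamps, field layout and metric definitions below are illustrative, not from the talk):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical records: (commit_time, deploy_time) per deployment.
deployments = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 2, 14, 0)),
    (datetime(2023, 5, 3, 10, 0), datetime(2023, 5, 3, 16, 30)),
    (datetime(2023, 5, 8, 11, 0), datetime(2023, 5, 10, 9, 0)),
]

# Lead time: elapsed time from commit to deployment.
lead_times = [deploy - commit for commit, deploy in deployments]
median_lead_time = median(lead_times)  # median is robust to outliers

# Deployment frequency: deployments per week over the observed window.
window = deployments[-1][1] - deployments[0][0]
deploys_per_week = len(deployments) / (window / timedelta(weeks=1))

print(f"Median lead time: {median_lead_time}")
print(f"Deploys per week: {deploys_per_week:.1f}")
```

Tracked over successive sprints, figures like these give the trend line the darkness metaphor warns is otherwise missing.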

It is important to understand the results of your measurements and what they mean in context. Automation Logic tries to plot the impact on DevOps capability/performance from focusing on one area over another - this helps to build a roadmap by finding areas to prioritise.

The scores, analysis and roadmap must be tied together into a framework, which requires an approach to change known as empirical process control. The three pillars of EPC are transparency, inspection and adaptation. You must be transparent about what you're measuring and the results; inspect and draw insights from the results gathered; and use the insights to adapt and plan.
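The three pillars can be sketched as a simple loop. The capability names, scores and the "promote the weakest area" rule below are illustrative assumptions, not part of the talk:

```python
def inspect(scores):
    """Inspection: draw an insight — here, find the weakest capability."""
    return min(scores, key=scores.get)

def adapt(roadmap, focus):
    """Adaptation: promote the chosen capability to the top of the roadmap."""
    return [focus] + [c for c in roadmap if c != focus]

# Transparency: the measurements and their results are visible to everyone.
scores = {"version control": 4, "test automation": 2, "monitoring": 3}
roadmap = ["version control", "test automation", "monitoring"]

focus = inspect(scores)
roadmap = adapt(roadmap, focus)
print(roadmap)  # the weakest-scoring capability now leads the roadmap
```

Rinsing and repeating, as Saxton puts it, means re-measuring after each adaptation and running the loop again.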

"If you rinse and repeat that a few times, as well as getting better at DevOps, you'll also build a self-sustaining DevOps capability," said Saxton.