Find Leading Indicators
For all the talk about leading indicators, there is often little sign of them in the dashboards I’ve seen. That makes sense: dashboards plot the data we have, and most of the data available is lagging. All is not lost though; to find leading indicators, we actually need to start with lagging ones.
Turning a Lagging Indicator into a Leading One
To make this point, I’m going to use an example. Let’s start with a quality metric.
The lagging indicators we have: Production Rollbacks and Escaped Defects
When asking “how will we measure quality”, the metrics commonly put forward are defects and rollbacks. They are relatively easy to measure and obtain from tools and systems, but they only change after a failure has occurred: a defect has to escape, a rollback has to happen. If we want to improve future performance, metrics that measure quality BEFORE it impacts production or customers are clearly superior. These metrics aren’t bad; they are just post-event.
A leading indicator moves BEFORE the event. That gives us the ability to avoid the event, or to change its path and severity. A key skill when coaching teams with data is moving from lagging measures to leading ones. It comes back to the questions you ask, and here are a few that might help you.
“Do releases that have rollbacks or defects have anything in common?”
“Do releases that have rollbacks or defects skip any steps or processes?”
“Do releases that have rollbacks or defects add any steps or processes?”
“What conditions are present or absent when we have rollbacks or defects?”
What we are trying to find is something that leads to failure.
Some example responses might be:
“We skip integration testing.”
“We skipped code review.”
“We had no comments in the code review.”
“The changes are unusually big.”
“The changes are in the legacy code with no unit test coverage.”
These responses suggest some process changes, and they give you conditions to measure: indicators that a rollback or defect has become more LIKELY, before one occurs.
What leading measure would you wrap around the responses given?
Some examples might be:
1. Number of review comments per change/commit
2. Number of files per change/commit
3. Number of tests covering the changed code areas
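To make this concrete, here is a minimal sketch of how those three measures could be turned into per-commit risk flags. The commit records, field names, and thresholds are all illustrative assumptions; in practice the data would come from your VCS and review tooling, and the thresholds should be calibrated against your own history of rollbacks and defects.

```python
# Hypothetical commit records; in practice these would come from your
# VCS and code-review tooling (e.g. git history, review API exports).
commits = [
    {"id": "a1", "files_changed": 3, "review_comments": 4, "untested_files": 0},
    {"id": "b2", "files_changed": 27, "review_comments": 0, "untested_files": 5},
    {"id": "c3", "files_changed": 8, "review_comments": 1, "untested_files": 2},
]

# Thresholds are illustrative, not recommendations; calibrate them
# against your own history of rollbacks and escaped defects.
MAX_FILES = 15     # "the changes are unusually big"
MIN_COMMENTS = 1   # "we had no comments in the code review"
MAX_UNTESTED = 3   # "changes are in code with no unit test coverage"

def risk_flags(commit):
    """Return the leading-indicator flags raised by a single commit."""
    flags = []
    if commit["files_changed"] > MAX_FILES:
        flags.append("large-change")
    if commit["review_comments"] < MIN_COMMENTS:
        flags.append("silent-review")
    if commit["untested_files"] > MAX_UNTESTED:
        flags.append("low-coverage")
    return flags

for c in commits:
    print(c["id"], risk_flags(c))
```

A release carrying several flagged commits hasn’t failed yet, but it is the kind of release that historically has.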
All of these metrics indicate an increased likelihood of rollbacks and defects, even when teams get lucky. That is the key point: lagging indicators hide the fact that sometimes you were just lucky. Leading indicators help show that you dodged a bullet, which is why aviation tracks near misses as well as accidents; a near miss is still a failure.
So, go find metrics of near misses. They will be leading indicators. And it starts by simply asking: what conditions are present, what do we skip, and what is common when these bad things have historically occurred? Measure those.
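One way to check whether a candidate measure is actually leading is to split your past releases on it and compare the lagging outcome in each group. The history below is hypothetical data, and the threshold is an assumed split point for illustration; the point is the comparison, not the numbers.

```python
# Hypothetical release history: for each release, a candidate leading
# indicator (files changed) and the lagging outcome (rolled back?).
history = [
    (4, False), (30, True), (6, False), (22, True),
    (9, False), (18, False), (40, True), (5, False),
]

def rollback_rate(releases):
    """Fraction of releases in the group that were rolled back."""
    if not releases:
        return 0.0
    return sum(1 for _, rolled_back in releases if rolled_back) / len(releases)

THRESHOLD = 15  # assumed split between "small" and "large" releases
large = [r for r in history if r[0] > THRESHOLD]
small = [r for r in history if r[0] <= THRESHOLD]

print(f"large releases rollback rate: {rollback_rate(large):.0%}")
print(f"small releases rollback rate: {rollback_rate(small):.0%}")
```

If the rates differ sharply between the groups, the measure is worth tracking on the dashboard; if they don’t, go back to the questions above and look for a different common condition.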