
Forecasting and Data Newsletter by Troy Magennis
Data is Destroyed When People Feel Judged

Thanks for subscribing. If you haven't yet (this newsletter was forwarded to you, or you clicked a link somewhere), consider subscribing: Subscribe here

In this newsletter:

  1. Coming workshops and events you might be interested in.
  2. Article: Avoid Judging People Via Charts 
  3. Myth of the Month: You need a stable system to forecast
  4. Tool of the Month: Skill and Capability Survey and heatmap
  5. Book of the Month: Rethinking Agile by Klaus Leopold
  6. About Focused Objective and Troy Magennis
If you like this content, please forward it to a colleague. 
If you didn't like this content, please email me why: Troy Magennis.
Got Metric or Forecasting Questions? Contact Me

Coming Workshops and Events

Chicago Forecasting + Metrics 9-10th March
Toronto Forecasting + Metrics 26-27th March <-- popular
London Forecasting + Metrics 30th April - 1st May
Chicago Flight Levels with Klaus Leopold 27-28th July <-- new

New ONLINE 2-hour sessions ($99)
Using the Monte Carlo Forecasting Spreadsheets
Using the Team Dashboard Spreadsheet
Free Monthly Call on Metrics
Training Schedule
Free Zoom Call: Ask Me Anything on Metrics & Forecasting
All Workshops on Agile Forecasting and Metrics

Article: Judging People (AKA Screwing up Your Data)

Key point: If you make chart data personal, and people feel judged without context, they find ways to hide or manipulate that data. Now you have two problems: the inadequate measure, and people hiding the data that would tell you so.

In software and I.T., as in life, things go wrong. If the only use of data is to find out who is responsible, then don't expect people to be enthusiastic about capturing and using that data. There is always a context when issues arise. Charts that just show data usually don't convey that context, leaving people feeling angry, singled out, and judged without a hearing. If success to anyone means staying OFF a naughty list, data chart, or email, then don't expect accurate or un-gamed data to emerge. The first rule of new-age data is to avoid making any visualization personal without purpose.

One place I was consulting with had a great idea to focus on quality. They decided that defect counts would be an indicator that teams were moving too fast and their software was becoming unreliable and unreleasable. They created a dashboard, proudly displayed in public spaces, showing the number of defects currently logged against each developer by name, sorted highest to lowest. Think about this for a second: your name, listed in red because you have ten or more defects assigned to you. And that evening, you get an email from the V.P. saying it's unacceptable that you have so many open "mistakes" in your name.

How can putting the focus on quality like this be bad? For many, many reasons:

  1. We want defects reported. Are you more or less likely to enter bug reports into the work tracking tool if doing so gets you placed on the Naughty List? Bug reports assigned to yourself or your friends end up written on sticky notes instead; other teams' bug reports get entered multiple times with different wording.

  2. Does it matter how many defects there are if that code isn't shipping or going live in the next release? If it has to be perfect before deploying to a test environment or behind a feature flag, should you test more (say, six more weeks)? If no one can give early feedback, who finds the other ten defects or design flaws in a timely fashion?

  3. Are all defects equal? Would you not ship a fix for a significant feature because you have some small layout defects on a specific language and browser combination? Quality is relative, not absolute. Withholding that release over a minor defect reduces quality as a whole for everyone.

  4. Does it matter to whom the defects are assigned? Would just knowing there is a spike in defects be enough? The names of people are a secondary concern and a less useful one (as long as those people know, who cares if you know).

The mistake this organization made was to show data without context. No priority context. No customer impact context. No "for feedback" context. Just "you are a failure" for writing code and committing it to a test environment. The result was devastating (although the executive team never realized it): defect counts appeared to go down, but the quality of the delivered product was unknowable at any point. People simply adapted by hiding defect data and not integrating or committing early, getting feedback too late to act on. Yuk.

Another form of judging that causes data gaming or misrepresentation is the "need" to categorize people or teams as high, medium, or low performers. Stack ranking them. When we apply these sorts of summary labels, we are just asking for data manipulation. Most often, it isn't even necessary. If we are looking at the next step for improving, why does it matter where we are on someone else's journey? What matters is the decision that applies to our context: our "next steps" or "needed action." Worse still, those labeled "High Performers" have tough times too, but as a high performer you feel less urgency to react early to adverse trends; you are, after all, "high performing" (for now). Nothing is gained by these labels, and we should purge our industry of such simple (flawed) ideas.

A common place I see this happen is team performance comparisons. Comparing a newly formed team against a long-established team doesn't give any useful information other than the gap. The newly formed team could be improving miraculously, yet still sit below a long-established team whose performance has been declining rapidly. The correct assessment is "good job, new team," but if we stack rank teams into high, medium, or low performers, the new team is still "lower." Being judged against others out of context leads to all manner of data gaming and, most importantly, to stupid decisions.

In closing, there is often very little upside to showing data publicly at an individual level. It has many significant downsides that mean data becomes needlessly incomplete, gamed, or both.

Myth of the Month - You need a stable system to forecast

Setting aside varying definitions of a stable system, the general belief that you need a "stable" system to forecast is a myth. You don't need similar values, or values close to each other. You do need the distribution of values to remain similar over time.

For example, 100, 10, 100, 10, 100, 10 stories per week is a stable system as far as forecasting goes. We can probably deduce that this team works on a 2-week sprint cycle, and this pattern is consistent and stable. Every second week is either 100 or 10. The issue is that conventional forecasting using an average pace of 55 stories per week would make little sense.

Monte Carlo forecasting would give the correct result over many weeks. But if you were forecasting next week only, it would be better to know what last week's throughput was (say 100), so you know to anticipate the other value for this week (10).

Medium- and longer-term distribution stability is what matters. Short-term forecasts always require immediate context to make sense of what the next value could be. Over the medium to longer term, the distribution of values works out.
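
To make this concrete, here is a minimal Monte Carlo sketch in Python (the 300-story backlog, the percentiles reported, and the function name are illustrative assumptions, not taken from my spreadsheets). It resamples weekly throughput from the alternating history above and counts how many weeks the backlog takes to burn down:

    import random

    def weeks_to_finish(history, remaining, trials=10000):
        """Monte Carlo: resample weekly throughput from history (with
        replacement) and count the weeks needed to burn down the backlog."""
        outcomes = []
        for _ in range(trials):
            left, weeks = remaining, 0
            while left > 0:
                left -= random.choice(history)  # pick a historical week at random
                weeks += 1
            outcomes.append(weeks)
        outcomes.sort()
        # 50th and 85th percentile outcomes
        return outcomes[len(outcomes) // 2], outcomes[int(len(outcomes) * 0.85)]

    history = [100, 10, 100, 10, 100, 10]   # the alternating pattern above
    p50, p85 = weeks_to_finish(history, remaining=300)
    print(f"50% likely within {p50} weeks, 85% likely within {p85} weeks")

A straight average of 55 stories per week would say 300 / 55, roughly 5.5 weeks, and give no sense of the spread; the simulation instead returns a range of outcomes, and over that many weeks the alternating pattern washes out, exactly as described above.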

We forecast unstable systems in other fields. Think about weather forecasting. There are many unstable input factors where small changes in one value cause a massive change in outcome. Over time, meteorologists have learned by getting it wrong and investigating why they got it wrong. They improve the model and get fewer surprises over time. In an unstable system, you can never be sure of the actual outcome, but Monte Carlo, plus improving our models to incorporate the surprise factors over time, means we get it right more often.

Tool of the Month - Capability Survey and Heatmap

Understand where your skill gaps exist. Create a list of skills, ask your team to assess themselves, and compare the capability you have with the capabilities you think you need in the future. Plan and grow people and teams into the critical skills your organization needs.
Download the Capability Matrix Spreadsheet
All the free tools
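
For anyone who prefers to see the gap calculation in code rather than in the spreadsheet, here is a minimal sketch (the skills, names, scores, and 0-3 scale below are made up for illustration; the spreadsheet above is the actual tool):

    # Hypothetical self-assessment scores (0 = none, 3 = expert) per skill, per person
    assessments = {
        "SQL":             {"Ana": 3, "Ben": 1, "Cho": 0},
        "Test automation": {"Ana": 1, "Ben": 1, "Cho": 1},
        "UX design":       {"Ana": 0, "Ben": 0, "Cho": 2},
    }
    # Capability the team believes it needs in the future (same 0-3 scale)
    needed = {"SQL": 2, "Test automation": 3, "UX design": 1}

    print(f"{'Skill':<16}{'Best today':>11}{'Needed':>8}{'Gap':>5}")
    for skill, scores in assessments.items():
        best = max(scores.values())          # strongest current capability
        gap = max(needed[skill] - best, 0)   # positive gap = skill to grow
        print(f"{skill:<16}{best:>11}{needed[skill]:>8}{gap:>5}")

Color the gap column (zero = fine, larger = more urgent) and you have the heatmap.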

Book of the Month - Rethinking Agile by Klaus Leopold


OK, I'm cheating a little here. I did the English translation copy editing for this book, so I had to read it several times. I love it, though. It follows an Agile transformation case study: an organization that invests heavily in Agile and gets nothing in return (sound familiar?). They then look at applying Agile to solve real problems, and success starts to follow. If you need to explain to your boss, and your boss's boss, why Agile needs to be considered a whole-organization sport, not just a "local team" thing, then buy this book and leave it on their desk.

Buy from Amazon
 

About Focused Objective and Troy Magennis
I offer training and consulting on Forecasting and Metrics related to Agile planning. Come along to a training workshop or schedule a call to discuss how a little bit of mathematical and data magic might improve your product delivery flow.
See all of my workshops and free tools on the Focused Objective website.

Got Metric or Forecasting Questions? Contact Me
Twitter
Website
Email
Copyright © 2020 Focused Objective LLC, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.