Everybody reading this should be familiar with the concept of taking your temperature. When you don't feel well, you (or Mom or Dad) pop a thermometer into your mouth, wait a few seconds, and then take a look. If you're above 98.6° F (37° C, 310.15 K, ...), you may have an infection. Note: I'm not a doctor (but I play one on this blog).
Recording your temperature is a "little metric" for making a quick judgement about your health, and it's effortless to collect. Well, possibly not effortless for parents of toddlers (I remember those dark days), but you get the idea.
Just as we take our body's temperature, we should be able to find and record some effortless "little metrics" for our software projects to make judgements about their health. This is especially true for any project that is:
- taking more than some tiny number of iterations.
- split up into many tasks.
- worked on by multiple developers.
We demand observability (one of the core -ilities) for the execution of our software, using logging libraries and monitoring services. We should also demand observability into the health of our development efforts, and present that observability in a way that everybody, from individual contributors to C-suite executives, can easily understand.
I'm deliberately wishy-washy about when to apply "little metrics". You'll know where you need it.
My "little metrics rules":
- The little metrics collected should be an easy concept for everybody to understand.
A common example is the number of work items (tickets) in a project: the total, the number closed, and the number active. Collecting these requires everybody to be good work item citizens and follow the rules on work sizing and not reopening closed work.
Other good examples of little metrics are cost, performance (how fast the thing(s) get done), and scalability (how many things are supported simultaneously before the system goes pear-shaped).
- The little metrics should be small in number.
If you start collecting and reporting 10 numbers for a project, my eyes will glaze over and I'll be thinking about potato chips before you're done. 3-5 numbers sounds right.
- Collection of the little metrics should be effortless.
Clicking two URLs is good. Clicking one is double plus good.
The URLs should show results in under 5 seconds. Any more, and you've lost my attention.
- Collection of the little metrics should use your source of truth.
Think about where your source of truth is for tracking the work to be done, and use that. Any denormalized copy of that data (like an external spreadsheet) will be outdated as soon as you hit File > Save.
- Collection of the little metrics should be done in a ceremonial fashion.
While it's great to collect these metrics via robot, incorporate their collection into your meetings, so that everybody can witness them getting collected. Playing music during their collection is optional.
Show their graphs over time (the next rule) to everybody in your ceremonies simultaneously, and take a short amount of time to speak to what they mean.
- The little metrics should be recorded and graphed over time.
Work tracking systems are great at showing the current state of things; what you want to see is how project health progresses over time. Every stakeholder, from individual contributor to executive, should be able to easily see these graphs and understand what they mean.
If you can record these metrics and graph them over time in your work tracking system, great. If you need to copy them into a spreadsheet, that's fine too. Because you're capturing the health state at a point in time, you don't risk the outdated data described in the source-of-truth rule above: later changes don't affect what was true at the moment of capture.
In the example of total, closed, and active, it's very interesting to see those three counts graphed together over time, but you can slice and dice them however you want: percentages, lines trending toward 0 (project completion), active work broken out as a separate graph, etc. Whatever is most useful to the stakeholders in gauging the "temperature" of a project.
If total work grows over time, the project isn't fully scoped yet (not a pejorative: growth is bound to happen as the unknown unknowns become known). If closed work stays flat over time, you can ask whether you've got enough resources on the project. There's a ton more to be written here, but it's way out of scope for this blog post.
- The little metrics should NOT BE WEAPONIZED.
I'm 100% serious here. This observability into the state of our projects should be collected and presented in a judgement-free manner. This is using data to determine project health; projects that need love and attention should get love and attention to get them back on track.
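To make the "effortless collection" idea concrete, here's a minimal Python sketch. It assumes a hypothetical CSV export from your work tracker with a `state` column (real exports vary by tool, so treat the column names and state values as placeholders); it computes the three counts as a dated snapshot you could append to a spreadsheet or time-series file for graphing.

```python
import csv
import io
from collections import Counter
from datetime import date

# Hypothetical CSV export from a work tracker.
# Real tools have their own export formats and state names.
EXPORT = """id,title,state
1,Set up CI,closed
2,Write parser,active
3,Fix login bug,closed
4,Add metrics page,active
5,Refactor config,open
"""

def little_metrics(export_text):
    """Compute the three 'little metrics': total, closed, and active work items."""
    rows = list(csv.DictReader(io.StringIO(export_text)))
    states = Counter(row["state"] for row in rows)
    return {
        "date": date.today().isoformat(),  # timestamp the snapshot
        "total": len(rows),
        "closed": states["closed"],
        "active": states["active"],
    }

snapshot = little_metrics(EXPORT)
print(snapshot)
```

Appending each day's snapshot as a row in a sheet (or a CSV committed next to the project) is all it takes to get the over-time graphs: the snapshot is immutable once recorded, which is why it sidesteps the stale-copy problem.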
I see a lot of metrics in my day-to-day work whose real meaning is hard to digest: think "can't see the forest for the trees". I'm a firm believer that having a system like these 7 rules in place around project observability will help tremendously in building a shared understanding of project health, which in turn fosters a collaborative development culture.