Although software development is more about numbers than ever before, not all development teams are realizing the same level of value from the metrics they use. Some organizations are approaching the problem more holistically to get more from their investments.
Metrics are transforming the ways companies and even software teams work, but the dirty little secret is that numbers are a double-edged sword. While metrics can improve processes and productivity, they also can be misused, abused, or not used at all.
“One of the main concerns I’ve always had about metrics is the temptation for management to use them to evaluate whether one team is better than another team,” said Nate McKie, CTO of Asynchrony, a software development consulting firm. “Once you start that, you can kiss the usefulness of the numbers goodbye; because at that point the teams will do whatever they can to make their numbers look good.”
Take Source Lines of Code (SLOC), for example. Some teams use it to track output because it’s objective and easy to measure. But it isn’t necessarily the best way to gauge productivity, because a team working on a new development project produces far more code than a team debugging undocumented legacy spaghetti code.
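To see why SLOC is so easy to collect (and so easy to game), here is a minimal sketch of a line counter. The file extensions and single comment prefix are assumptions for illustration; real tools handle block comments, generated code, and many languages, which is part of why raw SLOC remains a crude productivity measure.

```python
from pathlib import Path

def count_sloc(root: str, extensions=(".py",), comment_prefix="#") -> int:
    """Count non-blank, non-comment source lines under a directory.

    Deliberately naive: it ignores block comments, multi-line strings,
    and generated code, so the number it returns says little about the
    value of the work behind those lines.
    """
    total = 0
    for path in Path(root).rglob("*"):
        if path.suffix not in extensions or not path.is_file():
            continue
        for line in path.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                total += 1
    return total

print(count_sloc("src"))  # a big number that is cheap to produce and easy to inflate
```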
“SLOC does not cover the time you spend doing R&D, documentation, or other non-coding tasks that are part of development,” said Mark Herschberg, CTO of Madison Logic, which builds business-to-business lead generation solutions. “Nearly all metrics – SLOC, function points, COCOMO, and cyclomatic complexity – have their operating ranges; but none of them make sense for all projects. Building a simple ecommerce website is very different from creating fraud detection software.”
Defining “The Right” Metrics
If a metric can’t be applied to all projects and all teams all of the time with the same stellar results, then discernment has to come into play. Like many of his peers, Herschberg has tracked progress by monitoring the backlog of bug fixes and new features. His team members have also estimated hours for sprints and tasks.
Not everyone is equally accurate at estimating time, however. Developers tend to get better at project estimation with practice by analyzing the variance between estimated hours and actual hours.
“If estimates are off you have to understand why they are off,” said Herschberg. “If I take a look back and see that my development estimates were off by a factor of 2, my database estimates were off by a factor of 3.2, and my GUI estimates were off by a factor of 1.5, it could mean I’m worse at estimating the database issues. It could also mean that the database is really complicated, not well documented, or that database issues involve talking to other teams. A lot of people just aren’t used to that kind of thinking.”
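A worked sketch of the kind of breakdown Herschberg describes: grouping tasks by category and dividing total actual hours by total estimated hours. The task data below is invented purely to reproduce the factors mentioned in the quote.

```python
from collections import defaultdict

# Hypothetical (category, estimated_hours, actual_hours) tuples, chosen only
# to illustrate the variance analysis described above.
tasks = [
    ("development", 8, 15), ("development", 5, 11),
    ("database",    4, 13), ("database",    6, 19),
    ("gui",         4,  6), ("gui",         4,  6),
]

totals = defaultdict(lambda: [0.0, 0.0])
for category, estimated, actual in tasks:
    totals[category][0] += estimated
    totals[category][1] += actual

for category, (estimated, actual) in totals.items():
    # A factor of 2.0 means the work took twice as long as estimated.
    print(f"{category}: off by a factor of {actual / estimated:.1f}")
# development: off by a factor of 2.0
# database: off by a factor of 3.2
# gui: off by a factor of 1.5
```

The factor alone does not say whether the estimator is weak or the work is inherently unpredictable; as Herschberg notes, that interpretation is the hard part.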
To get all the software teams on board, McKie started a monthly ops review that requires team leaders to show what they’re doing to improve productivity and quality. Rather than mandating specific metrics that every team must use, McKie gives each team the freedom to choose its own metrics.
“There are still people who use velocity and iterations to show productivity,” said McKie. “For quality we’ve had everything from defects over a certain period to teams presenting on technical debt and how they’ve begun to measure that across their teams, how long legacy development takes versus new development, test coverage, test failures, those types of things.”
McKie is taking steps to ensure that the meetings mature from reporting to analysis so people understand what metrics different teams are using, how the metrics are used, what happens when trends change for better or worse, and what changes have been made as a result of the metrics.
The Dark Side of Metrics
Metrics can be a slippery slope – for developers and their clients.
Initially, developers may not want to use metrics because they see measurement as falling outside the scope of their job. This was part of the motivation for launching the ops meetings at Asynchrony.
“We started the meetings with the idea that if the leads come together there’s a peer pressure aspect. You’ll look silly if you show up to a meeting without metrics,” said McKie. “People can see that others are solving problems so it’s a group support mechanism. It’s much more effective than mandating that everyone has to produce certain metrics just because that’s the way it works.”
On the flip side, it’s also possible for developers to get so wrapped up in metrics that they spend less time writing code. The myriad of possible metrics, each resting on its own assumptions and calculations, can become a rabbit hole, especially when the numbers are tracked in spreadsheets. And after all that thinking (or over-thinking), the results may still be erroneous.
“We’ve had cases where things look right, but there was an error in the spreadsheet or assumptions or data,” said McKie. “That can be bad. We’re hoping to improve that with the ops meetings because you have peer review. If a metric is unusual, people can question it and the person presenting it can explain it and verify their assumptions.”
Anyone who has ever had internal or external clients knows (or will eventually learn) that setting client expectations is important. Metrics are commonly used to evaluate progress or project success.
But the intrinsic value of a metric is often compromised the moment it is produced, according to Russ Lewis, principal at Storm Consulting. “A team practicing Agile forecasting might record their current ‘velocity’ at the top of their project whiteboard for all to see. To members of the team it means the number of story points completed in the last sprint, and is used as the basis for planning future sprints,” said Lewis. “But to the casual passer-by it looks like a measure of achievement, which perhaps should be increasing month on month. To the competitive-minded it could easily become a target which must be challenged and beaten. Soon the team is told they must start improving their velocity, which leads to the Agile equivalent of currency inflation.”
When a team is told to improve its velocity, it may simply assign more points to the same stories even though the effort and resources required haven’t changed. Because the higher numbers make it appear that velocity has improved, the client, product manager, or company executive who demanded the improvement may be satisfied, temporarily.
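To make the “currency inflation” concrete, here is a small sketch with invented sprint data showing how re-pointing the same stories raises measured velocity without any change in what actually ships.

```python
# Hypothetical sprint data: each inner list holds the story points assigned
# to the stories completed in one sprint. Same stories, same effort, in both cases.
honest_sprints   = [[3, 5, 2, 3], [5, 3, 3, 2]]
inflated_sprints = [[5, 8, 3, 5], [8, 5, 5, 3]]  # every story re-pointed upward

def velocity(sprints):
    """Average story points completed per sprint."""
    return sum(sum(sprint) for sprint in sprints) / len(sprints)

print(velocity(honest_sprints))    # 13.0
print(velocity(inflated_sprints))  # 21.0 -- looks like a ~60% gain, but nothing shipped faster
```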
Another problem with numbers, especially among project managers and executives who are relatively inexperienced with them, is a failure to consider what, beyond the numbers themselves, affects the value of a metric. Are the metrics appropriate to the situation? Do all parties involved have the same understanding of what the numbers mean? Are appropriate actions being taken as a result of the metrics? Is all the measurement actually delivering a benefit?
“It takes maturity to understand the numbers, act on them, and then see whether the action actually improved things,” said Asynchrony’s McKie. “The teams that are using metrics to their benefit are those that have been using them for years. The teams that are new to using them are using metrics for project burndown and to determine how well they followed a plan.”
Bottom line, there is no one-size-fits-all approach to metrics that works for every team in every situation, every time. While the use of metrics can improve outcomes and productivity, there is more to deriving benefits from numbers than meets the eye.