9 Tips to Make the Most of Metrics
Collaborate | Posted March 23, 2010

The other day I was reading some responses from our users about why they bought our code review tool. Many of them specifically needed Code Collaborator's ability to automatically record metrics and data about their reviews.

This is a change.

So why do people care so much about metrics? Obviously, hard data provides useful measures that help you improve processes. And in methodologies like CMMI and regulated industries like medical and financial software, teams are required to capture review data and use the information for continuous improvement.

Good metrics can give you a lot of useful, actionable information. Some oft-used code review metrics include the following (a short sketch after the list shows how they're computed):

• Number of lines of code reviewed
• Number of defects found
• Time spent in reviews by each reviewer
• Inspection rate, normally measured in thousands of lines of code (kLOC) per hour
• Number of defects found per kLOC of code, a.k.a. defect density
• Defects found per hour, a.k.a. defect rate
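All of these derive from three raw measurements captured per review: lines of code covered, defects logged, and time spent. Here is a minimal Python sketch of the arithmetic (our own illustration; the function and names are invented, not Code Collaborator's API):

def review_metrics(loc_reviewed, defects_found, review_hours):
    """Derive the common review metrics from three raw measurements."""
    kloc = loc_reviewed / 1000.0
    return {
        "inspection_rate": kloc / review_hours,       # kLOC per hour
        "defect_density": defects_found / kloc,       # defects per kLOC
        "defect_rate": defects_found / review_hours,  # defects per hour
    }

# Example: 350 LOC reviewed, 4 defects found, 1.25 hours spent gives an
# inspection rate of 0.28 kLOC/hour, a defect density of about 11.4
# defects/kLOC, and a defect rate of 3.2 defects/hour.
print(review_metrics(350, 4, 1.25))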

But caveat emptor: metrics can also steer you down the wrong path if you're not careful. While metrics supply good data at a high level, they can't accurately represent the normal variations that occur in code review. For example, the code for a detailed, tricky new algorithm requires far more intensive review than the reuse of an oft-used function, so is it really helpful to compare inspection rates between the two?

Use these tips to make the most of your metrics and your code reviews:

1. Faster is not always better. Take your time with code reviews. If you review too quickly, you miss defects. If you review too slowly, you're unlikely to find additional defects; you just waste time. Aim for an inspection rate of no more than 300-500 LOC/hour.

2. Longer time spent in reviews doesn't always mean better quality. Obviously you want to review code slowly and carefully, but don't spend more than 60-90 minutes at once. This guideline is well-supported by evidence from many other studies besides our own Cisco study* (described in detail in our free book, Best Kept Secrets of Peer Code Review). In fact, it's generally known that when people engage in any activity requiring concentrated effort, performance starts dropping off after 60-90 minutes.

3. Likewise, your effectiveness drops off if you review too many lines of code at once. I mean, can you really give detailed attention to 2000 LOC in an hour? At that rate, you're likely to miss many of the lurking defects. For maximum efficiency, review fewer than 200-400 lines of code at a time.
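Taken together, tips 1-3 amount to a few numeric guardrails. As a hedged sketch (the thresholds come straight from these tips; the checker itself is hypothetical, not any tool's API), a script could flag review sessions that fall outside them:

# Upper bounds taken from tips 1-3.
MAX_LOC_PER_HOUR = 500     # tip 1: inspect no faster than 300-500 LOC/hour
MAX_SESSION_MINUTES = 90   # tip 2: concentration fades after 60-90 minutes
MAX_LOC_PER_REVIEW = 400   # tip 3: keep a single review under 200-400 LOC

def review_warnings(loc, minutes):
    """Return warnings for a review session that breaks the guidelines."""
    warnings = []
    rate = loc / (minutes / 60.0)
    if rate > MAX_LOC_PER_HOUR:
        warnings.append(f"inspection rate of {rate:.0f} LOC/hour exceeds {MAX_LOC_PER_HOUR}")
    if minutes > MAX_SESSION_MINUTES:
        warnings.append(f"{minutes}-minute session exceeds {MAX_SESSION_MINUTES} minutes")
    if loc > MAX_LOC_PER_REVIEW:
        warnings.append(f"{loc} LOC in one review exceeds {MAX_LOC_PER_REVIEW}")
    return warnings

# Example: 2,000 LOC crammed into a two-hour sitting trips all three checks.
print(review_warnings(2000, 120))

Treat these as nudges for the process, not scores for people (see tip 7).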

4. You can't always accurately compare metrics from one project to another. Even within a project, sections of code can have widely varying degrees of complexity.

5. Finding more defects doesn't indicate an unskilled or careless author. In fact, you'll likely find more defects in hard or brand new sections of code, which may be assigned to your most skilled or conscientious programmer.

6. Reviewers' inspection rates will vary widely even with similar authors, reviewers, files, and review sizes, and that's normal. In our Cisco study, we observed that many factors combine to determine review speed, and we found no metric that correlated significantly with inspection rate. Both rates and defect counts depend on many things besides the author or reviewer, such as the programming language, the environment, and the nature of the code (new, existing, hard, easy, etc.).

7. Never use code review metrics to evaluate developers. This practice makes people despise or even avoid reviews, game the system to achieve the best numbers, and distrust the overall process. Metrics are Good for process improvement, but Evil if used for criticism and performance evaluation. Smart managers understand that defects are a natural part of coding and will promote the viewpoint that finding defects is a good thing: each one is a bug that never reached customers and an opportunity to improve the code. Some teams even celebrate found defects as a shared success of author and reviewer, or reward individuals who are particularly dedicated to code reviews for the betterment of the team.

Metrics only tell you about the code, not the coder, which is as it should be. (If you want more detail on this topic, we have a whole white paper on the social effects of peer code review.)

8. Establish high-level, quantifiable goals like "reduce support calls by 20%" and "reduce the percentage of defects that reach QA by 30%." These measures reveal whether your code review is truly effective.
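For instance, tracking the second goal requires a baseline escape rate and a follow-up measurement. A quick sketch of the arithmetic, with numbers invented purely for illustration:

def escape_rate(found_in_review, found_in_qa):
    """Percentage of all known defects that slipped past review into QA."""
    return 100.0 * found_in_qa / (found_in_review + found_in_qa)

baseline = escape_rate(found_in_review=120, found_in_qa=80)  # 40.0%
current = escape_rate(found_in_review=150, found_in_qa=50)   # 25.0%
reduction = 100.0 * (baseline - current) / baseline          # 37.5%, beating the 30% goal
print(f"escape rate: {baseline:.1f}% -> {current:.1f}% ({reduction:.1f}% reduction)")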

9. Keep in mind that only automated or tightly-controlled processes can give you repeatable metrics -- human beings just aren't good at remembering to stop and start stopwatches and to record every defect they discussed. Using a code review tool that gathers metrics automatically is the most accurate and efficient way to provide metrics for process improvement. 
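To illustrate what automation buys you, here is a minimal sketch of a session recorder that timestamps itself and tallies defects as they're logged, so nobody has to run a stopwatch. It is entirely hypothetical, not Code Collaborator's implementation:

import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ReviewSession:
    """Timestamps a review automatically and tallies defects as they're logged."""
    loc: int
    defects: List[Tuple[float, str]] = field(default_factory=list)
    started: float = field(default_factory=time.time)
    ended: Optional[float] = None

    def log_defect(self, description: str) -> None:
        self.defects.append((time.time(), description))

    def finish(self) -> dict:
        self.ended = time.time()
        hours = (self.ended - self.started) / 3600.0
        return {
            "loc": self.loc,
            "defects": len(self.defects),
            "hours": round(hours, 2),
            "loc_per_hour": round(self.loc / hours) if hours > 0 else None,
        }

# Usage: session = ReviewSession(loc=350)
#        session.log_defect("off-by-one in pagination loop")
#        metrics = session.finish()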

Perhaps most important to consider is that your team is made up of people, with all of the variables that go with being human, including a general distrust of being judged by metrics that don't represent the whole picture. That means it's crucial that the focus remain on what the metrics say about the code, not the coder. The team needs to understand that the metrics will only be used to work for them, never against them.

Finally, don’t worry too much about how other groups and companies use metrics -- set them up in a way that ensures your team gets value out of them. In the end, go with what works best for your unique situation.

*Smart Bear's Code Review study: We developed the best practices outlined here using data we collected over the course of ten months in a code review study at Cisco Systems. The Cisco teams, made up of more than 50 programmers, conducted 2,500 code reviews to improve 3.2 million lines of code.

If you want more handy tips from the study, get our (free!) book: Best Kept Secrets of Peer Code Review, which details results from this study and others. For the "Cliff Notes" summary, check out the white paper, 11 Best Practices for Peer Code Review.