Code Review: Worth the Time?
Collaborate | Posted April 30, 2013

In a recent survey on code review that we conducted with more than 600 developers, the No. 1 reason given for not doing code review was a lack of time. Everyone is pressed for time these days, developers not least among them. The question, then, is not whether code review takes too much time, but whether a company can afford not to spend that time.

Formal, manual code review is time-consuming, and many would undoubtedly argue that it is not a productive use of time. By “formal, manual code review” I mean meetings where developers get into a room together and walk through the code. I know of no one who actually likes this.

Of course, there are other types of code review. Over-the-shoulder code review can be beneficial, but it’s typically limited to the two people involved. The same goes for email-based code review, where an author emails their code to another developer for feedback.

Tool-based code review is different. With a tool, an author can send code to an entire team for feedback, and each reviewer can see the comments the others have made. That collaboration helps the whole team improve, rather than just two people learning from one another.

But is it too time-consuming? Or, phrased differently, is the return worth the time it takes to review the code?

To answer that question, let’s first look at the time it takes to review code.

Tool-Based Code Review is Faster

A code review study we conducted with Cisco showed that, for optimal effectiveness, developers should review between 200 and 400 lines of code (LOC) at a time; beyond that, the ability to find defects diminishes. At that rate, with the review spread over no more than 60–90 minutes, you should get a 70–90% yield. In other words, if 10 defects existed, you'd find seven to nine of them.

After 10 months of monitoring, the study crystallized a theory: Done properly, lightweight code reviews are just as effective as formal ones, yet they're substantially faster (and less annoying) to conduct. Our lightweight reviews took an average of six and a half hours less than formal reviews, but found just as many bugs.

(We have a number of other metrics around code review in our free eBook, Best Kept Secrets of Peer Code Review.)

Best Practice: Verify that the Defects are Actually Fixed

OK, this "best practice" seems like a no-brainer. If you're going to the trouble of reviewing code to find bugs, it certainly makes sense to fix them! Yet many teams that review code don't have a good way of tracking defects found during review and ensuring that bugs are actually fixed before the review is complete. It's especially difficult to verify results in email or over-the-shoulder reviews.

Keep in mind that these bugs aren't usually entered into bug tracking logs, because they are found before code is released to QA. So, what's a good way to ensure that defects are fixed before the code is given the "All Clear" sign?

We suggest using good collaborative review software, integrated with your bug tracking system, to track defects found in reviews. With the right tool, reviewers can log bugs and discuss them with the author as necessary. Authors then fix the problems and notify reviewers, and reviewers must then verify that each issue is resolved. The tool should track bugs found during a review and prohibit review completion until all bugs are verified as fixed by the reviewer (or tracked as a separate work item to be resolved later). A work item should be approved only when the review is complete.
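
To make that workflow concrete, here is a minimal sketch in Python of the completion-gating logic. It is an illustration under assumed names (Defect, Review, and their fields are hypothetical), not any particular tool's API:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Status(Enum):
        OPEN = auto()       # logged by a reviewer
        FIXED = auto()      # author claims a fix; awaiting reviewer verification
        VERIFIED = auto()   # reviewer confirmed the fix
        DEFERRED = auto()   # tracked as a separate work item to resolve later

    @dataclass
    class Defect:
        description: str
        status: Status = Status.OPEN

    @dataclass
    class Review:
        defects: list = field(default_factory=list)

        def can_complete(self):
            # The review stays open until every defect is verified or deferred.
            return all(d.status in (Status.VERIFIED, Status.DEFERRED)
                       for d in self.defects)

    review = Review([Defect("Off-by-one in pagination loop")])
    review.defects[0].status = Status.FIXED      # author fixed it...
    assert not review.can_complete()             # ...but no reviewer verified yet
    review.defects[0].status = Status.VERIFIED   # reviewer signs off
    assert review.can_complete()                 # now the review may complete

The key design choice is that the author's fix alone never closes a defect; a reviewer's verification (or an explicit deferral) does.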

The Cost of Defects

For years, writers have used a fairly simplistic measure of the cost of defects by phase. The measurements usually resemble something like this:

Defects found during requirements:         $250
Defects found during design:               $500
Defects found during coding and testing:   $1,250
Defects found after release:               $5,000

Capers Jones, who has written extensively on this, has been careful to point out that, while this works mathematically, results vary dramatically by company: the cost-per-defect metric penalizes quality, because it is always cheapest where the greatest number of defects is found.

Few would dispute, however, that the more defects you can eliminate during the requirements and design phases, the better.

But what is the cost of fixing a defect? A simple formula can help:

Average cost to fix a defect = (number of people * number of days * cost per person-day) / (number of defects fixed)
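As a quick illustration, here is that formula as a small Python function; the numbers in the example are made up:

    def average_cost_to_fix(people, days, cost_per_person_day, defects_fixed):
        # Average cost to fix a defect = total effort cost / defects fixed.
        return (people * days * cost_per_person_day) / defects_fixed

    # Example (made-up numbers): 2 developers spend 3 days at $500/person-day
    # and fix 12 defects along the way.
    print(average_cost_to_fix(2, 3, 500, 12))   # 250.0, i.e. $250 per defect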
For simplicity’s sake, let’s take the original figure of $250 per defect found during the requirements phase – rising to $5,000 once development reaches release. If you assume a developer cost of $500/day, a developer needs to find only two defects per day during the requirements phase, or one per day during the design phase, to cover that cost. And every defect found before release yields a higher return: if a defect found after release costs the company $5,000, every defect caught earlier in the process saves you most of that amount. This doesn’t even take into account the value the developer brings by authoring code!
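
A short sketch of that break-even arithmetic, using the same illustrative figures (the $500/day developer cost and the simplistic per-phase costs above are assumptions for the example, not measured data):

    DEV_COST_PER_DAY = 500
    PHASE_COST = {
        "requirements": 250,
        "design": 500,
        "coding and testing": 1250,
        "after release": 5000,
    }

    for phase, cost in PHASE_COST.items():
        breakeven = DEV_COST_PER_DAY / cost           # defects/day to cover the developer's day
        savings = PHASE_COST["after release"] - cost  # saved vs. finding the same defect post-release
        print(f"{phase}: break even at {breakeven:.1f} defects/day; "
              f"each one saves ${savings:,} vs. post-release")

Run it and the requirements phase breaks even at 2.0 defects per day and design at 1.0, matching the figures above.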

A No-Brainer

Of course, it’s hard to get these results without good process – and good software. Talk with developers who use a tool for code review and you’ll find they are much happier than their counterparts who don’t. Happiest of all, however, are the development managers who can use the tool to measure the team’s quality and improvement.

For all of them, the time spent is well worth it.
