What is Code Review?

Code Review, also known as Peer Code Review, is the practice of consciously and systematically convening with one’s fellow programmers to check each other’s code for mistakes. It has been repeatedly shown to accelerate and streamline the process of software development like few other practices can. There are peer code review tools and software, but the concept itself is important to understand. Software is written by human beings, and software is therefore often riddled with mistakes. To err is, of course, human. What isn’t so obvious is why software developers so often rely on manual or automated testing to vet their code while neglecting that other great gift of human nature: the ability to see and correct our own mistakes.

Whether you’re a software development manager or a programmer in the trenches, you may be ignoring the tremendous benefits of code reviews, or code inspections, at your own peril. When done correctly, peer reviews save time, streamlining the development process upfront and drastically reducing the amount of work required of QA teams later on. Reviews can also save money, particularly by catching the types of bugs that might otherwise slip undetected through testing and production and onto end users’ laptops (whereupon those annoyed customers will issue scathing reviews of your product on Amazon or in the App Store, and your sales will suffer accordingly).

But while saving time and money are crucial concerns in the business of software development, code reviews also deliver some additional, more human-centric ROI. Work environments that encourage programmers to talk with each other about their code tend to foster greater communication and camaraderie, distribute the sense of "ownership" for any piece of code, and provide an invaluable educational context for junior developers, as senior colleagues demonstrate, through real examples, better ways to write clean code, solve common problems with useful shortcuts, and visually identify any number of potential trouble spots, such as memory leaks, buffer overflows, or scalability issues. Extending peer review to documents also makes it easier for an organization to curate, govern, and manage the lifecycle of digital artifacts beyond the source code.

Taken together, these factors should inspire any development team to consider implementing a smart, strategic code review process if they aren’t already doing so (especially given the statistics discussed below). After all, would a serious book publisher dare to print thousands of copies of an author’s work without first having a team of editors proof and copyedit the manuscript? For the writers and publishers of software, the same logic applies. But these days, with production cycles getting shorter and shorter, where does one begin?


Understanding Code Review

To anyone who thinks of code reviews with a cringe and a shudder, recalling the way they used to be done years ago, the prospect of introducing such a system into your fast-paced Agile workplace can seem like cruel and unusual punishment. The idea of a formal, systematic code review caught on quickly after 1976, when IBM’s Michael Fagan published his groundbreaking paper, "Design and Code Inspections to Reduce Errors in Program Development" (earlier forms of peer review had tended to be less structured). A formal inspection generally consisted of a group of people sitting together around a table in a stuffy room, poring over dot-matrix print-outs of computer code, red pens in hand, until they were bleary-eyed and brain-dead. But just because something is painful doesn’t mean it isn’t worth the effort. As Capers Jones and Olivier Bonsignour wrote in a blog post titled "Do You Inspect?":

Recent work by Tom Gilb, one of the more prominent authors dealing with software inspections, and his colleagues continues to support earlier findings that a human being inspecting code is the most effective way to find and eliminate complex problems that originate in requirements, design, and other noncode deliverables. Indeed, to identify deeper problems in source code, formal code inspection outranks testing in terms of defect-removal efficiency levels.

Nevertheless, along with everything else in the world of computing and software development, code reviews have evolved dramatically, and there are now many variations to choose from. These days, long formal code review processes, effective as they remain, aren’t typically necessary except in software engineering situations where the margin for error is essentially zero, such as avionics or other regulated industries where human safety takes precedence above all else. For most other situations, a slew of "lightweight" peer review processes has developed organically over time, and many of them are fully compatible with equally lightweight Agile workflows and iterative production cycles. There are a few common Agile-friendly approaches to code review, each with its limitations.

Common Code Review Approaches

The Email Thread

As soon as a given piece of code is ready for review, the file is sent around to the appropriate colleagues via email for each of them to review as soon as their workflow permits. While this approach can certainly be more flexible and adaptive than more traditional techniques, such as getting five people together in a room for a code-inspection meeting, an email thread of suggestions and differing opinions tends to get complicated fast, leaving the original coder on her own to sort through it all.

Pair Programming

As one of the hallmarks of Extreme Programming (XP), this approach to writing software puts developers side by side (at least figuratively), working on the same code together and thereby checking each other’s work as they go. It’s a good way for senior developers to mentor junior colleagues, and seems to bake code review directly into the programming process. Yet because authors and even co-authors tend to be too close to their own work, other methods of code review may provide more objectivity. Pair programming can also use more resources, in terms of time and personnel, than other methods.

Over-the-Shoulder

More comfortable for most developers than XP’s pair programming, the over-the-shoulder technique is one of the oldest, easiest, and most intuitive ways to engage in peer code review. Once your code is ready, just find a qualified colleague to sit down at your workstation (or go to theirs) and review your code for you, as you explain why you wrote it the way you did. This informal approach is certainly "lightweight," but it can be a little too light if it lacks any method of tracking or documentation. (Hint: bring a notepad.)

Tool-Assisted

We saved our personal favorite for last: there is arguably no simpler or more efficient way to review code than through software-based code review tools, some of which are browser-based while others integrate seamlessly with a variety of standard IDEs and SCM frameworks. Software tools solve many of the limitations of the approaches above. They track colleagues’ comments and proposed fixes for defects in a clear, coherent sequence (much like tracking changes in MS Word), enable reviews to happen asynchronously and non-locally, notify the original coder when new reviews come in, and keep the whole process moving efficiently, with no meetings and no one having to leave their desk to contribute. Some tools also allow requirements documents to be reviewed and revised and, significantly, can generate key usage statistics, providing the audit trails and review metrics needed for process improvement and compliance reporting.
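
To make this concrete, here is a minimal, hypothetical sketch, in Python, of the kind of record a review tool might keep for each comment. The class and field names are purely illustrative and are not drawn from any particular product; a real tool would add notifications, persistence, and reporting on top of a structure like this.

    # A minimal, hypothetical sketch of the kind of record a tool-assisted
    # review keeps for each comment: who said what, where, and whether the
    # issue was resolved. Names are illustrative, not from any real product.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ReviewComment:
        reviewer: str
        file_path: str
        line: int
        text: str
        is_defect: bool = False      # distinguishes defects from questions or praise
        resolved: bool = False
        created_at: datetime = field(default_factory=datetime.now)

    @dataclass
    class Review:
        author: str
        comments: List[ReviewComment] = field(default_factory=list)

        def add_comment(self, comment: ReviewComment) -> None:
            """Record a comment; a real tool would also notify the author here."""
            self.comments.append(comment)

        def open_defects(self) -> List[ReviewComment]:
            """Defects still awaiting a fix: the audit trail reviewers care about."""
            return [c for c in self.comments if c.is_defect and not c.resolved]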

Tracking Your Progress

Whichever method of peer review one prefers, metrics matter in the arena of code review, especially with so many dev teams still waiting to be convinced of its efficacy as a regular practice. There is no better way to justify, and most intelligently use, the time and brainpower a review program requires than to track actual metrics. That is why existing studies are so edifying, such as the 2005-2006 analysis of Cisco’s peer code review process conducted by SmartBear Software, which covered no fewer than 2,500 reviews of 3,200,000 lines of code written by 50 developers (the full study is presented in SmartBear’s in-depth eBook Best-Kept Secrets of Peer Code Review).

The result? Not only did Cisco’s code review process detect far more bugs, or defects, than regular testing alone could have uncovered, but the metrics, derived from such a large sample set, allowed the researchers to glean the following crucial insights about code review in general (illustrated in the short sketch after the list):

  • Lines of code (LOC) under review should be fewer than 200 and should never exceed 400; anything larger overwhelms reviewers, and they stop uncovering defects.
  • Inspection rates of less than 300 LOC per hour result in the best defect detection, and rates under 500 are still good, but expect to miss a significant percentage of defects if code is reviewed faster than that.
  • Authors who prepare the review with annotations and explanations produce far fewer defects than those who do not, presumably because annotating forces authors to self-review their own code.
  • Total review time should be less than 60 minutes and should never exceed 90; defect detection rates plummet after 90 minutes of reviewing.
  • Expect a defect-discovery rate of around 15 defects per hour; substantially higher rates are possible only with fewer than 175 LOC under review.
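
As a rough illustration, the thresholds above can be treated as a simple checklist. The sketch below is purely illustrative (it is not taken from the Cisco study or from any SmartBear tool); it flags a review whose size, duration, or inspection rate exceeds the reported limits.

    # A rough, hypothetical checklist built from the thresholds above:
    # at most 400 LOC per review, at most 500 LOC/hour, at most 90 minutes.
    def review_warnings(loc: int, minutes: float) -> list:
        """Return warnings for a review of `loc` lines inspected over `minutes`."""
        warnings = []
        if loc > 400:
            warnings.append("Review exceeds 400 LOC; split it into smaller chunks.")
        if minutes > 90:
            warnings.append("Review ran past 90 minutes; defect detection drops off.")
        rate = loc / (minutes / 60) if minutes else float("inf")
        if rate > 500:
            warnings.append(f"Inspection rate of {rate:.0f} LOC/hour is too fast to catch most defects.")
        return warnings

    # Example: a 350-line change reviewed in 25 minutes is flagged for speed (840 LOC/hour).
    print(review_warnings(350, 25))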

In another case study, recorded in the same eBook, the SmartBear team reported on a customer that wanted to reduce the high cost of fielding customer support calls, which they determined cost them $33 per call. The company started at 50,000 calls a year. Within a few years of implementing systematic code review procedures, both to remove software defects (because computer glitches were prompting calls to the customer support department) and to improve usability (to reduce the number of bewildered customers calling tech support), the volume had dropped to just 20,000 calls per year, despite a 200% increase in product sales, equaling a cool $2.6 million in savings.

Clearly, proper peer code review not only streamlines software, but also streamlines bottom lines. You can see other code review best practices in our Code Review Learning Center.

The Future of Peer Code Review

Of course, although we’ve been emphasizing it in this article, code review is only one component of any software production team’s Quality Assurance plan, with the many varieties of testing and static analysis rounding out the QA checklist. But it’s an important component, often eradicating bugs right after they hatch and before they have time to grow into unwieldy beasts, as well as identifying "hidden" bugs that might present no problem now but may impede the future evolvability of the product.

Unit tests are always good for determining whether or not a given function "works" as intended, but code review can illuminate subtler issues more suited to human perception—such as scalability, error handling, and basic legibility (including the written clarity of developers’ annotations and requirements docs). Just as test automation has become increasingly sophisticated and the predominant weapon of choice for testing teams, it also seems likely that tool-assisted peer code review will, in time, supplant the other "lightweight" forms as the most appropriate and inclusive methodology available.
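
For example, consider the contrived snippet below: it passes a straightforward unit test, yet a human reviewer would likely flag its silent error handling and its quadratic membership checks, neither of which a passing test reveals. (The code is hypothetical and written only to illustrate the contrast.)

    # Hypothetical example: this passes a simple unit test, but a reviewer
    # would likely flag the silent exception handling and the O(n^2)
    # membership checks -- issues a green test suite says nothing about.
    def deduplicate(items):
        result = []
        for item in items:
            try:
                if item not in result:   # O(n^2) as the list grows: scalability concern
                    result.append(item)
            except Exception:
                pass                     # swallows errors silently: error-handling concern
        return result

    def test_deduplicate():
        assert deduplicate([1, 2, 2, 3]) == [1, 2, 3]   # passes, but proves little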

In a world of accelerating software production schedules, where continuous deployment is becoming the norm and customer feedback is an endless loop, an ever-increasing reliance on the right digital tools for maximum efficiency just makes sense. GitHub’s effect on code review can already be felt in the growing number of reviews that development teams are actually doing.

But even a decade or two from now, unless new software somehow starts writing itself, we can rest assured that the primary component of peer code review, namely human beings, will still take center stage. Indeed, as long as humans are conducting code reviews, code will continue to improve even without dedicated review tools, simply thanks to human psychology.

As Jason Cohen, the founder of SmartBear Software, has pointed out, there is a phenomenon he calls "the ego effect": knowing that other people are going to be critiquing your work tends to make you, automatically, a better and more conscientious developer. Nobody wants to look bad in front of their colleagues, and that fact alone can be a powerful motivator for writing better code, paying more attention, and allowing fewer bugs to slip through the many cracks of our fallible human nature and into the light of day.
