Do Code Review Meetings Still Make Sense in 2019?
“I survived a meeting that should have been an email.”
  March 08, 2019

Recently, it seems that anti-meeting sentiment is growing. Sometimes it shows up as a meme; sometimes it shows up in frustrated post-meeting mumbling; and sometimes it shows up as a snarky mug design. Whatever the form, people are hyperconscious of their time these days and acutely aware of how meetings can invade their calendars.

“Oh, another recurring meeting? Great…”

So, let’s break this down through the lens of code review. Do code review meetings still make sense in 2019?

Why is it that we are having meetings in the first place?

In 1976, Michael Fagan published a study at IBM on the efficacy of peer reviewing software. That study served as the basis for the first popularized approach to effective code review, eponymously referred to as “Fagan Inspections”. This code review approach is driven by a series of meetings. An introductory meeting is set to present materials and create objectives for the review. This is followed by an inspection meeting where materials are reviewed, defects are found, and metrics are collected manually.

After the author makes changes based on this feedback, there is a verification meeting where reviewers make sure that the issues initially raised have been remedied. Lastly, there is a follow-up meeting to discuss what could be improved for the next review cycle.

In case you weren’t counting or got caught in a brief spiral of meeting vertigo, this approach recommends four meetings for a single code review.

It goes without saying that the development landscape has transformed dramatically since the inception of this approach. According to our 2018 State of Code Review report, tool-assisted reviews are now almost twice as popular as meeting-based reviews.

The rise of tool-assisted code reviews

For the two decades following Fagan’s study, requesting feedback on code files through email was the closest thing to conducting a tool-assisted review. While email allows for file sharing and conversations, most of the review work is still manual.

In 2003, SmartBear launched the first commercial code review tool, Code Collaborator, now simply called Collaborator. For the first time, teams had the ability to clearly see differences between file versions, make comments and mark defects, and report on their review process.

By 2008, the code review tool market had expanded. SonarQube, a popular open source static analysis tool, was created to help teams automate finding certain defects in their code reviews. Other peer review tools like Crucible and Review Board started to gain traction, and GitHub and Bitbucket made their debuts. In the decade-plus since, new players like GitLab (2011) and Visual Studio (2015) have been added to the mix.

We surveyed 1,100 software professionals in 2018 and found that 39% of respondents are conducting tool-assisted reviews at least once a week, with 21% participating on a daily basis.

All of this is to say that using a tool to drive reviews is now the dominant approach, from lightweight pull requests to focused, customized peer reviews.

Have meetings been completely replaced by these tools?

As with most things, it’s complicated.

In Jason Cohen’s 2006 book, The Best Kept Secrets of Peer Code Review, he cites a number of studies that try to determine if defects (things that need to change) are actually found during meetings, or if they are identified by reviewers in the reading phase ahead of the meeting.

One of those studies was published by Lawrence Votta from AT&T Bell Labs in 1993, showing that 96% of defects were found in the reading phase; the review meeting itself proved a relatively ineffective method for defect identification. A study by Reidar Conradi in 2003 looked at 38 architecture reviews and found that even though reading accounted for only 25% of the review time, it turned up 80% of the defects.

Today, tool-assisted reviews have made this reading phase even easier by showing diffs of files, allowing for remote feedback, and capturing metrics.

So if code review meetings aren’t helpful in identifying defects, why bother?

In our analysis of the 2018 State of Code Review report, we found that of the teams that never hold code review meetings, only 28% are satisfied or very satisfied with their software quality. Almost half (48%) are actually unsatisfied with their software quality.

Having review meetings is correlated with being satisfied with your software quality. This doesn’t mean that your team needs to hold meetings every day, or every week for that matter, but it does indicate that review meetings can have value.

The same analysis mentioned earlier found that the best review approach is to combine tool-assisted reviews with meetings. Reviewers can conduct code reviews on a daily basis utilizing a tool, and then teams can regroup in meetings and use those tools to facilitate conversations. For example, in Collaborator, teams can capture metrics like defect type and severity, review inspection rates, lines of code reviewed, as well as custom metrics.

Understanding What You Can Improve

How do you know how effective your reviews are? Without key metrics, you can’t really know.

In meetings, your team could pull a report on the type of defects that are showing up most in reviews. Are there trends? Would it make sense to hold a training on a certain subject area? Is your team spending enough time on reviews or rushing through?
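As a minimal sketch of what pulling such a report might look like, suppose you can export review findings as simple records with a defect type and severity. The field names and data here are hypothetical, not the export format of any particular tool:

```python
from collections import Counter

# Hypothetical exported review findings; field names are illustrative only.
defects = [
    {"type": "logic", "severity": "major"},
    {"type": "style", "severity": "minor"},
    {"type": "logic", "severity": "minor"},
    {"type": "security", "severity": "major"},
    {"type": "logic", "severity": "major"},
]

def defect_trends(records):
    """Count how often each defect type appears, most common first."""
    return Counter(r["type"] for r in records).most_common()

print(defect_trends(defects))
```

If "logic" defects dominate a report like this week after week, that is a concrete trend to bring to the meeting, and perhaps a cue to schedule a focused training session.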

By taking the time to understand the outcomes of your reviews, your team will be able to actively improve your process and hone your focus. Our analysis found that teams that report on their review process are five times as likely to be satisfied with their software quality. Reporting provides you with the information you need to make process improvements. If the numbers don’t align with your team’s perception, meetings allow space for those honest conversations.

Fostering Collective Ownership and Learning

The benefit of meetings, in person or virtual, is that you can build a sense of team. That means that the conversation topics of your meetings should reflect your values as a team. Looking at metrics can signal that your team cares about following a process and constantly improving it. Allocating time to recognize outstanding contributions and thank team members for their effort signals that there is a common appreciation for each other’s time and hard work.

If you want to grow your team, you also need to prioritize learning. Code reviews are a unique interaction between authors and reviewers, where feedback on work is direct and candid. If there is a better way to do something, someone will tell you. If egos don’t get in the way, every review is ripe for team members to learn best practices and skills from each other. Many teams use this practice to accelerate onboarding, pairing senior engineers with new team members.

Take time in code review meetings to have team members share what they have learned in recent reviews. By doing this, you can take the cross-pollination of skills that might have occurred between a few team members and spread those learnings across the whole team. This practice also reinforces a collaborative, learning culture.

By prioritizing learning and mentorship as core values, your team can stay active, picking up new skills and techniques week after week.

It can be easy to forget, especially if you have been stuck in a routine for a while, but humans love to learn new things. Sometimes, we just need to be reminded that there is time and space to go exploring.

So, yes. There is still a place for code review meetings.

Just don’t waste them trying to find new issues.

Take the time to learn from the issues you’ve already found, discuss ways that your process can be improved, and encourage continuous learning.

If your team is looking to get started with reporting on your code review process, you will need a tool that can capture the metrics that matter to you. Get started with Collaborator.