In any line of work, there are a number of rites of passage by which you can mark your career and feel good about milestones achieved. Programming is no different.
Think back to the first time you were asked to perform a code review. How exciting! (Though you probably laugh now at your younger self for being excited to review code.) You spent time as the newest programmer in the organization under the impression that code reviews were something done to you rather than by you, but now you have confirmation that your opinion is valued.
If you're anything like me, the first thing you did in such a situation was to scramble to Google and start punching in things like "how to conduct a code review," "code review checklist," or "code review best practices." Being asked to weigh in on someone's code is a lot of responsibility, and who wants to screw that up? In such a Google search, you'll probably find recommendations that steer you toward a list like this:
- Does every method have an XML comment?
- Do classes have a copyright header?
- Do fields, methods, and types follow our standard naming convention?
- Do methods have too many parameters?
- Are you checking validity of method parameters?
- Does the code have "magic" values instead of named constants?
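Every item on that list can be checked mechanically. As a minimal sketch of the idea (in Python, using the standard-library `ast` module; the parameter threshold and the set of tolerated literals are purely illustrative, not taken from any particular tool), a checker for two of the items above might look like:

```python
import ast

MAX_PARAMS = 4  # illustrative threshold

def check_source(source: str) -> list[str]:
    """Flag methods with too many parameters and bare 'magic' numeric literals."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            count = len(node.args.args)
            if count > MAX_PARAMS:
                findings.append(
                    f"line {node.lineno}: '{node.name}' has "
                    f"{count} parameters (max {MAX_PARAMS})"
                )
        elif isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            if node.value not in (0, 1, -1):  # commonly tolerated literals
                findings.append(f"line {node.lineno}: magic value {node.value!r}")
    return findings

# Hypothetical code under review: too many parameters, plus a magic value.
sample = """
def price(quantity, rate, discount, surcharge, handling):
    return quantity * rate * 0.925
"""
for finding in check_source(sample):
    print(finding)
```

Real tools (linters and analyzers built into or bolted onto your IDE and build) do exactly this kind of syntactic walking, just with far more rules and configurability.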
Seems pretty reasonable, right? If we brainstormed for an hour or two, we could probably flesh this out to be a comprehensive list with maybe 100 items on it. And that's what a lot of code review checklists look like -- large lists of things for reviewers to go through and check.
So code review becomes an activity reminiscent of a line supervisor inspecting factory equipment with a clipboard.
There are two problems with this.
- You can't keep 100+ items in your head as you look at every method or clause in a code base, so you're going to have to read the code over and over, looking for different things.
- None of the checks I listed above actually requires human judgment. They can all be handled via static analysis.
Automating The Easy Stuff
So, if you go the "long checklist" route of code review, what you wind up with, in a real sense, is a lot of busywork. Code inspection is a hugely valuable activity, and reviewers' time is finite; spend it on things that require human intellect. By all means, make this large checklist, too – and then set about automating everything that can be automated.
What do I mean by "automated?" Get static analysis tools that developers can install in their IDEs and run prior to delivering code, which will flag violations as errors or warnings. Get static analysis tools that run on the build machine and fail the build for violations. Don't build a list to help you nag developers about not following conventions – make it impossible for them not to follow the conventions so you can focus code reviews on the more conceptual elements of programming.
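To make "fail the build for violations" concrete, here is a hedged sketch of a build-machine gate. The `gate` and `enforce` names are my own, and the `flake8`/`mypy` commands in the comment are just examples of analyzers a Python team might run; substitute whatever your stack uses.

```python
import subprocess
import sys

def gate(command: list[str]) -> bool:
    """Run one analysis command; True means it reported no violations."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout or result.stderr)
    return result.returncode == 0

def enforce(checks: list[list[str]]) -> None:
    """Run every check (no short-circuiting, so all findings print), then fail the build if any failed."""
    results = [gate(command) for command in checks]
    if not all(results):
        sys.exit("Build failed: resolve static analysis findings before requesting review.")

# Example wiring (commands are illustrative placeholders):
# enforce([["flake8", "src/"], ["mypy", "src/"]])
```

Once a step like this runs on every build, convention violations simply never reach a human reviewer's eyes.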
Not only does this introduce an operational efficiency, but it also has a subtly morale-boosting effect on the whole peer review process. Imagine pouring your heart into writing a letter to someone, only for the recipient to say, "Looks like you've got a typo on line 12." You're looking for meaningful, qualitative feedback on your writing, and they're nitpicking over things that a spelling and grammar check could handle. Let the static checker handle it, and have a more meaningful discussion when a teammate reviews your code. Instead of feeling like every submission will lead to an onslaught of nitpicky sharpshooting, your colleagues will feel that they're having professional discussions about their craft.
That morale boost is empowering, and it leads to an increased sense of professionalism in the approach to software development. "We trust you to handle your business with code formatting" sends a much better message than, "Let's see here, did you remember to prepend all of your fields with an underscore?" The former inspires upfront due diligence while the latter inspires learned helplessness.
Due diligence is important because there should really be two code review checklists: things the reviewee should do prior to submitting for review and things the reviewer should check. The lists I'm suggesting here cannot possibly be comprehensive (particularly without specification of tech stack or even whether we're talking about functional, structured or OOP languages), but they should give you a sample of the philosophy at play.
Code Review for the Important Stuff
First, here is an example checklist for a code author. The reviewer should assume that the following are true, and the author should consider it a matter of professional pride to ensure that they are true. This list should be pretty short if you're doing a good job of baking as many rules as possible into automated, static analysis checks. The more developers can verify automatically, the better.
- Does my code compile without errors and run without exceptions in happy path conditions?
- Have I checked this code to see if it triggers compiler or static analysis warnings?
- Have I covered this code with appropriate tests, and are those tests currently green?
- Have I run our performance/load/smoke tests to make sure nothing I've introduced is a performance killer?
- Have I run our suite of security tests/checks to make sure I'm not opening vulnerabilities?
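The automatable portions of that author checklist can be made habitual rather than heroic. As one possible approach (a sketch only; every command below is a hypothetical placeholder for your team's real build, test, and scan tooling), a small pre-submission runner could report pass/fail per checklist item:

```python
import subprocess

# Illustrative mapping of checklist items to commands; substitute your
# team's actual tooling for each placeholder command.
PRE_REVIEW_CHECKS = {
    "code compiles cleanly": ["python", "-m", "compileall", "-q", "src"],
    "tests are green": ["pytest", "-q"],
    "no analysis warnings": ["flake8", "src"],
}

def run_checklist(checks: dict[str, list[str]]) -> dict[str, bool]:
    """Run each check, print a PASS/FAIL line per item, and return the results."""
    results = {}
    for item, command in checks.items():
        outcome = subprocess.run(command, capture_output=True)
        results[item] = outcome.returncode == 0
        print(f"[{'PASS' if results[item] else 'FAIL'}] {item}")
    return results
```

Wired into a pre-push hook or run by hand before opening a review, this turns "did I remember to check?" into a one-command answer.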
Here is an example checklist for a code reviewer. Note that none of these are things easily addressed by an automated check because the reviewer assumes that the code author has done her due diligence in completing the above checklist.
- Does this code read like prose?
- Do the methods do what their names claim they'll do? Do the classes?
- Can I get an understanding of the desired behavior just by doing quick scans through unit and acceptance tests?
- Does the understanding of the desired behavior match the requirements/stories for this work?
- Is this code introducing any new dependencies between classes/components/modules and, if so, is it necessary to do that?
- Is this code idiomatic, taking full advantage of the language, frameworks, and tools that we use?
- Is anything here a re-implementation of existing functionality the developer may not be aware of?
As you can see here, the code author assumes a great deal of responsibility for writing well-crafted code. The reviewer is looking for possible cracks in the facade. This allows the two of them to team up on fixes.
It's also very important that the lists remain relatively brief so that the reviews don't turn into mind-numbing procedures. You can always modify the checklist and add new things as they come up, but be sure to cull things that are solved problems as your team grows. No matter how hard you try, it's not going to be perfect. The aim is to catch what mistakes you can and to get better – not to attempt perfection.