Best Practices of Code Reviews

Long ago I read a great white paper, Best Kept Secrets of Peer Code Review, published by SmartBear Software. It's an excellent study of code reviews and offers some great advice. Since then, I've continued learning and owe many thanks to some great engineers and thought leaders. What follows is a compilation of my current thoughts on the subject.

Before I get started, I'll give my definition of code review:

  • Code Review -- a critical analysis by an individual or group of a functional unit of code, be it a diff, a completed work, or a combination of both.

Next, the five types of code review, as defined by SmartBear -- plus one from me.
  1. Over the shoulder
  2. Email
  3. Tool-assisted, with something like Gerrit
  4. Pair programming
  5. Formal inspections
  6. Self review
I feel that the first two on their own are insufficient for any development organization with a focus on quality. The informality of over-the-shoulder or email review does not induce the proper level of seriousness, and emails can be lost and are not available to the team at large.

A (good) code review tool is irreplaceable. It provides history, contextual comments, patch sets, a web link to reference in the future, and more. It also provides an integration point for build systems and development workflows, a critical component for any team attempting continuous delivery. Pair programming is very valuable, for more than just code review, but is not the subject of this post.

Formal reviews can go either direction. I'll start with a bad one. At a previous company, we used to hold ad hoc, hour-long code reviews. We had no tool; instead we'd invite the entire team into one cramped room, hook up a laptop to the TV, and have someone show us part of their code. There was rarely any preparation on the part of the presenter, nor any advance notice of what we'd be seeing. It was kind of like open mic night at the local coffee shop.

Looking back on this, I'm astonished that we didn't realize how ineffective it was. Roughly 70% of the engineers did not pay attention; reviews never started on time; on average, at most 15% of the attendees had any prior exposure to the code under review; bugs were rarely found; meetings often devolved into tangential debates. In other words, it was a terrible waste of time.

Not all formal reviews have to be so dreadful. In fact, we conduct very successful formal reviews at my present company. We call them "Code Workshops." These entail advance notice of the code being shared, an assigned reviewer who comes prepared, and a (comfortable) room full of attentive and respectful engineers. The outcome is generally positive and we all leave a little wiser.

Finally, self reviews. A self review is the act of critically reviewing code that you've written. This can happen at any point: before you submit it for review by others, before you refactor it, or just for fun on a lonely night. Self review should not be overlooked. It is a critical factor in effective peer code review and a fundamental part of developing better coding practices.


Given all of that, here are the lessons I've learned and the practices I've come to adopt.

Learnings

  • The cost of fixing a customer-reported bug vastly exceeds the cost of fixing a bug found during development. The cost benefit of ensuring quality through effective code review and other means is immense.
  • Sign-off from senior engineers should not be required. Requiring it says "We don't trust you." You should inherently trust the people you hire to be smart enough to *request* code reviews when they're needed. If you're faced with habitual stealth submitters, your problems with those employees may extend beyond just code reviews.
  • Synchronous pair code reviewing for complex changes reduces feedback cycles and total time to deliver. Moreover, beyond code review, it promotes rapid knowledge transfer.
  • An engineer's time is your most precious resource. Make sure it's used as effectively as possible and avoid doing anything manually that can be automated. In the context of reviewing code, automate as much as you possibly can with static analysis tools and build systems so that engineers spend their time leaving unique comments (and not nit-picks about code standards). We use Facebook's Arcanist for our static analysis of PHP and Adam Rosien's Sniff for Scala.
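As one illustration of that automation point, here's a minimal sketch of a pre-commit hook in Python. The commands in LINT_COMMANDS are placeholders, not our actual configuration -- substitute your team's real tools (an `arc lint` invocation, a Sniff run, etc.):

```python
import subprocess
import sys

# Placeholder lint commands -- swap in your real static-analysis tools.
# Here a trivial Python one-liner stands in for a real linter.
LINT_COMMANDS = [
    [sys.executable, "-c", "print('lint ok')"],
]

def run_checks(commands):
    """Run each command; return True only if every check exits 0."""
    ok = True
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print("check failed:", " ".join(cmd))
            print(result.stdout + result.stderr)
            ok = False
    return ok

# In a real hook (.git/hooks/pre-commit) you would exit nonzero on failure,
# blocking the commit before a reviewer ever sees a style nit.
print("all checks passed" if run_checks(LINT_COMMANDS) else "commit blocked")
```

The point is that style and mechanical errors are caught by the machine before the review even starts, leaving human reviewers free to comment on design and correctness.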

Best Practices

  • Use code review as training for new hires. Keep track of good reviews in a team wiki.
  • Do NOT use code review as a way to teach people how certain code is used. Hold training sessions and record them and upload them internally (or to Box!).
  • Do not oversubscribe reviewers: although having more people look over the code will most likely result in more defects being discovered, the relationship quickly becomes logarithmic rather than linear.
  • Strictly enforce limits on the size of a review change set. Review effectiveness begins to decline after the first 15 minutes of reviewing and drops drastically after 60 minutes. Respecting this practice will also help when bisecting code history during bug hunts.
  • Code reviews of unfamiliar code should be accompanied by a brief overview or an in person conversation.
  • Before requesting a code review, the original author should very critically review their own code. A reviewer is not a janitor. In fact, authors often find several problems in this phase.
  • Create a checklist of common issues. Checklists helped uncover 30% more defects in equal amounts of time in studies conducted by SmartBear (paper, page 126).
    •  A checklist should be supplemental to automation by a static analysis phase of your build system.
    • See sample checklist on page 112 of the SmartBear white paper.
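On the bisecting point above: small change sets pay off because `git bisect` narrows a regression to a single commit in logarithmically many test runs, and the smaller each commit is, the smaller the diff you're finally left staring at. A rough sketch of that search in Python, with commits modeled as integers and an illustrative, made-up regression point:

```python
def first_bad(history, is_bad):
    """Binary-search a linear history for the first bad commit,
    the same way `git bisect` does: O(log n) test runs."""
    lo, hi = 0, len(history) - 1
    runs = 0
    while lo < hi:
        mid = (lo + hi) // 2
        runs += 1
        if is_bad(history[mid]):
            hi = mid        # bug is at mid or earlier
        else:
            lo = mid + 1    # bug was introduced after mid
    return history[lo], runs

# 1024 commits; suppose the regression landed at commit 700.
commits = list(range(1, 1025))
culprit, runs = first_bad(commits, lambda c: c >= 700)
print(culprit, runs)  # pinpoints commit 700 in 10 test runs
```

Ten test runs to isolate one commit out of a thousand -- but only if that commit is small enough that reading its diff actually reveals the bug.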

Tips

  • Reviewers tend to spend the majority of their time on declarations and signatures.
    • Keep variable declarations close to their use.
    • Use shorter functions.
  • Reduce feedback cycles by leaving concise, intelligent comments.
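A tiny illustration of the first two tips, using hypothetical names in Python: each function is short enough to review in one glance, and each declaration sits right next to its first use.

```python
def order_total(orders):
    """Short, single-purpose function: its whole contract fits on one screen."""
    return sum(o["price"] * o["qty"] for o in orders)

def order_report(orders):
    # `total` is declared on the line before it's used, so a reviewer
    # never has to scroll back up to find what it holds.
    total = order_total(orders)
    return f"{len(orders)} orders, total {total}"

print(order_report([{"price": 2, "qty": 3}, {"price": 5, "qty": 1}]))
# -> 2 orders, total 11
```

Since a reviewer's attention gravitates to declarations and signatures anyway, structuring code this way puts the most informative lines exactly where they will be read.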


Final Thoughts


If I could encourage you to do one thing, it would be to find a good tool that works for your team and diligently learn how to use it to its maximum potential. Integrate it into your development process and use it in your build system. Treat it with the same level of importance as your code repository.

The bottom line is that no matter how you're doing it, code reviews help your team do better. These bullet points are the product of several years of software development and teamwork, but nonetheless simply remain suggestions and not prescriptions.


Further Reading

http://smartbear.com/resources/whitepapers/best-kept-secrets-of-peer-code-review
http://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/


