Things I Learned at Google: Code Review

This is part two of a multi-part series on lessons I learned while working as a software engineer at Google. If you’d like to start at the beginning, check out part zero.

Code review is a cornerstone of development at Google; almost every human-authored change to the codebase is manually reviewed by at least one other person before submission. If you’re curious about the fine points of Google’s approach, I recommend chapter nine of the excellent Software Engineering at Google.

I’m going to focus here on my personal experience with code review and share some tips for both authors and reviewers. Of course, what I consider appropriate is highly context-dependent; every company has its own norms around reviews and related communication.

What code review is good for

Coming out of college, I mostly understood the purpose of code review to be quality checking. In my junior year I took a class where all the assignments were in Lisp and had to be submitted to a custom review portal made by our professor. If our code passed the automated tests he would review it qualitatively and return it with comments, only accepting submissions once the comments were satisfactorily addressed.

Intuitively, I also understood the security value of requiring code reviews. If every code change must pass through two humans before submission, then a single compromised developer machine gives an attacker much less power.

While these are both good reasons for requiring code reviews, it wasn’t until I started working on Google Drive that I appreciated the full value of the practice. Code review at Google serves a critical purpose as a recurring, long-term investment in codebase health. This investment pays dividends primarily in the form of codebase consistency, a quality which enables substantial productivity improvements when an organization is developing software at large scale over many years.

As a newcomer, code review was the primary way that I came to learn (and eventually master) Google’s various language style guides. Engineers at Google can attain an internal certification (called ‘readability’) by demonstrating proficiency in a language and correct application of the language’s style guide. Approval from someone with readability for the language(s) used in a change is required for code submission. In this way, code review works in tandem with the style guides to drive stylistic consistency across the entire Google monorepo.

Like code review itself, the value of consistency was something I only understood after observing a long-lived codebase and organization in action. From Software Engineering at Google:

At scale, code that you write will be depended on, and eventually maintained, by someone else. Many others will need to read your code and understand what you did. Others (including automated tools) might need to refactor your code long after you’ve moved to another project. Code, therefore, needs to conform to some standards of consistency so that it can be understood and maintained.

The benefits of consistency scale superlinearly with codebase size and so are most important at large companies like Google. Code review helps to drive this consistency while also providing a venue for new team members to learn the preferred style of their language(s). This cycle perpetuates itself as these team members eventually earn readability and bring their own expertise to future reviews.

What code review is not good for

At least as a first-order effect, code review is not good for development speed. Introducing blocking asynchronous communication (and sometimes multiple rounds of it) between code authoring and code submission can massively increase the wall time of any single change. A change written in half an hour may take multiple days to get through review and submit, especially if the author and reviewer are in different time zones[0].

In order to combat this effect, it is critical to apply automation as aggressively as possible to catch issues before changes are sent for review. Automated testing is one form of this; it is easier for a reviewer to be confident in a change’s logic if they can be sure that existing and added tests have passed. To the extent possible, style guide requirements should be enforced by automated linting to limit the reviewer (and author) time spent on style minutiae.
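
As a sketch of this idea (not Google’s actual tooling), a minimal local pre-submit script might run each automated check before a change is sent for review. The check commands below are placeholders; a real script would invoke your team’s linter and test runner:

```python
import subprocess
import sys

# Placeholder check commands: substitute your team's real linter
# and test runner invocations here.
CHECKS = {
    "lint": [sys.executable, "-c", "pass"],
    "test": [sys.executable, "-c", "pass"],
}

def presubmit() -> bool:
    """Run every check, stopping at the first failure."""
    for name, cmd in CHECKS.items():
        if subprocess.run(cmd).returncode != 0:
            print(f"{name} failed")
            return False
        print(f"{name} passed")
    return True

ok = presubmit()
print("ready for review" if ok else "fix issues before requesting review")
```

Wiring something like this into a pre-commit hook or CI step keeps mechanical issues out of the review conversation entirely.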

It is also possible to fight the delaying effect of code reviews with culture: Google has a well-established official policy on review speed (externalized here) that sets a clear and high bar for reviewer promptness:

If you are not in the middle of a focused task, you should do a code review shortly after it comes in.

One business day is the maximum time it should take to respond to a code review request (i.e., first thing the next morning).

Teams and organizations that wish to employ code review effectively should be mindful of how they incentivize reviewer responsiveness since review speed may be the largest single inhibitor of change velocity.

Tips for authors

Since review speed may be the largest single contributor to change wall time, making your changes easy to review can have a significant effect on your overall productivity by reducing the number of back-and-forth cycles before your changes get approved. Although options for achieving this will vary by team and language, I do have three universal suggestions.

The first tip is to pre-review your changes. When you have a change that is ready to be sent for review, open it up in whatever tool the reviewer is going to use and read through your change as if you are its reviewer. It is crucial that you do this in the review tool rather than your editor – the contextual clues of the review interface (e.g. the diff view) will help to switch your brain from ‘author’ mode into ‘critic’ mode and cause you to notice issues that were invisible before.

This is a simple idea that requires discipline to implement; it is an intervention needed at exactly the moment when you are likely eager to move on to something (anything!) else. Try to remember the time savings of one avoided comment cycle and press on! In practice I’ve found this to be a highly effective way of catching mistakes, producing higher-quality code, and ultimately getting changes through review more quickly.

The second tip is to anticipate which changes are likely to cause significant discussion or disagreement during review and proactively solicit feedback from your reviewer during or before development. By shifting any potential discussion ‘left’ (i.e. earlier in time), you can move an exchange of ideas from a very slow medium (code review) to a faster one like chat or video call. In situations where there are multiple approaches, talking to your reviewer before writing the code may also save you from a costly trip down the wrong path.

For large projects, design documents are a common way to efficiently build consensus before code is written. The approach I’m describing aims for a similar efficiency at the level of a single change or a small set of changes, below the scale where a full document would be warranted. Often a short message can surface opinions or suggestions that would cause days of delay if they first appeared in review. For example:

Hi $TEAMMATE – I think I have a plan for fixing $TRICKY_BUG but I wanted to run my approach by you before I fully implement it. My plan is to debounce the data fetches and cache results on the client with a TTL of 30 seconds. In the process I will probably split the FooBarService class into a FooService and BarService. Do you see any issues with that approach or have any other ideas I should consider? Thanks :)

There are three typical responses to a message like this:

  1. “Sounds good to me”
  2. “That’ll work – check out the BazService where we implemented something similar last year” (or other contextual information you might be missing)
  3. “That actually won’t work for $REASON” / “Have you considered $GOOD_ALTERNATIVE?”

In the second case your coworker has saved you a little time; in the third case they’ve saved you a lot of time!

For slightly larger tasks (think: a set of changes) I like to outline my approach and offer either to write up a short document for their review or to walk through the prototype code on a call – catering to your reviewer’s preferred communication style is a good meta-tactic.

Code review comments (and therefore review time) tend to follow a power-law distribution: roughly 80% of review comments fall on 20% of changes. Identifying that minority of changes and focusing on speeding their reviews can reduce friction and have an outsized impact on your overall time-to-submission.
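
To make the 80/20 figure concrete, here is a toy calculation with invented comment counts (not real data): under a skewed distribution, the busiest fifth of changes carries most of the comments.

```python
# Invented review-comment counts for ten hypothetical changes.
comments = sorted([1, 0, 2, 1, 0, 25, 3, 0, 1, 17], reverse=True)

total = sum(comments)
busiest = comments[: len(comments) // 5]  # top 20% of changes
share = sum(busiest) / total              # their share of all comments

print(f"{share:.0%} of comments fall on 20% of changes")
# prints "84% of comments fall on 20% of changes"
```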

The third tip is about humility. Again, from Software Engineering at Google:

Remember that you are not your code, and that this change you propose is not “yours” but the team’s. After you check that piece of code into the codebase, it is no longer yours in any case. Be receptive to questions on your approach, and be prepared to explain why you did things in certain ways. Remember that part of the responsibility of an author is to make sure this code is understandable and maintainable for the future.

In my experience, authors tend to have a baseline reluctance to modify their proposed changes. Problem solving is a personal experience, and it is natural to feel attached to your own creations, big or small. But once a change is submitted it belongs to the team, not an individual. Code review is a transitional phase that can facilitate ‘letting go’ of this attachment, if you let it.

Tips for reviewers

Code review is an asymmetric process; the reviewer effectively holds ‘veto’ power over the author’s proposed change. Being a good reviewer mostly means being conscientious with this power: responding promptly, reviewing thoroughly, and being kind.

Because comment turnaround time can make up such a large part of time-to-submission, reviewing changes soon after you receive them is one of the easiest ways to improve your team’s velocity. In a similar vein, it is crucial to send all applicable comments at once: raising issues in a second round that were already present in the initial code needlessly inflates the number of comment cycles before a change can be approved.

The power differential inherent in code reviews is amplified when the reviewer is senior to the author. It is easy to forget, after years of experience, how intimidating it can be to have your code reviewed by experts whose knowledge far exceeds your own. I remember mailing 150-line changes in the first year of my career and wincing when I saw twelve or fifteen comments from my reviewer! Code reviews can be a wonderful venue for education, but sensitivity on the part of the reviewer is needed to make this happen. Be mindful of the way your communication can shape others’ attitudes over time, especially for junior colleagues.

Likewise, remember that a proposed change does not automatically need improvement simply because you could have written it in a better way. If you are more experienced than your teammates, you should expect most of the code you receive to (initially) be below this standard! It is neither reasonable nor pragmatic to wait until every change is as good as you can make it before approving. To answer the question of ‘where is the bar’, Google applies the campfire principle: code is ready to be submitted when it improves the overall quality of the codebase. By following this principle you can make progress on development and code health in tandem, rather than holding the former hostage to the latter.

[0] A classic experience for American Googlers is writing up a change and then discovering that the only people who can approve it are in Zurich, asleep.