• Micro Sharing - More people who understand a piece of the code base
• Macro Sharing - Exposure to new patterns and ways of approaching problems
“Why Review Code” Sophie Alpert https://sophiebits.com/2018/12/25/why-review-code.html
• Architecture is planned (or not) in design meetings and documents
• Code Reviews help flesh it out
• Code Reviews are great places to challenge, discuss
and improve architecture decisions or gaps
• Especially good for less formal processes
“A Philosophy Of Software Design” John Ousterhout https://amzn.to/2XxQM03
Deliberate practice [is when]
(1) your attention is focused tightly on a
specific skill you’re trying to improve […]
(2) you receive feedback so you can
correct your approach to keep your
attention exactly where it is most productive
- Cal Newport, Deep Work
“Deep Work” Cal Newport https://amzn.to/2XCfn3Y
Hey everyone! We have a ton of speakers tonight, so I’m gonna get right into it.
My name is Ben. I work a block up the street at WhatCounts (formerly Windsor Circle), I’ve been in professional software development for about a decade now, and have lived in Durham for even longer than that. I write about shipping web software at my blog, feel free to check it out.
Cool, so… here’s the gameplan. Talks on code review are a bit unusual, so I’m gonna stand up here for 30 seconds and convince you I’m not wasting your time. Then we’ll talk about why this code review thing is useful, how to do it better, and finally how to create an environment where everyone can get more benefits from code reviews.
So why is this a better use of your time than reading tweets for the next 20 minutes?
Raise your hand if you do code reviews or have your code reviewed as part of your job
Raise your hand if you contribute to, or are interested in contributing to, open source projects? (Yeah, you all are going to encounter code reviews too.)
When Google ran an internal research project on code reviews, they found that developers spent, on average, over three hours a week on code reviews, with some developers devoting considerably more time to reviewing code. This aligns with my experience on teams that regularly review code.
Despite being a regular part of many developers’ rhythms, code review is something I’ve rarely seen discussed at conferences, meetups, or on blogs. Which is a shame, because it’s something you can definitely get better at.
Finally, good code reviews can make your job better by making work collaborative and helping you improve. Bad ones can exclude people and miss opportunities to improve the software.
And helping you get better is the goal today. So let’s start by understanding what value code review brings.
I’m going to focus on four main areas of value for code review. This list isn’t exhaustive, and you can probably come up with more, but I’ve found these to be universal benefits.
Ok, so this is probably why most people would say they’re doing reviews. We’re catching bugs, and pointing out opportunities for improvement when somebody doesn’t know about a particular pattern or library.
I owe the breakdown of this set of benefits to Sophie Alpert, the former manager of the React team at Facebook.
Sophie makes a distinction between micro and macro knowledge sharing:
Micro is more about increasing team knowledge of the code base, which helps people do more and mitigates the “lottery bus” factor: when an employee leaves suddenly because they won the lottery or were hit by a bus
Macro is about learning from your coworkers and seeing new ways of approaching problems that you hadn’t encountered before. Note that while code reviews are ostensibly there to help the reviewee or the codebase, these benefits actually accrue to the *reviewer*. Code reviews have more than one direction of impact.
One of the books that changed how I think about code reviews is “A Philosophy Of Software Design” by John Ousterhout. It’s a book about software architecture and design, but the author’s recommended tactic for implementing his suggestions is to use code reviews to see how well the codebase and new code support good architecture.
For instance, after reading about how modules that know too much about each other can cause problems, spend time looking for that pattern in your code reviews.
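As a concrete sketch of what that smell can look like in a review, here is an entirely invented example (none of these names or this code come from the book or the talk): two functions that both hard-code the same record format, followed by a version where the format knowledge lives in one place.

```python
# Hedged, invented illustration of the "modules know too much about
# each other" smell, and the kind of fix a reviewer might suggest.

# Leaky version: the writer and the reader BOTH hard-code the "name,age"
# line format. A reviewer can flag that changing the format now requires
# editing two separate places that must stay in sync.
def save_leaky(user):
    return f"{user['name']},{user['age']}"

def load_leaky(line):
    name, age = line.split(",")
    return {"name": name, "age": int(age)}

# Less leaky: the format knowledge lives in one class, and the rest of
# the code only calls serialize/parse. A format change touches one file.
class UserFormat:
    @staticmethod
    def serialize(user):
        return f"{user['name']},{user['age']}"

    @staticmethod
    def parse(line):
        name, age = line.split(",")
        return {"name": name, "age": int(age)}

user = {"name": "Ada", "age": 36}
assert UserFormat.parse(UserFormat.serialize(user)) == user
```

In a review, the cue to look for is the same structural knowledge (a format, a protocol, a set of magic values) duplicated across the files in the diff.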
Most teams I’ve worked on in my career have been on the informal end of the process in terms of how much time we spend on upfront designs and explicit architecture models, so code review discussions have always been really important in defining the architecture. But even if your process is more formal, code reviews are where theories meet reality.
This is also very relevant to open source projects. It’s a common pattern in the open source world to ask somebody who has a suggestion for an architecture change to open a PR for discussion, especially if that person isn’t the maintainer.
In his book Deep Work, Cal Newport argues that the key to mastery of any skill is deliberate practice, which consists of “doing something” and “getting feedback on it”
The first 3 points are all specific versions of the last point: code reviews help us get better by defining a feedback loop. There’s an old saying about how you can have 10 years of experience in a job, or 1 year of experience 10 times. Getting feedback on our work and learning where we need to improve gives us the opportunity to get better.
Alright, so that’s why we review code: better code, knowledge sharing, implementing architecture, and feedback loops. On to how to do it better.
Ok, so there are two parts to having better code reviews: getting better ones when you’re submitting code, and giving better ones when you’re reviewing.
So this is probably the single biggest practical thing I’ve done to get better reviews: annotating my pull requests with additional context, which has resulted in much better feedback during reviews.
By “annotating” I mean leaving comments on the PR in Bitbucket/GitHub; comments aren’t just for reviewers.
The types of things I annotate:
• Giving reviewers a high-level picture of how the changes fit together
• Referencing when a particular change connects to a change in another file
• Giving context for why a particular section of code changed (as opposed to what it does or how it works, which should be evident from the code and comments in the diff)
• Asking questions or highlighting particularly important parts of the diff
I have found that this lowers the burden on reviewers as they try to understand context, and helps them focus on the important stuff. Anecdotally it has led to much more useful feedback and less rubber stamping of my PRs.
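To make that concrete, here is an invented example of what a self-annotation on a diff might look like (the file names and wording are hypothetical, not from the talk):

```markdown
**Author note on `billing/invoice.py`:**
This moves the tax-rate lookup out of the loop; see the matching change
in `billing/rates.py`. The *why*: the old version re-fetched rates on
every row. Is there a better approach I'm missing here?
```

Notice it covers the why, the cross-file connection, and an open question, rather than restating what the code does.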
So… this tweet speaks for itself. Split apart your changes into digestible review sizes as much as possible.
In my experience somewhere around 150 lines things start getting pretty dicey, and most PRs should be much smaller than that.
Ok, so on to “giving” better code reviews. This is where the human factor really comes in.
The chart on the slide is my favorite framework for thinking about feedback: you give good feedback when you care personally and are willing to challenge directly.
In code reviews that means taking the time to get things right and telling the reviewee what you think, but doing so with an attitude of genuinely wanting to help them and make the end result better. I think you can imagine what the other quadrants look like: empty compliments, rubber stamping or nitpicking, and abrasive or domineering put-downs.
Code reviews are one of the most human parts of our day-to-day work as developers, and getting better at them means getting better at working with humans.
So more about dealing with humans: we generally don’t respond well to constant streams of negativity. For most people their defense mechanisms kick in at that point and they start to tune people out.
In fact, while criticism is sometimes necessary, it is often the least impactful way to give feedback. Instead, mix it up. Ask about things you don’t understand or think you may be lacking context on. Be humble. Tell people what they’ve done well, and make suggestions for how things could be better (without necessarily saying that it’s wrong now).
Ok, the final section here is about how we help teams do this better. This is most relevant to the managers, maintainers, and leads in the room, but anybody can speak up on these things and push for change.
One thing I think a lot of developers assume about code review is that it’s about senior developers keeping more junior developers “in line”. But code reviews can actually be useful in multiple directions; they just have different benefits each way.
Note that “senior” and “junior” here are relative terms. It’s about when one person has more experience and comfort contributing to a particular code base, whether through years of experience, aptitude, or familiarity with the subject matter.
(read the slide :) )
So this is my best “manager hat” advice of the talk: if you want your team to get value out of code reviews, the best thing you can do is treat reviewing as an equal activity to writing the code. That means it’s equally valued as time spent, and when bugs come back both the coder and the reviewer should be held accountable. This doesn’t mean “blaming the reviewer”; it just means that when you’re examining quality problems and looking for solutions, you should be working with coders AND reviewers to find improvements, and not accepting a “rubber stamping” attitude toward reviews.
Finally, a lot of teams like the ideas of code reviews but aren’t set up to do them. So here are my quick hit tips
Get the right technology. I’ve worked on teams that used old version control systems and sent patches along by email for review. Code review on those teams was ok, but often got skimped on because it was so much work. Modern tools like GitHub, Bitbucket, GitLab, Phabricator and others make this MUCH lighter weight. And when it’s easier, teams do it more.
I also recommend setting up checklists of common things to look for in code reviews. These shouldn’t be exhaustive; outline the 3-7 common things that get missed, and update the list over time. Maybe have specialty checklists for different parts of the code, and work them into your process. To be honest, I’m still experimenting with this stuff, and will probably write about it once I’ve used it a bit more with my team.
Lastly, code reviews are severely limited if you don’t leave time for developers to act on a reviewer’s suggestions. If developers are overcommitting to the point where they can’t iterate after a reviewer comments, that should be addressed and fixed.
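As an illustration, a starter checklist in that 3-7 item range might look like the following (the items are invented examples of things that commonly get missed, not a prescription):

```markdown
Code review checklist (example):
- [ ] Are new error paths logged or surfaced to the user?
- [ ] Do schema/database changes include a migration?
- [ ] Is there a test that would fail if this change were reverted?
- [ ] Are names consistent with the rest of the module?
- [ ] Is the diff small enough to review in one sitting?
```

A checklist like this can live in the repo itself (for example as a pull request template) so it shows up in every review without anyone having to remember it.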