Async Code Reviews Are Killing Your Company’s Throughput - [Old]

"Never had a PR over 300 LoC that didn't look good to me".
We've all been there. The PR is so big you don't even bother commenting. It's already too late to build the quality in. You make a sad face, comment "LGTM", and click approve.

That's still the case in lots of teams, but it feels like, after a long time, the industry has on average learned the value of Lean's Small Batches idea applied to PRs. And that's a good thing; it's a step in the right direction.
We do a bit of coding, raise a PR, and then ask for feedback. People are more likely to engage with smaller PRs: they take less time to review, and reviewers feel they can still course-correct if something goes astray. PRs go out the door sooner, and we feel productive as a team.

But, here's the surprise.
What if I told you that teams doing small PRs (with async code reviews) actually have way lower throughput than teams doing big PRs?

"Wait, what?!"
Yes.

I got this surprising systemic insight from analyzing PRs across a bunch of very active repositories, and I'll present you with the results of that study.
At the bigger-PR end of the spectrum we tend to lose quality, while at the smaller-PR end we lose throughput. We're forced to make a trade-off between speed and quality.

But! There's a parallel universe of ways of working where you get to have your cake and eat it too. Where you can have both throughput and quality.
That universe is called co-creation patterns (Pair and Mob programming).

Join me on a journey where I'll show you data invalidating the assumption that two or more people sitting at the same computer hurts our throughput, and why the opposite is actually the case.

"Never had a PR over 300 LoC that didn't look good to me".
We've all been there. The PR is so big you don't even bother commenting. It's already too late to build the quality in. You make a sad face, comment "LGTM", and click approve.

That's still the case in lots of teams but feels like after a long time the industry, on average, learned the value of the Small Batches idea from Lean applied to PRs. And that's a good thing; it's a step in the right direction.
We do a bit of coding, raise a PR and then ask for feedback. People are more likely to engage on smaller PRs. It takes them less time to review it, and they have a feeling that they can still course-correct if something goes astray. PRs go sooner out of the door, and we feel productive as a team.

But, here's the surprise.
What if I told you that teams doing small PRs (with async code reviews) actually have way lower throughput than teams doing big PRs.

"Wait, what?!"
Yes.

I got this surprising systemic insight from analyzing PRs across a bunch of very active repositories and I'll present you with the results of the study.
On the bigger PRs side of the spectrum, we tend to lose quality, while on the smaller PRs end of the spectrum we lose throughput. We're forced to make a trade-off between speed and quality.

But! There's a parallel universe of ways of working where you get to have a cake and eat it too. Where you can have both throughput and quality.
That universe is called co-creation patterns (Pair and Mob programming).

Join me on a journey where I'll show you data invalidating the assumption that two or more people sitting at the same computer will hurt our throughput and why the opposite is actually the case.


  1. Async Code Reviews Are Killing Your Company’s Throughput
  2. ● Principal @HelloFresh ● Ex: @Careem/Uber, @GetYourGuide ● Rants on draganstepanovic.com @d_stepanovic ● I put mayo on 🍕 😱
  3. Meet people where they are
  4. PR-based async code review
  5. What was I curious to see? ● Engagement ● Wait Time ● Size
  6. Engagement
  7. “Never had a 300+ LoC PR that didn’t look good to me” ~ Anonymous Developer
  8. Why was I curious about the engagement? ● Feedback is delayed and ‘choked’ by the system ● High-latency, low-throughput feedback vs. low-latency, high-throughput feedback ● Engagement by size? ○ Only counting non-trivial comments (no LGTM; see the engagement-counting sketch after the slides)
  9. Lack of engagement/feedback => No ability to build the quality in
  10. Wait time
  11. A couple of important assumptions and facts ● Processing Time and Flow Efficiency at the bigger-PR end of the spectrum are inaccurate because of the git history rewrite practice ● Wait Time is way more accurate ● Wait time can contain processing time ● Processing time can contain wait time as well ● PR size is measured as simple LoC changed
  12. [Scatter plot: cost of code review per LoC vs. PR size]
  13. www.draganstepanovic.com/2020/11/16/pr-size-cannot-be-reduced.html
  14. Throughput ↓
  15. Throughput ↓ Quality ↓
  16. [Scatter plot: cost of code review per LoC vs. PR size] How do we keep the cost of code review per LoC constant so that throughput doesn’t plummet? 🤔
  17. [Charts vs. PR size: cost of code review per LoC, actors’ reaction time, availability of actors, throughput] Actors = Author + Reviewers
  18. [Chart: availability of actors vs. PR size]
  19. With the decrease in PR size, people need to get exponentially closer and closer in time in order not to lose throughput (see the toy cost model after the slides)
  20. You cannot be interrupted if you’re not doing anything else
  21. Enter Co-creation patterns
  22. What would this scatter look like had we had continuous code review after each line of code?
  23. What would this scatter look like had we had continuous code review after each line of code?
  24. PR Score
  25. What are we trying to optimize for? ● Size ↓ ● Wait time per size ↓ ● Engagement per size ↑ (or at least not ↓)
  26. PR Score (a runnable version follows the slides)

      @staticmethod
      def calculate_for(size, wait_time, engagement):
          return math.log(1 + Score.absolute(engagement, size, wait_time))

      @staticmethod
      def absolute(engagement, size, wait_time):
          return size * wait_time.in_seconds() / (1 + engagement)

      # Slide callouts (reconstructed from the formula itself):
      # engagement ↑ => score ↓; size = 0 or wait time = 0 => score = 0
  27. Continuous code review (pairing/mobbing)
  28. What would the world look like had we paired?
  29. We’ve been told all along that we’ll achieve more if we limit and delay our interactions. Hope you now have (also) a data-informed reason not to believe that.
  30. ● If you’re interested in trying out the tool for free: dragan.stepanovic@gmail.com ● The idea of the tool is to provide you enough value to change your process so that you don’t need the tool anymore ● Only GitHub is supported currently ● I’m writing a book (at the intersection of XP, Lean, ToC, and Systems Thinking) ● If you’d like me to speak to your community about this topic, Small Batches, an Intro to ToC, or the systemic advantages of co-creation, let me know
  31. @d_stepanovic
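
Referenced from slide 8: a minimal sketch of how "engagement by size" could be counted while excluding trivial "LGTM"-style approvals. The comment heuristic and the function names here are my assumptions for illustration, not the exact rules used in the study.

    import re

    # Heuristic for trivial approval comments (assumed, not the study's exact filter)
    TRIVIAL = re.compile(r"^\s*(lgtm|\+1|ship it|looks good( to me)?)[.!]*\s*$", re.IGNORECASE)

    def non_trivial_comments(comments):
        """Count review comments, excluding 'LGTM'-style approvals."""
        return sum(1 for c in comments if not TRIVIAL.match(c))

    def engagement_per_loc(comments, loc_changed):
        """Non-trivial comments per changed line of code."""
        return non_trivial_comments(comments) / max(loc_changed, 1)

    print(engagement_per_loc(["LGTM!", "Shouldn't this be >= instead of >?"], 120))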
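Referenced from slides 16-19: a toy model (my assumption, not the study's fitted numbers) of why the cost of code review per LoC explodes as PRs shrink. Each review round pays a roughly fixed wait for the actors to become available, and that fixed term dominates small PRs.

    WAIT_PER_ROUND_MIN = 60.0   # assumed reviewer reaction time per round, minutes
    REVIEW_MIN_PER_LOC = 0.2    # assumed reading effort per changed line, minutes

    def cost_per_loc(loc, rounds=1):
        # total cost = fixed wait per review round + effort proportional to size
        total = rounds * WAIT_PER_ROUND_MIN + loc * REVIEW_MIN_PER_LOC
        return total / loc

    for loc in (400, 100, 25):
        print(f"{loc:>4} LoC -> {cost_per_loc(loc):.2f} min/LoC")
    # 400 LoC -> 0.35, 100 LoC -> 0.80, 25 LoC -> 2.60: to keep min/LoC
    # constant as size drops, the wait term must shrink in proportion to
    # size, i.e. the actors must get closer and closer in time.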
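Referenced from slide 26: a self-contained, runnable version of the PR Score. The WaitTime stand-in is my assumption; only its in_seconds() method is implied by the slide's call site.

    import math

    class WaitTime:
        def __init__(self, seconds):
            self.seconds = seconds

        def in_seconds(self):
            return self.seconds

    class Score:
        @staticmethod
        def calculate_for(size, wait_time, engagement):
            # log dampens the raw score; the "1 +" keeps log defined at zero
            return math.log(1 + Score.absolute(engagement, size, wait_time))

        @staticmethod
        def absolute(engagement, size, wait_time):
            # lower is better: small, quickly reviewed, well-discussed PRs
            return size * wait_time.in_seconds() / (1 + engagement)

    # e.g. 50 LoC, 2-hour wait, 3 non-trivial comments
    print(Score.calculate_for(50, WaitTime(7200), 3))  # ≈ 11.4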
