Async Code Reviews Are Killing Your Company’s Throughput - [Old]

  1. Async Code Reviews Are Killing Your Company’s Throughput
  2. ● Principal@HelloFresh ● Ex: @Careem/Uber, @GetYourGuide ● Rants on draganstepanovic.com @d_stepanovic ● I put mayo on 🍕 😱
  3. Meet people where they are
  4. PR-based async code review
  5. What was I curious to see? ● Engagement ● Wait Time ● Size
  6. Engagement
  7. “Never had a 300+ LoC PR that didn’t look good to me” ~ Anonymous Developer
  8. Why was I curious about the engagement? ● Feedback delayed and ‘choked’ by the system ● High-latency, low-throughput feedback vs Low-latency, high-throughput feedback ● Engagement by size? ○ Only counting non-trivial comments (no LGTM)
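A minimal sketch of the "only counting non-trivial comments" rule from the slide above. The `TRIVIAL` set and the sample comments are illustrative assumptions, not the talk's actual tooling:

```python
# Hypothetical sketch: count review comments that carry real feedback,
# excluding rubber-stamp replies such as "LGTM". The TRIVIAL set below
# is an assumption for illustration.
TRIVIAL = {"lgtm", "lgtm!", "+1", "approved", ":shipit:"}

def engagement(comments):
    """Number of comments that carry actual review feedback."""
    return sum(1 for c in comments if c.strip().lower() not in TRIVIAL)

def engagement_per_loc(comments, loc_changed):
    """Engagement normalized by PR size, for comparing PRs of different sizes."""
    return engagement(comments) / loc_changed

comments = ["LGTM", "This loop can be O(n) instead of O(n^2)", "+1"]
print(engagement(comments))  # only the substantive comment counts -> 1
```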
  9. Lack of engagement/feedback => No ability to build the quality in
  10. Wait time
  11. A couple of important assumptions and facts ● Processing time and flow efficiency at the larger end of the PR-size spectrum are inaccurate because of the practice of rewriting git history ● Wait time is far more accurate ● Wait time can include processing time, and processing time can include wait time ● PR size is measured simply as LoC changed
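The last assumption above ("PR size is measured simply as LoC changed") can be sketched as a parser over `git diff --numstat` output; the sample diff below is invented for illustration:

```python
# Sketch of measuring PR size as total LoC changed (additions + deletions),
# parsed from `git diff --numstat` output. The sample output is illustrative;
# binary files (reported by git as "-") are skipped.
def loc_changed(numstat_output: str) -> int:
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t")
        if added == "-" or deleted == "-":
            continue  # binary file, no line counts available
        total += int(added) + int(deleted)
    return total

sample = "12\t3\tsrc/app.py\n0\t7\tREADME.md\n-\t-\tlogo.png"
print(loc_changed(sample))  # 12 + 3 + 0 + 7 = 22
```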
  12. Cost of code review per LoC vs. size
  13. www.draganstepanovic.com/2020/11/16/pr-size-cannot-be-reduced.html
  14. Throughput ↓
  15. Throughput ↓ Quality ↓
  16. Cost of code review per LoC vs. size: how do we keep the cost of code review per LoC constant so that throughput doesn’t plummet? 🤔
  17. Charts (each plotted against PR size): cost of code review per LoC, actors’ reaction time, availability of actors, throughput. Actors = author + reviewers
  18. Availability of actors vs. size
  19. With the decrease in PR size, people need to get exponentially closer and closer in time in order not to lose throughput
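The claim above can be illustrated with a tiny numeric sketch: if the cost of review per LoC is to stay constant, the total wait time a PR can absorb must shrink in proportion to its size, so reviewers of small PRs must react much sooner. The budget constant below is made up for illustration:

```python
# Illustration (with an invented budget, not the talk's data): holding
# review cost per LoC constant means the allowed total wait time scales
# down with PR size.
TARGET_COST_PER_LOC = 60.0  # seconds of wait per LoC changed (assumed budget)

def max_wait_seconds(pr_size_loc: int) -> float:
    """Largest total wait time that keeps wait-per-LoC at the target."""
    return TARGET_COST_PER_LOC * pr_size_loc

for size in (500, 50, 5):
    # a 5-LoC change tolerates only minutes of waiting, not hours
    print(size, max_wait_seconds(size))
```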
  20. You cannot be interrupted if you’re not doing anything else
  21. Enter Co-creation patterns
  22. What would this scatter look like had we had continuous code review after each line of code?
  23. What would this scatter look like had we had continuous code review after each line of code?
  24. PR Score
  25. What are we trying to optimize for? Size ↓, wait time per size ↓, engagement per size ↑ (or at least not ↓)
  26. PR Score
      @staticmethod
      def calculate_for(size, wait_time, engagement):
          return math.log(1 + Score.absolute(engagement, size, wait_time))

      @staticmethod
      def absolute(engagement, size, wait_time):
          return size * wait_time.in_seconds() / (1 + engagement)
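The slide's snippet can be made runnable as a self-contained sketch. The `Score` class name and the minimal `WaitTime` helper are assumptions filling in what the slide elides; the formula itself is the one shown:

```python
# Runnable sketch of the PR Score from the slide. WaitTime is an assumed
# helper so the snippet is self-contained.
import math

class WaitTime:
    def __init__(self, seconds):
        self.seconds = seconds

    def in_seconds(self):
        return self.seconds

class Score:
    @staticmethod
    def calculate_for(size, wait_time, engagement):
        # log(1 + x) keeps the score at 0 when size or wait time is 0
        return math.log(1 + Score.absolute(engagement, size, wait_time))

    @staticmethod
    def absolute(engagement, size, wait_time):
        # 1 + engagement avoids division by zero with no feedback;
        # more engagement lowers the score (lower is better)
        return size * wait_time.in_seconds() / (1 + engagement)

# A small, quickly reviewed, well-discussed PR scores lower (better)
# than a big, slowly reviewed one with no engagement.
small = Score.calculate_for(size=20, wait_time=WaitTime(600), engagement=4)
big = Score.calculate_for(size=800, wait_time=WaitTime(86400), engagement=0)
print(small < big)  # True
```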
  27. Continuous code review (pairing/mobbing)
  28. What would the world look like had we paired?
  29. We’ve been told all along that we’ll achieve more if we limit and delay our interactions. Hope you now (also) have a data-informed reason to not believe that
  30. ● If you’re interested in trying out the tool for free, reach out: dragan.stepanovic@gmail.com ● The idea of the tool is to provide you enough value to change your process so that you no longer need the tool ● Only GitHub is supported currently ● I’m writing a book (at the intersection of XP, Lean, ToC, and Systems Thinking) ● If you’d like me to speak to your community about this topic, Small Batches, an intro to ToC, or the systemic advantages of co-creation, let me know
  31. @d_stepanovic