
Jenkins review buddy

Jenkins Review Buddy talk by Knud Poulsen at the Copenhagen Jenkins User Event 2013


  1. Jenkins ReviewBuddy: Jenkins-assisted Peer Code Reviews. Knud.Poulsen@switch-gears.dk
  2. Inspiration for this talk [slide graphic: + = JRB]. Competitive advantage today can only be achieved through better man/machine cooperation. No huge surprise to Jenkins users. Race *with* the machine.
  3. What we’ll cover ➢ Benefits of peer code review ➢ Our experiences ➢ Others’ experiences ➢ Real-world code review ➢ Causes of noise ➢ Jenkins to the rescue! ➢ Demo/ Screenshots ➢ Further development ➢ Call for sponsors & test pilots
  4. Key Benefits of Peer Code Review ➢ 90% reduction in shipped defects often reported in industry studies ➢ 25% net productivity increase often reported in industry studies ➢ Knowledge sharing ➢ Silo reduction ➢ Training new employees ➢ Best practice propagation
  5. Our experiences - Low Gain Example [Anonymized and obscured]
  6. Our experiences - High Gain Example [Anonymized and obscured]
  7. Others’ experiences: Many excellent studies out there
  8. So, what do we know? ➢ Average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections is 55 and 60 percent, respectively. [McConnell93]
  9. So, what do we know? ➢ Basic code reading is ~96% as effective at finding defects as holding a formal heavyweight inspection meeting [Votta1993] ➢ Technical code review checklists are a powerful help (especially against omissions) [Dunsmore2000] ➢ Defect detection drops dramatically after ~60 minutes, to zero after ~90 minutes [Dunsmore2000]
  10. So, what do we know? ➢ The longer a reviewer spends on the initial read-through, the more defects will ultimately be found [Uwano2006] ➢ Long methods are very time consuming to understand [Uwano2006] ➢ Loops are very time consuming to understand [Uwano2006] ➢ Reading time has ~3x higher correlation with defects found than number of lines under review [Laitenberger1999]
  11. So, what do we know? ➢ Maximum effective review rate ~400 lines per hour [Cohen2006] ➢ Disproportionately more defects are found when code changes are under 200 lines [Cohen2006] ➢ Beneficial for “quality of review” if the author pre-reviews and leaves comments for subsequent reviewers [Cohen2006]
  12. So, what do we know? ➢ Review 100 to 300 LOC at a time, in 30-60 minute chunks, with a break between each sitting [Cohen2006] ➢ Spend at least 5 minutes on a review, even if it is only a single line of code [Cohen2006] ➢ Limit reviewing to 1 hour per day [Ganssle2009]
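
Taken together, the numbers on slides 10-12 amount to a simple scheduling rule of thumb. The sketch below is a minimal Python illustration, not part of ReviewBuddy itself; the name review_schedule and the exact constants are invented here, assuming ~300 lines per hour, 60-minute sittings, and at most one hour of reviewing per day.

```python
import math

# Heuristics taken from slides 10-12 (Cohen2006, Ganssle2009); tune to taste.
LINES_PER_HOUR = 300       # effective review rate (maximum effective rate ~400)
MAX_SITTING_MINUTES = 60   # defect detection drops sharply after ~60 minutes
MAX_MINUTES_PER_DAY = 60   # limit reviewing to ~1 hour per day

def review_schedule(lines_changed: int) -> dict:
    """Suggest a review time budget and how to split it into sittings."""
    total_minutes = max(5, math.ceil(lines_changed / LINES_PER_HOUR * 60))
    sittings = math.ceil(total_minutes / MAX_SITTING_MINUTES)
    days = math.ceil(total_minutes / MAX_MINUTES_PER_DAY)
    return {"lines_changed": lines_changed,
            "total_minutes": total_minutes,
            "sittings": sittings,
            "calendar_days": days}

if __name__ == "__main__":
    # The 1,000-line example from slide 15: ~200 minutes of reading,
    # i.e. 3-4 one-hour sittings spread over as many days.
    print(review_schedule(1000))
    # A 150-line change fits in a single ~30-minute sitting.
    print(review_schedule(150))
```
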
  13. Real world ➢ Is messy... ➢ No clear pattern between time spent in the review state, total number of reworks required, or total number of lines changed ➢ Some studies compensate by dropping unreasonable data points and assuming informal out-of-tool review
  14. Real world
  15. Why no clear pattern? ➢ “Production pressure” ➢ “Lack of review guidance” ➢ Hard to justify the time ➢ Just give up on large reviews ➢ Hard to delay 1,000 lines of new code for the ~3 days (300 lines/h, 3 x 1 h, 1 h/day) it should take to review them effectively
  16. What outcome would we like? ➢ Smaller reviews should merge faster ➢ Larger reviews should merge slower
  17. Race *with* the machine ➢ Guidance on how much time to spend on a review ➢ Remind the author to do a pre-review ➢ Up-front determination of reviewers ➢ Links to review checklists ➢ Load balance reviewers ➢ Help developers justify the time investment ➢ Automatically add reviewers ➢ Automatically add the developer as pre-reviewer
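
Several of the items on slide 17 (up-front determination of reviewers, load balancing, automatically adding reviewers) can be approximated with a small script run from a Jenkins job. The Python sketch below is only an illustration of the idea, not the ReviewBuddy implementation: it guesses candidate reviewers from the git history of the touched files and adds them over ssh, assuming a Gerrit version whose CLI provides the set-reviewers command. The names recent_authors, suggest_reviewers and add_reviewers are invented for the example.

```python
import subprocess
from collections import Counter

def recent_authors(path: str, limit: int = 30) -> list[str]:
    """Author e-mails of the last commits touching this file (plain git log)."""
    out = subprocess.run(
        ["git", "log", f"-{limit}", "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()

def suggest_reviewers(changed_files: list[str], author: str, count: int = 2) -> list[str]:
    """Pick the people most familiar with the touched files, excluding the author."""
    votes = Counter()
    for path in changed_files:
        for email in recent_authors(path):
            if email != author:
                votes[email] += 1
    # Crude load balancing could subtract each candidate's open review count here.
    return [email for email, _ in votes.most_common(count)]

def add_reviewers(gerrit_host: str, change_id: str, reviewers: list[str]) -> None:
    """Add reviewers through Gerrit's ssh CLI (flags vary between Gerrit versions)."""
    for email in reviewers:
        subprocess.run(
            ["ssh", "-p", "29418", gerrit_host,
             "gerrit", "set-reviewers", "--add", email, change_id],
            check=True,
        )
```

Hooked up to the Gerrit Trigger plugin’s patchset-created event, a Jenkins job could compute the change’s file list, call suggest_reviewers with the patch author, and then add_reviewers; the same spot is where the author could be added as pre-reviewer and where links to the relevant checklists could be posted.
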
  18. Let’s try it out
  19. Let’s try it out
  20. Let’s try it out
  21. Let’s try it out
  22. Further work ➢ Also consider previous reviewers, not only previous developers ➢ Make language- and domain-specific review checklists easily accessible ➢ Weigh and score commit message size against the number of files/lines changed ➢ Inline comments from warnings and static checkers ➢ OO review expansion, i.e. “you also need to look at these 3 unchanged files”
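
As a taste of the “weigh and score commit message size against the number of files/lines changed” item, a crude heuristic already catches one-line messages on thousand-line changes. A possible sketch, with entirely invented thresholds:

```python
def message_score(commit_message: str, files_changed: int, lines_changed: int) -> str:
    """Rough check that the explanation grows with the size of the change."""
    words = len(commit_message.split())
    # Invented expectation: ~1 word of explanation per 20 changed lines, plus a floor.
    expected = 10 + lines_changed / 20 + files_changed
    if words >= expected:
        return "OK"
    if words >= expected / 2:
        return "WARN: commit message looks thin for a change this size"
    return "FAIL: please describe what changed and why"

# Example: a 500-line, 12-file change with a 6-word message comes back as FAIL.
```
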
  23. Call for Sponsors and Test Pilots ➢ Try it out in your own Gerrit Review/Jenkins environment ➢ Sponsor development into a full-blown, feature-rich, configurable “Jenkins ReviewBuddy” plugin ➢ Catch me in the break
