Code Review tool for personal effectiveness and waste analysis

It is usually hard to analyze personal effectiveness and detect waste in the development process, because a developer's work decomposition is not transparent or available for analysis. As an example of an ineffective process, imagine a developer who spends one day implementing a task and then reimplements it several times over the next two days in response to code review notes. Or another developer who waits two days for code review, switches context to other tasks, finally gets the notes, and switches back to the initial task, trying to refresh all the details in their head. And so on and so forth…

Using a code review tool helps to aggregate a lot of useful information about any code change at every stage (static analysis, code review, rework, acceptance, integration into the main branch). In this talk I'm going to demonstrate how this information can be used for detailed analysis of development effectiveness and for waste detection. Based on this analysis you can implement many improvements to your development process and then measure their success.

  1. Code Review tool for personal effectiveness and waste analysis (Mikalai Alimenkou, @xpinjection, http://xpinjection.com)
  2. Disclaimer: this talk is based on personal experience
  3. Personal productivity? Are you joking?
  4. Developers are different…
  5. 10x developers are NOT a myth
  6. Team Lead is responsible for the team
  7. Metrics help to understand reality
  8. Code Review practices in a large team (see the sketch after this list):
     • Local quality gates (coding standards in the IDE, unit/integration tests for the module, smoke UI tests)
     • Pre-commit quality gates (static code analysis, all unit/integration tests)
     • Internal team code review
     • Cross-team code review
     • Whole CI/CD pipeline after integration
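
To make the "local quality gates" and "pre-commit quality gates" items concrete, here is a minimal sketch of a Git pre-commit hook written in Python. The talk does not prescribe specific tools; flake8, pytest, and the paths below are assumptions for illustration only.

```python
#!/usr/bin/env python3
"""Minimal local quality gate, installed as .git/hooks/pre-commit.

Assumes a Python project with flake8 and pytest on the PATH; the tool
names and paths are illustrative, not prescribed by the talk.
"""
import subprocess
import sys

CHECKS = [
    ["flake8", "src"],               # coding standards check
    ["pytest", "tests/unit", "-q"],  # fast unit tests for the module
]


def main() -> int:
    for command in CHECKS:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Quality gate failed: {' '.join(command)}", file=sys.stderr)
            return result.returncode  # non-zero exit blocks the commit
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Installed as .git/hooks/pre-commit and made executable, the hook blocks any commit for which a check exits with a non-zero status, so obvious issues never reach the review stage.
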
  9. Typical wastes in software development
  10. How to map wastes? (a detection sketch follows this list)
     • WAITING = MR waiting for review or merge
     • TRANSPORTATION = MR review cycles
     • MOTION = repeated notes in several MRs for the same team member
     • INVENTORY = MR waiting for merge
     • OVERPRODUCTION = MR that could not be merged
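
Part of this mapping could be automated; below is a hedged sketch of such a classifier. The field names and thresholds are my own simplifications (MOTION, which requires comparing review notes across MRs, is left out), not values from the talk.

```python
from datetime import datetime

# Illustrative thresholds; a real team would derive these from its own baseline.
MAX_WAIT_HOURS = 24
MAX_REVIEW_CYCLES = 2

TIME_FORMAT = "%Y-%m-%d %H:%M:%S"


def detect_wastes(mr: dict) -> list:
    """Map a single merge request record onto lean waste categories.

    Assumed keys (a flattened example, not a real tool's schema):
      created, submitted -- "YYYY-MM-DD HH:MM:SS" timestamps
      review_cycles      -- number of review/rework iterations
      status             -- "MERGED" or "ABANDONED"
    """
    wastes = []
    created = datetime.strptime(mr["created"], TIME_FORMAT)

    if mr.get("submitted"):
        waiting = datetime.strptime(mr["submitted"], TIME_FORMAT) - created
        if waiting.total_seconds() / 3600 > MAX_WAIT_HOURS:
            wastes.append("WAITING/INVENTORY: MR hung before merge")

    if mr.get("review_cycles", 0) > MAX_REVIEW_CYCLES:
        wastes.append("TRANSPORTATION: too many review cycles")

    if mr.get("status") == "ABANDONED":
        wastes.append("OVERPRODUCTION: MR could not be merged")

    return wastes


# Example: a change that waited two days and went through three review cycles.
print(detect_wastes({
    "created": "2015-06-01 10:00:00",
    "submitted": "2015-06-03 09:30:00",
    "review_cycles": 3,
    "status": "MERGED",
}))
```
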
  11. Code Review tools
  12. Gerrit dashboard with all changes
  13. Metrics samples (a computation sketch follows this list):
     • How many MRs does a developer produce during a week? (task decomposition)
     • How long does an MR wait for merge? (risks on merge, deferred value)
     • How long does an MR hang in review? (lost focus, risks on merge)
     • Does the MR break quality gates, and how often? (waste of time and infrastructure capacity)
     • What is the MR size? (complex and less focused code review)
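
Here is a sketch of how the first, second, and last of these metrics might be computed, assuming merged changes have already been fetched from Gerrit's change query as a list of JSON objects carrying the usual created, submitted, owner, insertions, and deletions fields. The aggregation choices (ISO week, creation-to-submission as "waiting for merge") are simplifications of mine, not definitions from the talk.

```python
from collections import defaultdict
from datetime import datetime

TIME_FORMAT = "%Y-%m-%d %H:%M:%S"


def parse_ts(value: str) -> datetime:
    # Gerrit timestamps look like "2015-06-01 10:15:30.000000000"; keep seconds only.
    return datetime.strptime(value[:19], TIME_FORMAT)


def weekly_mr_counts(changes: list) -> dict:
    """MRs per developer per ISO week -- a rough proxy for task decomposition."""
    counts = defaultdict(int)
    for change in changes:
        owner = change.get("owner", {}).get("name", "unknown")
        year, week, _ = parse_ts(change["created"]).isocalendar()
        counts[(owner, year, week)] += 1
    return dict(counts)


def merge_wait_hours(change: dict) -> float:
    """Rough proxy for 'waiting for merge': hours from creation to submission."""
    delta = parse_ts(change["submitted"]) - parse_ts(change["created"])
    return delta.total_seconds() / 3600


def mr_size(change: dict) -> int:
    """Total changed lines; large MRs make review less focused."""
    return change.get("insertions", 0) + change.get("deletions", 0)
```
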
  14. Gerrit merge request view
  15. Metrics samples, continued (see the sketch after this list):
     • How complex is the MR? (risks on merge, long review cycle)
     • How many people are involved in the code review? (waste of time)
     • How many negative grades did it receive? (low quality of local review)
     • Was the MR reverted because of CI/CD issues? (waste of team time and infrastructure capacity)
     • Was the MR locally tested with smoke UI tests? (deferred issues in the CI/CD pipeline)
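
For the reviewer and grade metrics, a sketch over a single change fetched with detailed label information; it assumes the project uses Gerrit's default Code-Review label and the usual Revert "..." commit subject convention, so treat both as assumptions rather than guarantees.

```python
def reviewers_involved(change: dict) -> int:
    """Distinct accounts that cast a Code-Review vote on the change."""
    votes = change.get("labels", {}).get("Code-Review", {}).get("all", [])
    return len({vote.get("_account_id") for vote in votes if vote.get("value")})


def negative_grades(change: dict) -> int:
    """Count of -1/-2 votes -- a hint that local review quality was low."""
    votes = change.get("labels", {}).get("Code-Review", {}).get("all", [])
    return sum(1 for vote in votes if vote.get("value", 0) < 0)


def was_reverted(change: dict, all_changes: list) -> bool:
    """Very rough heuristic: some later change reverts this one by subject."""
    reverted_subject = 'Revert "%s"' % change.get("subject", "")
    return any(other.get("subject", "").startswith(reverted_subject)
               for other in all_changes)
```
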
  16. Gerrit merge request history
  17. Metrics samples, continued (a revision-count sketch follows this list):
     • How many revisions does the MR have? (waste of time, delays)
     • What issues does the MR hit on quality gates? (lack of local tool configuration and local runs)
     • Does the MR have issues similar to previous MRs? (waste of time, lack of knowledge growth)
     • How serious are the issues found in the MR? (lack of design, repeated full development cycle)
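
Revision counts can come from the same change JSON when it is fetched with all revisions included; the max_revisions threshold below is an invented example, not a number from the talk.

```python
def revision_count(change: dict) -> int:
    """Number of patch sets uploaded for the change (length of the rework loop)."""
    return len(change.get("revisions", {}))


def rework_heavy_changes(changes: list, max_revisions: int = 3) -> list:
    """Subjects of changes whose rework loop exceeded an (assumed) acceptable limit."""
    return [change.get("subject", "?")
            for change in changes
            if revision_count(change) > max_revisions]
```
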
  18. Task complexity is taken into account
  19. Developers have different focus factors
  20. Sample analysis in a mixed team
  21. Summary:
     • Be careful with metrics
     • Don't publish them, to avoid competition
     • Link personal metrics with common team values and goals
     • Some metrics cannot be automated
     • Look at trends, not single values
     • Ask the right questions
     • Use coaching to promote improvements
  22. Contacts: @xpinjection, http://xpinjection.com, mikalai.alimenkou@xpinjection.com
