Adopting code reviews for agile software development

Mario Bernhart, Andreas Mauczka, Thomas Grechenig
Research Group for Industrial Software (INSO)
Vienna University of Technology, Austria
www.inso.tuwien.ac.at
Introduction
- Code reviews have many benefits, most importantly finding bugs early [1, 2, 3].
- It is commonly accepted that code reviews are resource-intensive [4].
- Due to a lack of time and resources, many (agile) software development teams do not perform traditional code reviews [5].
- Code reviews support knowledge sharing [18] and collective code ownership [22].
Code reviews in agile environments
- Formal reviews such as IEEE 1028 [2] tend to be too heavyweight in agile contexts [4, 5].
- There is growing interest in code reviews and review tools in the agile community [24].
- An empirical study [25] proposes a combination of pair programming and code reviews.
Q: How can traditional code reviews be adopted to support reviews in an agile environment?
Industrial case: context & problem
- EHR exchange service for hospitals
- Agile software development project: 24 months, team of 7, sprint length of 4 weeks
- Code reviews planned for 4 days at the end of each sprint: 1 day inspection, 2 days rework, 1 day re-inspection
- Reviews were mostly challenged or skipped due to a lack of time and resources, especially at the end of the sprint
Adoption strategy
- Make reviews a continuous task
- Make each review task smaller
- Create review tasks automatically, based on individual team rules
- Make reviews asynchronous and support distributed reviewing
- Provide first-class IDE integration for developers
The result: Continuous Changeset-Based Review (CCBR)
CCBR description
- Setup = a list of (author, reviewer, filter) rules, e.g. (Peter, Mike, *) + (*, Chris, *test*.java) + ... (see the sketch below)
- Continuous review workflow:
  1. The developer commits to the SCM.
  2. If a rule matches, a review task is auto-created.
  3. The review is executed using the compare view against the previous version of the changeset.
  4. The review result is consumed by the author.
  5. A (negative) review result is followed up and may create a corrective task for the author.
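To make the rule-based task creation concrete, here is a minimal Java sketch of how a list of (author, reviewer, filter) rules could be matched against an incoming commit. All names (ReviewRule, ReviewTaskFactory, onCommit) are hypothetical illustrations, not ReviewClipse's actual API; "*" is treated as a wildcard author and the filter as a glob over changed file names.

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.util.List;

// One CCBR assignment rule: (author, reviewer, filter).
final class ReviewRule {
    final String author;      // committer name; "*" matches anyone
    final String reviewer;    // who receives the review task
    final PathMatcher filter; // glob over changed file names, e.g. "*test*.java"

    ReviewRule(String author, String reviewer, String glob) {
        this.author = author;
        this.reviewer = reviewer;
        this.filter = FileSystems.getDefault().getPathMatcher("glob:" + glob);
    }

    boolean matches(String committer, Path changedFile) {
        return ("*".equals(author) || author.equals(committer))
                && filter.matches(changedFile.getFileName());
    }
}

final class ReviewTaskFactory {
    // The two example rules from above: (Peter, Mike, *) and (*, Chris, *test*.java).
    private final List<ReviewRule> rules = List.of(
            new ReviewRule("Peter", "Mike", "*"),
            new ReviewRule("*", "Chris", "*test*.java"));

    // Would be driven by an SCM post-commit notification.
    void onCommit(String committer, long revision, List<Path> changedFiles) {
        for (ReviewRule rule : rules) {
            if (changedFiles.stream().anyMatch(f -> rule.matches(committer, f))) {
                // The real tool would create a review task for rule.reviewer here.
                System.out.printf("review task: changeset r%d by %s -> reviewer %s%n",
                        revision, committer, rule.reviewer);
            }
        }
    }
}
```

For example, onCommit("Peter", 4711, List.of(Path.of("src/parser_tests.java"))) would create one task for Mike (first rule) and one for Chris (the file name matches *test*.java).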
CCBR attributes
- Process coherence:
  - The review is done shortly after the development.
  - Rework is done shortly after the original work.
  - "It's All about Feedback and Change" [23]
- Information coherence:
  - Changes within one timeframe often correlate [15].
  - Task size is small, typically <200 LOC per review task.
CCBR vs. traditional reviews (TR)
[Figure: two effort timelines over time t. TR: development effort followed by a single inspection, re-work, and re-inspection phase at the end. CCBR: small inspection, re-work, and re-inspection efforts interleaved continuously with the development effort.]
Limitations of CCBR
- TR sees the "whole" whereas CCBR only refers to a "small piece".
- TR focuses on a stable state, CCBR on the creation process.
- The sum of changed lines in TR < CCBR: one line of code may be changed more than once, so a line edited in, say, three consecutive changesets is reviewed three times under CCBR but only once under TR.
- Pre-commit reviews (TR) vs. post-commit reviews (CCBR).
Review tool for CCBR: ReviewClipse
- Eclipse-integrated, easy-to-use UI
- Review scope = one changeset in the SVN repository
- No additional server (XML/SVN based; see the sketch below)
- Works offline
- Works with Subversive and Subclipse
- Flexible, filtered review assignments
- Creates Mylyn tasks out of (bad) reviews
- Freeware, 10,000+ downloads
www.inso.tuwien.ac.at/projects/reviewclipse/
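As an illustration of the "no additional server" design, the following Java sketch stores a review verdict as a small XML file in the working copy, so that an ordinary svn commit distributes it to the whole team. The file layout, element names, and the ReviewStore class are assumptions for illustration; ReviewClipse's actual persistence format may differ.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Serverless review storage: verdicts are plain XML files versioned in SVN
// alongside the code. All names and the layout below are illustrative only.
final class ReviewStore {

    static String toXml(long revision, String reviewer, boolean ok, String comment) {
        // Note: the comment text is not XML-escaped in this sketch.
        return String.format(
                "<review revision=\"%d\" reviewer=\"%s\" verdict=\"%s\">%n"
              + "  <comment>%s</comment>%n"
              + "</review>%n",
                revision, reviewer, ok ? "OK" : "NEEDS_REWORK", comment);
    }

    // Writes the record into the working copy; "svn commit" then shares it.
    static void save(Path workingCopy, long revision, String reviewer,
                     boolean ok, String comment) throws IOException {
        Path file = workingCopy.resolve("reviews").resolve("r" + revision + ".xml");
        Files.createDirectories(file.getParent());
        Files.writeString(file, toXml(revision, reviewer, ok, comment));
    }
}
```

Because both the changeset and its review travel through the same repository, no review server is required and reviews can be written or read while offline.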
ReviewClipse Screenshot
ReviewClipse Screenshot
Planned empirical evaluation
Research questions:
1. Is CCBR as effective as TR?
2. Is CCBR as efficient as TR?
3. Is the rework effort of CCBR lower than that of TR?
Plan:
- Prestudy: 3/3 subjects for study A and 6 subjects for study B
- Study A: in-parallel study with 30/30 subjects (targets questions 1 + 2)
- Study B: in-serial study with 60 subjects (targets question 3)
Future work: Mylyn Reviews
Mylyn Reviews is a task-based code review plugin that integrates with Mylyn.
Main features:
- Do code reviews based on Mylyn tasks
- Incorporate the review tasks into the Mylyn task list
- Provide inline commenting for source code reviews
- Framework for other review tool integrations
www.eclipse.org/reviews
References
[1] M. Fagan, "Design and code inspections to reduce errors in program development," IBM Systems Journal, vol. 15, no. 3, pp. 182–211, 1976.
[2] IEEE Standard for Software Reviews and Audits, IEEE Std 1028-2008, pp. 1–52, Aug. 2008.
[3] R. Baker, "Code reviews enhance software quality," in Proceedings of the 19th International Conference on Software Engineering, pp. 570–571, May 1997.
[4] M. Ciolkowski, O. Laitenberger, and S. Biffl, "Software reviews, the state of the practice," IEEE Software, vol. 20, no. 6, pp. 46–51, Nov.–Dec. 2003.
[5] L. Harjumaa, I. Tervonen, and A. Huttunen, "Peer reviews in real life - motivators and demotivators," in Fifth International Conference on Quality Software, pp. 29–36, Sept. 2005.
[6] J. Remillard, "Source code review systems," IEEE Software, vol. 22, no. 1, pp. 74–77, Jan.–Feb. 2005.
[7] B. Meyer, "Design and code reviews in the age of the internet," Communications of the ACM, vol. 51, no. 9, pp. 66–71, 2008.
[8] M. Bernhart, C. Mayerhofer, and T. Grechenig, "ReviewClipse - Supporting Code-Reviews within the Eclipse IDE," talk at EclipseCon 2009.
[9] M. Stein, J. Riedl, S. J. Harner, and V. Mashayekhi, "A case study of distributed, asynchronous software inspection," in ICSE '97: Proceedings of the 19th International Conference on Software Engineering, New York, NY, USA, pp. 107–117, ACM, 1997.
[10] L.-T. Cheng, C. R. de Souza, S. Hupfer, J. Patterson, and S. Ross, "Building collaboration into IDEs," Queue, vol. 1, no. 9, pp. 40–50, 2004.
[11] C. R. Prause and S. Apelt, "An approach for continuous inspection of source code," in Proceedings of the 6th International Workshop on Software Quality, New York, NY, USA, pp. 17–22, ACM, 2008.
[12] S. P. Berczuk, Software Configuration Management Patterns: Effective Teamwork, Practical Integration. Addison-Wesley, 2003.
[13] M. Kersten and G. C. Murphy, "Using task context to improve programmer productivity," in Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT '06/FSE-14), Portland, Oregon, USA, pp. 1–11, ACM, Nov. 2006.
[14] B. W. Boehm, Software Engineering Economics. Prentice-Hall, 1981.
[15] T. Zimmermann, P. Weißgerber, S. Diehl, and A. Zeller, "Mining version histories to guide software changes," in Proceedings of the 26th International Conference on Software Engineering (ICSE), Edinburgh, UK, May 2004.
[16] A. Porter, H. Siy, A. Mockus, and L. Votta, "Understanding the sources of variation in software inspections," ACM Transactions on Software Engineering and Methodology, vol. 7, no. 1, pp. 41–79, 1998.
[17] M. Stein, J. Riedl, S. J. Harner, and V. Mashayekhi, "A case study of distributed, asynchronous software inspection," in ICSE '97: Proceedings of the 19th International Conference on Software Engineering, New York, NY, USA, pp. 107–117, ACM, 1997.
[18] T. Gilb, D. Graham, and S. Finzi, Software Inspection. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1993.
[19] C. Mayerhofer, "Design and Development of a Module for Change-Set Based Code Reviews in IDEs," Master's thesis, Vienna University of Technology, 2009.
[20] Y. Mizuno, "Software quality improvement," Computer, vol. 16, pp. 66–72, March 1983.
[21] M. Bernhart, C. Mayerhofer, and T. Grechenig, "ReviewClipse@Class - Using a Lightweight Continuous Software Reviewing Tool in Undergraduate Software Development Education," to appear in Proceedings of the 23rd Annual IEEE-CS Conference on Software Engineering Education and Training.
[22] M. E. Nordberg III, "Managing Code Ownership," IEEE Software, pp. 26–33, March/April 2003.
[23] L. Williams and A. Cockburn, "Agile Software Development: It's All about Feedback and Change," IEEE Computer, vol. 36, no. 6, pp. 83–85, June 2003.
[24] W. Seliga and S. Ginter, "Effective code reviews in agile teams," talk at Agile 2009.
[25] D. Winkler and S. Biffl, "An empirical study on design quality improvement from best-practice inspection and pair programming," in Product-Focused Software Process Improvement, pp. 319–333, 2006.
[26] P. Grünbacher, M. Halling, and S. Biffl, "An empirical study on groupware support for software inspection meetings," in Proceedings of the International Conference on Automated Software Engineering, 2003.
