Improving Code Review Effectiveness
through Reviewer Recommendations
Patanamon Thongtanunam* and
Raula Gaikovina Kula†, Ana Erika Camargo Cruz*,
Norihiro Yoshida*, Hajimu Iida*
*Nara Institute of Science and Technology (NAIST)
†Osaka University, Japan
2/6/2014
Improving Code Review Effectiveness through Reviewer Recommendations (CHASE 2014)
1
Introduction
• Code Review: a source code inspection performed by
developers other than the author
•  Supporting tools: Gerrit, ReviewBoard, etc.
•  A patch reviewed by developers with related knowledge is
likely to have higher quality and fewer defects
[Diagram: an author submits a new patch (a software contribution), a reviewer inspects it, and the patch then enters the project’s code repository]
Reviewer Assignment Problem
• It is difficult and time-consuming to find appropriate
reviewers in large-scale projects
“Who is appropriate to review my code?”
e.g., globally distributed software development
Previous Work
• Review Bot’s algorithm [1]
•  Built for an industrial setting at VMware
•  Selects reviewers who have previously reviewed the same
sections of the code change
•  Requires a long history of code changes in code review
[1] V. Balachandran, “Reducing Human Effort and Improving Quality in Peer Code Reviews using Automatic Static Analysis and Reviewer Recommendation,” in
Proc. ICSE’13, 2013, pp. 931–940.
[Diagram: past reviews R1, R2, R3; for a new review R4 touching the same sections of the code change, reviewer candidates = Reviewers([R3, R1])]
Reviewer Recommendation Algorithm
• We propose the File Path Similarity (FPS) algorithm
• It selects reviewers by computing the similarity between the file
paths of a new review and the file paths of previous reviews
•  Files with similar functionality are
usually located under the same
or nearby directories [2]
•  Reviewers with similar
knowledge of the system would
have reviewed similar files
[2] Ivan T. Bowman, Richard C. Holt, and Neil V. Brewster. Linux as a case study: Its extracted software architecture. In Proceedings ICSE ’99, pp. 555–563, 1999.
Example of FPS algorithm

Review history (past reviews):
• R1 — files: video/resource/a.xml; reviewers: A, B
• R2 — files: video/src/x.java, video/src/y.java; reviewer: A
• R3 — the new review; files: video/src/a.java, video/src/b.java; reviewers to be recommended

Step 1: Compute the FPS score of the new review R3 against each past review.
Step 2: Propagate the scores of past reviews to their reviewers:
• Score(A) = FPS(R3, R1) + FPS(R3, R2) = 0.1 + 0.5 = 0.6
• Score(B) = FPS(R3, R1) = 0.1
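The propagation step in the example can be sketched as a small ranking routine. The FPS values below are taken directly from the slide’s example; the function and variable names are illustrative, not from the paper:

```python
from collections import defaultdict

# FPS scores of the new review R3 against each past review (values from the slide).
fps_scores = {"R1": 0.1, "R2": 0.5}
# Reviewers who participated in each past review.
reviewers = {"R1": ["A", "B"], "R2": ["A"]}

def rank_candidates(fps_scores, reviewers):
    """Sum each past review's FPS score into its reviewers, then sort descending."""
    scores = defaultdict(float)
    for review, score in fps_scores.items():
        for person in reviewers[review]:
            scores[person] += score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# rank_candidates(fps_scores, reviewers) ranks A (0.6) above B (0.1)
```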
FPS score
•  File Path Similarity score of a new review Rn against a past review Rp
•  Similarity function:
•  commonPath(fn, fp): a longest-common-prefix function that returns the number of
directory/file names appearing from the beginning of both file paths
e.g., commonPath(“video/src/a.java”, “video/src/b.java”) = 2
The calculation of the FPS function is described in Equation
1. It computes the score of a past review (Rp) as the
average similarity of every file in Rp (fp) compared with
every file in Rn (fn). The Files function returns the set of file
paths of the input review. The Similarity(fn, fp) function
measures the similarity between fn and fp, using Equation 2.
The averaged similarity score is prioritized by m and the δ value.
As in the time prioritization of the Review Bot’s algorithm,
the parameter δ is a time prioritization factor ranging over (0, 1].
When δ = 1, time prioritization is not considered.

FPS(Rn, Rp, m) = ( Σ_{fn ∈ Files(Rn), fp ∈ Files(Rp)} Similarity(fn, fp) / ( |Files(Rn)| × |Files(Rp)| ) ) × δ^m    (1)

Similarity(fn, fp) = commonPath(fn, fp) / max(Length(fn), Length(fp))    (2)

In Equation 2, the commonPath(fn, fp) function counts
the number of common directory and/or file names that appear
in both file paths from the beginning. This count is
based on the assumption that files under the same directory
have similar functionality. Thus, the first directories of the
file paths are compared first; then the remaining components
of the file paths are compared in order. For example, suppose
fn is /src/camera/video/a.java and fp is /src/camera/photo/a.java.
The common path is /src/camera, and commonPath(fn, fp) returns 2.
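Equations 1 and 2 can be sketched in code as follows. This is a minimal illustration assuming paths are split on “/” and that Length counts path components; the function names are hypothetical:

```python
def common_path(fn, fp):
    """Count directory/file names shared from the beginning of both paths."""
    a = fn.strip("/").split("/")
    b = fp.strip("/").split("/")
    count = 0
    for x, y in zip(a, b):
        if x != y:
            break
        count += 1
    return count

def similarity(fn, fp):
    """Equation 2: common prefix length over the longer path's component count."""
    length_fn = len(fn.strip("/").split("/"))
    length_fp = len(fp.strip("/").split("/"))
    return common_path(fn, fp) / max(length_fn, length_fp)

def fps(rn_files, rp_files, m, delta=1.0):
    """Equation 1: average pairwise similarity, weighted by delta ** m.

    With delta = 1 the weight is 1, i.e. time prioritization is disabled,
    matching the text above.
    """
    total = sum(similarity(fn, fp) for fn in rn_files for fp in rp_files)
    return total / (len(rn_files) * len(rp_files)) * delta ** m

# common_path("video/src/a.java", "video/src/b.java") == 2, as in the paper's example
```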
Evaluation
• Accuracy (the same measure as the Review Bot study):

  Top-k Accuracy = Number of correct top-k recommendations / Total number of reviews

• Study projects, all using Gerrit code review:

  Project    | Study Period*                   | # of Reviews | # of Files
  AOSP       | Oct 2008 – Jan 2012 (~2 years)  | 5,126        | 26,840
  OpenStack  | Jul 2011 – May 2012 (~1 year)   | 6,586        | 16,953
  Qt         | May 2011 – May 2012 (~1 year)   | 23,810       | 78,401

*The study period starts from the first year of using peer code review.
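The Top-k accuracy above can be sketched as follows. The criterion that a recommendation counts as “correct” when any of the top-k candidates actually reviewed the patch is an assumption for illustration:

```python
def top_k_accuracy(recommendations, actual_reviewers, k):
    """Fraction of reviews whose top-k recommended list hits an actual reviewer.

    recommendations: list of ranked candidate lists, one per review.
    actual_reviewers: list of sets of reviewers who actually reviewed each patch.
    """
    correct = sum(
        1
        for ranked, actual in zip(recommendations, actual_reviewers)
        if any(candidate in actual for candidate in ranked[:k])
    )
    return correct / len(recommendations)

# Two reviews: the first is hit within the top 2, the second is missed.
```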
Results: Top-5 accuracy (%)

                          AOSP    OpenStack   Qt
  FPS algorithm           77.12   77.97       36.86
  Review Bot’s algorithm  29.30   38.78       27.18
• The FPS algorithm significantly outperformed Review Bot’s algorithm
Summary
Table 2: Top-k accuracy of FPS algorithm and Review Bot’s algorithm with different time prioritization factors (δ)

                     FPS Algorithm (δ =)                           Review Bot’s Algorithm (δ =)
Project    Top-k     1        0.8      0.6      0.4      0.2       1        0.8      0.6      0.4      0.2
AOSP       Top-1     44.50 %  41.81 %  40.50 %  39.21 %  38.29 %   21.30 %  21.38 %  21.63 %  21.71 %  21.77 %
           Top-3     69.84 %  67.66 %  66.56 %  64.83 %  63.38 %   29.15 %  29.17 %  29.17 %  29.20 %  29.20 %
           Top-5     77.12 %  75.30 %  74.17 %  72.36 %  70.80 %   29.30 %  29.30 %  29.30 %  29.30 %  29.30 %
OpenStack  Top-1     38.32 %  36.47 %  35.18 %  34.73 %  34.04 %   22.94 %  23.23 %  23.19 %  23.20 %  23.19 %
           Top-3     67.57 %  63.09 %  62.45 %  62.10 %  61.66 %   35.76 %  35.76 %  35.68 %  35.55 %  35.53 %
           Top-5     77.97 %  73.28 %  72.85 %  72.62 %  71.71 %   38.78 %  38.90 %  38.82 %  38.89 %  38.89 %
Qt         Top-1     13.02 %  11.64 %  10.21 %   9.45 %   8.88 %   18.64 %  18.70 %  18.72 %  18.71 %  18.75 %
           Top-3     28.82 %  21.39 %  20.15 %  19.27 %  18.46 %   26.18 %  26.19 %  26.16 %  26.16 %  26.16 %
           Top-5     36.86 %  27.25 %  26.07 %  25.34 %  24.36 %   27.18 %  27.18 %  27.17 %  27.19 %  27.19 %
[Figure 2: Top-5 accuracy of FPS algorithm and Review Bot’s algorithm with time prioritization factor (δ = 1); line charts of accuracy (%) against number of reviews (N) for (a) AOSP, (b) OpenStack, and (c) Qt]
5. CONCLUSION & FUTURE WORK
In this study, we have proposed a recommendation algorithm
using file path similarity for the modern peer code review
process. The results indicate that our proposed FPS algorithm
effectively recommends reviewers in two of the three
OSS projects. The algorithm also significantly outperforms
the existing algorithm (i.e., Review Bot’s algorithm) in these
projects. Additionally, we found that the use of time prioritization
was, surprisingly, not appropriate for the recommendation
algorithms in distributed project environments.
Our future work will concentrate on exploring more insights
into the projects, especially large-scale projects, to improve the
algorithm. Other impact factors, such as directory structure,
will also be investigated. At the same time, we will
consider ways to balance the workload of reviewers to help
reviewers and reduce the number of awaiting reviews.
ACKNOWLEDGMENTS
We are thankful to Dr. Mike Barker from NAIST for his
valuable suggestions and discussions.
6. REFERENCES
[1] A. Aurum, H. Petersson, and C. Wohlin.
State-of-the-art: software inspections after 25 years.
Software Testing, Verification and Reliability,
[3] V. Balachandran. Reducing Human Effort and
Improving Quality in Peer Code Reviews using
Automatic Static Analysis and Reviewer
Recommendation. In Proc. ICSE ’13, pages 931–940, 2013.
[4] E. T. Barr, C. Bird, P. C. Rigby, A. Hindle, D. M.
German, and P. Devanbu. Cohesive and Isolated
Development with Branches. In Proc. FASE ’12, pages 316–331, 2012.
[5] K. Hamasaki, R. G. Kula, N. Yoshida, A. E. Camargo Cruz,
K. Fujiwara, and H. Iida. Who does what during a
Code Review? An extraction of an OSS Peer Review
Repository. In Proc. MSR ’13, pages 49–52, 2013.
[6] E. Kocaguneli, T. Zimmermann, C. Bird, N. Nagappan,
and T. Menzies. Distributed development considered
harmful? In Proc. ICSE ’13, pages 882–890, 2013.
[7] A. Mockus and J. Herbsleb. Expertise Browser: a
quantitative approach to identifying expertise. In Proc.
ICSE ’02, pages 503–512, 2002.
[8] N. Ramasubbu and R. K. Balan. Globally Distributed
Software Development Project Performance: An
Empirical Analysis. In Proc. ESEC/FSE ’07, pages 125–134, 2007.
[9] P. C. Rigby and M.-A. Storey. Understanding
broadcast based peer review on open source software
More Related Content

Similar to Improving Code Review Effectiveness Through Reviewer Recommendations

Software Engineering
Software EngineeringSoftware Engineering
Software Engineeringpoonam.rwalia
 
Partitioned Based Regression Verification
Partitioned Based Regression VerificationPartitioned Based Regression Verification
Partitioned Based Regression VerificationAung Thu Rha Hein
 
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachSearch Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachAlessandro Benedetti
 
Ranking The Refactoring Techniques Based on The External Quality Attributes
Ranking The Refactoring Techniques Based on The External Quality AttributesRanking The Refactoring Techniques Based on The External Quality Attributes
Ranking The Refactoring Techniques Based on The External Quality AttributesIJRES Journal
 
Search Quality Evaluation to Help Reproducibility : an Open Source Approach
Search Quality Evaluation to Help Reproducibility : an Open Source ApproachSearch Quality Evaluation to Help Reproducibility : an Open Source Approach
Search Quality Evaluation to Help Reproducibility : an Open Source ApproachAlessandro Benedetti
 
Summarizing Software API Usage Examples Using Clustering Techniques
Summarizing Software API Usage Examples Using Clustering TechniquesSummarizing Software API Usage Examples Using Clustering Techniques
Summarizing Software API Usage Examples Using Clustering TechniquesNikos Katirtzis
 
Pro smartbooksquestions
Pro smartbooksquestionsPro smartbooksquestions
Pro smartbooksquestionsyoummr
 
Reducing Redundancies in Multi-Revision Code Analysis
Reducing Redundancies in Multi-Revision Code AnalysisReducing Redundancies in Multi-Revision Code Analysis
Reducing Redundancies in Multi-Revision Code AnalysisSebastiano Panichella
 
Scaling Application on High Performance Computing Clusters and Analysis of th...
Scaling Application on High Performance Computing Clusters and Analysis of th...Scaling Application on High Performance Computing Clusters and Analysis of th...
Scaling Application on High Performance Computing Clusters and Analysis of th...Rusif Eyvazli
 
A report on designing a model for improving CPU Scheduling by using Machine L...
A report on designing a model for improving CPU Scheduling by using Machine L...A report on designing a model for improving CPU Scheduling by using Machine L...
A report on designing a model for improving CPU Scheduling by using Machine L...MuskanRath1
 
Program Performance Analysis Toolkit Adaptor
Program Performance Analysis Toolkit AdaptorProgram Performance Analysis Toolkit Adaptor
Program Performance Analysis Toolkit AdaptorMichael Pankov
 
Learning to Rank Relevant Files for Bug Reports using Domain Knowledge
Learning to Rank Relevant Files for Bug Reports using Domain KnowledgeLearning to Rank Relevant Files for Bug Reports using Domain Knowledge
Learning to Rank Relevant Files for Bug Reports using Domain KnowledgeXin Ye
 
Pertemuan 5.pptx
Pertemuan 5.pptxPertemuan 5.pptx
Pertemuan 5.pptxBenjaminS13
 

Similar to Improving Code Review Effectiveness Through Reviewer Recommendations (20)

Software Engineering
Software EngineeringSoftware Engineering
Software Engineering
 
Partitioned Based Regression Verification
Partitioned Based Regression VerificationPartitioned Based Regression Verification
Partitioned Based Regression Verification
 
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachSearch Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
 
Ranking The Refactoring Techniques Based on The External Quality Attributes
Ranking The Refactoring Techniques Based on The External Quality AttributesRanking The Refactoring Techniques Based on The External Quality Attributes
Ranking The Refactoring Techniques Based on The External Quality Attributes
 
Search Quality Evaluation to Help Reproducibility : an Open Source Approach
Search Quality Evaluation to Help Reproducibility : an Open Source ApproachSearch Quality Evaluation to Help Reproducibility : an Open Source Approach
Search Quality Evaluation to Help Reproducibility : an Open Source Approach
 
Summarizing Software API Usage Examples Using Clustering Techniques
Summarizing Software API Usage Examples Using Clustering TechniquesSummarizing Software API Usage Examples Using Clustering Techniques
Summarizing Software API Usage Examples Using Clustering Techniques
 
BIRTE-13-Kawashima
BIRTE-13-KawashimaBIRTE-13-Kawashima
BIRTE-13-Kawashima
 
Pro smartbooksquestions
Pro smartbooksquestionsPro smartbooksquestions
Pro smartbooksquestions
 
Reducing Redundancies in Multi-Revision Code Analysis
Reducing Redundancies in Multi-Revision Code AnalysisReducing Redundancies in Multi-Revision Code Analysis
Reducing Redundancies in Multi-Revision Code Analysis
 
Who Should Review My Code?
Who Should Review My Code?  Who Should Review My Code?
Who Should Review My Code?
 
Scaling Application on High Performance Computing Clusters and Analysis of th...
Scaling Application on High Performance Computing Clusters and Analysis of th...Scaling Application on High Performance Computing Clusters and Analysis of th...
Scaling Application on High Performance Computing Clusters and Analysis of th...
 
Apsec 2014 Presentation
Apsec 2014 PresentationApsec 2014 Presentation
Apsec 2014 Presentation
 
Cpcs302 1
Cpcs302  1Cpcs302  1
Cpcs302 1
 
A report on designing a model for improving CPU Scheduling by using Machine L...
A report on designing a model for improving CPU Scheduling by using Machine L...A report on designing a model for improving CPU Scheduling by using Machine L...
A report on designing a model for improving CPU Scheduling by using Machine L...
 
CORRECT-ICSE2016
CORRECT-ICSE2016CORRECT-ICSE2016
CORRECT-ICSE2016
 
Program Performance Analysis Toolkit Adaptor
Program Performance Analysis Toolkit AdaptorProgram Performance Analysis Toolkit Adaptor
Program Performance Analysis Toolkit Adaptor
 
Learning to Rank Relevant Files for Bug Reports using Domain Knowledge
Learning to Rank Relevant Files for Bug Reports using Domain KnowledgeLearning to Rank Relevant Files for Bug Reports using Domain Knowledge
Learning to Rank Relevant Files for Bug Reports using Domain Knowledge
 
Pertemuan 5.pptx
Pertemuan 5.pptxPertemuan 5.pptx
Pertemuan 5.pptx
 
Thesis Talk
Thesis TalkThesis Talk
Thesis Talk
 
LEXICAL ANALYZER
LEXICAL ANALYZERLEXICAL ANALYZER
LEXICAL ANALYZER
 

Recently uploaded

Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...
Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...
Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...Delhi Call girls
 
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...Pooja Nehwal
 
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdf
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdfAWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdf
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdfSkillCertProExams
 
If this Giant Must Walk: A Manifesto for a New Nigeria
If this Giant Must Walk: A Manifesto for a New NigeriaIf this Giant Must Walk: A Manifesto for a New Nigeria
If this Giant Must Walk: A Manifesto for a New NigeriaKayode Fayemi
 
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptx
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptxChiulli_Aurora_Oman_Raffaele_Beowulf.pptx
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptxraffaeleoman
 
Dreaming Music Video Treatment _ Project & Portfolio III
Dreaming Music Video Treatment _ Project & Portfolio IIIDreaming Music Video Treatment _ Project & Portfolio III
Dreaming Music Video Treatment _ Project & Portfolio IIINhPhngng3
 
Causes of poverty in France presentation.pptx
Causes of poverty in France presentation.pptxCauses of poverty in France presentation.pptx
Causes of poverty in France presentation.pptxCamilleBoulbin1
 
Dreaming Marissa Sánchez Music Video Treatment
Dreaming Marissa Sánchez Music Video TreatmentDreaming Marissa Sánchez Music Video Treatment
Dreaming Marissa Sánchez Music Video Treatmentnswingard
 
Thirunelveli call girls Tamil escorts 7877702510
Thirunelveli call girls Tamil escorts 7877702510Thirunelveli call girls Tamil escorts 7877702510
Thirunelveli call girls Tamil escorts 7877702510Vipesco
 
Sector 62, Noida Call girls :8448380779 Noida Escorts | 100% verified
Sector 62, Noida Call girls :8448380779 Noida Escorts | 100% verifiedSector 62, Noida Call girls :8448380779 Noida Escorts | 100% verified
Sector 62, Noida Call girls :8448380779 Noida Escorts | 100% verifiedDelhi Call girls
 
Report Writing Webinar Training
Report Writing Webinar TrainingReport Writing Webinar Training
Report Writing Webinar TrainingKylaCullinane
 
The workplace ecosystem of the future 24.4.2024 Fabritius_share ii.pdf
The workplace ecosystem of the future 24.4.2024 Fabritius_share ii.pdfThe workplace ecosystem of the future 24.4.2024 Fabritius_share ii.pdf
The workplace ecosystem of the future 24.4.2024 Fabritius_share ii.pdfSenaatti-kiinteistöt
 
My Presentation "In Your Hands" by Halle Bailey
My Presentation "In Your Hands" by Halle BaileyMy Presentation "In Your Hands" by Halle Bailey
My Presentation "In Your Hands" by Halle Baileyhlharris
 
Uncommon Grace The Autobiography of Isaac Folorunso
Uncommon Grace The Autobiography of Isaac FolorunsoUncommon Grace The Autobiography of Isaac Folorunso
Uncommon Grace The Autobiography of Isaac FolorunsoKayode Fayemi
 
lONG QUESTION ANSWER PAKISTAN STUDIES10.
lONG QUESTION ANSWER PAKISTAN STUDIES10.lONG QUESTION ANSWER PAKISTAN STUDIES10.
lONG QUESTION ANSWER PAKISTAN STUDIES10.lodhisaajjda
 
Digital collaboration with Microsoft 365 as extension of Drupal
Digital collaboration with Microsoft 365 as extension of DrupalDigital collaboration with Microsoft 365 as extension of Drupal
Digital collaboration with Microsoft 365 as extension of DrupalFabian de Rijk
 
Bring back lost lover in USA, Canada ,Uk ,Australia ,London Lost Love Spell C...
Bring back lost lover in USA, Canada ,Uk ,Australia ,London Lost Love Spell C...Bring back lost lover in USA, Canada ,Uk ,Australia ,London Lost Love Spell C...
Bring back lost lover in USA, Canada ,Uk ,Australia ,London Lost Love Spell C...amilabibi1
 

Recently uploaded (18)

Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...
Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...
Busty Desi⚡Call Girls in Sector 51 Noida Escorts >༒8448380779 Escort Service-...
 
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...
Aesthetic Colaba Mumbai Cst Call girls 📞 7738631006 Grant road Call Girls ❤️-...
 
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdf
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdfAWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdf
AWS Data Engineer Associate (DEA-C01) Exam Dumps 2024.pdf
 
If this Giant Must Walk: A Manifesto for a New Nigeria
If this Giant Must Walk: A Manifesto for a New NigeriaIf this Giant Must Walk: A Manifesto for a New Nigeria
If this Giant Must Walk: A Manifesto for a New Nigeria
 
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptx
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptxChiulli_Aurora_Oman_Raffaele_Beowulf.pptx
Chiulli_Aurora_Oman_Raffaele_Beowulf.pptx
 
Dreaming Music Video Treatment _ Project & Portfolio III
Dreaming Music Video Treatment _ Project & Portfolio IIIDreaming Music Video Treatment _ Project & Portfolio III
Dreaming Music Video Treatment _ Project & Portfolio III
 
Causes of poverty in France presentation.pptx
Causes of poverty in France presentation.pptxCauses of poverty in France presentation.pptx
Causes of poverty in France presentation.pptx
 
Dreaming Marissa Sánchez Music Video Treatment
Dreaming Marissa Sánchez Music Video TreatmentDreaming Marissa Sánchez Music Video Treatment
Dreaming Marissa Sánchez Music Video Treatment
 
Thirunelveli call girls Tamil escorts 7877702510
Thirunelveli call girls Tamil escorts 7877702510Thirunelveli call girls Tamil escorts 7877702510
Thirunelveli call girls Tamil escorts 7877702510
 
Sector 62, Noida Call girls :8448380779 Noida Escorts | 100% verified

Improving Code Review Effectiveness Through Reviewer Recommendations

  • 1. Improving Code Review Effectiveness through Reviewer Recommendations Patanamon Thongtanunam* and Raula Gaikovina Kula†, Ana Erika Camargo Cruz*, Norihiro Yoshida*, Hajimu Iida* *Nara Institute of Science and Technology (NAIST) †Osaka University, Japan 2/6/2014 Improving Code Review Effectiveness through Reviewer Recommendations (CHASE 2014) 1
  • 2. Introduction
  • Code Review: a source code inspection performed by developers other than the author
  •  Supporting tools: Gerrit, ReviewBoard, etc.
  •  A patch reviewed by developers with related knowledge will have high quality and low defects
  [Figure: an author submits a new patch (a software contribution) to the project's code repository; which reviewer should inspect it?]
  • 3. Reviewer Assignment Problem
  • It is difficult and time-consuming to find appropriate reviewers in large-scale projects (e.g., globally distributed software development)
  [Figure: "Who is appropriate to review my code?"]
  • 4. Previous Work
  • Review Bot's algorithm [1]
  •  Built for an industrial setting at VMware
  •  Selects reviewers who have reviewed the same section of the code change
  •  Requires a long history of code changes in code review
  [Figure: a new review R4 is matched against past reviews R1-R3 by code change; reviewer candidates = Reviewers([R3, R1])]
  [1] V. Balachandran, "Reducing Human Effort and Improving Quality in Peer Code Reviews using Automatic Static Analysis and Reviewer Recommendation," in Proc. ICSE '13, 2013, pp. 931–940.
  • 5. Reviewer Recommendation Algorithm
  • We propose the File Path Similarity (FPS) algorithm
  • Select reviewers by comparing the file paths of a new patch with the file paths of previous reviews
  •  Files with similar functions are usually located in the same or nearby directories [2]
  •  Reviewers with similar knowledge of the system would have reviewed similar files
  [2] I. T. Bowman, R. C. Holt, and N. V. Brewster, "Linux as a Case Study: Its Extracted Software Architecture," in Proc. ICSE '99, 1999, pp. 555–563.
  • 6. Example of FPS algorithm
  Review history (past reviews):
  - Review R1: files video/resource/a.xml; reviewers A, B
  - Review R2: files video/src/x.java, video/src/y.java; reviewer A
  New review R3: files video/src/a.java, video/src/b.java
  Scores of past reviews propagate to their reviewers:
  - Reviewer A = FPS(R3, R1) + FPS(R3, R2) = 0.1 + 0.5 = 0.6
  - Reviewer B = FPS(R3, R1) = 0.1
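The score propagation in this example can be sketched as follows. The function and variable names here are illustrative, not taken from the authors' implementation; the FPS scoring function is passed in as a parameter so the sketch does not depend on any particular similarity formula:

```python
from collections import defaultdict

def rank_reviewers(new_files, past_reviews, fps_score):
    """Sum each past review's FPS score into the totals of its reviewers,
    then rank reviewers from highest to lowest total score.

    past_reviews: list of (files, reviewers) tuples
    fps_score:    callable(new_files, past_files) -> float
    """
    scores = defaultdict(float)
    for past_files, reviewers in past_reviews:
        score = fps_score(new_files, past_files)
        for reviewer in reviewers:
            scores[reviewer] += score
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```

Feeding in the slide's illustrative scores (FPS(R3, R1) = 0.1, FPS(R3, R2) = 0.5) ranks reviewer A (0.6) above reviewer B (0.1).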
  • 7. FPS score
  •  File Path Similarity score of a new review Rn and a past review Rp:

        FPS(Rn, Rp, m) = [ Σ_{fn ∈ Files(Rn), fp ∈ Files(Rp)} Similarity(fn, fp) / ( |Files(Rn)| × |Files(Rp)| ) ] × δ^m    (1)

        Similarity(fn, fp) = commonPath(fn, fp) / max(Length(fn), Length(fp))    (2)

  Equation 1 calculates the score of a past review Rp as the average similarity of every file fp in Rp compared with every file fn in Rn. The Files function returns the set of file paths of the input review. The averaged similarity score is prioritized by m and the δ value: as in the time prioritization of the Review Bot's algorithm, the parameter δ is a time prioritization factor ranging over (0, 1]; when δ = 1, time prioritization is not considered.
  •  commonPath(fn, fp): a longest-common-prefix function that returns the number of directory/file names appearing at the beginning of both file paths. This count is based on the assumption that files under the same directory have similar functions, so the first directory of each file path is compared first, and the remaining components are compared in order.

        commonPath("video/src/a.java", "video/src/b.java") = 2

  For example, if fn is /src/camera/video/a.java and fp is /src/camera/photo/a.java, the common path is /src/camera and commonPath(fn, fp) returns 2.
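Equations 1 and 2 can be sketched in Python as below. Function names are illustrative; the `delta ** m` term is one reading of the slides' time prioritization factor δ in (0, 1] (with the default δ = 1 it has no effect), so treat this as an assumption rather than the authors' exact code:

```python
def common_path(fn, fp):
    # Longest common prefix of path components, counted from the first directory.
    count = 0
    for a, b in zip(fn.strip("/").split("/"), fp.strip("/").split("/")):
        if a != b:
            break
        count += 1
    return count

def similarity(fn, fp):
    # Equation (2): shared prefix length over the longer path's component count.
    longest = max(len(fn.strip("/").split("/")), len(fp.strip("/").split("/")))
    return common_path(fn, fp) / longest

def fps(new_files, past_files, delta=1.0, m=0):
    # Equation (1): average pairwise similarity of all file pairs,
    # weighted by the time prioritization factor.
    total = sum(similarity(fn, fp) for fn in new_files for fp in past_files)
    return total / (len(new_files) * len(past_files)) * delta ** m
```

For example, common_path("/src/camera/video/a.java", "/src/camera/photo/a.java") is 2 (the shared /src/camera), giving a Similarity of 2/4 = 0.5.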
  • 8. Evaluation
  • Accuracy (same metric as the Review Bot research):

        Top-k Accuracy = Number of correct top-k recommendations / Total number of reviews

  • Study projects, all using Gerrit code review:

        Project     Study Period*                     # of Reviews    # of Files
        AOSP        Oct 2008 – Jan 2012 (~2 years)    5,126           26,840
        OpenStack   Jul 2011 – May 2012 (~1 year)     6,586           16,953
        Qt          May 2011 – May 2012 (~1 year)     23,810          78,401

        *Study period started from the first year of using peer code review.
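The Top-k Accuracy metric above can be sketched as follows. Counting a recommendation as "correct" when at least one actual reviewer appears in the top k follows the Review Bot evaluation; the function name is illustrative:

```python
def top_k_accuracy(recommended, actual, k):
    """Fraction of reviews whose top-k recommendation list contains
    at least one reviewer who actually reviewed the patch.

    recommended: list of ranked reviewer lists, one per review
    actual:      list of actual reviewer lists, one per review
    """
    correct = sum(
        1 for recs, reviewers in zip(recommended, actual)
        if set(recs[:k]) & set(reviewers)
    )
    return correct / len(recommended)
```

For instance, if the first review's top-2 list contains an actual reviewer and the second review's does not, the top-2 accuracy over those two reviews is 0.5.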
  • 9. Results
  Top-5 accuracy (%), FPS algorithm vs. Review Bot's algorithm:
  - AOSP: 77.12 vs. 29.30
  - OpenStack: 77.97 vs. 38.78
  - Qt: 36.86 vs. 27.18

  Table 2: Top-k accuracy of the FPS algorithm and Review Bot's algorithm with different time prioritization factors (δ)

                         FPS Algorithm (%)                    Review Bot's Algorithm (%)
  Project    Top-k   δ=1    δ=0.8  δ=0.6  δ=0.4  δ=0.2    δ=1    δ=0.8  δ=0.6  δ=0.4  δ=0.2
  AOSP       Top-1   44.50  41.81  40.50  39.21  38.29    21.30  21.38  21.63  21.71  21.77
             Top-3   69.84  67.66  66.56  64.83  63.38    29.15  29.17  29.17  29.20  29.20
             Top-5   77.12  75.30  74.17  72.36  70.80    29.30  29.30  29.30  29.30  29.30
  OpenStack  Top-1   38.32  36.47  35.18  34.73  34.04    22.94  23.23  23.19  23.20  23.19
             Top-3   67.57  63.09  62.45  62.10  61.66    35.76  35.76  35.68  35.55  35.53
             Top-5   77.97  73.28  72.85  72.62  71.71    38.78  38.90  38.82  38.89  38.89
  Qt         Top-1   13.02  11.64  10.21   9.45   8.88    18.64  18.70  18.72  18.71  18.75
             Top-3   28.82  21.39  20.15  19.27  18.46    26.18  26.19  26.16  26.16  26.16
             Top-5   36.86  27.25  26.07  25.34  24.36    27.18  27.18  27.17  27.19  27.19

  [Figure 2: Top-5 accuracy of the FPS algorithm and Review Bot's algorithm against the number of reviews (N) for (a) AOSP, (b) OpenStack, and (c) Qt, with time prioritization factor δ = 1]

  • FPS algorithm significantly outperformed Review Bot's algorithm
  • 10. Summary
  • Reviewer assignment problem: it is difficult and time-consuming to find appropriate reviewers in large-scale projects (e.g., globally distributed software development)
  • Proposed the File Path Similarity (FPS) algorithm: select reviewers by comparing the file paths of a new patch with those of previous reviews
  •  Files with similar functions are usually located in the same or nearby directories [2]
  •  Reviewers with similar knowledge of the system would have reviewed similar files
  • Evaluation: top-k accuracy on three OSS projects using Gerrit code review (AOSP, OpenStack, Qt)
  • Results: the FPS algorithm effectively recommended reviewers in two of the three OSS projects and significantly outperformed Review Bot's algorithm [1] in these projects; time prioritization was surprisingly not appropriate for the recommendation algorithms in distributed project environments
  • Future work: deeper analysis of large-scale projects, other factors such as directory structure, and balancing reviewer workload to reduce the number of awaiting reviews