Making property-based testing easier to read for humans - Laura M. Castro
Agile practices have taught us that both stakeholder involvement and early testing are key to quality software. However, tools that support good communication are usually not that good for testing, and vice versa.
In this talk, readSpec (one of the results of the FP7 EU PROWESS project) is presented: a tool that attempts to fill this gap.
Software analytics (for software quality purposes) is a statistical or machine learning classifier that is trained to identify defect-prone software modules. The goal of software analytics is to help software engineers prioritize their software testing effort on the most risky modules and understand past pitfalls that lead to defective code. While the adoption of software analytics enables software organizations to distil actionable insights, there are still many barriers to broad and successful adoption of such analytics systems. Indeed, even if software organizations can access such invaluable software artifacts and toolkits for data analytics, researchers and practitioners often have little knowledge of how to properly develop analytics systems. Thus, the accuracy of the predictions and the insights that are derived from analytics systems is one of the most important challenges of data science in software engineering.
In this work, we conduct a series of empirical investigations to better understand the impact of experimental components (i.e., class mislabelling, parameter optimization of classification techniques, and model validation techniques) on the performance and interpretation of software analytics. To accelerate the large number of compute-intensive experiments, we leverage the High-Performance Computing (HPC) resources of the Centre for Advanced Computing (CAC) at Queen’s University, Canada. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) realistic noise does not impact the precision of software analytics; (2) automated parameter optimization for classification techniques substantially improves the performance and stability of software analytics; and (3) the out-of-sample bootstrap validation technique produces a good balance between the bias and variance of performance estimates. Our results lead us to conclude that the experimental components of analytics modelling impact the predictions and associated insights that are derived from software analytics. Empirical investigations on the impact of overlooked experimental components are needed to derive practical guidelines for analytics modelling.
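To make the third point concrete, here is a minimal sketch of how out-of-sample bootstrap validation might be implemented with scikit-learn; the DataFrame, its metric columns, and the defect label are illustrative placeholders, not the authors' actual setup.

```python
# Minimal sketch of out-of-sample bootstrap validation for a defect classifier.
# The DataFrame `data`, its metric columns, and the label column are
# illustrative placeholders, not the authors' actual dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def bootstrap_validate(data, features, label, n_iterations=100, seed=42):
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_iterations):
        # Draw training rows with replacement (the bootstrap sample)...
        train = resample(data, replace=True, n_samples=len(data), random_state=rng)
        # ...and evaluate on the rows that were *not* drawn (out-of-sample).
        test = data.loc[~data.index.isin(train.index)]
        if test[label].nunique() < 2:
            continue  # AUC is undefined when only one class remains in the test set
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(train[features], train[label])
        prob = model.predict_proba(test[features])[:, 1]
        scores.append(roc_auc_score(test[label], prob))
    return np.mean(scores), np.std(scores)  # point estimate and its variability
```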
Machine Learning in Static Analysis of Program Source Code - Andrey Karpov
Machine learning has become firmly entrenched in a variety of fields, from speech recognition to medical diagnosis. The popularity of this approach is so great that people try to use it wherever they can. Some attempts to replace classical approaches with neural networks turn out to be unsuccessful. This time we'll consider machine learning in terms of creating effective static code analyzers for finding bugs and potential vulnerabilities.
Session 2 of the Technology & Innovation Management Course. Content: contextual market segmentation, jobs to be done, NASA/DOD technology readiness level
On the Malware Detection Problem: Challenges & Novel Approaches - Marcus Botacin
Marcus Botacin's PhD Defense at Federal University of Paraná (UFPR).
Advisor: Dr André Grégio
Co-Advisor: Paulo de Geus
Evaluation Committee:
Dr Leigh Metcalf, Dr Leyla Bilge, Daniel Alfonso Oliveira
The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random, and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in the top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.
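As a rough illustration of how experiments of this kind can be set up (not the authors' exact protocol), the sketch below flips a fraction of defective training labels to clean and compares precision and recall against a model trained on the original labels; the dataset, the classifier, and the 20% noise rate are assumptions.

```python
# Sketch: flip a fraction of "defective" training labels to "clean" (mimicking
# mislabelled issue reports) and compare precision/recall with a model trained
# on the original labels. Data loading and the noise rate are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

def compare_noise_impact(X, y, noise_rate=0.2, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    rng = np.random.RandomState(seed)
    # Realistic mislabelling is non-random; flipping a random subset of the
    # defective labels is only a stand-in for illustration.
    noisy = y_tr.copy()
    defect_idx = np.where(noisy == 1)[0]
    flip = rng.choice(defect_idx, size=int(noise_rate * len(defect_idx)), replace=False)
    noisy[flip] = 0
    results = {}
    for name, labels in [("clean", y_tr), ("noisy", noisy)]:
        model = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, labels)
        pred = model.predict(X_te)
        results[name] = (precision_score(y_te, pred), recall_score(y_te, pred))
    return results  # e.g. {"clean": (P, R), "noisy": (P, R)}
```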
In this presentation I show a set of important topics about empirical studies in software engineering that can be useful for increasing the quality of your thesis and monographs in general. You can read this presentation and think about how to do good experimentation by defining its objectives, validation methods, questions, expected answers, and metrics, and by measuring them. I also show how researchers selected the data to avoid biased case studies, using the GQM methodology to organize the study in a simpler view.
Software Quality Assurance (SQA) teams play a critical role in the software development process to ensure the absence of software defects. It is not feasible to perform exhaustive SQA tasks (i.e., software testing and code review) on a large software product given the limited SQA resources that are available. Thus, the prioritization of SQA effort is an essential step in all SQA activities. Defect prediction models are used to prioritize risky software modules and understand the impact of software metrics on the defect-proneness of software modules. The predictions and insights that are derived from defect prediction models can help software teams allocate their limited SQA resources to the modules that are most likely to be defective and avoid common pitfalls that are associated with the defective modules of the past. However, the predictions and insights that are derived from defect prediction models may be inaccurate and unreliable if practitioners do not control for the impact of experimental components (e.g., datasets, metrics, and classifiers) on defect prediction models, which could lead to erroneous decision-making in practice. In this thesis, we investigate the impact of experimental components on the performance and interpretation of defect prediction models. More specifically, we investigate the impact that three often overlooked experimental components (i.e., issue report mislabelling, parameter optimization of classification techniques, and model validation techniques) have on defect prediction models. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) issue report mislabelling does not impact the precision of defect prediction models, suggesting that researchers can rely on the predictions of defect prediction models that were trained using noisy defect datasets; (2) automated parameter optimization for classification techniques substantially improves the performance and stability of defect prediction models, as well as changing their interpretation, suggesting that researchers should no longer shy away from applying parameter optimization to their models; and (3) the out-of-sample bootstrap validation technique produces a good balance between the bias and variance of performance estimates, suggesting that the single-holdout and cross-validation families that are commonly used nowadays should be avoided.
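For the parameter-optimization component, a hedged sketch of automated parameter optimization via random search is shown below; the classifier, parameter ranges, and scoring choice are illustrative assumptions, not the thesis' exact setup.

```python
# Sketch: automated parameter optimization for a classification technique via
# random search. The parameter ranges and the input data are illustrative.
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

def tune_classifier(X, y, seed=0):
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=seed),
        param_distributions={
            "n_estimators": randint(50, 500),
            "max_depth": randint(3, 20),
            "min_samples_leaf": randint(1, 10),
        },
        n_iter=50,          # number of sampled parameter settings
        scoring="roc_auc",  # assumed performance measure
        cv=5,
        random_state=seed,
    )
    search.fit(X, y)
    return search.best_estimator_, search.best_params_
```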
Cross-project Defect Prediction Using A Connectivity-based Unsupervised Class... - Feng Zhang
Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction has been the main area of progress by reusing classifiers from other projects. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project. Satisfying the homogeneity requirement often requires significant effort (currently a very active area of research).
An unsupervised classifier does not require any training data, therefore the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have been previously used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored before in our community.
We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.
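A minimal sketch of a connectivity-based unsupervised classifier in this spirit is shown below: modules are split into two clusters with spectral clustering, and the cluster with larger average normalized metric values is labelled defect-prone. The labelling heuristic and parameters are assumptions for illustration, not necessarily the authors' exact formulation.

```python
# Sketch: connectivity-based unsupervised "classification" via spectral
# clustering. The cluster whose members have larger average normalized metric
# values is labelled defect-prone (an illustrative heuristic).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

def spectral_defect_labels(metrics):
    X = StandardScaler().fit_transform(metrics)  # metrics: (n_modules, n_metrics)
    clusters = SpectralClustering(n_clusters=2, affinity="rbf",
                                  random_state=0).fit_predict(X)
    # Heuristic: the cluster with higher metric values on average (larger, more
    # complex, more coupled modules) is treated as defect-prone.
    mean0 = X[clusters == 0].mean()
    mean1 = X[clusters == 1].mean()
    defective_cluster = 0 if mean0 > mean1 else 1
    return (clusters == defective_cluster).astype(int)  # 1 = predicted defective
```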
All You Need to Know to Win a Cybersecurity Adversarial Machine Learning Comp... - Marcus Botacin
Describing our experience in the MLSec competition for the seminar series of the University of Waikato. Presented by Fabricio Ceschin and Marcus Botacin from the Federal University of Paraná.
Abstract—Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research work in combinatorial testing aims to propose novel approaches that try to generate test suites of minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since this problem has been shown to be NP-hard, existing approaches have been designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e., a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO-based algorithms. One algorithm is based on the one-test-at-a-time strategy and the other is based on an IPO-like strategy. In these two algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description of how to formulate the search space, define the fitness function, and choose heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. The empirical results show the effectiveness and efficiency of our approach.
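A hedged sketch of the one-test-at-a-time idea follows: PSO searches for a single test (one value per factor) whose fitness is the number of still-uncovered pairwise combinations it covers. The encoding, rounding scheme, and swarm parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: PSO builds one test covering as many uncovered pairs as possible.
# `uncovered` is a set of pairs of the form ((i, value_i), (j, value_j)), i < j.
import random
from itertools import combinations

def fitness(test, uncovered):
    """Number of still-uncovered pairs this candidate test would cover."""
    return sum(((i, test[i]), (j, test[j])) in uncovered
               for i, j in combinations(range(len(test)), 2))

def pso_build_test(levels, uncovered, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    dims = len(levels)  # levels[i] = number of values factor i can take
    pos = [[random.uniform(0, levels[d] - 1) for d in range(dims)] for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    decode = lambda p: tuple(int(round(x)) % levels[d] for d, x in enumerate(p))
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(decode(p), uncovered) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(decode(pos[i]), uncovered)
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return decode(gbest)
```

To build a full pairwise suite with this sketch, one would repeatedly call pso_build_test, remove the pairs covered by the returned test from uncovered, and stop when the set is empty.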
Synthesizing Knowledge from Software Development Artifacts - Jeongwhan Choi
The content was created from "The Art and Science of Analyzing Software Data"
Baysal, O., Kononenko, O., Holmes, R., and Godfrey, M.W., “Synthesizing Knowledge from Software Development Artifacts”, 2015.
SurfClipse: An IDE-based Context-aware Meta Search Engine - Masud Rahman
Despite the various debugging supports of existing IDEs for programming errors and exceptions, software developers often look to the web for working solutions or up-to-date information. Traditional web search does not consider the context of the problems that they search solutions for, and thus it often does not help much in problem solving. In this paper, we propose a context-aware meta search tool, SurfClipse, that analyzes an encountered exception and its context in the IDE, and recommends not only suitable search queries but also relevant web pages for the exception (and its context). The tool collects results from three popular search engines and a programming Q&A site against the exception in the IDE, refines the results for relevance against the context of the exception, and then ranks them before recommendation. It provides two working modes, interactive and proactive, to meet the versatile needs of developers, and one can browse the result pages using a customized embedded browser provided by the tool.
Updated slides for my talk at the CHAQ meeting in Antwerp. I also added slides on some of my experiences on performing empirical studies with open source and industrial software systems.
Code reviews have been conducted for decades in software projects, with the aim of improving code quality from many different points of view. During code reviews, developers are supported by checklists, coding standards and, possibly, by various kinds of static analysis tools. This paper investigates whether warnings highlighted by static analysis tools are taken care of during code reviews and whether there are kinds of warnings that tend to be removed more than others. Results of a study conducted by mining the Gerrit repository of six Java open source projects indicate that the density of warnings varies only slightly after each review. The overall percentage of warnings removed during reviews is slightly higher than what previous studies found for the overall project evolution history. However, when looking (quantitatively and qualitatively) at specific categories of warnings, we found that during code reviews developers focus on certain kinds of problems. For such categories of warnings the removal percentage tends to be very high, often above 50% and sometimes 100%. Examples of those are warnings in the imports, regular expression, and type resolution categories. In conclusion, while a broad warning detection might produce way too many false positives, enforcing the removal of certain warnings prior to the patch submission could reduce the amount of effort needed during the code review process.
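A small sketch of the kind of measurement such a study relies on, computing warning density (warnings per KLOC) and per-category removal percentages between the pre-review and post-review versions of a change; the input structures are illustrative assumptions.

```python
# Sketch: warning density and per-category removal percentage between the
# version of a change before and after review. Input shapes are illustrative.
from collections import Counter

def warning_density(warnings, kloc):
    """Warnings per thousand lines of code."""
    return len(warnings) / kloc if kloc else 0.0

def removal_percentage_by_category(before, after):
    """before/after: lists of (category, file, line) warning tuples."""
    before_counts = Counter(cat for cat, _, _ in before)
    after_counts = Counter(cat for cat, _, _ in after)
    return {cat: 100.0 * (before_counts[cat] - after_counts[cat]) / before_counts[cat]
            for cat in before_counts}
```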
Multi-step automated refactoring for code smell - eSAT Journals
Abstract
Brain MR images can reveal many abnormalities such as tumors, cysts, bleeding, infection, etc. Analysis of brain MRI using image processing techniques has been an active research area in the field of medical imaging. In this work, it is shown that an MR image of the brain represents a multifractal system, which is described by a continuous spectrum of exponents rather than a single exponent (fractal dimension). Multifractal analysis has been performed on a number of images from the OASIS database. The properties of the multifractal spectrum of a system have been exploited to support the results. Multifractal spectra are determined using the modified box-counting method of fractal dimension estimation.
Keywords: Brain MR Image, Multi fractal, Box-counting
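For reference, a minimal sketch of plain box counting, the building block behind the box-counting estimation mentioned above, estimating the fractal dimension of a binary 2D image as the slope of log N(s) versus log(1/s); the thresholding step and box sizes are illustrative, and a full multifractal spectrum would require the generalized (q-dependent) extension.

```python
# Sketch: box-counting fractal dimension of a binary 2D image.
# Assumes the image is a 2D array with values in [0, 1] and a non-empty
# foreground at every scale considered.
import numpy as np

def box_counting_dimension(image, threshold=0.5):
    binary = image > threshold
    n = min(binary.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        # Count s-by-s boxes that contain at least one foreground pixel.
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Dimension = slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```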
Nowadays software systems are essential to the environment of most organizations, and their maintenance is a key point to support business dynamics. Thus, reverse engineering legacy systems for knowledge reuse has become a major concern in the software industry. This article, based on a survey about reverse engineering tools, discusses a set of functional and non-functional requirements for an effective reverse engineering tool, and observes that current tools only partly support these requirements. In addition, we define new requirements, based on our group’s experience and industry feedback, and present the architecture and implementation of LIFT: a Legacy InFormation retrieval Tool, developed based on these demands. Furthermore, we discuss the compliance of LIFT with the defined requirements. Finally, we applied LIFT in a reverse engineering project of a 210-KLOC NATURAL/ADABAS system of a financial institution and analyzed its effectiveness and scalability, comparing data with previous similar projects performed by the same institution.
Towards the Next Generation of Reactive Model Transformations on Low-Code Pla... - IncQuery Labs
Authors: Benedek Horváth (IncQuery Labs cPlc., Johannes Kepler University Linz, Linz, Austria), Ákos Horváth (IncQuery Labs cPlc.), Manuel Wimmer (Johannes Kepler University Linz, Linz, Austria)
Read the research here: https://dl.acm.org/doi/10.1145/3417990.3420199
R Tool for Visual Studio and team collaboration, by เฉลิมวงศ์ วิจิตรปิยะกุ... - BAINIDA
R Tool for Visual Studio and team collaboration, by เฉลิมวงศ์ วิจิตรปิยะกุล, MVP, Microsoft Thailand
THE FIRST NIDA BUSINESS ANALYTICS AND DATA SCIENCES CONTEST/CONFERENCE
Performance analysis of machine learning approaches in software complexity pr... - Sayed Mohsin Reza
This video contains the presentation at TCCE 2020 by Sayed Mohsin Reza on his paper titled "Performance Analysis of Machine Learning Approaches in Software Complexity Prediction"
Keywords: Software Complexity, Software Quality, Machine Learning, Software Design, Software Reliability, etc
Authors :
1. Sayed Mohsin Reza, Ph.D. Student, University of Texas
2. Mahfujur Rahman, Lecturer, Daffodil International University
3. Hasnat Parvez, Student, Jahangirnagar University
4. Omar Badreddin, Professor, University of Texas
5. Shamim Al Mamun, Professor, Jahangirnagar University
Abstract: Software design is one of the core concepts in software engineering. It covers insights and intuitions about software evolution, reliability, and maintainability. Effective software design facilitates software reliability and better quality management during development, which reduces software development cost. Therefore, it is important to detect and address these issues early. Class complexity is one way of assessing software quality. The objective of this paper is to predict class complexity from source code metrics using Machine Learning (ML) approaches and compare the performance of the approaches. To do that, we collect ten popular and well-maintained open source repositories and extract 18 source code metrics that relate to complexity for class-level analysis. First, we apply statistical correlation to find out which source code metrics impact class complexity the most. Second, we apply five alternative ML techniques to build complexity predictors and compare their performance. The results report that the following source code metrics have the most impact on class complexity: Depth of Inheritance Tree (DIT), Response For Class (RFC), Weighted Method Count (WMC), Lines of Code (LOC), and Coupling Between Objects (CBO). We also evaluate the performance of the techniques, and the results show that Random Forest (RF) significantly improves accuracy without introducing additional false negatives or false positives that would act as false alarms in complexity prediction.
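A hedged sketch of the two analysis steps described above, assuming a DataFrame with the named metrics (DIT, RFC, WMC, LOC, CBO, ...) and a complexity target column; the column names, the binary framing of the target, and the evaluation choices are illustrative assumptions.

```python
# Sketch: (1) rank source code metrics by Spearman correlation with class
# complexity, (2) evaluate a Random Forest complexity predictor. The DataFrame
# layout and the binary "complexity" label are illustrative placeholders.
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rank_metrics_by_correlation(df, target="complexity"):
    metrics = [c for c in df.columns if c != target]
    rho = {m: abs(spearmanr(df[m], df[target])[0]) for m in metrics}
    return sorted(rho.items(), key=lambda kv: kv[1], reverse=True)

def evaluate_rf_predictor(df, target="complexity"):
    X, y = df.drop(columns=[target]), df[target]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="f1").mean()
```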
The field of machine programming — the automation of the development of software — is making notable research advances. This is, in part, due to the emergence of a wide range of novel techniques in machine learning. In today’s technological landscape, software is integrated into almost everything we do, but maintaining software is a time-consuming and error-prone process. When fully realized, machine programming will enable everyone to express their creativity and develop their own software without writing a single line of code. Intel realizes the pioneering promise of machine programming, which is why it created the Machine Programming Research (MPR) team in Intel Labs. The MPR team’s goal is to create a society where everyone can create software, but machines will handle the “programming” part.
The RAISE Lab at Dalhousie University aims to develop tools and technologies for intelligent automation in software engineering. An overview is presented by Dr. Masud Rahman, Assistant Professor, Faculty of Computer Science, Dalhousie University, Canada.
The Forgotten Role of Search Queries in IR-based Bug Localization: An Empiric... - Masud Rahman
Being light-weight and cost-effective, IR-based approaches for bug localization have shown promise in finding software bugs. However, the accuracy of these approaches heavily depends on their used bug reports. A significant number of bug reports contain only plain natural language texts. According to existing studies, IR-based approaches cannot perform well when they use these bug reports as search queries. On the other hand, there is a piece of recent evidence that suggests that even these natural language-only reports contain enough good keywords that could help localize the bugs successfully. On one hand, these findings suggest that natural language-only bug reports might be a sufficient source for good query keywords. On the other hand, they cast serious doubt on the query selection practices in the IR-based bug localization. In this article, we attempted to clear the sky on this aspect by conducting an in-depth empirical study that critically examines the state-of-the-art query selection practices in IR-based bug localization. In particular, we use a dataset of 2,320 bug reports, employ ten existing approaches from the literature, exploit the Genetic Algorithm-based approach to construct optimal, near-optimal search queries from these bug reports, and then answer three research questions. We confirmed that the state-of-the-art query construction approaches are indeed not sufficient for constructing appropriate queries (for bug localization) from certain natural language-only bug reports. However, these bug reports indeed contain high-quality search keywords in their texts even though they might not contain explicit hints for localizing bugs (e.g., stack traces). We also demonstrate that optimal queries and non-optimal queries chosen from bug report texts are significantly different in terms of several keyword characteristics (e.g., frequency, entropy, position, part of speech). Such an analysis has led us to four actionable insights on how to choose appropriate keywords from a bug report. Furthermore, we demonstrate 27%–34% improvement in the performance of non-optimal queries through the application of our actionable insights to them. Finally, we summarize our study findings with future research directions (e.g., machine intelligence in keyword selection).
Preprint: https://bit.ly/39nAoun
Publication URL: https://bit.ly/3xVUxlq
Replication package: https://bit.ly/36T8oxL
More details: https://web.cs.dal.ca/~masud
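As a rough illustration of the keyword characteristics the article analyzes (frequency, position, and a spread measure standing in for entropy), the sketch below scores candidate query terms from a bug report's text; the scoring formula is an assumption for illustration, not the paper's measure.

```python
# Sketch: score candidate query keywords from a bug report by frequency, spread
# across report sections (an entropy-like measure), and position of first use.
# The tokenizer and the combined score are illustrative assumptions.
import math
import re
from collections import Counter

def keyword_scores(report_sections):
    """report_sections: list of text blocks (e.g., title, description paragraphs)."""
    tokens_per_section = [re.findall(r"[A-Za-z_][A-Za-z0-9_]+", s.lower())
                          for s in report_sections]
    all_tokens = [t for sec in tokens_per_section for t in sec]
    tf = Counter(all_tokens)
    scores = {}
    for term, freq in tf.items():
        # Entropy of the term's distribution across sections (higher = spread out).
        per_sec = [sec.count(term) for sec in tokens_per_section]
        total = sum(per_sec)
        probs = [c / total for c in per_sec if c]
        entropy = -sum(p * math.log(p, 2) for p in probs)
        # Earlier first occurrence (e.g., in the title) gets a small boost.
        first_pos = all_tokens.index(term) / len(all_tokens)
        scores[term] = freq * (1 + entropy) * (1 - 0.5 * first_pos)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```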
Effective Reformulation of Query for Code Search using Crowdsourced Knowledge... - Masud Rahman
An effective query reformulation technique that adopts crowdsourced knowledge and large-scale data analytics from the Stack Overflow Q&A site to improve source code search.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. Finally, we had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Code-Review-COW56-Meeting
1. TOWARDS AUTOMATED SUPPORTS FOR CODE REVIEWS USING REVIEWER RECOMMENDATION AND REVIEW QUALITY MODELLING
Mohammad Masudur Rahman, Chanchal K. Roy, Raula G. Kula, Jason Collins, and Jesse Redl
University of Saskatchewan, Canada; Osaka University, Japan; Vendasta Technologies, Canada
56th COW: Code Review and Continuous Inspection/Integration
3. RECAP ON CODE REVIEW
Code review is a systematic examination of source code for detecting bugs or defects and coding rule violations.
Forms: formal inspection, peer code review, modern code review (MCR).
Benefits: early bug detection, stopping coding rule violations, enhancing developer skill.
4. PARTIES INVOLVED IN MCR
Code reviewer and patch submitter.
Support needed: suggestion of appropriate code reviewers; help with good/useful code reviews.
5. TODAY’S TALK OUTLINE
Part I: Code Reviewer Recommendation System
Part II: Prediction Model for Review Usefulness
9. EXISTING LITERATURE
Line Change History (LCH): ReviewBot (Balachandran, ICSE 2013)
File Path Similarity (FPS): RevFinder (Thongtanunam et al., SANER 2015), FPS (Thongtanunam et al., CHASE 2014), Tie (Xia et al., ICSME 2015)
Code Review Content and Comments: Tie (Xia et al., ICSME 2015), SNA (Yu et al., ICSME 2014)
Issues & Limitations: these approaches mine a developer’s contributions from within a single project only.
This work instead leverages library & technology similarity.
10. OUTLINE OF THIS STUDY
Exploratory study (3 research questions) on the Vendasta codebase, the CORRECT technique, evaluation using the Vendasta codebase, a comparative study, evaluation using open source projects, and conclusion.
11. EXPLORATORY STUDY (3 RQS)
RQ1: How frequently do the commercial software projects reuse external libraries from within the codebase?
RQ2: Does the experience of a developer with such libraries matter in code reviewer selection by other developers?
RQ3: How frequently do the commercial projects adopt specialized technologies (e.g., taskqueue, mapreduce, urlfetch)?
12. DATASET: EXPLORATORY STUDY
10 commercial projects (Vendasta), 10 utility libraries (Vendasta), and 10 Google App Engine technologies.
Each project has at least 750 closed pull requests. Each library is used at least 10 times on average. Each technology is used at least 5 times on average.
13. LIBRARY USAGE IN COMMERCIAL PROJECTS (ANSWERED: EXP-RQ1)
Empirical library usage frequency in the 10 projects.
Mostly used: vtest, vauth, and vapi. Least used: vlogs, vmonitor.
14. LIBRARY USAGE IN PULL REQUESTS (ANSWERED: EXP-RQ2)
30%-70% of pull requests used at least one of the 10 libraries.
87%-100% of library authors were recommended as code reviewers in the projects using those libraries.
Library experience really matters!
(Chart: % of PRs using selected libraries; % of library authors as code reviewers)
15. SPECIALIZED TECHNOLOGY USAGE IN PROJECTS (ANSWERED: EXP-RQ3)
Empirical technology usage frequency in the top 10 commercial projects.
Champion technology: mapreduce.
16. TECHNOLOGY USAGE IN PULL REQUESTS (ANSWERED: EXP-RQ3)
20%-60% of the pull requests used at least one of the 10 specialized technologies.
Mostly used in: ARM, CS and VBC.
17. SUMMARY OF EXPLORATORY FINDINGS
About 50% of the pull requests used one or more of the selected libraries. (Exp-RQ1)
About 98% of the library authors were later recommended as pull request reviewers. (Exp-RQ2)
About 35% of the pull requests used one or more specialized technologies. (Exp-RQ3)
Library experience and specialized technology experience really matter in code reviewer selection/recommendation.
18. CORRECT: CODE REVIEWER RECOMMENDATION IN GITHUB USING CROSS-PROJECT & TECHNOLOGY EXPERIENCE
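A hedged sketch of the idea summarized on these slides follows: rank candidate reviewers by how much the libraries and specialized technologies touched by a new pull request overlap with those of the pull requests they reviewed before. The Jaccard overlap and the data shapes are assumptions made for illustration, not the tool's exact formulation.

```python
# Sketch: score candidate reviewers by library/technology overlap between a new
# pull request and the pull requests they reviewed in the past (across projects).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(new_pr, past_prs):
    """new_pr: {"libs": [...], "techs": [...]}
    past_prs: list of {"reviewers": [...], "libs": [...], "techs": [...]}"""
    scores = {}
    for pr in past_prs:
        sim = jaccard(new_pr["libs"], pr["libs"]) + jaccard(new_pr["techs"], pr["techs"])
        for reviewer in pr["reviewers"]:
            scores[reviewer] = scores.get(reviewer, 0.0) + sim
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)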
21. EVALUATION OF CORRECT
Two evaluations, using (1) the Vendasta codebase and (2) open source software projects.
RQ1: Are library experience and technology experience useful proxies for code review skills?
RQ2: Does CoRReCT outperform the baseline technique for reviewer recommendation?
RQ3: Does CoRReCT perform equally/comparably for both private and public codebases?
RQ4: Does CoRReCT show bias to any of the development frameworks?
22. EXPERIMENTAL DATASET
Two datasets: 10 Python projects (Vendasta) with 13,081 pull requests, and 2 Python, 2 Java & 2 Ruby open source projects with 4,034 pull requests; the code reviews and code reviewers of each pull request form the gold set.
Sliding window of 30 past requests for learning.
Metrics: Top-K Accuracy, Mean Precision (MP), Mean Recall (MR), and Mean Reciprocal Rank (MRR).
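For clarity, a small sketch of two of the evaluation metrics named on this slide, Top-K Accuracy and Mean Reciprocal Rank, given a ranked recommendation list per pull request and the gold set of actual reviewers.

```python
# Sketch: Top-K Accuracy and Mean Reciprocal Rank for reviewer recommendation.
def top_k_accuracy(ranked_lists, gold_sets, k=5):
    # Fraction of pull requests whose top-k recommendations hit at least one
    # actual reviewer.
    hits = sum(bool(set(ranked[:k]) & set(gold))
               for ranked, gold in zip(ranked_lists, gold_sets))
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, gold_sets):
    rr = []
    for ranked, gold in zip(ranked_lists, gold_sets):
        rank = next((i + 1 for i, r in enumerate(ranked) if r in gold), None)
        rr.append(1.0 / rank if rank else 0.0)
    return sum(rr) / len(rr)
```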
23. LIBRARY EXPERIENCE & TECHNOLOGY EXPERIENCE (ANSWERED: RQ1)
Metric | Library Similarity (Top-3 / Top-5) | Technology Similarity (Top-3 / Top-5) | Combined Similarity (Top-3 / Top-5)
Accuracy | 83.57% / 92.02% | 82.18% / 91.83% | 83.75% / 92.15%
MRR | 0.66 / 0.67 | 0.62 / 0.64 | 0.65 / 0.67
MP | 65.93% / 85.28% | 62.99% / 83.93% | 65.98% / 85.93%
MR | 58.34% / 80.77% | 55.77% / 79.50% | 58.43% / 81.39%
[ MP = Mean Precision, MR = Mean Recall, MRR = Mean Reciprocal Rank ]
Both library experience and technology experience are found to be good proxies, providing over 90% accuracy.
Combined experience provides the maximum performance: 92.15% recommendation accuracy with 85.93% precision and 81.39% recall.
Evaluation results align with the exploratory study findings.
24. COMPARATIVE STUDY FINDINGS (ANSWERED: RQ2)
CoRReCT performs better than the competing technique in all metrics (p-value = 0.003 < 0.05 for Top-5 accuracy).
It performs better both on average and on individual projects.
RevFinder determines PR similarity using source file name and file directory matching.
Metric | RevFinder [18] (Top-5) | CoRReCT (Top-5)
Accuracy | 80.72% | 92.15%
MRR | 0.65 | 0.67
MP | 77.24% | 85.93%
MR | 73.27% | 81.39%
[ MP = Mean Precision, MR = Mean Recall, MRR = Mean Reciprocal Rank ]
25. COMPARISON ON OPEN SOURCE PROJECTS (ANSWERED: RQ3)
In OSS projects, CoRReCT also performs better than the baseline technique.
85.20% accuracy with 84.76% precision and 78.73% recall, not significantly different than earlier (p-value = 0.239 > 0.05 for precision).
Results for private and public codebases are quite close.
Metric | RevFinder [18] (Top-5) | CoRReCT (OSS, Top-5) | CoRReCT (VA, Top-5)
Accuracy | 62.90% | 85.20% | 92.15%
MRR | 0.55 | 0.69 | 0.67
MP | 62.57% | 84.76% | 85.93%
MR | 58.63% | 78.73% | 81.39%
[ MP = Mean Precision, MR = Mean Recall, MRR = Mean Reciprocal Rank ]
26. COMPARISON ON DIFFERENT PLATFORMS (ANSWERED: RQ4)
Metric | Python: Beets / St2 / Avg. | Java: OkHttp / Orientdb / Avg. | Ruby: Rubocop / Vagrant / Avg.
Accuracy | 93.06% / 79.20% / 86.13% | 88.77% / 81.27% / 85.02% | 89.53% / 79.38% / 84.46%
MRR | 0.82 / 0.49 / 0.66 | 0.61 / 0.76 / 0.69 | 0.76 / 0.71 / 0.74
MP | 93.06% / 77.85% / 85.46% | 88.69% / 81.27% / 84.98% | 88.49% / 79.17% / 83.83%
MR | 87.36% / 74.54% / 80.95% | 85.33% / 76.27% / 80.80% | 81.49% / 67.36% / 74.43%
[ MP = Mean Precision, MR = Mean Recall, MRR = Mean Reciprocal Rank ]
In OSS projects, results for the different platforms look surprisingly close, except for recall.
Accuracy and precision are close to 85% on average.
CORRECT does NOT show any bias to any particular platform.
27. THREATS TO VALIDITY
Threats to Internal Validity
Skewed dataset: each of the 10 selected projects is medium sized (i.e., 1.1K PRs) except CS.
Threats to External Validity
Limited OSS dataset: only 6 OSS projects were considered, which is not sufficient for generalization.
Issue of heavy PRs: PRs containing hundreds of files can make the recommendation slower.
Threats to Construct Validity
Top-K Accuracy: does the metric represent the effectiveness of the technique? It is widely used in the relevant literature (Thongtanunam et al., SANER 2015).
30. RESEARCH PROBLEM: USEFULNESS OF CODE REVIEW COMMENTS
What makes a review comment useful or non-useful?
34.5% of review comments are non-useful at Microsoft (Bosu et al., MSR 2015).
No automated support to detect or improve such comments so far.
31. STUDY METHODOLOGY
1,482 review comments (4 systems) were manually tagged following Bosu et al., MSR 2015, into useful comments (880) and non-useful comments (602), feeding (1) a comparative study and (2) a prediction model.
32. COMPARATIVE STUDY: VARIABLES
Independent variables (8), textual: Reading Ease, Stop Word Ratio, Question Ratio, Code Element Ratio, Conceptual Similarity; experience: Code Authorship, Code Reviewership, External Lib. Experience.
Response variable (1): Comment Usefulness (Yes / No).
Contrast between useful and non-useful comments.
Two paradigms: comment texts and the commenter’s/developer’s experience.
Answers two RQs related to the two paradigms.
33. ANSWERING RQ1: READING EASE
Flesch-Kincaid Reading Ease applied.
No significant difference between useful and non-useful review comments.
34. ANSWERING RQ1: STOP WORD RATIO
Used the Google stop word list and Python keywords.
Stop word ratio = #stop words or keywords / #all words in a review comment.
Non-useful comments contain more stop words than useful comments; the difference is statistically significant.
35. ANSWERING RQ1: QUESTION RATIO
Developers treat clarification questions as non-useful review comments.
Question ratio = #questions / #sentences of a comment.
No significant difference between useful and non-useful comments in question ratio.
36. ANSWERING RQ1: CODE ELEMENT RATIO
Important code elements (e.g., identifiers) in the comment text possibly trigger the code change.
Code element ratio = #source code tokens / #all tokens.
Useful comments > non-useful comments for code element ratio; the difference is statistically significant.
37. ANSWERING RQ1: CONCEPTUAL SIMILARITY BETWEEN COMMENTS & CHANGED CODE
How relevant is the comment to the changed code? Do comments & changed code share vocabularies?
Yes, useful comments share more vocabulary with the changed code than non-useful ones; the difference is statistically significant.
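A minimal sketch of three of the textual features defined on the preceding slides (stop word ratio, code element ratio, and conceptual similarity, here via cosine similarity over token counts); the stop word list, the identifier heuristic, and the similarity measure are illustrative assumptions.

```python
# Sketch: textual features of a review comment. The stop word list is a
# placeholder, the "looks like code" heuristic is illustrative, and cosine
# similarity stands in for the conceptual similarity measure.
import keyword
import math
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "are", "this", "that", "of", "to", "in"}  # placeholder list

def tokenize(text):
    return re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text.lower())

def stop_word_ratio(comment):
    tokens = tokenize(comment)
    stops = STOP_WORDS | set(keyword.kwlist)  # slide mentions Python keywords
    return sum(t in stops for t in tokens) / len(tokens) if tokens else 0.0

def code_element_ratio(comment):
    tokens = re.findall(r"\S+", comment)
    # Heuristic: camelCase, snake_case, or dotted/parenthesised tokens look like code.
    looks_like_code = lambda t: bool(re.search(r"[_.()]|[a-z][A-Z]", t))
    return sum(looks_like_code(t) for t in tokens) / len(tokens) if tokens else 0.0

def conceptual_similarity(comment, changed_code):
    a, b = Counter(tokenize(comment)), Counter(tokenize(changed_code))
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```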
38. ANSWERING RQ2: CODE AUTHORSHIP
File-level authorship did not make much difference, which is a bit counter-intuitive.
Project-level authorship differs between useful and non-useful comments, mostly for Q2 and Q3.
39. ANSWERING RQ2: CODE REVIEWERSHIP
Does reviewing experience matter in providing useful comments?
Yes, it does. File-level reviewing experience matters, especially for Q2 and Q3.
Experienced reviewers provide more useful comments than non-useful comments.
40. ANSWERING RQ2: EXT. LIB. EXPERIENCE
Familiarity with the library used in the changed code for which the comment is posted.
Significantly higher for the authors of useful comments, for Q3 only.
41. SUMMARY OF COMPARATIVE STUDY
RQ | Independent Variable | Useful vs. Non-useful Difference
RQ1 | Reading Ease | Not significant
RQ1 | Stop Word Ratio | Significant
RQ1 | Question Ratio | Not significant
RQ1 | Code Element Ratio | Significant
RQ1 | Conceptual Similarity | Significant
RQ2 | Code Authorship | Somewhat significant
RQ2 | Code Reviewership | Significant
RQ2 | External Lib. Experience | Somewhat significant
42. EXPERIMENTAL DATASET & SETUP
1,482 code review comments, split into an evaluation set (1,116) for model training & cross-validation and a validation set (366) for validation with unseen comments.
43. REVHELPER: USEFULNESS PREDICTION MODEL
Pipeline: review comments are manually classified (using Bosu et al.) into useful & non-useful comments, which are used for model training; the resulting prediction model predicts the usefulness of a new review comment to be submitted.
Applied three ML algorithms: NB, LR, and RF.
Evaluation & validation with different data sets.
Answered 3 RQs: RQ3, RQ4 and RQ5.
44. ANSWERING RQ3: MODEL PERFORMANCE
Learning Algorithm | Useful Comments (Precision / Recall) | Non-useful Comments (Precision / Recall)
Naïve Bayes | 61.30% / 66.00% | 53.30% / 48.20%
Logistic Regression | 60.70% / 71.40% | 54.60% / 42.80%
Random Forest | 67.93% / 75.04% | 63.06% / 54.54%
The Random Forest based model performs the best: both F1-score and accuracy are 66%.
Comment usefulness and the features are not linearly correlated.
As a primer, this prediction could be useful.
49. TAKE-HOME MESSAGES (PART II)
Usefulness of review comments is complex but a much needed piece of information.
No automated support available so far to predict usefulness of review comments instantly.
Non-useful comments are significantly different from useful comments in several textual features (e.g., conceptual similarity).
Reviewing experience matters in providing useful review comments.
Our prediction model can predict the usefulness of a new review comment.
RevHelper performs better than random guessing and available alternatives.
49
51. RESEARCH PROBLEM: IMPACT OF
AUTOMATED BUILDS ON CODE REVIEWS
Automated builds: an important part of CI for commit merging & consistency.
Exponential increase of automated builds over the years with Travis CI.
Builds & code reviews as interleaving steps in pull-based development.
RQ1: Does the status of automated builds influence code review participation in open source projects?
RQ2: Do frequent automated builds help improve the overall quality of peer code reviews?
RQ3: Can we automatically predict whether an automated build would trigger new code reviews or not?
51
52. ANSWERING RQ1: BUILD STATUS & REVIEW
PARTICIPATION
Build Status   Build Only   Builds + Reviews   Total
Canceled       2,616        1,368              3,984
Errored        51,729       27,262             78,991
Failed         55,546       39,025             94,571
Passed         236,573      164,174            400,747
All            346,464      231,829 (40%)      578,293
52
578K PR-based builds across four build statuses.
232K (40%) build entries with code reviews.
Chi-squared tests (p-value = 2.2e-16 < 0.05).
53. ANSWERING RQ1: BUILD STATUS & REVIEW
PARTICIPATION
53
Previous Build Status   #PRs with Review Comments
                        Only Added↑   Only Removed↓   Total Changed↑↓
Canceled                20            24              65
Errored                 510           265             812
Failed                  1,542         826             2,316
Passed                  4,235         1,788           5,677
All                     6,307         2,903           8,870 (28%)
31,648 PRs for 232K entries from 1000+ projects.
For 28% of PRs, the #review comments changed.
Passed builds triggered 18% of new reviews; errored + failed builds triggered 10%.
54. ANSWERING RQ2: BUILD FREQUENCY &
REVIEW QUALITY
54
Quantile   Issue Comments (M / p-value / ∆)   PR Comments (M / p-value / ∆)   All Review Comments (M / p-value / ∆)
Q1         0.60 / <0.001* / 0.35              0.24 / <0.001* / 0.49           0.84 / <0.001* / 0.41
Q4         0.99                               0.52                            1.50
M = mean #review comments, * = statistically significant, ∆ = Cliff’s Delta
55. ANSWERING RQ2: BUILD FREQUENCY &
REVIEW QUALITY
5 projects from Q1 and 5 from Q4, each 3-4 years old.
Cumulative #review comments per build over 48 months.
Code review quality (i.e., #comments) improved almost linearly for frequently built projects; this did not happen for their counterparts.
55
56. ANSWERING RQ3: PREDICTION OF NEW
CODE REVIEW TRIGGERING
Learning Algorithm     Overall Accuracy   New Review Triggered (Precision / Recall)
Naïve Bayes            58.03%             68.70% / 29.50%
Logistic Regression    60.56%             64.50% / 47.00%
J48                    64.04%             69.50% / 50.10%
56
Features: build status, code change statistics, test change statistics, code review comments. Response: new review or unchanged.
Three ML algorithms with 10-fold cross-validation.
26.5K build entries as dataset.
J48 performed the best: 64% accuracy, 69.50% precision & 50% recall.
57. TAKE-HOME MESSAGE (PART III)
Automated builds might influence manual code reviews, since they interleave each other in modern pull-based development.
Passed builds are more associated with review participation, and with new code reviews.
Frequently built projects received more review comments than less frequently built ones.
Code review activities are steady over time for frequently built projects; not true for their counterparts.
Our prediction model can predict whether a build will trigger new code reviews or not.
57
58. REPLICATION PACKAGES
CORRECT, RevHelper & Travis CI Miner
http://www.usask.ca/~masud.rahman/correct/
http://www.usask.ca/~masud.rahman/revhelper/
http://www.usask.ca/~masud.rahman/msrch/travis/
Please contact Masud Rahman
(masud.rahman@usask.ca) for further details about these
studies and replications.
58
59. PUBLISHED PAPERS
[1] M. Masudur Rahman, C.K. Roy, and Jason Collins, "CORRECT: Code Reviewer Recommendation in GitHub Based on Cross-Project and Technology Experience", In Proceedings of the 38th International Conference on Software Engineering Companion (ICSE-C 2016), pp. 222-231, Austin, Texas, USA, May 2016.
[2] M. Masudur Rahman, C.K. Roy, Jesse Redl, and Jason Collins, "CORRECT: Code Reviewer Recommendation at GitHub for Vendasta Technologies", In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2016), pp. 792-797, Singapore, September 2016.
[3] M. Masudur Rahman, C.K. Roy, and R.G. Kula, "Predicting Usefulness of Code Review Comments using Textual Features and Developer Experience", In Proceedings of the 14th International Conference on Mining Software Repositories (MSR 2017), pp. 215-226, Buenos Aires, Argentina, May 2017.
[4] M. Masudur Rahman and C.K. Roy, "Impact of Continuous Integration on Code Reviews", In Proceedings of the 14th International Conference on Mining Software Repositories (MSR 2017), pp. 499-502, Buenos Aires, Argentina, May 2017.
59
When I searched in the web, I found this.
Obviously, code review is not a very good experience all the time
(1) If you do not select appropriate reviewers, the review could be disastrous.
(2) If you do a poor code review yourself, that also could be embarrassing for you!
We already had a lot of talks on code review.
Anyway, just to recap, code review is a systematic examination of source code that identifies defects and coding standard violations.
It helps in early bug detection and thus reduces cost.
It also ensures code quality by maintaining the coding standards.
Code review has also evolved. First came formal inspection, which was time-consuming, slow, and costly.
Then came a less formal code review—peer code review.
Now we do tool assisted code review—also called modern code review.
There are two parties involved in modern code reviews -- (1) patch submitter and (2) code reviewers.
Since modern code review promotes tool support, these two parties actually need two different types of tool support for successful code reviews.
For example, a patch submitter needs automated support in finding appropriate code reviewers for his/her pull requests.
On the other hand, a code reviewer needs a tool that can assist during the code review session.
One type of support could be -- determining usefulness of the review comments in real time.
In our work, we developed tool supports for both of these parties.
Our talk is divided into two parts:
In part I, we will talk about a code reviewer recommendation system.
In part II, we will talk about a model that can predict the usefulness of code review comments.
Since our focus is to extend automated support to various aspects of code reviews, we conduct a third study.
In this study, we analyze the impact of continuous integration on code reviews.
Part I
This is an example of code review at GitHub. Once a developer submits a pull request, a way of submitting changes at GitHub,
the core developers/reviewers can review the changes and provide their feedback like this.
Our goal in this research is to identify appropriate code reviewers for such a pull request.
Identifying such code reviewers is very important, especially for novice developers who do not know the skill sets of their fellow developers.
It is also essential for distributed development, where the developers rarely meet face to face.
Besides, an earlier study suggests that without appropriate code reviewers the whole change submission could be delayed by 12 days on average.
However, identifying such reviewers is challenging since the skill is not obvious, and it would require massive mining of the revision history.
The earlier studies analyze the line change history of source code, the file path similarity of source files, and review comments.
In short, they mostly considered the work experience of a candidate code reviewer within a single project only.
However, some skills span across multiple projects such as working experience with specific API libraries or specialized technologies.
Also, in an industrial setting, a developer's contributions scatter across different projects within the company codebase.
We thus consider external libraries and APIs included in the changed code and suggest more appropriate code reviewers.
This is the outline of today's talk.
We collect commercial projects and libraries from the codebase of Vendasta, a medium-sized Canadian software company.
Then we ask 3 research questions and conduct an exploratory study to answer those questions.
Based on those findings, we then propose our recommendation technique—CORRECT.
Then, in the experiments, we evaluated on commercial projects, compared with the state of the art, and also experimented with open-source projects.
Finally, we conclude the talk.
We ask these three research questions.
In a commercial codebase, there are two types of projects: customer projects and utility projects. The utility projects are also called libraries.
We ask.
How frequently do the commercial software projects reuse external libraries in their code?
Does working experience on such libraries matter in code reviewer selection? That means does a reviewer with such experience get preference over the others?
Does working experience with specialized technologies such as mapreduce, taskqueue matter in code reviewer selection?
This is the connectivity graph of core projects and internal libraries from the Vendasta codebase.
We see the graph is densely connected, which means most of the libraries are used by most of the projects.
For the study, we chose 10 projects and 10 internal libraries, and they were chosen based on certain restrictions.
Each project should have 750 closed pull requests, which means it is reasonably big and, most importantly, quite active.
Each internal library should be used at least 10 times on average by each of those projects.
Each specialized technology should be used at least 5 times on average by each of those projects.
We consider the Google App Engine libraries as the specialized technologies. We consider 10 of them.
This is the usage frequency of the selected libraries in the 10 project we selected for the study.
We take the latest snapshot of each of the projects, analyze their source files, and look for imported libraries using an AST parser.
We mostly look for those 10 libraries, and this is the box plot of their frequencies.
We can see that vtest, vauth, and vapi are the most used, which kind of makes sense, especially for vtest and vauth, since they likely provide generic testing and authentication support.
However, vtest has a large variance, meaning some projects used it extensively whereas the others did not use it at all.
The least used libraries are vlogs and vmonitor.
So, these are the empirical frequencies.
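To illustrate the import scan mentioned above, here is a minimal Python sketch using the standard ast module; the directory layout, helper names, and the library names listed are assumptions for illustration, not our actual tooling.

import ast
from collections import Counter
from pathlib import Path

TARGET_LIBS = {"vtest", "vauth", "vapi", "vlogs", "vmonitor"}  # illustrative subset

def imported_libraries(source):
    """Return the top-level module names imported by one Python source file."""
    libs = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            libs.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            libs.add(node.module.split(".")[0])
    return libs

def library_usage(project_dir):
    """Count, per target library, how many source files of a project import it."""
    usage = Counter()
    for path in Path(project_dir).rglob("*.py"):
        try:
            usage.update(imported_libraries(path.read_text(encoding="utf-8")) & TARGET_LIBS)
        except SyntaxError:
            continue  # skip files that do not parse
    return usage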
We investigated the ratio of pull requests that used any of the selected libraries.
We note that 30%-70% of all pull requests did that in different projects.
We also investigated the percentage of the library authors who are later recommended as code reviewers for the projects referring to that library.
We considered a developer as library author if he/she authored at least one pull request of the library.
We note that almost 100% of authors are later recommended.
This is a very interesting finding that suggests that library experience really matters.
We also calculated the empirical frequency of the ten specialized technologies in the selected Vendasta projects
And this is the box plot.
We can see that mapreduce is the champion technology here, and the rest are close competitors.
In the case of pull requests, 20%-60% of pull requests used at least one of the ten specialized technologies.
Mostly used by ARM, CS and VBC.
So, specialized technologies are also used in our selected projects quite significantly.
So, here are the empirical findings from the exploratory studies we conducted.
They suggest that library experience and specialized technology experience really matter.
These are new findings, and we exploit them to develop the recommendation algorithm later.
Based on those exploratory findings, we propose CORRECT– Code reviewer recommendation based on cross-project and technology experience.
This is our recommendation algorithm.
Once a new pull request R3 is created, we analyze its commits, then source files, and look for the libraries referred and the specialized technologies used. Thus, we get a library token list and a technology token list.
We combine both lists, and the combined list can be considered a summary of libraries and technologies for the new pull request.
Now, we consider the latest 10 closed pull requests, and collect their library and technology tokens.
It should be noted that the past requests contain their code reviewers.
Now, we estimate the similarity between the new request and each of the past requests. We use the cosine similarity score between their token lists.
We add that score to the corresponding code reviewers.
This way, finally, we get a list of reviewers who have accumulated scores from different past reviews.
Then they are ranked, and the top reviewers are recommended.
Thus, we use pull request similarity scores to estimate the relevant expertise of candidate code reviewers.
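A minimal Python sketch of this scoring idea follows; the token lists, reviewer names, and the plain bag-of-words cosine similarity are simplifying assumptions for illustration, not the exact CORRECT implementation.

import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two bag-of-words token lists."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_reviewers(new_pr_tokens, past_requests, top_k=5):
    """Accumulate similarity scores per reviewer over past pull requests and rank them."""
    scores = Counter()
    for pr in past_requests:  # e.g., the latest 10 closed pull requests
        sim = cosine_similarity(new_pr_tokens, pr["tokens"])
        for reviewer in pr["reviewers"]:
            scores[reviewer] += sim
    return [name for name, _ in scores.most_common(top_k)]

# Hypothetical usage:
past = [{"tokens": ["vauth", "vapi", "mapreduce"], "reviewers": ["alice", "bob"]},
        {"tokens": ["vtest", "taskqueue"], "reviewers": ["carol"]}]
print(recommend_reviewers(["vauth", "mapreduce"], past))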
Now, to be technically specific
The state-of-the-art considers two pull requests relevant/similar if they share source code files or directories.
On the other hand, we suggest that two pull requests are relevant/similar if they share the same external libraries and specialized technologies.
That’s the major difference in methodology and our core technical contribution.
We performed two evaluations: one with the Vendasta codebase and the other with open-source projects.
From those experiments, we try to answer four research questions.
Are library experience and technology experience useful proxies for code review skills?
Can our technique outperform the state-of-the-art technique from the literature?
Does it perform equally for closed source and open source projects?
Does it show any bias to any particular platform?
We conducted experiments using 10 commercial projects from Vendasta's GitHub codebase and 6 projects from the open-source domain.
From Vendasta, we collected 13K pull requests, and from open source, we collected 4K pull requests.
Gold reviewers are collected from the corresponding pull requests.
Vendasta projects are Python-based, whereas the open-source projects are written in Python, Java, and Ruby.
We consider four performance metrics– accuracy, precision, recall, and reciprocal rank.
In the case of accuracy, if the recommendation contains at least one gold reviewer, we consider the recommendation accurate.
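As a rough illustration of how these metrics can be computed for a single recommendation, here is a small Python sketch; the exact formulas used in our experiments may differ in detail.

def top_k_accuracy(recommended, gold, k=5):
    """1.0 if at least one gold reviewer appears in the top-k recommendation."""
    return 1.0 if set(recommended[:k]) & set(gold) else 0.0

def precision_recall(recommended, gold, k=5):
    """Precision and recall of the top-k recommendation against the gold set."""
    hits = len(set(recommended[:k]) & set(gold))
    return hits / k, (hits / len(gold) if gold else 0.0)

def reciprocal_rank(recommended, gold):
    """1/rank of the first gold reviewer in the ranked list, 0 if none appears."""
    for rank, name in enumerate(recommended, start=1):
        if name in gold:
            return 1.0 / rank
    return 0.0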
This is how we answer the first RQ.
We see that both library similarity and technology similarity are pretty good proxies for code review skills.
Each of them provides over 90% top-5 accuracy.
However, when we combine, we get the maximum—92% top-5 accuracy.
The precision and recall are also greater than 80% which is highly promising according to relevant literature.
We then compare with the state-of-the-art technique, RevFinder.
We found that our performance is significantly better than theirs; we get a p-value of 0.003 for top-5 accuracy with Mann-Whitney U tests.
The median accuracy is 95%. The median precision and median recall are between 85% and 90%.
In the case of individual projects, our technique also outperformed the state-of-the-art.
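The kind of Mann-Whitney U comparison mentioned above can be run, for example, with scipy; the per-project accuracy values below are placeholders, not our measurements.

from scipy.stats import mannwhitneyu

correct_acc = [0.95, 0.92, 0.96, 0.90, 0.94]    # hypothetical per-project top-5 accuracies
revfinder_acc = [0.80, 0.78, 0.85, 0.76, 0.82]  # hypothetical baseline values

stat, p_value = mannwhitneyu(correct_acc, revfinder_acc, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")  # p < 0.05 would indicate a significant difference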
We also experimented using 6 open-source projects and found 85% top-5 accuracy.
The precision and recall are not significantly different from those with the Vendasta projects.
For example, with precision, we get a p-value of 0.239 which is greater than 0.05.
This slide shows how CORRECT performed with the projects from 3 programming platforms– Python, Java and Ruby.
We also find quite similar performance for each of the platforms which is interesting.
This shows that our findings with commercial projects are quite generalizable.
There are a few threats to the validity of our findings.
-- The dataset from the Vendasta codebase is a bit skewed. Most of the projects are medium-sized and only one project is big.
-- Also, the number of projects considered from the open-source domain is limited.
-- Also, the technique could be slower for big pull requests.
Now to summarize
Code review could be unpleasant or unproductive without appropriate code reviewers.
We first motivated our technique using an exploratory study, which suggested that
-- library experience and specialized technology experience really matter for code reviewer selection.
Then we proposed our technique, CORRECT, which learns from past review history and then recommends reviewers.
We experimented using both commercial and open source projects, and compared with the state-of-the-art.
The results clearly demonstrate the high potential of our technique.
For example, these are two review comments.
This one triggers a new code change whereas the other one does not.
Now, professional developers at Microsoft considered the change-triggering comment as the useful one.
In this research, we try to answer: what makes a review comment useful? And if a comment is not useful, can we identify that before sending it to the patch submitters?
About 35% of the review comments are found non-useful at Microsoft, which is a significant amount.
Unfortunately, there exists no automated support for detecting or improving such comments so far.
So, here comes our study.
So, in this research, we try to understand what makes a review comment useful. For that, we need real code review data.
So, we collaborated with Vendasta, a local software company with 150+ employees, and collected about 1,500 recent review comments from four of their systems.
Then we apply the change-triggering heuristic of an earlier study and manually annotate the comments.
That is, if a review comment triggers a new code change in its vicinity, it is a useful comment, and vice versa.
Now, we have two distinct sets of comments.
We do a comparative analysis between them, which is our first contribution.
Then, we also develop a prediction model so that the non-useful comments can be predicted before they are submitted to the developers.
The goal is to improve the non-useful review comments through identification and possible suggestions.
In the comparative study, we contrast between useful and non-useful comments.
We have independent and response variables.
We consider two paradigms– comment texts, and the corresponding developer’s experience.
In the textual paradigm, we consider five features of the comments, such as reading ease, code element ratio, and conceptual similarity between comment texts and the changed code.
In the experience paradigm, we consider the authorship, reviewing experience, and library experience of the developers.
We have one response variable, that is, whether the comment is useful or not (yes or no).
We applied the Flesch-Kincaid reading ease metric to both groups of review comments, which returns an ease score between 0 and 100.
We did not find any significant difference in these scores for useful and non-useful comments.
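For example, one way to compute such a score in Python is via the third-party textstat package (an assumption; any implementation of the standard Flesch formula would do):

import textstat

comment = "Please rename this variable so its purpose is clearer."
score = textstat.flesch_reading_ease(comment)  # roughly 0 (hard to read) to 100 (easy to read)
print(score)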
The second variable is stop word ratio.
We used Google's stop word list and the Python keyword list, since our codebase is written in Python.
We found that non-useful comments contain a significantly higher ratio of stop words than useful comments.
The finding is also confirmed by a quartile-level analysis.
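A minimal Python sketch of the stop word ratio follows; the stop word set below is a tiny stand-in for Google's list, combined with the built-in Python keyword list.

import keyword

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "and", "it", "this"}  # stand-in for Google's list
PY_KEYWORDS = set(keyword.kwlist)

def stop_word_ratio(comment):
    """Share of words in a review comment that are stop words or Python keywords."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in STOP_WORDS or w in PY_KEYWORDS)
    return hits / len(words)

print(stop_word_ratio("is it necessary to import this module here?"))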
Developers often treat clarification questions as a sign of non-useful comments.
So, we determine question ratio = #questions / #sentences for each of the comments.
We did not find any evidence supporting that claim:
there is no significant difference between useful and non-useful comments in the question ratio.
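A simple approximation of the question ratio is sketched below; the actual sentence splitting in the study was presumably more careful than this regex.

import re

def question_ratio(comment):
    """Sentences ending with '?' divided by all sentences of a comment."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", comment.strip()) if s]
    if not sentences:
        return 0.0
    return sum(1 for s in sentences if s.endswith("?")) / len(sentences)

print(question_ratio("Why is this check needed? Please add a comment."))  # 0.5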
From our manual observations, we note that review comments often contain relevant code elements such as method names or identifiers.
We also contrast the code element ratio between useful and non-useful comments.
We found that useful comments contain more code elements than non-useful comments.
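One possible way to approximate the code element ratio in Python is sketched below; the heuristic for deciding what "looks like code" (method calls, snake_case, camelCase) is an assumption for illustration.

def looks_like_code(token):
    """Very rough heuristic: method calls, snake_case, or camelCase tokens."""
    stripped = token.strip(".,;:")
    return (stripped.endswith("()")
            or "_" in stripped
            or (stripped[:1].islower() and any(c.isupper() for c in stripped)))

def code_element_ratio(comment):
    tokens = comment.split()
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if looks_like_code(t)) / len(tokens)

print(code_element_ratio("getUserId() should handle a null user_id here"))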
How relevant is the comment to the changed code for which it is posted?
This can be determined using the lexical similarity between the comment and the changed code.
Similar concepts have been applied to Stack Overflow questions and answers.
We found that useful comments are conceptually more similar to their target code than the non-useful comments.
That is, you have to use vocabulary familiar to the patch submitter to make your comment useful.
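The conceptual similarity can be approximated as a cosine similarity over the shared vocabulary of a comment and its changed code, as in the hedged sketch below; the tokenisation and weighting in the actual study may differ.

import math
import re
from collections import Counter

def vocabulary(text):
    """Bag of identifier-like words in a piece of text."""
    return Counter(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text.lower()))

def conceptual_similarity(comment, changed_code):
    a, b = vocabulary(comment), vocabulary(changed_code)
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(conceptual_similarity("please check user_id before calling save",
                            "def save(user_id):\n    if user_id is None:\n        raise ValueError"))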
As opposed to earlier findings, file-level authorship did not make much difference.
That is, 46% of the useful comment authors changed the file at least once.
The same statistic for non-useful comments is 49%.
However, in the case of project-level authorship, we found some difference between useful and non-useful comments.
Mostly for Q2 and Q3. For more details, please read the paper.
We also investigate if the reviewing experience is different between the authors of useful and non-useful comments.
We found that file level reviewing experience matters here.
Reviewing experience reduces the non-useful comments from a developer.
Also experienced developers provide more useful comments than less experienced developers.
We repeated the same analysis for external library experience, and found that the experience of developers who wrote useful comments is slightly higher than that of those who wrote non-useful comments.
So, these are the summaries from our comparative study.
We found that three textual features are significantly different between useful and non-useful comments, which is interesting.
No studies explored these aspects earlier.
We also confirmed some of the earlier findings on the developer experience paradigm as well.
We divide our dataset into two groups– evaluation set and validation set.
With the evaluation set, we evaluate our technique with 10-fold cross validation.
Using the validation set, we validate the performance of our prediction model.
These are the steps of our model development.
We first annotate the comments using change-triggering heuristic.
Then we extract the textual and experience features of the comments and train our model using three learning algorithms.
Then we use our model to predict whether a new review comment is useful or not.
If a review comment is not useful, we can explain that with our identified features as well.
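A rough scikit-learn sketch of this training setup is shown below; the feature extraction is abstracted away, and the function and variable names are hypothetical, not the exact RevHelper code.

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def evaluate_learners(X, y):
    """X: one row of textual + experience features per comment; y: 1 = useful, 0 = non-useful."""
    learners = {
        "Naive Bayes": GaussianNB(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    }
    for name, model in learners.items():
        scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"{name}: mean 10-fold accuracy = {scores.mean():.2%}")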
Our evaluation shows that the RF-based model performs the best; in particular, it identifies non-useful comments better than the other variants.
Since RF is a tree-based model, it also confirms that comment usefulness and features are NOT linearly correlated.
The model provides 66% accuracy with 10-fold cross-validation.
As a first attempt, this could be useful.
Something is always better than nothing, we believe.
We also investigate how each of the features perform or contribute during comment usefulness prediction.
We see that Conceptual Similarity, a textual feature has the strongest role in the prediction.
For example, if we discard that feature, our prediction accuracy drops by 25%
The next most important features are the reviewing experience statistics of the developers.
We also contrast two groups of features: textual features and experience features.
While experience is a strong metric overall, it has limited practical use during usefulness prediction.
That is why we introduce the textual features, which also show some interesting results.
ROC and PR-curve show that our prediction is much better than random guessing.
We compare with five variants of a baseline model (Bosu et al., MSR 2015).
That model considers a set of keywords and sentiment analysis to separate useful and non-useful comments.
However, according to our experiments, these metrics are not sufficient, especially for usefulness prediction.
We see that our identified features are more effective, and our prediction is much more accurate than theirs.
This is the ROC for our model and the competing models.
While the baseline models are close to random guessing, ours does much better.
Obviously, these results can be further improved with more accurate features and possibly more training data.
Possibly you can read them out I guess.
Automated build is an integral part of CI. It checks for commit merge success, basically if the patch is breaking something or not.
From the challenge data, we see a significant adoption of Travis CI over the years.
In the pull-based software development, automated builds and code reviews are basically interleaving steps.
Both of them focus on quality assurance, but using different means– automated and manual.
Given that code review is a widely adopted practice nowadays, both in industry and open source,
we tried to investigate how automated builds can influence code reviews.
So, we formulate three RQs
---Read them out
In the RQ1, we try to find out if review participation is correlated to/affected by automated build status.
Out of 578K PR-based build entries, we found 40% are associated with code reviews.
In order to contrast entries related to code reviews with entries not related to code reviews, we extract two equal-sized samples.
We consider #review comments > 0 as the indication of review participation.
Then our Chi-squared tests reported that build status can significantly affect review participation, which rejects the null hypothesis.
We also perform a Pearson correlation between build status and #commits with code reviews for each project, and we found that passed builds are the most correlated with review participation.
That is, people are interested in reviewing code that does not contain any visible errors (e.g., compile errors, merge errors).
But errored and failed builds also received some code reviews, which is interesting.
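For illustration, the chi-squared test can be reproduced with scipy from the contingency table shown on the earlier slide (build-only vs. builds-with-reviews counts per status):

from scipy.stats import chi2_contingency

#        build only, builds + reviews
table = [[2616, 1368],      # Canceled
         [51729, 27262],    # Errored
         [55546, 39025],    # Failed
         [236573, 164174]]  # Passed

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")  # a tiny p-value rejects independence of build status and review participation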
In RQ1, we also investigate whether a previous build can trigger new code reviews or not.
We found that to be true for 28% of the pull requests in our investigation.
Passed builds triggered most of them, about 18% of the code reviews.
The errored and failed builds triggered 10% of the reviews.
In the RQ2, we investigate how build frequency might influence the code review quality.
We consider review comment count as a proxy to code review quality.
We divide the projects into four quartiles based on their #build entries per month.
Then we contrast the review comments from the projects of Q1 and Q4 in the box plot.
We see that frequently built projects have more review comments than less frequently built projects.
That is, frequent builds possibly trigger more code reviews than their counterparts.
The statistical tests below also confirmed that finding.
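The Q1 vs. Q4 comparison can be sketched as a Mann-Whitney U test plus a small hand-rolled Cliff's delta for effect size, as below; the per-project comment counts here are placeholders, not the study data.

from scipy.stats import mannwhitneyu

def cliffs_delta(xs, ys):
    """(#pairs with x > y - #pairs with x < y) / (len(xs) * len(ys))."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

q4_comments = [1.2, 1.6, 0.9, 1.5, 1.8]  # hypothetical per-project means (frequently built)
q1_comments = [0.5, 0.7, 0.6, 0.9, 0.4]  # hypothetical per-project means (infrequently built)

stat, p = mannwhitneyu(q4_comments, q1_comments, alternative="two-sided")
print(f"p = {p:.3f}, Cliff's delta = {cliffs_delta(q4_comments, q1_comments):.2f}")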
We also investigate how build frequency can affect the code review activities over a certain period.
For this, we select the 5 most frequently built projects from Q4 and the 5 least frequently built projects from Q1. Each of them is 3-4 years old.
Then, for each month, we calculate the review comment count per build for each project; the plot actually shows the cumulative version.
We see that frequently built projects have an almost linear increase in their review comments.
The same did not happen for the least frequently built projects; their curves zigzag.
That is, regular/frequent builds might have encouraged code reviews and kept up a steady flow.
Whether a particular build needs further code reviews is a vital piece of information for project management.
We apply ML on the challenge data to find such info.
We apply 3 algorithms on the data where we use build status, code change stats, test change stats, and previous code review stats to predict such info.
J48 performed the best according to our experiment with 70% precision and 50% recall.
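The original experiment used Weka's J48; as a rough Python stand-in, a CART decision tree with 10-fold cross-validation over features of the kind listed above could look like this (the feature layout and names are illustrative):

from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

def evaluate_build_model(X, y):
    """X rows: e.g., [build_passed, files_changed, lines_added, tests_changed, prev_review_comments];
    y: 1 = the build triggered new review comments, 0 = reviews unchanged."""
    tree = DecisionTreeClassifier(random_state=42)
    scores = cross_validate(tree, X, y, cv=10, scoring=("accuracy", "precision", "recall"))
    for metric in ("accuracy", "precision", "recall"):
        print(f"{metric}: {scores['test_' + metric].mean():.2%}")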