Network vs. Code Metrics to Predict Defects: A Replication Study
Rahul Premraj (VU University Amsterdam), Kim Herzig (Saarland University)
  • Speaker note: The function computes a ROC curve by first applying a series of cutoffs for each metric and then computing the sensitivity and specificity for each cutoff point. The importance of the metric is then determined by computing the area under the ROC curve.
    1. Network vs. Code Metrics to Predict Defects: A Replication Study. Rahul Premraj (VU University Amsterdam), Kim Herzig (Saarland University)
    2. The Original Study: published in the Proceedings of the International Conference on Software Engineering (ICSE, May 2008).
    3. The Original Study: a bug database (code quality) and a version archive (source code) feed a defect prediction model built from code metrics.
    4-5. The Original Study: the model was built from code metrics, network metrics, and combined metrics. Network metrics outperformed code metrics!
    6. What are Network Metrics?
       • Consider code artifacts as communicating actors
       • Reuse metrics from social networks, based on the code dependency graph
       (The slide repeats a Java code artifact, a visit(ThreadGroup, int) traversal method, as the nodes of such a dependency graph.)
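As an illustration (not part of the talk), the kinds of network metrics the slide names can be sketched on a toy dependency graph. Here networkx stands in for UCINET, and the file names and edges are invented:

```python
import networkx as nx

# Hypothetical dependency graph: nodes are code files, edges are
# dependencies between them (invented data, standing in for a real project).
deps = nx.Graph()
deps.add_edges_from([
    ("A.java", "B.java"), ("A.java", "C.java"),
    ("B.java", "C.java"), ("C.java", "D.java"),
])

# Ego network of a file: the file, its neighbours, and the edges among
# them -- the basis of the ego-network metrics mentioned on the slide.
ego = nx.ego_graph(deps, "C.java")
ego_size = ego.number_of_nodes()

# Centrality metrics treat heavily connected files as central "actors".
degree = nx.degree_centrality(deps)
betweenness = nx.betweenness_centrality(deps)
```

A file like C.java, which every path passes through, scores high on both centrality measures; the replication computes 25 such metrics per class.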
    7-11. Contributions: network vs. code metrics compared
       • using random sampling on the same release (similar to Z&N)
       • across different releases of the same project (forward prediction)
       • across different projects (cross-project prediction)
       The latter settings move toward a realistic situation.
    12-13. Data Collection: for each code file, record the filename, the number of post-release bugs, and three metric sets (code, network, combined).

                        code metrics                 network metrics
        number          9                            25
        granularity     class/method                 class
        tool            Understand [1]               UCINET [2]
        examples        LoC, NumMethods,             ego-network metrics, structural
                        FanIn/FanOut                 metrics, centrality

        [1] Understand, Scientific Toolworks Inc. (Version 2.0, Build 505, http://www.scitools.com/)
        [2] UCINET: Social Network Analysis Software, Analytic Technologies (http://www.analytictech.com/ucinet/)
    14-22. Data Collection: Differences

                             our study                    Z&N
        Language             Java                         C/C++
        Projects             JRuby, ArgoUML, Eclipse      Windows 2003 Server
        Granularity          source file                  binary
        Code metrics         9, aggregated from           12, some specific to C/C++
                             class/method level
        ... tool             Understand                   Microsoft in-house
        Network metrics      mostly the same (minor differences)
        ... tool             UCINET
    23. Experimental Setup
    24. One Release Classification: stratified repeated holdout setup
       • Randomly splitting the data set: training data (2/3), testing data (1/3)
       • Preserving the proportion of positive and negative instances
       • 300 independent training and testing sets
       • Repeated for code, network, and combined metrics (900 training and testing sets in total)
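The holdout scheme above can be sketched as follows; this is a minimal illustration with made-up defect labels, using scikit-learn's StratifiedShuffleSplit rather than whatever tooling the authors actually used:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy stand-in for one release: 12 files, label 1 = defective, 0 = clean.
X = np.arange(12).reshape(-1, 1)   # placeholder feature matrix
y = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0])

# 300 stratified 2/3-train / 1/3-test splits, as on the slide.
splitter = StratifiedShuffleSplit(n_splits=300, test_size=1 / 3,
                                  random_state=0)
splits = list(splitter.split(X, y))

train_idx, test_idx = splits[0]
```

Stratification keeps the defective/clean proportion roughly equal in both halves of every split; repeating the procedure for the code, network, and combined metric sets yields the 900 training/testing sets.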
    25. Forward Prediction: closest to a real-world situation. Train on release N (e.g. JRuby 1.0), test on release N+1 (e.g. JRuby 1.1). Repeated for all possible combinations; releases must be from the same project, and the testing release must be later than the training release.
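Forward prediction amounts to fitting on one release and scoring the next. A minimal sketch with invented metric values follows; naive Bayes is chosen here only because it appears among the learners on the results slides:

```python
from sklearn.naive_bayes import GaussianNB

# Invented metric rows (e.g. LoC and a centrality value) for release N,
# with defect labels, and for the later release N+1.
X_release_n = [[10, 0.1], [200, 0.9], [50, 0.2], [300, 0.8]]
y_release_n = [0, 1, 0, 1]                   # defect labels for release N
X_release_n1 = [[20, 0.15], [250, 0.85]]     # files of release N+1

# Forward prediction: fit on the earlier release, predict the later one.
model = GaussianNB().fit(X_release_n, y_release_n)
predicted = model.predict(X_release_n1)
```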
    26. Cross-Project Prediction: are defect predictions transferable? Train on release M of project X (e.g. ArgoUML), test on release N of project Y (e.g. Eclipse). Repeated for all combinations of projects (only one version per project).
    27. Results:
       • Reporting prediction measures as box plots
       • Reporting results of the best model
       • Non-parametric statistical test (Kruskal-Wallis)
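The Kruskal-Wallis comparison the slide refers to can be reproduced with scipy; the F-measure samples below are made up solely to demonstrate the test:

```python
from scipy.stats import kruskal

# Hypothetical F-measure distributions from the repeated holdout runs,
# one list per metric set (invented numbers).
f_code = [0.61, 0.58, 0.64, 0.60, 0.59, 0.63]
f_network = [0.72, 0.70, 0.75, 0.69, 0.74, 0.71]
f_all = [0.70, 0.68, 0.73, 0.71, 0.69, 0.72]

# Kruskal-Wallis: non-parametric test for a difference between the
# three groups, with no normality assumption on the measures.
stat, p_value = kruskal(f_code, f_network, f_all)
significant = p_value < 0.05
```

Being rank-based, the test is robust to the skewed, bounded distributions that precision, recall, and F-measure values typically have.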
    28. One Release Classification: box plots of precision, recall, and F-measure for the code, network, and combined (all) metric sets on JRuby 1.0, JRuby 1.1, ArgoUML 0.24, ArgoUML 0.26, Eclipse 2.1, and Eclipse 3.0; each box is annotated with the best-performing learner (svmRadial, multinom, treebag, nb, or rpart).
    29. One Release Classification (findings, from the same box plots):
       • Network metrics outperform code metrics!
       • Using all metrics together offers no improvement!
       • Higher accuracy for the smaller projects in comparison to Eclipse!
    30. Forward Prediction: box plots of precision, recall, and F-measure for the code, network, and all metric sets when using JRuby 1.0 to predict 1.1, ArgoUML 0.24 to predict 0.26, and Eclipse 2.1 to predict 3.0.
    31. Forward Prediction (reading of the box plots):

                             JRuby           ArgoUML         Eclipse
        Network vs. Code     better recall   worse recall
        All vs. Code         worse recall    better recall   worse recall & F-measure
    32. Forward Prediction (summary): all three metric sets appear to have comparable prediction accuracy (no statistically significant differences: ANOVA test).
    33. Cross-Project Prediction: a 3x3 grid of box plots (precision, recall, F-measure) for the code, network, and all metric sets, training on JRuby 1.1, ArgoUML 0.26, or Eclipse 3.0 and testing on each of those three releases.
    34. Cross-Project Prediction (findings):
       • Combined metrics do not work well!
       • Except for Eclipse predicting JRuby, no statistical difference (ANOVA test)
    35. Influential Metrics, measured by the area under the ROC curve using the combined metrics set:
       • In 4/6 cases, all top-10 metrics were network metrics (exceptions: JRuby 1.0, ArgoUML 0.26, and Eclipse 2.1)
       • No pattern with respect to presence or ranking
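The per-metric influence measure (area under the ROC curve, as on the slide) can be sketched directly; the labels and metric values below are invented:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical per-file data: defect label plus two candidate metrics.
labels = [0, 0, 0, 1, 0, 1, 1, 1]
centrality = [0.1, 0.2, 0.3, 0.8, 0.2, 0.9, 0.7, 0.6]   # a network metric
loc = [120, 300, 80, 150, 200, 90, 400, 110]            # a code metric

# A single metric's importance = AUC when thresholding that metric
# alone to separate defective from clean files.
auc_centrality = roc_auc_score(labels, centrality)
auc_loc = roc_auc_score(labels, loc)
```

An AUC of 0.5 means the metric ranks defective files no better than chance; ranking metrics by this score is how top-10 lists like the one on the slide are obtained.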
    36. Comparison with related work:

                         Z&N        our study   Bird et al. [1]   Tosun et al. [2]
        Language         C/C++      Java        C/C++, Java       C/C++, Java
        Granularity      binary     file        package           file

        Compared on: network vs. code metrics using one-release prediction, forward prediction, and cross-project prediction; network metrics performance with respect to project size.

        [1] Bird, C., Nagappan, N., Gall, H. and Murphy, B. 2009. Putting It All Together: Using Socio-technical Networks to Predict Failures. In Proceedings of the 20th International Symposium on Software Reliability Engineering (ISSRE 2009).
        [2] Tosun, A., Turhan, B. and Bener, A. 2009. Validation of network measures as indicators of defective modules in software systems. In Proceedings of the 5th International Conference on Predictor Models in Software Engineering (PROMISE '09).
    37. Takeaway: code metrics might be preferable because they are:
       • easier to collect
       • fewer in number
       • faster to train a prediction model with