Personalized defect prediction models can more accurately predict buggy changes. The researchers propose two personalized approaches:
1) Personalized Change Classification (PCC) trains a separate model for each developer using their change history.
2) Confidence-based Hybrid PCC (PCC+) combines the predictions from the CC and PCC models, selecting the one with the highest confidence.
The approaches were evaluated on six projects, finding up to 155 more bugs by inspecting only 20% of code locations compared to non-personalized models. PCC and PCC+ consistently outperformed the baseline across different settings, demonstrating the benefits of personalization.
The document discusses three major problems in verification: specifying properties to check, specifying the environment, and computational complexity. It then presents several approaches to addressing these problems, including using coverage metrics tailored to detection ability, sequential equivalence checking to avoid testbenches, and "perspective-based verification" using minimal abstract models focused on specific property classes. This allows verification earlier in design when changes are more tractable and catches bugs before implementation.
- The document proposes a technique to help developers debug code by detecting similar code elements between a developer's code and code found in answers to questions on Stack Overflow.
- The technique involves detecting code clones between the developer's code and code in Stack Overflow questions and answers, then filtering the results to find the most similar code elements.
- An evaluation on several open source projects found the technique was able to detect 189 warnings, with 171 warnings confirmed as real bugs by developers.
AfterTest Madrid March 2016 - DevOps and Testing IntroductionPeter Marshall
This document discusses continuous testing in the context of DevOps. It defines continuous testing as including automated testing, managing production and non-production environments, application monitoring, and evaluating business objectives. Continuous testing relies on DevOps activities like automating builds, infrastructure, deployments, and monitoring. It advocates for smaller, more frequent deliveries through practices like test automation, infrastructure as code, and treating testing as integral to the software delivery process. The conclusion emphasizes automation, configuration management, and obtaining frequent feedback to enable continuous testing.
Advanced Continuous Test Automation presentation given by Marc Hornbeek, Sr. Solutions Architect for Spirent at the IEEE meeting Buenaventura chapter March 9, 2016.
TRACK H: Using Formal Tools to Improve the Productivity of Verification at ST...chiportal
Formal verification was used to verify three projects at STMicroelectronics:
1) A Sensor Control Block, where 3 bugs were found including issues with an APB interface and interrupt properties.
2) A Clock and Reset Manager block where the specification was unclear but formal analysis helped extract timing properties.
3) Point-to-point connectivity checks across a subsystem where 2564 connections were formally verified.
Overall, formal verification provided time savings over constrained random testing, helped address incomplete specifications, and improved quality.
REMI: Defect Prediction for Efficient API Testing ( ESEC/FSE 2015, Industria...Sung Kim
1) The document presents REMI, a method for applying software defect prediction to efficiently test APIs. REMI ranks APIs based on metrics to identify risky APIs and guide test case development and execution.
2) An experiment applying REMI to Tizen wearable APIs found that focusing test cases on risky APIs identified additional defects compared to uniform testing. REMI also found defects more quickly during test execution.
3) Developers providing feedback found the risky API rankings helpful for efficiently allocating limited testing resources, though REMI required some overhead to configure and execute. Labeling APIs as buggy/clean for the prediction model was also difficult without noise.
Software analytics (for software quality purpose) is a statistical or machine learning classifier that is trained to identify defect-prone software modules. The goal of software analytics is to help software engineers prioritize their software testing effort on the most-risky modules and understand past pitfalls that lead to defective code. While the adoption of software analytics enables software organizations to distil actionable insights, there are still many barriers to broad and successful adoption of such analytics systems. Indeed, even if software organizations can access such invaluable software artifacts and toolkits for data analytics, researchers and practitioners often have little knowledge to properly develop analytics systems. Thus, the accuracy of the predictions and the insights that are derived from analytics systems is one of the most important challenges of data science in software engineering.
In this work, we conduct a series of empirical investigation to better understand the impact of experimental components (i.e., class mislabelling, parameter optimization of classification techniques, and model validation techniques) on the performance and interpretation of software analytics. To accelerate a large amount of compute-intensive experiment, we leverage the High-Performance-Computing (HPC) resources of Centre for Advanced Computing (CAC) from Queen’s University, Canada. Through case studies of systems that span both proprietary and open- source domains, we demonstrate that (1) realistic noise does not impact the precision of software analytics; (2) automated parameter optimization for classification techniques substantially improve the performance and stability of software analytics; and (3) the out-of- sample bootstrap validation technique produces a good balance between bias and variance of performance estimates. Our results lead us to conclude that the experimental components of analytics modelling impact the predictions and associated insights that are derived from software analytics. Empirical investigations on the impact of overlooked experimental components are needed to derive practical guidelines for analytics modelling.
Personalized defect prediction models can more accurately predict buggy changes. The researchers propose two personalized approaches:
1) Personalized Change Classification (PCC) trains a separate model for each developer using their change history.
2) Confidence-based Hybrid PCC (PCC+) combines the predictions from the CC and PCC models, selecting the one with the highest confidence.
The approaches were evaluated on six projects, finding up to 155 more bugs by inspecting only 20% of code locations compared to non-personalized models. PCC and PCC+ consistently outperformed the baseline across different settings, demonstrating the benefits of personalization.
The document discusses three major problems in verification: specifying properties to check, specifying the environment, and computational complexity. It then presents several approaches to addressing these problems, including using coverage metrics tailored to detection ability, sequential equivalence checking to avoid testbenches, and "perspective-based verification" using minimal abstract models focused on specific property classes. This allows verification earlier in design when changes are more tractable and catches bugs before implementation.
- The document proposes a technique to help developers debug code by detecting similar code elements between a developer's code and code found in answers to questions on Stack Overflow.
- The technique involves detecting code clones between the developer's code and code in Stack Overflow questions and answers, then filtering the results to find the most similar code elements.
- An evaluation on several open source projects found the technique was able to detect 189 warnings, with 171 warnings confirmed as real bugs by developers.
AfterTest Madrid March 2016 - DevOps and Testing IntroductionPeter Marshall
This document discusses continuous testing in the context of DevOps. It defines continuous testing as including automated testing, managing production and non-production environments, application monitoring, and evaluating business objectives. Continuous testing relies on DevOps activities like automating builds, infrastructure, deployments, and monitoring. It advocates for smaller, more frequent deliveries through practices like test automation, infrastructure as code, and treating testing as integral to the software delivery process. The conclusion emphasizes automation, configuration management, and obtaining frequent feedback to enable continuous testing.
Advanced Continuous Test Automation presentation given by Marc Hornbeek, Sr. Solutions Architect for Spirent at the IEEE meeting Buenaventura chapter March 9, 2016.
TRACK H: Using Formal Tools to Improve the Productivity of Verification at ST...chiportal
Formal verification was used to verify three projects at STMicroelectronics:
1) A Sensor Control Block, where 3 bugs were found including issues with an APB interface and interrupt properties.
2) A Clock and Reset Manager block where the specification was unclear but formal analysis helped extract timing properties.
3) Point-to-point connectivity checks across a subsystem where 2564 connections were formally verified.
Overall, formal verification provided time savings over constrained random testing, helped address incomplete specifications, and improved quality.
REMI: Defect Prediction for Efficient API Testing ( ESEC/FSE 2015, Industria...Sung Kim
1) The document presents REMI, a method for applying software defect prediction to efficiently test APIs. REMI ranks APIs based on metrics to identify risky APIs and guide test case development and execution.
2) An experiment applying REMI to Tizen wearable APIs found that focusing test cases on risky APIs identified additional defects compared to uniform testing. REMI also found defects more quickly during test execution.
3) Developers providing feedback found the risky API rankings helpful for efficiently allocating limited testing resources, though REMI required some overhead to configure and execute. Labeling APIs as buggy/clean for the prediction model was also difficult without noise.
Software analytics (for software quality purpose) is a statistical or machine learning classifier that is trained to identify defect-prone software modules. The goal of software analytics is to help software engineers prioritize their software testing effort on the most-risky modules and understand past pitfalls that lead to defective code. While the adoption of software analytics enables software organizations to distil actionable insights, there are still many barriers to broad and successful adoption of such analytics systems. Indeed, even if software organizations can access such invaluable software artifacts and toolkits for data analytics, researchers and practitioners often have little knowledge to properly develop analytics systems. Thus, the accuracy of the predictions and the insights that are derived from analytics systems is one of the most important challenges of data science in software engineering.
In this work, we conduct a series of empirical investigation to better understand the impact of experimental components (i.e., class mislabelling, parameter optimization of classification techniques, and model validation techniques) on the performance and interpretation of software analytics. To accelerate a large amount of compute-intensive experiment, we leverage the High-Performance-Computing (HPC) resources of Centre for Advanced Computing (CAC) from Queen’s University, Canada. Through case studies of systems that span both proprietary and open- source domains, we demonstrate that (1) realistic noise does not impact the precision of software analytics; (2) automated parameter optimization for classification techniques substantially improve the performance and stability of software analytics; and (3) the out-of- sample bootstrap validation technique produces a good balance between bias and variance of performance estimates. Our results lead us to conclude that the experimental components of analytics modelling impact the predictions and associated insights that are derived from software analytics. Empirical investigations on the impact of overlooked experimental components are needed to derive practical guidelines for analytics modelling.
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.
The document summarizes key principles of software testing including:
1. Testing is necessary because software will contain faults due to human errors, and failures can be costly.
2. Exhaustive testing of all possible test cases is impractical. Risk-based prioritization is used to test the most important cases first.
3. The test process includes planning, specification, execution, recording results and checking completion criteria. Effective test cases are prioritized to efficiently find faults.
Agile Development in a Regulated EnvironmentTechWell
There is no doubt that agile is an accepted development methodology. However, if you work in a regulated industry like health care where you have to comply with its standard operating procedures, heaps of paperwork, and frequent audits, don’t these conflict with agile’s core tenets? Chris Ampenberger describes his operating environment and the applicable regulations that define the constraints for the software development process he can use. He shares how they overcame the incongruity between agile and regulatory requirements. With real-world examples, Chris demonstrates how you can produce the required documentation as a byproduct of the scrum team’s everyday work and illustrates how his teams succeeded in an agile way, achieving significant increases in productivity. Chris points out common pitfalls, details the hurdles they had to overcome, and discusses how to obtain buy-in from stakeholders at all levels of the organization. If you are working in a regulated environment, this session is for you.
This document introduces software testing. It defines software testing as executing a program to find bugs based on specifications, functionality, and performance. The goals of testing are to find as many faults as possible and ensure the software works properly. Testing should start early in the software development life cycle and continue throughout. Different types of testing exist and test plans must be carefully made and documented.
Software quality improvement expert Jan Princen and XBOSoft CEO Philip Lew discuss the use of Predictive Analytics to prevent software defects in this XBOSoft webinar on Defect Prevention.
This document discusses static code analysis and tools like SonarQube and Coverity. Static code analysis examines code without executing it to find bugs. Monitoring and fixing code quality issues improves application quality and delivery. SonarQube is an open source platform for managing code quality. It provides continuous inspection, reporting, and community support. Coverity also helps developers find defects early through static analysis of concurrency, security, and other issues. Both tools analyze code to find bugs and improve code quality and development processes.
1. The document discusses quality processes at Google, including that testing involves many developers and one tester.
2. It describes different types of tests - unit, integration, and system - and how they are used to validate code quality and product quality at different stages.
3. Key aspects of Google's quality process include classifying tests by size, enforcing time limits, ensuring tests are independent and have no side effects, and using test results to guide continuous integration.
Gary Howard discusses establishing project monitoring mechanisms for software development using a "software assembly line" approach. He outlines how a software assembly line would automatically build software, run tests, and provide pass/fail notifications similarly to a manufacturing assembly line. Key aspects include continuously integrating and testing source code changes, defining test cases with expected results, and tracking pass rates and times at each stage to monitor quality. The presentation provides examples of automated testing strategies and dashboards for visualizing build comparisons and progress.
Kanoah is an innovative company that provides test management solutions integrated with the Atlassian JIRA platform. Kanoah Tests allows users to plan, author, execute, track and report on tests directly within JIRA for better collaboration. It offers features like test case management, test execution and reporting, and a REST API for test automation. Customers praise Kanoah Tests for its seamless JIRA integration, support for both agile and traditional testing, and responsive customer support.
ISTQB Certified Mobile Application Tester - introHassan Muhammad
The document provides information about the ISTQB® Mobile Application Tester (CTFL-MAT) certification. It states that candidates must first hold the ISTQB Foundation Certificate to receive the Mobile Application Testing certificate. There are currently 50 certified testers in Egypt. The exam consists of 40 multiple choice questions to be completed within 60 minutes, with a pass mark of at least 26 correct answers. The syllabus covers 5 chapters on topics like mobile testing types and processes. Questions are distributed across the chapters and classified at different K-levels (Remember, Understand, Apply).
1. The first step in performance testing is to capture non-functional requirements from stakeholders to define test scope, targets, resources, and deliverables.
2. The second step is to build a test environment that closely approximates production, including necessary load injection capacity and monitoring.
3. The third step is to script use cases, identifying session data, input requirements, and checkpoints for each use case.
4. The fourth step is to build test scenarios defining test type, users, load profiles, and monitoring for each performance test.
5. The fifth step is to execute the performance tests, running dress rehearsals and different test types in a cycle of execution and problem resolution.
What is automation testing | David TzemachDavid Tzemach
What is Automation Testing?
What are the objectives of using automation tools?
What can we achieve using automation tools?
What Test Automation is not?
WHY MAY TESTING TEAMS REJECT THE IMPLEMENTATION OF AUTOMATED TESTS?
Common Types of Automated Testing Tools
Modern release management teams pride themselves on setting up a seamless workflow for continuous integration and delivery. However, continuous testing – which is one of the most critical components of the workflow is often taken for granted or marginalized without clear ownership leading to impediments in quality. With the advent of DevOps and the movement to break down silos between developers and operations, it becomes critically important that all members of an IT team - regardless of what tools they use, or role they play - understand the essentials of continuous testing.
Partitioning Composite Code Changes to Facilitate Code Review (MSR2015)Sung Kim
Yida's presentation at MSR 2015!
Abstract—Developers expend significant effort on reviewing source code changes, hence the comprehensibility of code changes directly affects development productivity. Our prior study has suggested that composite code changes, which mix multiple development issues together, are typically difficult to review. Unfortunately, our manual inspection of 453 open source code changes reveals a non-trivial occurrence (up to 29%) of such composite changes.
In this paper, we propose a heuristic-based approach to automatically partition composite changes, such that each sub-change in the partition is more cohesive and self-contained. Our quantitative and qualitative evaluation results are promising in demonstrating the potential benefits of our approach for facilitating code review of composite code changes.
A brief overview of the different software testing methods is provided, analysing the main aspects of black-box, white-box and grey-box techniques.
Experienced-based testing is also mentioned.
Static Analysis Techniques For Testing Application Security - Houston Tech FestDenim Group
Static Analysis of software refers to examining source code and other software artifacts without executing them. This presentation looks at how these techniques can be used to identify security defects in applications. Approaches examined will range from simple keyword search methods used to identify calls to banned functions through more sophisticated data flow analysis used to identify more complicated issues such as injection flaws. In addition, a demonstration will be given of two freely-available static analysis tools: FXCop and the beta version of Microsoft’s XSSDetect tool. Finally, some approaches will be presented on how organizations can start using static analysis tools as part of their development and quality assurance processes.
Global System For Automated Applications Using Plug Injpinasaez
The document discusses developing a global system for automated test applications using a plug-in architecture. It proposes:
1. Treating test applications as plug-ins that can be dynamically loaded into a framework. This would provide a unified interface and data storage.
2. Developing a test application template that abstracts the framework details and provides a standardized way for developers to create test plug-ins.
3. Using messaging and a factory pattern to allow communication between the framework and plug-ins that operate in different contexts.
The goal is to create a modular, reusable and user-friendly system for developing and running automated tests across different hardware and applications.
When do software issues get reported in large open source softwareRAKESH RANA
The document examines reporting patterns of over 7,000 issue reports from five large open source software projects to evaluate when defects are reported and if there is a difference between reported and actual defect inflow. The results show there are distinct patterns in when defects are reported, with more in January-March and fewer in April-July. However, the ratio of reported to actual defects remains fairly stable over time. This stability enhances confidence in applying software reliability growth models that use reported defect data for predictions, which were found to have around 4.8% error on average compared to using actual bug data. The reporting patterns may also provide insight into when different contributors report issues.
Testing- Fundamentals of Testing-Mazenet solutionMazenetsolution
For Youtube Videos: bit.do/sevents
Why testing is necessary,Fundamental test process, Psychology of testing, Re-testing and regression testing,
Expected results,Prioritisation of tests
What slows down your mobile SDLC?
We analyzed the testing strategies from 350 enterprise app developers, testers and QA manager to find out what causes delays.
Learn how to accelerate the mobile app lifecycle from development to deployment and discover:
What factors slow down app testing
How these factors delay release cycles
Strategies to speed up app testing and delivery
Defect root cause analysis, Андрей ТитаренкоSigma Software
This document outlines a process for defect root cause analysis with the following goals:
1) Control defect costs and delivery through early defect elimination to save budget.
2) Contract with the team to map potential defects to root causes and SDLC phases.
3) Mine data to accurately define defect root causes and focus on solving real problems by adjusting team process and prevention strategies, then monitoring progress to ensure effectiveness.
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.
The document summarizes key principles of software testing including:
1. Testing is necessary because software will contain faults due to human errors, and failures can be costly.
2. Exhaustive testing of all possible test cases is impractical. Risk-based prioritization is used to test the most important cases first.
3. The test process includes planning, specification, execution, recording results and checking completion criteria. Effective test cases are prioritized to efficiently find faults.
Agile Development in a Regulated EnvironmentTechWell
There is no doubt that agile is an accepted development methodology. However, if you work in a regulated industry like health care where you have to comply with its standard operating procedures, heaps of paperwork, and frequent audits, don’t these conflict with agile’s core tenets? Chris Ampenberger describes his operating environment and the applicable regulations that define the constraints for the software development process he can use. He shares how they overcame the incongruity between agile and regulatory requirements. With real-world examples, Chris demonstrates how you can produce the required documentation as a byproduct of the scrum team’s everyday work and illustrates how his teams succeeded in an agile way, achieving significant increases in productivity. Chris points out common pitfalls, details the hurdles they had to overcome, and discusses how to obtain buy-in from stakeholders at all levels of the organization. If you are working in a regulated environment, this session is for you.
This document introduces software testing. It defines software testing as executing a program to find bugs based on specifications, functionality, and performance. The goals of testing are to find as many faults as possible and ensure the software works properly. Testing should start early in the software development life cycle and continue throughout. Different types of testing exist and test plans must be carefully made and documented.
Software quality improvement expert Jan Princen and XBOSoft CEO Philip Lew discuss the use of Predictive Analytics to prevent software defects in this XBOSoft webinar on Defect Prevention.
This document discusses static code analysis and tools like SonarQube and Coverity. Static code analysis examines code without executing it to find bugs. Monitoring and fixing code quality issues improves application quality and delivery. SonarQube is an open source platform for managing code quality. It provides continuous inspection, reporting, and community support. Coverity also helps developers find defects early through static analysis of concurrency, security, and other issues. Both tools analyze code to find bugs and improve code quality and development processes.
1. The document discusses quality processes at Google, including that testing involves many developers and one tester.
2. It describes different types of tests - unit, integration, and system - and how they are used to validate code quality and product quality at different stages.
3. Key aspects of Google's quality process include classifying tests by size, enforcing time limits, ensuring tests are independent and have no side effects, and using test results to guide continuous integration.
Gary Howard discusses establishing project monitoring mechanisms for software development using a "software assembly line" approach. He outlines how a software assembly line would automatically build software, run tests, and provide pass/fail notifications similarly to a manufacturing assembly line. Key aspects include continuously integrating and testing source code changes, defining test cases with expected results, and tracking pass rates and times at each stage to monitor quality. The presentation provides examples of automated testing strategies and dashboards for visualizing build comparisons and progress.
Kanoah is an innovative company that provides test management solutions integrated with the Atlassian JIRA platform. Kanoah Tests allows users to plan, author, execute, track and report on tests directly within JIRA for better collaboration. It offers features like test case management, test execution and reporting, and a REST API for test automation. Customers praise Kanoah Tests for its seamless JIRA integration, support for both agile and traditional testing, and responsive customer support.
ISTQB Certified Mobile Application Tester - introHassan Muhammad
The document provides information about the ISTQB® Mobile Application Tester (CTFL-MAT) certification. It states that candidates must first hold the ISTQB Foundation Certificate to receive the Mobile Application Testing certificate. There are currently 50 certified testers in Egypt. The exam consists of 40 multiple choice questions to be completed within 60 minutes, with a pass mark of at least 26 correct answers. The syllabus covers 5 chapters on topics like mobile testing types and processes. Questions are distributed across the chapters and classified at different K-levels (Remember, Understand, Apply).
1. The first step in performance testing is to capture non-functional requirements from stakeholders to define test scope, targets, resources, and deliverables.
2. The second step is to build a test environment that closely approximates production, including necessary load injection capacity and monitoring.
3. The third step is to script use cases, identifying session data, input requirements, and checkpoints for each use case.
4. The fourth step is to build test scenarios defining test type, users, load profiles, and monitoring for each performance test.
5. The fifth step is to execute the performance tests, running dress rehearsals and different test types in a cycle of execution and problem resolution.
What is automation testing | David TzemachDavid Tzemach
What is Automation Testing?
What are the objectives of using automation tools?
What can we achieve using automation tools?
What Test Automation is not?
WHY MAY TESTING TEAMS REJECT THE IMPLEMENTATION OF AUTOMATED TESTS?
Common Types of Automated Testing Tools
Modern release management teams pride themselves on setting up a seamless workflow for continuous integration and delivery. However, continuous testing – which is one of the most critical components of the workflow is often taken for granted or marginalized without clear ownership leading to impediments in quality. With the advent of DevOps and the movement to break down silos between developers and operations, it becomes critically important that all members of an IT team - regardless of what tools they use, or role they play - understand the essentials of continuous testing.
Partitioning Composite Code Changes to Facilitate Code Review (MSR2015)Sung Kim
Yida's presentation at MSR 2015!
Abstract—Developers expend significant effort on reviewing source code changes, hence the comprehensibility of code changes directly affects development productivity. Our prior study has suggested that composite code changes, which mix multiple development issues together, are typically difficult to review. Unfortunately, our manual inspection of 453 open source code changes reveals a non-trivial occurrence (up to 29%) of such composite changes.
In this paper, we propose a heuristic-based approach to automatically partition composite changes, such that each sub-change in the partition is more cohesive and self-contained. Our quantitative and qualitative evaluation results are promising in demonstrating the potential benefits of our approach for facilitating code review of composite code changes.
A brief overview of the different software testing methods is provided, analysing the main aspects of black-box, white-box and grey-box techniques.
Experienced-based testing is also mentioned.
Static Analysis Techniques For Testing Application Security - Houston Tech FestDenim Group
Static Analysis of software refers to examining source code and other software artifacts without executing them. This presentation looks at how these techniques can be used to identify security defects in applications. Approaches examined will range from simple keyword search methods used to identify calls to banned functions through more sophisticated data flow analysis used to identify more complicated issues such as injection flaws. In addition, a demonstration will be given of two freely-available static analysis tools: FXCop and the beta version of Microsoft’s XSSDetect tool. Finally, some approaches will be presented on how organizations can start using static analysis tools as part of their development and quality assurance processes.
Global System For Automated Applications Using Plug Injpinasaez
The document discusses developing a global system for automated test applications using a plug-in architecture. It proposes:
1. Treating test applications as plug-ins that can be dynamically loaded into a framework. This would provide a unified interface and data storage.
2. Developing a test application template that abstracts the framework details and provides a standardized way for developers to create test plug-ins.
3. Using messaging and a factory pattern to allow communication between the framework and plug-ins that operate in different contexts.
The goal is to create a modular, reusable and user-friendly system for developing and running automated tests across different hardware and applications.
When do software issues get reported in large open source softwareRAKESH RANA
The document examines reporting patterns of over 7,000 issue reports from five large open source software projects to evaluate when defects are reported and if there is a difference between reported and actual defect inflow. The results show there are distinct patterns in when defects are reported, with more in January-March and fewer in April-July. However, the ratio of reported to actual defects remains fairly stable over time. This stability enhances confidence in applying software reliability growth models that use reported defect data for predictions, which were found to have around 4.8% error on average compared to using actual bug data. The reporting patterns may also provide insight into when different contributors report issues.
Testing- Fundamentals of Testing-Mazenet solutionMazenetsolution
For Youtube Videos: bit.do/sevents
Why testing is necessary,Fundamental test process, Psychology of testing, Re-testing and regression testing,
Expected results,Prioritisation of tests
What slows down your mobile SDLC?
We analyzed the testing strategies from 350 enterprise app developers, testers and QA manager to find out what causes delays.
Learn how to accelerate the mobile app lifecycle from development to deployment and discover:
What factors slow down app testing
How these factors delay release cycles
Strategies to speed up app testing and delivery
Defect root cause analysis, Андрей ТитаренкоSigma Software
This document outlines a process for defect root cause analysis with the following goals:
1) Control defect costs and delivery through early defect elimination to save budget.
2) Contract with the team to map potential defects to root causes and SDLC phases.
3) Mine data to accurately define defect root causes and focus on solving real problems by adjusting team process and prevention strategies, then monitoring progress to ensure effectiveness.
An exploratory study of the state of practice of performance testing in Java-...corpaulbezemer
The document summarizes an exploratory study of performance testing practices in 111 Java-based open source projects. The study examined performance testing from five perspectives: developers involved, extent of testing, organization of tests, types of tests, and tools used. Key findings include that performance tests are done by a small group, suites are typically small, and there is no standard organization. The implications are that there is a lack of tools for performance testing, it is not a popular task, and developers want support for quick testing.
This document reports on a gap analysis of the MELCOR computer code for use in leak path factor (LPF) applications. The analysis evaluates MELCOR against Software Quality Assurance (SQA) criteria to determine improvements needed. Five of ten SQA requirements are met at an acceptable level, while remedial actions are recommended for the remaining five. A new MELCOR baseline is recommended for LPF use, along with upgrading documentation and providing training. The effort to qualify MELCOR for the DOE Safety Analysis Toolbox in LPF applications is estimated at two full-time years initially.
The document discusses the waterfall model of software development. It describes the waterfall model as a linear sequential approach where progress flows from one phase to the next like a waterfall. The key phases are requirement analysis, design, development, testing, deployment, and maintenance. Each phase has distinct requirements and activities. The waterfall model works well for smaller, well-defined projects but has disadvantages for complex projects where requirements may change.
As the products and organizations grow in terms of scale, developing the applications while keeping performance an integral part of the build process is important. The presentation covers about what is performance aware development, why it is the need of the hour and how we are doing it inside LinkedIn where we run hundreds of services having multiple deployments everyday while making sure the performance of the services is kept in check and probability of introducing performance regressions is kept to a minimum.
The document discusses topics related to software quality assurance and testing. It covers definitions of testing, types of testing activities like static and dynamic testing, different levels of testing from unit to system level. It also discusses test criteria, coverage, and agile testing approaches. The overall document provides an overview of key concepts in software quality assurance and testing.
The document discusses various types and stages of software testing in the software development lifecycle, including:
1. Component testing, the lowest level of testing done in isolation on individual software modules.
2. Integration testing in small increments to test communication between components and non-functional aspects.
3. System testing to test functional and non-functional requirements at the full system level, often done by an independent test group.
4. The document provides details on planning, techniques, and considerations for each type of testing in the software development and integration process.
The document outlines an 8-week quality management training program. The program covers topics such as quality management principles, testing types and processes, test specification, planning and reporting. It also includes hands-on training on test automation using Selenium and assignments on specific projects. Participants will learn testing skills and apply them to projects like Outage Analyzer, Bradford and Jills. In the final two weeks, trainees complete a special assignment and report on one of the projects.
When do software issues get reported in large open source software - Rakesh RanaIWSM Mensura
This document summarizes a study examining reporting patterns of issues and bugs in five large open source software projects. The study found:
1) While there are distinct variations in when defects are reported, the ratio of reported defects to actual defects remains fairly stable over time.
2) Using reported defect inflow data in software reliability growth models results in predicted asymptotes that on average deviate only 4.8% from models using actual defect data.
3) The reporting patterns provide insights into groups of people who contribute to open source projects, with more issues reported mid-week and fewer on weekends and Mondays.
The document discusses various topics related to software testing including:
1. It introduces different levels of testing in the software development lifecycle like component testing, integration testing, system testing and acceptance testing.
2. It discusses the importance of early test design and planning and its benefits like reducing costs and improving quality.
3. It provides examples of how not planning tests properly can increase costs due to bugs found late in the process, and outlines the typical costs involved in fixing bugs at different stages.
Case Study: Ecolab Transforms Infrastructure and Application Monitoring into ...CA Technologies
Ecolab implemented a DevOps strategy using CA technologies like Application Performance Management and Service Operations Insight to transform its infrastructure and application monitoring. This improved end-to-end monitoring of its 4,000+ servers and 2,100+ network devices across multiple data centers. The transition standardized monitoring, increased the percentage of monitored assets from 60% to 95%, and decreased critical incidents per month. The tools helped identify the root cause of a problem for a key application. Ecolab plans to further enhance application monitoring with additional CA technologies and expand cloud monitoring.
Software engineering quality assurance and testingBipul Roy Bpl
The presentation discusses software quality assurance and testing. It covers topics such as the importance of software quality, types of software quality (functional and non-functional), software testing principles and processes. The testing process involves test planning, analysis and design, implementation and execution, evaluating results, and closure activities. The presentation emphasizes that testing is a critical part of the software development process to improve quality and find defects.
The document provides an overview of manual and automated software testing concepts and Selenium. It covers topics such as the software development life cycle (SDLC), testing fundamentals, manual testing techniques, Selenium basics, and real-world examples for testing a jobs factory application using Selenium. The document is intended as a training manual to teach software testing using both manual and automated approaches.
DataScience Lab 2017_Обзор методов детекции лиц на изображениеGeeksLab Odessa
DataScience Lab, 13 мая 2017
Обзор методов детекции лиц на изображение
Юрий Пащенко ( Research Engineer, Ring Labs)
В данном докладе мы предлагаем обзор наиболее новых и популярных методов обнаружения лиц, таких как Viola-Jones, Faster-RCNN, MTCCN и прочих. Мы обсудим основные критерии оценки качества алгоритма а также базы, включая FDDB, WIDER, IJB-A.
Все материалы: http://datascience.in.ua/report2017
When a test manager receives a project to work with, he would like to comprehend the scope of the project, the test objectives such as project timeline, project resources and budget. The Test Manager then needs to think about the test strategy. Selecting an appropriate test strategy is crucial for his project success. There are several test strategies for the Test Manager to select such as analytical, model-based, methodical, process or standard-compliant, dynamic, consultative or directed, and regression-averse. One of the most common and important test strategy is the analytical one that includes risk-based and specification-based testing. Comprehending analytical strategy and its methodologies will help the test manager guide software testing activities to reach the right targets to fulfill the testing objectives. That will make the customers happy and accept his company products. Then he and his company will get paid and great compensation from the customers. From there, his company business will continue to expand and everybody will be happy.
The talk will bring ideas about the analytical strategy and how to run risk-based and specification-based testing activities. Definitely the talk will bring good value to software testing audiences especially test managers. Testers, developers, project managers and higher management can benefit from the talk in the way that they understand and facilitate software testing methodologies in software development life cycle.
This document discusses various types of software testing performed at different stages of the software development lifecycle. It describes component testing, integration testing, system testing, and acceptance testing. Component testing involves testing individual program units in isolation. Integration testing combines components and tests their interactions, starting small and building up. System testing evaluates the integrated system against functional and non-functional requirements. Acceptance testing confirms the system meets stakeholder needs.
Similar to Systematic Architecture Level Fault Diagnosis Using Statistical Techniques (20)
The Challenges of Taking Open Source Cloud Foundry to ProductionFabian Keller
This document discusses the challenges of taking the open source Cloud Foundry platform to production. It identifies challenges in four areas: technology, organizational processes, culture, and processes. For technology, challenges include learning BOSH YAML, tailoring the deployment, migrating foundations, building and distributing buildpacks, and automating upgrades. Organizational challenges include hiring platform engineers and upskilling existing staff. Cultural challenges involve adopting an agile mindset and dealing constructively with failures. Process challenges include managing risks, supporting the platform through the community, fixing bugs, and giving back to open source.
Cloud Foundry - A Platform for EveryoneFabian Keller
Cloud Foundry is a cloud application platform that makes it faster and easier to build, test, deploy and scale applications. The platform is recently being adopted by lots of companies to securely host thousands of applications while at the same time reducing operational burden for developers significantly. The platform is equipped with a marketplace to provide any backing service your application may need with the click of a button. In this talk, you'll get an understanding of what Cloud Foundry is, why it increases the productivity of development teams and see how to run and operate a microservice landscape in a live demo. Once you deployed your first app, you can hardly imagine deploying software differently!
Netflix dürfte für die meisten als Streaming-Dienstleister bekannt sein. Viele Entwickler erfreuen sich an den Open-Source Werkzeugen wie Eureka für Service-Discovery und Hystrix für Resilience. Dementsprechend gilt Netflix auch als Pionier rund um die Themen Microservices und Betrieb. Mit Hilfe von Spring Cloud Netflix ist es möglich durch wenige, einfache Annotationen die entsprechenden Komponenten von Netflix zu integrieren, konfigurieren und zu nutzen. Allerdings hat Netflix bereits die Weiterentwicklung an Eureka 2.0 und an Hystrix eingestellt. Im Zuge dieser Entscheidung wird Spring Cloud Netflix ebenfalls nicht mehr weiterentwickelt. In diesem Vortrag soll aufgezeigt werden, welche Alternativen Netflix selbst vorschlägt, um resiliente Cloud-Architekturen zu entwickeln. Es wird auf die Konzepte sowie Integration eingegangen und wie diese zu einer sinnvollen Architektur kombiniert werden können. Darüber hinaus soll dargestellt werden, welche Out-Of-The-Box Lösungen PaaS wie Cloud Foundry, verteilte Container-Umgebungen wie Kubernetes und Services Meshes bereitstellen, wie diese zu bewerten sind und wie sie genutzt werden können.
Blasting Through the Clouds - Automating Cloud Foundry with Concourse CIFabian Keller
Cloud Foundry has an extremly high release velocity with new versions being available multiple times every week for a usual deployment. It is important for operators to deploy these releases in a timely manner in order to keep up with security patches and feature improvements. Commonly, there is not only one Cloud Foundry deployment to be kept up to date, but rather a couple of different stages that need to be upgraded in a specific order, for example from a sandbox to a development to a production environment.
Automation is key to keep up to date with Cloud Foundry's release velocity and Concourse CI is the continuous thing-doer of choice to do this honorable task. In this talk we'll first get to know Concourse CI basics and then see how we can leverage Concourse to automate staged platform updates for Pivotal Cloud Foundry. With pcf-automation being sunsetted in favor of PCF Automation we'll have a look at how we can tailor upgrade pipelines to suit different needs all while keeping the thrust at high pace to blast through the clouds!
Talk given at the Cloud Foundry Meetup Stuttgart in May 2019.
Vorgetragen im Rahmen des Studienprojekets "High Performance Message-oriented Middleware" an der Universität Stuttgart im Sommersemester 2013.
Mehr Informationen:
http://www.fabian-keller.de/research/scalable-multicast-concepts
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Odoo ERP software
Odoo ERP software, a leading open-source software for Enterprise Resource Planning (ERP) and business management, has recently launched its latest version, Odoo 17 Community Edition. This update introduces a range of new features and enhancements designed to streamline business operations and support growth.
The Odoo Community serves as a cost-free edition within the Odoo suite of ERP systems. Tailored to accommodate the standard needs of business operations, it provides a robust platform suitable for organisations of different sizes and business sectors. Within the Odoo Community Edition, users can access a variety of essential features and services essential for managing day-to-day tasks efficiently.
This blog presents a detailed overview of the features available within the Odoo 17 Community edition, and the differences between Odoo 17 community and enterprise editions, aiming to equip you with the necessary information to make an informed decision about its suitability for your business.
SMS API Integration in Saudi Arabia| Best SMS API ServiceYara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
What is Master Data Management by PiLog Groupaymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
2. Estimated Costs 2012
as reported by Britton et al. [2013]
11.11.2014 STARDUST - Fabian Keller 2
3. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 3
4. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 4
5. Fault Diagnosis
what is the current practice?
Goal: Pinpoint single/multiple failure/s
Commonly used techniques:
• System.out.println()
• Symbolic Debugging
• Static Slicing / Dynamic Slicing
There is room for improvement!
11.11.2014 STARDUST - Fabian Keller 5
6. Automated Fault Diagnosis
is it possible?
B1 B2 B3 B4 B5 Error
Test1 1 0 0 0 0 0
Test2 1 1 0 0 0 0
Test3 1 1 1 1 1 0
Test4 1 1 1 1 1 0
Test5 1 1 1 1 1 1
Test6 1 1 1 0 1 0
11.11.2014 STARDUST - Fabian Keller 6
By intuition: A block is more suspicious, if:
- It is involved in failing test cases
- It is not involved in passing test cases
8. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 8
9. Commonly Used Data
and its limiting factors
11.11.2014 STARDUST - Fabian Keller 9
Software-artifact Infrastructure Repository
• Siemens set
• space program
Program Faulty versions LOC Test cases Description
print_tokens 7 478 4130 Lexical anayzer
print_tokens2 10 399 4115 Lexical analyzer
replace 32 512 5542 Pattern recognition
schedule 9 292 2650 Priority scheduler
schedule2 10 301 2710 Priority scheduler
tcas 41 141 1608 Altitude separation
tot_info 23 440 1052 Information measure
space 38 6218 13585 Array definition language
10. Performance Metrics
how can fault localization performance be evaluated?
• Wasted Effort (WE):
Ranking: L4, L3, L2, L7, L6, L1, L5, L9, L10, L8
Wasted Effort (prominent bug): 2 (or 20%)
• Proportion of Bugs Localized (PBL)
Percentage of bugs localized with WE < p%
• Hit@X
Number of bugs localized after inspecting X elements
11.11.2014 STARDUST - Fabian Keller 10
11. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 11
12. AspectJ – Lines of Code
nearly doubled in the examined timespan
11.11.2014 STARDUST - Fabian Keller 12
13. AspectJ – Commits
active development with mostly 50+ commits per month
11.11.2014 STARDUST - Fabian Keller 13
14. AspectJ – Bugs
nearly 2500 bugs reported in the examined time span
11.11.2014 STARDUST - Fabian Keller 14
15. AspectJ – Data
less than 40% of the investigated bugs are applicable for SBFL
AspectJ AJDT Sum
All bugs 1544 886 2430
Bugs in iBugs 285 65 350
Classified Bugs 99 11 110
Applicable Bugs 41 1 42
Involved Bugs 20 1 21
11.11.2014 STARDUST - Fabian Keller 15
What happened?
16. Bug 36234
workarounds cannot be used as evaluation oracle
11.11.2014 STARDUST - Fabian Keller 16
Bug report: „Getting an out of memory error when compiling with Ajc 1.1 RC1 […]”
Pre-Fix Post-Fix
17. Bug 61411
platform specific bugs are mostly not present in test suites
11.11.2014 STARDUST - Fabian Keller 17
Bug report: „[…] highlights a problem that I've seen using ajdoc.bat on Windows […]”
Pre-Fix Post-Fix
18. Bug 151182
synchronization bugs are mostly not present in test suites
11.11.2014 STARDUST - Fabian Keller 18
Bug report: „[…] recompiled the aspect using 1.5.2 and tried to run it […], but it fails
with a NullPointerException.[…]”
Pre-Fix Post-Fix
19. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 19
20. Research Questions
• RQ1: How does the program size influence fault localization
performance?
• RQ2: How many bugs can be found when examining a fixed
amount of ranked elements?
• RQ3: How does the program size influence suspiciousness
scores produced by different ranking metrics?
• RQ4: Are the fault localization performance metrics
currently used by the research community valid?
11.11.2014 STARDUST - Fabian Keller 20
21. RQ1: Program Size vs. SBFL Performance?
multiple ranked elements are mapped to the same suspiciousness
11.11.2014 STARDUST - Fabian Keller 21
23. RQ4: Are the Performance Metrics Valid?
on average, no bugs can be found in the first 100 lines
11.11.2014 STARDUST - Fabian Keller 23
24. RQ4: Are the Performance Metrics Valid?
with luck, 33% of all bugs can be found in the first 1000 lines
11.11.2014 STARDUST - Fabian Keller 24
25. Agenda
1. Automated Fault Diagnosis
2. State of the Art
3. Case Study: AspectJ
4. Evaluation
5. Conclusions
11.11.2014 STARDUST - Fabian Keller 25
26. Conclusions
there is still some work to be done
• Bugs need more context to be fully understood
• Current metrics cannot be applied to large projects
• SBFL is not feasible for large projects
• New metrics are starting point for future work
11.11.2014 STARDUST - Fabian Keller 26
27. Thank you for your attention!
Questions?
11.11.2014 STARDUST - Fabian Keller 27
28. RQ2: examining a fixed amount
inspect more than 100 files to find 50% of all bugs
11.11.2014 STARDUST - Fabian Keller 28
29. RQ3: Program Size vs. Suspiciousness
mean suspiciousness drops for larger programs
11.11.2014 STARDUST - Fabian Keller 29