A keynote delivered for the 3rd Workshop on
Validation, Analysis and Evolution of Software Tests
February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada.
http://vst2020.scch.at
Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under "in vivo" circumstances through the TESTOMAT project, a European project with 34 partners from 6 countries.
Ranjith Kumar Nagisetty (Android App and Post-Silicon Test Engineer) - Resume
This document contains a resume summary for Ranjith Kumar Nagisetty. It outlines his work experience testing mobile devices and chipsets over 4 years at Wipro and Qualcomm. It details his technical skills which include Java, Android tools, SQL, and scripting languages. It also lists his education as a B.Tech in Electrical and Electronics Engineering from Jawaharlal Nehru Technological University in 2010. Key projects involved system hardware and post-silicon validation testing, thermal and power profiling, and testing various mobile applications on Android platforms.
The document outlines Vincenzo Ferme's research on automating performance testing for continuous software development environments. It discusses the context of continuous development lifecycles and DevOps practices, and how performance testing is rarely applied in these processes. It then presents the state of the art in declarative performance engineering and the challenges of defining and executing performance tests. The document outlines the problem statement and research goals, which include how to specify performance tests and automate their execution in continuous software development lifecycles. The main contributions are summarized as developing an automation-oriented performance tests catalog, the BenchFlow declarative domain-specific language for specifying tests, and the BenchFlow model-driven framework for executing experiments.
The document summarizes the test automation tools and approaches used by the QA team at ZixCorp to improve testing efficiency and coverage. It describes how they used WinRunner for branded portal testing, Perl/Mechanize for manual "Sunday Sanity" testing, and eventually adopted Watir for additional test automation in Ruby. The three tools together allow increased test coverage, reduced testing time, and help meet customer quality and availability standards.
Patni has been supporting LSI in various areas including architecture design, tool development, firmware testing and enhancements, and setting up offshore development environments. Key projects included SATA command testing, MPI-2 interface testing, ATA passthrough testing, StoreLib IR testing, and Integrated RAID 1E and SimDiscovery tool testing. Patni delivered the projects on time with 100% test coverage, found and resolved multiple defects, and followed quality processes.
Release engineering involves managing the delivery of high quality software releases through processes like release planning, branch management, building, testing, and source code control. It aims to make releases predictable and of high quality by facilitating activities such as compiling code, verifying functionality, controlling branching/merging of codelines, and following best practices.
Enhancing Your Test Automation Scenario Coverage with Selenium - QA or the Hi... (Perfecto by Perforce)
This document discusses strategies for scaling test automation. It begins with an introduction of the speaker and an overview of continuous testing and the need for scale. It then covers cross-browser testing challenges and solutions like Selenium. The rest of the document discusses measuring success, tools for test automation, strategies for scaling like using the cloud, and best practices. It concludes with a live demo of scaling Selenium tests using Perfecto.
Test Automation - Past, Present and Future (Keizo Tatsumi)
The document discusses the history and future of test automation. It covers test automation from its beginnings in the 1950s-1960s, through its growth in the 1970s-1990s driven by the software crisis and emergence of new technologies. The document then discusses the present state of test automation, including tools for web, mobile, and cloud testing. Finally, it discusses potential future research areas like cloud/SaaS testing and the role of the test automator in planning and implementing automation strategies and frameworks.
Tomas Riha presented on the principles and benefits of continuous delivery. Continuous delivery aims to have applications always ready for release through a highly automated process of continuous integration, testing, and deployment. It emphasizes automating all parts of the release process, building quality in from the start, and having all team members share responsibility for releases. Frequent releases allow for faster feedback and reduce risks from large code changes. Continuous delivery helps enable test-driven development and improves the ability to verify features continuously.
DevOps is a software development approach that emphasizes collaboration between development and operations teams throughout the development lifecycle. Central to DevOps is continuous delivery, which involves frequent software releases through an automated testing pipeline. This pipeline incorporates various types of testing at different stages to catch issues early. Automated deployment is key to continuous delivery, allowing for more testing opportunities like automated functional and security testing. Implementing practices like continuous integration, unit testing, code coverage, mutation testing, static analysis, and automated deployment verification can improve software quality by enabling more testing and fearless refactoring.
Continuous Integration for Salesforce1 Platform (Techsophy Inc.)
AutoRABIT automates the process of building, testing, and deploying software on the Salesforce1 Platform. It includes powerful metadata management and automation tools, which can be used on their own or as part of a complete Continuous Integration & Deployment process.
This study analyzed the co-evolution of infrastructure and source code through an empirical study of 262 GitHub repositories. The key findings were:
1. Infrastructure files make up a significant portion of the codebase, almost as large as test and source files.
2. Infrastructure files change frequently, with a median of 0.28 changes per month, comparable to production code and higher than build/test files.
3. Changes to infrastructure files tend to be tightly coupled with changes to test and production files. The most common reasons for this were integration of new tests and updating global variables.
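The change-frequency metric behind findings like these (changes per month per file) can be sketched as follows. The commit data here is invented for illustration and is not taken from the study; a real analysis would extract it from `git log --name-only`.

```python
from collections import defaultdict

# Hypothetical commit log: (year-month, files touched in that commit).
commits = [
    ("2019-01", ["Vagrantfile", "src/app.py"]),
    ("2019-01", ["src/app.py", "test/test_app.py"]),
    ("2019-02", ["Vagrantfile", "puppet/site.pp"]),
    ("2019-03", ["src/app.py"]),
    ("2019-03", ["puppet/site.pp", "test/test_app.py"]),
]

def changes_per_month(commits, months):
    """Average number of changes per month for each file."""
    counts = defaultdict(int)  # file -> total changes
    for month, files in commits:
        for f in files:
            counts[f] += 1
    return {f: total / months for f, total in counts.items()}

rates = changes_per_month(commits, months=3)
print(rates["Vagrantfile"])  # 2 changes over 3 months, so roughly 0.67
```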
Software Quality Seminar with SonarQube - Leveraging SonarQube for Continuous Code Inspection (CURVC Corp)
The document discusses using SonarQube and SonarLint to enable continuous code inspection. It introduces SonarLint as a tool that provides on-the-fly issue detection and notifications to fix issues early. It then covers how SonarLint and SonarQube can be integrated to centrally manage code quality rules. The document also discusses how SonarQube can be used for code review by analyzing code changes and generating reports, and how it implements a quality gate to control code promotions. Finally, it mentions how SonarQube allows extensions through plugins and custom rule sets.
An overview of how Pivotal Labs performs quality assurance on mobile. We have over 1300 mobile devices on all platforms - iOS, Android, Windows, BlackBerry and more. We perform automated and manual testing using various tools and methodologies to ensure a bug-free app for our clients.
Personalized defect prediction models can more accurately predict buggy changes. The researchers propose two personalized approaches:
1) Personalized Change Classification (PCC) trains a separate model for each developer using their change history.
2) Confidence-based Hybrid PCC (PCC+) combines the predictions of the generic change classification (CC) model and the PCC model, selecting the one with the higher confidence.
The approaches were evaluated on six projects, finding up to 155 more bugs by inspecting only 20% of code locations compared to non-personalized models. PCC and PCC+ consistently outperformed the baseline across different settings, demonstrating the benefits of personalization.
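The confidence-based selection behind PCC+ can be sketched in a few lines. This is a minimal illustration of the idea, treating distance from the decision threshold as confidence; it is not the researchers' actual implementation, and the function name is invented.

```python
def hybrid_predict(p_cc, p_pcc, threshold=0.5):
    """Pick the model whose probability is more confident (farther from 0.5).

    p_cc, p_pcc: probabilities that a change is buggy, from the generic
    CC model and the personalized PCC model respectively.
    Returns (is_buggy, chosen_model).
    """
    conf_cc = abs(p_cc - threshold)
    conf_pcc = abs(p_pcc - threshold)
    if conf_pcc >= conf_cc:
        return p_pcc >= threshold, "PCC"
    return p_cc >= threshold, "CC"

# PCC is more confident here (0.9 vs 0.6), so its prediction wins.
print(hybrid_predict(0.6, 0.9))   # (True, 'PCC')
print(hybrid_predict(0.2, 0.55))  # (False, 'CC')
```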
How do you implement Continuous Delivery? Part 3: All about Pipelines (Thoughtworks)
This document discusses pipelines for continuous delivery. It describes how pipelines can incorporate progressive testing from unit tests to system integration tests. A typical pipeline includes stages for committing code, building, running unit tests, code analysis, and creating build artifacts. Deployment testing stages prepare environments, deploy artifacts, and run smoke and UI tests. Best practices are to keep everything in source control and replicate production. The document also discusses how to structure pipelines for multiple applications and federated systems.
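The staged, fail-fast shape of such a pipeline can be sketched as follows; the stage names are illustrative placeholders, not taken from the deck. Cheap stages run first so that most failures are caught before expensive deployment testing begins.

```python
def run_pipeline(stages):
    """Run stages in order; stop at the first failure (fail fast)."""
    for name, stage in stages:
        print(f"running: {name}")
        if not stage():
            print(f"pipeline failed at: {name}")
            return False
    print("pipeline green: artifact ready for deployment")
    return True

# Illustrative stages, cheapest feedback first.
stages = [
    ("commit build",      lambda: True),
    ("unit tests",        lambda: True),
    ("static analysis",   lambda: True),
    ("deploy to staging", lambda: True),
    ("smoke + UI tests",  lambda: True),
]
print(run_pipeline(stages))  # True
```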
How to Learn The History of Software Testing (Keizo Tatsumi)
The document provides a history of software testing covering several topics:
1. It discusses the prehistory of software testing, noting that Ada Lovelace is considered the first programmer and suggesting she may have also been the first tester while working on Charles Babbage's Analytical Engine in the 19th century.
2. It outlines the evolution of computers, software engineering, and the growth of software testing from the 1950s to the present day. Key periods included the debugging, demonstration, destruction, evaluation, and prevention-oriented periods.
3. It describes some of the early testing techniques developed in the 1960s-1970s, including the concept of test control processes at IBM, equivalence partitioning, and boundary value analysis.
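Equivalence partitioning and boundary value analysis, two of the techniques named above, can be sketched for a numeric input range. The helper names here are illustrative, not from the document.

```python
def boundary_values(lo, hi):
    """Classic boundary values for a valid range [lo, hi]:
    just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def partitions(lo, hi):
    """One representative per equivalence class: below, inside, above."""
    return {"invalid_low": lo - 10,
            "valid": (lo + hi) // 2,
            "invalid_high": hi + 10}

# For an input field accepting 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
print(partitions(1, 100))       # one test value per partition
```

The point of both techniques is economy: a handful of well-chosen values stands in for the whole input domain.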
This document discusses Test Driven Development (TDD), Continuous Integration (CI), and Continuous Delivery (CD) for mobile development. It defines TDD, CI, and CD and provides examples of implementing each for both Android and iOS development. For TDD, it demonstrates writing tests first and then code to pass the tests. For CI, it recommends automating the build and test process. And for CD, it suggests using services like TestFlight to automatically deliver new builds to testers.
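The test-first rhythm described above can be shown with Python's unittest. This is a generic red/green example, not taken from the deck; `slugify` is an invented function for illustration.

```python
import unittest

# Red: the test is written first, before the implementation exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: the simplest implementation that makes the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In a CI setup, the same test run would be triggered automatically on every commit rather than invoked by hand.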
Balance Change and Control of Continuous Delivery at Scale (Plutora)
In most large enterprises, tightly-coupled applications have dependencies across multiple release trains. Read this guide to learn what you can do to update your test environment management, redefine traditional roles, and remain relevant in the age of automation.
Meenakshi Pal has 9 years of experience in software QA with expertise in storage concepts, virtualization, networking protocols, scripting languages, and automation testing tools. She has a Bachelor's degree in Electronics and Communication and has led end-to-end testing projects for storage products and networking tools. Her skills include test automation, collaboration, issue tracking, and mentoring others. She has received several awards for her work in testing backup solutions, transition tools, and raising quality bugs.
Delivering Quality Software with Continuous Integration (Aspire Systems)
Learn about:
1. Best practices in a distributed environment
2. Potential challenges of not following CI
3. Tools and frameworks that help you implement CI better
The document discusses key concepts in software testing including software quality, software quality assurance (SQA), software quality control (SQC), and the V-Model. It describes the software development lifecycle including requirements gathering, design, coding, testing, and maintenance. It provides details on different types of testing like unit testing, integration testing, system testing, and reviews/inspections conducted at various stages. Key testing techniques mentioned are black box testing, white box testing, basis path testing, control structure testing, and mutation testing. The V-Model mapping development stages to corresponding testing stages is also explained.
The document outlines the evolution of software testing from 1956 to the present. It describes five distinct periods: Debugging (up to 1956), when testing was associated with debugging; Demonstration (1957-1978), focusing on proving that software satisfies its requirements; Destruction (1979-1982), aiming to find errors; Evaluation (1983-1987), providing product evaluation and quality measurement; and Prevention (1988-present), seeking to prevent faults in requirements, design, and implementation. Key events and concepts are listed for each period.
This document contains a summary of B Vinodh Kumar's work experience and qualifications. It outlines 3 projects he worked on as a Test Engineer at Microsoft and IP Infusion. The projects involved manual and automation testing of applications, operating systems, and networking protocols. It also lists his education qualifications and technical skills in areas like manual testing, automation testing, programming languages and tools.
This is a 90 min talk with some exercises and discussion that I gave at the DHS Agile Expo. It places DevOps as a series of feedback loops and emphasizes agile engineering practices being at the core.
Slide deck from my TCCC10 presentation. Please use the URLs and references for the source code and other technologies that are discussed but not covered.
Finding Bugs, Fixing Bugs, Preventing Bugs — Exploiting Automated Tests to In... (University of Antwerp)
(Keynote for the SHIFT 2020 and IWSF 2020 Workshops, October 2020)
Cloud continuous integration - A distributed approach using distinct services (André Agostinho)
In cloud computing, the ability to share and deliver services, scale computing resources, and distribute data storage and files requires a deployment process aligned with agility and scalability. Continuous integration can automate this process, reducing operational effort, improving code quality, and shortening time to market. This presentation proposes a distributed continuous integration approach that uses different cloud computing services, from planning to the execution of scenarios.
Between spending hours (or days!) making sure you can code and test locally and the difficulties of keeping remote environments up to date, sometimes we find ourselves falling back on "It works on my machine!". Getting rid of the difficulties in making new development environments and maintaining testing infrastructure is really key to banishing the dreaded phrase. In this session, we'll take you through some of the recent tools and techs that will not only make your life easier but will mean you never have to say "works on my machine" ever again.
Developing Software at Scale, CS 394, May 2011 (Todd Warren)
I gave a guest lecture in software engineering to Chris Riesbeck's CS394 class at Northwestern in spring 2011. See my related blog post at www.toddwinc.com/blog
Principles and Practices in Continuous Deployment at Etsy (Mike Brittain)
This document discusses principles and practices of continuous deployment at Etsy. It describes how Etsy moved from deploying code changes every 2-3 weeks with stressful release processes, to deploying over 30 times per day. The key principles that enabled this are innovating continuously, resolving scaling issues quickly, minimizing recovery time from failures, and prioritizing employee well-being over stressful releases. Automated testing, deployment to staging environments, dark launches, and extensive monitoring allow for frequent, low-risk deployments to production.
Curious about Continuous Integration? Tune in!
Continuous Integration (CI), which is a big part of continuous delivery, is the concept of continuously building and testing software using an automated process. We have learned that utilizing CI could help us catch bugs earlier, enable better visibility, reduce repetitive processes, enable the development team to produce deployable products at a moment's notice, and reduce risk overall.
These slides identify the various levels of continuous integration and delivery with regard to the release maturity of the development team or parent organization.
Control source code quality using the SonarQube platform (PVS-Studio)
The document discusses the SonarQube platform for continuous analysis and measurement of code quality. Some key features of SonarQube include supporting multiple programming languages, providing metrics on code quality issues like bugs, duplications, test coverage, and technical debt. It integrates with build systems and IDEs and allows customizing dashboards and quality profiles. The author implemented SonarQube for a customer to provide centralized monitoring of metrics for a large, long-term project.
This document discusses the software development lifecycle (SDLC) and DevOps. It provides an overview of the SDLC phases and Agile Scrum framework. It describes the need for DevOps by explaining problems that can occur when development and operations teams are separated. It proposes DevOps as a solution to automate software delivery and infrastructure changes through a cross-functional team and toolchain. The document outlines various tools used in a DevOps toolchain for version control, IDEs, project management, continuous integration, testing, security, collaboration and more. It concludes by discussing future plans to implement OpenStack, Docker and gain experience with Amazon Web Services.
2016 Quali Continuous Testing: Quest for Quality Conference — Quali
This document discusses continuous testing in DevOps. It defines DevOps and DevTestOps, and explains that continuous testing is the last mile of DevOps. Sandboxes are proposed as a way to automate testing environments for DevOps by creating production-like environments for development and testing. The document outlines how sandboxes can model infrastructure, applications, data, and services to enable consistent and automated testing throughout the development lifecycle.
Update on WebRTC standard and implementation status. Presented at Sydney's WebRTC meet-up on May 25, 2017. Find the companion blog post at webrtcbydralex.com
This slide deck explains a simple approach to conduct value stream mapping for DevOps value streams. Easy to use templates are provided. An example is included, which shows the dramatic effect that using containers and Kubernetes had on the value stream for a business application.
This document describes Cerberus, an open source test automation tool developed by La Redoute. Cerberus allows centralized management of test cases across multiple technologies like web, mobile, and APIs. It supports features like a step library, test automation, execution reporting, and integration with other tools. The document also provides examples of how Cerberus is used at La Redoute for regression testing websites in multiple languages and environments. It maintains over 3,500 regression tests that execute twice daily. Cerberus can also be used for functional monitoring of websites by regularly executing test cases and monitoring performance metrics.
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools — Amazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
This document discusses continuous integration for System z mainframe applications. It begins with an overview of DevOps and continuous integration concepts. It then discusses the IBM DevOps solution and challenges of applying DevOps to System z environments. The document focuses on how continuous integration can be implemented for System z to provide rapid feedback, automated testing in isolated environments, and higher quality code promoted between stages. It also discusses how continuous testing can be achieved through dependency virtualization to improve testing efficiency.
A Declarative Approach for Performance Tests Execution in Continuous Software... — Vincenzo Ferme
Software performance testing is an important activity to ensure quality in continuous software development environments. Current performance testing approaches are mostly based on scripting languages and framework where users implement, in a procedural way, the performance tests they want to issue to the system under test. However, existing solutions lack support for explicitly declaring the performance test goals and intents. Thus, while it is possible to express how to execute a performance test, its purpose and applicability context remain implicitly described. In this work, we propose a declarative domain specific language (DSL) for software performance testing and a model-driven framework that can be programmed using the mentioned language and drive the end-to-end process of executing performance tests. Users of the DSL and the framework can specify their performance intents by relying on a powerful goal-oriented language, where standard (e.g., load tests) and more advanced (e.g., stability boundary detection and configuration tests) performance tests can be specified starting from templates. The DSL and the framework have been designed to be integrated into a continuous software development process and validated through extensive use cases that illustrate the expressiveness of the goal-oriented language, and the powerful control it enables on the end-to-end performance test execution to determine how to reach the declared intent.
My talk from The 9th ACM/SPEC International Conference on Performance Engineering (ICPE 2018). Cite us: https://dl.acm.org/citation.cfm?id=3184417
The document introduces Open Virtual Platforms (OVP) as a solution for developing embedded software for multicore systems on chips (SoCs). OVP provides an open way to model virtual platforms using instruction-accurate software models that can run embedded software quickly for testing and development. It consists of APIs for modeling processors, peripherals and complete platforms, an open source library of models, and a reference simulator. OVP aims to establish common open standards for software virtual platforms.
This document discusses the Bottlenecks project which aims to identify system bottlenecks in OPNFV infrastructure through testing. It provides updates on Release B which implemented throughput testing and virtual switch testing. Release C planning includes refactoring deployment using Puppet for increased reusability, implementing multi-scenario testing for different configurations, and cooperating with other projects to integrate requirements and tools. The goal is to automatically detect bottlenecks during staging to prevent issues in production.
Similar to Keynote VST2020 (Workshop on Validation, Analysis and Evolution of Software Tests) (20)
Several experience reports illustrate that mutation testing is capable of supporting a “shift-left” testing strategy for software systems coded in textual programming languages like C++. For graphical modeling languages like Simulink, such experience reports are missing, primarily because of a lack of adequate tool support. In this paper, we extend MUT4SLX, a tool for automatic mutant generation and test execution of Simulink models based on block diagrams. The tool is extended to support mutation operators for Stateflow models, which, to the best of our knowledge, are not supported by any other tool. The current version of MUT4SLX has 8 operators that are modeled after realistic faults (mined from an industrial bug database) and are fast to inject (by only replacing parameter values). An experimental evaluation on four sample projects shows that MUT4SLX is capable of performing mutation analysis reasonably fast, although mutant execution remains the most time-consuming step.
The document summarizes two industrial experiences using AI for software engineering. It describes using an AI system called Ampyfier to automatically amplify test cases, which improved test coverage and mutation scores. It also discusses using AI to classify bug reports and predict factors like who should fix the bug and how long it may take. The presentation concludes by discussing using AI to assist with planning poker estimations and providing explainable AI for practitioner validation.
Software test amplification is the act of strengthening manually written test-cases to exercise the boundary conditions of the system under test. Several academic tool prototypes have been proposed by the research community so far — DSpot (for Java), AmPyfier (for Python) and Small-Amp (for Pharo-Smalltalk). Up until now, these tool prototypes have only been validated on a series of open source systems; concrete experience reports from actual use within the software industry are still lacking. In this presentation, we will share our experience with AmPyfier as applied within the context of Garvis, a start-up company from the University of Antwerp.
- The document discusses technical debt in startups and scaleups, noting that a fast-growing AI startup established in 2020 currently has a single variant for its vital AI engine.
- It presents on a technique called test amplification to help address technical debt by generating new test cases that improve test coverage and mutation score.
- An initial application of test amplification on a Python project found improvements for 13 of 25 files, raising the mutation score to 20.29% and code coverage to 41.39%. Continuous application led to higher improvements across more files over time.
The document discusses variant forks in open source software projects hosted on GitHub. It describes how variant forks allow related projects to co-exist while acknowledging a shared ancestry. The document reports on a survey of 105 maintainers of variant projects that identified common motivations for creating variants, such as diverging technical or governance needs, as well as impediments to co-evolution between variants and their mainlines. It also presents two research questions about the motivations for creating variants and how variants evolve relative to their mainlines.
Finding Bugs, Fixing Bugs, Preventing Bugs - Exploiting Automated Tests to In... — University of Antwerp
Presentation for BARCO and the EFFECTS Project
---Abstract---
With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect?
Slides used for the VST2022 workshop.
---Abstract---
Software test amplification is the act of strengthening manually written test-cases to exercise the boundary conditions of the system under test. It has been demonstrated by the research community to work for the programming language Java, relying on the static type system to safely transform the code under test. In dynamically typed languages, such type declarations are not available, and as a consequence test amplification has yet to find its way to programming languages like Smalltalk, Python, Ruby and Javascript. The AnSyMo research group has created two proof of concept tools for languages without a static type system: AmPyfier (for Python) and Small-Amp (for Pharo-Smalltalk). In this tool demonstration paper we explain how we relied on profiling libraries present in the respective eco-systems to infer the necessary type information for enabling full-blown test amplification.
Formal Verification of Developer Tests: a Research Agenda Inspired by Mutatio... — University of Antwerp
With the current emphasis on DevOps, automated software tests become a necessary ingredient for continuously evolving, high-quality software systems. This implies that the test code takes a significant portion of the complete code base — test to code ratios ranging from 3:1 to 2:1 are quite common.
We argue that "testware" provides interesting opportunities for formal verification, especially because the system under test may serve as an oracle to focus the analysis. As an example we describe five common problems (mainly from the subfield of mutation testing) and how formal verification may contribute.
We deduce a research agenda as an open invitation for fellow researchers to investigate the peculiarities of formally verifying testware.
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods — University of Antwerp
Fuzzing (or fuzz testing) is a technique to verify the robustness of a program-under-test. Valid input is replaced by random values with the goal of forcing the program-under-test into unresponsive states. In this position paper, we propose a white-box fuzzing approach by transforming (mutating) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can reproduce later. We provide anecdotal evidence that our approach towards fuzzing reveals crashing issues in the Pharo environment.
The document describes a survey conducted on test automation maturity between December 2018 and June 2019. The survey received 151 responses from 101 organizations across 25 countries. The survey aimed to understand the current state of practice regarding test automation processes and maturity. Key findings from the survey include: 1) organizations demonstrated varying levels of maturity depending on the practices adopted; 2) responses showed diverse situations regarding different practices, such as 85% having sufficient skills but 47% lacking guidelines; 3) some practices were strongly correlated or clustered; 4) factors like the percentage of automated tests and the use of agile/DevOps indicated higher maturity; 5) practitioner roles like QA engineers and consultants showed differing response variations. The survey provides insight into the present state of test automation and opportunities for improvement.
During the SANER 2018 Conference (in Campobasso, Italy) I chaired a discussion on double-blind reviewing. Here are the slides used to stimulate the discussion. It is based on a survey among the SANER reviewers to understand how double-blind reviewing is perceived in the field.
2. Books
• 2 books
• 3 proceedings (editor)
• Best Teacher's Award
• Top Publications
• Spin-off & Start-up
3.
4. TESTOMAT project partners:
Ericsson, Bombardier, Saab, System Verification, Empear, Verifyter, KTH, MDH, RISE;
Comiq, EfiCode, Ponsse, Siili, Qentinel, Symbio, Uni. Oulu, VTT;
Axini, Testwerk, TNO, Open Uni.;
AKKA, Expleo, EKS, FFT, Fraunhofer, IFAK, OFFIS, Parasoft;
Alerion, Prodevelop, Uni. Mondragon;
Kuveyt Bank, Saha BT
The TESTOMAT project will allow software teams to increase the development speed without sacrificing quality. To achieve this goal, the project will advance the state-of-the-art in test automation for software teams moving towards a more agile development process.
7. Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
8.
9.
10. Software Testing is the process of executing a program or system with the intent of finding errors.
(Myers, Glenford J., The Art of Software Testing. Wiley, 1979)
11. "Next Level" Test Automation (VST Keynote)
Context
• Continuous Integration → Continuous Delivery → Continuous Deployment → DevOps
• Tesla ships "over-the-air" updates ± once every month
• Amazon deploys to production ± every 11.6 seconds
• September 2015: Amazon Web Services suffered a major disruption. Netflix recovers quickly! (Chaos Monkey)
13. Integration Hell
Requirement Collection → Analysis → Design → Implementation → Testing → Deployment
Not smooth. Lots of waste.
14. Continuous Integration Pipeline
version control → build → developer tests → deploy → scenario tests → deploy to production → measure & validate
<<Breaking the Build>>
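The pipeline above can be sketched as a sequence of stages in which the first failing stage "breaks the build" and everything after it is skipped. The stage names mirror the slide; the runner itself is a hypothetical toy, not the API of any real CI tool.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy continuous-integration pipeline: stages run in declaration order
// and the first failing stage "breaks the build".
public class Pipeline {
    private final Map<String, Supplier<Boolean>> stages = new LinkedHashMap<>();

    public Pipeline stage(String name, Supplier<Boolean> body) {
        stages.put(name, body);
        return this;
    }

    /** Returns the name of the stage that broke the build, or "ok". */
    public String run() {
        for (Map.Entry<String, Supplier<Boolean>> s : stages.entrySet()) {
            if (!s.getValue().get()) return s.getKey();  // stop at first failure
        }
        return "ok";
    }

    public static void main(String[] args) {
        String result = new Pipeline()
            .stage("build", () -> true)
            .stage("developer tests", () -> false)  // a failing unit test
            .stage("scenario tests", () -> true)    // never reached
            .run();
        System.out.println("broken at: " + result);
    }
}
```

Running this prints `broken at: developer tests`, illustrating why fast-failing developer tests sit early in the pipeline.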
15. [Khom2014] Khomh, F., Adams, B., Dhaliwal, T. and Zou, Y. Understanding the Impact of Rapid Releases on Software Quality: The Case of Firefox. Empirical Software Engineering, Springer. http://link.springer.com/article/10.1007/s10664-014-9308-x
[Figure 1: timeline of major and minor Firefox versions under the traditional vs. the rapid release cycle. The release channels reach roughly 100,000 users for NIGHTLY, 1 million for AURORA, 10 million for BETA and 100+ million for a major Firefox version. NIGHTLY reaches Firefox developers and contributors; AURORA and BETA recruit external users (web developers interested in the latest standards, add-on developers, regular beta testers). When a developer fixes a bug, the approved patch is integrated into the source code of Firefox on the corresponding channel and migrated through the other channels for release; bugs that take too long to get fixed and hence miss a scheduled release are picked up by the next release's channel.]
16. [Khom2014] Khomh, F., Adams, B., Dhaliwal, T. and Zou, Y. Understanding the Impact of Rapid Releases on Software Quality: The Case of Firefox. Empirical Software Engineering, Springer. http://link.springer.com/article/10.1007/s10664-014-9308-x
✓ bugs are fixed faster (but … harder bugs propagated to later releases)
✓ the amount of pre- and post-release bugs is ± the same
✓ the program crashes earlier (perhaps due to recent features)
[Figure 2: development and release process of Mozilla Firefox. A version spends 6 weeks in each of the NIGHTLY, AURORA, BETA and MAIN channels. With the advent of shorter release cycles in March 2011, new features need to be tested and delivered to users faster; to achieve this, Firefox changed its development process and versions are no longer supported in parallel.]
17. Plan
• Context
• Test Strategy: When Should I Test?
  - V-Model
  - Fit tables (Acceptance Testing)
  - Flipping the V
  - 4 Quadrants
• Test Quality: How Good Are Your Tests?
• Test Analytics: Exploiting the DevOps pipeline
• Test Research: Tips for PhD Students
18. V-Model
Requirements ↔ Acceptance Tests
Architecture ↔ System Tests
Design ↔ Integration Tests
Coding ↔ Unit Tests
Test design (on the way down): acceptance, system, integration and unit test cases.
Test execution (on the way up): unit, integration, system and acceptance tests.
Integration hell?
19. Fit Tables
Example: Acceptance Test Cases (http://fit.c2.com)

Browse Music
| start | eg.music.browser |
| enter | library |
| check | total songs | 37 |

Browse Music
| enter | select | 1 |
| check | title | Akila |
| check | artist | Toure Kunda |
| enter | select | 2 |
| check | title | American Tango |
| check | artist | Weather Report |
| check | album | Mysterious Traveller |
| check | year | 1974 |

Play Music
| start | eg.music.Realtime |
| press | play |
| check | status | loading |
| pause | 2 |
| check | status | playing |
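Behind each Fit table row sits a fixture method: an "enter" row feeds a value into the system, a "check" row compares an actual value against the expected cell. A minimal sketch of that interpretation, with all class names and data invented for illustration (this is not the real Fit framework API):

```java
import java.util.List;
import java.util.Map;

// Hypothetical fixture backing the "Browse Music" table: select(i) plays
// the role of an "enter select i" row, check(field, expected) plays the
// role of a "check <field> <expected>" row.
public class MusicFixture {
    private final List<Map<String, String>> library = List.of(
        Map.of("title", "Akila", "artist", "Toure Kunda"),
        Map.of("title", "American Tango", "artist", "Weather Report"));
    private Map<String, String> selected;

    public void select(int index) { selected = library.get(index - 1); }

    /** A "check" row passes when the actual value matches the expected cell. */
    public boolean check(String field, String expected) {
        return expected.equals(selected.get(field));
    }

    public static void main(String[] args) {
        MusicFixture fixture = new MusicFixture();
        fixture.select(1);                                      // enter select 1
        System.out.println(fixture.check("title", "Akila"));    // check title Akila
        System.out.println(fixture.check("artist", "Toure Kunda"));
    }
}
```

A Fit runner does essentially this for every row, colouring the cell green or red according to the boolean result.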
20. Test Case Management
Smoke Test
21. Scrum — Feedback Loop
Product Backlog → Sprint Planning → Sprint Backlog → Sprint Execution (24h daily loop) → Working Increment of Product → Sprint Review → Sprint Retrospective
22. Behaviour Driven (User Stories)
Template:
As a <user role>
I want to <goal>
so that <benefit>.
Conditions of Satisfaction:
• …
• …
• …

Example:
As a clerk
I want to calculate stampage
so that goods get shipped fast.
Conditions of Satisfaction:
• Verify with nearby address
• Verify with overseas address
• Verify with parcels <= 1kg
• Verify with fragile parcel
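Each condition of satisfaction maps to an executable given/when/then check. In the sketch below the stampage tariff itself is invented purely for illustration (the story never specifies one); only the shape of the test matters.

```java
// BDD-style checks for the clerk's stampage story. The tariff rule
// (small-parcel fee plus overseas surcharge) is a hypothetical stand-in.
public class StampageTest {
    public static double calculateStampage(double weightKg, boolean overseas) {
        double base = weightKg <= 1.0 ? 2.0 : 5.0;  // assumed tariff
        return overseas ? base + 3.0 : base;        // assumed surcharge
    }

    public static void main(String[] args) {
        // Given a parcel of 0.8 kg, when shipped to a nearby address,
        // then the clerk gets the small-parcel tariff.
        assert calculateStampage(0.8, false) == 2.0 : "nearby, <= 1kg";

        // Given the same parcel, when shipped to an overseas address,
        // then the surcharge is added.
        assert calculateStampage(0.8, true) == 5.0 : "overseas address";

        System.out.println("conditions of satisfaction verified");
    }
}
```

Tools like Cucumber or JBehave bind such checks to the "As a / I want to / so that" text itself; here the binding is just comments.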
23. Flipping the V
Traditional effort distribution: 70% acceptance tests (GUI tests), 20% system and integration tests, 10% unit tests. Test automation flips this: 10% GUI tests, 20% system and integration tests, 70% unit tests.
24. Flipping the V in Practice
Acceptance Tests (GUI Tests) / System Tests / Integration Tests / Unit Tests
27. "Next Level" Test Automation (VST Keynote)
Plan
• Context
• Test Strategy: When Should I Test?
• Test Quality: How Good Are Your Tests?
- Coverage (Control flow, Statement, Branch, Path)
- Tests vs. Faults
* Reach, Infect, Propagate, Reveal
- Mutation Analysis
- Mutation Operators
- Case studies
* Scaling up: the cloud
• Test Analytics: Exploiting the DevOps pipeline.
• Test Research: Tips for PhD Students
28. How good are your tests?
[Figure: DevOps pipeline — version control → build → developer tests → deploy → scenario tests → deploy to production → measure & validate]
34. Code Coverage
The test drives the System Under Test (SUT):
• the code is instrumented;
• the test executes the code, so the instrumented code gets executed;
• the traces are stored:
  - when a statement is executed — CFG (Control Flow Graph)
  - when data is changed — DFG (Data Flow Graph)
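The instrumentation step can be sketched by hand: a probe call records each executed statement, approximating what a coverage tool weaves into the code automatically. This toy tracer is illustrative only, not a real instrumenter:

```java
import java.util.Set;
import java.util.TreeSet;

// Toy statement-coverage tracer: the probe() calls play the role of the
// instrumentation a real coverage tool would inject into the SUT.
public class CoverageDemo {
    static final Set<Integer> executed = new TreeSet<>();

    static void probe(int id) { executed.add(id); }

    static int abs(int v) {
        probe(1);                 // statement 1: entry
        if (v < 0) {
            probe(2);             // statement 2: negative branch
            return -v;
        }
        probe(3);                 // statement 3: non-negative branch
        return v;
    }

    public static void main(String[] args) {
        abs(5);
        // statement 2 was never executed: 2 of 3 statements covered
        System.out.println("covered statements: " + executed);
    }
}
```

A coverage report is then simply the set of recorded probe ids compared against the set of all probes in the program.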
35. An assertionless test:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestEmployeeDetails {
    EmpBusinessLogic empBusinessLogic = new EmpBusinessLogic();
    EmployeeDetails employee = new EmployeeDetails();

    // happy day scenario for calculation of appraisal and salary
    @Test
    public void testCalculateAppraisal() {
        employee.setName("Rajeev");
        employee.setAge(25);
        employee.setMonthlySalary(8000);
        double appraisal = empBusinessLogic.calculateAppraisal(employee);
        double salary = empBusinessLogic.calculateYearlySalary(employee);
        // the computed values are never checked: this test can never fail
    }
}
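A repaired version adds the missing assertions. The sketch below is hedged: the real `EmpBusinessLogic` is not shown in the talk, so the stub classes and the appraisal rule (assumed here to be 10% of the yearly salary) are invented to make the example self-contained:

```java
// Minimal stubs so the repaired test is self-contained; the appraisal rule
// (10% of yearly salary) is an invented assumption, not the slide's code.
public class RepairedEmployeeTest {
    static class EmployeeDetails {
        String name; int age; double monthlySalary;
    }
    static class EmpBusinessLogic {
        double calculateYearlySalary(EmployeeDetails e) { return 12 * e.monthlySalary; }
        double calculateAppraisal(EmployeeDetails e) { return 0.1 * calculateYearlySalary(e); }
    }

    public static void main(String[] args) {
        EmployeeDetails employee = new EmployeeDetails();
        employee.name = "Rajeev";
        employee.age = 25;
        employee.monthlySalary = 8000;
        EmpBusinessLogic logic = new EmpBusinessLogic();
        // the assertions the original test forgot:
        assert logic.calculateYearlySalary(employee) == 96000.0;
        assert logic.calculateAppraisal(employee) == 9600.0;
        System.out.println("assertions present and passing");
    }
}
```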
36. The RIPR Model
Reach — Infect — Propagate — Reveal
Test Oracle Strategies for Model-Based Testing. IEEE Transactions on Software Engineering 43(4), January 2016.
37. /**
 * Find last index of element
 * @param x array to search
 * @param y element to look for
 * @return last index of y in x, if absent -1
 * @throws NullPointerException if x is null
 */
public static int findLast(int[] x, int y)
{
    for (int i = x.length - 1; i > 0; i--)
        if (x[i] == y)
            return i;
    return -1;
}

(Slides 38-46 repeat this code while stepping through the test inputs:
 x = null; y = 5  —  x = [2,3,5]; y = 3  —  x = [2,3,5]; y = 25  —  x = [2,3,5]; y = 2)
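Running the four inputs from slides 37-46 shows why only the last one reveals the defect: because the loop bound is `i > 0` instead of `i >= 0`, index 0 is never inspected, so `findLast([2,3,5], 2)` wrongly returns -1.

```java
public class FindLastDemo {
    // the slide's code, verbatim except whitespace; the defect is the loop
    // bound: i > 0 skips index 0 (it should be i >= 0)
    public static int findLast(int[] x, int y) {
        for (int i = x.length - 1; i > 0; i--)
            if (x[i] == y)
                return i;
        return -1;
    }

    public static void main(String[] args) {
        try {                                   // x = null; y = 5
            findLast(null, 5);
        } catch (NullPointerException e) {
            System.out.println("NPE, exactly as the contract promises");
        }
        int[] x = {2, 3, 5};
        System.out.println(findLast(x, 3));     // prints 1  (correct)
        System.out.println(findLast(x, 25));    // prints -1 (correct)
        System.out.println(findLast(x, 2));     // prints -1 (WRONG: should be 0)
    }
}
```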
47. Mutation Testing
State of the Art vs. State of the Practice
[Chart: number of mutation testing publications per year]
48. Original:
int method(int v1, int v2)
{
    if (v1 < v2)
        return 1;
    return -1;
}

Mutant:
int method(int v1, int v2)
{
    if (v1 >= v2)
        return 1;
    return -1;
}
49. Mutation operators:

  Operator | Description                            | Before      | After
  ---------+----------------------------------------+-------------+-------------
  CBM      | Mutates the boundary conditions        | a > b       | a >= b
  IM       | Mutates increment operators            | a++         | a--
  INM      | Inverts negation operator              | -a          | a
  MM       | Mutates arithmetic & logical operators | a & b       | a | b
  NCM      | Negates a conditional operator         | a == b      | a != b
  RVM      | Mutates the return value of a function | return true | return false
  VMCM     | Removes a void method call             | voidCall(x) | –

Competent Programmer Hypothesis: the program is close to correct.
Coupling Effect: test suites capable of detecting simple errors will also detect complex errors.
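Mutation analysis in a nutshell: apply such operators to produce mutants, run the test suite against each mutant, and report the fraction killed. A self-contained toy sketch — the mutants are hand-written lambdas standing in for what a real tool (e.g. PIT or LittleDarwin) would generate by rewriting bytecode or source:

```java
import java.util.List;
import java.util.function.BiPredicate;

// Toy mutation analysis: the "program" is a comparator, and the mutants are
// hand-written variants produced by the operators in the table above.
public class MutationDemo {
    static final BiPredicate<Integer, Integer> original = (a, b) -> a < b;

    // the "test suite": checks the implementation-under-test on two inputs
    static boolean suitePasses(BiPredicate<Integer, Integer> impl) {
        return impl.test(1, 2) == true && impl.test(2, 1) == false;
    }

    public static double mutationScore() {
        List<BiPredicate<Integer, Integer>> mutants = List.of(
            (a, b) -> a <= b,   // CBM: boundary condition mutated
            (a, b) -> a >= b,   // NCM-style: condition negated
            (a, b) -> true      // RVM: return value mutated
        );
        long killed = mutants.stream().filter(m -> !suitePasses(m)).count();
        return (double) killed / mutants.size();
    }

    public static void main(String[] args) {
        // the suite never exercises a == b, so the CBM mutant survives:
        // score is 2 killed out of 3 mutants
        System.out.println("mutation score: " + mutationScore());
    }
}
```

The surviving boundary mutant is actionable feedback: it tells the tester exactly which test to add (one where `a == b`).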
50. Industrial Case Study
• 83K lines of code
• Complicated structure
• Lots of legacy code
• Lots of black-box tests
Ali Parsai, Serge Demeyer; “Comparing Mutation Coverage Against Branch Coverage in an Industrial Setting”.
Software Tools for Technology Transfer
52. Industrial Case
Unit tests only!
[Chart: mutation coverage vs. branch coverage per code segmentation, percentage scale 0-100]
53. CI: Develop → Build → Test — way too slow.
We witnessed 48 hours of mutation testing time on a test suite comprising 272 unit tests and 5,258 lines of test code, for a project with 48,873 lines of production code.
Sten Vercammen, Serge Demeyer, Markus Borg, and Sigrid Eldh; "Speeding up Mutation Testing via the Cloud: Lessons Learned for Further Optimisations". Proceedings ESEM 2018.
54. Master
1) Initial test build
2) ∀ files to mutate: queue file names
3a) Generate mutants   3b) Store mutants   3c) Queue mutant references
4a) Execute mutants    4b) Store results
5) Process results
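The master/worker scheme can be sketched with a shared queue: the master enqueues mutant references, workers dequeue and execute them in parallel, and the master processes the collected results. This is an in-process sketch with invented mutant names; the real setup distributes the workers over cloud nodes:

```java
import java.util.Map;
import java.util.concurrent.*;

// In-process sketch of the cloud setup: the master queues mutant references
// (step 3c), workers execute them (4a) and store results (4b), and the
// master processes the result map (5). Mutant names are placeholders.
public class MutantQueueDemo {
    public static Map<String, String> run() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Map<String, String> results = new ConcurrentHashMap<>();
        for (int i = 1; i <= 6; i++) queue.add("mutant-" + i);

        ExecutorService workers = Executors.newFixedThreadPool(3);
        for (int w = 0; w < 3; w++) {
            workers.submit(() -> {
                String mutant;
                while ((mutant = queue.poll()) != null) {
                    // stand-in for "run the test suite against this mutant"
                    results.put(mutant, mutant.hashCode() % 2 == 0 ? "killed" : "survived");
                }
            });
        }
        workers.shutdown();
        try {
            workers.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results;   // step 5: the master processes these results
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```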
57. Presence of defect + reach the defect + infect the program state + observable on output: coverage only vouches for the first steps of this chain, while a killed mutant must traverse it completely.
Mutation Testing = Actionable!
58. Mutation Testing @ Google
[Figure: the DevOps pipeline — version control → build → developer tests → deploy → scenario tests → deploy to production → measure & validate — with mutation findings surfaced during code review (TriCoder, CodeCritique)]
59. Mutation Testing @ Google
Reported benefits:
• stronger tests,
• more effective debugging,
• prevention of bugs,
• improved code quality.
Goran Petrović, Marko Ivanković, Bob Kurtz, Paul Ammann, René Just; "An Industrial Application of Mutation Testing: Lessons, Challenges, and Research Directions". Proceedings ICST.
60. Plan
• Test Strategy: When Should I Test?
• Test Quality: How Good Are Your Tests?
• Test Analytics: Exploiting the DevOps pipeline
- Test Automation Maturity Model (TAIM)
- (Spectrum Based) Fault Localisation
- Test Amplification
- Bug Reports
- Stack Overflow
• Test Research: Tips for PhD Students
63. Test Process Improvement for Automated Test Generation

• Henri Heiskanen, Mika Maunumaa, and Mika Katara. A test process improvement model for automated test generation. In Oscar Dieste, Andreas Jedlitschka, and Natalia Juristo, editors, Product-Focused Software Process Improvement: 13th International Conference, PROFES 2012, pages 17-31. Springer, 2012.
• Sigrid Eldh, Kenneth Andersson, Andreas Ermedahl, and Kristian Wiklund. Towards a test automation improvement model (TAIM). In Proceedings ICSTW '14 (IEEE International Conference on Software Testing, Verification, and Validation Workshops), pages 337-342. IEEE Computer Society, 2014.
• Ana Paula C. C. Furtado, Silvio R. L. Meira, and Marcos Wanderley Gomes. Towards a maturity model in software testing automation. In Proceedings ICSEA 2014 (Ninth International Conference on Software Engineering Advances), pages 282-285. IARIA, 2014.
• Sigrid Eldh. Test Automation Improvement Model — TAIM 2.0. In Proceedings NEXTA 2020 (Workshop on Next Level Test Automation).

TPI is an evaluation framework to evaluate the maturity of a software testing process. Heiskanen, Maunumaa and Katara modified the original TPI framework to evaluate test processes that are using Automated Test Generation methods and tools. (Tampere University of Technology, Department of Software Systems; draft version 1.01.51, 2010-05-06)
68. Test Amplification
Input generation + assertion generation — Genetic Algorithms Inside

testWithdraw
    | b |
    b := SmallBank new.
    b deposit: 100.
    self assert: b balance equals: 100.
    b withdraw: 30.
    self assert: b balance equals: 70

testWithdraw_12
    | b |
    b := SmallBank new.
    b deposit: 100.
    b withdraw: SmallInteger maxVal.
    self assert: b balance equals: 100

Benjamin Danglot, Oscar Vera-Pérez, Benoit Baudry, Martin Monperrus. Automatic Test Improvement with DSpot: a Study with Ten Mature Open-Source Projects. Empirical Software Engineering, Springer, 2019, pp. 1-35. doi:10.1007/s10664-019-09692-y.
Mehrdad Abdi, Henrique Rocha and Serge Demeyer. Test Amplification in the Pharo Smalltalk Ecosystem. Proceedings IWST 2019 (International Workshop on Smalltalk Technologies).
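A Java rendering of the same amplification makes the transformation explicit: the literal `30` is replaced by a boundary value, and a new assertion is generated from the observed state. The `SmallBank` below is a guess at the behaviour the slide implies (withdrawals beyond the balance are rejected), not the actual class from the paper:

```java
// Java analogue of the amplified Pharo test; SmallBank's overdraft rule is
// an assumption inferred from the amplified assertion, not the real class.
public class AmplifiedBankTest {
    static class SmallBank {
        private int balance = 0;
        void deposit(int amount) { balance += amount; }
        void withdraw(int amount) {
            if (amount <= balance) balance -= amount;  // overdraft rejected
        }
        int balance() { return balance; }
    }

    // the original, human-written test
    static void testWithdraw() {
        SmallBank b = new SmallBank();
        b.deposit(100);
        assert b.balance() == 100;
        b.withdraw(30);
        assert b.balance() == 70;
    }

    // amplified input: the literal 30 replaced by a boundary value, with a
    // newly generated assertion on the observed state
    static void testWithdraw_12() {
        SmallBank b = new SmallBank();
        b.deposit(100);
        b.withdraw(Integer.MAX_VALUE);
        assert b.balance() == 100;
    }

    public static void main(String[] args) {
        testWithdraw();
        testWithdraw_12();
        System.out.println("original and amplified tests pass");
    }
}
```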
69. [Figure: anatomy of a bug report — description (text mining), stack traces (link to source code), product/component (specific vocabulary) → suggestions? Artificial Intelligence Inside]
70. Bug report analytics:

  Question                                   | Cases                   | Precision                                 | Recall
  -------------------------------------------+-------------------------+-------------------------------------------+--------------------------------------------
  Who should fix this bug?                   | Eclipse, Firefox, gcc   | eclipse: 57%, firefox: 64%, gcc: 6%       | —
  How long will it take to fix this bug? (*) | JBoss                   | depends on the component: many similar reports — off by one hour; few similar reports — off by 7 hours
  What is the severity of this bug? (**)     | Mozilla, Eclipse, Gnome | mozilla, eclipse: 67%-73%; gnome: 75%-82% | mozilla, eclipse: 50%-75%; gnome: 68%-84%

(The "Who should fix this bug?" results are flagged on the slide as irrelevant for practitioners.)

Promising results, but …
• how much training is needed? (cross-project training?)
• how reliable is the data? (estimates, severity, assigned-to)
• does this generalise? (on industrial scale?)
⇒ replication is needed

(*) In CSMR 2012 Proceedings. (**) In CSMR 2011 and MSR 2010 Proceedings.
Artificial Intelligence Inside
73. Plan
• Test Strategy: When Should I Test?
• Test Quality: How Good Are Your Tests?
• Test Analytics: Exploiting the DevOps pipeline.
• Test Research: Tips for PhD Students
- Descriptive Statistics
- Unit Tests vs. Integration Tests
74. Descriptive Statistics
Semi-automatic Test Case Expansion for Mutation Testing — Zhong Xi Lu, Sten Vercammen, and Serge Demeyer (University of Antwerp, Belgium), p. 1
An Early Investigation of Unit Testing Practices of Component-Based Software Systems — Georg Buchgeher, Stefan Fischer, Michael Moser, and Josef Pichler (Software Competence Center Hagenberg, Austria; University of Applied Sciences Upper Austria, Austria), p. 12
75. Test Code Quality
At minimum, include some code coverage measurement plus assertion density.
76. Unit Tests vs. Integration Tests (1/2)
Further analysis confirmed that it is worthwhile to treat unit
tests and integration tests differently: we discovered that unit
tests cause more breaking builds, that fixing the defects
exposed by unit tests takes longer and implies more
coordination between team members.
77. Unit Tests vs. Integration Tests (2/2)
Hence, if we apply the IEEE definition to Java, a unit test is a test that tests only units from within one package (i.e., related units), whereas an integration test tests units from more than one package.