We face a challenging issue in test automation operation. Automated tests fail for both bug and non-bug reasons, and one of the main non-bug causes is a temporary, unstable error: when a test fails for this reason, simply re-running it usually makes it pass. This re-run work is tedious and a large burden on test automation operation. To eliminate this kind of issue and improve operation, we built a system that categorizes failures with machine learning and re-runs a test only when the predicted cause is a temporary error.
In this session, I will cover:
- The daily operational issues in test automation
- How we resolved them: storing large volumes of data, model training, system architecture, etc.
- The actual improvement results
20200630 Rakuten QA meetup #2 "Improve test automation operation", by Sadaaki Emura
Test automation brings benefits but also struggles, such as a large volume of failure reports. Currently, temporary unstable failures make up 73.4% of issues, and investigating them wastes time. The presenter proposes an auto-recover system that automatically re-runs tests predicted to have failed due to temporary issues. This could reduce wasted monthly operation time from 75 hours to around 25 hours by automating recovery of the most common failure type. The system would use OCR, screenshots, and previous data to predict failure reasons and automatically re-run tests, improving test automation efficiency.
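The flow described above — classify each failure, then re-run only when the predicted cause is temporary — can be sketched in a few lines. The real system reportedly uses machine learning over OCR output, screenshots, and historical data; this minimal Python sketch substitutes a keyword lookup for the classifier, and all names (`classify_failure`, `TEMPORARY_MARKERS`, the marker strings) are illustrative assumptions, not the presenter's actual implementation.

```python
# Hypothetical stand-in for the ML classifier: the real system reportedly
# uses OCR, screenshots, and historical data. Here we classify a failure
# message by keyword only, to illustrate the control flow.
TEMPORARY_MARKERS = ("timeout", "connection reset", "element not found")

def classify_failure(message: str) -> str:
    msg = message.lower()
    if any(marker in msg for marker in TEMPORARY_MARKERS):
        return "temporary"
    return "needs-investigation"

def run_with_auto_recover(test_fn, max_reruns: int = 2):
    """Run a test; re-run it only when the failure looks temporary."""
    attempts = 0
    while True:
        attempts += 1
        try:
            test_fn()
            return ("pass", attempts)
        except AssertionError as exc:
            reason = classify_failure(str(exc))
            if reason != "temporary" or attempts > max_reruns:
                return ("fail:" + reason, attempts)

# Simulated flaky test: fails once with a timeout, then passes.
state = {"calls": 0}
def flaky_test():
    state["calls"] += 1
    if state["calls"] == 1:
        raise AssertionError("page load timeout")

print(run_with_auto_recover(flaky_test))  # → ('pass', 2)
```

A genuine bug ("expected X, got Y") would be classified as `needs-investigation` and reported immediately rather than re-run, which is the point of the whole approach.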
This document provides an overview of test automation from the perspective of a test automation engineer. It discusses key topics like the test automation pyramid, reporting, design considerations, and deployment. The test automation pyramid emphasizes unit testing, integration testing, and end-to-end testing from the bottom to top. Reporting and metrics are important for understanding test results and efficiency. Design focuses on aspects like data-driven testing, robustness, and repeatability. Deployment involves piloting automation, maintaining scripts, and supporting evolving environments. The goal is to improve testing in areas like coverage, speed, and cost while maintaining quality.
This document discusses test automation, including the purpose of test automation, the test automation process, and the test automation pyramid. The key points are:
1. Test automation aims to improve test efficiency, provide wider test coverage, reduce costs, and speed up testing.
2. The test automation process involves defining the test scope, designing tests, coding tests, setting up the test environment, running tests, and maintaining automation over time.
3. The test automation pyramid illustrates that unit tests should form the base, as they are quick to write and run, while user interface tests are at the top as they are more complex and time-consuming.
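To make the base of the pyramid concrete, here is a minimal example of the kind of fast, isolated unit test the model says should be most numerous, using Python's built-in unittest. The `apply_discount` business rule is a hypothetical stand-in, not from any of the documents.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule (illustrative only): cap discounts at 50%."""
    percent = min(percent, 50.0)
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Base-of-the-pyramid tests: no browser, no network, milliseconds to run.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 20.0), 80.0)

    def test_discount_is_capped(self):
        self.assertEqual(apply_discount(100.0, 90.0), 50.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

A UI test of the same rule would need a running application, a browser driver, and stable selectors — which is exactly why the pyramid puts far fewer tests at that layer.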
This document provides information about an ISTQB Advanced Test Manager training course. The 4-day course will cover: (1) ISTQB Certified Tester Advanced Level certification and test processes; (2) test life cycles like V-Model and agile and activities like reviews and defect reporting; (3) team composition; and (4) defect management and test process improvement. The course is intended for test engineers, software engineers, testers, and quality assurance professionals seeking ISTQB Advanced Test Manager certification.
ISTQB Foundation and Selenium Java Automation Testing, by HiraQureshi22
This document provides an overview and summary of an ISTQB Foundation and Selenium Java Automation Testing course. The course covers ISTQB certification based professional training using the 2018 syllabus, as well as test automation using Selenium Java and .NET frameworks. It is designed to help students learn software testing skills and prepare for careers as test analysts or test automation engineers. Key topics include dynamic testing techniques, testing throughout the software development lifecycle, component testing, test management, and static testing. The course also provides hands-on training in test automation using Selenium WebDriver, building reusable automation components, cross-browser testing, and XSLT reporting.
The document discusses test management struggles and challenges in the software development life cycle (SDLC). It outlines three main challenges: 1) too much workload for reporting and manually linking test cases to incident tickets, 2) difficulty managing requirements and test cases and utilizing testing activities, and 3) difficulty completing automation tasks on time. It proposes solutions such as reducing reporting time, linking items automatically, improving test case management tools, and prioritizing automation.
The document outlines the QA process and responsibilities at Pearson. It discusses that QA is responsible for test case development and prioritization. It also describes the different types of testing conducted including functional, regression, exploratory, and automation testing. The document provides examples of test execution documentation and defect reporting guidelines. It discusses test environments, walkthrough procedures, and having QA buddies for expertise sharing.
5 Considerations When Adopting Automated Testing, by Bhupesh Dahal
Most organizations have realized the benefits of and need for test automation—but is your investment being wisely utilized? Are you unknowingly building a test automation suite that will end up costing more than your actual product? Are you building a legacy test automation framework that may be ready to retire before you reap the benefits?
This presentation will discuss five points of consideration that will help your organization answer these questions and mitigate risks that they bring into light.
The document discusses effective test automation practices in an agile environment. It outlines the benefits of automated testing such as early feedback, safety net for manual tests, and ability to run tests unattended. It presents success stories from companies like Google, Ebay, Facebook, and Amazon on their extensive use of automated testing. The document also covers test automation techniques like the test pyramid, WSO2's test automation framework, continuous integration, and continuous delivery. It emphasizes the importance of selecting the right tools, processes, and team for successful test automation.
Test Automation Architecture That Works, by Bhupesh Dahal (QA or the Highway)
The document discusses test automation architecture and provides recommendations for building an effective architecture. It recommends prioritizing unit testing and API/service layer testing over GUI testing to create a testing pyramid. Unit tests should be isolated and test small pieces of code, while API tests can test application logic through service calls. GUI tests should be limited in number and used to test broad end-to-end scenarios, not every small scenario. The goal is to have fewer, more stable automated tests rather than many fragile tests. Following best practices like testing different layers, prioritizing types of tests, and continuous refactoring can help create a maintainable and effective test automation architecture.
The document discusses various techniques for testing software, including their strengths and limitations. It begins by noting that while unit tests are useful for preventing regressions and ensuring something works, they don't provide much information when they pass and finding all possible test cases is impossible. Formal methods like regular expressions and finite state machines can help reduce the input space. Property based testing allows specifying properties that must always be true rather than specific test cases. The document advocates using a combination of techniques like typing, fuzzing, and formal methods alongside testing to provide more confidence in code correctness with fewer tests. The key is focusing on the goal of quality software rather than any single testing technique.
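The property-based idea mentioned above can be illustrated without any particular library. The sketch below is a minimal stdlib stand-in for tools like Hypothesis or QuickCheck: it generates random inputs and checks properties that must always hold (sorted output is ordered and is a permutation of the input), rather than asserting on fixed cases. All function names are illustrative.

```python
import random

def my_sort(xs):
    # Function under test; stand-in for a hand-written sort routine.
    return sorted(xs)

def check_property(prop, make_input, trials=200, seed=0):
    """Minimal property-based check: try `prop` on many random inputs.
    Returns a counterexample, or None if the property held every time."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = make_input(rng)
        if not prop(xs):
            return xs           # real tools also shrink the counterexample
    return None

def random_int_list(rng):
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]

# Properties that must hold for ANY input, not just hand-picked cases:
def is_ordered(xs):
    ys = my_sort(xs)
    return all(a <= b for a, b in zip(ys, ys[1:]))

def is_permutation(xs):
    return sorted(my_sort(xs)) == sorted(xs)

print(check_property(is_ordered, random_int_list))      # → None (property holds)
print(check_property(is_permutation, random_int_list))  # → None (property holds)
```

The payoff is exactly what the document claims: two properties cover an input space that would take an unbounded number of example-based cases to enumerate.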
QA Process Overview for Firefox OS 2014, by Anthony Chung
This document provides an overview of the QA process at Mozilla in 2014. It describes the roles that QA plays, including product and platform coverage, stability and performance testing, assisting with automation and maintenance, user support, and running internal beta testing programs. It also discusses metrics for test runs, acceptance criteria for releases, goals for stability and performance, and efforts to build a community around QA.
This document discusses using parallel_calabash to run automated tests in parallel to speed up test execution time. It describes how parallel_calabash works by grouping test features, spawning multiple processes across devices, and summarizing results. Running tests in parallel utilizes multiple CPU cores and significantly reduces test feedback time from over an hour to under 15 minutes, allowing for faster development cycles.
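The group–spawn–summarize shape described above can be sketched briefly. parallel_calabash itself spawns OS processes across devices; this hedged Python sketch uses a thread pool and simulated feature runtimes purely to show the structure — the feature names and timings are made up.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Simulated feature files with their (pretend) runtimes in seconds.
FEATURES = {"login": 0.05, "search": 0.04, "checkout": 0.06, "profile": 0.03}

def run_feature(name: str) -> tuple:
    time.sleep(FEATURES[name])      # stand-in for actually running the tests
    return (name, "passed")

def run_in_parallel(features, workers=4):
    # Each worker picks up the next feature group; results are merged at the end.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(run_feature, features))
    return results

start = time.perf_counter()
summary = run_in_parallel(FEATURES)
elapsed = time.perf_counter() - start
print(summary)
print(f"wall time ~{elapsed:.2f}s vs ~{sum(FEATURES.values()):.2f}s serial")
```

With four workers the wall time approaches the longest single feature rather than the sum of all of them, which is the same effect that takes the real suite from over an hour to under 15 minutes.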
This document provides a tutorial for using Ranorex to test the GUI of a SimpleCalculator application. It discusses:
1. Installing and setting up Ranorex, which allows simulating user interactions for GUI testing.
2. Adding a GUITester class to an existing SimpleCalculator project and referencing the Ranorex library.
3. Writing tests that use Ranorex methods to control the mouse, interact with calculator controls, and assert the result is correct.
4. Running the GUI tests using the NUnit test runner to automatically simulate user interactions and verify functionality.
End-to-End Test Automation for Both Horizontal and Vertical Scale, by Erdem YILDIRIM
Slides from my talk at Selenium Camp Test Automation Conference - 2017
https://seleniumcamp.com/talk/end-to-end-test-automation-for-both-horizontal-and-vertical-scale/
Test automation (TA) has become critical work for guaranteeing the quality of the system under test (SUT) by driving test and development effort effectively. To bring this efficiency to projects, companies are investing in TA with strong motivation. The question is how to design an automation strategy that handles complex TA projects effectively. One answer is automating test scenarios end to end (E2E). Vertical E2E TA consists of automating the test data preparation phase together with unit, integration, and UI tests. In horizontal E2E TA, the automated UI and integration test cases are designed as integrated real-user scenarios. I will talk about the prerequisites, principles, and key factors for E2E automated tests, and share hands-on experience from E2E test automation projects in which Selenium was the key tool.
Automation testing is crucial for large projects to achieve test coverage and speed. It is best suited when tests are repetitive, such as regression testing of unchanged parts of an application. Automation allows companies to execute repetitive and difficult tests faster to get quick feedback on new builds. However, automation requires significant investment and effort, so it is best to start with critical workflows that are stable and unlikely to change. Leveraging a crowd testing platform can help combat challenges in achieving full test coverage through a strategic combination of in-house and crowd-sourced testing.
Working in many companies as a consultant, delivery manager, or tech lead, I have seen the same mistakes made in the test automation process again and again; I could count the successful cases on the fingers of one hand. Sometimes people do not understand the true value of test automation; sometimes they simply could not organize the process, spending lots of money and time without any significant result. I want to share the top 5 mistakes aggregated from my practice, along with the solutions I recommend for each.
How To Transform the Manual Testing Process to Incorporate Test Automation, by Ranorex
Although most testing organizations have some automation, it's usually a subset of their overall testing efforts. Typically the processes have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process.
Learn how to transform your manual testing procedures and how to incorporate test automation into your overall testing process.
Testing in FrontEnd World, by Nikita Galkin (Sigma Software)
The document discusses different types of frontend testing including:
1. Linting - Used to enforce code style standards and best practices through static analysis.
2. Unit testing - Tests individual units/components of code without dependencies to validate business logic.
3. Component testing - Tests isolated React/Vue components through tools like Storybook for documentation and structural/interaction testing.
4. Visual testing - Tests UI using screenshots to catch visual regressions. Requires browsers.
5. End to end (E2E) testing - Focuses on user experience through full browser automation using tools like Protractor for Angular projects.
The document emphasizes writing tests that are fast.
This document provides an overview of Microsoft Test Manager (MTM) 2013 and how to use it for test planning, test case management, test runs, exploratory testing, and lab management. Key capabilities covered include creating test plans and test suites, managing manual and automated test cases, running tests and recording results, performing exploratory testing sessions, and setting up and using lab environments to collect diagnostic data during testing. The document demonstrates these capabilities through examples and screenshots.
Top 5 pitfalls of software test automation, by ekatechserv
Automating tests is important for detecting and fixing defects early in the development cycle, which can be 100 times cheaper than fixing bugs after release. While automation provides benefits like reduced costs, there are pitfalls to avoid: relying solely on automation for all testing needs, requiring extensive coding, producing false positives, and attempting to replace human testers. The key is to use automation to aid, not replace, testers in executing tests efficiently.
DaKiRY_BAQ2016_QADay_Marta Firlej "Microsoft Test Manager tool – how can we u..." (Dakiry)
Microsoft Test Manager is a tool that allows teams to plan, organize, track, and manage testing across their software lifecycle. It integrates with Team Foundation Server and Visual Studio to support test planning, creation of test cases and suites, manual and automated testing, defect tracking, and reporting. Some key capabilities include requirements tracing, status tracking, capturing diagnostic information during testing, and linking tests to automated test methods, builds, and virtual machine environments. It aims to support the entire testing process but requires other Microsoft tools and an initial configuration effort.
The Test Automation Pyramid is a useful model to help us understand and discuss automated testing efforts. Generally speaking it is a good practice to have lots of unit tests, fewer component integration tests, fewer API tests, and even fewer UI tests.
The Automation Firehose: Be Strategic and Tactical, by Thomas Haver (QA or the Highway)
The document discusses strategies for automating software testing. It emphasizes taking a risk-based approach to determine what to automate based on factors like frequency of use, complexity, and legal risk. The document provides recommendations for test automation best practices like treating automated test code like development code, using frameworks and tools to standardize coding practices, and prioritizing unit and integration testing over UI testing. It also discusses challenges that can arise with test automation like flaky tests, long test execution times, and keeping automation in sync with changing software. Metrics for measuring the effectiveness of test automation are presented, like test coverage, defect findings and trends, and time savings.
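The risk-based selection recommended above can be expressed as a simple weighted score over the named factors (frequency of use, complexity, legal risk). The weights, scales, and candidate flows below are illustrative assumptions, not values from the talk.

```python
# Hypothetical risk scoring for choosing what to automate first, following
# the talk's factors: frequency of use, complexity, and legal/compliance risk.
# Weights and the 1-5 scales are illustrative assumptions.
WEIGHTS = {"frequency": 0.5, "complexity": 0.3, "legal_risk": 0.2}

def automation_priority(factors: dict) -> float:
    """Weighted score on a 1-5 scale per factor; higher = automate sooner."""
    return round(sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 2)

candidates = {
    "login":         {"frequency": 5, "complexity": 2, "legal_risk": 4},
    "annual_report": {"frequency": 1, "complexity": 5, "legal_risk": 5},
    "tooltip_hover": {"frequency": 3, "complexity": 1, "legal_risk": 1},
}

ranked = sorted(candidates, key=lambda c: automation_priority(candidates[c]),
                reverse=True)
for name in ranked:
    print(name, automation_priority(candidates[name]))
# → login 3.9 / annual_report 3.0 / tooltip_hover 2.0
```

Even a crude score like this makes the prioritization discussion explicit: the heavily used, legally sensitive login flow is automated before the cosmetic tooltip, regardless of which is easier to script.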
Presentation from my talk at the monthly AgileVietnam meetup in May 2015, about how we are being Agile in both our working environment and our projects.
It is not easy to read because of my code-like presentation style, but I hope you can find some useful information :)
Don't hesitate to find and contact me at http://gurunh.com, as the slides may be hard to understand without Planday context.
The document discusses improving test automation operations. It notes that test automation brings benefits but also struggles like many failure reports. It proposes classifying failure reports into categories like bugs, script issues, or temporary unstable errors. Temporary unstable errors account for 73.4% of failures but require manually re-running tests. To address this, an automatic recovery system is proposed that would predict the cause of failures as temporary unstable, re-run the tests, and reduce waste from manual operations. The system could save 65% of operation time through automated recovery of transient errors.
Hopper's approach to QA is described in this case study. At Hopper, we believe that QA starts at the very beginning of the product life cycle; this helps reduce risk and deliver quality products. We combine all aspects of QA — blackbox testing, performance testing, load testing, regression testing, QA automation, etc. — and we also design QA systems where the existing frameworks may not work.
The document discusses effective test automation practices in an agile environment. It outlines the benefits of automated testing such as early feedback, safety net for manual tests, and ability to run tests unattended. It presents success stories from companies like Google, Ebay, Facebook, and Amazon on their extensive use of automated testing. The document also covers test automation techniques like the test pyramid, WSO2's test automation framework, continuous integration, and continuous delivery. It emphasizes the importance of selecting the right tools, processes, and team for successful test automation.
Test Automation Architecture That Works by Bhupesh DahalQA or the Highway
The document discusses test automation architecture and provides recommendations for building an effective architecture. It recommends prioritizing unit testing and API/service layer testing over GUI testing to create a testing pyramid. Unit tests should be isolated and test small pieces of code, while API tests can test application logic through service calls. GUI tests should be limited in number and used to test broad end-to-end scenarios, not every small scenario. The goal is to have fewer, more stable automated tests rather than many fragile tests. Following best practices like testing different layers, prioritizing types of tests, and continuous refactoring can help create a maintainable and effective test automation architecture.
The document discusses various techniques for testing software, including their strengths and limitations. It begins by noting that while unit tests are useful for preventing regressions and ensuring something works, they don't provide much information when they pass and finding all possible test cases is impossible. Formal methods like regular expressions and finite state machines can help reduce the input space. Property based testing allows specifying properties that must always be true rather than specific test cases. The document advocates using a combination of techniques like typing, fuzzing, and formal methods alongside testing to provide more confidence in code correctness with fewer tests. The key is focusing on the goal of quality software rather than any single testing technique.
QA Process Overview for Firefox OS 2014Anthony Chung
This document provides an overview of the QA process at Mozilla in 2014. It describes the roles that QA plays, including product and platform coverage, stability and performance testing, assisting with automation and maintenance, user support, and running internal beta testing programs. It also discusses metrics for test runs, acceptance criteria for releases, goals for stability and performance, and efforts to build a community around QA.
This document discusses using parallel_calabash to run automated tests in parallel to speed up test execution time. It describes how parallel_calabash works by grouping test features, spawning multiple processes across devices, and summarizing results. Running tests in parallel utilizes multiple CPU cores and significantly reduces test feedback time from over an hour to under 15 minutes, allowing for faster development cycles.
This document provides a tutorial for using Ranorex to test the GUI of a SimpleCalculator application. It discusses:
1. Installing and setting up Ranorex, which allows simulating user interactions for GUI testing.
2. Adding a GUITester class to an existing SimpleCalculator project and referencing the Ranorex library.
3. Writing tests that use Ranorex methods to control the mouse, interact with calculator controls, and assert the result is correct.
4. Running the GUI tests using the NUnit test runner to automatically simulate user interactions and verify functionality.
End-to-End Test Automation for Both Horizontal and Vertical ScaleErdem YILDIRIM
Slides from my talk at Selenium Camp Test Automation Conference - 2017
https://seleniumcamp.com/talk/end-to-end-test-automation-for-both-horizontal-and-vertical-scale/
Test automation (TA) activity has become a key critical work to guarantee the quality of system under test (SUT) by driving test and also development effort effectively. To bring this efficiency to projects, companies are investing on TA projects in a more motivated way. The question here is how we should design the automation strategy to handle complex TA projects together effectively. It can be done by automating test scenarios as E2E (end to end). Vertical E2E TA consists of; automating Test Data Preparation Phase and Unit, Integration and UI tests. For horizontal E2E TA; UI and Integration test cases, which are automated, designed as integrated real user scenarios. I will tell about the prerequisites, principles and key factors to have E2E automated tests. And also I will share hands on experienced E2E test automation projects that Selenium was the key tool.
Automation testing is crucial for large projects to achieve test coverage and speed. It is best suited when tests are repetitive, such as regression testing of unchanged parts of an application. Automation allows companies to execute repetitive and difficult tests faster to get quick feedback on new builds. However, automation requires significant investment and effort, so it is best to start with critical workflows that are stable and unlikely to change. Leveraging a crowd testing platform can help combat challenges in achieving full test coverage through a strategic combination of in-house and crowd-sourced testing.
Working in many companies as consultant, delivery manager or tech lead I have always seen the same mistakes made in test automation process. I could count successful cases on fingers of one hand. Sometimes people don’t understand the true value of test automation, sometimes just could not organize this process spending lots of money and time without any significant result. I want to share 5 top mistakes aggregated from whole my practice and solutions I recommend for them.
How To Transform the Manual Testing Process to Incorporate Test AutomationRanorex
Although most testing organizations have some automation, it's usually a subset of their overall testing efforts. Typically the processes have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process.
Learn how to transform your manual testing procedures and how to incorporate test automation into your overall testing process.
Testing in FrontEnd World by Nikita GalkinSigma Software
The document discusses different types of frontend testing including:
1. Linting - Used to enforce code style standards and best practices through static analysis.
2. Unit testing - Tests individual units/components of code without dependencies to validate business logic.
3. Component testing - Tests isolated React/Vue components through tools like Storybook for documentation and structural/interaction testing.
4. Visual testing - Tests UI using screenshots to catch visual regressions. Requires browsers.
5. End to end (E2E) testing - Focuses on user experience through full browser automation using tools like Protractor for Angular projects.
The document emphasizes writing tests that are fast,
This document provides an overview of Microsoft Test Manager (MTM) 2013 and how to use it for test planning, test case management, test runs, exploratory testing, and lab management. Key capabilities covered include creating test plans and test suites, managing manual and automated test cases, running tests and recording results, performing exploratory testing sessions, and setting up and using lab environments to collect diagnostic data during testing. The document demonstrates these capabilities through examples and screenshots.
Top 5 pitfalls of software test automatiionekatechserv
This document discusses improving test automation operations. Test automation brings benefits but also struggles, such as a flood of failure reports. Failure reports are classified into categories such as application bugs, script issues, environment issues, and temporary unstable errors. Temporary unstable errors account for 73.4% of failures but require manually re-running tests. To address this, an automatic recovery system is proposed that predicts the cause of a failure as temporary unstable, re-runs the test, and reduces the waste from manual operations. The system could save 65% of operation time through automated recovery of transient errors.
7. 7
Test automation responsibility
1. Test automation covers the regression scope
2. Update regression scripts and create new scripts during a project
3. Run regression tests in the test environment every day
8. 8
Test automation covers regression
(Diagram: the product is split into the project scope, covered by manual testing, and the regression area, covered by test automation.)
When a project starts:
• Manual testing focuses on the project scope
• Test automation covers all regression
Responsibility 1
9. 9
Update regression scripts and create new scripts during a project
Area ①: a specification change — modify the script to cover the latest specification
Area ②: a new function — create a new script and add it to the existing regression suite
(Diagram: the same product coverage picture as the previous slide, with areas ① and ② marked in the regression area.)
Responsibility 2
10. 10
Run regression tests in the test environment every day
All regression tests are scheduled to run every day
Objectives:
• Find bugs
• Find environment issues
• Find issues in release-operation rehearsals
• Find non-functional bugs
• …
Responsibility 3
11. 11
Test automation responsibility
These responsibilities involve many operations:
1. Test automation covers the regression scope
2. Update regression scripts and create new scripts during a project
3. Run regression tests in the test environment every day
13. 13
What is test automation operation?
1. Check the test automation results
2. Investigate the results of failed runs
3. Handle the test automation results
All regression tests are scheduled every day, so this operation runs daily.
Responsibility 3
14. 14
Check test automation results
• 20 Jenkins servers are active
• Around 2,000 tests exist
• Check the summary in the dashboard
• If a test failed, get its result from the dashboard
Operation 1
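As a rough illustration of this daily check, per-job Jenkins results can be rolled up into a dashboard-style summary. This is a minimal sketch, not the team's actual dashboard code; the record fields and job names are assumptions.

```python
from collections import Counter

def summarize(results):
    """Roll up per-job Jenkins results ("SUCCESS"/"FAILURE") into the
    kind of summary a dashboard shows, plus the failed jobs that need
    a closer look."""
    counts = Counter(r["result"] for r in results)
    failed = [r["job"] for r in results if r["result"] == "FAILURE"]
    return {"total": len(results), "passed": counts["SUCCESS"], "failed": failed}

# Example with hypothetical job names:
summary = summarize([
    {"job": "regression-login", "result": "SUCCESS"},
    {"job": "regression-cart", "result": "FAILURE"},
])
print(summary["failed"])  # → ['regression-cart']
```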
15. 15
Investigate why a test automation run failed
Investigate the failure reason from the test result and classify it into a failure reason category:
1. Application bug
2. Test automation script bug
3. Environment issue
4. Temporary unstable
Operation 2
16. 16
Handle test automation results
1. Application bug
2. Test automation script bug
3. Environment issue
4. Temporary unstable
After investigating the failure reason from the test result, fix the issue according to its category.
Operation 3
18. 18
Fix test automation script bug
Failure reason = “Test automation script bug”
1. Check the specification
2. Fix the bug myself
3. Confirm the test automation works correctly
Operation 3
19. 19
Fix the issue after the environment is ready
Failure reason = “Environment issue”
1. Stop the test automation during maintenance
2. Retry the test automation after maintenance finishes
3. Manage the scheduler
Operation 3
20. 20
Fix the issue by retrying the test automation
Failure reason = “Temporary unstable” (e.g. the website is busy)
1. Retry the test automation
Non-productive, and it takes much time!
Operation 3
21. 21
Test automation operation issue
Handling “temporary unstable” issues is not productive and takes much time
Test automation involves several such operations
23. 23
Investigate the issue in detail
Hypothesis: handling “temporary unstable” issues is not productive and takes much time.
Verify this fact and consider improvements.
24. 24
Verify the current issue trend
(Diagram: multiple Jenkins servers feed a test result management system, which feeds the dashboard.)
① Collect test results from Jenkins
② Classify each failure into a failure reason category
25. 25
Classify the failure reason category
When a test fails, investigate the reason using the test result, screenshots, and video, then classify it into a failure reason category:
〇 Application bug
〇 Test automation script bug
〇 Environment issue
〇 Temporary unstable
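In spirit, this manual classification matches error evidence against known patterns. A minimal, hypothetical sketch — the regex patterns below are illustrative, not the team's real rules, and the real work also used screenshots and video:

```python
import re

# Illustrative patterns only; the actual classification was done by hand
# (and later by the machine learning system described below).
PATTERNS = [
    (r"assertion.*expected", "Application bug"),
    (r"cannot (find|found).*path|no such element", "Test automation script bug"),
    (r"connection refused|host unreachable", "Environment issue"),
    (r"timeout|server is busy|503", "Temporary unstable"),
]

def classify(error_message):
    """Return the first category whose pattern matches the message."""
    text = error_message.lower()
    for pattern, category in PATTERNS:
        if re.search(pattern, text):
            return category
    return "unknown"

print(classify("Read timeout while loading page"))  # → Temporary unstable
```

Note how the slide's own example message, “[Error] Cannot found XXXXpath”, would land in the script-bug bucket under these toy rules.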
27. 27
Operation time needed for “temporary unstable” issues
Failure reason ratio (2020/1–2020/10, in percent):
• Application bug: 0.5
• Test automation script bug: 4.9
• Environment issue: 20.4
• Temporary unstable: 74.1
Operation to fix a “temporary unstable” issue:
1. Investigate the failed result
2. Run the test automation again (retry)
3. Check the result again
Example: 1 operation = 2 minutes, 300 operations per day, monthly operation time = 112 hours.
30. 30
Improvement idea: the “auto healing system”
(Diagram: Jenkins servers feed the auto healing system, which holds the training data.)
① Use the classified data as training data
② Predict the failure reason with machine learning
③ Retry the test when the prediction is a temporary unstable issue
31. 31
Auto healing system flow
Test result is failure → predict reason → temporary unstable? → Yes → retry
32. 32
Auto healing system flow
Step 1: a test executes and fails
The test result output consists of:
• a text error message from the test tool
• an error page screenshot
33. 33
Auto healing system flow
Step 2: predict the failure reason
Data used:
• the test result
• the training data (previously classified result data)
Technology used (machine learning):
• Tesseract-OCR
• a Deep Neural Network built with Keras
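Before a message reaches the neural network, it has to become a fixed-length vector. A minimal bag-of-words sketch of that preparation step — the deck does not show the real feature pipeline, and Keras itself is omitted here, so this is an assumption about one plausible encoding:

```python
def build_vocab(messages):
    """Assign an index to every token seen in the training messages."""
    vocab = {}
    for msg in messages:
        for token in msg.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def vectorize(msg, vocab):
    """Bag-of-words count vector: the kind of fixed-length input a
    dense network (e.g. in Keras) expects."""
    vec = [0] * len(vocab)
    for token in msg.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1
    return vec

vocab = build_vocab(["timeout error", "element not found"])
print(vectorize("timeout timeout", vocab))  # → [2, 0, 0, 0, 0]
```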
34. 34
Auto healing system flow
Prediction pipeline: input the test report → extract the message from the screenshot (OCR) → text-based prediction / image-based prediction → output the prediction (or “cannot predict reason”) → feed the result back into the training data.
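The routing through this pipeline can be sketched with pure-Python stand-ins. The real system used Tesseract-OCR for extraction and a Keras DNN for prediction; neither appears below, and the field names and stub models are illustrative assumptions.

```python
def predict_reason(report, ocr, text_model, image_model):
    """Route a failed test report through the prediction pipeline:
    try a text-based prediction on the error message (plus any text
    extracted from the screenshot), fall back to an image-based
    prediction, and report "cannot predict reason" if both decline."""
    text = report.get("error_message", "")
    if report.get("screenshot"):
        text += " " + ocr(report["screenshot"])      # stand-in for Tesseract-OCR
    reason = text_model(text)                        # stand-in for the text model
    if reason is None and report.get("screenshot"):
        reason = image_model(report["screenshot"])   # stand-in for the image model
    return reason or "cannot predict reason"

# Tiny stubs to show the flow:
result = predict_reason(
    {"error_message": "server busy", "screenshot": b"..."},
    ocr=lambda img: "",
    text_model=lambda t: "Temporary unstable" if "busy" in t else None,
    image_model=lambda img: None,
)
print(result)  # → Temporary unstable
```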
35. 35
Auto healing system flow
The pipeline’s inputs are the text error message and the screenshot.
36. 36
Auto healing system flow
Example: the message extracted from the screenshot reads “不正な操作が行われました。” (“An invalid operation was performed.”)
37. 37
Auto healing system flow
The extracted message “不正な操作が行われました。” is compared against the training data (which contains messages such as “[Error] Cannot found XXXXpath”), and its reason is predicted: “Temporary unstable”.
38. 38
Auto healing system flow
Likewise, the text error message is matched against the training data and its reason is predicted: “Temporary unstable”.
39. 39
Auto healing system flow
Feedback: the prediction result is added to the training data
40.
Auto healing system flow
Step 3: Check the prediction result and decide whether to retry
• When the predicted reason is “temporary unstable”, retry the test
• Otherwise, do nothing
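Step 3 reduces to one comparison plus a re-run trigger. A sketch, assuming a Jenkins server at a made-up URL; the `POST /job/<name>/build` endpoint is Jenkins' standard remote build trigger, with authentication omitted:

```python
# Step 3 as code: retry only when the predicted reason is
# "temporary unstable"; otherwise do nothing. The Jenkins URL and
# job name below are assumptions for illustration.
from urllib import request

def should_retry(predicted_reason: str) -> bool:
    return predicted_reason == "temporary unstable"

def handle_prediction(predicted_reason: str, job_name: str,
                      jenkins_url: str = "http://jenkins.example.com") -> bool:
    if not should_retry(predicted_reason):
        return False  # do nothing for the other three categories
    # Jenkins remote-build trigger (credentials/CSRF token omitted).
    req = request.Request(f"{jenkins_url}/job/{job_name}/build", method="POST")
    request.urlopen(req)
    return True

print(should_retry("temporary unstable"))  # retry case
print(should_retry("application bug"))     # do-nothing case
```

Keeping the decision in its own function makes it easy to extend later, e.g. capping the number of automatic retries per job.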
41.
Auto healing system brings …
Category → Operation
1. Application bug → file a bug report & wait for the fix
2. Test automation script bug → fix the script & retry
3. Environment issue → recover the environment & retry
4. Temporary unstable → retry (this operation is done automatically)
42.
Auto healing system brings …
1. The retry operation is eliminated, so its human cost is ZERO
2. Test results stay stable
3. Test environment resources are used effectively
43.
Auto healing system brings …
[Chart: monthly ratio of failed jobs fixed by auto healing vs. manual operation, Jan-20 through Jul-20]
Example:
• 1 operation = 2 mins
• Daily operations = 300
• Monthly operation = 112 hours
In July, 78% of failed jobs were healed automatically, a reduction of 88 hours of manual-operation human cost.
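Working through the July figures above: 78% of the 112 monthly operation hours comes to about 87–88 hours, matching the slide's reported reduction and leaving roughly 25 hours of manual work:

```python
# July figures from the slide, worked through.
monthly_operation_hours = 112   # total monthly manual operation time
auto_healed_ratio = 0.78        # share of failed jobs healed automatically

saved_hours = monthly_operation_hours * auto_healed_ratio
remaining_manual_hours = monthly_operation_hours - saved_hours
print(f"saved: ~{saved_hours:.1f} h, manual left: ~{remaining_manual_hours:.1f} h")
```

The slide rounds the saving up to 88 hours; the exact product is 87.36.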
45.
Conclusion
1. Some test automation operations are non-productive activities
2. These operations can be reduced with machine learning
3. There is still room to improve the auto healing system
Editor's Notes
Hello! Thank you for joining my presentation.
I’m Emura Sadaaki from Rakuten.
Today, I will give you a presentation about “Test Automation Improvement by Machine Learning”.
Before starting the presentation, let me introduce myself.
Now, back to the presentation.
Today’s presentation has two topics.
The 1st topic is “Test automation has daily operations, and those operations have an issue”.
The 2nd topic is “This issue can be resolved by a system using machine learning”.
Let’s get started.
Here is the agenda.
1st: our QA team’s background
2nd: what test automation operation is in our team, and its issue
3rd: investigating this operation issue in detail
4th: improving the operation to resolve this issue
Finally: conclusion
Let’s go to the 1st agenda item, “Background”.
This is our organization.
Our department has mainly two types of groups.
On the left side are the developer groups that create the products.
Each of these groups has a product manager and engineers.
There are 11 developer groups in our department.
On the right side is the QA group, which checks product quality at every application release.
This group has a manual test team and a test automation team.
I belong to the test automation team.
The QA group tests the services of all 11 developer groups.
The 11 services are shown here.
Let me explain the test automation team’s responsibilities within the QA group.
The test automation team has three responsibilities.
1st: test automation covers regression.
2nd: update regression scripts and add new scripts during projects.
3rd: run the regression test in the test environment every day.
I will explain each responsibility in detail, one by one.
The 1st responsibility is that test automation covers regression.
This image shows all the functions in the product.
The blue area is the regression area.
Test automation covers this blue area.
Let’s say a new project starts.
At that point, a red area appears.
This is the new specification, and it is the new project’s scope.
Manual testing focuses on this project scope, the red area.
Automated testing covers regression during the project, the blue area.
The 2nd responsibility is “update regression scripts and create new scripts during projects”.
As I said for the 1st responsibility, test automation focuses on the regression area.
Area No. 1 is regression whose specification was changed to satisfy the new specification.
So the existing test automation basically stops working well,
and to support the latest specification, we must modify the scripts.
On the other side,
Area No. 2 is a totally new function that test automation does not cover yet.
So to support this new function, we create new scripts and add them to the existing regression suite.
Finally, we expand the regression area.
The 3rd responsibility is the regression test in the test environment every day.
We do not test only during projects:
the entire test automation regression suite is scheduled every day, checking quality each time.
The main objective is finding bugs.
Other objectives are finding environment issues, release-operation rehearsal issues, non-functional bugs (非機能テスト, “non-functional testing”, in Japanese),
and so on.
As described above,
the test automation team’s responsibilities are:
- cover regression
- maintain scripts
- run the regression test every day
To fulfill these responsibilities, many operations come up!
In the 1st agenda item, I introduced our QA team organization and the test automation responsibilities.
In the next agenda item, I explain the test automation operations and the issue that arises during them.
The 3rd responsibility is that all regression tests are scheduled every day.
For this regression test, we have three operations:
1st, check the test automation results
2nd, investigate the test result if test automation failed
3rd, handle the test automation results
I will explain the three operations one by one.
1st operation: check the test automation results.
Our team checks every day whether all regression tests worked well or not.
Currently we have around 20 Jenkins servers.
On these Jenkins servers, around 2,000 regression tests exist and run every day.
To check the status of so many tests,
we have the dashboard in the right image, which provides a test status summary for each service,
e.g. how many tests succeeded and how many failed.
If a test failed, we get the test result from this dashboard.
The 2nd operation is investigating the test result when test automation failed.
We get the test result from the dashboard, like this image.
The report contains information such as:
where the test failed,
what test data was used,
a screenshot of the failed page,
etc.
From the test result, we investigate why the test failed.
The failure reason is classified into mainly four categories:
1st, application bug
2nd, test automation script bug
3rd, environment issue
4th, temporary unstable
The 3rd operation is handling the test automation results.
After investigating a failed test, we fix the issue so that the test succeeds.
Let me explain how to fix each type of issue, one by one.
1st type: “Application bug”.
In this case, test automation cannot work until the developers fix the application bug.
So,
1st, we report the bug to the engineers.
The engineers receive the bug report, fix the application bug, and then notify the QA team.
Finally, we run test automation again and check that the application bug is fixed.
2nd type: “Test automation script bug”.
In this case, it is an issue on our side.
So we check the specification again and fix the test automation script bug ourselves.
Finally, we run test automation again and check that it works well.
3rd type: “Environment issue”.
An example of this type is “maintenance mode”.
Developers set this mode to do some operation, and during the maintenance the test environment does not work.
So during the maintenance we stop test automation.
We communicate with the developers, learn when the maintenance is finished, retry test automation, and then check that it works well.
If we know the maintenance schedule in advance, we adjust the test automation scheduler.
Finally, the 4th type: “temporary unstable”.
In this situation, the test environment does not work for a short period.
For example:
network connection difficulties,
the application causing a timeout,
the web server being busy.
In this case, the fix is very simple: just retry test automation.
Although we spend much time investigating this type of issue, the final operation is only “RETRY”.
So this temporary unstable issue is very non-productive and takes much time.
To run all regression tests every day, test automation involves several operations.
Among these operations, the temporary unstable issue is not productive and takes much time.
The 3rd agenda item investigates this test automation issue in detail.
During test automation operation, the temporary unstable issue operation is not productive and takes much time.
But at that time, this “much time” was a hypothesis.
1st, we needed to verify this fact (that it takes much time).
If true, we would consider an improvement method to resolve it.
To verify this fact, we collected all test results and looked for issue trends.
So we built this system.
1st, we collect all test results from the 20 Jenkins servers and store this information in a database automatically.
We check these test results, then classify all of them by failure-reason category in a dashboard.
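The collection step described here can be sketched against Jenkins’ JSON API (`/job/<name>/lastBuild/api/json`), which exposes fields like `fullDisplayName`, `result`, and `timestamp`; the server URL and the row layout are my assumptions:

```python
# Sketch of the collection step: pull a build's status from the Jenkins
# JSON API and flatten it into a row for the results database. The
# HTTP fetch is separated from the parsing so the parsing can be
# demonstrated with a sample payload.
import json
from urllib import request

def fetch_last_build(jenkins_url: str, job: str) -> dict:
    """GET /job/<job>/lastBuild/api/json from one Jenkins server."""
    with request.urlopen(f"{jenkins_url}/job/{job}/lastBuild/api/json") as resp:
        return json.load(resp)

def to_db_row(build: dict) -> tuple:
    """Flatten the fields we store: display name, result, timestamp (ms)."""
    return (build["fullDisplayName"], build["result"], build["timestamp"])

# Sample payload in the shape Jenkins returns.
sample = {"fullDisplayName": "regression-service-a #42",
          "result": "FAILURE", "timestamp": 1593475200000}
print(to_db_row(sample))
```

A scheduled script would loop `fetch_last_build` over all ~20 servers and insert each `to_db_row` result into the database.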
This is the web tool for the classification operation.
We investigate why a test failed, then classify the failure-reason category,
choosing the failure reason from the four main categories.
This is the summary by failure-reason category.
This circle shows the reason ratio.
Red is “application bug”: 0.5%.
Blue is “test automation script bug”: about 5%.
Yellow is “environment issue”: about 20%.
Green is “temporary unstable”: three quarters!
This type of issue makes up most of the failures.
As I said, when a temporary unstable issue happens, the operation to fix it is these steps:
1st, investigate the failed result.
2nd, retry test automation.
Finally, check the result.
Let’s calculate the operation time.
For example,
one operation takes 2 minutes,
with 300 operations daily.
Under this condition, we spend about 100 hours per month.
It’s a huge operation cost!
The next agenda item is “improve the operation”.
By investigating the issue, we found that the “temporary unstable” issue type causes a huge operation cost.
This operation is also non-productive.
We decided to improve this operation to reduce the human cost.
This is the system that improves the operations,
called the “Auto healing system”.
1st, we use the data classified while investigating issue trends as training data (教師データ in Japanese).
2nd, this system predicts the failure reason by machine learning with this training data.
Finally, if the issue is a temporary unstable one, it retries test automation in Jenkins.
This is the flow chart.
1st step: a test executes and fails.
At this point we get the test results, mainly two things:
- the text error message
- the error page screenshot
2nd step: predict the failure reason.
For the prediction,
we use the test result and the training data.
This system uses machine learning technology, mainly two technologies:
- Tesseract-OCR
- a Deep Neural Network with the Keras library
The prediction step in detail is here.
As I said, we use the test results, the text error message and the screenshot, as system input.
We extract the text error message from the screenshot.
For example:
“Invalid operation” (不正な操作が行われました in Japanese) from the screenshot.
Using both text error messages,
the 1st prediction is done with the training data.
If the 1st prediction is difficult, the 2nd prediction starts: image-based prediction.
After prediction, the prediction result is fed back into the training data.
3rd step: check the prediction result and decide whether to retry test automation.
When the prediction is “temporary unstable”, retry test automation.
Otherwise, do nothing.
Auto healing performs the “Prediction” and the RED AREA, “Retry test automation”, automatically.
The auto healing system’s benefits are here.
1st benefit:
the retry operation is eliminated, so its human cost is ZERO.
2nd benefit:
auto healing runs every time after test automation runs.
Some failed tests are fixed quickly and automatically,
so the test results stay stable.
3rd benefit:
the retry operation is done only when the failed test might actually be fixed.
That’s why we can make good use of the test environment resources.
This is the actual output.
In this graph,
100% is all failed test automation runs.
RED is fixed manually.
BLUE is fixed by auto healing; this fix operation needs zero human operation.
Let’s pick the July case.
In that month, the operation took about 112 hours.
78% was fixed by auto healing.
So about 88 hours of human cost was reduced!
It’s a huge outcome.
Conclusion.
I explained the test automation team’s responsibilities.
To fulfill these responsibilities, we have many test automation operations, and some of them are non-productive activities.
I introduced the Auto healing system with machine learning.
It can reduce these operations.
We can reduce the non-productive cost, but the auto healing system is not perfect yet.
For example, the prediction accuracy is not high, and the feedback is a manual operation.
That’s why there is still room to improve the auto healing system.
So we want to improve this system more and more.
That’s it from me.
Thank you for attending my presentation!