Online Quality Assurance Day 2020 #2
РАМЕЛЛА БАСЕНКО
«ROI of automation or how to sell your automation ideas to customers»
telegram: wwww.t.me/goqameetup
fb: www.fb.com/goqaevent
fb: www.fb.com/qaday.org
Site: www.qaday.org
Nesma autumn conference 2015 - A QFD based tool for managing agile requiremen... | Nesma
The document describes a methodology for estimating the cost of agile software development projects using Quality Function Deployment (QFD). It involves capturing user needs, prioritizing user stories, developing story cards to break stories into work items, estimating the effort required using function points, and tracking progress towards meeting business goals through multiple iterations of a quality deployment matrix. The methodology aims to provide estimates of what functionality can be delivered given a selected team size and number of sprints.
This document discusses key performance indicators (KPIs) for measuring agile projects. It begins by defining metrics and KPIs, noting that KPIs should be tied to strategic objectives and have defined targets. It then discusses characteristics of good KPIs and provides examples of both traditional and agile KPIs related to time, effort, scope, and quality. The document cautions that too many KPIs can be useless and advocates keeping metrics simple. It also discusses challenges like cheating on metrics and provides tips for using tools and dashboards to effectively measure agile performance.
The document discusses various metrics that can be used to measure performance in Agile software development such as velocity, burn down, defects, and quality metrics. It explains metrics like effort, schedule, cost, size, defects, and velocity that provide insight into productivity, predictability, and value. Key Agile principles of adaptive planning, value-driven prioritization, and continuous delivery are important to consider when selecting and using metrics.
This document discusses key performance indicators (KPIs) for software quality. It provides information on different types of KPIs including process, input, output, leading, and lagging KPIs. The document also outlines steps for creating KPIs such as defining objectives, identifying key result areas and tasks, and determining methods to measure results. Common mistakes in setting KPIs like having too many metrics or ones that do not change over time are also mentioned. Finally, the document recommends that KPIs should be clearly linked to strategy and provide answers to important questions.
6 Ways to Measure the ROI of Automated Testing | SmartBear
Interested in automated testing, but unsure whether it is worth the initial costs? Find 6 ways to measure the ROI of automated testing for your business in this presentation.
The document outlines an agile QA automation process, including:
- early QA involvement in acceptance criteria, with "3 amigos" or "4 amigos" meetings before implementation
- developing automation in parallel with code and integrating it with CI tools like Jenkins
- performing exploratory testing, recording findings, and communicating post-release to stakeholders
- demonstrating automation along with functionality, and breaking down automation tasks with descriptions
- performing sanity tests once live, creating production automation for the highest-priority features, and automating important bugs as lessons are learned
- keeping automation up to date, and considering architecture, guidelines, parallel execution, and reporting when developing automation
The document discusses creating a high-performing QA function through continuous integration, delivery, and testing. It recommends that QA be integrated into development teams, with automated testing, defect tracking, and ensuring features align with business needs. This would reduce defects and costs while improving customer experience through more frequent releases. Key steps outlined are implementing continuous integration and delivery pipelines, test-driven development, quality control gates, and measuring escaping defects to guide improvements.
We have some great new enhancements that were added to qTest on March 28th, 2016 that will help make your testing process much more efficient. We are also introducing a beta version of our new Atlassian HipChat add-on that will help your team improve collaboration while using qTest.
View the slides for the On-Demand webinar that aired on April 6th, to learn about all the new qTest features:
- Copy and paste across projects
- Data query enhancements
- Filtering tree structure
- New! HipChat Add-On
View the On-Demand Webinar here: http://pi.qasymphony.com/qtest-7.4-release-webinar-lp056
The increase in the use of Agile methodology to deliver large projects has changed the way the industry looks at testing. The skill set required for Agile testing is vastly different from the one the current crop of testers is used to.
Xavient proposed a solution, which involved automating the execution of monthly release testing. Xavient leveraged an onsite offshore model for this initiative with 80% of the test team located offshore. The tools being used were HP Test Director and WinRunner. As part of the “automation of monthly release testing”, Xavient’s test team took on the responsibility to create, maintain and run the automated test scripts in tandem with the manual test scripts.
This document discusses the myth of calculating return on investment (ROI) for test automation. It argues that ROI formulas that simply compare manual testing costs to automation costs are too complex and rely on too many assumptions to be meaningful. Instead, a financial options model that considers the costs of defects is proposed. Key points made include that the ROI of automation depends on factors like the cost of defects in an organization and that agile practices can reduce the needed investment in automation by taking an iterative approach. The document recommends measuring an organization's costs from non-quality before determining how much can be reasonably invested in test automation.
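The "simple" ROI formula the document critiques can be made concrete. Below is a minimal Python sketch of such a calculation; the cost model and all numbers are illustrative assumptions, and the point is how sensitive the result is to those assumptions, not how to compute a "true" ROI.

```python
# Hedged sketch: the kind of simple ROI formula the document argues against.
# Every input is an assumption; small changes swing the result wildly, which
# is exactly why such formulas are criticized as unreliable.

def naive_automation_roi(build_cost, maintain_cost_per_run, runs,
                         manual_cost_per_run):
    """ROI = (savings - investment) / investment."""
    investment = build_cost + maintain_cost_per_run * runs
    savings = manual_cost_per_run * runs
    return (savings - investment) / investment

# Example with made-up numbers:
roi = naive_automation_roi(build_cost=400, maintain_cost_per_run=1,
                           runs=200, manual_cost_per_run=5)
print(round(roi, 2))  # 0.67 with these assumptions; drop runs to 80 and ROI goes negative
```

Varying `runs` alone flips the sign of the result, which illustrates the document's argument that assumption-heavy ROI comparisons of manual versus automated testing are of limited value.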
Performance Testing in the Agile Lifecycle | Lee Barnes
Traditional large scale end-of-cycle performance tests served enterprises well in the waterfall era. However, as organizations transition to agile development models, many find their tried and true approach to performance testing—and their performance testing resources—becoming somewhat irrelevant. The strict requirements and lengthy durations just don’t fit in the context of an agile cycle. Additionally, investigating system performance at the end of the development effort misses out on the early stage feedback offered by an agile approach. And it’s more important than ever that today’s agile-built systems perform. So how can agile organizations ensure optimum performance of their business critical systems? Lee Barnes discusses why agile teams need to change their thinking about performance from a narrow focus on testing to a broader focus on analysis—from a people, process and technology perspective. Take back techniques for shifting your performance testing/analysis earlier in the development cycle and extracting performance data that is immediately actionable.
Deliver software with fewer defects by simply tracking the 12 key performance indicators included in this list. Matt Angerer, Senior Solution Architect at ResultsPositive, has curated a list of the most important KPIs for your testing and quality assurance professionals.
The document discusses quality assurance (QA) metrics in agile development. It begins by defining quality for both products and processes, noting that QA influence increases as development moves from requirements to validation. It then covers the types of metrics that can be used as a foundation for measuring product quality, including quantitative, qualitative, absolute, relative, and derivative metrics. Finally, it provides examples of QA metrics that can be used for daily monitoring of quality, as well as metrics that can be included in regular quality reports for sprints and releases.
Top Ten Secret Weapons For Agile Performance Testing | Andriy Melnyk
This document outlines top secret weapons for agile performance testing. It discusses making performance explicit, having performance testers work as part of development teams, driving performance tests with customer requirements, taking a disciplined scientific approach to analyzing test results, starting performance testing early in projects, automating performance test workflows, and getting frequent feedback to iteratively improve.
Getting to Done, Usably: User Experience Acceptance Criteria on Agile Projects | Joshua Ledwell
This document outlines an agile project to improve the user experience of a duct layout software over 3 months. It describes establishing acceptance criteria focused on completion time, usability surveys, and bug/debt levels. The team iteratively tested workflows and saw completion times drop from 15 to 10 minutes in the first month by adding snap fitting and shortcut tools. Further improvements like automatic dimensioning and tutorial videos reduced times to under 5 minutes by the final month. User experience testing and heuristics ensured the software became easier to learn and use.
The world of a software house is a constant search for a compromise between quality and cost. In many cases, cost-cutting starts with test automation. Then you start to talk about ROI, only to recognize that the numbers are not on your side. We were there, and what we found is that only a complete change in our approach allowed us to find common ground with our clients. I will reveal one detail from the presentation: we no longer talk about test automation with clients, and as a result we do more and more of it.
Are you surprised that success automatically generates new challenges, which we in turn translate into opportunities? We had to reconsider our approach to the test automation environment, our internal frameworks, and the way we share them between projects, including code ownership, … And again, one simple but unobvious solution allows us both to deliver what we promise and to earn more on our projects.
As we reshaped our approach to test automation, we had to change our way of delivering it too. One of the main decisions was to drop the role of test automation engineer (or software developer in test). We decided to go with a whole-team approach, which is consistent with the way we sell it.
Find it interesting? Join me and listen to the story of how we transformed test automation.
Turn by Turn: A Practical Guide To Test Management | Perforce
Looking to accelerate your testing strategy without missing a turn?
There’s no need to stop and ask for directions.
In this webinar we’ll show you how to solve the most common test delivery issues and close the gaps in your DevOps strategy using Helix ALM. You’ll see how easy it is to speed up test delivery by setting up coverage for both manual and automated testing teams.
Join Nico Krüger to see how Helix ALM can help you simplify test management.
You'll learn how to:
- Cover both shift-left and shift-right testing.
- Track testing progress, handle bugs, and manage re-testing efforts.
- Ensure complete test coverage for every part of your product.
This document summarizes an Agile Test Automation session which covered:
- The agenda included an introduction to Agile testing process and tools, a demonstration, and Q&A
- Agile values like communication and feedback affect testing by making the whole team responsible for quality using test-driven development and continuous integration
- Test automation tools discussed included test harnesses like JUnit and Selenium, as well as functional testing tools like Cucumber and Concordion
This document discusses test automation in agile projects. It begins with an overview of agile principles like the agile manifesto. It then discusses agile testing principles and practices like continuous integration and continuous delivery. The bulk of the document focuses on test automation, including why it's important, different types of test automation frameworks, and design considerations like the test automation pyramid. It provides tips for test automation including design patterns, abstraction layers, and evolving the framework over time.
Continuous integration (CI) is the practice of regularly merging code changes into a shared repository. With CI, developers merge their code changes daily, which helps prevent integration issues. Automated unit tests are run on the merged code to catch any errors or failures. If all tests pass, the changes are committed to the main codebase. As code is built on the CI server, releases are automatically deployed to test, staging, and production environments on a scheduled basis. The goals of CI are to frequently integrate changes, catch errors early, and automate testing and deployments.
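The CI flow summarized above can be sketched as a tiny pipeline runner: integrate, test, then deploy, stopping at the first failing stage so errors are caught early. The stage names and callables below are illustrative stand-ins for real merge, test, and deploy commands, not any CI server's actual API.

```python
# Minimal sketch of a CI flow: run stages in order and stop at the first
# failure, so broken changes never reach the deploy step.

def ci_pipeline(stages):
    """Run named stages in order; report the first failure, else deploy."""
    for name, step in stages:
        if not step():
            return f"failed: {name}"
    return "deployed"

# Toy stages standing in for real commands (git merge, unit tests, deploy):
stages = [
    ("merge", lambda: True),       # merge changes into the shared repository
    ("unit tests", lambda: True),  # run automated tests on the merged code
    ("deploy", lambda: True),      # promote the passing build to staging
]
print(ci_pipeline(stages))  # deployed
```

In a real CI server such as Jenkins, each stage would invoke shell commands, but the control flow (fail fast, deploy only on green) is the same.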
The document summarizes an email product summit discussing upcoming reliability, performance, and feature improvements to an email service. Key points discussed include:
1. The summit will cover email infrastructure upgrades around reliability and visibility, performance improvements from a microservices architecture, and new features.
2. Performance improvements include faster response times for smaller emails using serverless technologies, and 40-minute sends for large 500k emails, down from 7 hours.
3. New features include a 160x larger template size limit, full UTF-8 support, hourly broadcast counts, and removal of dynamic content and preview restrictions.
It Seemed a Good Idea at the Time: Intelligent Mistakes in Test Automation | TechWell
Some test automation ideas seem very sensible at first glance but contain pitfalls and problems that can and should be avoided. Dot Graham describes five of these “intelligent mistakes”:
1. Automated tests will find more bugs quicker. (Automation doesn’t find bugs, tests do.)
2. Spending a lot on a tool must guarantee great benefits. (Good automation does not come “out of the box” and is not automatic.)
3. Let’s automate all of our manual tests. (This may not give you better or faster testing, and you will miss out on some benefits.)
4. Tools are expensive, so we have to show a return on investment. (This is not only surprisingly difficult but may actually be harmful.)
5. Because they are called “testing tools,” they must be tools for testers to use. (Making testers become test automators may be damaging to both testing and automation.)
Join Dot for a rousing discussion of “intelligent mistakes”, so you can be smart enough to avoid them.
The document summarizes best practices for test automation, including:
- Unit tests should be automated as the first step and follow naming conventions.
- Integration and performance tests require grouping, isolation, and handling test data issues.
- UI tests can be automated with Selenium and integrated into the build pipeline.
- Automated test data, code, and plan generation may be useful once a project's structure stabilizes.
- Automation aims to provide transparency, improve skills, and reduce manual work over time through a smarter approach.
Test automation can provide significant benefits but also requires proper understanding and implementation. Some common myths include that automation is simple or that commercial tools are too expensive. In reality, automation requires software development skills and commercial tools have benefits and are relatively inexpensive compared to development costs. An effective test automation framework like keyword-driven testing can further improve returns by reducing maintenance costs and allowing testers without programming knowledge to automate tests. Case studies show frameworks can complete projects faster and with better results.
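The keyword-driven approach mentioned above can be illustrated with a toy interpreter: testers author tables of (keyword, arguments), and a small engine maps each keyword to an action. The keywords and the in-memory "application state" below are invented for illustration; real frameworks drive an actual application under test.

```python
# Hedged sketch of keyword-driven testing: non-programmers write test steps
# as a table, and a small engine interprets each keyword. All keywords and
# the dict-based "state" are illustrative, not a real framework's API.

ACTIONS = {
    "open":  lambda state, url: state.update(page=url),
    "type":  lambda state, field, text: state.update(**{field: text}),
    "check": lambda state, field, expected: state.get(field) == expected,
}

def run_test(steps):
    """Interpret a table of (keyword, *args) steps against a fresh state."""
    state = {}
    for keyword, *args in steps:
        result = ACTIONS[keyword](state, *args)
        if keyword == "check" and not result:
            return "FAIL"
    return "PASS"

# A test a tester could author as a plain table:
print(run_test([
    ("open", "login-page"),
    ("type", "user", "alice"),
    ("check", "user", "alice"),
]))  # PASS
```

The maintenance benefit claimed for such frameworks comes from this separation: when the application changes, only the action implementations are updated, while the keyword tables that testers own stay intact.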
Improving ROI with Scriptless Test Automation | Mindfire LLC
This is where scriptless test automation comes into the picture. Businesses today can use scriptless test automation to automate test cases without worrying about the complexities of coding. It reduces the time needed to learn and build test code, resulting in a shorter time to market, a greater return on investment, and increased coverage with little maintenance.
Relieving the Testing Bottleneck in Your Projects | cPrime + QASymphony
This document provides an overview of QASymphony and discusses how it can help relieve testing bottlenecks in projects. It addresses how testing can become a bottleneck if testers cannot test efficiently. It proposes moving testing earlier in the development cycle using a test-first approach. This allows code to be deployed sooner by removing testing as the bottleneck. It also discusses how QASymphony can help optimize both manual and automated testing.
Relieveing the Testing Bottle Neck - WebinarCprime
When shifting to Agile, testing is often a bottleneck in the process, as it is the last step in the cycle. But, the responsibility to remove the bottleneck is not on the tester alone.
Introduction to automated testing life cycle methodologyBugRaptors
Bugraptors ensures that the Automated Testing Life-Cycle Methodology represents a structured approach, which is executed in planned and a systematic manner. The Automated Testing Life-cycle Methodology (ATLM) is comprised of six primary processes or stages.
The document outlines a testing methodology that involves understanding requirements, creating workflow diagrams, developing test cases and checklists, and turning information into documented test results. It emphasizes clear communication with developers, ensuring pre-requisites are met, following schedules, and using technical skills and tools like QTP and Loadrunner. Fulfilling the role requires skills like detail orientation, willingness to learn, and suggesting process improvements. A typical workday involves understanding requirements, creating and executing test cases, reporting bugs, and attending meetings.
This document provides an overview of automation testing. It defines automation testing as automatically testing software using test scripts without human intervention. The document discusses how automation testing aims to expedite testing, increase coverage, and improve accuracy compared to manual testing. It provides an example of how automation could be used to test an e-commerce checkout process. Key differences between manual and automation testing are outlined, and types of automation testing as well as advantages and disadvantages of waterfall and V-model approaches are covered.
This prez talks about the automation benefits, usage of QTP and it's different kind of frameworks.
Also talks about the skills set required for QTP implementations.
This document discusses six ways to measure the return on investment (ROI) of automated testing and building a business case for test automation. It outlines common ROI pitfalls to avoid and key variables that define an organization's test scope. The six ways to measure ROI are: 1) automation of new test cases, 2) regression testing, 3) coverage across environments, 4) reduction of defect leakage, 5) test redundancy and reuse, and 6) reduction of knowledge leakage. A successful automation transformation requires focus on talent, test approach, and tools over a multi-year period according to a transformation map and initiative charters.
This document discusses best practices for developing an automated testing framework. It recommends using a hybrid keyword-driven and data-driven approach to reduce scripting efforts. Some key points covered include the benefits of automation like reduced costs and increased speed/accuracy over manual testing. It also discusses factors to consider when selecting an automation tool, common challenges, and provides an example case study showing the ROI achieved through automation. Best practices emphasized include loose coupling of framework components, reuse of generic libraries, and treating framework development as a distinct project.
M. Holovaty, Концепции автоматизированного тестированияAlex
The document discusses concepts related to automated testing, including:
1) Automated testing scripts are developed and updated in sync with the cyclic development process of the application under test.
2) Automated testing is effective when the time to create, update, and analyze scripts across iterations is less than the time for manual testing.
3) Effective logging, test result modeling, and failure analysis are important for reducing the time spent understanding failures in automated tests.
How To Transform the Manual Testing Process to Incorporate Test AutomationRanorex
Although most testing organizations have some automation, it's usually a subset of their overall testing efforts. Typically the processes have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process.
Learn how to transform your manual testing procedures and how to incorporate test automation into your overall testing process.
The document provides guidance on software testing reports. A test report should document the results of testing defined in the test plan and serve three objectives: define the testing scope, present results, and provide conclusions and recommendations. An example test summary report for a bus ticket booking application is provided covering testing scope, types of testing, metrics, environment, lessons learned, best practices, and exit criteria. The report aims to communicate testing results to stakeholders.
How to manage your testing automation project ttm methodologyRam Yonish
מנהלים רבים וארגונים רבים מיישמים אוטומציה בתהליך הבדיקות שלהם אבל עדיין מרגישים שההחזר על ההשקעה נמוך ואף שלילי. מחקרים רבים מראים כי הבעיה נובעת מחוסר תיאום ציפיות, זיהוי לא נכון של הבעיות שהכלים באים לפתור, בחירת כלי לא מתאים ותהליך הטמעה שגוי.
מתודולוגיית TMM (Testing tools management) באה לתת מענה בדיוק לבעיות שהוצגו. המתודולוגיה כוללת הגדרת השלבים השונים בפרויקט אוטומציה, החל מהגדרת הבעיה, דרך בחירת הכלי, בחינת הכלי, הטמעה ומדידת האפקטיביות שלו לכל אורך הפרויקט
7 Tips from Siemens Energy for Success with AutomationWorksoft
Nathan Sharp of Siemens Energy recently spoke at the SAP Project Management in Atlanta and shared 7 important elements for the successful adoption of automated business process validation in their organization.
Originally presented by Nathan Sharp of Siemens Energy at SAPinsider’s Project Management conference.
The document discusses agile testing approaches. It defines testing as executing software with test cases to find failures and demonstrate correct execution. It then discusses key aspects of agile testing including: running tests iteratively throughout development rather than just at the end; automating tests wherever practical; and having testers work collaboratively as part of development teams. It outlines success factors like focusing on delivering customer value and continually improving testing practices. The document advocates for automating a large portion of testing to provide rapid feedback and free up resources while balancing automation costs.
The document discusses key aspects of successful test automation including:
1. Applying a software development process to automation to improve reliability and maintainability.
2. Improving testing processes with robust manual testing and defect management before automating.
3. Clearly defining requirements for what to automate and goals of the automation effort.
2. About Speaker
Speaker: Ramella Basenko
Role: Lead QA Engineer at AgileEngine, ISTQB Certified Test Manager
7 years of experience in QA, 5 years of QA team management experience.
3. Agenda
1. Common QA/AQA processes overview
2. What the ROI of automation is, how to calculate it, and how to achieve a positive ROI for AQA
3. Key points that help sell your automation ideas to the customer
4. Example of a success story
5. Questions and answers
4. Common QA/AQA process overview
After just a few releases with comprehensive manual regression tests, it becomes obvious that this is a big bottleneck. The business demands frequent releases, but there just isn't the QA capacity to keep up. The team has two choices:
- slim down its testing scope so that regression tests can be completed more quickly (but this introduces a greater likelihood of regression bugs);
- release less frequently so as not to repeatedly incur the high cost of regression testing (but then users have to wait longer for new features, and time-to-market opportunities are missed).
5. Common QA/AQA Process Overview
Regression testing often becomes a pain point for Agile teams, as it presents two major challenges:
1. Increased time and effort. Regression testing may take up a lot of time, especially on large projects. Sometimes teams have to spend an entire sprint on regression, which is hardly acceptable. And increased time and effort mean increased costs.
2. Loss of test engineers' concentration. Running the same test cases again and again for a long time erodes concentration. This can let bugs sneak into production, which also increases overall project costs.
6. Common AQA process overview
In the testing processes of medium- and large-sized projects, there is a lot of repetitive regression work on each iteration of the development cycle. To reduce QA expenses, companies as a rule turn to test automation.
Differences between manual and automated tests:
● When each kind of testing can be applied
● Speed of testing
● Error rate
Benefits of test automation:
● Shorter time to market
● Lower error rate
● Less defect leakage to the UAT team or customers
8. ROI of automation and how to calculate it
Return on investment (ROI) is a financial ratio that illustrates business profit or loss margins.
Test Automation ROI = ((Savings - Investment) / Investment) * 100%
Investment = Automated Test Script Development Time
+ Automated Test Script Execution Time
+ Automated Test Script Analysis Time
+ Automated Test Maintenance Time
+ Manual Test Execution Time
Savings = Manual Test Execution or Analysis Time
* Total Number of Test Cases (Automated & Manual)
* Period of ROI
/ 8
**NOTE: Period of ROI is the number of weeks for which ROI is calculated. Dividing by 8 is applied wherever manual effort is involved (converting hours to 8-hour workdays); dividing by 18 is applied wherever the effort is automated (an automated suite can run unattended for roughly 18 hours a day).
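The formula above can be sketched in a few lines of Python. This is an illustrative interpretation, not code from the talk: the function name and sample figures are invented, and the /8 and /18 divisors are applied as the note describes (manual hours divided by 8, automated hours by 18).

```python
# Illustrative sketch of the slide's ROI formula. All names and sample
# numbers are hypothetical; /8 and /18 convert effort hours to days
# (8h manual workday, ~18h/day of unattended automated execution).

def automation_roi(dev_h, exec_h, analysis_h, maint_h, manual_exec_h,
                   manual_case_h, total_cases, roi_weeks):
    # Investment: automated effort divided by 18, manual effort by 8.
    investment = (dev_h + exec_h + analysis_h + maint_h) / 18 + manual_exec_h / 8
    # Savings: manual execution/analysis time across all cases over the period.
    savings = manual_case_h * total_cases * roi_weeks / 8
    return (savings - investment) / investment * 100

# Example: 400h to build scripts, light run/analysis/maintenance costs,
# 500 cases at 0.5h of manual effort each, over a 12-week ROI period.
roi = automation_roi(400, 10, 20, 40, 80, 0.5, 500, 12)
print(f"{roi:.0f}%")
```

Note how sensitive the result is to the ROI period: the same suite evaluated over a longer horizon yields a proportionally larger Savings term, which is exactly why the slide insists on stating the period explicitly.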
9. ROI of automation and how to calculate it
The cost function for manual testing is linear: it costs a fixed X for each run of the test. The cost of automated testing, on the other hand, is a step function: running it once costs more than running it manually (because you first have to write the test), but every subsequent run costs comparatively little. Overall return on investment is achieved at the point where the two lines intersect, so the more often you run the tests, the more you save.
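The intersection point can also be computed directly: a test pays for itself once its one-off automation cost has been amortized over the per-run saving. A minimal sketch with hypothetical numbers (the function and figures are my own, not from the talk):

```python
# Break-even point for automating one test: the run count at which the
# cumulative cost lines for manual and automated execution intersect.
# Assumes a fixed one-off automation cost and (near-)constant per-run
# costs; all numbers are illustrative.
import math

def break_even_runs(automation_cost_h, manual_run_h, automated_run_h=0.0):
    if manual_run_h <= automated_run_h:
        raise ValueError("automation never pays off per run")
    return math.ceil(automation_cost_h / (manual_run_h - automated_run_h))

# e.g. 6h to automate a test that takes 0.5h manually and ~0h automated:
# the cost lines cross at the 12th run.
print(break_even_runs(6, 0.5))
```

This also explains the slide's advice to run automated tests often: every run past the break-even count is pure saving.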
10. Key points to sell your automation ideas to the customer
● Always know what you are trying to accomplish with automation: define objectives for your automation effort, define metrics for those objectives, and define goals for each metric;
● Create a comparison document evaluating automation tools (include pros and cons of the candidate tools and approaches, recurring and non-recurring costs, risks, and opportunity costs);
● Prepare a demo POC of the selected tool (if there are people with the appropriate skills on the team) and show it to your customer;
● Perform a thorough ROI analysis and be able to show that the benefits of the tools exceed the costs (according to the release cycles on the project);
● Have a plan in place that manages the significant risks of the automation effort;
● Be ready to defend your point of view and answer any questions the customer may have (technical or process-related).
11. Example of a Success Story
● A custom framework was built and demoed to the customer with a short list of tests in February 2019 (in three weeks total);
● An additional AQA engineer was hired onto the team in March 2019;
● In parallel, the whole team, together with the QA lead, worked on a regression coverage document to build the necessary base for automation;
● In July 2019 we had the first release where we used our automation suite for regression (around 700 test cases that can be run against different browsers and databases), and regression time was reduced to 2.5-3 weeks;
● By February 2020 the whole regression suite was automated, plus new features implemented between releases (more than 1200 test cases run in 40 minutes for one browser and one DB source); regression time was reduced to 1.5-2 weeks;
● Now we can complete regression in a single week and use our automation suite within the Jenkins pipeline for any build we need.
Precondition: before we created the automation suite and organized the AQA process, our regression took 8-9 weeks of work (for a team of 6 people).
Our customer's goal: to have more frequent releases and reduce regression time at least twofold.
12. Example of a Success Story
Manual effort before automation started:
9 weeks => 9*5*8 = 360h per regression cycle for 1 QA person
6 persons => 2160h for one regression cycle
Time cost to set up the automation process (non-recurring costs):
2 AQA persons * 9 months = 9m*4w*5d*8h*2p = 2880h
If we measure by automated regression runs per release, and suppose we run the suite twice before each release with 3 releases a year, the ROI for 1 year is:
ROI = ((3*2*2160 - 2880) / 2880) * 100% = 350%
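The arithmetic above is easy to double-check in a short script (the figures are taken directly from the slide; only the variable names are my own):

```python
# Reproducing the success-story ROI figures from the slide.
manual_cycle_h = 9 * 5 * 8           # 360h: one 9-week regression cycle, 1 QA person
team_cycle_h = manual_cycle_h * 6    # 2160h for the team of 6

setup_h = 9 * 4 * 5 * 8 * 2          # 2880h: 2 AQA engineers for 9 months

runs_per_year = 3 * 2                # 3 releases a year, suite run twice per release
savings_h = runs_per_year * team_cycle_h   # 12960h of manual effort replaced

roi = (savings_h - setup_h) / setup_h * 100
print(f"{roi:.0f}%")                 # prints "350%"
```

In other words, the 2880h invested in building the suite was repaid roughly 4.5 times over by the manual effort it displaced in the first year alone.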