The presentation will cover the following items:
• Release readiness criteria as part of a master test plan: their types and details (~30 min)
• Applying release readiness criteria and their areas of influence (~5 min)
• Quality reports and quality metrics, and their connection to release readiness criteria (~15 min)
• Concluding release readiness (~1 min)
• Questions (~10 min)
Real case studies of QA management in big teams (60-100 people): how to set up robust QA processes and approaches, the main impediments and problems, and how to solve them. SAFe.
In this presentation we summarize and share the QA estimation approach that was developed and successfully applied on different projects at the Testing Center of Excellence at Ciklum. We will consider the factors and baseline that should be taken into account when starting the estimation process, present a QA estimation approach for the main and additional activities, compose an estimation guide for regression testing, and show how to adjust QA estimates with risk/assumption multipliers.
'An Evolution Into Specification By Example' by Adam Knight (TEST Huddle)
For the last four years my colleagues at RainStor and I have been evolving a process for testing a structured data archiving system in an Agile development environment. In this talk I will discuss the evolution of a team from a rudimentary Agile implementation on an unreleased product to our current process, which uses the fundamental elements of Specification By Example to successfully deliver software functionality across 30 different platform/backend configurations to a series of high-profile and demanding customers. Last year our company was used as a case study of successful implementation in Gojko Adzic's book on Specification By Example.
My report will discuss the lessons learned during the early implementation and the challenges faced in moving away from a compressed waterfall approach. Through a process of incremental change we have identified and tackled the fundamental issues that undermined the development effort as a team. I'll describe some of the mistakes made in attempting to introduce a more formal process of requirements documentation into an Agile implementation and the benefits we uncovered on moving to a more flexible, user-story-based approach. I'll also discuss some of the issues around trying to implement user stories in a server system with no GUI and very technical, performance-based requirements.
Raising the importance of quality and the status of testing, both within the development team and the organisation as a whole, has allowed the challenges facing the team to be recognised and respected. The result has been a more collaborative approach between developers and testers, both through “collaborative specification” of user stories and in tackling the problems that impact the delivery of value to customers. I also plan to discuss how we’ve expanded from documenting acceptance criteria for each user story to documenting Criteria, Assumptions and Risks for each feature and how, rather than taking a ‘Done/Not Done’ approach, we rate our confidence in each of these categories to measure the confidence we have in each new feature being implemented.
Having the test team involved and influential throughout the entire development process has also allowed us to implement a number of features that make the product more testable. I will discuss the benefits of having development understand and prioritise testability issues, with some illustrative examples.
I will discuss the challenges and benefits of developing our own metadata-driven test harnesses as opposed to an off-the-shelf solution. I’ll detail how having control over these harnesses has allowed us to work towards a self-documenting test system that uses realistic customer examples as “Automated Specifications” of the RainStor system, allowing us to explain current behaviour to Product Management in terms of well-understood customer scenarios.
QA Fest 2017. Vladimir Primakov. QA metrics: a look at quality from different sides… (QAFest)
What product quality and development-process quality are, and how to measure them. Which metrics guarantee product quality, and which matter for deciding whether the product is ready for release. Quality trends and their value in understanding improvements in product quality and the development process. Absolute and relative metrics. Tools.
The 3 Pillars Approach to Agile Testing Strategy with Bob Galen & Mary Thorn (TEST Huddle)
Far too often, agile adoptions focus just on the development teams, agile frameworks, or technical practices as part of their adoption strategies. And then there’s the near-perpetual focus on tooling or developing test automation without striking a balanced approach. Often the testing activity and the testing teams are “left behind” in agile strategy development or, worse yet, simply “along for the ride”. That is not an effective transformation strategy.
Join experienced agile coaches Bob Galen and Mary Thorn as they share the Three Pillars framework for establishing a balanced strategic plan for effective quality and testing. The Three Pillars focus on development and test automation, testing practices, and collaboration activities that will ensure you have a balanced approach to agile testing. Specifically, risk-based testing, exploratory testing, paired collaboration around agile requirements, agile test design, and TDD/BDD/functional testing automation will be explored as tactics within a balanced Three Pillars framework. You will leave with the tools to immediately initiate or retool a much more effective and balanced agile testing strategy.
From Chaos to Test Automation, Using a Backend Example (COMAQA.BY)
Using a real example, this talk walks through a classic, difficult organizational and technical situation. You have become part of a "new" project team. There is little or no documentation. Automated tests are used as a "magic talisman". Testing processes are absent as such. "Who is to blame and what to do": how to keep your bearings, sort out the assigned tasks and, moreover, transform the project into a comfortable and technically interesting one. Using a real example, we discuss how to win the customer's trust, update or establish processes, and "sell" effective test automation. We will show the shortcomings of the initial approach and discuss in detail the technical aspects of the proposed solution: the architecture and implementation details of a backend testing framework.
Practical Test Strategy Using Heuristics (TEST Huddle)
Key Takeaways
- See what makes a good test strategy
- Learn how to make a thorough test strategy
- Identify what the ‘Heuristic Test Strategy Model’ is
- Develop a solid test strategy that fits, fast
- Discover how diversification can help you to create a test strategy
dbg Agile Testing Presentation, demonstrating the use of Test Charters, Exploratory Testing, Session-Based Testing and Testing Tours. With thanks to James Bach, Lisa Crispin, Janet Gregory and James Whittaker.
Dietmar Strasser - Traditional QA meets Agile Development (TEST Huddle)
EuroSTAR Software Testing Conference 2008 presentation on Traditional QA meets Agile Development by Dietmar Strasser. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Manage Testing by Dependencies—Not Activities (TechWell)
Traditionally, test management has focused on two areas—test planning and test execution. Test planning creates the test strategy and prepares test cases. Test execution focuses on who is responsible for and assigned to executing the respective test cases and logging defects. These views, however, are not inclusive of everything a tester does. For example, the work of team members must be coordinated, environments made ready, and test data prepared. For this reason, Jim Trentadue says the testing effort should be managed by dependencies—not activities. Jim shares logical models for managing the activities your testers are—and should be—doing to support testing efforts. Learn how to create and manage the relationships between common testing deliverables, such as test cases with dependencies on test data or defects affected by build and environment management work. By focusing on testing activities’ dependencies and relationships, you will be able to better manage your testing efforts across the various testing phases.
Combinatorial Black-Box Testing with Classification Trees (TechWell)
A basic problem in software testing is often choosing a subset from the near-infinite number of possible test cases. Consider the challenges of testing multiple browsers, multiple mobile devices, mobile applications, or use case paths. Testers must select test cases to design, create, and then execute to obtain sufficient coverage, all while managing the time it takes to test relative to risks. Even though test resources are limited, you still want to select the best possible set of tests. Peter Kruse shares his experiences designing test cases with TESTONA, the most popular tool for systematic test design of classification tree-based tests. Peter shows how to integrate expected test outcomes and how to obtain executable test scripts directly from the test specification or user stories. If you are looking to jumpstart your systematic test design and want to avoid unnecessary tests and overhead, this session is for you!
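TESTONA itself works from classification trees; purely as an illustration of why combinatorial selection shrinks a suite, here is a minimal greedy all-pairs sketch in Python. The factors, levels, and greedy strategy are assumptions for the example, not TESTONA's algorithm.

```python
# Illustrative sketch (not TESTONA): greedy pairwise ("all-pairs") selection.
# The factors and levels below are made up for the example.
from itertools import combinations, product

factors = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["free", "premium"],
}
names = list(factors)

def pairs_of(case):
    """All factor-level pairs covered by one test case."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

# Every pair of levels that must be covered by at least one test case.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(factors[a], factors[b]):
        uncovered.add(((a, va), (b, vb)))

all_cases = [dict(zip(names, combo)) for combo in product(*factors.values())]
suite = []
while uncovered:
    # Greedily pick the case covering the most still-uncovered pairs.
    best = max(all_cases, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} cases instead of {len(all_cases)}")
for case in suite:
    print(case)
```

With three factors of 3, 3, and 2 levels, the full cartesian product is 18 cases, while the greedy pairwise suite typically needs about half that to cover every pair of levels at least once.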
The presentation covers several established agile based testing methods and models that testers may be able to use to further enhance their testing approach.
Instill a DevOps Testing Culture in Your Team and Organization (TechWell)
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify the steps needed to instill a DevOps testing culture in your team and organization.
What is testing?
“An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product under test.”
- Cem Kaner
Moving Towards Zero Defects with Specification by Example (Steve Rogalsky)
Love tracing bugs in a defect tracking system? Love the bug-fix cycle? If so, then don't come to this presentation. We'll be discussing how Specification by Example (also known as Acceptance Test Driven Development) will help move you towards a zero defect system by building the right thing the first time.
Product Descriptions: The Best Thing Since Sliced Bread (dwhelbourn)
PRINCE advocates writing product descriptions to help define the products that your project will deliver. But product descriptions aid your project well beyond specifying the products.
ISO 9001 - It sets out the criteria for a quality management system (Tushar Sadhye)
ISO 9001 sets out the criteria for a quality management system and is the only standard in the family that organizations can be certified to (although certification is not a requirement).
Over one million companies and organizations in over 170 countries implement ISO 9001:2008.
EuroSTAR Software Testing Conference 2009 presentation on Incremental Scenario Testing by Mattias Ratert. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Software Test Metrics and Measurements (Davis Thomas)
Explains in detail, with examples, how to calculate the following (an illustrative calculation sketch follows the list):
1. Percentage of test cases executed (test coverage)
2. Percentage of test cases not executed
3. Percentage of test cases passed
4. Percentage of test cases failed
5. Percentage of test cases blocked/deferred
6. Defect density
7. Defect removal efficiency (DRE)
8. Defect leakage
9. Defect rejection ratio (invalid bug ratio)
10. Percentage of critical defects
11. Percentage of high defects
12. Percentage of medium defects
13. Percentage of low/lowest defects
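As a rough illustration of how a few of these metrics are commonly computed, here is a small Python sketch. The counts, the KLOC figure, and the exact definitions (e.g., pass percentage measured against executed tests) are assumptions for the example.

```python
# Minimal sketch of a few of the listed metrics; all numbers are made up.
def pct(part: int, whole: int) -> float:
    return 0.0 if whole == 0 else 100.0 * part / whole

executed, passed, failed, blocked, total = 180, 150, 20, 10, 200
size_kloc = 12.5            # product size in thousand lines of code (assumed)
defects_internal = 40       # defects found before release
defects_leaked = 5          # defects found after release

print("Executed %:", pct(executed, total))        # 90.0 (test coverage)
print("Passed %:", pct(passed, executed))         # ~83.3
print("Failed %:", pct(failed, executed))         # ~11.1
print("Blocked %:", pct(blocked, executed))       # ~5.6
print("Defect density:", (defects_internal + defects_leaked) / size_kloc)
# DRE: share of all known defects that were removed before release.
print("DRE %:", pct(defects_internal, defects_internal + defects_leaked))
print("Leakage %:", pct(defects_leaked, defects_internal + defects_leaked))
```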
Advanced A/B Testing at Wix - Aviran Mordo and Sagy Rozman, Wix.com (DevOpsDays Tel Aviv)
While A/B testing is a well-known methodology for conducting experiments in production, doing so at a large scale brings many challenges at the organizational and operational levels.
At Wix we have been practicing continuous delivery for over 4 years. Conducting A/B tests and writing feature toggles is at the core of our development process. However, doing so on a large scale, with over 1,000 experiments every month, holds many challenges and affects everyone in the company: developers, product managers, QA, marketing, and management.
In this talk we will explain the lifecycle of an experiment, some of the challenges we faced, and the effect on our development process.
* How an experiment begins its life
* How an experiment is defined
* How to let non-technical people control an experiment while preventing mistakes
* How an experiment goes live; the lifecycle of an experiment from beginning to end
* The difference between client-side and server-side experiments
* How to keep the user experience consistent and not confuse users
* How it affects the development process
* How QA can test an environment that changes every 9 minutes
* How support can help users when every user may be part of a different experiment
* How to find whether an experiment is causing errors when there are millions of permutations (at least 2^(number of active experiments))
* The effects of always having multiple experiments on system architecture
* Development patterns when working with A/B tests
At Wix we have developed our third-generation experiment system, called PETRI, which is (or will be) open sourced and helps us maintain some order in a chaotic system that keeps changing. We will also explain how PETRI works and the patterns for conducting experiments that have a minimal effect on performance and user experience.
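PETRI's internals are not described in this abstract; as a sketch of one common pattern such systems rely on, here is a hypothetical deterministic, hash-based variant assignment in Python, so the same user always lands in the same variant. The function name, bucketing scheme, and rollout mechanics are all assumptions.

```python
# Sketch only (not PETRI's API): stable experiment assignment with a
# percentage rollout, so repeated calls give the same user the same variant.
import hashlib

def variant(user_id: str, experiment: str, variants=("A", "B"), rollout=100):
    """Deterministically assign a user to a variant, or None if outside rollout."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100      # stable bucket in [0, 100)
    if bucket >= rollout:               # user is not included in the experiment
        return None
    return variants[bucket % len(variants)]

print(variant("user-42", "new-editor", rollout=50))  # stable across calls
```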
Acceptance Test Driven Development at StarWest 2014 (jaredrrichardson)
This is my half-day Acceptance Test Driven Development course as given at StarWest 2014 in Anaheim (October). It's based on Ken Pugh's half-day tutorial.
A presentation about how to start performing exploratory testing as a developer. Very basic, simple, and streamlined; a good starting point for a developer who has not tested before.
Engineers: Apply Automation to Increase Quality, Speed to Market (April Bright)
We live in the age of machine learning, artificial intelligence and other automated systems. Why, then, are we performing tedious tasks that we can streamline during the product development phase? First, there is Design Verification testing. Second, there is Design Validation testing. Some of these tests use simple pass/fail attribute data, while others use continuous data. We will focus on ways to automate the analysis of that continuous data, which can ensure more accurate and timely results.
QA Club Kiev #17: QA Challenge by Oleksandr Maidaniuk (QA Club Kiev)
What is your next move?
Must-haves in 3-5 years:
• Technical higher education
• English level
• Readiness to take ownership
• Being a technical expert
• Willingness to follow best practices
• Soft skills
• Time management
2. Some Words About Me
Volodymyr Prymakov (Vladimir Primakov)
11 years of general working experience in IT/QA (30+ projects)
• QA Consultant at Ciklum Interactive
• QA Manager (Manual)
• Automated Testing Manager
• Load Testing Manager
• Project Manager
• Hobbies: job, wife and 2 children, sport
Skype: vladimir.primakov
LinkedIn: ua.linkedin.com/in/vladimirprimakov/
Email: v.v.primakov@gmail.com
3. Presentation Plan
The presentation will cover the following items:
• Release readiness criteria as part of a master test plan: their types and details (~30 min)
• Applying release readiness criteria and their areas of influence (~5 min)
• Quality reports and quality metrics, and their connection to release readiness criteria (~15 min)
• Concluding release readiness (~1 min)
• Questions (~10 min)
7. Short Example: Criteria for New Functionality Testing

| Test Environment Priority | Coverage of Test Environments | Tested Functionality Priority | Minimum Test Depth | Minimum Functionality Coverage | Maximum Fail Rate % |
|---|---|---|---|---|---|
| The Highest one | 100% | Features of any priority | Detailed exploratory testing | 100% | 0% |
| High | 100% (all high-priority test environments) | All priorities | Happy-path exploratory testing | 100% | 0% |
| Medium | Please see the test-environment-related criteria | | | | |
| Low | Please see the test-environment-related criteria | | | | |
14. Expanded Example: Criteria for New Functionality Testing

| Test Environment Priority | Coverage of Test Environments | Tested Functionality Priority | Minimum Test Depth | Minimum Functionality Coverage | Maximum Fail Rate % |
|---|---|---|---|---|---|
| The Highest one | 100% | High (Blocker/Critical) | Detailed exploratory testing | 100% | 0% |
| | | Medium (Major) | Detailed exploratory testing | 100% | 0% |
| | | Low (Normal/Trivial) | Happy-path exploratory testing | 100% | 0% |
| High | 100% (all high-priority test environments) | All priorities | Happy-path exploratory testing | 100% | 0% |
| Medium | Please see the test-environment-related criteria | | | | |
| Low | Please see the test-environment-related criteria | | | | |
15. Test Depth: Criteria for New Functionality Testing (the expanded example table again, highlighting the Minimum Test Depth column)
16. Test Depth: Criteria for New Functionality Testing
TEST DEPTH / LEVEL OF DETAIL
• Test-case-based, thought-out test
• Detailed exploratory test
• Happy-path exploratory test
• Smoke test
• Nothing
17. Test Coverage: Criteria for New Functionality Testing (the expanded example table again, highlighting the Minimum Functionality Coverage column)
19. Maximum Fail Rate %: Criteria for New Functionality Testing (the expanded example table again, highlighting the Maximum Fail Rate % column)
20. Fail Rate %: Criteria for New Functionality Testing
PASS CRITERIA: maximum allowable number of bugs

| Functionality Priority (Story, Improvement, New Feature) | Blocker Bugs | Critical Bugs | Major Bugs | Normal Bugs | Trivial Bugs |
|---|---|---|---|---|---|
| High (Blocker/Critical) | 0 | 0 | 0 | * | * |
| Medium (Major) | 0 | 0 | 0 | * | * |
| Low (Normal/Trivial) | 0 | 0 | * | * | * |

* If too many bugs are in place for the functionality, the acceptable bug count can be considered separately for each specific case.
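As an aside, pass criteria like these translate naturally into data plus a check. A minimal sketch, assuming a simple severity-to-count mapping and using None for the "*" (case-by-case) cells:

```python
# Sketch of the pass criteria above; thresholds mirror the table, and None
# stands for the "*" cells that are decided case by case.
MAX_BUGS = {  # functionality priority -> max allowed open bugs per severity
    "High":   {"Blocker": 0, "Critical": 0, "Major": 0, "Normal": None, "Trivial": None},
    "Medium": {"Blocker": 0, "Critical": 0, "Major": 0, "Normal": None, "Trivial": None},
    "Low":    {"Blocker": 0, "Critical": 0, "Major": None, "Normal": None, "Trivial": None},
}

def feature_passes(priority: str, bugs: dict) -> bool:
    """bugs: severity -> count of open bugs found against the feature."""
    for severity, limit in MAX_BUGS[priority].items():
        if limit is not None and bugs.get(severity, 0) > limit:
            return False
    return True

print(feature_passes("High", {"Major": 1}))              # False
print(feature_passes("Low", {"Major": 1, "Normal": 3}))  # True
```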
24. Fail Rate %: Criteria for New Functionality Testing
PASS CRITERIA (short and not so strict): maximum allowable number of bugs

| Functionality Priority (Story, Improvement, New Feature) | Blocker Bugs | Critical Bugs | Major Bugs | Normal Bugs | Trivial Bugs |
|---|---|---|---|---|---|
| Features of any priority | 0 | 0 | * | * | * |
25-30. Fail Rate %: Criteria for New Functionality Testing
PASS CRITERIA (short and not so strict): maximum allowable number of bugs

| Functionality Priority (Story, Improvement, New Feature) | Blocker Bugs | Critical Bugs | Major Bugs | Normal Bugs | Trivial Bugs |
|---|---|---|---|---|---|
| Features of any priority | 0 | 0 | * | * | * |

Statuses counted toward the Fail Rate %: Blocked; Failed release readiness criteria.
Statuses counted toward the Pass Rate %: Failed with acceptable bugs; Passed.
31. Fail Rate %: Criteria for New Functionality Testing (chart: test coverage; the fail rate is 29%, but per the criteria it should be 0%)
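A sketch of how a fail rate like that 29% could be computed, assuming (per the status breakdown on slides 25-30) that both blocked items and items failing the release readiness criteria count toward the fail rate; the numbers are made up:

```python
# Sketch only; teams define fail rate differently. Here: share of checked
# items that were blocked or failed the criteria, out of everything checked.
def fail_rate(failed: int, blocked: int, total_checked: int) -> float:
    return 100.0 * (failed + blocked) / total_checked if total_checked else 0.0

print(f"{fail_rate(failed=4, blocked=1, total_checked=17):.0f}%")  # ~29%
```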
32. Test Environment Coverage: Criteria for New Functionality Testing (the expanded example table again, highlighting the Coverage of Test Environments column)
33-35. Test Environment Coverage: Criteria for New Functionality Testing
Test environments:

| Type | OS | Browser/Device | Resolution | Priority |
|---|---|---|---|---|
| PC | Win 7 | Chrome | 1920*1080 | Highest |
| PC | Win 8 (Metro) | IE 10 | 1920*1080 | High |
| Mac | MacOS | Firefox | 1920*1080 | High |
| iPad | iOS 7 | iPad 4 | 2048*1536 | High |
| Mac | MacOS | Safari | 2880*1800 | Medium |
| PC | Win 7 | IE 11 | 1366*768 | Medium |
| iPad | iOS 7 | iPad 2 | 1024*768 | Medium |
| PC | Linux | Firefox | 1366*768 | Low |

There is only one highest-priority environment, so it alone gives 100% coverage for that priority. One of the three high-priority environments gives ~33% coverage for that priority; all three together give 100%.
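The same arithmetic in a small sketch, assuming a made-up list of environment identifiers mirroring the table and a set of environments actually tested:

```python
# Sketch of per-priority environment coverage for the example table above.
from collections import Counter

environments = [  # (environment id, priority); ids are made up
    ("Win7/Chrome", "Highest"),
    ("Win8/IE10", "High"), ("Mac/Firefox", "High"), ("iPad4", "High"),
    ("Mac/Safari", "Medium"), ("Win7/IE11", "Medium"), ("iPad2", "Medium"),
    ("Linux/Firefox", "Low"),
]
tested = {"Win7/Chrome", "Win8/IE10"}  # environments actually tested

total = Counter(prio for _, prio in environments)
done = Counter(prio for env, prio in environments if env in tested)
for prio in ("Highest", "High", "Medium", "Low"):
    pct = 100.0 * done[prio] / total[prio]
    print(f"{prio}: {pct:.0f}% coverage")  # Highest: 100%, High: 33%, ...
```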
39. Definition of Done: Criteria for New Functionality Testing
DEFINITION OF DONE (EXAMPLE)
• User story reviewed & adjusted correspondingly
• Unit tests created and passed
• Integration tests created and passed
• Peer review passed
• UX review passed
• Manual test passed
• End-user test passed…
45. Example: Criteria for Regression Testing

| Test Environment Priority | Coverage of Test Environments | Tested Scenario Priority | Minimum Test Depth | Minimum Functionality Coverage | Maximum Fail Rate % |
|---|---|---|---|---|---|
| The Highest one | 100% | High | Detailed exploratory testing | 100% | 0% |
| | | Medium (that might be affected by recent product changes) | Happy-path exploratory testing | 100% | 0% |
| | | Other Medium | Happy-path exploratory testing | 50% | 0% |
| High | See the test-environment-related criteria | | | | |
| Medium | See the test-environment-related criteria | | | | |
| Low | See the test-environment-related criteria | | | | |
48. Criteria for Test Environment Testing
Possible test environment ingredients: internet speed; antivirus and firewalls; plugins; integration with other applications and APIs; user rights; runtimes; external devices; screen resolutions and modes; hardware and software platforms and modes; web browsers; mobile software platforms; external libraries; hardware characteristics and capacity; robots (HD, software, capabilities).
50. Criteria for Test Environment Testing
Product-wide test depth:
• Full regression
• Full regression for functionality of a certain priority
• Main product flows test / user acceptance test / cover test
• Selective regression
• Smoke test
51-52. Criteria for Test Environment Testing (diagram: coverage of test environments vs. product-wide test depth, plus fail rate)
53. Example: Criteria for Test Environment Testing

| Test Environment Priority | Target (Functionality) | Minimum Test Depth | Minimum Coverage of Test Environments | Maximum Fail Rate % |
|---|---|---|---|---|
| Highest | Please see the release readiness criteria for new functionality and old functionality | | | |
| High | New functionality | Please see the release readiness criteria for new functionality | | |
| High | Old (regression-related) functionality | Selective regression and/or main product flows test | 100% | 0% |
| Medium | New and old (regression-related) functionality | Product-wide smoke test | 75%* | 0% |
| Low | New and old (regression-related) functionality | Product-wide smoke test | 50%* | 0% |

* Environments not covered this release may be covered in the next release, and so on.
56. Criteria for Unit & Integration Testing
Unit & integration testing is assessed by test coverage and fail rate, tracked separately for GUI-less functionality and for GUI functionality*.
* Criteria for GUI functionality may be less strict, because GUI functionality can also be tested manually.
57. Example: Criteria for Unit & Integration Testing
Integration testing or unit testing:

| Priority of Functionality | Minimum Functionality Coverage | Maximum Fail Rate % |
|---|---|---|
| High | 100% | 0%* |
| Medium | 0% | 0%* |

* Depends on the severity of the problems found in integration/unit testing.
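A minimal sketch of gating a build on these thresholds; the data structure and function names are assumptions, and the footnoted severity exception is not modeled:

```python
# Sketch of gating on the unit/integration criteria above.
THRESHOLDS = {  # functionality priority -> (min coverage %, max fail rate %)
    "High":   (100.0, 0.0),
    "Medium": (0.0, 0.0),
}

def gate(priority: str, coverage: float, fail_rate: float) -> str:
    min_cov, max_fail = THRESHOLDS[priority]
    ok = coverage >= min_cov and fail_rate <= max_fail
    return "READY" if ok else "NOT READY"

print(gate("High", coverage=100.0, fail_rate=0.0))  # READY
print(gate("High", coverage=95.0, fail_rate=0.0))   # NOT READY
```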
58. Criteria for Special Types of Testing
Special types of testing that are obligatory to conduct and pass:

| Type of Testing* | Sufficient? (yes/no) | Passed? | Conclusions |
|---|---|---|---|
| Load/performance testing (client/server side) | | | NOT READY |
| Security testing | | | NOT READY |
| UX testing | | | READY |
| (Accessibility testing) | | | NOT READY |

GENERAL CONCLUSION: NOT READY
* Usually has separate release readiness criteria, but in any case it should be sufficient and passed.
69-70. Number of Bugs and Their Severity (charts: all not-closed bugs that are not assigned to any sprint)
71. Number of Bugs and Their Severity (charts: the list of all not-closed bugs that fail the release readiness criteria, split into bugs assigned to sprints and bugs not assigned to sprints)
72. Number of Bugs and Their Severity: make a separate release readiness conclusion for this criterion type.
73-74. New Functionality Testing & Results (charts from the task management system: feature statuses for the current release, and not-resolved features by component for the current release)
86. Unit & Integration Testing & Results: GUI-less Functionality

| Type of Testing | Coverage | Fail Percentage | Conclusions |
|---|---|---|---|
| Unit tests | 50% | 0% | READY |
| Integration tests | 100% | 0% | READY |

GENERAL RELEASE READINESS CONCLUSION: READY
90. Special Types of Testing & Results

| Type of Testing* | Sufficient? (yes/no) | Passed? | Conclusions |
|---|---|---|---|
| Load/performance testing (client/server side) | | | NOT READY |
| Security testing | | | NOT READY |
| UX testing | | | READY |
| (Accessibility testing) | | | NOT READY |

GENERAL RELEASE READINESS CONCLUSION: NOT READY
* Usually has separate release readiness criteria, but in any case it should be sufficient and passed.
94. Concluding Release Readiness

| Criterion | Status |
|---|---|
| 1. Number of bugs and their severity | NOT READY |
| 2. New functionality testing & results | READY |
| 3. Regression testing & results | READY |
| 4. Test environments testing & results | NOT READY |
| 5. Unit & integration testing & results | READY |
| 6. Special types of testing & results | NOT READY |

GENERAL CONCLUSION: NOT READY
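The aggregation rule implied by the table (the release is ready only if every criterion is ready) in a minimal sketch:

```python
# Sketch: the general conclusion is READY only if every criterion is READY.
criteria = {
    "Number of bugs and their severity": "NOT READY",
    "New functionality testing & results": "READY",
    "Regression testing & results": "READY",
    "Test environments testing & results": "NOT READY",
    "Unit & integration testing & results": "READY",
    "Special types of testing & results": "NOT READY",
}

overall = "READY" if all(s == "READY" for s in criteria.values()) else "NOT READY"
print("GENERAL CONCLUSION:", overall)  # NOT READY
```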