The document discusses how cognitive biases can cause testers to miss bugs. It explains that people have two types of thinking: System 1 thinking is fast, intuitive, and prone to biases, while System 2 thinking is slower, more deliberate, and logical. Common biases that can affect testers include the representative bias, the curse of knowledge, the congruence bias, and the confirmation bias. The document recommends that testers employ more System 1 thinking through techniques like exploratory testing to leverage their intuition to find bugs. It also suggests test managers create an environment where testers feel comfortable using more System 1 thinking approaches.
Root Cause Analysis | 5 Whys | Tools of Accident Investigation - Gaurav Singh Rajput
The 5 Whys is a root cause analysis tool used to identify the underlying cause of problems. It involves asking "why" five times to get to the root cause. Some key aspects of using the 5 Whys include clearly defining the problem, asking full questions at each step, focusing on both why a defect occurred and why it was not detected earlier, and challenging identified root causes to verify they reproduce the problem. The goal is to identify the systemic root cause, not just surface explanations, through a structured questioning process.
Things Could Get Worse: Ideas About Regression Testing - TechWell
Michael Bolton, DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.
This document provides an overview of the 5 Whys root cause analysis tool. The 5 Whys involves asking "why" five times to determine the root cause of a problem. It should address why something was made incorrectly and why it was not detected. While typically involving five questions, the number is flexible based on the complexity of the problem. When applying the 5 Whys, clearly define the problem, ask full questions, and follow the thought process without jumping to conclusions. The goal is to identify systemic causes that allow problems rather than just surface explanations.
Slides from a 5/10/2017 talk at the Nasdaq Entrepreneurial Center (@theCenter) about a lean research mindset, the mechanics of learning from users, and the structure of a research prototype test session.
This document discusses exploratory testing. It defines exploratory testing as testing where the tester actively designs tests during the testing process and uses information gained from testing to design new tests. Key aspects of exploratory testing include investigation, discovery, learning, and using imagination to think of new tests. The document outlines what makes an excellent exploratory tester, the differences between exploratory and traditional scripted testing, pros and cons of exploratory testing, how to perform exploratory testing using techniques like the "tour bus principle", and myths about exploratory testing.
The document provides an overview of the 5 Whys root cause analysis tool. The 5 Whys involves asking "Why?" five times to determine the root cause of a problem. It should address both why a defective part was made and why the defect was not detected earlier. While typically involving five questions, the number may vary depending on the complexity of the problem. The tool helps analyze problems by tracing them back from obvious to less obvious causes through a series of why questions. The goal is to identify systemic root causes that allow problems rather than just resolving the specific problem.
This document discusses strategies for handling non-ideal testing situations when adopting agile practices. It addresses problems like lack of test automation, test data issues, legacy reporting needs, geographic distribution of teams, and separation of testers and programmers. The proposed solutions generally involve starting small, using technology to facilitate remote collaboration, targeting important information in reports, and developing relationships between team members.
The 5 Whys is a root cause analysis tool used to identify the underlying cause of a problem by repeatedly asking "Why?". It involves asking "Why?" five times to get to the root of a problem. You start with a problem statement and work your way down by asking "Why?" until you identify the root cause. It is important to clearly define the problem, ask full questions, and follow the thought process through five iterations or until the root cause becomes clear. The goal is to identify the systemic cause that allowed the problem to occur rather than just resolving the problem itself.
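The questioning loop described above is simple enough to sketch in a few lines of code. This is an illustrative sketch only; the function name is my own, and the sample answer chain is the classic worn-pump-shaft example often attributed to Taiichi Ohno, not taken from the summarized deck:

```python
def five_whys(problem, answers, max_whys=5):
    """Trace a problem back through up to five 'Why?' answers.

    `answers` holds the reply to each successive "Why?" question;
    the last entry in the returned chain is the candidate root cause.
    """
    chain = [problem]
    for answer in answers[:max_whys]:
        if not answer.strip():
            break  # stop early once the root cause is already clear
        chain.append(answer.strip())
    return chain

# Classic illustrative example:
chain = five_whys(
    "The machine stopped",
    [
        "A fuse blew because the machine was overloaded",
        "The bearing was not sufficiently lubricated",
        "The lubrication pump was not pumping enough oil",
        "The pump shaft was worn",
        "There was no strainer, so metal scrap got into the pump",
    ],
)
print(chain[-1])  # the systemic cause to address, not just the blown fuse
```

Note that fixing only the first answer (replacing the fuse) resolves the incident but not the systemic cause at the end of the chain.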
This document provides an overview of 5 Why analysis, a root cause analysis tool. It discusses when to use 5 Why analysis, such as for recurring errors or quality issues. The general guidelines for 5 Why analysis include using a cross-functional team, asking "why" until the root cause is uncovered, and ensuring corrective actions address root causes rather than just symptoms. Examples of applying 5 Why analysis to problems like a vehicle not starting and long assembly times are also provided. Potential problems that can occur with 5 Why analysis include stopping at symptoms rather than root causes and different conclusions from different people.
Test automation has some bitter truths that are often overlooked. While automation can help with confirmation testing of deterministic scenarios, many critical testing tasks like exploration and qualitative evaluation are not easily automated. Automation also does not necessarily decrease the costs of testing when development, maintenance, and debugging of automation code is considered. It is more accurate to consider test automation as programmatic testing rather than assuming full automation is possible. Both automated and manual testing are misleading terms, and the focus should be on using tools like automation to extend testing rather than replace human testing.
Expecting the Unexpected: Preparing for Successful User Research Sessions (Do... - Fiona Tranquada
The document discusses how to prepare for successful user research sessions by anticipating and planning for unexpected situations. It recommends identifying potential issues before a study, such as participants being late or unable to complete tasks. The authors advise establishing backup plans, like scheduling extra participants or creating paper surveys. They also suggest preparing the product, space, and technology to avoid technical problems. Guidelines are provided for briefing participants and observers. The document stresses practicing moderating skills and technical setups to improve sessions. The overall message is that thorough preparation can help user researchers successfully handle unexpected challenges during studies.
The document discusses "worst practices" of software testing according to The Testing Troll. It provides 5 "worst practices" and alternatives suggested by The Testing Troll. The first is to learn about real testing oracles rather than relying only on requirements. The second is to focus regression testing on tests that reveal new information rather than repetitive testing. The third is to use automation as a tool to extend abilities rather than replace manual testing. The fourth is to provide information about risks through risk-based testing rather than just assuring quality. And the fifth is to be always alert to your context rather than following best practices blindly. The Testing Troll advocates thinking critically and focusing on exploration and human aspects of testing over fixed processes.
The document discusses the 5 Whys technique for determining the root cause of problems. It explains that the 5 Whys involves repeatedly asking "Why?" to peel back the layers of symptoms and arrive at the underlying cause. While called "5 Whys," the exact number of times you ask why may be fewer or more. The benefits are that it helps identify root causes, relationships between causes, and is a simple tool. It is most useful for problems involving human factors. The approach involves writing the problem, asking why it happens, and looping back until the root cause is agreed upon. Examples are provided to illustrate the process.
When you need to bring along the product team for help on a guerrilla usability study, this is a quick intro to their role in the study and how to be a good facilitator.
The document discusses qualities of good testers, noting that they practice epistemology and have skills like critical thinking. Good testers have attitudes of being cautious, curious, and critical. Soft skills are also important for testers, including skills like communication, time management, and maintaining a positive attitude.
Brief introduction to Session-Based Test Management and to how Exploratory Testing is understood and approached under the influence of the Context-Driven Testing movement.
Santa Barbara Agile: Exploratory Testing Explained and Experienced - Maaret Pyhäjärvi
Exploratory Testing Explained and Experienced
- Exploratory testing is an approach to software testing that involves dynamically testing software without a fixed plan, using the results of previous tests to design subsequent tests.
- It is a disciplined approach that finds unknown unknowns and helps testers examine software from different perspectives to uncover more bugs. Tests are performances rather than fixed artifacts.
- Exploratory testing requires testers to be able to strategically choose and defend their test approaches, explain what they have tested, and determine when they are done testing rather than just finding bugs randomly. It is a more systematic approach than unplanned testing.
Creating a Virtuous Cycle - The Research and Design Feedback Loop - Julie Stanford
This document discusses the importance of an effective research and design feedback loop to create thoughtful and user-centered products. It outlines common pitfalls when research and design teams do not collaborate well, such as communicating findings, engaging with each other's work, and believing research results. It then provides a step-by-step "virtuous loop formula" to facilitate collaboration between research and design, including prioritizing research questions, jointly reviewing prototypes, evaluating findings together, and continuously revisiting questions and designs based on new insights. Following this process can help avoid reactive "patchwork" solutions and instead lead to thoughtful designs informed by user needs.
Challenging Your Project’s Testing Mindsets - Joe DeMeyer - QA or the Highway
Participating on a project as a Tester has some interesting contrasts and unexpected experiences compared to participating as a Developer. Having experienced both, I was most surprised by the skepticism, concern, general invisibility, and subtle questions about my qualifications as a Tester. It seemed that because I was a Tester, I had no credibility. The start of the project is the best time to make an impression, establish your identity, and build credibility. Every Tester knows this, but other priorities often interfere with establishing a foundation for working with other team members and providing value to the project as a Tester.
In this session, I present an approach to challenging mindsets and establishing credibility: be available, visible, and vocal. Be available to meet team members, be visible at introductory meetings, be vocal about the role of testing in your project. Building upon these lays the foundation for participation in key meetings, improving testability, getting bugs fixed, and engaging team members in testing.
The session includes tips for participating in requirements, design, and code reviews. It concludes with methods of building credibility with your testing team, using delegation to build new test leaders, and advocating for testability in project products.
This document discusses using the 5 Whys technique for root cause analysis. It begins by explaining why root cause analysis is used, which is to find the root causes of complex problems. It then provides an overview of the 5 Whys process, which involves identifying the problem, asking why it occurred, and repeating until the root cause is uncovered. As an example, it analyzes a problem where gloves were unexpectedly mixed into rubber compound using 5 Whys. It determines through iterative questioning that the root cause was lack of trash bins for glove disposal in the production area. Corrective actions included removing contaminated rubber and remilling, while preventative action was to provide trash bins.
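A completed 5 Whys analysis like the gloves-in-rubber case above is easy to capture as a small structured record, keeping the problem, root cause, corrective action, and preventive action together. A minimal sketch, with class and field names of my own invention rather than anything from the summarized document:

```python
from dataclasses import dataclass

@dataclass
class FiveWhysRecord:
    problem: str             # the observed symptom
    root_cause: str          # systemic cause uncovered by the whys
    corrective_action: str   # fixes the current occurrence
    preventive_action: str   # stops the root cause from recurring

# The gloves-in-rubber example from the summary above:
gloves_case = FiveWhysRecord(
    problem="Gloves unexpectedly mixed into rubber compound",
    root_cause="No trash bins for glove disposal in the production area",
    corrective_action="Remove contaminated rubber and remill",
    preventive_action="Provide trash bins in the production area",
)
```

Separating the corrective action from the preventive action in the record mirrors the distinction the document draws: one resolves the incident, the other addresses the root cause.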
The document discusses the 5 Whys technique for root cause analysis. It can be used for troubleshooting, quality improvement, and problem solving. The process involves asking "Why?" up to five times to determine the root cause of a problem by drilling down through its symptoms. Tools like Ishikawa charts, design of experiments, and statistical analysis can also aid in root cause analysis.
The document discusses the 5 Why analysis technique for finding the root cause of problems. 5 Why involves asking "why" five times to uncover the underlying cause, though it may require more or fewer questions. Guidelines include using a cross-functional team, avoiding bias, and ensuring the answers don't include words like "because" before moving to the next why. Corrective actions should address the root cause to prevent future issues rather than just treating symptoms. Examples and criticisms of 5 Why are also provided.
About Joseph Ours' Presentation – “Bad Metric – Bad!”
Metrics have always been used in corporate sectors, primarily as a way to gain insight into what is an otherwise invisible world. Organizations blindly adopt a set of metrics to satisfy some process transparency requirement, rarely applying statistical or scientific rigor to the measures and metrics they establish and interpret. Many metrics do not represent what people believe they do and, as a result, can lead to erroneous decisions. Joseph looks at some common and some humorous testing metrics and explains why they are failures. He then discusses the real purpose of metrics and metrics programs, and finishes with pitfalls to avoid.
A Rapid Introduction to Rapid Software Testing - TechWell
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
The document provides an agenda and overview for a training on systematic problem solving using tools like 5 Whys. The agenda covers introductions, an exercise on defining problems, an introduction to 5 Whys technique, team exercises applying the techniques, and a wrap up. The training will teach participants how to use 5 Whys to peel back the layers of a problem to identify the root cause by repeatedly asking "Why?". Identifying the root cause allows for preventing future recurrence of the problem.
Usability testing / Nearly everything you need to know to get started - Rebecca Destello
Usability testing involves:
1. Recruiting and testing target users on a product or system
2. Analyzing the results to identify any usability problems
3. Reporting findings and recommendations to stakeholders
When confronted with a problem, have you ever stopped and asked "why" five times? The Five Whys technique is a simple but powerful way to troubleshoot problems by exploring cause-and-effect relationships.
Don’t Let Missed Bugs Cause Mayhem in your Organization! - Qualitest
This document discusses how cognitive biases can cause testers to miss bugs and provides strategies to overcome these biases. It explains that testers make judgments using both fast, intuitive System 1 thinking and slower, deliberate System 2 thinking. Common cognitive biases like representative bias, confirmation bias, and inattentional blindness are described as well as how they can influence testing. The document recommends techniques like exploratory testing to leverage more intuitive System 1 thinking and find bugs. It suggests test managers foster an environment where testers are comfortable using more subjective thinking and the QA profession shifts focus from requirements coverage to risk-based exploratory testing.
How Did I Miss That Bug? Managing Cognitive Bias in Testing - TechWell
How many bugs have you missed that were obvious to others? We all approach testing hampered by our own biases. Understanding our biases—preconceived notions and the ability to focus our attention—is key to effective test design, test execution, and defect detection. Gerie Owen and Peter Varhol share an understanding of how the testers’ mindsets and cognitive biases influence their testing. Using principles from the social sciences, Gerie and Peter demonstrate that you aren’t as smart as you think you are. They show how to use knowledge of biases—Inattentional Blindness, Representative Bias, the Curse of Knowledge, and others—not only to understand the impact of cognitive bias on testing but also to improve your individual and test team results. Finally, Gerie and Peter provide tips for managing your biases and focusing your attention in the right places throughout the test process so you won’t miss that obvious bug.
The 5 Whys is a root cause analysis tool used to identify the underlying cause of a problem by repeatedly asking "Why?". It involves asking "Why?" five times to get to the root of a problem. You start with a problem statement and work your way down by asking "Why?" until you identify the root cause. It is important to clearly define the problem, ask full questions, and follow the thought process through five iterations or until the root cause becomes clear. The goal is to identify the systemic cause that allowed the problem to occur rather than just resolving the problem itself.
This document provides an overview of 5 Why analysis, a root cause analysis tool. It discusses when to use 5 Why analysis, such as for recurring errors or quality issues. The general guidelines for 5 Why analysis include using a cross-functional team, asking "why" until the root cause is uncovered, and ensuring corrective actions address root causes rather than just symptoms. Examples of applying 5 Why analysis to problems like a vehicle not starting and long assembly times are also provided. Potential problems that can occur with 5 Why analysis include stopping at symptoms rather than root causes and different conclusions from different people.
Test automation has some bitter truths that are often overlooked. While automation can help with confirmation testing of deterministic scenarios, many critical testing tasks like exploration and qualitative evaluation are not easily automated. Automation also does not necessarily decrease the costs of testing when development, maintenance, and debugging of automation code is considered. It is more accurate to consider test automation as programmatic testing rather than assuming full automation is possible. Both automated and manual testing are misleading terms, and the focus should be on using tools like automation to extend testing rather than replace human testing.
Expecting the Unexpected: Preparing for Successful User Research Sessions (Do...Fiona Tranquada
The document discusses how to prepare for successful user research sessions by anticipating and planning for unexpected situations. It recommends identifying potential issues before a study, such as participants being late or unable to complete tasks. The authors advise establishing backup plans, like scheduling extra participants or creating paper surveys. They also suggest preparing the product, space, and technology to avoid technical problems. Guidelines are provided for briefing participants and observers. The document stresses practicing moderating skills and technical setups to improve sessions. The overall message is that thorough preparation can help user researchers successfully handle unexpected challenges during studies.
The document discusses "worst practices" of software testing according to The Testing Troll. It provides 5 "worst practices" and alternatives suggested by The Testing Troll. The first is to learn about real testing oracles rather than relying only on requirements. The second is to focus regression testing on tests that reveal new information rather than repetitive testing. The third is to use automation as a tool to extend abilities rather than replace manual testing. The fourth is to provide information about risks through risk-based testing rather than just assuring quality. And the fifth is to be always alert to your context rather than following best practices blindly. The Testing Troll advocates thinking critically and focusing on exploration and human aspects of testing over fixed processes.
The document discusses the 5 Whys technique for determining the root cause of problems. It explains that the 5 Whys involves repeatedly asking "Why?" to peel back the layers of symptoms and arrive at the underlying cause. While called "5 Whys," the exact number of times you ask why may be fewer or more. The benefits are that it helps identify root causes, relationships between causes, and is a simple tool. It is most useful for problems involving human factors. The approach involves writing the problem, asking why it happens, and looping back until the root cause is agreed upon. Examples are provided to illustrate the process.
When you need to bring along the product team for help on a guerilla usability study, this is a quick intro to their role in the study and how to be a good facilitator.
The document discusses qualities of good testers, noting that they practice epistemology and have skills like critical thinking. Good testers have attitudes of being cautious, curious, and critical. Soft skills are also important for testers, including skills like communication, time management, and maintaining a positive attitude.
Brief introduction to Session-Based Test Management and to how Exploratory Testing is understood and approached under the influence of the Context-Driven Testing movement.
Santa Barbara Agile: Exploratory Testing Explained and ExperiencedMaaret Pyhäjärvi
Exploratory Testing Explained and Experienced
- Exploratory testing is an approach to software testing that involves dynamically testingsoftware without a fixed plan, using the results of previous tests to determine subsequent tests.
- It is a disciplined approach that finds unknown unknowns and helps testers examine software from different perspectives to uncover more bugs. Tests are performances rather than fixed artifacts.
- Exploratory testing requires testers to be able to strategically choose and defend their test approaches, explain what they have tested, and determine when they are done testing rather than just finding bugs randomly. It is a more systematic approach than unplanned testing.
Creating a Virtuous Cycle - The Research and Design Feedback LoopJulie Stanford
This document discusses the importance of an effective research and design feedback loop to create thoughtful and user-centered products. It outlines common pitfalls when research and design teams do not collaborate well, such as communicating findings, engaging with each other's work, and believing research results. It then provides a step-by-step "virtuous loop formula" to facilitate collaboration between research and design, including prioritizing research questions, jointly reviewing prototypes, evaluating findings together, and continuously revisiting questions and designs based on new insights. Following this process can help avoid reactive "patchwork" solutions and instead lead to thoughtful designs informed by user needs.
Challenging Your Project’s Testing Mindsets - Joe DeMeyerQA or the Highway
Participating on a project as a Tester has some interesting contrasts and some unexpected experiences when compared to participating as a Developer. Having experienced both, I was most surprised with skepticism, concern, general invisibility, and subtle questions about my qualifications as a Tester. It seemed because I was a Tester, I had no credibility.The start of the project is the best time to make an impression, establish your identity, and build credibility. Every Tester knows this but many times other priorities interfere with establishing a foundation for working with other team members and providing value to the project as a Tester.
In this session, I present an approach to challenging mindsets and establishing credibility: be available, visible, and vocal. Be available to meet team members, be visible at introductory meetings, be vocal about the role of testing in your project. Building upon these lays the foundation for participation in key meetings, improving testability, getting bugs fixed, and engaging team members in testing.
The session includes tips for participating in requirements, design, and code reviews. It concludes with methods of building credibility with your testing team, using delegation to build new test leaders, and advocating for testability in project products.
This document discusses using the 5 Whys technique for root cause analysis. It begins by explaining why root cause analysis is used, which is to find the root causes of complex problems. It then provides an overview of the 5 Whys process, which involves identifying the problem, asking why it occurred, and repeating until the root cause is uncovered. As an example, it analyzes a problem where gloves were unexpectedly mixed into rubber compound using 5 Whys. It determines through iterative questioning that the root cause was lack of trash bins for glove disposal in the production area. Corrective actions included removing contaminated rubber and remilling, while preventative action was to provide trash bins.
The document discusses the 5 Why's technique for root cause analysis. It can be used for troubleshooting, quality improvement, and problem solving. The process involves repeatedly asking "Why?" five times to determine the root cause of a problem by drilling down through its symptoms. Tools like Ishikawa charts, design of experiments, and statistical analysis can also aid in root cause analysis.
The document discusses the 5 Why analysis technique for finding the root cause of problems. 5 Why involves asking "why" five times to uncover the underlying cause, though it may require more or fewer questions. Guidelines include using a cross-functional team, avoiding bias, and ensuring the answers don't include words like "because" before moving to the next why. Corrective actions should address the root cause to prevent future issues rather than just treating symptoms. Examples and criticisms of 5 Why are also provided.
About Joseph Ours' Presentation – “Bad Metric – Bad!”
Metrics have always been used in corporate sectors, primarily as a way to gain insight into what is an otherwise invisible world. Organizations blindly adopt a set of metrics as a way of satisfying some process transparency requirement, rarely applying any statistical or scientific thought behind the measures and metrics they establish and interpret. Many metrics do not represent what people believe they do and as a result can lead to erroneous decisions. Joseph looks at some of the common and some of the humorous testing metrics and determines why they are failures. He further discusses the real purpose of metrics, metrics programs and finishes with pitfalls into which you fall.
A Rapid Introduction to Rapid Software Testing (TechWell)
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
The document provides an agenda and overview for a training on systematic problem solving using tools like 5 Whys. The agenda covers introductions, an exercise on defining problems, an introduction to 5 Whys technique, team exercises applying the techniques, and a wrap up. The training will teach participants how to use 5 Whys to peel back the layers of a problem to identify the root cause by repeatedly asking "Why?". Identifying the root cause allows for preventing future recurrence of the problem.
Usability testing / Nearly everything you need to know to get started (Rebecca Destello)
Usability testing involves:
1. Recruiting and testing target users on a product or system
2. Analyzing the results to identify any usability problems
3. Reporting findings and recommendations to stakeholders
When confronted with a problem, have you ever stopped and asked "why" five times? The Five Whys technique is a simple but powerful way to troubleshoot problems by exploring cause-and-effect relationships.
Don’t Let Missed Bugs Cause Mayhem in your Organization! (Qualitest)
This document discusses how cognitive biases can cause testers to miss bugs and provides strategies to overcome these biases. It explains that testers make judgments using both fast, intuitive System 1 thinking and slower, deliberate System 2 thinking. Common cognitive biases like representative bias, confirmation bias, and inattentional blindness are described as well as how they can influence testing. The document recommends techniques like exploratory testing to leverage more intuitive System 1 thinking and find bugs. It suggests test managers foster an environment where testers are comfortable using more subjective thinking and the QA profession shifts focus from requirements coverage to risk-based exploratory testing.
How Did I Miss That Bug? Managing Cognitive Bias in Testing (TechWell)
How many bugs have you missed that were obvious to others? We all approach testing hampered by our own biases. Understanding our biases—preconceived notions and the ability to focus our attention—is key to effective test design, test execution, and defect detection. Gerie Owen and Peter Varhol share an understanding of how the testers’ mindsets and cognitive biases influence their testing. Using principles from the social sciences, Gerie and Peter demonstrate that you aren’t as smart as you think you are. They show how to use knowledge of biases—Inattentional Blindness, Representative Bias, the Curse of Knowledge, and others—not only to understand the impact of cognitive bias on testing but also to improve your individual and test team results. Finally, Gerie and Peter provide tips for managing your biases and focusing your attention in the right places throughout the test process so you won’t miss that obvious bug.
This document discusses how testers can provide information to teams through risk-based testing and exploratory testing sessions of modern applications. It recommends identifying risks, prioritizing test cases based on risk, and conducting session-based exploratory testing to investigate risks. Automated testing tasks should leverage the computer's strengths rather than just automating human tests. Non-functional testing like performance and security should also be considered from the start. Developers can help testers by sharing testing infrastructure, reviewing automation code, and pairing on testing activities.
Aubrey Smith, Sparked Advisory
In this training, we will build on the foundation established in Lean Startup 101 and 201 by delving into examples and cases of the Lean Startup concepts in action. Attendees of Lean Startup 301 will be exposed to cutting-edge work from thought leaders and experts using Lean Startup in practice today — at startups and within the enterprise. Participation in this session is essential: you will be asked to help design an MVP and an experiment to test critical Leap of Faith Assumption(s) in groups, and you will be encouraged to share experiences. The session is designed to allow attendees to stretch their skills and to push one another to “learn by doing”. The session will also include:
Sample cases and live interviews with practitioners highlighting the application of core concepts;
Exercises designed to bring the concepts to life and challenge participants to deepen their skills;
Discussion of advanced topics such as organizational culture and governance, as well as industry-specific concepts such as using Lean Startup in heavily regulated markets.
Thanks to Lean Startup Co.’s law firm, Orrick, for being the sponsor for this track.
The document provides information and advice for PhD candidates preparing for their viva examination. It discusses the roles of the examiners, chairperson, and supervisor. It explains that the viva will involve a discussion of the research and the examiners will make their decision after the candidate leaves the room. Candidates are advised to prepare both technically, by being familiar with their work, and interpersonally, by managing their anxiety and having a conversational approach during the viva. Some sample questions the candidate might expect are also provided.
XBOSoft webinar - How Did I Miss That Bug - Cognitive Biases in Software Testing (XBOSoft)
The document discusses cognitive biases that can cause testers to miss bugs. It explains that software testing involves both objective comparisons to specifications as well as subjective judgments, and that missed bugs result from errors in judgment influenced by cognitive biases. Some biases discussed include representative bias, confirmation bias, and anchoring effect. The document advocates managing cognitive biases through techniques like exploratory testing, which focuses more on intuition and learning than requirements coverage. It suggests testers, managers, and the QA profession shift focus from finding bugs to providing information.
Decision Making Process & The Professional (MonaCordel Francis)
The document outlines a workshop on decision making presented by group members Kimeisha, Ramona, Nadain, and Cordel. It defines the decision making process and highlights its steps, elements, and inherent personal and system traps. The elements discussed include problem context, problem finding, rationales, settings, scope, procedures, outcomes, and implementation. Personal traps may include trying too hard to play it safe or seeking unanimous approval. The correct steps are define the problem, identify factors, develop alternatives, analyze alternatives, select the best, implement, and establish control.
The document discusses 10 signs that an organization's software testing may not be enough. These include having excessive production bugs, bugs found during user acceptance testing, growing bug counts over test cycles, not investing in testing compared to competitors, lacking clear criteria for what constitutes "enough" testing, testers advising against releasing software, weak prevention efforts like code reviews, lack of developer unit testing, frequently reduced testing periods causing deadline problems, and high tester turnover. The document advocates treating testing as risk management, increasing test reuse and automation, and addresses common challenges and questions around software testing.
Worst practices in software testing by the Testing troll (Viktor Slavchev)
The document discusses best and worst practices in software testing according to a mythical testing creature called "The Testing Troll".
Some worst practices presented include relying solely on documentation, focusing only on repetitive regression testing of old tests, viewing automation as a replacement for human testers, and strictly following best practices without consideration of context.
The best practices emphasized thinking beyond requirements and oracles, prioritizing regression tests that reveal new information, using automation as a tool to enhance testing abilities rather than replace testers, providing information about potential risks, and being aware of testing context in different situations. The conclusion is that there are no absolute best practices and testers must be skeptical professionals who consider context.
QASymphony Webinar - "How to Start, Grow & Perfect Exploratory Testing on you..." (QASymphony)
This webinar defines a clear path to success with exploratory testing, no matter what stage of the testing process you are currently in. Learn how to make the internal “sell” to get exploratory testing off the ground, and then how to standardize and scale exploratory testing for the enterprise. Whether your organization is waterfall, agile, or somewhere in between, a properly implemented exploratory testing process is sure to increase the value of your testing team.
This document discusses implementation science and outlines a presentation on the topic. It defines implementation science as the study of planned human behavior change under organizational constraints. It discusses frameworks that can guide implementation practice and research, including process, determinant, and evaluation frameworks. It also covers study designs for evaluating implementation interventions, such as cluster randomized controlled trials, stepped wedge designs, and quasi-experimental designs. The document emphasizes that implementation research differs from other health research due to its focus on behavior change under organizational constraints.
This document provides an overview of evaluation in human-computer interaction. It discusses what evaluation is, the different types of evaluation (formative and summative), what can be evaluated (e.g. students, teachers), and methods for evaluation (e.g. checklists, questionnaires, interviews). It also presents the DECIDE framework for guiding evaluation, which includes determining goals, exploring questions, choosing approaches/methods, considering practical and ethical issues, and evaluating/interpreting data. The document provides examples and discusses the pros and cons of various evaluation techniques.
Watch the entire webinar: http://info.userzoom.com/online-surveys-design-webinar.html
UserZoom teamed up with Elizabeth Ferrall-Nunge, User Experience Research Lead at Twitter, to discuss how to create effective surveys and how to avoid common survey pitfalls.
Considerations When Planning & Conducting a Research Study.
1. Choosing the correct formative usability study setup
2. Recruiting effectively
3. Writing good test tasks
4. Remaining unbiased & facilitating ethically
5. Reporting with metrics
This document discusses qualitative and quantitative research methods for understanding user needs in human-computer interaction design. It explains that qualitative research, such as interviews and observations, are especially important early in the design process to understand user behaviors, needs, and contexts. Quantitative research like surveys can miss important details for design. The document provides guidance on conducting effective qualitative user interviews, including asking open-ended questions, following up, and getting a range of participant viewpoints.
This document discusses challenges for testers in agile development environments. It outlines several strategies testers can use to address these challenges, including:
- Pairing testers with developers to facilitate exploratory and interaction testing. This helps testers understand the codebase and developers understand testing needs.
- Pairing testers with analysts to help define requirements by example, clarify expectations, and drive development of acceptance tests.
- Prioritizing testing to address important risks rather than trying to do complete testing. A good tester is never done but must justify testing in terms of risk.
- Tracking bugs when testing completed iterations, even if fixes are made quickly, so issues can be prioritized like stories.
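The risk-prioritization idea above can be sketched with a simple score. This is a hypothetical illustration (the test-case names and the likelihood/impact scales are ours, not a prescribed formula): score each candidate test by likelihood of failure times business impact, and run the highest-risk tests first.

```python
# Sketch of risk-based test prioritization: score each candidate test
# by likelihood-of-failure times business impact, run highest first.
# Names and scores below are hypothetical.

def prioritize(test_cases):
    """Return test cases sorted by descending risk score."""
    return sorted(
        test_cases,
        key=lambda tc: tc["likelihood"] * tc["impact"],
        reverse=True,
    )

cases = [
    {"name": "report export", "likelihood": 2, "impact": 2},  # risk 4
    {"name": "login",         "likelihood": 3, "impact": 5},  # risk 15
    {"name": "payment",       "likelihood": 4, "impact": 5},  # risk 20
]
ordered = prioritize(cases)  # payment first, report export last
```

Since a good tester is "never done", the cut line can be drawn wherever the time budget runs out: every test above the line is justified in terms of risk.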
Does DevOps need continuous testing? DevOps Days Des Moines 2018 v1 (GerieOwen)
This document discusses the need for continuous testing in DevOps. It explains that continuous testing is an approach where all testing activities run continuously and are integrated with development and delivery. The key is to assess business risk, establish a safety net for users, and provide feedback throughout the software delivery pipeline. It provides details on implementing continuous testing such as engaging in continuous risk analysis, increasing testing velocity, and developing a culture of quality. It also discusses practices like defect management, test case optimization, environment access, and identifying bottlenecks. The document argues for integrating testing throughout the delivery pipeline and implementing automated testing checkpoints and non-functional testing earlier in the process. It emphasizes the importance of continuous monitoring in both testing and production.
Testing the brave new world of SaaS applications, Quest 2018 v1 (GerieOwen)
Testing SaaS applications presents unique challenges compared to traditional on-premise software. Key aspects to test include business processes, configurations, customizations, data migration and integrations. Non-functional testing of performance, availability, security and other qualities is also important. An effective test approach includes standard test scenarios addressing these areas, with separate test environments and tracks coordinated through Scrum of Scrums meetings. Specialized test skills are required, and planning for vendor upgrades is crucial.
Personas: a fresh perspective on testing, qatest 2018 6th version (GerieOwen)
How well do you know the users of the applications you test? Knowing your users is critical to customer-focused testing, but how do we develop and apply that knowledge? Personas open that window into your users’ world.
Personas offer an immersion into your users’ characteristics, their abilities and their expectations of the application. Personas and user value stories can be applied not only to functional and usability testing, but also to performance, mobile and integration testing. By using personas and user value stories, testers can create a comprehensive, customer-centric test approach. Personas are a valuable test technique in virtually every methodology, especially in Agile and DevOps.
In this presentation, Viviane and Gerie will demonstrate how to develop personas and user value stories and how to incorporate them into your test process. We’ll discuss different styles of personas and how each can be used most effectively. Then, using real-life examples, we’ll delve into creating personas and user value stories and deriving test cases from them. Finally, we’ll provide tips on getting stakeholder acceptance on using personas and user value stories in testing.
Test automation and beyond: developing an effective continuous test strategy d... (GerieOwen)
Continuous testing is one of the most effective ways of building quality into the continuous delivery pipeline; yet it is difficult to implement in practice. Continuous testing involves more than test automation. Although test automation is a must; continuous risk analysis and optimizing the test suite is critical so that test automation doesn’t become a bottleneck in the DevOps pipeline. In this presentation, you’ll learn how to implement an effective continuous test strategy throughout the continuous delivery pipeline.
DevOps and groupthink: an oxymoron? DevOps Days Kiev (GerieOwen)
This document discusses how groupthink can be a problem for DevOps teams and provides strategies for mitigating it. Groupthink occurs when a cohesive group's desire for conformity overrides realistic problem solving and dissenting opinions. It can happen in DevOps teams when biases from different disciplines become polarized. Managers can use the Container-Differences-Exchanges (CDE) model to influence how teams self-organize by changing aspects like team composition, feedback processes, or exposure to outside ideas. Individually, team members should manage their own biases and speak up with differing views. The goal is to collaborate as a group but think independently, avoiding compromised integration work.
A wearables story, Mobile Dev and Test 2016 (GerieOwen)
This document discusses the importance of testing the human experience with wearable technology. It defines wearables and provides examples. A story is told of a woman running a marathon with a wearable watch. The discussion covers creating personas to represent users, developing user value stories to understand how users will get value from wearables, and testing scenarios to evaluate the human interaction. Testing the physical, sensory, contextual and emotional aspects of usage is important as wearables are closely integrated with human activities.
Agile teams advocating quality when collaboration becomes groupthink qa&... (GerieOwen)
Groupthink occurs when a cohesive group makes flawed decisions due to a desire for conformity and lack of critical evaluation. In agile teams, groupthink can compromise quality by prioritizing velocity over testing. To counteract groupthink, leaders should manage their own and their team's mindsets, establish growth mindsets, rotate discussion roles, and appoint devil's advocates. Managers can also influence teams' self-organization by evaluating how the team's container, differences, and exchanges impact groupthink and making appropriate changes.
Testing wearable devices is fundamentally more complex than any other mobile device. Wearables become extensions of us, so testing should focus on the total user experience—the emotional, physical, and sensory reactions including the biases and mindset of the wearer. It involves testing in the real world of the wearer―when, where, and how the wearer and the device will function together. Using concepts from human-computer interaction design, Gerie Owen and Peter Varhol provide a framework for testing the “human experience” of wearables. Learn to develop personas by delving into the wearers’ personalities and characteristics to understand their expectations of the wearable. Then learn to create user value stories to test the ways in which the wearers will derive value from the wearable. Finally, learn the importance of human-experience testing as Gerie shares her personal story—a tale of two wearables and her 2011 Boston Marathon run.
How Did I Miss That Bug?
1. How Did I Miss That Bug?
Overcome Cognitive Bias In Testing
Gerie Owen & Peter Varhol
2. Presenters
Gerie Owen
gerie@gerieowen.com
• Quality Assurance Consultant
• Speaker and Blogger on Testing topics
• Experienced Tester:
• Bug Finder and Bug Misser
Peter Varhol
peter@petervarhol.com
• International speaker on technology topics
• Former product manager and university professor
4. Presentation Outline
• Why are we talking about missed bugs?
• How do we miss bugs?
• System 1 and System 2 thinking
• Cognitive Biases Affecting Testing
• Managing Cognitive Bias for:
• Testers
• Test Leads and Managers
• Our Profession
5. Why are we talking about missing bugs?
• Have you ever missed a bug?
• Have you ever been asked how you missed a bug?
• Have you ever wondered how you missed a bug?
6. Consequences of Missed Bugs
• Possible Consequences of Missed Bugs:
• Negative Publicity
• Lost Sales
• Lost Customers
• Even Loss of Life
MISSED BUGS CAUSE MAYHEM
7. My Journey
The “HOW” is more important than the “WHY”
And now, I invite you to join me on the journey of
How Did I Miss That Bug?
8. How Do We Miss Bugs?
• Missed test cases
• Misunderstanding of requirements
• Misjudgment in risk-based testing
• Inattention
• Fatigue
• Burnout
• Multi-tasking
9. How Do We Test?
• What is Software Testing?
• Software testing is making judgments about the quality of the software under test
• Involves:
• Objective comparisons of code to specifications, AND
• Subjective assessments regarding usability, functionality, etc.
10. What IS a Missed Bug?
An Error in Judgment!
To determine how testers miss bugs, we need to understand how humans make judgments, especially in complex situations.
11. How do we make judgments?
• Thinking, Fast and Slow – Daniel Kahneman
• System 1 thinking – fast, intuitive, and sometimes wrong
• System 2 thinking – slower, more deliberate, more accurate
12. System 1 vs. System 2 Thinking
• System 1 thinking keeps us functioning
• Fast decisions, usually right enough
• Gullible and biased
• System 2 makes deliberate, thoughtful decisions
• It is in charge of doubt and unbelieving
• But is often lazy
• Difficult to engage
13. How Do We Apply System 1 and System 2 Thinking?
• System 1 thinking:
• Is applied in our initial reactions to situations.
• May employ Heuristics or rules of thumb
• System 2 thinking:
• Is applied when we analyze a problem, for example when calculating the answer to a math problem.
• System 1 and System 2 can be in conflict:
• This can lead to biases in decision-making.
14. Consider This Problem
• A bat and ball cost $1.10
• The bat cost one dollar more than the ball
• How much does the ball cost?
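The slide leaves the answer to the audience, but the arithmetic is worth making explicit. A quick sketch (in Python, purely for illustration) contrasts the intuitive System 1 answer with the System 2 calculation:

```python
# System 1's intuitive answer is $0.10, but checking it: a $0.10 ball makes the
# bat $1.10 and the total $1.20, which is wrong. System 2 solves it algebraically:
#   ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

assert abs((ball + bat) - 1.10) < 1e-9  # total is $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # bat costs exactly $1.00 more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

Most people's System 1 blurts out "ten cents"; the deliberate check is exactly the kind of effortful System 2 step that catches the error.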
15. How do Biases Impact Testing?
• We maintain certain beliefs in testing practice
• Which may or may not be factually true
• Those biases can affect our testing results
• We may be predisposed to believe something that affects our work and our conclusions
• How do bias and error work?
• We may test the wrong things
• Not find errors, or find false errors
16.
17. The Representative Bias
• Happens when we judge the likelihood of an occurrence in a particular situation by how closely the situation resembles similar situations.
• Testers may be influenced by this bias when designing data matrices, perhaps not testing data in all states or not testing enough types of data.
• Case Study: Ability to print more than once bug
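The data-matrix point above can be made concrete. The sketch below (Python, with hypothetical field names and states that are not from the case study) enumerates every combination of data states, so that rows which merely look "unrepresentative" are not silently skipped:

```python
# Instead of hand-picking "typical" rows for a data matrix, enumerate every
# combination of states. The field names and values here are illustrative
# assumptions for an imagined order-entry test, not the deck's case study.
from itertools import product

account_status = ["active", "suspended", "closed"]
payment_method = ["card", "invoice"]
order_size = ["empty", "single", "bulk"]

# Full cross-product: 3 * 2 * 3 = 18 rows, including the combinations a
# representativeness-driven tester would judge "unlikely" and leave out.
data_matrix = list(product(account_status, payment_method, order_size))
print(len(data_matrix))  # 18
```

Exhaustive enumeration is not always affordable, but generating the full matrix first and then pruning deliberately makes the omissions a System 2 decision instead of a System 1 one.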
18. The Curse of Knowledge
• Happens when we are so knowledgeable about something that our ability to address it from a less informed, more neutral perspective is diminished.
• When testers develop so much domain knowledge that they fail to test from the perspective of a new user. Usability bugs are often missed due to this bias.
• Case Study: Date of Death Bug
19. The Congruence Bias
• The tendency of experimenters to plan and execute tests on just their own hypotheses without considering alternative hypotheses.
• This bias is often the root cause of missed negative test cases. Testers write test cases to validate that the functionality works according to the specifications and neglect to validate that the functionality doesn’t work in ways that it should not.
• Case Study: Your negative test case or boundary miss
20. The Confirmation Bias
• The tendency to search for and interpret information in a way that confirms one’s initial perceptions.
• Testers’ initial perceptions of the quality of code, the quality of the requirements, and the capabilities of developers can impact the ways in which they test.
• Case Study: Whose code do you test the most thoroughly?
21. The Anchoring Effect
• The tendency to become locked on and rely too heavily on one piece of information and therefore exclude other ideas or evidence that contradicts the initial information.
• Software testers do this often when they validate code to specifications exclusively without considering ambiguities or errors in the requirements.
22.
23. Inattentional Blindness
• Chabris and Simons conducted experiments on how focusing on one thing makes us blind to others
• Invisible gorilla on the basketball court
• Images on a lung x-ray
24. Inattentional Blindness
• A psychological lack of attention
• The tendency to miss obvious inconsistencies when focusing specifically on a particular task.
• This happens in software testing when testers miss the blatantly obvious bugs
25. Why Do We Develop Biases?
• The Blind Spot Bias
• We evaluate our own decision-making process differently than we evaluate how others make decisions.
• West, Meserve and Stanovich
26. How Does This Apply To Testing?
• We must manage the way we think throughout the test process.
• As individual testers
• As test managers
• As a professional community
27. How Can Testers Manage Their Thought Processes?
•Use more System 1 thinking?
OR
•Use more System 2 thinking?
28. Test Methodology and System 2 Thinking
• Test methodology is the analytical framework of testing; it invokes our System 2 thinking and places the tester under cognitive load.
• The determination of whether the actual results match the expected results becomes an objective assessment.
29. How Do We Find Bugs?
Focus on System 1 thinking, intuition and emotion
30. Focus On System 1 Thinking
• Heuristics used with Oracles
• Recognize our emotions as indicators of potential bugs
• Exploratory Testing
31. The Power of Exploratory Testing
• Exploratory testing is simultaneous learning, test design, and test execution
• Exploratory testers often use tools:
• To keep a record of the exploratory session
• To generate situations of interest
32. The Characteristics of Exploratory Testing
• Planned
• Learning experience
• Discovery process
• Different for each application
33. How Should We Use Exploratory Testing?
• Unstructured
• Before beginning test case execution
• Minimizes preconceived notions about the application under test
• Oracle based
• Users’ perspectives
• Data flow
34. How Should We Use Exploratory Testing?
• Structured
• Use to create additional test cases
• May be done earlier, possible as modules are developed
• Session-Based
• Time-boxed charters
• Multiple testers
• Post test review session
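To make the session-based approach concrete, here is a minimal sketch (Python; the structure and field names are illustrative assumptions, not a standard tool) of a time-boxed charter whose notes feed the post-test review session:

```python
# A hypothetical session-based test management charter: one mission, one time box,
# and running notes/bugs captured during the session for the review afterwards.
from dataclasses import dataclass, field
from datetime import timedelta


@dataclass
class SessionCharter:
    mission: str                 # what this session sets out to explore
    time_box: timedelta          # hard limit on session length
    tester: str
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

    def log(self, note: str) -> None:
        """Record an observation for the post-session review."""
        self.notes.append(note)


session = SessionCharter(
    mission="Explore the checkout flow from a first-time user's perspective",
    time_box=timedelta(minutes=90),
    tester="Gerie",
)
session.log("Coupon field accepts 500-character input with no validation message")
session.bugs.append("No error shown when the payment step times out")
```

With several testers each running their own charter and a shared review session afterwards, the group comparison itself helps counter individual biases.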
35. Planning
• What are situations of interest?
• Usual tasks performed by the user
• Things that a user might do
• Things covered by charter
• Document the plan
• It’s not ad hoc testing
• Deviate from the plan as necessary
36. Learning Experience
• What can I learn about the software?
• What all the buttons and forms do
• How it wants the user to work
• Strengths and weaknesses
37. Discovery
• What are situations of interest?
• Usual tasks performed by the user
• Things a user might do
• Does it behave as expected?
• If not, let’s explore
• And document
38. What Can Test Managers Do?
• Foster an environment in which the testers feel comfortable and empowered to use System 1 thinking.
• Plan for exploratory testing in the test schedule
• Encourage Testers to take risks
• Reward the quality of bugs found rather than the quantity of test cases executed
39. What Can the QA Profession Do?
A Paradigm Shift
• Shift our focus from requirements-coverage-based test execution to a more intuitive approach
• Exploratory testing and business process flow testing become the norm rather than the exception
• Develop new testing frameworks where risk-based testing is executed through targeted exploratory testing and is balanced with scripted testing
• Our purpose should be providing information versus finding bugs
40. Question Test Results
• Is there any reason to suspect we are evaluating our test results based on self-interest, overconfidence, or attachment to past experiences?
• Have we fallen in love with our test results?
• Were there any differences of opinion among the team reviewing the test results?
41. How Do We Find The Obvious Bugs?
• Focus less
• Use intuition
• Believe what we can’t believe
And so in most organizations, each time a bug, as tiny and insignificant as it may be, crawls into production, mayhem of monumental proportions ensues. And sometimes, the focus on finding out why it happened takes priority over the fix. In the name of continuous improvement, we begin the root cause analysis. Root cause analysis can take many forms. In some organizations, it is used effectively to make process improvements. In other organizations, it amounts to a witch hunt, the sole purpose of which is to assign blame.
I worked in an organization where the art of the witch hunt and assignment of blame was developed to the level of a science. All bugs escaping into user acceptance testing or production were immediately analyzed to determine the root cause: code, requirements, missed test case, and so on. If the root cause was determined to be a tester miss, this was also noted in the test management tool. Metrics were developed to track missed bugs, and testers were effectively pulverized for missing bugs. Test leads dreaded the root cause analysis process and testers worked in fear of missing bugs.
As a test lead in this environment, I really wanted to help my test teams and reduce our bug misses. I started to think about how we missed bugs. The more I thought about it, I realized that the “how” is probably more important than the “why”. And I began the journey into How Did I Miss That Bug? I invite you to journey with me.
But what else? What if we take a step back and examine how we test?
So then, what is a missed bug?
Has anyone ever thought about that? Social Scientists Daniel Kahneman and Amos Tversky and Behavioral Psychologist Dan Ariely have.
So while I, as a test lead and tester, was struggling to understand how my teams and I were missing bugs, Peter had begun studying the role of cognition and bias in testing. He discussed this at several conferences in his presentation, “Moneyball and the Science of Building Great Test Teams”. Has anyone seen that presentation? When I saw it, I started to think about how it might apply to my quest to find out how we miss bugs. So now I’ll invite Peter to tell us about the research.
Daniel Kahneman, in his book Thinking, Fast and Slow, defines two types of thinking. He calls them System 1 and System 2 thinking.
These, of course, are models, and don’t have any physical representation in the brain or elsewhere.
Daniel Kahneman has developed a model that divides thinking into two components, which he calls System 1 and System 2. System 1 is immediate, reflexive thinking that is for the most part unconscious in nature. We do this sort of thinking many times a day. It keeps us functioning in an environment where we have numerous external stimuli, some important, and many not.
System 2 is more deliberate thought. It is engaged for more complex problems, those that require mental effort to evaluate and solve. System 2 makes more accurate evaluations and decisions, but it can’t respond instantly, as is required for many types of day-to-day decisions. And it takes effort, which means it can tire out team members.
How many of you have ever had pilot training? Years ago, in a PA-28 Cherokee 140 like this one, my instructor put me “under the hood” in practicing recovery from unusual attitudes. With the hood down, he put the plane into unusual flying positions from which I had to recover as quickly as possible. When he brought the hood up, I could see only the instrument panel. I rapidly developed a heuristic that enabled me to quickly identify and correct an unusual attitude. In short, I focused on the turn indicator and artificial horizon, and worked to center both of them.
My instructor figured out what I was doing, and I did the same thing the next time. My turn indicator and artificial horizon were centered, but I was still losing over 1000 feet a minute! I was stumped. My instructor had “crossed” the controls, leaving me in a slip that my heuristic couldn’t account for. I was worse than wrong; I couldn’t follow through at all once my heuristic failed. I never forgot that experience.
Many, many biases have been identified, but let’s discuss some of the ones that are most relevant to testing.
First let’s look at an example. Reading this obviously involves judgment. Can you read it?
Usability bugs are often missed due to this bias. I tested an eOrder entry application for an annuity product where, upon the death of one spouse, the surviving spouse could elect to continue the contract rather than annuitize. This election had to be made within six months of the date of death. The developer placed the date of death about eight screens into the application. So, a user could type in over half of the required information only to find out that it was too late to proceed based upon the date of death.
For example, if the specification is that a field should accept only alpha characters, the tester must also validate that the field does not accept numeric.
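That alpha-only field can be sketched as a pair of positive and negative checks. In the sketch below, `is_valid_name` is a hypothetical stand-in for the real field's validation logic; the point is that the negative assertions are the ones the congruence bias tempts us to skip:

```python
# Hypothetical validator for the "alpha characters only" field described above.
import re


def is_valid_name(value: str) -> bool:
    """Accept only non-empty, purely alphabetic input, per the assumed spec."""
    return bool(re.fullmatch(r"[A-Za-z]+", value))


# Positive test: the functionality works according to the specification.
assert is_valid_name("Smith")

# Negative tests: the field must NOT accept what the spec excludes. These are
# the cases a congruence-biased test plan tends to omit.
assert not is_valid_name("Smith1")   # numeric character
assert not is_valid_name("")         # empty input
assert not is_valid_name("O'Brien")  # punctuation: if this should pass, the spec needs clarifying
```

A test plan that stops after the positive assertion has only confirmed the tester's hypothesis; the negative assertions probe the alternative hypotheses.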
For example, testers may test a less experienced developer’s code more thoroughly than a more experienced developer’s, because finding more bugs in the less experienced developer’s code will confirm the tester’s expectations.
When I travel, I usually take the express bus to the airport. When I’m leaving early in the morning, I buy my ticket ahead of time, but on this particular trip I didn’t. So there I was at about five minutes of 4 a.m. While I was waiting at the traffic light, I pulled $20 out of my purse and put it in my pocket. When I got to the Logan Express, the bus was already there, so I parked and ran with my bag flying behind me, tossed my $20 under the window, and asked for a ticket. The cashier said something I didn’t quite understand but gave me the ticket. When I got on the bus, I noticed the ticket was a different color and that it was a senior ticket. Because I handed over the $20 and said nothing when he attempted to question it, the cashier falsely interpreted that I was a senior citizen and sold me the wrong ticket.
Play Gorilla Video next
I was testing a Smart Grid application and the associated devices, including programmable thermostats, load control devices, etc. I had a test lab with 18 different boxes with these devices and electric meters associated with them. In order to get load on the meters to test the calculations, we used heaters to simulate high usage. The test lab would get up to over 100 degrees. When the electric company would call a “critical event”, the devices would show a red signal. When I tested a new release, I totally missed that the critical event didn’t activate the red signal and change the pricing, because I was so focused on verifying the bug fixes turned over in the release that I missed this new regression bug.
I suppose fatigue due to the extreme heat may have factored into my missing the bug as well.
An answer to why we aren’t as smart as we think comes from the research of West, Meserve and Stanovich, which showed that we can easily identify biases in others’ decision-making processes and we tend to believe that others are more susceptible to biases and preconceived notions than ourselves. We actually have a bias about biases!
Now that we understand why we are so susceptible to biases and preconceived notions, how do we effectively manage the way we think throughout the test process? Although the research shows that we cannot prevent biases and preconceived notions through awareness and understanding, we, as individual testers, can attempt to manage our thought processes as we approach our test projects.
In their working paper, How Can Decision Making Be Improved, Milkman et al. review the work of several researchers which suggests various ways of invoking system 2 thinking more heavily than system 1 thinking. These include evaluating multiple options simultaneously rather than accepting or rejecting each option individually, making choices when under less cognitive load and making decisions in groups rather than individually.
But will an increased focus on system 2 thinking improve testers’ effectiveness at finding bugs? I believe not. In software testing, our test methodology already focuses us and requires us to use system 2 thinking.
Requirements coverage matrices, data matrices, and test case execution coverage metrics are the tools of our trade and are designed to prevent us from missing bugs. These tools and processes are very useful in validating that the code is built to specifications and that all requirements have been coded and tested.
However, does validating that the application functions according to specifications mean that the application is bug free? Does 100 percent test coverage with all test cases passed mean that the application works as the business customer intended? We all know that the answer to both of these questions is “not necessarily”. Now the question becomes: how do we find the bugs that prevent the application from working as the customer intended? We believe this requires us to refocus on system 1 thinking and intuition.
As much as heuristics are biases in themselves, when used with oracles (the principles for applying a particular heuristic), they can be quite valuable in invoking System 1 thinking. For example, if a typical user, or various users if the application will be used by different users for different functions, is our oracle, we might test workflows and find bugs that we might not find by executing our test cases. How many times have you reported a bug to which the developer responds: “The user would never do that!”?
Users make decisions about using applications based on look and feel as well as ease of use. Since users and potential customers use their System 1 thinking to make these decisions, it follows that to test effectively, we, as testers, must also use System 1 thinking and anticipate the emotions of the user. There is no better way to perform this testing than to consider our emotions and what they might be telling us about the application under test. For example if we are feeling frustrated and anxious, perhaps there is a performance or usability issue.
Instead of asking what test they are instructed to run, exploratory testers ask, “What’s the best test I can perform right now?”
Exploratory testing is also about learning. The tester learns the application, maybe something about the subject domain, and perhaps a lot about what’s good and bad about the application.
If you kill a cockroach in your kitchen, do you assume you’ve killed the last bug? Or do you call the exterminator?
Daniel Kahneman recommends that you ask three questions to minimize the impact of cognitive biases in your decision making. Here’s how they apply to finding bugs.