Test reporting is something few testers take time to practice. Nevertheless, it's a fundamental skill—vital for your professional credibility and your own self-management. Many people think management judges testing by bugs found or test cases executed. Actually, testing is judged by the story it tells. If your story sounds good, you win. A test report is the story of your testing. It begins as the story we tell ourselves, each moment we are testing, about what we are doing and why. We use the test story within our own minds to guide our work. James Bach explores the skill of test reporting and examines some of the many different forms a test report might take. As in other areas of testing, context drives good reporting. Sometimes we make an oral report; occasionally we need to write it down. Join James for an in-depth look at the art of test reporting.
A test strategy is the set of ideas that guides your test design. It's what explains why you test this instead of that, and why you test this way instead of that way. Strategic thinking matters because testers must make quick decisions about what needs testing right now and what can be left alone. You must be able to work through major threads without being overwhelmed by tiny details. James Bach describes how test strategy is organized around risk but is not defined before testing begins. Rather, it evolves alongside testing as we learn more about the product. We start with a vague idea of our strategy, organize it quickly, and document as needed in a concise way. In the end, the strategy can be as formal and detailed as you want it to be. In the beginning, though, we start small. If you want to focus on testing and not paperwork, this approach is for you.
This document provides an overview and introduction to the Rapid Software Testing course. It acknowledges those who contributed to developing the course material. The document outlines some assumptions about the audience for the course, including that attendees test software and want to improve their testing process. It presents the primary goal of the course as teaching how to test under uncertainty and with scrutiny. Key themes of Rapid Testing are also summarized, including putting the tester's mind at the center and considering cost versus value in testing activities.
A Rapid Introduction to Rapid Software Testing - TechWell
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Michael Bolton introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems. The rapid approach isn't just testing with speed or a sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that the most important things get done, and constantly asks how testers can help speed up the successful completion of the project. Join Michael to see how rapid testing focuses on both the mindset and skill set of the individual tester who uses tight loops of exploration and critical thinking skills to help continuously re-optimize testing to match clients' needs and expectations.
A Rapid Introduction to Rapid Software Testing - TechWell
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
Exploratory testing is a software testing technique that emphasizes the personal freedom and responsibility of testers to design and execute tests. It involves simultaneous learning, test design, and test execution, adapting tests as they are performed. Exploratory testing encourages creativity and adaptability, finding bugs more quickly than scripted testing alone. However, it depends on tester skills and knowledge and may result in redundant testing when combined with scripted testing. Overall, exploratory testing is best used alongside scripted testing and when test documentation is limited.
This document provides an overview of exploratory testing techniques. It discusses that exploratory testing involves simultaneous learning, test design, and test execution. Exploratory testing is tester-centric and focuses on problem solving strategies like heuristics rather than scripts. The document dispels some myths about exploratory testing, including that it is unstructured and cannot involve documentation. It provides examples of how documents can be used for reflection, information sharing, and reporting in exploratory testing.
The document summarizes an exploratory testing workshop. It discusses exploratory testing approaches, common traps testers fall into, and provides tips for effective exploratory testing. As an exercise, participants are asked to use exploratory testing to find issues with a Tilted Twister device within 20 minutes. Key problems identified include inability to detect color differences, motor arm overshooting, difficulty turning it on, calibration cube being too big, and taking too long to solve with memory issues. The debrief discusses the testing process and importance of the tester mindset in exploratory and automated testing.
Julian Harty - Alternatives To Testing - EuroSTAR 2010 - TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on Alternatives To Testing by Julian Harty. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities done in parallel: learning, test design, and test execution. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. James Bach looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. James focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.
Exploratory testing is an approach that emphasizes freedom and responsibility of individual testers in a process where continuous learning, test design, and execution occur simultaneously. It is a disciplined, planned, and controlled form of testing that focuses on continuous learning. Research has shown there is no significant difference in results between exploratory testing and preplanned test cases, but exploratory testing requires significantly less effort overall. Effective exploratory testing requires skills like making models, keeping an open mind, and risk-based testing approaches. Both the strengths and potential blind spots of exploratory testing are discussed.
Shrini Kulkarni - Software Metrics - So Simple, Yet So Dangerous - TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Software Metrics - So Simple, Yet So Dangerous by Shrini Kulkarni. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
The Test Coverage Outline: Your Testing Road Map - TechWell
To assist in risk analysis, prioritization of testing, and test reporting (telling your testing story), you need a thorough Test Coverage Outline (TCO)—a road map of your proposed testing activities. By creating a TCO, you can prepare for testing without having to create a giant pile of detailed test cases. Paul Holland says that a comprehensive TCO helps the test team to get buy-in for the overall test strategy very early in the project and is valuable for identifying risk areas, testability issues, and resource constraints. Paul describes how to create a TCO, including the use of heuristic-based checklists to help ensure you don’t overlook important elements in your testing. Learn multiple approaches for critical information gathering, the artifacts used as input for creating a TCO, and how you can use a TCO to maintain testing focus. Take back a new, lightweight tool to help you tell the testing story throughout your project.
Rikard Edgren - Testing is an Island - A Software Testing Dystopia - TEST Huddle
This document summarizes trends in software testing that could diminish its effectiveness and enjoyment. It notes an increasing focus on verification over validation, precise measurement over subjective judgement, and short-term metrics over long-term quality. This narrowing scope risks making testers isolated and limiting their creativity, motivation and ability to consider the full context of a project. The document advocates a holistic and subjective approach that considers people and intangible factors, not just short-term quantifiable results. Subjectivity and considering the whole system, not just parts, are presented as useful for testing.
Santa Barbara Agile: Exploratory Testing Explained and Experienced - Maaret Pyhäjärvi
Exploratory Testing Explained and Experienced
- Exploratory testing is an approach to software testing that involves dynamically testing software without a fixed plan, using the results of previous tests to determine subsequent tests.
- It is a disciplined approach that finds unknown unknowns and helps testers examine software from different perspectives to uncover more bugs. Tests are performances rather than fixed artifacts.
- Exploratory testing requires testers to be able to strategically choose and defend their test approaches, explain what they have tested, and determine when they are done testing rather than just finding bugs randomly. It is a more systematic approach than unplanned testing.
Michael Bolton - Heuristics: Solving Problems Rapidly - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Heuristics: Solving Problems Rapidly by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
In this article I will explore why I think that deadlines should never be communicated to the development teams, and why all deadlines are basically meaningless anyway.
This document discusses the need to rethink the role of testers in agile and structured projects. It argues that changes in business demands and development practices are squeezing testers and that many current testing roles and skills may disappear. Specifically, it predicts that half of onshore testing roles will be eliminated in 5 years. It recommends testers focus on more strategic roles like business analysis, requirements management, and assurance rather than traditional testing tasks.
This talk suggests how we might make sense of the tools landscape of the near future, where the pressure to modernise processes and automate is greatest, and what a new test process supported by tools might look like.
Takeaways:
- We need to take machine learning in testing seriously, but it won’t be taking our jobs just yet
- We don’t need more test automation tools; today we need tools that capture tester knowledge
- Tools that learn and think can’t work for testers until we solve the knowledge capture challenge.
View On-Demand Webinar: https://youtu.be/EzyUdJFuzlE
Things Could Get Worse: Ideas About Regression Testing - TechWell
Michael Bolton, DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. Michael is a leader in the context-driven software testing movement with twenty years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.
Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of testers to continually optimize the value of their work. It is the process of three mutually supportive activities done in parallel: learning, test design, and test execution. With skill and practice, exploratory testers typically uncover an order of magnitude more problems than when the same amount of effort is spent on procedurally scripted testing. All testers conduct exploratory testing in one way or another, but few know how to do it systematically to obtain the greatest benefits. Even fewer can articulate the process. Jon Bach looks at specific heuristics and techniques of exploratory testing that will help you get the most from this highly productive approach. Jon focuses on the skills and dynamics of exploratory testing, and how it can be combined with scripted approaches.
Using Stories to Test Requirements and Systems - Paul Gerrard
The document discusses using business stories to test requirements and systems. It explains that stories can help identify omissions, inconsistencies, and ambiguity in requirements. Stories are applicable at any stage of a project for different purposes. Structured stories follow a common format with a header, scenarios with given/when/then structures, and can have multiple scenarios to test different conditions. Stories can validate requirements by example and generate both manual and automated test cases. The document argues that a structured, disciplined approach to stories can benefit both agile and structured development approaches.
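To make the given/when/then shape concrete, here is a minimal sketch of one scenario written as an executable pytest-style check in Python. The discount rule, names, and figures are invented for illustration and are not taken from Gerrard's material.

```python
# Hypothetical business rule for illustration: 10% off orders over 100.00.

def apply_discount(order_total: float) -> float:
    """Toy system under test (invented rule, not from the source)."""
    return round(order_total * 0.9, 2) if order_total > 100.00 else order_total

def test_discount_applied_to_large_order():
    # Given an order totalling more than 100.00
    order_total = 150.00
    # When the discount rule is applied
    payable = apply_discount(order_total)
    # Then the customer pays 10% less
    assert payable == 135.00

def test_no_discount_on_small_order():
    # Given an order totalling 100.00 or less
    order_total = 80.00
    # When the discount rule is applied
    payable = apply_discount(order_total)
    # Then the price is unchanged
    assert payable == 80.00

if __name__ == "__main__":
    test_discount_applied_to_large_order()
    test_no_discount_on_small_order()
    print("both scenarios pass")
```

Each scenario reads as a specification by example: the Given establishes context, the When exercises the behavior, and the Then states the expected outcome, so the same text can validate a requirement and serve as an automated check.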
The document provides guidance for managing a team of junior testers. It discusses challenges such as lack of skills and experience in junior testers. It recommends setting clear expectations, providing frequent communication and feedback, ensuring knowledge sharing, and protecting the team to help them succeed. Patience and structure are important, as is repeating key messages, to help junior testers learn and improve. The goal is for the team to work cooperatively toward a common objective.
Rik Teuben - Many Can Quarrel, Fewer Can Argue - TEST Huddle
EuroSTAR Software Testing Conference 2009 presentation on Many Can Quarrel, Fewer Can Argue by Rik Teuben. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Fabian Scarano - Preparing Your Team for the Future - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Preparing Your Team for the Future by Fabian Scarano. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Improving the Mobile Application User Experience (UX) - TechWell
If users can’t figure out how to use your mobile applications and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied, so understanding, measuring, testing, and improving these factors is critical to the success of today’s mobile applications. However, these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of usability and UX, discusses how they are related, and then examines their importance for today’s mobile applications. After a brief discussion of how the meanings of usability and user experience depend on the context of your product, Phil defines measurements of usability and user experience that you can use right away to quantify these subjective attributes. He crystallizes abstract definitions into concepts that can be measured, with metrics to evaluate and improve your product, and provides numerous examples to demonstrate how to improve your mobile app.
CAN I USE THIS?—A Mnemonic for Usability Testing - TechWell
Often, usability testing does not receive the attention it deserves. A common argument is that usability issues are merely “training issues” and can be dealt with through the product's training or user manuals. If your product is only for internal staff use, this may be a valid response. However, the market now demands easy-to-use products—whether your users are internal or external. David Greenlees shares a tool he has developed to generate test ideas for usability testing. His mnemonic—CAN I USE THIS?—provides a solid starting point for testing any product. C for Comparable Product, A for Accessibility, N for Navigation … David shares how he has used this mnemonic on past projects while the training argument took place around him, and how they realized product improvements and greater user acceptance. Learn how you can quickly and effectively use this mnemonic on any project so you can give usability testing the attention it deserves.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
It’s one week after your product’s launch, and everyone is happy. After all, for the first time in years, your product development exceeded expectations. Coding was completed on time with very few defects. Suddenly, the report of a major usability and security flaw destroys the euphoria and sends everything into chaos. Unfortunately, this is not uncommon in our industry. So, how can we keep such things from happening? As he shares stories about the complex domain of product delivery, Ray Arell introduces a framework with associated emergent practices that enable you to better guide your product to success. He presents an overview of the Cynefin model, describes complicated and complex systems, and discusses how to use the model to establish an effective testing strategy. Ray describes how to identify key patterns of product usage to establish a robust defect-prevention system that reduces product development costs. Lastly, Ray describes how to interview customers to identify key quality expectations, ensuring that your testing focuses on producing the highest value for your customers.
Randy Rice presented on lessons learned from user acceptance testing (UAT) on four different projects. The first project involved a new laboratory testing system that had severe performance issues and required three redeployments. The second project with the same company was more successful due to improved testing practices. The third project involved designing many tests based on business scenarios before the system's interface was known. The last project involved a complex legal system where system testing found most defects and UAT involved a simplified walkthrough. Key lessons included not relying solely on UAT, implementing incrementally, and adjusting UAT plans as more is learned.
Congruent Coaching: An Interactive Exploration - TechWell
We have opportunities to coach people all the time. Much of what we see as coaching is actually undercover training. Real coaching is richer—offering support while explaining options. In this interactive session, Johanna Rothman invites you to explore how to coach, regardless of your position in the organization. Teaching is just one option for coaching. You have many other options, depending on your coaching stance. You may select a counselor’s stance if you are managing up or a partner’s stance if you are a peer. You might even select a reflective observer’s stance or a technical advisor’s stance, depending on the situation. We will explore what to do when you see opportunities for coaching but you haven’t been asked to coach. Bring your coaching concerns, whether you are coaching onsite or at a distance, one-on-one, or with teams. Let’s learn and build our coaching skills together.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics is complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
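To ground two of the metrics Rick names, here is a small Python sketch using the standard formulas: defect removal efficiency as the percentage of all known defects caught before release, and defect density as defects per thousand lines of code (KLOC). The figures are invented for illustration.

```python
# Illustrative only: common textbook formulas with invented numbers.

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Percentage of all known defects that were caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total if total else 0.0

def defect_density(defect_count: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / size_kloc

if __name__ == "__main__":
    # 180 defects found during testing, 20 more reported after release:
    print(defect_removal_efficiency(180, 20))  # 90.0 (%)
    # 200 known defects in a 50 KLOC product:
    print(defect_density(200, 50.0))           # 4.0 defects per KLOC
```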
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. James Bach shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join James for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
Designing for Testability: Differentiator in a Competitive Market - TechWell
In today’s cost-conscious marketplace, solution providers gain advantage over competitors when they deliver measurable benefits to customers and partners. Systems of even small scope often involve distributed hardware/software elements with varying execution parameters. Testing organizations often deal with a complex set of testing scenarios, increased risk for regression defects, and competing demands on limited system resources for a continuous comprehensive test program. Learn how designing a testable system architecture addresses these challenges. David Campbell offers practical guidance on the process to make testability a key discriminator from the earliest phases of product definition and design. Learn approaches that consistently deliver for high achieving organizations, and how these approaches impact schedule and architecture performance. Gain insight on how to select and customize techniques that are appropriate for your organization’s size, culture, and market.
A Guide to Cross-Browser Functional Testing - TechWell
The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing portable and maintainable automated test scripts, which are at the heart of cross-browser testing. Learn some practical approaches to overcome these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.
User Acceptance Testing: Make the User a Part of the Team - TechWell
Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting realistic expectations. Showing how users add value and taking them through the UAT process strengthens their ability and commitment. Conducting user acceptance testing sessions as software functionality becomes available helps to build confidence and capability—and find defects earlier. Susan Bradley shares a five-step process that you can use in your organization to conduct user acceptance testing. Learn to conduct training, set up daily testing expectations, assign test cases to users, create a shared information site for both test case management and feedback documentation, conduct a review of noted issues with all interested parties, and participate in a retrospective regarding the UAT process to improve the process for next time.
Extreme Automation: Software Quality for the Next Generation Enterprise - TechWell
Software runs the business. The modern testing organization aspires to be a change agent and an inspiration for quality throughout the entire lifecycle. To be a change agent, the testing organization must have the right people and skill sets, the right processes in place to ensure proper governance, and the right technology to aid in the delivery of software in support of the business line. Traditionally, testing organizations have focused on the people and process aspect of solving quality issues. With the ever-increasing complexity of the software needed to run the enterprise, testing professionals must adopt technology to help solve some of the most challenging quality issues ever. In short, testing organizations must make the move to extreme automation and become proficient with modern tooling and its benefits. Theresa Lanowitz focuses on new and emerging technologies—proven and successful—to add to the workbench of the test professional.
Testing in the Wild: Practices for Testing Beyond the Lab - TechWell
The stakes in the mobile app marketplace are very high, with thousands of apps vying for the limited space on users’ mobile devices. Organizations must ensure that their apps work as intended from day one and to do that must implement a successful mobile testing strategy leveraging in-the-wild testing. Matt Johnston describes how to create and implement a tailored in-the-wild testing strategy to boost app success and improve user experience. Matt provides strategies, tips, and real-world examples and advice on topics ranging from fragmentation issues, to the different problems inherent in web and mobile apps, to deciding what devices you must test vs. those you should test. After hearing real-world examples of how testing in the wild affects app quality, leave with an understanding of and actionable information about how to launch apps that perform as intended in the hands of end-users—from day one.
During the past decade, test engineers have become experts in browser compatibility testing. Just when we thought everything was under control, along come native mobile applications that need to run across platforms far more diverse than the desktop browser landscape has ever been. The variety of OSs, screen sizes, and hardware technology combine to create hundreds of configurations that need some testing. Manual testing across so many deployment targets will drive anyone crazy. Stu Stern looks at the biggest challenges in mobile testing: functional, platform, display, and device compatibility testing and explores how you can use MonkeyTalk, a free open source tool to create test suites that can be easily run across today’s menagerie of mobile devices. MonkeyTalk can help you automate functional interactive tests for native, mobile, and hybrid iOS and Android apps—everything from simple "smoke tests" to sophisticated data-driven test suites.
Today’s test organizations often have sizable investments in test automation. Unfortunately, running and maintaining these test suites represents another sizable investment. All too often this hard work is abandoned and teams revert to a more costly, but familiar, manual approach. Jared Richardson says a more practical solution is to integrate test automation suites with continuous integration (CI). A CI system monitors your source code and compiles the system after every change. Once the build is complete, test suites are automatically run. This approach of ongoing test execution provides your developers rapid feedback and keeps your tests in constant use. It also frees up your testers for more involved exploratory testing. Jared shows how to set up an open source continuous integration tool and explains the best way to introduce this technique to your developers and testers. The concepts are simple when presented properly and provide solid benefits to all areas of an organization.
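As a rough sketch of the monitor-compile-test cycle described above, here is a toy polling loop in Python. It is illustrative only, not a real CI server: the make build and pytest commands are assumptions about the project, and production tools such as Jenkins add queueing, build history, and notifications on top of this basic cycle.

```python
# Toy continuous-integration loop: poll a local git checkout, and when
# HEAD changes, rebuild and run the test suite. Build/test commands are
# placeholders for whatever your project actually uses.
import subprocess
import time

def current_head() -> str:
    """Return the commit hash the checkout currently points at."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def build_and_test() -> bool:
    """Compile the system, then run the automated test suite."""
    build = subprocess.run(["make", "build"])   # assumed build step
    if build.returncode != 0:
        return False
    tests = subprocess.run(["pytest"])          # assumed test runner
    return tests.returncode == 0

if __name__ == "__main__":
    last_seen = None
    while True:
        head = current_head()
        if head != last_seen:
            last_seen = head
            status = "passed" if build_and_test() else "FAILED"
            print(f"revision {head[:8]}: build and tests {status}")
        time.sleep(60)  # poll once a minute
```

The point of the sketch is the feedback loop: every change triggers a build and a full test run, so developers hear about breakage within minutes and the automated suites stay in constant use.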
This document summarizes James Bach and Chris Ojaste's repair of a broken Kraft "Grate-It Fresh" parmesan cheese dispenser on December 25, 2008. The dispenser was not grating cheese despite 1/3 of the block remaining. Their analysis determined the grating mechanism involved a rotatable grating plate attached to a threaded spindle that pushes the cheese into the blades. They found the spindle threads were stripped, allowing the pressure plate to slip. Attempts to manually grate the cheese failed. Further examination revealed grooves in the cheese face preventing the grater blades from engaging, identifying the root cause.
Test analysis & design good practices @ TDT Iasi, 17 Oct 2013 - Tabăra de Testare
The document discusses test analysis and design best practices. It covers defining test objectives, analyzing test items to identify conditions, designing test cases using various techniques, and ensuring traceability between requirements and test cases. Good practices for writing effective test cases are also presented, such as using a standardized naming convention and writing steps that verify a single testing idea. The importance of analysis and design in translating requirements into testable items prior to execution is emphasized.
Acceptance And Story Testing Patterns - By Charles Bradley - Synerzip
This webinar discusses best practices for creating Story Tests (aka Acceptance Tests).
Acceptance Testing, also known as Story Testing, is vital to achieve the Agile vision of “working software over comprehensive documentation.”
Read more at https://www.synerzip.com/webinar/acceptance-and-story-testing-patterns/
Testing is done throughout development to minimize risks. Testers evaluate the product, create test conditions, and identify potential issues to improve quality. Effective testing considers coverage of the product's structure, functions, data, interfaces, platform, and operations through accurate models and test procedures. Testers communicate risks and results so informed decisions can be made.
Importance of Software testing in SDLC and Agile - Chandan Mishra
1. The document discusses the importance of testing in the software development lifecycle (SDLC) to improve quality and identify defects before deployment. Testing helps verify requirements are implemented correctly and that components integrate properly.
2. It explains why separate testers are needed to test software in a neutral, unbiased way. Testers have a "negative" approach to find bugs, which developers lack due to implementation pressures.
3. The document outlines different types of software testing like unit, integration, system and acceptance testing. It also describes testing techniques like boundary value analysis, equivalence partitioning and comparison testing.
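As a concrete illustration of two techniques named in point 3, here is a small Python sketch of equivalence partitioning and boundary value analysis against a hypothetical rule that accepts ages 18 through 65 inclusive; the rule and the expected results are invented for the example.

```python
# Hypothetical input rule for illustration: valid ages are 18..65 inclusive.

def is_valid_age(age: int) -> bool:
    """Toy system under test."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per class
# (below the range, inside the range, above the range).
equivalence_cases = {17: False, 40: True, 70: False}

# Boundary value analysis: values at and adjacent to each boundary.
boundary_cases = {17: False, 18: True, 19: True,
                  64: True, 65: True, 66: False}

for value, expected in {**equivalence_cases, **boundary_cases}.items():
    assert is_valid_age(value) == expected, f"unexpected result for age {value}"
print("all partition and boundary checks passed")
```

Partitioning keeps the number of cases small by testing one value per class, while the boundary cases target the off-by-one mistakes that cluster at the edges of a range.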
The document discusses exploratory testing and Keri Smith. It provides an overview of exploratory testing, noting that it emphasizes personal freedom and responsibility of testers to continually optimize testing. It also discusses Keri Smith's work in conceptual art and guided journals that encourage observing the world like artists and scientists.
Empirical research methods for software engineering - sarfraznawaz
This document outlines guidelines for empirical research methods in software engineering. It discusses case studies, experimental research, surveys, and post-mortem analysis. For each method, it provides examples and discusses how the method can be used to study software engineering problems. It also lists detailed guidelines for different aspects of empirical research, such as experimental context and design, data collection, analysis, and presentation and interpretation of results. The goal of the guidelines is to improve the quality and rigor of empirical studies in software engineering.
Trends in Software Testing: There has been a slow realization among the top executives that simply outsourcing testing to the lowest bidder is not resulting in a sufficient level of quality in their software products. In this session, Paul Holland will discuss how American companies are starting to reconsider “factory school” testing and are no longer satisfied with the current situation of simply outsourcing their “checking”. As the development side of software continues its dramatic shift toward Agile development – what role can testers have and how can testers still add value?
The Heuristic Test Strategy Model provides a framework for designing effective test strategies. It involves considering four key areas: 1) the project environment including resources, constraints, and other factors; 2) the product elements to be tested; 3) quality criteria such as functionality, usability, and security; and 4) appropriate test techniques to apply. Some common test techniques include functional testing, domain testing, stress testing, flow testing, and scenario testing.
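One lightweight way to apply the model is as a checklist data structure that prompts strategy questions during planning. The Python sketch below is illustrative and includes only a sample of entries from each area, not the full model.

```python
# A sample of the four areas as a simple checklist; entries abbreviated,
# not the complete Heuristic Test Strategy Model.
heuristic_test_strategy_model = {
    "project environment": ["mission", "information", "schedule", "budget"],
    "product elements": ["structure", "functions", "data",
                         "platform", "operations"],
    "quality criteria": ["capability", "reliability",
                         "usability", "security"],
    "test techniques": ["function testing", "domain testing",
                        "stress testing", "flow testing",
                        "scenario testing"],
}

# Walk the checklist to generate strategy prompts during test planning.
for area, items in heuristic_test_strategy_model.items():
    for item in items:
        print(f"Have we considered {item} ({area})?")
```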
Whitepaper: Test Case Design and Testing Techniques - Factors to Consider - RapidValue
Software testing is an essential technique for assessing the quality of a software product or service. In software testing, test cases and scenarios play a pivotal role. Good strategic design and technique help improve the quality of the software testing process.
This whitepaper provides information about test case design activities, test analysis, quality risks, testing techniques, and phases of test development. The paper also explains the factors to consider when choosing the right testing techniques and provides a checklist of test cases based on our rich experience of testing mobile apps.
Testing is important because software errors can have serious consequences like customer bank balances being inflated by $763 billion or radiation therapy machines overexposing patients. Testing helps verify that software meets its specifications and functions as intended. There are two main types of testing: static testing which analyzes source code without running programs, and dynamic testing which executes programs to look for errors. It is difficult to exhaustively test all possible inputs for non-trivial programs, so test cases must strategically sample a small percentage of inputs to uncover many defects. Both black box and white box testing methods aim to design effective test cases.
Get the Balance Right: Acceptance Test Driven Development, GUI Automation and... - Michael Larsen
The document discusses different testing approaches including Acceptance Test Driven Development (ATDD), Test Driven Development (TDD), GUI automation, and exploratory testing. It explains that ATDD and TDD are design processes that help ensure software meets project needs, while testing involves asking questions of a product. GUI automation can simulate user actions but is fragile. Exploratory testing involves testing design and execution together in a flexible way. The document argues that these approaches work best in balance and that exploration is important at all levels, including with automation. It emphasizes putting the customer first and seeing the approaches as interdependent parts of an overall quality process.
The document discusses challenges with testing software without requirements documentation and provides some strategies to help with testing in such situations. It notes that QA teams may have to test without knowing what the application is supposed to do. It then suggests several paths that testing teams can take when faced with limited or missing documentation, such as UI teams creating screenshots and development teams creating technical design documents. The document also advocates for daily standup meetings between teams to help coordinate testing efforts in lieu of documentation.
This document provides an introduction to software testing for startups. It discusses that testing early in the development cycle results in faster development, better software, and enhanced investment appeal. It recommends creating test cases based on functional specifications and menus. The document outlines six principles of testing, including that you cannot test every scenario and that defects congregate in particular areas. It recommends testing frequently, with developers and testers working closely together.
This document discusses exploratory testing and compares it to scripted testing. It outlines some key benefits of scripts such as careful test design and review. However, it also notes that scripts can become outdated as risk profiles and software change over time. Exploratory testing is described as simultaneously learning, designing, and executing tests without pre-scripted instructions. Some misconceptions about exploratory testing are addressed, such as the idea that it cannot be managed, measured, or documented. The document suggests that most situations benefit from a mix of exploratory and scripted approaches.
The document discusses test design and provides tips for becoming a better test designer. It explains that test design involves coming up with a well-thought-out and broad set of tests based on the application and schedule. Both over-testing and under-testing should be avoided. It also emphasizes practicing testing, collaborating with others, learning about the application, and finding new testing ideas to expand one's toolbox. The best test tool is noted as being one's own brain.
The document provides an overview of software testing and black box testing techniques. It discusses various testing methods like white box, black box and grey box testing. It also covers different types of testing like functional testing, acceptance testing, etc. and black box testing techniques like equivalence partitioning, boundary value analysis, state transition testing, and decision table testing. The document lists advantages of black box testing like not requiring knowledge of internal implementation and disadvantages like only a small number of inputs can be tested. It concludes by thanking the audience.
The document discusses various techniques for testing software such as black box testing, white box testing, coverage-based testing, model-based testing, property-based testing, and agile testing. It provides details on different types of coverage like code coverage, data coverage, and model-based coverage. It also describes different testing techniques like equivalence partitioning, input domain testing, and syntax generation that can be used with model-based testing. The document emphasizes applying critical thinking skills to testing and considering different perspectives.
This document contains the syllabus for a course on software verification, validation, and testing (CSE 565). It lists the topics that will be covered each week, including testing techniques like requirements-based testing, exploratory testing, structure-based testing, integration testing, and usability testing. It also covers testing at different stages like unit testing, integration testing, and system testing. The document provides an overview of the areas and concepts that will be learned throughout the course.
This document provides guidance on developing a tester mindset and strategies for effective testing. It discusses developing an analytical eye for detail and skeptical approach. It emphasizes avoiding confirmation bias by testing to disprove hypotheses rather than just verify them. It also stresses generating new testing ideas by considering different customer models and perspectives. Finally, it discusses prioritizing testing based on risk factors like new features, complexity, critical functions and upstream/downstream dependencies. The overall message is the importance of strategic and logical thinking to structure effective testing.
Isabel Evans stopped drawing and painting after being told she was not very good at it, which led to a loss of confidence in her creative and professional abilities. However, she realized that attempting creative activities is important for cognitive and emotional development, and that making mistakes and learning from failures allows for growth. By reengaging with failure through art and with support from others, Isabel was able to regain confidence in her abilities and reboot her career. The document discusses different perspectives on failure and the importance of learning from mistakes.
Instill a DevOps Testing Culture in Your Team and Organization TechWell
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build ArchitectureTechWell
This document summarizes a half-day tutorial on test design for fully automated build architectures presented by Melissa Benua of mParticle at STAREAST 2018. The tutorial covered guiding principles for test design including prioritizing important and reliable tests, structuring automated pipelines around components, packages, and releases, and monitoring test results through code coverage, flaky test handling, and logging versus counters. It also included exercises mapping test cases to functional boundaries and categories of tests to pipeline stages.
System-Level Test Automation: Ensuring a Good StartTechWell
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
Build Your Mobile App Quality and Test StrategyTechWell
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for SuccessTechWell
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
Develop WebDriver Automated Tests—and Keep Your SanityTechWell
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
Eliminate Cloud Waste with a Holistic DevOps StrategyTechWell
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOpsTechWell
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, multidimensional workforce enablement supported by infrastructure changes, redeveloped collaboration models, and more. From his real-world experiences Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead quality in DevOps.
The Fourth Constraint in Project Delivery—LeadershipTechWell
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile TeamsTechWell
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is, people with unique skills. Although teams composed entirely of T-shaped people are ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used simply for managing or monitoring. They can help us identify trade-offs between options for what to do next, rather than serving as punitive or, worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from his ability, rather than position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps ImplementationTechWell
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery ProcessTechWell
The document summarizes a presentation about including databases in a continuous integration/delivery process. It discusses treating database code like application code by placing it under version control and integrating databases into the DevOps software development pipeline. This allows databases to be built, tested, and released like other software through continuous integration, delivery, and deployment.
Mobile Testing: What—and What Not—to AutomateTechWell
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for SuccessTechWell
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent workplace with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can be tracked by the minute and packages at every stop, and customers now expect this same customer service model should exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
Rapid Software Testing: Reporting
1. MO
Half-day Tutorials
5/5/2014 1:00:00 PM
Rapid Software Testing: Reporting
Presented by: James Bach, Satisfice, Inc.
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. James Bach, Satisfice, Inc.
James Bach is founder and principal consultant of Satisfice, Inc., a software testing and quality assurance company. In the eighties, James cut his teeth as a programmer, tester, and SQA manager in Silicon Valley in the world of market-driven software development. For nearly ten years, he has traveled the world teaching rapid software testing skills and serving as an expert witness in court cases involving software testing. James is the author of Lessons Learned in Software Testing and Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success.
3. Rapid Software Testing: Reporting
James Bach, Satisfice, Inc.
james@satisfice.com
www.satisfice.com
Rapid Testing
Rapid testing is a mind-set and a skill-set of testing focused on how to do testing more quickly, less expensively, with excellent results. This is a general testing methodology. It adapts to any kind of project or product.
4. The Premises of Rapid Testing
1. Software projects and products are relationships between people, who are creatures both of emotion and rational thought.
2. Each project occurs under conditions of uncertainty and time pressure.
3. Despite our best hopes and intentions, some degree of inexperience, carelessness, and incompetence is normal.
4. A test is an activity; it is performance, not artifacts.
5. Testing’s purpose is to discover the status of the product and any threats to its value, so that our clients can make informed decisions about it.
6. We commit to performing credible, cost-effective testing, and we will inform our clients of anything that threatens that commitment.
7. We will not knowingly or negligently mislead our clients and colleagues.
8. Testers accept responsibility for the quality of their work, although they cannot control the quality of the product.
What is a test report?
A test report is any description, explanation, or justification of the status of a test project.
A comprehensive test report is all of those things together.
A professional test report is one competently, thoughtfully, and ethically designed to serve your clients in that context.
A test report isn’t “just the facts.” It’s a story about facts.
Learn to tell the testing story!
5. Advice for Test Reporting
Build credibility (by being credible).
Know the context of your tests (test framing).
Never use a number out of context (e.g., no test case counts).
Highlight general test activities (put tests in context).
Highlight product risk (put bugs in context).
Practice “safety language” (avoid misleading speech).
Tell a three-level testing story (status, testing, value).
Don’t waste people’s time (fit the report to the context).
The First Law of Reporting: Be Credible!
They won’t listen to uncomfortable information, unless you are credible.
They’ll assume you’re mistaken about surprising information, unless you are credible.
They’ll assume you’re exaggerating about risks, unless you are credible.
They’ll micro-manage your reporting, unless you are credible.
6. Actually care about the project.
Actually care about people on the project.
Actually know how to do your job.
Do not tell lies or exaggerate.
Sweat the details in your own work.
Gain experience.
Study the technology.
Read all documents carefully.
Find things to appreciate about the work of others.
Acknowledge mistakes, correct them and learn from them.
Keep a journal and become the historian of your project.
7. A Narrative Model of Testing
This is a map of the Rapid Testing methodology that I teach. It is organized in the structure of a story, because story construction is at the heart of what it means to test.
8. Let’s Count Unicorns!
Do you know what a unicorn is? Okay. Answer this question: How many unicorns will fit into your cubicle?
In the absence of context… test case counts mean NOTHING!
How much testing is 40 test cases? How much is 400? How about 40,000 test cases?
9. “Pass Rate” is a Stupid Metric.
[Chart: “Pass Rate” per build, y-axis from 0 to 1, x-axis daily builds from 2/1 through 3/5.]
You shouldn’t take test case counts seriously because…
Test cases are not independent.
Test cases are not interchangeable.
Test cases vary widely in value from case to case, tester to tester, product to product, project to project, test technique to test technique, and over time.
Test case design is subjective, so counts are easy to inflate.
Test cases do not, and cannot, capture all the testing that occurs (example: bug investigation).
Testers often don’t follow the test cases, anyway.
Automated test cases are fundamentally different from sapiently executed tests.
Test cases represent what’s easy to put into a test case.
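To make that argument concrete, here is a small illustration of my own (not from the original deck; the runs, fields, and numbers are all invented): two test runs that look identical by count and pass rate, yet represent very different testing.

# Hypothetical example: identical counts and pass rates, very different testing.
run_a = {"cases": 100, "passed": 95, "areas_covered": 2, "new_bugs_found": 0,
         "note": "re-ran a stable regression pack"}
run_b = {"cases": 100, "passed": 95, "areas_covered": 14, "new_bugs_found": 7,
         "note": "explored new features and error handling"}

for name, run in (("Run A", run_a), ("Run B", run_b)):
    rate = run["passed"] / run["cases"]
    print(f"{name}: {run['cases']} cases, {rate:.0%} pass rate, "
          f"{run['areas_covered']} areas, {run['new_bugs_found']} new bugs "
          f"({run['note']})")

Both runs report “100 cases, 95% pass rate.” The count and the rate are the same; the information value of the testing is not.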
10. Testing Dashboard (Updated: 2/21, Build: 38)

Area           Effort       C.   Q.   Comments
file/edit      high         1         1345, 1363, 1401
view           low          1+
insert         low          2
format         low          2+
tools          blocked      1         automation broken
slideshow      low          2         crashes: 1406, 1407; animation memory leak
online help    blocked      0         new files not delivered
clipart        none         1         need help to test...
converters     none         1         need help to test...
install        start 3/17   0
compatibility  start 3/17   0         lab time is scheduled
general GUI    low          3

[The Q. column held per-area quality assessments that did not survive extraction.]
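(A side note, not from the original deck: a dashboard like the one above is easy to keep as plain data and print on demand. The sketch below is a minimal Python illustration; the rows shown are invented examples in the spirit of the slide.)

# Minimal sketch: a low-tech testing dashboard kept as data, printed as text.
dashboard = [
    # (area, effort, coverage, comments)
    ("file/edit", "high", "1", "1345, 1363, 1401"),
    ("tools", "blocked", "1", "automation broken"),
    ("install", "start 3/17", "0", ""),
]

# Print a header, then one aligned row per area.
print(f"{'Area':<12} {'Effort':<12} {'C.':<4} Comments")
for area, effort, coverage, comments in dashboard:
    print(f"{area:<12} {effort:<12} {coverage:<4} {comments}")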
11. Activity-based test management is designed to facilitate reporting.
Thread-based Test Management: This means organizing your whole test effort around test activities that comprise your testing story. You manage testing AND report status from a mind-map.
Session-based Test Management: This means organizing testing into “sessions,” which are normalized units of uninterrupted test time. You can count these more safely.
Visualizing Test Progress
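(The original slide here appears to have been a progress chart that does not survive in text form. As a rough sketch of the idea, and only a sketch: because sessions are normalized units of test effort, simply tallying them per area gives a defensible picture of where the effort went. The session log below is invented for illustration.)

from collections import Counter

# Hypothetical session log: one entry per completed test session,
# tagged with the product area its charter covered.
sessions = ["file/edit", "file/edit", "view", "slideshow",
            "file/edit", "install", "slideshow", "view"]

# Tally sessions per area and draw a low-tech text bar chart.
for area, count in Counter(sessions).most_common():
    print(f"{area:<12} {'#' * count}  ({count} sessions)")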
Risk-Based Testing Makes Reporting More Relevant
Risk Area 1: Status of the product and what we did to test it…
Risk Area 2: Status of the product and what we did to test it…
Risk Area 3: Status of the product and what we did to test it…
(I rarely make a grid like this with a written report, because the artifacts I use to manage testing, day-to-day, are focused on activities, not risks, and I would have to create a special document to do a risk-based report.)
Safety Language (aka “epistemic modalities”)
“Safety language,” in software testing, means to qualify or otherwise draft statements of fact so as to avoid false confidence.
Examples:
So far…
It seems…
I think…
It appears…
Apparently…
I infer…
I assumed…
Instead of “The feature worked,” try “I have not yet seen any failures in the feature…”
15. Safety Language In Action
16. To test is to construct three stories (plus a bit more)
Level 1: A story about the status of the PRODUCT… about how it failed, and how it might fail… in ways that matter to your various clients.
Level 2: A story about HOW YOU TESTED it… how you configured, operated, and observed it… about what you haven’t tested, yet… and won’t test, at all…
Level 3: A story about the VALUE of the testing… what the risks and costs of testing are… how testable (or not) the product is… things that make testing harder or slower… what you need and what you recommend…
(Level 3+: A story about the VALUE of the stories.) Do you know what happened? Can you report? Does the report serve its purpose?
Why should I be pleased with your work?
18. Incident Report
James Bach and Chris Ojaste, 12/25/08
Analysis and Repair of Kraft “Grate-It Fresh” Parmesan Cheese Dispenser
Overview
We fixed a broken Kraft “Grate-It Fresh” self-contained disposable parmesan cheese dispensing unit. This report details the incident, including the problem as it presented to us, analysis of the problem, and corrective action we took.
Situation and Problem
The investigators (Chris and James) were attending a Christmas banquet at which was served pasta along with grated parmesan cheese. The cheese was dispensed from a self-contained disposable unit, inside of which there appeared to be a block of cheese.
“KRAFT Grate-It-Fresh Parmesan Cheese is the easy way to get the bold flavor of freshly grated Parmesan cheese. This unique and convenient all-in-one package, with 100% pure Parmesan cheese and a built-in grater, dispenses freshly grated Parmesan cheese with each easy turn. It’s the most convenient way to top off all your favorite dishes with the dynamic flavor of freshly grated Parmesan.” (http://brands.kraftfoods.com/KraftParm/parmProducts.htm)
By rotating the dial on the bottom of the unit in a clockwise fashion, the cheese is shaved off the block and delivered to the plate by means of gravity. However, our cheese dispenser was not working. Multiple rotations of the dial delivered no cheese at all.
Someone had to save Christmas! We resolved to investigate and repair the problem if possible.
Analysis and Repair Process
1. External physical inspection ruled out the possibility of cheese exhaustion as a cause of the problem. By the weight of the unit and by visual inspection through the plastic case, we determined that about 1/3 of a block of cheese remained to be grated.
2. Also by visual inspection, we determined that the apparent mechanism by which the grater works is consistent with the cheese grater described in US patent 6,412,717. Specifically, a rotatable grating plate is attached to a threaded spindle that passes through the cheese and through a pressure plate on the opposite side of the cheese. By rotating the grating plate, the pressure plate is forced toward the grating plate by the threads on the spindle. This pushes the cheese into the blades of the grating plate. The grating plate and blades are plastic. The spindle and the pressure plate are also plastic. The spindle seems to be made of a softer plastic than that of the pressure plate.
3. Experimentation established that the mechanism was functioning at least at a minimal level: turning the grater in reverse, we observed that the pressure plate pulled away from the cheese. Turning the grater in the correct direction (clockwise) brought the pressure plate back into contact with the cheese, pushing it into the grater. We then noted an increased resistance to turning
consistent with the pressure being placed on the cheese. However, the pressure approached a maximum, then eased, as if the pressure plate was slipping on the threads of the spindle. We conjectured that the threads were stripped.
4. Our first repair strategy was to push the cheese into the grater by hand. We thought that might move the pressure plate past the point where the spindle threads were stripped (assuming that the pressure plate itself was not damaged). To get at the cheese, we removed the grating cap with brute force (surprisingly, this did not appear to damage it), which freed the entire mechanism from the enclosing plastic case. This allowed us to apply a great deal of pressure to the pressure plate, in addition to that of the damaged threads on the spindle. This strategy failed. No matter how much pressure we applied, very little cheese came through the grater.
5. This led us to a systematic examination of possible failure mechanisms. Here’s what we came up with:
The grater blades may be damaged.
The grating plate may be warped so that the grater blades fail to engage.
The shape of the cheese face may cause the grater blades to fail to engage.
6. Visual inspection of the blades and grating plate failed to corroborate the hypothesis that the problem lay with the grater mechanism, whereas examination of the cheese block revealed grooves in the cheese face that perhaps could account for the blades failing to get any bite.
7. Our second repair strategy was to remove the cheese from the spindle, flip it over, and replace it so that the grater engaged a pristine face of cheese. This improved the grating by a little bit. At this point we returned to our first strategy and applied manual force to the pressure plate. This improved grating effectiveness dramatically, and slowly moved the pressure plate past the damaged portion of the spindle. We then reassembled the unit.
Outcome
The grater appeared to work. Subsequent web searches on the product name suggested the probable cause of the initial failure: the downward-facing part of the cheese block dried out and became too hard to grate. (Interesting that we did not consider the possibility of dried-out cheese in our list of failure modes, in step 5. However, our repair strategy coincidentally worked, even though we misunderstood the root cause.) Other people online have experienced this. Apparently, the cheese is meant to be used within 14 days of breaking the seal. This seems like an unrealistic requirement.
[Photo caption: Contrast-enhancement of a low-res photo of the spindle we were examining, showing healthy threads below the region of stripped threads. The pressure plate (at bottom) now rests on healthy threads.]
20. Development Notes on “Incident Report”
By James Bach
Overview
I wrote this report as an exercise to help teach the art of performing an investigation and reporting upon it.
Maybe you are young, inexperienced, or a self-taught thinker. Maybe you’d like to compete better to get a job doing something that involves problem-solving or rapid learning. If so, then look for opportunities in your own daily experience to perform an investigation such as this, and write a report about it. Do several of them, and you will have a portfolio of your work to show prospective employers. Regardless of your formal educational background, showing examples of your work speaks boldly about what you can do.
Although this report describes an investigation, the general approach I’ve taken here can be applied to many kinds of reports.
General Approach to Reporting
I begin with the question “who am I serving with this report?” and then “what is my goal in making this report?” Usually, I am serving a paying client and my primary goal has to do with helping them solve some specific problem. That’s a start. In this case, however, my clients are my students and colleagues. My goal, here, is to successfully tell the story of a thought process. Success means several things:
The reader obtains a clear picture of the investigation.
The reader obtains a useful example of a report.
The reader feels able to contribute to or criticize the investigation, based on the report.
The reader learns how a simple event might become a showcase for scientific thinking.
In writing reports, there is nearly always another goal. The author may not be aware of this goal, but here it is:
The author’s own reputation as a thinker is enhanced and not diminished.
Remember: every report you write affects how people think about you. Your ability to reason, your eye for detail, your commitment, your professionalism, your care for others—all of these qualities and more are being evaluated in the minds of your readers.
I want to write a clean, simple report. I try to minimize clutter and text. I want it to be short, punchy, and readable. I use formatting to help the reader’s eye find relevant information quickly, but I try to reduce the number of formatting elements in the document to avoid slipping into visual confusion. I’m not always sure if I succeed, but that’s my goal.
Speaking of formatting, I used the “modern report” template from Microsoft Word as a base. Then I changed the fonts to Cambria and Calibri. I use Calibri for bold facing, since a non-serif font looks better in bold and helps to distinguish text from the un-bolded serifed text in the body of paragraphs. Also notice that when I bold text inside a paragraph, I increase the size by one point. I use bolding for emphasis, occasionally italics, but never underlining. Underlining is messy and old-fashioned. I often highlight key ideas with bolding, so that the body of the text will not look like a big gray mass. This improves readability and browsability.
I want to help the reader come to his own conclusions even if they might differ from mine. To do that, I include not only my observations, but also information about how my observations were obtained and how they might be mistaken. I separate my inferences from the observations on which they are based (example: “by visual inspection and weight…”) and show how one follows from the other. I also consider including background information that will help the reader make a better assessment of what I did, such as the references to the patent and to the Kraft website.
The structure of the report should support the thinking the reader needs to do. As I design the report, I anticipate the questions the reader will have, and arrange for the answers to those questions to “pop out” from the text. In this case, I felt that a play-by-play narrative of the investigation would serve that need best.
I want to use professional vocabulary. Although it can be perfectly fine to write a report in an informal tone, I felt in this case it would be amusing to apply a more formal writing style to this trivial investigation. I was going for something like the rhetorical tone of an NTSB accident investigation. Aside from tone, I also wanted to practice “talking like a tester.” That means speaking with extra precision and objectivity, as compared to casual conversation.
Walkthrough
Let me walk you through the report to show you how I did it and why I did it that way.
This is the masthead that comes with the “modern report” template in Microsoft Word. I like to use a minimalist approach: author, contact information, and date. In some situations I may include more information, such as who commissioned the report, or the version number of the report.
Sometimes I struggle with the title of a report. The title is important, because the report may be sitting on a desk with lots of other papers. The title will be the part that catches the eye first. One way to title the report would be to make it quite specific. This can be fine for a one-off report, but usually a report I write is part of a series, or one example from a category of reports. So, I generally prefer a short title that identifies the type of report this is, then I provide specific information in the sub-title.
Incident report is an okay title. But I fear it’s a little too generic. Incident could mean anything. Investigation report might be better. I chose “incident” because one typical investigative situation is a customer coming to a technical support organization with a problem. These are often called incidents.
The purpose of the overview is to communicate the essence of the whole report so that the reader may
decide if it’s worth reading at all. The essence of the report is that we found a problem and fixed it. But I
can’t just write “Overview: we found a problem and fixed it.” I don’t want my report to sound generic—
as if I’ve simply copied the text from another report. Anytime I write something that seems generic, I
want to replace it with something that gives at least a bit of detail that is specific to the situation at
hand. That’s why I named and described the object that we fixed.
Also notice that there is no table of contents in this report. The biggest problem with a table of contents
in a short document is that it conveys the subtle message that the report is full of fluff, puffed up as
much as possible to make it look more impressive. I’m annoyed by tables of contents in reports that are
less than about fifty pages long. I think they are a waste of space. If the report is more than about seven
or eight pages long, I will list the sections of the report in the overview, but I won’t give page numbers.
It’s a simple matter for the reader to find the sections in a short document.
The reason I describe the situation and problem is to show the focus and motivation of the
investigation. This creates a tension that is resolved in the meat of the report. At the end of the report I
go back to the top and ask myself if I have answered the questions or dealt with the challenges posed in
the situation and problem section.
I initially expected to have separate sections to describe the situation and the problem, but there
seemed to be too little to say about each of those things individually. Combining them created a
better flow and a critical mass of content.
One of the little challenges in writing this was to describe the object we repaired. After trying to
describe it in original words, I realized that I could use an official description of it, and a few moments
of web searching brought me to the Kraft site. The description was brief enough that I could include it
handily in the report. Anything included must be properly attributed, of course. In this case, the full link
to the web page makes sense to include, so the reader can look up more information.
I took a cell phone picture of the actual unit we repaired, but when I discovered the Kraft website had a
handsome official picture, I used that one instead.
I initially expected to have separate sections for analysis and repair, but as in the case of situation and
problem, I ended up combining them. In this case, analysis and repair activities were intertwined. I
didn’t see a graceful way of detangling them.
I numbered the paragraphs to convey a sense of step-by-step order. In fact, the investigation bounced
around a lot and branched. Reality is complicated, but part of the reporting process is to organize what
happened into a comprehensible narrative. That means the flow of events I report is a bit simpler than
what happened in real life. In a complicated investigation I will often film it or take detailed notes to
preserve the sequence of events.
In a narrative style of reporting, I strive to create anticipation and interest in the minds of readers.
That keeps them reading and thinking. I want them to follow along and get a sense of the things I
considered, and the false steps I made as well as the productive steps.
The highlight of the first step of the investigation is the method we used to examine the grater. I wrote
“external physical inspection” to distinguish what we did from plausible alternatives such as
disassembling the unit, or reading about the unit online.
Note on phrasing: See the words “cheese exhaustion.” I suppose I could have written “…the unit had run
out of cheese.” That would have been simpler and more accessible, but I was going for a more scientific
tone. I once saw an NTSB report refer to “fuel exhaustion” as a cause of an airplane accident, so I
emulated that.
In order to report credibly about the investigation and repair of a mechanical problem, I needed to
describe the mechanism with sufficient detail to allow the reader to appreciate the situation. As I tried
to do that, I found myself making up my own terms to describe the various parts of the grater. After a
few attempts at writing in my own words, I realized that there might be a patent associated with the
grater, and that the patent might include exactly the description I needed.
I went to Google patent search and quickly discovered a cheese grater from 1978 (patent 4,082,230)
that looked something like the one we had repaired. I thought I would use that patent, until a few
minutes later I thought perhaps I should search for “food grater” or just “grater” instead of specifying a
cheese grater. This is because patents are sometimes written from the most general standpoint possible
in order to maximize the scope of the patent. That search turned up exactly the invention I was looking
for.
I considered pasting the exact description of the invention from the patent into the report. That didn’t
work well. The text was too long and complicated. Therefore, I settled for summarizing it using
technical terms drawn from the two patents.
In making my description, I referred to the patent. That way I have a good reason not to explain the
mechanism in any great detail, since the details are implicitly included by reference.
I tried to make the steps consistent by putting the action first in each step. Each step begins with some
variation of “we did this.” Here the experiment is briefly described: just enough to create a reasonably
detailed mental image in the minds of readers.
The first repair strategy failed. In a report that seeks to describe only the problem and the solution, it is
not necessary to describe failed strategies. I included it because this report is also concerned with
demonstrating the investigative process itself.
In real life, we did not say “let’s systematically examine all possible failure mechanisms.” What we did
was bat around some ideas while each of us tried to force the cheese through the grater. In retrospect,
however, our chatter seemed equivalent to an open brainstorm of reasons why the product was failing.
The narrative would be incomplete unless I show how we ruled out the various possible causes of the
problem. That’s done in paragraph 6, which leads into the second, successful repair strategy.
The picture of the spindle is crude. It was based on the photo below, taken with my BlackBerry. I should
have photographed the spindle outside of the plastic case; it would have been much sharper. I didn’t
realize at the time of the investigation that I would be writing a full report on this incident, or I would
have taken (and included) many more photos. Photographs, diagrams, and video bring a wonderful
dimension to investigative reports.
Because the photo of the spindle was so blurry, I used an image enhancement program to play with the
contrast and color balance until I was able to see the threads. Then I added annotation using Microsoft
Paint.
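The kind of enhancement I did interactively could also be sketched with the Pillow imaging library. The file names and enhancement factors here are hypothetical; in reality I adjusted contrast and color by eye until the threads became visible:

    # Sketch only: enhance a blurry photo so fine detail (the spindle
    # threads) becomes visible. File names and factors are hypothetical.
    from PIL import Image, ImageEnhance

    img = Image.open("spindle_blackberry.jpg")

    # Raise contrast until the threads separate from the background.
    img = ImageEnhance.Contrast(img).enhance(2.0)

    # Adjust color saturation to distinguish the metal spindle from the
    # surrounding plastic case.
    img = ImageEnhance.Color(img).enhance(1.4)

    img.save("spindle_enhanced.jpg")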
In the first draft of the report, I forgot to include the simplest information about the outcome: that the
grater appeared to be working. On reading through the draft several times, I fixed that.
Only as I was finishing up the report did it occur to me that I could use Google to discover whether
anyone else had been experiencing problems with the Kraft grater. Sure enough, there are several
reports online. My first reaction to these was “don’t people have better things to do than to complain
about a trivial food product on their blogs?” and then I remembered that that is sort of what I’m doing
by writing this report. Heh heh. People are motivated by lots of different things, I guess.
A troublesome element of the report is that it reveals a major oversight by the investigators: we failed
to consider over-dried cheese as the cause of the problem. This makes us look bad, in a way; but in
another way, including that information as a postscript shows that we are willing to accept our mistakes
and learn from them.
Potential Improvements to the Investigation
It can be difficult to decide how much investigation is enough. We were satisfied with having repaired
the unit, but we hardly exhausted the possible branches of exploration and learning. Here are some
ideas for what we could have done:
- We could attempt to measure the properties of the cheese block to quantify the amount of drying that had occurred. We could perform experiments to track the drying process. We could attempt to develop home-spun countermeasures to prevent the drying from taking place or reverse it, then report on their efficacy.
- We could interview the homeowners to determine the history and provenance of the cheese grating unit. How long had they owned it? When did they first open it?
- We could search for more information online about the properties of the product and its reported problems.
- We could contact Kraft directly and ask about the product.
- We could try the dried cheese with traditional metal graters to see if part of the blame lies with the plastic grating plate.
- We could have consulted other guests at the dinner.
- We could have purchased several units and tested them in parallel.
OEW Case Tool
QA Analysis, 8/26/94
Summary
OEW is a complex application that is fairly stable, although not up to our standards for fit and
finish.
There are no existing tests for the product, only a rudimentary test outline that will need to be
translated from German. One full-time and one part-time tester work on the project. Those testers
are neither trained nor particularly experienced. The vendor’s primary strategy for quality
assurance is a fairly extensive beta test program.
We suggest a minimum of one tester to validate the changes to OEW. We also
suggest that the developer of OEW work onsite with our test team under our
supervision.
Feature Analysis
Complexity: This is a complex application.
- 8 interesting menus
- 68 interesting menu items
- 40 obvious dialogs
- 5 kinds of windows
- 27 buttons on the speedbar
- 120 thousand lines of code
Functionality: This application has substantial functionality.
- Code Generation
- Code Parsing
- Code Diagramming
- Build Invocation
Volatility: The changes in the codebase will be minor.
- Bug fixes.
- Smallish U.I. tweaks.
- Disable support for various things, including build invocation.
Operability: The application is ready for testing immediately.
- It operates like a late beta or shipping application.
- The proposed changes are unlikely to destabilize the app.
Customers: We expect that large codebases will be generated, parsed, or diagrammed with this application.
- About 25% of our beta testers have codebases larger than 200,000 lines.
- The parsing capability will encourage customers to import their apps.
Risk Analysis
The risk of catastrophes occurring due to changes in the codebase is small.
The risk that the much larger and probably more demanding Borland market will be dissatisfied with
OEW is significant.
QA Strategies
Get this into beta 2, or send a special beta 2B to our testers who have large codebases.
Find beta bangers with large codebases and have them import into OEW.
Perform rudimentary performance analysis with big codebases.
Bring the existing OEW testers from Germany onsite.
Hire a dedicated OEW tester (contractor, perhaps).
Participate in a documentation and help review.
Translate existing test outline from German.
Perform at least one round of compatibility testing.
Schedule
The QA schedule will track the development schedule.
It may take a little while to recruit a tester.
Issues
Are there international QA issues?
PCE Security
Prioritizing Security Problems
Levels of Required Access
1. No access to LAN, access to PCE over Internet
2. Access to LAN, but no account on PCE
3. PCE Account, but no rights within account
4. Rights to some projects, not others
5. No rights for particular action within project
6. All rights and access
Levels of Attack
1. No special knowledge/accidental
2. Casual hacker knowledge
3. User level knowledge of PCE
4. Special hacker knowledge
5. Developer level knowledge of PCE
Levels of Damage
Levels of Responsibility
Attack Vectors
Web Client
API
Server-to-Server Communication
Database Direct Attack
MS Project Attack
LDAP attack
Man-in-the-middle
DNS Poisoning
Shoulder Surfing
Social Engineering
Keyloggers and Malware
"Blood in the Water"
Unconstrained input
Obscure functions
Low level error messages
Technically informative error messages
Third-party components and interfaces
Generic O/S features and interfaces
Default configurations
Source Code
Security based on assumption of no malice
Degrees of freedom in input
Recent vulnerability disclosure in platform component
Testing Activities
Sniffing/Man-in-Middle Attack
Documentation Review
Whitebox Hazard Analysis
Fingerprinting
Google Hacking
Vulnerability Scanning/Lookup
SQL Injection
Directory Traversal
Cross-site Scripting
Input Constraint Attacks
HTTP Manipulation
Session Hijacking
Permissions Testing
Problems Found
Security Observed
Efforts Going Forward
Testing and Analysis Activities
Testers must learn security testing basics
Produce a security-specific test coverage outline
Document a concise security-specific test strategy
Consider security implications for testing of each fix and enhancement
Periodically perform general security regression testing
Monitor and apply patches to platform elements
Development Activities
Create installation notes that clearly delineate security issues
Explain security architecture to testers
Make finding obscure problems easier
Consider reviewing Microsoft security design checklists
Review internal permissions architecture
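To make one item on the Testing Activities list above concrete: a first-pass probe for SQL injection, directory traversal, and input-constraint problems might look like the sketch below. The endpoint, parameter, and response checks are hypothetical, invented for illustration; real testing would target PCE's actual forms and APIs, with authorization:

    # Hypothetical first-pass security probes; not PCE's real interface.
    import requests

    PROBES = [
        "' OR '1'='1",       # classic SQL injection string
        "../../etc/passwd",  # directory traversal attempt
        "A" * 10000,         # input length (constraint) attack
    ]

    for probe in PROBES:
        resp = requests.get(
            "https://pce.example.com/search",  # made-up endpoint
            params={"q": probe},
            timeout=10,
        )
        # Low-level or technically informative error messages are the
        # "blood in the water" noted above: findings in themselves.
        if resp.status_code >= 500 or "SQL" in resp.text:
            print(f"possible finding with probe {probe!r}: {resp.status_code}")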
Spot Check Test Report
Prepared by James Bach, Principal Consultant, Satisfice, Inc. 8/14/11
1. Overview
This report describes one day of a paired exploratory survey of the Multi-Phasic Invigorator and
Workstation. This testing was intended to provide a spot check of the formal testing already routinely
performed on this project. The form of testing we used is routinely applied in court proceedings and
occasionally by third-party auditors for this purpose.
Overall, we found that there are important instabilities in the product, some of which could impair
patient safety; many of which would pose a business risk for product recall.
The product has new capabilities since August, but it has not advanced much in terms of stability since
then. The nature of the problems we found, and the ease with which we found them, suggest that these
are not just simple and unrelated mistakes. It is my opinion that:
- The product has not yet been competently tested (or if it has been tested, many obvious problems have not been reported or fixed).
- The developers are probably not systematically anticipating the conditions, orientations, and combinations of conditions that the product may encounter in the field. Error handling is generally weak and brittle. It may be that the developers are too rushed for methodical design and implementation.
- The requirements are probably not being systematically reviewed and tested by people with good competency in English. (For example, the “Pulse Transmitter” checkbox works in a manner exactly opposite to that specified in the requirements, and error messages are not clearly written.)
These are fixable issues. I recommend:
- Pair up the developers and testers periodically for intensive exploratory testing and fixing sessions lasting at least one full day, or more.
- Require the testers to be continuously on guard for anomalies of any kind, regardless of the test protocol they are following at any given moment. Testers should be encouraged to use their initiative, vary their use of the product, and speak up about what they see. Do not postpone the discovery or reporting of any defect, even small ones, or else they will build up and the processes creating these defects will not be corrected.
- The requirements should be reviewed by testers who are fluent in English.
- The developers should carefully diagram and analyze the state model of the product, and re-design the code as necessary to assure that it faithfully implements that state model (see the sketch after this list).
- Unit-level testing by the developers, and systematic code inspection, as per FDA guidance.
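To illustrate the state-model recommendation: when the legal transitions are enumerated in one table, combinations like “start pressed while the exit menu is open” (problem #4 below) are rejected by design rather than by scattered button-disabling logic. Nothing in this sketch is drawn from the product's actual code; the states, events, and names are hypothetical.

    # Hypothetical sketch of a state-transition table; not product code.
    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        EXIT_MENU = auto()
        EXFOLIATING = auto()

    class Event(Enum):
        PRESS_START = auto()
        PRESS_EXIT = auto()
        CANCEL_EXIT = auto()
        STOP = auto()

    # Only the transitions listed here are legal.
    TRANSITIONS = {
        (State.IDLE, Event.PRESS_START): State.EXFOLIATING,
        (State.IDLE, Event.PRESS_EXIT): State.EXIT_MENU,
        (State.EXIT_MENU, Event.CANCEL_EXIT): State.IDLE,
        (State.EXFOLIATING, Event.STOP): State.IDLE,
    }

    def step(state: State, event: Event) -> State:
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            # Centralized handling: never silently enter an undefined
            # state. step(State.EXIT_MENU, Event.PRESS_START) raises
            # instead of starting an exfoliation under the exit menu.
            raise ValueError(f"illegal event {event.name} in state {state.name}")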
2. Test Process
The test team consisted of consulting tester James Bach (who led the testing) and Satisfice, Inc. intern
Oliver Bach.
The test session itself spanned about seven hours, most of which consisted of problem investigation.
Finding the problems listed below took only about two hours of that time.
The process we used was a paired exploratory survey (PES). This means two testers working on the same
product at the same time to discover and examine the primary features and workflows of the product
while evaluating them for basic capability and stability. One tester “plays” while the other leads,
organizes and records the work. A PES session is a good way to find a lot of problems quickly. I have
used this method on court cases and other consulting assignments over the years to evaluate the
quality of testing. The process is similar to that published by Microsoft as the General Functionality and
Stability Test Procedure (1999).
In this method of testing, we walk through the features of the product that are readily accessible,
learning about them, studying their states and interactions, while continuously applying consistency
heuristics as test oracles in our search for bugs. Ten such heuristics in particular are on our minds. These
ten have been published as the “HICCUPP” model in the Rapid Software Testing methodology. (See
http://www.satisfice.com/rst.pdf for more on that.)
We filmed most of the testing that we did, and delivered those videos to Antoine Rubicam.
We did not test the entire product during our one-day session. However, we sampled the product
broadly and deeply enough to get a good feel for its quality.
3. Test Results
The severe problems we found were as follows:
1. System crash after switching probes. If the orientation mode is improperly configured with the
circular probe such that there are no flip-flop mode cathodes active, and the probe is then
switched to “dissipated”, the application will crash at the end of the very next exfoliation
performed. (This is related to problems #6 and #7)
Risk: delay of procedure, loss of user confidence, potential violation of essential performance
standard of IEC60601, product recall
Implications: The developer may not have anticipated all the necessary code modifications
when dissipated mode probe support was added. Testers may not be doing systematic probe
swap testing.
2. No error displayed after ion transmitter failure during exfoliation. By pressing the start button
more than once in quick succession after an ion transmitter error is cleared, an exfoliation may
begin even though the transmitter was not in the correct pulse mode. The system is then in an
inconsistent state. After that point, manually stopping the transmitter, changing the pulse rate, or
cutting power to the transmitter will not result in any error message being displayed.
Risk: patient death from skin abrasions formed due to unintentionally intensified exfoliation,
loss of user confidence, violation of IEC60601-1-8 and 60601-1-6, product recall
Implications: There seems to be a timing issue with error handling. The product acts differently
when buttons are pressed quickly than when buttons are pressed slowly. Testers may not be
varying their pace of use during testing.
3. Error message that SHOULD put system in safe mode does NOT. Ion transmitter error
messages can be ignored (e.g. "Exfoliation stopped. Ion flow is not high!"). After two or three
presses of the start button, exfoliation will begin even though multiple error messages are still
on the screen.
Risk: Requirements violation, violation of IEC 60601-1-8 and 60601-1-6, product recall.
Implications: Suggests that the testers may not be concerned with usability problems.
4. Can start exfoliation while exit menu is active (and subsequently exit during exfoliation). It
should not be possible to press the exit button while exfoliating. However, if you press the exit
button before exfoliating and the exit menu appears, the start button has not been disabled,
and the exfoliation will begin with the exit menu active. The user may then exit.
Risk: unintentional exfoliation, loss of user confidence, violation of IEC60601-1-6, product recall
Implications: Problems like this are why a careful review of the product state model and re-
design of the code would be a good idea. The bug itself is not likely to cause trouble, but the
fact that this bug exists suggests that many more similar bugs also exist in the product.
5. Probe menu freezes up after visiting settings screen (and at other apparently random times).
Going to settings screen, then returning, locks the probe mode menu until an exfoliation is
started, at which point the probe mode frees up again. We found that the menu may also lock
at apparently random intervals.
Risk: loss of user confidence
Implications: Indicates state model confusion; variables not properly initialized or re-initialized.
6. Partial system freeze after orientation mode failure. When in orientation mode with no
cathodes selected for flip-flop, an exfoliation session can be started, which is allowed to
proceed until flip-flop phase is activated. At that point, an error message displays and system is
locked with "orientation and flip-flop" modes both selected on the exfoliation mode menu. The
settings and exit buttons are also inoperative at that point. (This state can also be created by
switching probes. It is related to problems #1 and #7.)
Risk: Procedure delay, loss of user confidence, product recall
Implications: Indicates state model confusion; variables not properly initialized or re-initialized.
7. No error is displayed when orientation session begins and flip-flop cathodes are not activated.
When in orientation mode with no cathodes selected for flip-flop, an exfoliation session can be
started with no error displayed; an error message should be generated instead. (This is related to
problems #1 and #6.)
Risk: loss of user confidence, creates opportunity for worse problems
Implications: Suggests the need for a deeper analysis of required error handling. Testers may
not be reviewing error handling behaviors.
8. Cathode 10 active in standing mode after deactivating all cathodes in flip-flop mode. De-
selection of cathodes in flip-flop or standing mode should cause de-selection of corresponding
cathodes in the other mode. However, de-selecting all flip-flop cathodes leaves cathode 10 still
active in standing mode. It’s easy to miss that cathode 10 is still active.
Risk: creates opportunity for confusion, possible inadvertent exfoliation with cathode 10,
possible violation of IEC60601-1-6
Implications: Suggests that the testers may not be concerned with usability problems.
9. Error message box can be shown off-screen. Error message boxes display at the location where
the previous box was dragged. This memory effect means that a message box may be dragged
to the side, or even off the screen, and thus the next occurrence of an error may be missed by
the operator.
Risk: creates opportunity for confusion, possible for operator to miss an error, violation of
IEC60601-1-8 and 60601-1-6, when combined with bug #3, it could result in potential harm to
the patient.
Implications: Suggests that the testers may not be concerned with usability problems.
10. Behavior of the "Pulse Transmitter" checkbox is the opposite of that specified in the FRS. The
FRS states "By selecting Pulse Transmitter checkbox application shall allow to perform
exfoliation session with manual controlled transmitter.” However, it is actually de-selecting the
checkbox which allows manual control.
Risk: business risk of failing an audit. It is potentially dangerous, as well as illegal, for the
product to behave in a manner that is the opposite of its Design Inputs and Instructions for Use.
Implications: This is a common and understandable problem in cases where the specifications
are written by someone not fluent in English. It is vital, however, to word requirements
precisely and to test the product against them. Bear in mind that the FDA personnel probably
will be native English-speakers.
11. Setting power to zero on a cathode does not cause the power to be less than 10 watts.
According to the log file, the power is well above the standard for “0” laid out in IEC60601.
(Also, displaying a “---” instead of “0” does not get around the requirement laid out in the
standard. This is true not only because it violates the spirit of the standard, but also because the
target value is displayed as “0” and the log file lists it as “0”.)
Risk: violation of IEC60601, product recall
Implications: The testers may not be familiar with the requirements of IEC60601. They may not
be testing at zero power because the formal test protocol does not require it.
Here are the lower severity problems we found:
12. "Time allocated for cathode 10 is too short" message displays when time is rapidly dialed
down. The message only displays when the time is dialed down rapidly, and we were not able
to get it to display for any cathode other than 10.
13. Pressing ctrl key from exit menu causes immediate exit.
14. Exfoliation tones mysteriously change when only one cathode is active in standing mode. The
exfoliation tone for flip-flop mode is sounded for standing mode when all but one cathode is de-
activated.
15. Power can be set to zero during exfoliation without cancelling exfoliation. Since an exfoliation
cannot be started without at least one cathode set to a power greater than 0, and since de-
activating a cathode during an exfoliation session prevents it from being re-activated, it is
inconsistent to allow cathodes to be set to “0” power during an exfoliation unless they are
subsequently de-activated.
16. Power can be set to 1, which is unstable. Does it make sense to allow a power level of 1? The
display keeps flickering between 1 and “---”.
17. If orientation is used, the user may inadvertently fail to set temperature limit on one of the
exfoliation modes. Flip-flop and standing have different temperature limit settings. In our
testing, we found it difficult to remember to set the limit on both modes before beginning the
exfoliation session. This is a potential usability issue.
18. "Error-flow in standby mode should be low" message displayed at the same time as
"Exfoliation stopped. Transmitter flow is not high!" This is a confusing pair of messages, which
seem to require that the transmitter be in low flow and high flow at the same time.
19. Error messages stack on top of each other. If you press start with 0 power more than once,
more than one error message is displayed. Each additional press displays another error
message.