Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Most people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and, ultimately, from disasters. The good news is that critical thinking is not just innate intelligence or talent—it's a learnable and improvable skill you can master. Michael Bolton shares specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and sharpen your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick-start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join Michael for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
TMAP Quality Engineering workshop at the A4Q congress by Rik Marselis
This workshop about quality engineering in high-performance IT delivery, based on the TMAP body of knowledge, explains some theory and then lets you practice with:
Indicators to measure quality
Unit testing - code coverage
Mutation testing
Path testing
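As a taste of the mutation-testing topic above, here is a minimal hand-rolled sketch in Python (an invented illustration, not part of the TMAP materials): we introduce a single operator mutation by hand and check whether the test suite "kills" the mutant by failing against it. Real projects would use a mutation-testing tool that generates mutants automatically.

```python
# Hand-rolled illustration of mutation testing (invented example).
# A mutant is a copy of the code with one small change; a good test
# suite passes on the original and fails ("kills") the mutant.

def is_adult(age):
    return age >= 18           # original implementation

def is_adult_mutant(age):
    return age > 18            # mutant: ">=" changed to ">"

def run_suite(fn):
    """Return True if the whole suite passes for the given implementation."""
    try:
        assert fn(17) is False
        assert fn(18) is True  # boundary test: this is what kills the mutant
        assert fn(30) is True
        return True
    except AssertionError:
        return False

# The suite passes on the original and fails on the mutant, so the
# mutant is killed — evidence that the boundary is actually tested.
mutant_killed = run_suite(is_adult) and not run_suite(is_adult_mutant)
```

A suite without the `fn(18)` boundary assertion would let this mutant survive, which is exactly the coverage gap mutation testing is designed to expose.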
The AmpleLogic Low Code Application Development Platform allows you to create any number of business applications on your own. If you don't have enough time or resources to build your application, AmpleLogic's no-code web and application development platform is a low-cost solution that comes to your rescue: anyone can build an application without much effort, which increases business productivity and efficiency.
Agile Testing - presentation for Agile User Group (suwalki24.pl)
Agile testing was presented at an Agile User Group meeting. The presentation covers all aspects of testing in an agile process and highlights the role of automation and the issues with managing it.
Contents:
Behavior Driven Development (BDD)
Features of BDD
BDD Tools
BDD Framework
Examples of Cucumber/SpecFlow/BDD test
Gherkin – BDD Language
The Problem
Example of Gherkin
The Conclusion
SpecFlow Feature File
Keywords for the Feature File creation
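The Gherkin keywords listed above come together in a feature file. The following is an illustrative example (not taken from the deck); the feature name, step text, and data are invented:

```gherkin
Feature: Account login
  As a registered user
  I want to log in with my credentials
  So that I can access my account

  Scenario: Successful login
    Given a registered user "alice" with password "s3cret"
    When the user logs in with "alice" and "s3cret"
    Then the user should see the account dashboard
```

The `Feature`, `Scenario`, `Given`, `When`, and `Then` lines (plus `And` and `But` for additional steps) are the keywords tools such as Cucumber and SpecFlow use to bind each step to executable code.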
This slide deck is for all QA members who want to understand the methodology of test case design. These slides are not just theory; they are designed based on experience.
What are the key drivers for automation? What are the challenges in agile automation, and how do you deal with them? How to automate? Who will automate? Which tool to select: commercial or open source? What to automate? Which features? Here is what our experience says.
Presented at GoTo Night Zurich, June 12 2014
Many teams struggle with the implementation of user story acceptance criteria and with establishing a shared understanding of the expected story outcomes. This results in missed stakeholder expectations and ad-hoc assumptions made by the team. High regression-testing effort and the lack of reliable documentation of the current system behavior are further problems resulting from an unstructured approach to defining and validating acceptance criteria.
In this session, you will learn how specification-by-example addresses these problems and increases the level of clarity on the project end-to-end. The presentation covers the theory and practical experience from real projects, with concrete implementation examples based on the Gherkin specification language, which can be used for automated specification validation (available for .NET, Java, Ruby, PHP, JavaScript).
You will leave this session with a fundamental understanding of specification-by-example and its benefits, as well as concrete pointers on how to get started using it in your own projects.
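As a sketch of how such automated specification validation works under the hood, here is a hand-rolled Given/When/Then runner in Python (an invented illustration; real projects would use SpecFlow, Cucumber, behave, or a similar framework rather than this). Step text is matched to code bindings by regular expression, just as those tools do:

```python
import re

# Registry of (pattern, function) step bindings (invented toy example).
STEPS = []

def step(pattern):
    """Decorator: register a binding whose step text matches `pattern`."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a cart with (\d+) items")
def given_cart(ctx, n):
    ctx["items"] = int(n)

@step(r"I add (\d+) items")
def when_add(ctx, n):
    ctx["items"] += int(n)

@step(r"the cart contains (\d+) items")
def then_contains(ctx, n):
    assert ctx["items"] == int(n)

def run_scenario(lines):
    """Execute Gherkin-style steps against the registered bindings."""
    ctx = {}
    for line in lines:
        _, _, text = line.partition(" ")   # drop Given/When/Then keyword
        for pattern, fn in STEPS:
            match = pattern.fullmatch(text)
            if match:
                fn(ctx, *match.groups())   # captured groups become arguments
                break
        else:
            raise LookupError("no binding for step: " + line)
    return ctx

ctx = run_scenario([
    "Given a cart with 2 items",
    "When I add 3 items",
    "Then the cart contains 5 items",
])
```

The key idea is that the specification text itself drives the test: if a `Then` step's assertion fails, the example in the specification is no longer true of the system.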
Nowadays the world of testing has shifted toward a continuous testing model. With increasing digital transformation and Agile and DevOps principles, the need to scale up quality initiatives becomes inevitable. Moreover, with increasing complexity and continuous integration cycles, testing frequently enough to release apps in a shorter time frame becomes the need of the hour to match the speed of Agile and DevOps.
Fast feedback loops and immediate responses allow businesses to adapt to changes in the market quicker than ever before. This is made possible with automation and continuous testing. But how do you achieve continuous testing?
In this webinar, our specialist Sushma Nayak discusses the present-day best practices in test automation at Knoldus and examines the key testing trends that focus on the adoption of continuous delivery and the evolution of test automation in the coming year.
Watch the video of this session on our website: https://www.knoldus.com/learn/webinars
A Crystal Clear assignment presented in 2013 in the Postgraduate Program in Software Engineering Centered on Agile Methods.
The Crystal family is a set of methodologies created by Alistair Cockburn. The methodology takes an approach focused on managing people. Because the Crystal family is very sensitive to human factors, it is deliberately not completely defined and must be adapted to each project. To choose which methodology to use, one should consider the number of people and the criticality of the project. Crystal Clear is a member of the Crystal family and is aimed at projects of two to eight people sitting in the same room or in nearby offices, so that everyone can communicate easily.
Waterfall vs Agile: A Beginner's Guide in Project Management (Jonathan Donado)
A beginner's guide to Waterfall and Agile methodologies and frameworks in project management.
Written in plain English for the non-tech-savvy reader.
Presentation by Jonathan Donado
Connect with me on Twitter @donadosays
Linkedin: https://www.linkedin.com/in/jonathandonado/
PMI / PMP / Agile / Business / Project Management / Project Manager / Waterfall
Be Fast on Your Feet: Kick Back and WATCH the Board (TechWell)
Have limited time monitoring complex projects? Need to be fast on your feet during your teams’ standups? It’s a daunting task to keep track of the current work in flight. Steve Dempsen shares a mnemonic technique—WATCH—to help you think of and articulate critical questions to ask on the fly. For story cards, remember W—Where is the card? Where should it be? A—What is the average time for a story this size? Are we on schedule? T—What is the status of testing? Test coverage and complexity? C—Is the story complete? Consistent? And H—Is help needed? Who should we turn to? With limited time and complex subjects, ScrumMasters can use each letter in WATCH to quickly help their teams remain aware of the key aspects of development and remain focused on delivering effective solutions.
Comcast XFINITY Home: An Agile Case Study (TechWell)
Today's mobile application development is a complex endeavor made more difficult by teams often working at cross purposes. Separation of roles and responsibilities leads to intricate technological and personnel dependencies that make projects challenging. Mark Hashimoto shares personal insights and lessons learned during the agile development effort of Comcast XFINITY Home iOS and Android mobile apps. Mark suggests that defining system interfaces first allows client, server, and test teams to develop in parallel; limiting mobile UX reviews to objective matters rather than subjective opinions builds trust and respect; creating binary acceptance criteria removes sprint completion ambiguity; and adhering to disciplined meeting goals reduces wasted time. However, not all lessons learned were of a technical or procedural nature. Mark describes the human dynamics involved and the most common frustrations facing your team—too many meetings, rework caused by ambiguous mobile requirements, missed deadlines, and problems that arise from a lack of time.
How Agile Can We Go? Lessons Learned Moving from Waterfall (TechWell)
How agile are you? Once you jump off the waterfall and drink from the agile pool, there will probably be varying opinions as to the state of the organization’s agility. Some will be concerned that they are not agile enough; others will think they are agile while still adhering to old waterfall principles. Adapting to agile requires process changes that can cause friction within and between teams. Max McGregor’s organization Venafi has several teams working on multiple projects, spread worldwide. Even after a number of software releases using agile methods, teams still have challenges. Max provides insight into one mid-sized organization’s evolution through this process—where it’s working well, what the biggest challenges are, and what’s being done to increase its success with agile. Join Max to determine how agile you can or should become, and take back new ideas and methods to your teams to help them succeed.
Building Agile Teams in a Global Environment (TechWell)
Many organizations use teams spread worldwide to develop valuable business applications. These organizations expect the teams to work as one harmonious unit without missing a beat—or should we say, a story point. A few organizations do it well; many not so well. Betsy Kauffman and Oscar Rodriquez share their experiences in working with globally distributed teams, discussing team models implemented in many organizations. They discuss how to transition from a model that may not be optimal (developers onshore and testing offshore) to a model where teams work together to deliver high quality working software regardless of their location. Along the way, explore “non-negotiables” and sustainable software engineering practices, i.e., DevOps and managing/maintaining solid team health, needed for building strong teams. Leave with a set of guiding principles you can implement day one that encompass agile leadership qualities, common sprint cadences, and “rules” to build strong successful teams.
User stories are the basis for products built using agile development. User stories are relatively short, comprised of enough information to start the development process, and designed to initiate further conversation about details. Short doesn’t necessarily mean useful. Ambiguous stories are “mysteries wrapped in an enigma”—potentially leading us to develop the wrong product. Phil Ricci explores ways to turn fuzzy user stories into sharply focused stories from their inception. That involves addressing questions of Are we talking with the right people? and Are we asking the right questions? Phil shares a four-step process—Review Description, Clarify User Role, Check for Discrepancies, Critically Review Acceptance Criteria—that sharpens the stories. Setting up a story maintenance schedule sponsored by the Product Owner with guidance from the ScrumMaster ensures that stories remain useful throughout their lifetime.
From Formal Test Cases to Session-Based Exploratory Testing (TechWell)
Agile software development is exciting, but what happens when your team is entrenched in older methodologies? Even with support from the organization, it is challenging to lead an organization through the transformation. As you start making smaller, more frequent releases, your manual test cases may not keep up, and your automated tests may not yet be robust enough to fill the gap. Add in the reality of shrinking testing resources, and it is obvious that change is required. But how and what should you change? Learn how Ron Smith and his team tackled these challenges by moving from a test case-driven approach to predominantly session-based exploratory testing, supported by “just enough” documentation. Discover how this resulted in testers who are more engaged, developers who increased their ability and willingness to test, and managers who increased their understanding and insight into the product. Use what you learn from Ron to begin the transformation in your organization.
Mobile Application Dev and QA Testing with Simulated Environments (TechWell)
Do you know that 63 percent of your users would be less likely to do business with you if they experience problems with your mobile application? To ensure a top-notch user experience, you need to conduct thorough testing on unpredictable network conditions—even if testing components are unavailable. Wayne Ariola describes an innovative strategy of using simulated test environments to bring the behavior of system dependencies and network conditions under your direct control. Simulated test environments draw on two key technologies for anywhere access to a complete and realistic test environment. First, service virtualization enables teams to emulate the behavior of myriad dependencies involved in end-to-end mobile application transactions. Second, mobile network virtualization adds the ability to emulate the performance of network bandwidth, latency, and jitter. With these tools in place, developers and testers can ensure that applications are validated extensively and accurately so your customers will experience great performance.
Test Automation: Investment Today Pays Back Tomorrow (TechWell)
The results of a recent survey, authored by IBM and TechWell, showed that testers want to spend more time automating, more time planning, and more time designing tests—and less time setting up test environments and creating test data. So, where should testers and their organizations invest their time and money to achieve the desired results? What is the right level of technical ability for today’s testers to be successful? As this ongoing debate continues, the simple answer remains: It depends. Join Al Wagner as he explores the many opportunities in the world of testing and test automation. Consider the many approaches for building your automated testing skills and the solutions you create, weighing the pros and cons of each. Explore the options for test and dev organizations to consider to speed up releases and deliver more value to their companies. Leave with the ability to determine which approaches make sense for you and your employer.
The Coming Avalanche of Wearable Mobile Apps (TechWell)
For better or for worse—like it or not—mobile wearables are already changing our lives. Mobile wearable devices form a new generation of personalized technology that knows us better than our closest friends do. How many of your friends know how far you walked or what you ate? The challenge for developing wearable applications is incorporating the proper context to add value potential users haven’t considered—while being sensitive to their privacy. In our future, devices will wake us up earlier because of the ice storm last night and contact the people we are meeting to warn them we could be late. Philip Lew explores the most important element of mobile/wearable user experience and customer experience―context. Using real-world examples, Phil breaks down context into the elements you can incorporate into your design and development projects. Learn the contextual elements you need to incorporate right now and identify key factors for future generations’ products.
Chris Loder shares how his team at Halogen Software has implemented Selenium in a framework that everyone in his company's R&D group can use. With an ever-increasing amount of manual regression testing, the team needed an easy-to-use automation framework. Chris presents an example of how the framework they developed at Halogen Software is used and, while doing so, shows parts of the supporting code that automation developers will find interesting. Written in Java, the framework is using Selenium in some pretty cool ways. Chris starts off with flexible run configurations and how they are built. Then the tests meet the code. Are you a fan of design patterns? They are in the framework and are shown and discussed. Need conditional waits in your automation? See how Chris and his team implement them with great success. Take home some great ideas for your own automation framework.
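Conditional waits like the ones Chris describes can be sketched framework-agnostically. Here is a minimal polling helper in Python (an illustrative sketch, not Halogen's actual code; in real Selenium suites, `WebDriverWait` with expected conditions plays this role):

```python
import time

def wait_until(condition, timeout=5.0, poll=0.05):
    """Repeatedly evaluate `condition` until it returns a truthy value.

    Returns that truthy value, or raises TimeoutError once `timeout`
    seconds have elapsed. `poll` is the sleep between evaluations.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result                  # condition met: hand back the value
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)                   # back off briefly before retrying
```

Used in a UI test, the condition would be something like "the element is visible", so the test proceeds as soon as the page is ready instead of sleeping for a fixed, worst-case duration.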
Wearing UX—When Our Clothes Become the Interface (TechWell)
With the interest in wearable technology exploding, UX practitioners and development teams need to focus on creating experiences that intuitively fit the rhythm and ecosystem of a user’s daily life. Unfortunately, much like what happened early on with mobile design, wearable UX designers seem to have unlearned many of the best practices and heuristics they employ on, for example, desktop design. Starting with a historical perspective on technology adoption and an assessment of where we are today, Jason Snook discusses challenges designers face with the varied interfaces and interactions associated with wearables. Join this session to explore key UX considerations, including interaction design, adoption theory, and the social aspects and stigmas that are important to realizing the full potential of wearable experiences.
Transform a Manual Testing Process to Incorporate Automation (TechWell)
Although most testing organizations have automation, it’s usually a subset of their overall efforts. Typically the processes for the department have been previously defined, and the automation team must adapt accordingly. The major issue is that test automation work and deliverables do not always fit into a defined manual testing process. Jim Trentadue explores what test automation professionals must do to be successful. These include understanding development standards for objects, structuring tests for modularity, and eliminating manual efforts. Jim reviews the revisions required to a V-model testing process to fuse in the test automation work. This requires changes to the manual testing process, specifically at the test plan and test case level. Learn the differences between automated and manual testing process needs, how to start a test automation process that ties into your overall testing process, and how to do a gap analysis for those actively doing automation, connecting better with the functional testing team.
Common System and Software Testing Pitfalls (TechWell)
In spite of many great testing “how-to” books, people involved with system and software testing—testers, requirements engineers, system/software architects, system and software engineers, technical leaders, managers, and customers—continue to make many different types of testing-related mistakes. Think of these commonly-occurring human errors as a system of software testing pitfalls. And when projects fall into these pitfalls, testing is less effective at uncovering defects, people are less productive when testing, and project morale is damaged. Donald Firesmith has collected more than 150 of these testing anti-patterns, organized them into twenty categories, and documented each with name, description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, recommendations for avoidance and mitigation, and related pitfalls. Donald introduces this repository of testing pitfalls, explains its many uses, and provides directions for accessing additional information including his associated “how-not-to test” book and website that documents pitfalls and identifies pitfall categories.
Real-Time Contextual and Social Relevance in Mobile (TechWell)
Personalized mobile user experience is a hot topic today because a smarter app will delight users, keep them coming back, and make your business stand out from the crowd. The extreme version of personalization is real-time contextual and social relevance. According to Jason Arbon, the contextual brain for your app is only a few API calls away. Based on lessons learned working on search relevance and personalization at Google, Bing, and a stealth mobile app startup, Jason describes the value, limitations, performance, and data-privacy of local and web services available today. He demonstrates practical examples of leveraging APIs such as Foursquare, Yelp, Google Places, Facebook, Location APIs (latlong + velocity), and Twitter. Then, Jason describes available natural language processing APIs such as NSLinguisticTagger and illustrates ways to use in-app usage data to improve an application’s contextual experience. Take away ideas for making your users happier—and you and your app look smarter.
A great deal of confusion surrounds the concepts of release automation, continuous integration, continuous delivery, and continuous deployment. Even some industry experts are confused about the differences. How these concepts work progressively to achieve high quality software delivery is generating a lot of discussion and controversy. Bryan Linder defines the methodology, processes, and tools associated with release automation, as well as the differences between its maturity levels. Understand the benefits of more frequent, smaller releases, and the exponential risk generated by large, infrequent releases. Hear highlights of industry case studies that demonstrate the substantial speed, quality, and ROI gains of improving your release automation process. Acquire the insight and motivation needed to take the next step—from wherever your organization is now—toward full release automation. Takeaways include a glossary of terms, a continuous integration tools comparison chart, and a release automation maturity chart.
A DevOps Approach for Building 100 iOS Apps (TechWell)
Apple and IBM forged a global partnership to transform enterprise mobility, which includes delivering 100 applications built exclusively for iOS devices. There are myriad challenges involved in producing that many mobile apps quickly—and with excellent user experience and quality. The team had to work smarter rather than simply throw more people at the project. Join Leigh Williamson as he discusses the DevOps techniques they implemented to accelerate their huge mobile development project: cloud hosted services for Xcode-driven continuous integration; an extended quality cycle for the mobile app once in production; and linked front-end/back-end deployments. Because integrating multiple tools from multiple vendors was unavoidable, they employed an automated pipeline for testing and integrating the code for 100 mobile apps. As the mobile landscape continues to evolve, the importance of continuously delivering engaging mobile apps integrated with your enterprise remains critical to everyone's success. Hear how one team met the challenge at scale.
Stop Maintaining Multiple Test Environments (TechWell)
Today, most of us struggle with non-production environments. Either the test data is not right or consistent, the dependencies are mismanaged, or “They just aren't quite like production.” Instead of striving for simpler environments, most organizations add test environments―pre-prod, UAT, stage, QAB, and so on. And they end up spending more and more time troubleshooting and maintaining environments rather than building and learning. It does not have to be this way. Joel Tosi shares his experience working with many large organizations in paths that start with DevOps and continuous delivery yet ultimately lead to the need to simplify test environments. Using simple examples and communication, Joel explains how teams should stop pushing applications through environments but rather pull them through tests. Leave with a fresh perspective on how you can simplify your testing strategies and ultimately stop creating and maintaining separate test environments.
There’s No Room for Emotions in Testing—Not! (TechWell)
Software testing is a highly technical, logical, rational task. There's no place for squishy emotional stuff here—not among professional testers. Or is there? Because of commitment, risk, schedule, and money, emotions often do run high in software development and testing. Our ideas about quality and bugs are rooted in our desires, which in turn are rooted in our feelings. People don't decide things based on the numbers; they decide based on how they feel about the numbers. It is easy to become frustrated, confused, or bored; angry, impatient, or overwhelmed. However, if we choose to be aware of our emotions and are open to them, feelings can be a powerful source of information for testers, alerting us to problems in the product and in our approaches to our work. You may laugh, you may cry...and you may be surprised as Michael Bolton discusses the important role that emotions play in excellent testing.
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
A Rapid Introduction to Rapid Software Testing – TechWell
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Michael Bolton introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems. The rapid approach isn't just testing with speed or a sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that the most important things get done, and constantly asks how testers can help speed up the successful completion of the project. Join Michael to see how rapid testing focuses on both the mindset and skill set of the individual tester, who uses tight loops of exploration and critical thinking skills to help continuously re-optimize testing to match clients' needs and expectations.
Using the Cloud to Load Test and Monitor Your Applications – TechWell
Load testing is often one of the most difficult testing efforts to set up—in both the time for deployment and the cost of the additional hardware needed. Using cloud-based software, you can transform this most difficult task into one of the easiest. Charles Sterling explains how load testing fits into the relatively new practice of DevOps. Then, by reusing the tests created in the load testing effort to monitor applications, the test team can help solve the challenges in measuring, monitoring, and diagnosing applications—not just in development and test but also in production. Chuck demonstrates web performance test creation, locally run load test creation, cloud-executed load testing, application performance monitoring (APM), global system monitoring (GSM), and usage monitoring (UM) for near real-time customer input for your application.
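The core loop of a load test like the ones Chuck describes can be sketched as a pool of concurrent virtual users collecting per-request latencies. In this minimal Python sketch, `fake_request` is a hypothetical stand-in for a real HTTP call to the system under test; a cloud load-testing service runs the same pattern at far larger scale, from many machines and locations at once.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request; returns the observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate roughly 10 ms of server response time
    return time.perf_counter() - start

def run_load_test(users, requests_per_user):
    """Run concurrent virtual users and collect every request's latency."""
    def user_session(_):
        return [fake_request() for _ in range(requests_per_user)]
    with ThreadPoolExecutor(max_workers=users) as pool:
        sessions = list(pool.map(user_session, range(users)))
    # flatten per-user sessions into one list of latencies
    return [latency for session in sessions for latency in session]

latencies = run_load_test(users=5, requests_per_user=4)
print(f"requests: {len(latencies)}, "
      f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
```

Reusing the same harness for monitoring, as the abstract suggests, amounts to running a much smaller `users` count continuously against production and alerting on the latency statistics.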
You name the testing topic, and Alan Page has an opinion on it, hands-on practical experience with it—or both. Spend the afternoon with Alan as he discusses a variety of topics, trends, and tales of software engineering and software testing. In an interactive format loosely based on discovering new testing ideas—and bringing new life to some of the old ideas—Alan shares experiences and stories from his twenty-year career as a software tester. Topics may include philosophical rants about code coverage and test pass rates; thoughts on the developer/tester relationship and quality ownership; and insights on test leadership and the real future of test. Join Alan for a unique opportunity to participate in intriguing discussions about testing that will expand your testing knowledge, give you the insight you need to grow your own career, and help your organization succeed.
The design secrets behind Slack’s amazing success – UserTesting
Tina Chen, Design Lead at Slack, takes us behind the scenes to share the design processes at Slack. She’ll talk about what it's like to design at a company that’s growing rapidly, and walk us through a recent project that gave apps and bots the ability to interact more closely with users. We’ll also have a Q&A session with Tina after her presentation.
Candid Conversations With Product People: Using Continuous Customer Testing f... – Aggregage
After weathering recessions with a wide range of iconic customers, CEO and Product Manager Luke Freiler has seen firsthand the impact the Voice of the Customer has had in making or breaking tech companies during hard times. He's going to walk you through an agile process for continuous customer testing that saves you time and gives you full confidence in your products — no matter how many you're sending out the door this year.
Apply Phil Jackson’s Coaching Principles to Build Better Agile Teams – TechWell
Often referred to as the “Zen Master” for his unorthodox coaching style, professional basketball coach Phil Jackson won more professional sports championships than any other coach in history. Jackson led the Chicago Bulls and Los Angeles Lakers to a total of eleven NBA championships, but rather than studying and following the strategies of other coaches, Jackson developed a set of coaching principles aligned with his personal beliefs. Dion Stewart believes that agile coaches can learn a lot from Jackson’s focus on selfless teamwork, mindfulness, compassion, and ritual rather than simply coaching by ensuring teams are adhering to an agile process. Dion explores how coaches can help software teams build trust, create team unity, find freedom by minimizing process, and enhance performance by creating the best possible conditions for success and then letting go. Learn how Dion coaches teams and individuals to improved levels of performance, and discover how you, as a coach, and your software teams can improve by applying Phil Jackson’s coaching principles.
David Bernstein says that the core of Extreme Programming (XP) comprises five development practices: automating the build for continuously integrating software as it is written, collaborating with team members through pair programming, practicing agile design skills that enable testability, using test-first development to drive design, and refactoring code to reduce technical debt. Together, these five technical practices are essential for sustained success with XP and for many of the best agile teams. However, quite a few agile teams haven’t been exposed to some or all of these practices. David explores these XP practices, discusses how to use them to reduce risk, and explains how to build quality in at every level of the development process. He makes the business case for these technical practices by showing how they address the inherent risks and challenges in building software. David then looks at how practices from XP address the core issues of software development by helping us “build the right thing” and “build the thing right.”
In this webinar, we'll take a deep dive into:
An overview of current development lifecycles
Why Test-First Methodologies
BDD/ATDD vs TDD
Typical problems with adopting BDD/ATDD from a more Test-First approach
How to overcome adoption challenges with people, processes and tools.
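As a concrete anchor for the test-first point in the list above, here is a minimal sketch using Python's `unittest` (the `ticket_price` function and its pricing rules are invented for illustration). In TDD the failing tests are written first, and the function is then implemented to make them pass.

```python
import unittest

def ticket_price(age):
    """Return the ticket price for a visitor of the given age.

    Written after the tests below, to make them pass (test-first)."""
    if age < 5:
        return 0    # small children enter free
    if age < 18:
        return 10   # reduced rate for minors
    return 20       # full adult price

class TicketPriceTest(unittest.TestCase):
    # In a test-first workflow these tests exist before ticket_price does.
    def test_small_children_enter_free(self):
        self.assertEqual(ticket_price(3), 0)

    def test_minors_pay_reduced_rate(self):
        self.assertEqual(ticket_price(12), 10)

    def test_adults_pay_full_price(self):
        self.assertEqual(ticket_price(30), 20)

# Run the suite explicitly (unittest.main() would exit the interpreter).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TicketPriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

BDD/ATDD, as contrasted in the list, keeps the same test-first rhythm but expresses the examples in business-readable language agreed with stakeholders before any code is written.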
“Testing” in an agile environment is much different from classic testing on waterfall projects. Testers must be involved in all aspects of software development. Jeroen Mengerink shows you how professional testers can become key contributors in agile projects. First, he explains how to pair with and help the members of your agile team by identifying the test skills each of them needs to learn for the team to create a better quality product. Because agile development starts with user stories, there is an increased importance of end-to-end testing. Jeroen shows how to use mind mapping to provide insight into how to test an end-to-end flow. Performing risk analysis allows you to start testing as soon as the code becomes available. Finally, he discusses ways to monitor your testing to make sure you have a lean test strategy that reduces rework and waste. Welcome the changes that agile provides, but don’t forget the lessons and experiences from your past.
Agile Success with Scrum: It’s All about the People – TechWell
Is it possible to be doing everything Scrum says to do and still fail horribly? Unfortunately, the answer is yes—and teams do it every day. To many, Scrum means concentrating on the meetings and artifacts, and making sure the roles all do their jobs. Bob Hartman and Michael Vizdos explore why success with Scrum means understanding the people who do the work and giving them the tools and environment to do their best in a meaningful way. Drawing from their experiences as agile coaches and Certified Scrum Trainers, Bob and Michael help you better understand and practice the people side of Scrum. They explain ways that the Agile Manifesto interlocks with the five key Scrum people values—commitment, focus, openness, respect, and courage—and relates those values to lean software development principles. By focusing on the people side of Scrum and the lean principles they share, you can transform your Scrum teams into the best they can be.
Experiment Your Way to Product Success: The Science of High-Impact Experiment... – Aggregage
Experimentation allows product managers to make decisions based on data rather than mere intuition. But too many teams don't know what to test, which leads to poorly designed experiments and unclear results. How can a product manager be certain they’re making effective decisions when it comes to experimentation?
Join Holly Hester-Reilly, Founder and Product Management Coach at H2R Product Science, as she shares her approach to high-impact experimentation. She’ll walk us through the entire process, from deciding what to test to sharing the results with stakeholders, to illustrate what strong experimentation practices look like and how they can be implemented in every organization.
Old Products, New Tricks: Driving Discovery and Experimentation in Your Organ... – Aggregage
If you want to build what matters, you can't move forward blindly. But to make progress, you can't let things slow to a crawl while you focus resources on gathering data. This is where continuous discovery and experimentation come in.
Join Teresa Torres (Product Discovery Coach, Product Talk), David Bland (Founder, Precoil), and Hope Gurion (Product Coach and Advisor, Fearless Product) in a panel discussion as they cover how - and why - to build a culture of discovery and experimentation in your organization.
Do you ever feel you have lost confidence in your own abilities? Why does this happen? Isabel Evans spends a lot of time painting. Someone once commented, “Why are you doing this, when you are not very good at it?” Gradually she stopped drawing and painting, intimidated by a conventional vision of what good art should look like. At the same time, she experienced a parallel loss of confidence in her professional abilities. Attempting creative pursuits like drawing and painting is essential to our cognitive, emotional, and creative abilities, and she began to understand the correlation between her creative activities and her confidence. Making errors, being wrong, failing – that is a generous gift we receive when we practice outside our skill level. By staying in a comfort zone and repeating successes, we stagnate. As Isabel started to create again, she thought, “I don’t feel good at it, but I do feel good doing it.” The difference was that she was learning and having ideas. Through the act of re-engaging with failure, together with the comradeship of friends and colleagues, including at Women Who Test, Isabel has regained confidence in her professional abilities and been able to reboot her career and joy. Join Isabel to share a journey from self-perceived failure to recovery and renewed learning.
Instill a DevOps Testing Culture in Your Team and Organization – TechWell
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to assess your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify the steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture – TechWell
Imagine this … As soon as any developed functionality is submitted into the code repository, it is automatically subjected to the appropriate battery of tests and then released straight into production. Setting up the pipeline capable of doing just that is becoming more and more common and something you need to know about. But most organizations hit the same stumbling block—just what IS the appropriate battery of tests? Automated build architectures don't always lend themselves well to the traditional stages of testing. In this hands-on tutorial, Melissa Benua introduces you to key test design principles—applicable to organizations both large and small—that allow you to take full advantage of the pipeline's capabilities without introducing unnecessary bottlenecks. Learn how to make highly reliable tests that run fast and preserve just enough information to let testers and developers determine exactly what went wrong and how to reproduce the error locally. Explore ways to reduce overlap while still maintaining adequate test coverage. Take back ideas about which test areas could benefit from being combined into a single suite and which areas could benefit most from being broken out altogether.
System-Level Test Automation: Ensuring a Good Start – TechWell
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
Build Your Mobile App Quality and Test Strategy – TechWell
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for Success – TechWell
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we run the tests against the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
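The feature files Mary describes are plain-text Gherkin specifications that Cucumber and SpecFlow execute against step definitions. A minimal, hypothetical sketch (the feature, scenarios, and amounts are all invented for illustration):

```gherkin
# Hypothetical feature file: names, steps, and amounts are invented.
Feature: Account withdrawal
  As an account holder
  I want to withdraw cash
  So that I can pay with money from my account

  Scenario: Successful withdrawal within balance
    Given my account balance is $100
    When I withdraw $40
    Then my remaining balance should be $60

  Scenario: Withdrawal exceeding balance is rejected
    Given my account balance is $100
    When I withdraw $150
    Then the withdrawal should be rejected
    And my remaining balance should be $100
```

Because the file reads as plain business language, stakeholders can review and correct the scenarios before any automation code is written, which is the collaboration bridge the abstract describes.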
Develop WebDriver Automated Tests—and Keep Your Sanity – TechWell
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
Eliminate Cloud Waste with a Holistic DevOps Strategy – TechWell
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOps – TechWell
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, multidimensional workforce enablement supported by infrastructure changes, redeveloped collaboration models, and more. From his real-world experience, Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves and lead quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership – TechWell
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams – TechWell
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is, people with unique skills. Although teams composed entirely of T-shaped people are ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game – TechWell
Metrics don’t have to be a necessary evil. If done right, metrics can guide us to make better forward-looking decisions rather than being used simply for managing or monitoring. They can help us identify trade-offs between options for what to do next, rather than acting as punitive or, worse, purely managerial measures. Steve Martin won’t be giving a Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home the characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take back this activity to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams – TechWell
A hierarchy is an organizational network that has a top and a bottom, where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom, where each person’s value derives from their ability rather than their position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high-performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation – TechWell
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery Process – TechWell
DevOps is transforming software development with many organizations adopting lean development practices, implementing continuous integration (CI), and performing regular continuous deployment (CD) to their production environments. However, the database is largely ignored and often seen as a bottleneck in the DevOps process. Steve Jones discusses the challenges of database development and why many developers find the database to be an impediment to the CD process. Steve shares the techniques you can use to fit a database into the DevOps process. Learn how to store database code in a version control system, and the differences between that and application code. Steve demonstrates a CI process with SQL code and uses automated testing frameworks to check the code. Steve then shows how automated releases with manual gates can reduce the stress and risk of database deployments while ensuring consistent, reliable, repeatable releases to QA, UAT, and production.
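One way to picture the version-control idea Steve describes is a migrations script: each schema change is a named, ordered entry checked into source control, and an applied-migrations table makes deployments repeatable across QA, UAT, and production. This is a minimal sketch using Python's built-in sqlite3; the table and migration names are invented, and a real pipeline would typically use a dedicated migration tool rather than hand-rolled code like this.

```python
import sqlite3

# Hypothetical migrations, stored in version control in a fixed order.
MIGRATIONS = [
    ("001_create_customers",
     "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"),
    ("002_add_email_column",
     "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any migrations not yet recorded in schema_version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, ddl in MIGRATIONS:
        if name not in applied:  # each migration runs exactly once
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # rerunning is a no-op, so releases are repeatable
columns = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
```

Because `migrate` is idempotent, the same script can run in every CI build and in each environment gate, which is the "consistent, reliable, repeatable releases" property the abstract highlights.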
Mobile Testing: What—and What Not—to Automate – TechWell
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for Success – TechWell
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent place with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile Transformation – TechWell
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can now be tracked by the minute and packages at every stop, and customers expect the same customer service model from all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling to gain traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage. Finally, he communicates how to gain buy-in from business partners who have no idea about or concern for agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through approaches to overcoming agile skepticism.
Scale: The Most Hyped Term in Agile Development Today - TechWell
Scrum is everywhere. More than 90 percent of agile teams use it. But for many organizations wanting to scale agile, one team using Scrum is not enough. Dave West says the Nexus Framework, created by Ken Schwaber, the co-creator of Scrum, provides an exoskeleton for Scrum. Nexus allows multiple teams to work together to produce an integrated increment regularly. It addresses the key challenges of scaling agile development by adding new yet minimal events, artifacts, and roles to the Scrum framework. Dave discusses Nexus, addresses its boundaries, and explains what else is needed for agile to thrive in an organization. Dave explores how organizations have transitioned to agile, and examines their successes and challenges in implementing Scrum, how they envision scaling with Nexus, and goals for creating a Scrum Studio.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains come only when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio’s cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
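As background for the integration covered in the topics above: JMeter ships metrics to InfluxDB using InfluxDB's line protocol. The sketch below builds one such line-protocol record by hand so the wire format is visible; the measurement, tag, and field names here are illustrative assumptions, not the listener's actual defaults.

```python
def influx_line(measurement, tags, fields, timestamp_ns):
    """Build one InfluxDB line-protocol record:
    measurement,tag=v,... field=v,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"  # ints get an 'i' suffix
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# One aggregated sample for a hypothetical "login" transaction.
line = influx_line(
    "jmeter",
    {"application": "demo", "transaction": "login"},
    {"count": 42, "avg": 123.5},
    1700000000000000000,
)
print(line)
```

In practice the JMeter Backend Listener batches many such lines and POSTs them to InfluxDB's write endpoint, which Grafana then queries to render dashboards.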
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
When stars align: studies in data quality, knowledge graphs, and machine lear...
Critical Thinking for Software Testers
1. TA
Full Day Tutorial
10/14/2014 8:30:00 AM
"Critical Thinking for Software Testers"
Presented by:
Michael Bolton
DevelopSense
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Michael Bolton
DevelopSense
Tester, consultant, and trainer Michael Bolton is the coauthor (with James Bach) of Rapid
Software Testing, a course that presents a methodology and mindset for testing software
expertly in uncertain conditions and under extreme time pressure. A leader in the context-driven
software testing movement, Michael has twenty years of experience testing, developing,
managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based
consultancy. Prior to DevelopSense, he was with Quarterdeck Corporation, where he managed
the company’s flagship products and directed project and testing teams—both in-house and
worldwide. Contact Michael at michael@developsense.com.
6. Critical Thinking for Testers Michael Bolton and James Bach
Beware of
Shallow Agreement!
Wait, let’s try something really simple…
Reflex is IMPORTANT
But Critical Thinking is About Reflection
REFLEX (System 1): faster, looser.
REFLECTION (System 2): slower, surer; gets more data.
See Thinking, Fast and Slow, by Daniel Kahneman
7.
The Nature of Critical Thinking
• “Critical thinking is purposeful, self‐regulatory
judgment which results in interpretation,
analysis, evaluation, and inference, as well as
explanation of the evidential, conceptual,
methodological, criteriological, or contextual
considerations upon which that judgment is
based.” ‐ Critical Thinking: A Statement of Expert Consensus for Purposes
of Educational Assessment and Instruction, Dr. Peter Facione
(Critical thinking is, for the most part, about getting all the benefits of
your “System 1” thinking reflexes while avoiding self‐deception and
other mistakes.)
Bolton’s Definition of Critical Thinking
• Michael Bolton
Testing is enactment of critical thinking about software.
Critical thinking must begin with our belief in the
likelihood of errors in our thinking.
9.
Don’t Be A Turkey
• Every day the turkey adds one more data
point to his analysis proving that the farmer
LOVES turkeys.
• Hundreds of observations
support his theory.
• Then, a few days before
Thanksgiving…
Based on a story told by Nassim Taleb, who stole it from Bertrand
Russell, who stole it from David Hume.
Graph of My Fantastic Life! Page 25!
(by the most intelligent Turkey in the world)
WellBeing!
DATA
ESTIMATED
POSTHUMOUSLY
AFTER THANKSGIVING
“Corn meal a little off
today!”
Don’t Be A Turkey
• No experience of the past can LOGICALLY be
projected into the future, because we have no
experience OF the future.
• No big deal in a world of
stable, simple patterns.
• BUT SOFTWARE IS NOT
STABLE OR SIMPLE.
• “PASSING” TESTS CANNOT
PROVE SOFTWARE GOOD.
Based on a story told by Nassim Taleb, who stole it from Bertrand
Russell, who stole it from David Hume.
16.
Levels of Assumptions
• Reckless: assumptions that are too risky regardless of how they are managed. Obviously bad assumptions. Don’t make them.
• Risky: assumptions that might be wrong or cause trouble, but can be okay with proper management. If you use them, declare them.
• Safe: assumptions that are acceptable to make without any special management or declaration, but still might cause trouble.
• Obvious: assumptions so safe that they cause trouble only IF you manage them, because people will think you are joking, crazy, or insulting.
It is silly to say “don’t make assumptions.”
Instead, say “let’s be careful about risky assumptions.”
Exercise
What makes an assumption more dangerous?
• Not “what specific assumptions are more
dangerous?”…
• But “what factors would make one
assumption more dangerous than another?”
• Or “what would make the same assumption
more dangerous from one time to another?”
17.
What makes an assumption more dangerous?
1. Consequential: required to support critical plans and activities. (Changing the
assumption would change important behavior.)
2. Unlikely: may conflict with other assumptions or evidence that you have. (The
assumption is counter‐intuitive, confusing, obsolete, or has a low probability of
being true.)
3. Blind: regards a matter about which you have no evidence whatsoever.
4. Controversial: may conflict with assumptions or evidence held by others. (The
assumption ignores controversy.)
5. Impolitic: expected to be declared, by social convention. (Failing to disclose the
assumption violates law or local custom.)
6. Volatile: regards a matter that is subject to sudden or extreme change. (The
assumption may be invalidated unexpectedly.)
7. Unsustainable: may be hard to maintain over a long period of time. (The
assumption must be stable.)
8. Premature: regards a matter about which you don’t yet need to assume.
9. Narcotic: any assumption that comes packaged with assurances of its own safety.
10. Latent: otherwise critical assumptions that we have not yet identified and dealt
with. (The act of managing assumptions can make them less critical.)
What are We Seeing Here?
• Mental models and modeling are often
dominated by unconscious factors.
• Familiar environments and technologies allow
us to “get by” on memory and habit.
• Social conventions may cause us to value
politeness over doing our disruptive job.
• Lack of pride and depth in our identity as
testers saps our motivation to think better.
18.
• You may not understand. (errors in interpreting
and modeling a situation, communication errors)
• What you understand may not be true. (missing
information, observations not made, tests not run)
• You may not know the whole story. (perhaps what
you see is not all there is)
• The truth may not matter, or may matter much
more than you think. (poor understanding of risk)
How to Think Critically:
Introducing Pauses
Giving System 2 time to wake up!
Huh?
Really?
And?
So?
To What Do We Apply Critical Thinking?
• The Product
• What it is
• Descriptions of what it is
• Descriptions of what it does
• Descriptions of what it's
supposed to be
• Testing
• Context
• Procedures
• Coverage
• Oracles
• Strategy
• The Project
• Schedule
• Infrastructure
• Processes
• Social orders
• Words
• Language
• Pictures
• Problems
• Biases
• Logical fallacies
• Evidence
• Causation
• Observations
• Learning
• Design
• Behavior
• Models
• Measurement
• Heuristics
• Methods
• …
19.
“Huh?”
Critical Thinking About Words
• Among other things, testers question premises.
• A suppressed premise is an unstated premise that
an argument needs in order to be logical.
• A suppressed premise is something that should
be there, but isn’t…
• (…or is there, but it’s invisible or implicit.)
• Among other things, testers bring suppressed
premises to light and then question them.
• A diverse set of models can help us to see the
things that “aren’t there.”
Example: Generating Interpretations
• Selectively emphasize each word in a
statement; also consider alternative meanings.
MARY had a little lamb.
Mary HAD a little lamb.
Mary had A little lamb.
Mary had a LITTLE lamb.
Mary had a little LAMB.
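The word-by-word emphasis trick above is mechanical enough to automate. A minimal sketch that generates one interpretation per word (the function name is mine, not from the tutorial):

```python
def interpretations(sentence):
    """Return one variant per word, with that word emphasized in capitals."""
    words = sentence.split()
    return [
        " ".join(w.upper() if j == i else w for j, w in enumerate(words))
        for i in range(len(words))
    ]

for variant in interpretations("Mary had a little lamb."):
    print(variant)  # MARY had a little lamb. / Mary HAD a little lamb. / ...
```

Reading each variant aloud, stressing the capitalized word, surfaces a different implied question about the statement.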
24.
Treating absolute statements as a
heuristic helps to defend you
against critical thinking errors.
And yes, that’s a heuristic.
Heuristic Model:
The Four‐Part Risk Story
• Victim: Someone who experiences the impact of a problem.
Ultimately no bug can be important unless it victimizes a human.
• Problem: Something the product does that we wish it wouldn’t do.
• Vulnerability: Something about the product that causes or allows
it to exhibit a problem, under certain conditions.
• Threat: Some condition or input external to the product that, were it
to occur, would trigger a problem in a vulnerable product.
Some person may be hurt or annoyed
because of something that might go wrong
while operating the product,
due to some vulnerability in the product
that is triggered by some threat.
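As an illustration (not from the slides), the four-part risk story can be sketched as a tiny data structure whose parts compose into the sentence pattern above; every field value below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskStory:
    """Four-part risk story heuristic: victim, problem, vulnerability, threat."""
    victim: str
    problem: str
    vulnerability: str
    threat: str

    def tell(self):
        # Compose the parts into the slide's sentence pattern.
        return (f"{self.victim} may be hurt or annoyed because {self.problem}, "
                f"due to {self.vulnerability}, "
                f"which is triggered by {self.threat}.")

story = RiskStory(
    victim="An online shopper",
    problem="the checkout total silently drops the shipping charge",
    vulnerability="an unchecked rounding step in the pricing module",
    threat="a cart containing more than 99 items",
)
print(story.tell())
```

Forcing yourself to fill in all four fields is the point of the heuristic: a bug report missing a victim or a threat is an incomplete risk story.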
26.
Remember this, you testers!
Models Link Observation and Inference
• A model is an idea, activity, or object…
• …that represents another idea, activity, or object…
• …whereby understanding the model may help you understand
or manipulate what it represents.
such as an idea in your mind, a diagram, a list of words, a spreadsheet, a
person, a toy, an equation, a demonstration, or a program
such as something complex that you need to work with or study.
‐ A map helps navigate across a terrain.
‐ 2+2=4 is a model for adding two apples to a basket that already has two apples.
‐ Atmospheric models help predict where hurricanes will go.
‐ A fashion model helps understand how clothing would look on actual humans.
‐ Your beliefs about what you test are a model of what you test.
27.
Models Link Observation & Inference
• Testers must distinguish
observation from inference!
• Our mental models form the
link between them
• Defocusing is lateral thinking.
• Focusing is logical (or “vertical”)
thinking.
[Diagram: “My model of the world” links observation (“I see…”) to inference (“I believe…”)]
Modeling Bugs as Magic Tricks
• Our thinking is limited
• We misunderstand probabilities
• We use the wrong heuristics
• We lack specialized knowledge
• We forget details
• We don’t pay attention to the right things
• The world is hidden
• states
• sequences
• processes
• attributes
• variables
• identities
Magic tricks work for the same reasons that bugs exist.
Studying magic can help you develop the imagination to find better bugs.
Testing magic is indistinguishable from testing sufficiently advanced technology.
28.
Observation vs. Inference
• Observation and inference are easily confused.
• Observation is direct sensory data, but on a very low level it is
guided and manipulated by inferences and heuristics.
• You sense very little of what there is to sense.
• You remember little of what you actually sense.
• Some things you think you see in one instance may be
confused with memories of other things you saw at other
times.
• It’s easy to miss bugs that occur right in front of your eyes.
• It’s easy to think you “saw” a thing when in fact you merely
inferred that you must have seen it.
Observation vs. Inference
• Accept that we’re all fallible, but that we can learn to be better
observers by learning from mistakes.
• Pay special attention to incidents where someone notices something
you could have noticed, but did not.
• Don’t strongly commit to a belief about any important evidence
you’ve seen only once.
• Whenever you describe what you experienced, notice where you’re
saying what you saw and heard, and where you are instead jumping
to a conclusion about “what was really going on.”
• Where feasible, look at things in more than one way, and collect more
than one kind of information about what happened (such as repeated
testing, paired testing, loggers and log files, or video cameras).
30.
Guideword Heuristics for Diagram Analysis
What’s there? What happens? What could go wrong?
Boxes
• Interfaces (testable)
• Missing/Drop‐out
• Extra/Interfering/Transient
• Incorrect
• Timing/Sequencing
• Contents/Algorithms
• Conditional behavior
• Limitations
• Error Handling
Lines
• Missing/Drop‐out
• Extra/Forking
• Incorrect
• Timing/Sequencing
• Status Communication
• Data Structures
[Diagram: Browser → Web Server → App Server → Database Layer]
Paths
• Simplest
• Popular
• Critical
• Complex
• Pathological
• Challenging
• Error Handling
• Periodic
Testability!
Visualizing Test Coverage: Annotation
[Annotated diagram: Browser, Web Server, App Server, and Database Layer, with test-coverage annotations: Build, Error Monitor, Survey, Coverage analysis, Force fail, Man-in-middle, Data generator, Build stressbots, Server stress, Performance data, Inspect reports, Table consistency oracle, Datagen Oracle, Performance history, Build history oracle, History oracle, Review Error Output]
31.
Beware Visual Bias!
• setup
• browser type & version
• cookies
• security settings
• screen size
• review client-side scripts & applets
• usability
• specific functions
[Diagram: Browser, Web Server, App Server, Database Layer]
Testing against requirements
is all about modeling.
“The system shall operate at an input voltage range
of nominal 100 ‐ 250 VAC.”
How do you test this?
Poor answer:
“Try it with an input voltage in the range of 100-250.”
34.
Kaner & Bond’s Tests for Construct Validity
from http://www.kaner.com/pdfs/metrics2004.pdf
• What is the purpose of your measurement? The scope?
• What is the attribute you are trying to measure?
• What are the scale and variability of this attribute?
• What is the instrument you’re using? What is its scale and
variability?
• What function (metric) do you use to assign a value to the
attribute?
• What’s the natural scale of the metric?
• What is the relationship of the attribute to the metric’s value?
• What are the natural, foreseeable side effects of using this
measure?
The essence of good measurement is a model that incorporates
answers to questions like these.
If you don’t have solid answers, you aren’t doing measurement;
you are just playing with numbers.
Test Framing
• Test framing is the set of logical connections
that structure and inform a test and its result
• The framing of a test consists of
– premises; essentially ordinary statements
– logical “connectors”
• formal: if, then, else, and, or
• informal: although, maybe,
• A change in ONE BIT in the framing of the test
can invert its result.
35.
Exercise:
Test Framing
• “I performed the tests. All my tests passed.
Therefore, the product works.”
• “The programmer said he fixed the bug. I can’t
reproduce it anymore. Therefore it must be
fixed.”
• “Microsoft Word frequently crashes while I am
using it. Therefore it’s a bad product.”
• “It’s way better to find bugs earlier than to find
them later.”
Safety Language
(aka “epistemic modalities”)
• “Safety language” in software testing means to qualify
or otherwise draft statements of fact so as to avoid
false confidence.
• Examples (ways to qualify “The feature worked”):
So far…
It seems…
I think…
It appears…
Apparently…
I infer…
I assumed…
I have not yet seen any failures in the feature…
36.
Who Says?
Critical Thinking About Research
• Research varies in quality
• Research findings often contradict one another
• Research findings do not prove conclusions
• Researchers have biases
• Writers and speakers may simplify or distort
• “Facts” change over time
• Research happens in specific environments
• Human desires affect research outcomes
Asking the Right Questions: A Guide to Critical Thinking
M. Neil Browne & Stuart M. Keeley
ALL of these things apply to testing, too.
Critical Thinking About
Common Beliefs About Testing
• Every test must have an expected, predicted result.
• Effective testing requires complete, clear, consistent,
and unambiguous specifications.
• Bugs found earlier cost less to fix than bugs found later.
• Testers are the quality gatekeepers for a product.
• Repeated tests are fundamentally more valuable.
• You can’t manage what you can’t measure.
• Testing at boundary values is the best way to find bugs.
37.
Critical Thinking About
Common Beliefs About Testing
• Test documentation is needed to deflect legal liability.
• The more bugs testers find before release, the better
the testing effort has been.
• Rigorous planning is essential for good testing.
• Exploratory testing is unstructured testing, and is
therefore unreliable.
• Adopting best practices will guarantee that we do a
good job of testing.
• Step by step instructions are necessary to make testing
a repeatable process.
Critical thinking about practices
What does “best practice” mean?
• Someone: Who is it? What do they know?
• Believes: What specifically is the basis of their belief?
• You: Is their belief applicable to you?
• Might: How likely is the suffering to occur?
• Suffer: So what? Maybe it’s worth it.
• Unless: Really? There’s no alternative?
• You do this practice: What does it mean to “do” it? What does it
cost? What are the side effects? What if you do it badly? What if
you do something else really well?
38.
Beware of…
• Numbers: “We cut test time by 94%.”
• Documentation: “You must have a written plan.”
• Judgments: “That project was chaotic. This project was a success.”
• Behavior Claims: “Our testers follow test plans.”
• Terminology: Exactly what is a “test plan?”
• Contempt for Current Practice: CMM Level 1 (initial) vs.
CMM Level 2 (repeatable)
• Unqualified Claims: “A subjective and unquantifiable requirement
is not testable.”
Look For…
• Context: “This practice is useful when you want the power of creative
testing but you need high accountability, too.”
• People: “The test manager must be enthusiastic and a real hands‐on
leader or this won’t work very well.”
• Skill: “This practice requires the ability to tell a complete story about
testing: coverage, techniques, and evaluation methods.”
• Learning Curve: “It took a good three months for the testers to get
good at producing test session reports.”
• Caveats: “The metrics are useless unless the test manager holds daily
debriefings.”
• Alternatives: “If you don’t need the metrics, you can ditch the daily
debriefings and the specifically formatted reports.”
• Agendas: “I run a testing business, specializing in exploratory testing.”
40.
The Regression Testing Reality
“We run a smattering of old checks
to ensure that they still find no bugs...
And we assume that any bug not found
is also not important.”
Critical Thinking About Processes
• This is a description of a bug investigation process
that a particular company uses. Does it make sense?
See James Bach, Investigating Bugs: A Testing Skills Study
http://www.satisfice.com/articles/investigating‐bugs.pdf
43.
Some Common Thinking Errors
• Fundamental Attribution Error
– “it always works that way”; “he’s a jerk”
– failure to recognize that circumstance and context
play a part in behaviour and effects
• The Similarity‐Uniqueness Paradox
– “all companies are like ours”; “no companies are like
ours”
– failure to consider that everything incorporates
similarities and differences
• Missing Multiple Paths of Causation
– “A causes B” (even though C and D are also required)
Some Common Thinking Errors
• Assuming that effects are linear with causes
– “If we have 20% more traffic, throughput will slow
by 20%”
– this kind of error ignores non‐linearity and
feedback loops—c.f. general systems
• Reactivity Bias
– the act of observing affects the observed
– a.k.a. “Heisenbugs”, the Hawthorne Effect
• The Probabilistic Fallacy
– confusing unpredictability and randomness
– after the third hurricane hits Florida, is it time to
relax?
44.
Some Common Thinking Errors
• Binary Thinking Error / False Dilemmas
– “all manual tests are bad”; “that idea never works”
– failure to consider gray areas; belief that something is
either entirely something or entirely not
• Unidirectional Thinking
– expresses itself in testing as a belief that “the
application works”
– failure to consider the opposite: what if the
application fails?
– to find problems, we need to be able to imagine that
they might exist
Some Common Thinking Errors
• Availability Bias
– the tendency to favor prominent or vivid instances in
making a decision or evaluation
– example: people are afraid to fly, yet automobiles are
far more dangerous per passenger mile
– to a tech support person (or to some testers), the
product always seems completely broken
– spectacular failures often get more attention than
grinding little bugs
• Confusing concurrence with correlation
– “A and B happen at the same time; they must be
related”
45.
Some Common Thinking Errors
• Nominal Fallacies
– believing that we know something well because we can
name it
• “equivalence classes”
– believing that we don’t know something because we
don’t have a name for it at our fingertips
• “the principle of concomitant variation”; “inattentional
blindness”
• Evaluative Bias of Language
– failure to recognize the spin of word choices
– …or an attempt to game it
– “our product is full‐featured; theirs is bloated”
Some Common Thinking Errors
• Selectivity Bias
– choosing data (beforehand) that fits your preconceptions
or mission
– ignoring data that doesn’t fit
• Assimilation Bias
– modifying the data or observation (afterwards) to fit the
model
– grouping distinct things under one conceptual umbrella
– Jerry Weinberg refers to this as “lumping”
– for testers, the risk is in identifying setup, pinpointing,
investigating, reporting, and fixing as “testing”
46.
Some Common Thinking Errors
• Narrative Bias
– a.k.a. “post hoc, ergo propter hoc”
– explaining causation after the facts are in
• The Ludic Fallacy
– confusing complex human activities with random,
roll‐of‐the‐dice games
– “Our project has a two‐in‐three chance of success”
• Confusing correlation with causation
– “When I change A, B changes; therefore A must be
causing B”
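The "correlation is not causation" bullet above can be illustrated with a toy simulation (a hypothetical sketch, not from the slides): a hidden common cause C drives both A and B, so A and B are strongly correlated even though neither causes the other.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Hidden common cause C drives both A and B; A never touches B.
c = [random.gauss(0, 1) for _ in range(1000)]
a = [ci + random.gauss(0, 0.3) for ci in c]
b = [ci + random.gauss(0, 0.3) for ci in c]

r = pearson(a, b)
print(f"correlation(A, B) = {r:.2f}")  # high, yet A does not cause B
```

Changing A here would do nothing to B; only an intervention on A (not an observed correlation) can test causation.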
Some Common Thinking Errors
• Automation bias
– people tend to believe results from an automated process out of all proportion to their validity
• Formatting bias
– It’s more credible when it’s on a nicely formatted spreadsheet or
document
– (I made this one up)
• Survivorship bias
– we record and remember results from projects (or people) who
survived
– the survivors prayed to Neptune, but so did the sailors who died
– What was the bug rate for projects that were cancelled?
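A hypothetical simulation of the last bullet (my sketch, not from the deck): if buggier projects are more likely to be cancelled before their numbers are recorded, measuring only the survivors understates the true bug rate.

```python
import random

random.seed(42)

# Each project has a true bug rate in [0, 1]; buggier projects are
# more likely to be cancelled before anyone records their numbers.
projects = [random.uniform(0.0, 1.0) for _ in range(10_000)]
survivors = [rate for rate in projects if random.random() > rate]

all_mean = sum(projects) / len(projects)
surv_mean = sum(survivors) / len(survivors)

print(f"mean bug rate, all projects:   {all_mean:.2f}")  # ~0.50
print(f"mean bug rate, survivors only: {surv_mean:.2f}")  # noticeably lower
```

The surviving projects look healthier than the population really was, for the same reason the surviving sailors' prayers to Neptune looked effective.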
A = C and B = D, yet A > B and D > C
Program A: If Program A is adopted, 200 people will be saved.
(3/4 surveyed prefer this to B)
Program B: If Program B is adopted, there is 1/3 probability that
600 people will be saved, and 2/3 probability that
no people will be saved.
Program C: If Program C is adopted, 400 people will die.
Program D: If Program D is adopted, there is 1/3 probability that
nobody will die, and 2/3 probability that
600 people will die.
(3/4 surveyed prefer this to C)
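A quick check of the arithmetic behind the puzzle (my sketch; the program labels come from the slide): in expected lives saved out of the 600 at risk, A equals C and B equals D, so the preference reversal comes entirely from the "saved" vs. "die" framing.

```python
# Expected number of lives saved out of 600, for each framing.
TOTAL = 600

program_a = 200                          # 200 saved for certain
program_b = (1/3) * 600 + (2/3) * 0      # gamble, framed as "saved"
program_c = TOTAL - 400                  # "400 die" = 200 saved
program_d = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # gamble, framed as "die"

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```

All four programs have the same expected outcome; only the wording differs.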
Isn’t it all “just semantics”?
• What does “semantics” mean?
– Semantics is the branch of linguistics concerned with logic
and meaning, so dismissing something as “semantics” is
cool as long as logic and meaning don’t matter to you.
• How many “events” were there when the twin towers
were hit? Does it matter?
– To insurers and property owners (who were insured per
event), it matters a great deal.
• What’s the difference between “ad hoc” testing and
“exploratory testing”? Don’t they mean the same
thing?
– We can tell you what the structures and skills of
exploratory testing are. Can you tell us what the structures
and skills of “ad hoc” testing are?
Tacit vs. Explicit Knowledge
• Explicit knowledge is knowledge that has been
told
• Tacit knowledge comes in three forms
– Relational (“weak”) tacit knowledge: knowledge,
residing in a human mind, that could be told, but
has not been for various reasons
– Somatic (“medium”) tacit knowledge: knowledge
that comes from residing in a human body
– Collective (“strong”) tacit knowledge: knowledge
that is embodied in society
See Harry Collins, Tacit and Explicit Knowledge
The Role of Tacit Knowledge
• Although people tend to favour the explicit
(because we can talk about it relatively easily),
much of what we do in testing is based on
using and developing tacit knowledge.
• A key problem with test‐case‐focused testing
is that much of the tacit knowledge necessary
for excellent testing cannot be encoded.
What are the elements of tacit knowledge in
your testing? What parts can be made explicit?
What parts cannot?