In this presentation we introduce the most useful testing techniques for finding bugs: ad hoc testing, combinatorial testing, test flow diagrams, cleanroom testing, and testing trees.
These slides were prepared by Dr. Marc Miquel. All materials used in them are credited to their authors.
Software testing is a process used to identify issues and ensure quality in developed software. It involves techniques like unit testing of individual code components, integration testing of the interfaces between components, and system testing of the full application. While exhaustive testing of all possible inputs is not feasible due to time constraints, techniques like equivalence partitioning, boundary value analysis, and error guessing help prioritize test cases. The goal is to thoroughly test the most important and error-prone areas in the time available.
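The partitioning ideas above can be sketched in a few lines. A minimal, hypothetical example: for an input whose valid range is 18–65 (an assumed spec, not from the source), equivalence partitioning yields one representative per class and boundary value analysis adds the edges:

```python
# Minimal sketch of equivalence partitioning + boundary value analysis
# for a hypothetical input whose valid range is low..high.

def select_test_values(low, high):
    """One representative per equivalence class, plus the boundaries."""
    return {
        "invalid_below": low - 1,      # invalid partition: below the range
        "lower_boundary": low,         # smallest valid value (BVA)
        "nominal": (low + high) // 2,  # typical value from the valid partition
        "upper_boundary": high,        # largest valid value (BVA)
        "invalid_above": high + 1,     # invalid partition: above the range
    }

print(select_test_values(18, 65))
```

Five test values replace the 48 exhaustive in-range inputs while still probing both invalid partitions and both boundaries.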
The client faced challenges with regression testing Oracle Applications due to constant upgrades. Infosys created an automation framework that enabled the client to reduce regression testing efforts and costs by 80% and minimize business interruptions. The framework included documenting test cases, developing automated scripts using testing tools, executing the scripts across multiple releases, and managing tests. This improved cost savings, delivery confidence, maintainability, and resource utilization.
1. Defect removal effectiveness measures the percentage of defects found by a particular development activity compared to the total defects present.
2. Several metrics have been proposed to measure defect removal effectiveness, including error detection efficiency, removal efficiency, early detection percentage, and phase containment effectiveness.
3. Studies have shown that defect removal effectiveness tends to increase with higher levels of software process maturity based on the CMM, with level 1 organizations having around 85% effectiveness and level 5 organizations around 95% effectiveness.
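The metric in point 1 can be computed directly; a small sketch, where the total defect count is approximated as defects found by the activity plus defects that escaped to later phases (the 85/15 figures are illustrative, echoing the maturity numbers above):

```python
def defect_removal_effectiveness(found_by_activity, found_later):
    """DRE (%): defects found by an activity over all defects present,
    approximating 'all defects' as found + escaped."""
    total = found_by_activity + found_later
    return 100.0 * found_by_activity / total if total else 0.0

# An activity that catches 85 defects while 15 escape downstream:
print(defect_removal_effectiveness(85, 15))  # 85.0
```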
Continuous integration, testing, and delivery processes aim to provide fast feedback on code changes. This is done through frequent automated testing and deployment of code changes. Some key aspects discussed are:
- Continuous integration involves automatically testing code changes through builds and automated test runs. Frequent, immediate feedback is the goal, but running every test on every change may be too time-consuming.
- Continuous testing executes tests early and often based on code modifications to provide quick feedback.
- Continuous delivery deploys code changes to testing environments after builds to allow more testing, including performance and load tests. Continuous deployment then automatically deploys to production.
- Prioritizing tests, running different test configurations, increasing non-UI testing, and splitting test suites into smaller runs can all shorten the feedback loop.
This document outlines an agile QA framework for testing in a scrum environment. It discusses roles like developers, QA and AQA working interchangeably. The framework focuses on a whole team approach with automation, a balanced process, and incremental testing. It provides guidelines for infrastructure, process, defect management and an automation approach using a defects derivative model and business process testing. The goal is to improve quality through a balanced emphasis on prevention, automation and other factors.
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters.
Pairwise testing - Strategic test case design (XBOSoft)
Pairwise testing is a technique used to test software interactions between input parameters by selecting all possible unique pairs of parameter values. It aims to find bugs caused by interactions between two parameters, which are common, while being more efficient than testing all possible input combinations. Tools can generate pairwise test sets to cover all pairs, while testing far fewer cases than exhaustive testing. However, pairwise testing requires effective partitioning of input values and has limitations when dependencies between more than two parameters exist or inputs are not discrete. It works best when there are many possible input values that can be separated into equivalence classes.
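As a sketch of how such tools work, here is a simple greedy generator (parameter names are illustrative; production tools use more sophisticated algorithms such as IPOG) that repeatedly picks the candidate test covering the most still-uncovered value pairs:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy pairwise generation: not guaranteed minimal, but close."""
    names = list(parameters)

    def pairs_of(case):
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    # Every value pair that must be covered at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        uncovered |= {((a, va), (b, vb))
                      for va in parameters[a] for vb in parameters[b]}

    candidates = [dict(zip(names, row)) for row in product(*parameters.values())]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

params = {"os": ["win", "mac"], "browser": ["ff", "chrome"], "net": ["wifi", "lan"]}
tests = pairwise_suite(params)
print(f"{len(tests)} cases cover every pair (exhaustive testing needs {2*2*2})")
```

Even on this tiny example the suite covers all twelve value pairs with half the exhaustive case count; the savings grow rapidly as parameters and values are added.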
КАТЕРИНА АБЗЯТОВА - Getting ready for ISTQB Foundation 4.0: Overview and Q&A ... (QADay)
This document provides an overview of ISTQB certifications and the CTFL 4.0 exam. It discusses the changes between versions 3.1 and 4.0 of the CTFL certification, including a greater emphasis on agile testing and risk-based testing in 4.0. The document reviews the exam structure, options for taking the exam in a test center or remotely, and answers frequently asked questions. Key resources for exam preparation are also presented, including recommended courses and practice tests.
Introduction of FMEA; Definition, Activities, important terms, factors, RPN; Process of FMEA; Steps of FMEA
Types of FMEA; FMEA Application; FMEA Related Tools:
Root Cause Analysis, Pareto Chart, Cause Effect Diagram
Cause-effect graphs capture relationships between inputs (causes) and outputs (effects) in black box testing. Causes and effects are represented as nodes in a graph connected by intermediate nodes. In a classic example, the causes "the first character is 'A' or 'B'" and "the second character is a digit" lead to the effect "the file is updated", while an erroneous first character leads to the effect "message X12 is printed". The methodology involves decomposing the system, identifying causes and effects, establishing the graph of relationships between them, adding constraints, converting the graph to a decision table, and producing one test per line of the simplified table.
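The graph-to-decision-table conversion can be sketched in code. The cause/effect mapping below is an assumed reading of that classic example (c1: first character is 'A', c2: first character is 'B', c3: second character is a digit):

```python
from itertools import product

def effects(c1, c2, c3):
    """Boolean logic encoding the cause-effect graph (assumed mapping)."""
    e1 = (c1 or c2) and c3   # e1: the file is updated
    e2 = not (c1 or c2)      # e2: message X12 is printed (bad first character)
    return e1, e2

# Decision table: one row per feasible cause combination. The constraint
# that c1 and c2 are mutually exclusive prunes impossible rows.
print("c1 c2 c3 | update X12")
for c1, c2, c3 in product([True, False], repeat=3):
    if c1 and c2:            # exclusive-causes constraint
        continue
    e1, e2 = effects(c1, c2, c3)
    print(f"{int(c1):2} {int(c2):2} {int(c3):2} | {int(e1):6} {int(e2)}")
```

Each printed row of the table then becomes one concrete test case.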
The document outlines the key phases of the Software Testing Life Cycle (STLC) process. It describes 6 phases: 1) Requirement Analysis/Review to understand requirements, 2) Test Planning to develop the test plan, 3) Test Designing to create test cases and scripts, 4) Test Environment Setup to prepare the test environment, 5) Test Execution to run the test cases and report bugs, and 6) Test Closure to finalize testing and complete documentation. The goal of STLC is to systematically test software through a planned process to improve quality.
This document outlines an introductory training on the concept of poka-yoke, or mistake proofing. It is divided into 12 sessions that cover topics such as the paradigm shift to zero errors, introductions to poka-yoke principles and examples, process waste management, zero defect quality systems, the three qualifiers of poka-yoke (simple/inexpensive, 100% inspection, immediate feedback), examples of poka-yoke from daily life, poka-yoke systems, methods of implementing poka-yoke, and types of poka-yoke and human mistakes. The overall aim is to teach participants how to utilize mistake proofing approaches to prevent errors and reduce defects.
The document provides information about manual testing processes and concepts. It discusses 1) why manual testing is chosen as a career, 2) the skills needed to get a manual testing job, 3) when testing occurs in the software development lifecycle, and 4) the different types and levels of testing. It also defines key terms like requirements documents, test cases, defects, environments, and software development process models.
In computer programming and software testing, smoke testing (also confidence testing or sanity testing) is preliminary testing to reveal simple failures severe enough to (for example) reject a prospective software release.
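In practice a smoke test can be a handful of shallow checks run before any deeper testing; a hypothetical sketch, where the `App` class stands in for the real build under test:

```python
class App:
    """Stand-in for the application under test (illustrative only)."""
    version = "1.0"

    def ping(self):
        return "ok"

def smoke_test(app):
    """Fail fast on simple, severe problems before deeper testing starts."""
    assert app.version, "build reports no version"
    assert app.ping() == "ok", "core endpoint not responding"
    return True

smoke_test(App())
print("smoke test passed: build is worth testing further")
```

If any of these shallow checks fail, the release candidate is rejected immediately rather than handed to the full test cycle.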
The document discusses exploratory testing and Keri Smith. It provides an overview of exploratory testing, noting that it emphasizes personal freedom and responsibility of testers to continually optimize testing. It also discusses Keri Smith's work in conceptual art and guided journals that encourage observing the world like artists and scientists.
This is chapter 5 of the ISTQB Advanced Test Automation Engineer certification. The presentation helps aspirants understand and prepare the content of the certification.
This document provides an overview of failure mode and effects analysis (FMEA). It describes FMEA as a structured approach to identify ways a product or process can fail, estimate risks from specific causes, and prioritize actions to reduce risk. The document outlines the FMEA process, including establishing a team, identifying failure modes and their effects, analyzing severity, occurrence and detection, calculating a risk priority number, and developing recommended actions. It also distinguishes between design FMEA and process FMEA.
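The risk priority number step can be sketched directly: RPN is the product of the severity, occurrence, and detection ratings (each typically 1–10), and failure modes are worked highest-RPN first. The failure modes and ratings below are illustrative, not from the source:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number = severity x occurrence x detection."""
    return severity * occurrence * detection

# (failure mode, severity, occurrence, detection) -- illustrative ratings
failure_modes = [
    ("seal leaks",         8, 4, 3),
    ("connector corrodes", 6, 7, 5),
    ("label misprinted",   2, 3, 2),
]

ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name:20} RPN={rpn(s, o, d)}")
```

A frequent, hard-to-detect failure can outrank a more severe one, which is exactly why the three factors are multiplied rather than taking severity alone.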
The document summarizes key principles of software testing including:
1. Testing is necessary because software will contain faults due to human errors, and failures can be costly.
2. Exhaustive testing of all possible test cases is impractical. Risk-based prioritization is used to test the most important cases first.
3. The test process includes planning, specification, execution, recording results and checking completion criteria. Effective test cases are prioritized to efficiently find faults.
This document provides an overview of Toyota's lean manufacturing system known as the Toyota Production System (TPS). It discusses how TPS was developed based on the philosophies of Toyota's founders and leaders like Taiichi Ohno. Key aspects of TPS discussed include just-in-time production using kanban systems, jidoka or built-in quality control, eliminating waste, visual management with 5S, and problem-solving through continuous improvement. The document positions TPS as a holistic management approach focused on eliminating waste and respecting people, not just an inventory reduction technique.
Software Testing Certification Courses: https://www.edureka.co/software-testing-certification-courses
This Edureka PPT on "Software Testing Life Cycle" will provide you with in-depth knowledge about software testing and the different phases involved in the process of testing.
Below are the topics covered in this session:
Introduction to Software Testing
Why Testing is Important?
Who does Testing?
Software Testing Life Cycle
Requirement Analysis
Test Planning
Test Case Development
Test Environment Setup
Test Execution
Test Cycle Closure
Selenium playlist: https://goo.gl/NmuzXE
Selenium Blog playlist: http://bit.ly/2B7C3QR
Instagram: https://www.instagram.com/edureka_lea...
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Control-Plan-Training.pptx for the automotive standard AIAG (VikrantPawar37)
This document provides training on control plans, which are used to systematically control processes. It discusses key aspects of control plans such as identifying product and process characteristics, establishing specifications and control methods, and defining reaction plans. The objectives are to understand how to develop a control plan and link each step to the process FMEA. A plug assembly control plan example is also provided to demonstrate how to structure the control plan.
The 7 software testing principles, briefly explained. Everyone who works in a software development company should know these principles.
It frequently happens that testers or QA people are not taken into account as part of the software development lifecycle, and this happens especially when these principles are not known.
The document describes the testing life cycle process which includes test plan preparation, test case design, test execution and log preparation, defect tracking, and test report preparation. It then provides details about each step of the testing life cycle process such as how to prepare test plans, design test cases, execute tests and log results, track defects, and prepare test reports.
This document provides an overview of software testing concepts and processes. It discusses the importance of testing in the software development lifecycle and defines key terms like errors, bugs, faults, and failures. It also describes different types of testing like unit testing, integration testing, system testing, and acceptance testing. Finally, it covers quality assurance and quality control processes and how bugs are managed throughout their lifecycle.
Webinar: Process Improvement in Government Using Lean Six Sigma (GoLeanSixSigma.com)
This document outlines how to implement Lean Six Sigma projects in government to drive continuous process improvement and savings. It discusses the challenges of continuous improvement in government compared to business. A solution presented is to treat Lean Six Sigma projects as strategic financial assets in a "Money Machine" approach to generate an ongoing pipeline of improvement projects and financial benefits. Key elements are identifying meaningful projects, reliably assessing their potential financial impact, and ensuring timely project completion to maintain a steady flow of savings. Leadership engagement is important to prioritize projects and support progress. The overall goal is to build an engine for sustainable value creation and a more adaptive organization.
In this presentation, we will discuss software development techniques, covering the life cycle model, the life cycle of a system, system requirements analysis, and various software development life cycle models.
To know more about Welingkar School’s Distance Learning Program and courses offered, visit:
http://www.welingkaronline.org/distance-learning/online-mba.html
This document discusses various software testing metrics including defect density, requirement volatility, test execution productivity, and test efficiency. Defect density measures the number of defects found divided by the size of the software. Requirement volatility measures the percentage of original requirements that were changed. Test execution productivity measures the number of test cases executed per day. Test efficiency measures the percentage of defects found during testing versus post-release. These metrics provide ways to measure software quality and testing effectiveness.
The document provides instructions for a machine learning lab experiment using the Weka machine learning software. Students are asked to run several classifiers on a dataset containing RNA-binding protein sequences to predict whether amino acids bind to RNA or not. Classifiers include Naive Bayes, J48 decision tree, support vector machine (SVM) with linear and RBF kernels. Students record performance metrics from 5-fold cross validation and testing on a separate protein sequence, and analyze which classifier worked best.
This document provides instructions for a machine learning lab assignment. Students are asked to use the Weka machine learning tool to classify RNA-binding proteins using various algorithms, including Naive Bayes, J48 decision tree, SVM with linear and RBF kernels. Performance is measured using 5-fold cross-validation on the training set and classification of a separate test protein. Results for accuracy and other metrics are recorded in tables.
This document summarizes TestersDesk.com, an online toolkit that provides tools to automate software test design and data generation. It describes tools that help testers design test cases by generating combinations of test parameter values, reducing large test spaces, and permuting test orders. Other tools help generate common test data like strings, files, and structured data according to provided templates and formats. The tools aim to reduce manual test preparation work and help testers focus on core testing activities.
Ever tried doing test-first Test-Driven Development? Ever failed? TDD is not easy to get right. Here's some practical advice on doing BDD and TDD correctly. This presentation explains why, what, and how you should test; covers the FIRST principles of tests, the connection between unit testing and the SOLID principles, writing testable code, test doubles, and the AAA pattern of unit testing; and offers some practical ideas for structuring tests.
Black-box testing 02 - An Example Test Series (nazeer pasha)
1. The document describes an example test series for a simple program that adds two numbers entered by the user.
2. It outlines the initial testing process, including performing simple tests, exploring all parts of the program, looking for more challenging tests, and focusing on boundary conditions.
3. The document discusses techniques for test design such as brainstorming test cases, equivalence partitioning, and boundary value analysis to identify important tests without testing all possible combinations.
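The boundary-condition step can be sketched for that add-two-numbers program. The -99..99 input limit is an assumption made for illustration:

```python
# Boundary-value tests for a program that adds two user-entered numbers,
# assuming (hypothetically) that inputs are limited to -99..99.

def add(a, b):
    if not (-99 <= a <= 99 and -99 <= b <= 99):
        raise ValueError("input out of range")
    return a + b

# Failures cluster at the edges of the input domain, so test those first.
boundary_cases = [(-99, -99, -198), (99, 99, 198), (0, 0, 0), (-99, 99, 0)]
for a, b, expected in boundary_cases:
    assert add(a, b) == expected

try:
    add(100, 0)                      # just outside the valid range
except ValueError:
    print("out-of-range input rejected, as the boundary test demands")
```

Four boundary cases plus one just-outside case exercise the edges without attempting all ~40,000 in-range combinations.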
This document discusses test-driven development and unit testing with JUnit. It covers:
- Writing tests before code using stubs, so code is testable and requirements are clear.
- Key aspects of JUnit like test classes, fixtures, assertions and annotations like @Before, @Test.
- Best practices like testing individual methods, writing simple tests first, and repeating the test-code cycle until all tests pass.
- Features of JUnit in Eclipse like generating test stubs from code and viewing test results.
The overall message is that testing saves significant time versus debugging, helps write better code, and is an essential part of the development process. Tests should cover all requirements and edge cases to
The document discusses test case design and provides guidance on creating effective test cases. It recommends that test cases have a reasonable chance of catching errors, are not redundant, and are neither too simple nor too complex. Additionally, it emphasizes the importance of making program failures obvious. Various techniques are described for selecting the best test cases, including equivalence partitioning and boundary value analysis. Equivalence classes group test cases that are expected to have the same outcome, and boundary values at the edges of valid inputs are most likely to find failures.
With the explosion of front end technologies such as HTML5 and JS frameworks, it's not enough to do TDD just in your server side code. And frankly, there are likely a lot of DB procedures that are being developed without unit tests. Let's take a step back and look at a full application stack using (front) HTML, CSS, JS, (middle) java, Spring, json client/server, and (back) DB. Lance will show conceptual strategies, tools, and illustrative code examples for common unit test patterns that come up in those areas.
How much information should your test cases (or test missions, charters, or other types or similar test artifacts) include? What are the pros and cons of adding lots of detailed information in your test cases? These are questions I will discuss in this article, based on my experience with testing.
Do we really need game testers in development teams? What is it that defines the core competence of a tester, and does this competence add any value to the development team?
The document discusses various techniques for software testing including black box testing, equivalence partitioning, boundary value analysis, cause-effect graphing, pairwise testing, and special case testing. The goal of testing is to identify defects by designing test cases that are most likely to cause failures and reveal faults in the software.
SE2018_Lec 20_ Test-Driven Development (TDD)Amr E. Mohamed
The document discusses test-driven development (TDD) and unit testing. It explains that TDD follows a cycle of writing an initial failing test case, producing just enough code to pass that test, and refactoring the code. Unit testing involves writing test cases for individual classes or functions, using assertions to validate expected outcomes. The JUnit framework is introduced for writing and running unit tests in Java.
Principles of design of experiments (doe)20 5-2014Awad Albalwi
This document discusses experimental design and optimization. It defines key terms like factors, responses, and residuals. It explains that experimental design is used to systematically examine problems in research, development and production. Factorial design is introduced as a method to study the effects of all factors and interactions on responses. The document provides an example experimental design to investigate if playing violent video games causes violent behavior. It outlines defining the population, randomly selecting a sample, using control and experimental conditions, measuring dependent variables, and comparing results to draw conclusions.
Test-driven development (TDD) is a software development process that relies on the repetition of short development cycles called the TDD cycle. The TDD cycle involves first writing a test case that defines a desired improvement or new function, then producing code to pass that test and finally refactoring the new code to acceptable standards. The document discusses TDD training which includes topics like fixtures, assertion patterns, test quality and a case study. It motivates TDD by explaining how it helps build quality code, improves maintainability and meets client needs by focusing on internal and external quality. Key aspects of TDD like the AAA test format and strategies for selecting the next test are also covered. Finally, the document reviews evidence from case
The document discusses software testing objectives, principles, techniques and processes. It covers black-box and white-box testing, unit and integration testing, and challenges of object-oriented testing. Testing aims to find bugs but can never prove their absence. Exhaustive testing is impossible so testing must be planned and systematic. Frameworks like xUnit can help automate unit testing.
The document discusses various aspects of unit testing including definitions, benefits, naming standards, and best practices. It provides definitions for terms like error, defect, failure. It outlines the benefits of unit testing like finding bugs early, enabling code refactoring. It discusses what should be tested like boundaries, error conditions. It also provides examples of good test names and guidelines for structuring and naming unit tests.
The document discusses unit testing and provides guidance on effective unit testing practices. It defines key terms like error, defect, and failure. It outlines the benefits of unit testing like finding defects earlier and maintaining stable code. It discusses naming conventions and frameworks for unit tests. It provides examples of different types of unit tests and guidelines for writing good unit tests that are independent, fast, and test all functionality. The document emphasizes testing boundary conditions and errors as well as documenting test cases.
The document discusses various aspects of unit testing including definitions, benefits, naming standards, and best practices. It provides definitions for terms like error, defect, failure. It outlines the benefits of unit testing like finding bugs early, enabling code refactoring. It discusses what should be tested like boundaries, error conditions. It also provides examples of good test names and guidelines for structuring and naming unit tests.
Similar to Quality Assurance 2: Searching for Bugs (20)
User Experience 8: Business, Ethics and MoreMarc Miquel
Based on the document, dark patterns in games can be categorized into three main types:
1. Temporal dark patterns which manipulate a player's time through repetitive grinding or requiring play during specific time windows.
2. Monetary dark patterns which deceive players into spending more money than intended, such as pay-to-skip challenges or including paid content that was already on the game disc.
3. Social capital dark patterns which exploit social relationships, such as pyramid schemes that require inviting friends or impersonating other players' actions.
The document discusses how these patterns aim to maximize company profits through manipulating time, money or social factors, often against a player's best interests or without their consent. UX professionals must be aware
User Experience 7: Quantitative Methods, Questionnaires, Biometrics and Data ...Marc Miquel
This presentation introduces the most important quantitative research methods: questionnaires, biometrics and data analysis. It discusses several case studies in which these methods are employed.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
User Experience 6: Qualitative Methods, Playtesting and InterviewsMarc Miquel
This presentation introduces the most fundamental qualitative methods: the playtesting and the interview. It discusses when to use it and the possible bias the researcher may incur.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
User Experience 5: User Centered Design and User ResearchMarc Miquel
This presentation introduces the user-centered design paradigm and the field of game user research. It includes some hypothetical case studies which are later discussed in the following presentations.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
User Experience 4: Usable User InterfaceMarc Miquel
The document discusses user interfaces in video games. It makes three key points:
1. The interface is everything that allows a player to interact with and control the game, including both physical inputs like controllers as well as digital outputs like on-screen menus and HUD elements.
2. Good interface design requires consideration of usability, aesthetics, information architecture, and interaction design. Key usability goals are learnability, efficiency, preventing errors, and satisfaction.
3. There are generally two types of digital interfaces: general menus for navigating options when not actively playing, and in-game UIs for displaying key information during gameplay. Card sorting can help test and improve how information is organized within interfaces.
User Experience 3: User Experience, Usability and AccessibilityMarc Miquel
This presentation introduces the most important usability models among other concepts (affordances, heuristics, etc.).
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
This is an introduction to the most important psychology concepts from the perspective of UX and their application to video games and software.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
User Experience 1: What is User Experience?Marc Miquel
The document provides an overview of an introduction to a university course on user experience. It discusses the following key points:
1. The history and roots of user experience, tracing back to ergonomics in ancient times and the integration of human factors research with computer science and design in recent decades.
2. Definitions of user experience, which focus on all aspects of a user's experience interacting with products and services, including usability, desirability, and emotional satisfaction.
3. An introduction to the topics that will be covered in the course, including what user experience is, common UX problems, intuitive design, and how culture can impact design understanding.
4. An example of analyzing the
In this presentation we introduce the concept quality assurance in video games along with the most important concepts, team members and testing phases.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
In this presentation we introduce the game balance "interesting strategies". It is especially important as games with a single dominant strategy are boring. No strategy must be much better than others and without drawbacks.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
Game Balance 3: Player Equality and FairnessMarc Miquel
In this presentation we introduce the game balance type "player equality and fairness". It is essential so the players do not feel the game is unworthy of playing. All the players must feel they are given the chances to win.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
In this presentation we introduce the game balance type 'sustained uncertainty'. Uncertainty is usually understood as related to randomness and difficulty. It is essential to keep the game interesting to the user.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
In this presentation we introduce the concept game balance, its different types, and the most useful methods to study it.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
public presentation of "Calçotada Wars" the card gameMarc Miquel
This is a presentation I gave in FNAC Plaça Catalunya in order to explain and show "Calçotada Wars" (the card game) for promotional purposes.
For more info about the project, check out marcmiquel.com
public presentation of "La Puta i la Ramoneta" the card gameMarc Miquel
This is a presentation I gave in Ateneu Igualadí in order to explain and show "La Puta i la Ramoneta" (the card game) for promotional purposes.
For more info about the project, check out marcmiquel.com
Towards a User-Centered Wikipedia - Viquitrobada, 26 de novembre 2016, ValènciaMarc Miquel
En aquesta presentació faig dues observacions; la primera sobre com s'ha construït Viquipèdia, quins són els seus valors relacionats amb la cultura hacker i com poden obstruïr el disseny centrat en l'usuari; la segona sobre com pel viquipedista és fonamental desenvolupar una identitat de comunitat i com s'ha d'ajudar als nouvinguts a crear-la. Per altra banda i vinculat amb les observacions, faig dues propostes per centrar el disseny i la cultura de Viquipèdia en els editors per tal de millorar l'engagement (participació).
Cultural Identities in Wikipedia (Wikimania 2016)Marc Miquel
Unlike in most social network platforms, in Wikipedia editors are not encouraged to disclose personal traits, hobbies or affiliations. In fact, I think the identity issue has not been discussed enough. Since the project is dedicated to promote a common good, there is no content ownership, and the personal aspects become uncomfortable, or partly taboo. However, I defend that identity matters, in terms of building a Wikipedian reputation, and that editors' identities are tightly related to the content. As a Wikipedian, would you contribute equally if you couldn't choose the topics?
In this presentation I want to address the creation process and composition of Wikipedia language editions as a matter of identity. Our research on the issue has shown us that an identity-based motivation allows editors to conciliate the Wikipedian identity in the community along with their other identities. Therefore, in order to act congruently with each of such identities, they contribute with content related to them. To assess the influence of this motivation type, we developed a method and identified articles related to each Wikipedia language edition's Cultural Identities. The results on 40 Wikipedias show that this kind of content represents almost a quarter of each language edition. We analyze the content in terms of topical coverage and find that different specific topics emerge as important for each of them, although the most important topics are generally Geography, People and Culture. Inspecting how articles related to each language edition's cultural identities are exported to other languages, we show relationships between Wikipedias.
The selection of articles reflecting each Wikipedia language based cultural identities is a rich source for research, but can be also a useful base to establish an intercultural exchange between Wikipedia language editions. We propose the diversity of content across languages to be seen as an asset, and the spread of content specific to a language edition to be facilitated by automatic tools. The main point is to recognize the power of identity as a motivator for action and as a driver for change. Finally, we present a project called Wikiidentities in which we will disseminate the results of the research, make the datasets available, and provide some ideas and debate on how identities can be key to bridge the culture gap in any Wikipedia.
Happiness Has To Do With Clarity - World Information Architecture Day '15Marc Miquel
Most of the times we hear design for engagement or for better user experiences. Why don’t we design for happiness? Who is interested in happy users? I will give various examples of games and websites whose success depends on many things but joy and pleasure. Probably the key is in their information architecture and consequently in their interaction design. We as designers have an enormous responsibility for users’ behaviours. How much aware are we of our designs implications? And how much are the users?
To me, happiness in UX is the absence of frustration. Let's fight 'dark patterns' to make a more free Internet.
If you want to learn about Dark Patterns: www.darkpatterns.org
The Elements of Videogambling ExperienceMarc Miquel
For more information: http://uxmag.com/articles/dark-ux-the-elements-of-the-video-gambling-experience
This is a presentation I gave in La-Salle University (Barcelona) on April 12th about Videogambling Design and deceptive user experience. I include some of the most used dark patterns in the business and the tricks companies use to keep gamblers playing for longer sessions.
Its material is complementary to the deceptive UI designs in www.darkpatterns.org.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
1. Lesson 2: Searching for Bugs
Third year course in Quality Assurance and Game Balance
Bachelor Degree in Video Game Design and Production
Third term, April 2019 Dr. Marc Miquel Ribé
2. Overview of the Lesson
In this lesson we will see:
QA and QC: Testing techniques
1. Ad hoc testing
2. Planning from techniques
3. Combinatorial testing
4. Test flow diagram
5. Cleanroom testing
6. Testing trees
Defect Trigger
Test Suites Exercise
Automated Testing
Testing World
4. 1. Ad hoc testing
• Even when there are no test suites, some sort of testing still happens. This is
ad hoc testing, sometimes called ‘exploratory testing’.
• Ad hoc testing, sometimes referred to as “general” testing, describes searching for
defects in a less structured way. “I wonder what happens if I do…?”
• Ad hoc testing presents you the opportunity to test the game as you would play.
Perhaps this is what you have been doing in your previous projects.
• Think of a big project instead: this may imply assigning members from different test
teams to do ‘ad hoc testing’ on areas other than their own.
Ad hoc testing is the testing of the “right side of the brain”: the creative side.
5. • Rotating testers. “Who turned the lights on?”. The “fresh eyes” concept is
applicable to structured testing as well. It’s wise to have testers rotate the specific
suites they’re responsible for periodically—even every build.
• Avoid “groupthink”. As a test manager you want testers with different tastes
and habits, so their ad hoc testing is of higher quality.
• IMPORTANT: Even though ad hoc testing may be ‘more creative’, it needs to be
documented. If you are not ‘documenting’, you are not testing.
• How do you create ad hoc testing documentation? Instead of test cases and test
suites, you only define ‘goals’ and important conditions. You can use screen-
recording.
6. Reproducing bugs
• Ad hoc testing can turn into a “directed testing”, in order to reproduce specific
bugs.
• To “reproduce” a bug is almost like applying the Scientific Method:
1. Observe some phenomenon.
2. Develop a theory—a hypothesis—as to what caused the phenomenon.
3. Use the hypothesis to make a prediction; for example, if I do this, it will happen
again.
4. Test that prediction by retracing the steps in your hypothesis.
5. Repeat steps 3 and 4 until you are reasonably certain your hypothesis is true.
Rate of reproduction is critical!
• What is best for a reproduction rate, a percentage or a number of times?
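A tiny helper (hypothetical, not from the slides) can keep the repro bookkeeping and report both forms the question above contrasts, the raw count and the percentage:

```python
# Minimal sketch for tracking how often a bug reproduces, so a bug report
# can state both a count and a percentage, e.g. "3/10 attempts (30%)".

def reproduction_rate(attempts: list[bool]) -> str:
    """Summarize repro attempts: True = bug reproduced on that attempt."""
    reproduced = sum(attempts)
    total = len(attempts)
    pct = 100 * reproduced / total if total else 0
    return f"{reproduced}/{total} attempts ({pct:.0f}%)"

# Example: the bug appeared on 3 of 10 retries of the hypothesized steps.
print(reproduction_rate([True, False, True, False, False,
                         True, False, False, False, False]))  # 3/10 attempts (30%)
```

Reporting both numbers avoids the ambiguity: "30%" alone hides whether it was 3 of 10 or 300 of 1000 attempts.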
7. 2. Creating test suites
• The test case might focus only on a certain action, or a combination of menu options
that have been resulting in bugs up until this point.
• The goal, however, is to communicate from programmer to tester a specific thing that
needs to be tested.
Anything that the player can do is a candidate to be a test case.
Test suites combine test cases that are “near”.
Developers should help the lead tester in creating the test suites.
8. Creating a test suite and incorporating it into the test plan is not easy.
• Why do we create test suites? For the same reason that we create documentation: to
manage work-load and work resources.
• How much documentation? How much testing is enough to trust the game? How
specific should the test suites be?
Well, these are the big questions. I (personally) answer them with a few more questions:
What is the cost of lack of coordination?
What is the cost of test repetition?
How can we ask programmers to do their job better without ‘proof’?
How can we be sure we avoided the ’top priority’ highly severe bugs?
…
Some of the following ‘techniques’ help in preparing and executing reduced but useful
test suites which help in detecting bugs:
(1) Combinatorial testing, (2) Test flow diagram, (3) Cleanroom testing.
9. 3. Combinatorial testing
Testing Technique #1
Pairwise combinatorial testing is a way to find defects and gain confidence in the
game software while keeping the test sets small relative to the amount of
functionality they cover.
What might be the purpose of this technique? Clearly, to help in dividing the main
branches of the tree and not running them all.
It gives a ‘big picture’ from which to structure the following test cases.
For instance, the game may have: game events, game settings, gameplay options,
hardware configurations, character attributes, among others. (parameters)
Each of the parameters may have different values (enumerations, ranges, boundaries,
etc.). Consider “going to a place” or ”playing at a certain speed” as combinations of values.
The number of combinations may run into the hundreds or thousands.
10. This test combines character attributes for a Jedi character in a Star Wars game to
test their effects on combat animations and damage calculations.
We may want to test different combinations with our fighters in Star Wars,
considering the following parameters: gender and light saber. Later, we want to add
the side of the Force they are on.
Complete three-way combinatorial table for Jedi combat.
2^3 = 8 rows
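The exhaustive three-way table can be enumerated in a couple of lines. The parameter names and values below are taken from the slide's hypothetical Jedi example:

```python
from itertools import product

# Parameters from the Jedi example: two values each.
gender = ["Male", "Female"]
light_saber = ["1H", "2H"]
force = ["Light", "Dark"]

# Exhaustive three-way table: 2 * 2 * 2 = 8 rows.
full_table = list(product(gender, light_saber, force))
print(len(full_table))  # 8
```

Pairwise testing, discussed next, cuts this table down while still covering every pair of values.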
11. Pairwise combinatorial tables let you add complexity and coverage without increasing
the number of tests you will need to run. We do not need all the combinations, we
only need all pairs of components being tested.
• Male Gender paired with each Light Saber choice (1H, 2H) (row 1, row 2)
• Female Gender paired with each Light Saber choice (1H, 2H) (row 3, row 4)
• Male Gender paired with each Force choice (Light, Dark) (row 1, row 2)
• Female Gender paired with each Force choice (Light, Dark) (row 3, row 4)
• One-Handed (1H) Light Saber paired with each Force choice (Light, Dark) (row 1, row 3)
• Two-Handed (2H) Light Saber paired with each Force choice (Light, Dark) (row 2, row 4)
This is a completed pairwise combinatorial table!
12. The process to construct a pairwise combinatorial table:
1. Choose the parameter with the highest dimension (more number of values).
2. Create the first column by listing each test value for the first parameter N times, where N is
the dimension of the next-highest dimension parameter.
3. Start populating the next column by listing the test values for the next parameter.
4. For each remaining row in the table, enter the parameter value in the new column that
provides the greatest number of new pairs with respect to all of the preceding parameters
entered in the table. If no such value can be found, alter one of the values previously entered
for this column and resume this step.
5. If there are unsatisfied pairs in the table, create new rows and fill in the values necessary to
create one of the required pairs. If all pairs are satisfied, then go back to step 3.
6. Add more unsatisfied pairs using empty spots in the table to create the most new pairs. Go
back to step 5.
7. Fill in empty cells with any one of the values for the corresponding column (parameter).
EXERCISE: Let’s use the preceding process to complete a pairwise combinatorial table for some
of the NFL 2K5 Game Options.
Quarter length (1, 5, 15 min), Play Calling (Package, Formation, Coach), Game Speed (Slow,
Normal, Fast).
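In practice the table can also be generated automatically. The sketch below uses a simple greedy set-cover heuristic rather than the hand-construction steps above, so the exact rows may differ from a manually built table; the options are the NFL 2K5 parameters from the exercise:

```python
from itertools import product, combinations

def greedy_pairwise(params):
    """Greedy approximation of a pairwise table: repeatedly pick the
    candidate row covering the most not-yet-covered value pairs."""
    values = list(params.values())
    n = len(values)
    # Every (column, value, column, value) pair that must appear.
    all_pairs = set()
    for i, j in combinations(range(n), 2):
        for va in values[i]:
            for vb in values[j]:
                all_pairs.add((i, va, j, vb))
    candidates = list(product(*values))

    def pairs_of(row):
        return {(i, row[i], j, row[j]) for i, j in combinations(range(n), 2)}

    rows, covered = [], set()
    while covered != all_pairs:
        best = max(candidates, key=lambda r: len(pairs_of(r) - covered))
        rows.append(best)
        covered |= pairs_of(best)
    return rows

options = {
    "Quarter Length": ["1 min", "5 min", "15 min"],
    "Play Calling": ["Package", "Formation", "Coach"],
    "Game Speed": ["Slow", "Normal", "Fast"],
}
table = greedy_pairwise(options)
print(len(table))  # far fewer rows than the 27 exhaustive combinations
```

For three parameters of three values each, the exhaustive table has 27 rows, while a pairwise table needs only around 9.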
13. We need to verify that each pair exists and
write the row number in parentheses.
14. Let’s use the preceding process to complete a pairwise combinatorial table for some
of the NFL 2K5 Game Options.
We need to verify that each pair exists and write
the row number in parentheses.
If we want to add more complexity, we could add
more parameters: Challenges (Yes/No), Coach
Mode (Yes/No).
Let’s suppose we already introduced challenges…
15. We need to verify that each pair exists and write the row number in parentheses.
When we ‘verify pairs’ we only need to write down the new pairs, not the old ones…
In any case, in a company you might use an algorithm to generate the table. Or combinatorial
templates.
It’s time again to check that all the required pairs for the new column
(Challenges) are satisfied:
Quarter Length = “1 min” is paired with “Yes” (rows 1, 4) and “No” (row 7).
Quarter Length = “5 min” is paired with “Yes” (row 5) and “No” (rows 2, 8).
Quarter Length = “15 min” is paired with “Yes” (row 6) and “No” (rows 3, 9).
Play Calling = “Package” is paired with “Yes” (rows 1, 6) and “No” (row 8).
Play Calling = “Formation” is paired with “Yes” (row 4) and “No” (rows 2, 9).
Play Calling = “Coach” is paired with “Yes” (row 5) and “No” (rows 3, 7).
Game Speed = “Slow” is paired with “Yes” (rows 1, 5) and “No” (row 9).
Game Speed = “Normal” is paired with “Yes” (row 6) and “No” (rows 2, 7).
Game Speed = “Fast” is paired with “Yes” (row 4) and “No” (rows 3, 8).
Challenges = “Yes” is paired with “Yes” (row 1) and “No” (rows 3, 7, 8, 9).
Challenges = “No” is paired with “Yes” (rows 4, 5, 6) and “No” (row 2).
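This pair verification can be mechanized. The sketch below reconstructs the four-row Jedi table implied by the pair listing on the earlier slide (row contents inferred, not given verbatim) and reports, for a given pair of values, the 1-based rows where they co-occur:

```python
def pair_rows(table, col_a, col_b, va, vb):
    """Return the 1-based row numbers where value va of column col_a
    appears together with value vb of column col_b."""
    return [n for n, row in enumerate(table, start=1)
            if row[col_a] == va and row[col_b] == vb]

# Four-row pairwise Jedi table inferred from the slide's pair listing:
# columns are (Gender, Light Saber, Force).
jedi = [
    ("Male", "1H", "Light"),
    ("Male", "2H", "Dark"),
    ("Female", "1H", "Dark"),
    ("Female", "2H", "Light"),
]
print(pair_rows(jedi, 0, 1, "Male", "1H"))  # [1]
print(pair_rows(jedi, 1, 2, "1H", "Dark"))  # [3]
```

An empty result for any required pair would mean the table is incomplete.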
16. 4. Test flow diagram
Testing Technique #2
Test flow diagrams (TFDs) are graphical models representing game behaviors from the
player’s perspective. Testing takes place by traveling through the diagram to exercise the
game in both familiar and unexpected ways.
• It is not a flowchart, it has nothing to do with algorithms.
• Why draw diagrams? Because it is easier for some people to think visually.
• Parts of the diagram: terminators, flows (events and actions) and states.
We want testers to think of the game from the player perspective and plan test cases.
17. Terminators – boxes that indicate where testing starts and ends.
Events – operations initiated by the user.
Actions – temporary or transitional behavior exhibited in response to an event.
States – persistent game behavior; they are re-entrant and drawn as ”bubbles”.
Flows – connect one game state with another.
The TFD does not have to represent all possible events for the portion of the game
being tested. Events should be chosen according to the unique or important
behaviors involved.
Events, actions and states
are known as primitives.
18. Creating a TFD is not as mechanical as building combinatorial tables. TFDs can be useful to
test gameplay or to map transitions between quests, matches or challenges (e.g. what if I
complete all the solo missions and then do the individual quests?).
You can use paper and pencil, SmartDraw or Microsoft Visio.
The idea is to create a TFD and then progressively add detail.
Let’s take an example: the ability to pick up a weapon and its ammo while the game properly
keeps track of your ammo count and performs the correct audible and visual effects.
19. (TFD diagram for the weapon and ammo example.)
20. Main Advantage of the TFD? It helps you to not forget any path!
Main Disadvantage of the TFD? It may get too complex sometimes!
• Remember: Each event is like a test case.
• You can reuse some of its states and events among different test suites.
Data Dictionary. It is usual to create internal ”wikis” (as documentation) with
clarifications of the effects associated with each event in the TFD.
21. The baseline path is as direct as possible, passing
through as many states as possible without repeating
states or looping back. It is useful because it is the
highest-priority path. The baseline path is 1,2,4,13
Derived from flow 1:
1,9,8,2,4,13
1,11,10,2,4,13
Derived from flow 2:
1,2,3,4,13
1,2,7,2,4,13
Derived from flow 4:
1,2,4,5,4,13
1,2,4,6,4,13
ALERT. Flow 12 is not covered.
We can use 11 with the baseline:
1,11,12,8,2,4,13
Derived paths are complementary variations to the baseline.
They propose alternatives at each event/action taken.
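Flow coverage of a path set can be checked mechanically. Assuming the diagram's flows are numbered 1 through 13 as on the slide, the paths listed above reveal the gap the ALERT points out:

```python
# Baseline and derived paths, each written as the list of flow numbers taken.
paths = [
    [1, 2, 4, 13],            # baseline
    [1, 9, 8, 2, 4, 13],      # derived from flow 1
    [1, 11, 10, 2, 4, 13],
    [1, 2, 3, 4, 13],         # derived from flow 2
    [1, 2, 7, 2, 4, 13],
    [1, 2, 4, 5, 4, 13],      # derived from flow 4
    [1, 2, 4, 6, 4, 13],
]
all_flows = set(range(1, 14))       # flows 1..13 on the diagram
covered = set().union(*paths)
print(sorted(all_flows - covered))  # [12] -> flow 12 is still uncovered
```

Adding the extra path 1,11,12,8,2,4,13 would close the gap.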
22. • Finally, you can ’translate’ the TFD into a list of test cases with “what to do” and
”what to expect”. TFDs are useful for preparing the testing.
Depending on what to test, it is easier to use one or another technique in order to
create a list of test cases.
23. 5. Cleanroom Testing
Testing Technique #3
The original purpose of Cleanroom testing was to exercise software in order to make
mean time to failure (MTTF) measurements over the course of the project.
If we fix the most common bugs, the time between failures increases.
Cleanroom testing aims to produce the tests that play the game the way players will
play it at their homes.
The goal is to prevent the most common problems that players would encounter.
Therefore, we can obtain usage information in order to assign different probabilities:
• Mode-based usage
This may involve: single player, campaign, multiplayer, and online.
• Player-type usage
Taking into account Bartle’s Taxonomy, for instance. Or the statistics of the industry.
• Real-life data
Taking into account data from the current players.
24. Cleanroom combinatorial tables will not be “pairwise” combinatorial tables. They are created by
the “test designer” by estimation. Let’s take this example with four invented player profiles.
These would be the parameters for the game HALO Advanced Controls.
Look Sensitivity: 1, 3 (default), 10
Invert Thumbstick: Yes, No (default)
Controller Vibration: Yes (default), No
Invert Flight Control: Yes, No (default)
Auto Center: Yes, No (default)
These tables would be our estimations for
each player profile:
25. We want to create 6 test cases, so we draw six randomly generated numbers: 30, 89, 77, 25,
50 and 97. We use the Casual Player profile.
Look Sensitivity for the Casual player has these intervals 1-10 (1), 11-95 (3), and 96-100 (10).
The first random number is 30, so it falls in the 11-95 interval and the value for the first row of
the table is 3. We then repeat the procedure for each of the random numbers. Look Sensitivity is
3 every time except the last, which is 10 (because 97 falls in the third range).
We can then repeat the procedure with 6 more random numbers and the second parameter
table (Invert Thumbstick) and check the intervals for the Casual player. Then we would repeat
for the third parameter…
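The interval lookup can be sketched as follows, using the Casual-player intervals for Look Sensitivity given above:

```python
import bisect

def cleanroom_value(random_number, values, cumulative):
    """Map a 1-100 random number onto a parameter value using the
    profile's cumulative interval boundaries."""
    return values[bisect.bisect_left(cumulative, random_number)]

# Casual-player intervals for Look Sensitivity (from the slide):
# 1-10 -> "1", 11-95 -> "3", 96-100 -> "10".
values = ["1", "3", "10"]
cumulative = [10, 95, 100]
picks = [cleanroom_value(n, values, cumulative) for n in (30, 89, 77, 25, 50, 97)]
print(picks)  # ['3', '3', '3', '3', '3', '10']
```

The same function, with different interval boundaries, serves every parameter and player profile.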
26. How do we apply Cleanroom testing to a TFD?
By assigning probabilities to paths,
following our own criteria.
Very similarly, we use random
numbers, then we calculate the path
by using the probability.
With random numbers: 30, 27, 35, 36,
82, 59, 92, 88, 80, 74, 42, and 13.
We would obtain the path: 1, 3, 4, 7,
9, 11, 7, 9, 11, 7, 9, 11, 6.
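A minimal sketch of such a probability-driven walk over a TFD. The states, flow numbers and percentages below are invented for illustration; the slide's actual diagram and probabilities are not reproduced here:

```python
def walk(tfd, start, end, randoms):
    """Walk a probability-annotated TFD: at each state, take the first
    outgoing flow whose cumulative probability covers the 1-100 random
    number. Returns the sequence of flow numbers taken."""
    path, state, i = [], start, 0
    while state != end and i < len(randoms):
        r = randoms[i]
        i += 1
        acc = 0
        for flow, nxt, pct in tfd[state]:
            acc += pct
            if r <= acc:
                path.append(flow)
                state = nxt
                break
    return path

# Invented mini-TFD: state -> list of (flow number, next state, probability %).
tfd = {
    "start":   [(1, "menu", 100)],
    "menu":    [(2, "play", 70), (3, "options", 30)],
    "options": [(4, "menu", 100)],
    "play":    [(5, "end", 100)],
}
print(walk(tfd, "start", "end", [50, 80, 10, 40, 90]))  # [1, 3, 4, 2, 5]
```

The random numbers steer the walk, so frequent flows get exercised roughly in proportion to their assigned usage probability.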
27. 6. Test trees
Testing all the combinations of peripherals, game modes, configuration options and game
paths would literally take an army of testers.
Again, what do you want to cover? What should we cover? How do we start?
We are blindfolded…
28. One approach is to re-create the different “test trees” in the game (Schultz, Game
Testing, 2011). These are useful classifications in order to design proper test suites.
“Test trees” are tools to identify and create the necessary test cases and test suites.
29. There are various types of trees:
1. Test case tree. They are useful to document the hierarchical relationship
between test cases and game features, elements, and functions.
2. Tree feature tests. They reflect the tree structure of features or functions
contained in the game.
30. 1. Test case tree
Test case trees document the hierarchical relationship between test cases and game
options, elements, and functions.
Think of it as a prior step to design
a list of test cases (test suite).
If we update the code for the game
mode “Skirmish” from the RTS game
Dawn of War, we need to think of test
cases to cover the possible relationships.
It tells us the depth of the relationships, not
that we should test everything. It is a good
starting point.
QUESTION: From the test case tree example, which test branch(es) should you re-run for a
new release of the game that fixes a bug with the sound effect used for the Orks “Big Shoota”
weapon? Ideally and pragmatically?
What functionality is available where
31. 2. Tree feature tests
Tree feature tests reflect the tree structures of features or functions designed into
the gameplay.
Let us think of Final Fantasy Tactics Advance. Characters need to develop certain skills at one or
more jobs before other choices become available.
We create the test cases
What happens when
32. Terran technology tree from Starcraft.
QA needs this ’tree’ to create tests that verify the availability of each
element at the precise moment.
33. Many elements may be structured as a “tree feature test”, which can be really useful
for defining test cases. Think of the following features:
• Technology trees
• Advancing jobs, careers, or skills
• Progressing through tournaments
• Adding or upgrading spells, abilities, or superpowers
• Increasing complexity and types of skills needed to craft or cook items
• Earning new ranks, titles, trophies, or medals
• Unlocking codes, upgrades, or power-ups
• Unlocking new maps, environments, or race courses
• Unlocking better cars, outfits, or opponents
Some games may be more linear than others and facilitate this (!)
35. Defect triggers
We want to design test cases that are likely to find bugs. This can help us.
The Orthogonal Defect Classification (ODC) system developed by IBM classifies defects
into types in order to reveal how they were introduced and how they can be found or avoided.
You may remember them from Lesson 1: Function, Assignment, Checking, Timing,
Build, Algorithm, Documentation and Interface.
Orthogonal Defect Classification (ODC) also includes a set of Defect Triggers to
categorize the way defects are caused to appear.
Test cases can also be designed according to the triggers.
The defect trigger can also be included in the bug description!
Test suites that do not account for each of the triggers will be incapable of revealing
all of the defects in the game.
36. Game software operating regions
The game operation can be divided into different regions. These are conceptual mappings
to the game. They can be applied to the game as a whole or to some parts such as levels,
missions, etcetera.
The pre-game has the hardware functioning but the game has not started.
The game start shows some cinematics, e.g. a loading progress bar, while some basic
functions are running.
The in-game is the actual gameplay where the player interacts.
The post-game covers all the possible ways and processes of quitting. It can include cinematics,
saving, etcetera.
37. Game software operating regions
Six Defect Triggers span the four game operating regions. These triggers describe ways to
cause distinct categories of game defects to show up during testing. We can use these
triggers to create test cases.
• The configuration trigger (usually in the pre-game region).
• The startup trigger (e.g. operations while game function is starting, graphics loaded,
variables initialized, etc.).
• The exception trigger (e.g. special parts of the game; alerts for network problems).
• The stress trigger (e.g. specific conditions: memory, screen resolution, disk space, etc.).
• The normal trigger (e.g. game operations in the in-game operating region).
• The restart trigger (e.g. failure as a result of quitting wrongly, ejecting the disk, etc.).
38. Examples:
“The sniper rifle now reloads automatically in all game modes. It no longer waits for the
player to let go of the fire button”. Restart
“When playing a Disintegration match in MultiMatch, the reload delay for the sniper rifle
is now shorter”. Startup
“Saved game issues with long computer names fixed”. Stress
“Mouse 4 and 5 sensitivity levels are not available in the Configure menu”. Configuration
39. Examples:
“The Explosive damage has been increased”. Normal
“Death will no longer break the chat in MultiMatch, when it should be
disconnected until the respawn”. Restart
“Teams are no longer automixed on map restart in MultiMatch”. Restart
“The server will not inform the player if he votes to switch to a map that does not
exist on the server”. Exception
41. EXERCISE:
“Texas HoldEm Poker Deluxe” Android Play: Texas Hold ‘Em poker has become very
popular recently and many videogames on various platforms are popping up to take
advantage of the present level of interest in this card game.
Now, make a list or outline of how you would include each trigger (Configuration,
Startup, Exception, Stress, Normal, Restart) in your testing this game.
• Don’t stop at one example—list at least three values, situations, or cases for each
of the non-Normal triggers.
• Remember to include tests of the betting rules—not just the mechanics and
winning conditions for the hand.
42. Besides the Normal trigger testing, which you are accustomed to, here are some ways to
utilize other defect triggers for this hypothetical poker game:
• Start-up: Do stuff during the intro and splash screens, try to bet all of your chips on
the very first hand, try to play without going through the in-game tutorial.
• Configuration: Set the number of players at the table to the minimum or maximum,
set the betting limits to the minimum or maximum, play at each of the difficulty
settings available, play under different tournament configurations.
• Restart: Quit the game in the middle of a hand and see if you have your original chip
total when you re-enter the game, create a split pot situation where one player has
wagered all of his chips but other players continue to raise their bets, save your game
and then reload it after losing all of your money.
• Stress: Play a hand where all players bet all of their money, play for long periods of
time to win ridiculous amounts of cash, take a really long time to place your bet or
place it as quickly as possible, enter a long player name or an empty one (0
characters).
• Exception: Try to bet more money than you have, try to raise a bet by more than the
house limit, try using non-alphanumeric characters in your screen name.
43. Three more conclusions regarding Test Case and Test Suite definition:
1. Test cases need to be specific about a single thing (you need to understand well the
game elements to design test cases).
Think of external documentation if you want to extend the description of specific
effects in the ‘expected results’ of the test.
2. Consequently, test cases need to be detailed with “conditions” if they are not in an
ordered test suite.
3. Test case order matters (make the tester’s life easier and he will be more efficient).
45. Test Suites in Test Plans (Example)
In three minutes, “what tests suites would you include in the test plan for X game?”
This could be a “key question” in a job interview for a video game company.
What would you answer?
46. The first answer is simple…“It depends on the time and budget constraints”.
47. Test Flow Diagram, Pairwise Combinatorial Tables, Cleanroom Testing, Test Trees, Defect
Triggers…They all could help you to design the right test suites for all these games.
Exercise: Propose different testing techniques for each game.
48. Are you thinking about the Test Plan for your game?
• How many test suites?
• How many types of testing?
• Are all defect triggers covered?
50. Automated Testing
Automated testing uses software to exercise the game and then compare the actual
outcomes with the expected outcomes.
Some argue that everything should be automated; others are convinced that it is only
viable in a few specific instances.
Some reasons why Automated testing can be useful
• Consistency of results across different devices (the mobile case).
• Faster testing.
• Performance issues (fps in specific moments, CPU usage using AI, etc.).
• Large number of concurrent players for stress and load testing.
• Improving game reliability.
Some reasons why Automated testing is not always useful
• Some specific movements and transitions are not possible.
• Some expected outcomes cannot be evaluated automatically.
51. Difficulties that prevent teams from embracing automated testing
• Production costs can be very high if you want to run all the test cases
automatically. The project spends more time in development.
• Reusability of automated testing is not clear, since every game has new code.
• Large-scale automated testing implies a) very well-trained teams, b) big projects
with big budgets.
52. What games *must* we automate? Massively multiplayer games
• The Sims Online, World of Warcraft, etc. are games that can be automated, as
they have a high number of repetitive actions, must simulate a very large number
of players, are synchronized among a large number of servers and clients, etc.
• Many other types of games can benefit from automated testing, too. Level-based
games such as first-person shooters could have sequences of play tested by well-
written scripts that imitate a player repeatedly shooting in a certain pattern.
Certain patterns: not all patterns.
53. Integrating Development with Testing Environment
SimScript: The TSO team used a subset of the main game client to create a test client
in which the GUI is mimicked via a script-driven control system.
SimScript is an extremely simple scripting system. As it is intended to mimic or record a
series of user actions, it supports no conditional statements, loop constructs, or
arithmetic operations. Stored procedures and const-style parameters are used to support
reusable scripts for common functionality across multiple tests.
54. Integrating Development with Testing Environment (Testdroid from Bitbar)
• Testdroid tells us that we should not test with emulators and instead use their
infrastructure to test on real devices in an automated environment.
55. • TestDroid uses testing frameworks such as Robotium and Appium.
• Appium Framework to code scripts / test cases for a game (based on JUnit). Then,
using Testdroid Cloud service to select devices.
57. Utilities of Cloud Testing in mobile phones
While running a game on cloud-based real devices, check the following things:
• The game’s graphics and UI
• Are the controls all usable (buttons, menus, boxes) in all devices?
• Is the navigation flowing smoothly?
• How much delay is there between game stages?
• Screen resolution – how does it behave across devices and operating system versions?
• Screen orientation – does the device react the way it is supposed to?
• Are animations flowing well?
• Are fonts implemented properly?
• Image recognition is another “trend” in automated testing.
58. Graphic testing is part of the automated testing toolset used to detect visual flaws.
It compares images generated in real time on a specific device with
previously stored reference images.
From left to right, the reference image, test result image and finally the diff image.
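The comparison step can be sketched with the standard library alone. The code below is a stand-in operating on small grayscale pixel grids instead of real image files; a production tool would load device screenshots and tolerate device-specific rendering differences:

```python
def diff_image(reference, result, threshold=0):
    """Compare two equally sized grayscale pixel grids and return a
    'diff image' marking pixels that differ by more than threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(ref_row, res_row)]
            for ref_row, res_row in zip(reference, result)]

# Tiny 2x2 example: one pixel differs noticeably.
reference = [[10, 10], [200, 200]]
result    = [[10, 12], [200, 90]]
print(diff_image(reference, result, threshold=5))  # [[0, 0], [0, 1]]
```

A nonzero threshold absorbs small per-device rendering variation so only real flaws show up in the diff.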
59. Real example from Automated testing with the game Clash of Clans:
• Test Script contains the following 'steps':
1. Identify the used platform (Android / iOS)
2. Search for 'Goldmine' graphical element (using find image to search for screen)
3. Script runs test to go to shop
4. Script runs test to buy a cannon
5. Script runs test to place the cannon (all these three are defined in .png files)
6. Script runs test to start a battle!
7. Game is brought down and test ends.
Simple actions just to test compatibility with devices.
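The step sequence above can be sketched as a script. The helper names (find_image, tap) and the .png names are invented stand-ins; a real test would call an automation framework such as Appium. Here the helpers simply record the actions so the flow can be checked:

```python
# Hypothetical stand-ins for a device-automation API; they log each
# action instead of driving a real device.
log = []

def find_image(name):
    log.append(f"find {name}")
    return True  # pretend the element was located on screen

def tap(name):
    log.append(f"tap {name}")

def run_test():
    assert find_image("goldmine.png")  # 2. locate the Goldmine on screen
    tap("shop.png")                    # 3. go to the shop
    tap("cannon.png")                  # 4. buy a cannon
    tap("free_spot.png")               # 5. place the cannon
    tap("battle.png")                  # 6. start a battle
    log.append("teardown")             # 7. bring the game down, end test

run_test()
print(log)
```

Swapping the stubs for real framework calls keeps the test's structure identical across devices, which is the point of this kind of compatibility script.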
Sometimes we ‘trust a bit’…
We cannot detect very specific transition states in all mobile phones as the cost for
repeating tests would be high and it would not make sense to find a bug in one
device and not in another.
60. Performance testing in Mobile phones
• Good performance is a requirement for a
good user experience. Users want to see
constant progress in the game and smooth
gameplay; graphics performance needs to
be up to snuff and work the very same
across all the different mobile devices
(phones, tablets, etc.).
• Typical metrics from performance tests are
CPU load, memory consumption and
frames per second (FPS). Based on them,
coders can work better.
• Load testing. More oriented to the infrastructure and its optimization (testing the
number of connections, CPU of the servers, etc.).
• Stress testing. More oriented towards understanding how the game scenarios relate to
the device characteristics. Looking for the peak of resources or number of users is called
“spike testing”, while “soak testing” checks performance over time.
61. Capture/Playback Automation
Capture/playback test automation is like using a digital video recorder while you test.
Your inputs are recorded as you play the game and at various points you designate,
screen images are captured for comparison whenever you “replay” the test in the future.
Testdroid supports scripting of Unity3D Games (Testdroid Recorder).
It generates a code out of the player’s action.
QUESTION: When can it be useful?
62. Automated Testing: Summary of Pros / Cons (Again)
Pros:
- It can save many man hours and can be more “efficient” in some cases.
- It can be useful to do compatibility testing with multiple devices.
- It can be invaluable to do stress testing, load testing, etc.
Cons:
- It will cost more to ask a developer to write more lines of code into the game (or an
external tool) than it does to pay a tester (in general), and there is always the chance
of a bug in the bug-testing program itself.
- Reusability is another problem; you may not be able to transfer a testing program
from one title (or platform) to another.
- And of course, there is always the “human factor” of testing that can never truly be
replaced. There are things you cannot imagine trying with automated testing.
64. Tester Profile and environment
• What are some unknown facts about game testing?
- “Playing games for a living”. No: methodical work under pressure.
- “You know the games before everyone”. No: broken games and small parts of them.
- It is impossible to fix everything. Testers do not fix, testers are sometimes asking for
something to be fixed. It is a whole new perspective.
- Good QA and QM is closer to engineering (planning, efficiency, management, etc.).
• What are the daily responsibilities of a QA tester?
- The test suite/document you have been assigned. The work is delimited. This is the
real responsibility for which they can fire you.
- To find new bugs: even after you have completed the test suite, ad hoc testing
is sometimes an unnoticed requirement in the internal team competition.
- Reporting in a proper way so the programmer can save time. Testers may not have a
high reputation from the perspective of the development team.
- You need high skills to perform certain movements at the level it might be required
to test it (for the entire day). Testers need to be gamers.
65. • What are the working conditions associated with a testing team?
- Some profiles need to be temporary, as they only fill the requirements of one
project. Others can get a regular job, as the company continuously releases games.
- Lower paid than programming and general software testing. Companies know the
entry profile for the end tester is low and very few get paid properly.
- Competition is part of the team ethics (not for promotion, though), as bugs are
their trophies. A good tester is a meticulous, methodical, always-ready worker.
- Most testers have little formal education and therefore stay in that specific niche,
even when switching from one company to another (in the best cases they have learnt to code).
• What are some demographics associated with testers?
- The largest share of the group is in their twenties (62% aged 22-25). They are usually
men (70%) with prior enthusiasm for video games. Testing neither kills the love of
games nor increases it.
66. • What are some tester types? (in a humorous tone; Levy, Game Development
Essentials: Game QA & Testing; 2009)
Blank: This character has no memories, skills, knowledge, or equipment. This is the
tester who likes to play games and, following a friend’s “advice,” applied for a game
testing position. Blanks can either learn from the best and become pros, or resign
themselves to boredom and sleep on the job.
Technician: Technicians are ... technical. They often have attended at least one
technical school, and they might even be programmers already. Lacking observational
skills, the Technician doesn’t have the best “eye”—and can’t hear that well either.
Technicians can, however, break through the toughest bugs.
67. Artist: Artists love film and music and are usually able to draw and paint. Artists have
amazing eyes. They can see every single visual bug—even the smallest ones. They
spot z-fighting (when textures seem to “fight” with each other). Lack analytical skills.
Hybrid: A Hybrid has the soul of an Artist and the skills of a Technician—often
someone with a programming background, but who enjoys drawing on the side.
Hybrids can be maddening. Sometimes their skill levels are simply not high enough
for them to actually crack a bug’s steps. Sometimes they turn into leaders.
68. Stone: Most Stones are useless but, again, some hidden talent might shine under all
that immobility. Sometimes, Stones are excellent at full-on speedruns (playing
through the game really fast) because their concentration skills are unmatched. On
the other hand, they might not be the best communicators—and neither care about
nor follow hierarchy.
Berzerker: Berzerkers seem to thrive on chaos and may actually fear a peaceful work
environment. Berzerkers might be good at testing, maybe even phenomenal, but they
make everything personal and more difficult than it has to be.
69. Mini-Boss: Mini-Bosses tend to be frustrated Artist or Technician types—and most of
the time they are older than other testers. Mini Bosses think they have power—but
they’re testers just like everyone else. Even if Mini-Bosses indeed have talent, their
poor social skills annihilate any chances of promotion.
Elf: They don’t go to work as game testers because they see it as a career; no, Elves
test games because it’s cool. They also like to speak up to Executive Producers in the
middle of do-or-die team meetings. They end up leaving.
70. 7. How to enter and escape
What can you do to enter?
• Company websites and openings (King, Blizzard, Zitro, etcetera.)
• Job board sites (Infojobs).
• Participate in Open beta testing and get noticed.
• Sites to learn more about the profile (becomeagametester.com)
71. High-level QA positions end up in
management.
This is the CV of someone who made
QA at Square Enix much more efficient.
72. QA Engineering is oriented towards white-box
testing and designing tests for testers.
73. The end tester is often not required to have any experience.
However, on the employer's wishlist there is knowledge of automated testing and
development methodologies, which is unlikely for a starting position.
74. Do you need to leave testing?
• You are not being promoted; you are never given any complex tasks; your only
motivation is to feel “safe”.
• This is a reasonable career path if the tester is a good professional. In three years,
they might become a lead tester for some games.
QA is a good entrance door for video game companies, but it is good to keep moving.