The document discusses testing strategies such as isolation, generalization, regression testing, halo testing, and smoke testing. Isolation examines a defect's root cause by reproducing it in different situations. Generalization assesses a defect's broader impact, since code elements are reused. Regression testing verifies fixed bugs against their reproduction steps, while smoke testing provides early assurance that a system won't fail catastrophically. Halo testing probes the conditions surrounding a found defect to uncover related failures.
In this presentation we introduce the concept of quality assurance in video games, along with its most important concepts, team members, and testing phases.
These slides were prepared by Dr. Marc Miquel. All the materials used in them are referenced to their authors.
Regression testing is important to ensure new software changes do not break existing functionality. Automating regression testing helps manage the large number of test cases needed and speeds up release cycles. Key aspects of managing regression include establishing a baseline, comparing new results to the baseline, debugging failures efficiently, and automating testing processes to reduce human effort and testing time.
This lecture discusses testing game mods during the implementation process. It emphasizes the importance of testing early and often to identify bugs and issues. Different types of testing are described, including functionality, compliance, compatibility, localization, soak, beta, regression, load, and multiplayer testing. The assignment is to build out a section of the student's Skyrim mod by adding a location, enemies, loot, dialogue, and a trap. Students are then instructed to conduct playtesting with 5 players and document any issues or feedback in a testing log to submit. The goal of playtesting is to improve the mod, not just show it off.
The document outlines a five step process for debugging games: 1) reproducing the problem consistently, 2) collecting clues, 3) pinpointing the error, 4) repairing the problem, and 5) testing the solution. It then provides tips for each step such as ways to narrow down possible causes and ensure the underlying problem is fixed. The document also discusses tough debugging scenarios and ways to prevent bugs through compiler settings, assertions, and code practices.
1. The document discusses the concept of "bug advocacy", which is the practice of writing bug reports in a way that motivates programmers to fix the bug.
2. Effective bug reports motivate programmers by highlighting how serious or widespread the bug is. They also overcome objections by providing clear reproduction steps and evidence of customer impact.
3. The document recommends testing around found bugs to prove they are more serious or common than initially thought. This includes varying your own actions, program settings, and software/hardware environment to trigger related or worse failures. The goal is to sell programmers on the importance of fixing the bug.
The document provides an overview of software testing techniques, focusing on black-box testing. It discusses the basics of software testing, including verification and validation practices. It also describes the two main techniques of black-box and white-box testing, as well as six types of testing involving both: unit testing, integration testing, functional testing, system testing, stress testing, and performance testing. Finally, it discusses strategies for writing efficient test cases that can find faults with minimal effort.
This document provides an overview of software testing techniques, including black-box and white-box testing. It discusses the basics of software testing throughout the development lifecycle, including verification to check that the product is being built correctly and validation to ensure the right product is being built. Six types of testing involve both black-box and white-box approaches. The document also describes strategies for writing fewer test cases while still finding faults, and using templates for defined, repeatable test cases.
The document discusses debugging processes and techniques. It defines debugging as the process of finding, correcting and removing bugs from programs. There are three main types of errors: syntactic, semantic, and logic errors. The debugging process involves reproducing the problem reliably, finding the source of the error, fixing just that one error, testing the fix, and optionally looking for more errors. Key debugging techniques include inserting print statements, using a debugger, explaining the code to someone else, and fixing only one error at a time. The overall goal of debugging is to methodically match symptoms to causes to locate and correct errors in code.
The document outlines a 5-step process for debugging game code: 1) reproducing the problem consistently, 2) collecting clues, 3) pinpointing the error, 4) repairing the problem, and 5) testing the solution. It also provides tips for preventing bugs through code organization and testing, and describes Game Maker-specific debugging tools like print messages, debug mode, and log files.
Flight Checks - QA for Releases that Prevent Disasters from Escaping into the ... - Brie Hoblin
The document discusses flight checks and quality assurance for software releases to prevent disasters. It provides examples of past software disasters like the Mars Climate Orbiter, which was lost due to a mix-up between measurement units. The document defines a disaster as a bug that significantly harms a client or users. It discusses scenarios to determine whether they constitute disasters and outlines factors to consider for deployment decisions, rollback vs. hotfixing, and preventing disasters through quality assurance.
This document provides an overview of debugging techniques and best practices. It discusses what debugging is, common debugging rules and methods, as well as tools and techniques for preventing bugs. Key points covered include understanding the system, making failures reproducible, dividing problems, changing one thing at a time, and keeping an audit trail. The document also mentions code reviews, assertions, and defensive programming as ways to prevent bugs.
The document discusses techniques for software testers to advocate for bugs they have found to get them prioritized and fixed. It recommends testers think of bug reports as tools to convince programmers to spend time fixing issues. Effective bug reports motivate programmers by highlighting impact and addressing objections. Testers should research failure conditions thoroughly by varying their own behavior, program options/settings, and software/hardware environments to prove bugs are more serious or widespread than initially found. The goal is to provide compelling arguments to prioritize and fix bugs.
Stakeholders always want to release when they think we’ve “finished testing”. They believe we have revealed “all of the important problems” and “verified all of the fixes,” and now it’s time to reap the rewards. However, as testers we still can assist in improving software by learning about problems after code has rolled “live-to-site”—especially if it’s a website. At eBay we have a post-ship “site quality” mindset in which testers continue to learn from A/B testing, operational issues, customer sentiment analysis, discussion forums, and customer call patterns—just to name a few. Jon Bach explains how and what eBay’s Live Site Quality team learns every day about what they just released to production. Take away some ideas on what you can do to test and improve value—even after you’ve shipped.
This document provides an overview of software testing techniques, focusing on black-box testing. It defines software testing as verifying that a software product meets requirements and identifying bugs. The two main types of testing are black-box testing, which ignores internal code structure, and white-box testing, which considers internal structure. Six types of testing are discussed: unit testing examines individual code units; integration testing verifies code units work together; functional and system testing ensure requirements are met; and stress, performance, and usability testing evaluate non-functional properties. Acceptance testing involves customers verifying requirements are fulfilled.
assertYourself - Breaking the Theories and Assumptions of Unit Testing in Flex - michael.labriola
This document discusses automated testing in Flex. It begins by explaining why automated testing is important, such as reducing costs from software errors and allowing developers to change code without fear of breaking other parts of the project. It then covers topics like writing unit tests, using theories and data points to test over multiple values, and writing integration tests. The document emphasizes that writing testable code is key, and provides some principles for doing so, such as separating construction from application logic and using interfaces. It also discusses using fakes, stubs and mocks to isolate units for testing.
Unit testing is easy... In a perfect world.
Our world is not.
This talk will cover a bunch of tips, tricks, and techniques to retrofit ugly legacy applications so parts of them can be unit tested.
(Examples given in Java using JUnit and Mockito)
This document discusses common mistakes made in software testing. It identifies five themes of mistakes: the role of testing, planning the testing effort, personnel issues, the tester at work, and overreliance on technology. Under the first theme, it discusses mistakes like defining the role of testing too narrowly and not providing context for bug data. The second theme covers mistakes like a bias toward functional testing over scenario and configuration testing, as well as neglecting to test documentation, installation, stress, and load.
The document discusses error and exception handling in UiPath Studio. It defines different types of errors like syntax errors and exceptions. Exceptions are events caught and handled by the program. Common exceptions that may occur in UiPath projects are discussed, including NullReferenceException, IndexOutOfRangeException, and SelectorNotFoundException. The document also covers different exception handling techniques in UiPath like TryCatch, Throw, Rethrow, and the ContinueOnError property. Finally, it discusses the Global Exception Handler used to handle unexpected exceptions.
The document discusses test cases, defects (bugs), and bug reports. It provides definitions and examples of test cases, their purpose and components. Examples of test management tools and test-driven development are also presented. Defects and what constitutes a good bug report are defined. The importance of collaboration between testers and developers is emphasized.
This chapter discusses various types of defects that can occur in software and strategies for testing and detecting them. It defines key terms like failure, defect, and error. It covers different types of testing like glass-box/white-box testing which examines internal design/code and black-box testing without access to internals. Specific defect categories discussed include logical errors, loop/termination issues, precondition failures, and concurrency problems like deadlocks. The importance of equivalence partitioning and boundary value analysis is emphasized to design effective tests.
The Most Important Thing: How Mozilla Does Security and What You Can Steal - mozilla.presentations
The document discusses Mozilla's approach to software security and provides recommendations for how to implement an effective security process. Some of the key points covered include:
1) Security is not a linear process and should have feedback loops to continuously learn from problems and prevent recurrences.
2) The most important thing is to systematically capture knowledge from security incidents to avoid repeating mistakes.
3) Extensive testing is critical to maintain security and catch issues early, with Mozilla running over 55,000 automated tests daily.
4) Code reviews should be mandatory to catch mistakes and spread security knowledge throughout the organization.
Characterizing and Predicting Which Bugs Get Reopened - Thomas Zimmermann
This document summarizes a study characterizing and predicting which software bugs get reopened. Through a qualitative survey, researchers identified six main causes of bug reopens: bugs being difficult to reproduce, misunderstood root causes, insufficient bug information, increased bug priority later on, regression bugs, and process-related issues. A quantitative analysis using logistic regression found that bugs found through code analysis tools or human review were less likely to reopen, while bugs found through system or customer testing were more likely to reopen. The analysis also found that bugs opened by people with higher reputations based on past bugs were less likely to reopen.
Become a Better Developer with Debugging Techniques for Drupal (and more!) - Acquia
What is debugging? How is it different from simply writing a program, and how can you get better at it? A structured debugging approach narrows down problems, rather than using random changes and guesses, and can help you identify and solve problems faster and more effectively.
In this webinar about debugging techniques for Drupal, we’ll cover:
- A general approach to debugging Drupal problems
- Common sources of bugs
- A tour of useful debugging tools and techniques that can help you start to see into the inner workings of any version of Drupal
- The use of tools such as XDebug, the devel suite, and client-side debugging such as Firebug, LiveHTTPHeaders, and JavaScript debugging
2. GameHouse Confidential 2
Isolation
Isolation is the process of examining the root causes of a defect.
• While the exact root cause might not be determined, it is important to try to separate the symptoms of the problem from the cause.
• Isolating a defect is generally done by reproducing it multiple times in different situations to build an understanding of how and when it occurs.
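One lightweight way to apply this is a reproduction matrix: run the same steps under several situations and record where the defect appears. A minimal sketch, assuming a hypothetical memory-related crash (the situations and the `reproduce` check below are invented stand-ins, not part of the slides):

```python
# Sketch: isolate a defect by reproducing it under different situations
# and recording where it occurs. Situations and the check are hypothetical.

def reproduce(situation):
    # Stand-in for "run the repro steps in this situation and observe".
    # The pretend defect only occurs under memory pressure.
    return situation["free_memory_mb"] < 256

situations = [
    {"name": "fresh boot", "free_memory_mb": 1024},
    {"name": "after long session", "free_memory_mb": 128},
    {"name": "minimum-spec device", "free_memory_mb": 192},
]

repro_log = {s["name"]: reproduce(s) for s in situations}
# A pattern in the log (fails only under memory pressure) separates the
# symptom ("game crashes") from a likely cause (memory exhaustion).
```

The pattern in `repro_log`, not any single run, is what distinguishes symptom from cause.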
3. Generalization
Generalization is the process of understanding the broader impact of a defect.
• Because developers reuse code elements throughout a program, a defect present in one element of code can manifest itself in other areas.
• A defect discovered as a minor issue in one area of code might be a major issue in another.
• Individuals logging defects should attempt to extrapolate where else an issue might occur, so that the developer considers the full context of the defect, not just a single isolated incident.
• A defect report written without isolation and generalization is only half a report.
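Generalizing can start as simply as listing every place the defective code element is reused. A hypothetical sketch (the module names and the buggy helper are invented for illustration):

```python
# Sketch: given a defect in a shared helper, list every call site so the
# report covers the defect's full context. All names below are invented.

modules = {
    "inventory.py": "total = sum_quantities(items)",
    "shop.py":      "price = base_price * discount",
    "crafting.py":  "needed = sum_quantities(recipe)",
}

buggy_helper = "sum_quantities"
affected = [name for name, source in modules.items() if buggy_helper in source]
# The defect report should mention both inventory.py and crafting.py,
# not just the file where the symptom was first seen.
```

In practice this is a codebase search (grep, IDE "find usages") rather than a script, but the principle is the same: enumerate every reuse before filing the report.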
4. Severity
The importance of a defect is usually denoted by its "severity". There are many schemes for assigning defect severity: some complex, some simple. In JIRA, severity is synonymous with Priority.
• Almost all schemes feature "Severity-1" (Blocker) and "Severity-2" (Critical) classifications, commonly held to be defects serious enough to delay completion of a project or a build release.
• "Severity-3" (Major) is the middle ground: important enough to be fixed relatively quickly, but not immediately.
• In many companies, developers and testers get into arguments about whether a defect is "Severity-4" (Normal) or "Severity-5" (Trivial), and time is wasted.
Priority
Bugs should be assessed in terms of impact and probability to determine priority.
• Impact is a measure of the seriousness of the defect when it occurs and can be classed as “high” or “low”:
– high impact implies that the user cannot complete the task at hand;
– low impact implies there is a workaround or the error is cosmetic.
• Probability is a measure of how likely the defect is to occur, and is likewise classed as “high” or “low”.
Impact / Probability = Priority
High / High = High
High / Low = Medium
Low / Low = Low
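The mapping above can be sketched as a simple lookup. This is a hypothetical illustration, not part of the slides; the function name is made up, and only the three combinations listed on the slide are covered (the low-impact/high-probability case is left to team convention).

```python
# Hypothetical sketch of the impact/probability -> priority mapping above.
def priority(impact: str, probability: str) -> str:
    """Map an impact/probability pair to a priority, per the slide's table."""
    table = {
        ("high", "high"): "High",
        ("high", "low"): "Medium",
        ("low", "low"): "Low",
    }
    return table[(impact.lower(), probability.lower())]

print(priority("High", "Low"))  # Medium
```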
Pokédex 000 item duplication glitch
The Pokédex 000 item duplication glitch (commonly referred to as the Rare Candy glitch, after the item players most often choose to duplicate) is an infamous glitch in the Generation 1 Pokémon games. It allows the player to duplicate items in their bag.
• During the development of Pokémon Yellow, which took place in the two years following the release of Pokémon Red and Green in Japan, the old man glitch was disabled by blanking the wild Pokémon data before overwriting it and by reprogramming shore tiles not to spawn any Pokémon. However, a player can still encounter the glitch Pokémon and exploit the item duplication glitch using the Ditto glitch or the Cable Club escape glitch.
• RESULT: The sixth item in the bag is duplicated upon encountering the glitch Pokémon, and again if it is caught. The quantity of this item is increased by 128, provided the quantity was less than 128 before performing the glitch. This means the player is free to perform the glitch again by swapping the item, or by using/tossing the duplicated item to reduce its quantity back under 128.
• CAUSE: Every Pokémon has two separate bit lists that tell the game whether it has been seen or caught. If a bit is off, that Pokémon has not been seen or caught.
• Missingno.'s Pokédex “seen” bit is stored in the same location as the value that stores how many of the 6th item are in the bag, as is 'M (00)'s. This is why, when Missingno. or 'M (00) is encountered, the sixth item slot is increased by 128 if the quantity of the item is less than 128. The glitchy box symbol is the result of the game attempting to display a number greater than 99, which causes it to grab sprites from beyond the number sprites. Sometimes it can appear to be a blank tile, but if the player goes somewhere else or leaves the battle it reverts to an unusual tile.
• A way to tell whether “ 9” is in fact 9 or [blank tile]9 is to select Toss. The quantity is displayed with a leading zero if the amount is actually 9, and as simply “ 9” if not.
Bug Regression
• Bug regression is among the most crucial skills for a tester. Regression is often broadly generalized as merely verifying fixed bugs; in fact, there are two facets to it:
– Bug Fix Verification
– Validation of Reproduction Steps
• The first is simple: fully comprehend the reproduction steps. If the steps are not entirely clear, we assign the bug back to the Reporter with comments explaining why. As we do regressions, these are the ones sent back as CONFUSED.
• As Testers, our bugs need to be the clearest and most concise bugs in the JIRA database. Developers and artists will write bugs and tasks specific to their own production tracking.
Regression Priorities
• Providing adequate coverage without wasting time should be a primary consideration when conducting regression tests.
– Sort and assign groups of bugs to regress.
• Casino example: Slots, Blackjack, etc.
– If attempting to isolate a new issue, have a fellow team member try to reproduce the issue with you.
• Spend as little time as possible doing regression testing without reducing the probability that you will detect new failures in old, already-tested code.
– Check off, or keep a mental note of, issues as you regress them.
• If time permits, check closed bugs of the same type and make sure they haven't been reopened without our awareness.
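Sorting bugs into regression groups, as in the casino example above, can be sketched with a simple grouping pass. The bug records, IDs, and field names here are invented for illustration; a real team would pull these from the JIRA database.

```python
# Hypothetical sketch: group fixed bugs by game area for regression passes.
from collections import defaultdict

bugs = [
    {"id": "GH-101", "area": "Slots", "status": "Fixed"},
    {"id": "GH-102", "area": "Blackjack", "status": "Fixed"},
    {"id": "GH-103", "area": "Slots", "status": "Closed"},
]

groups = defaultdict(list)
for bug in bugs:
    if bug["status"] == "Fixed":      # only fixed bugs await verification
        groups[bug["area"]].append(bug["id"])

for area, ids in sorted(groups.items()):
    print(area, ids)
```

Grouping by area lets one tester regress all Slots fixes in a single session instead of bouncing between game modes bug by bug.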
New Issues from Regression
• It is important to remember that when a bug is fixed and a new build is released, new bugs are likely to be introduced.
• Check surrounding issues through functionality testing. Look for general problems within the game itself or its user interface, such as stability issues, game mechanic issues, and game asset integrity.
– Example: Purchase Testing
Halo Testing
• ha·lo n. pl. ha·los or ha·loes
A circular band of colored light around a light source, as around the sun or moon, caused by the refraction and reflection of light by ice particles suspended in the intervening atmosphere.
Abstract Exercise: Halo Test around this photo
Smoke Testing
The term initially referred to physical tests for leaks, made by running non-toxic smoke through a closed system of pipes.
By metaphorical extension, it is also used for the first test made after assembly or repairs to a system, to provide some assurance that the system under test will not catastrophically fail.
• Smoke testing performed on a particular build is also known as a build verification test. Microsoft claims that after code reviews, "smoke testing is the most cost-effective method for identifying and fixing defects in software."
• Smoke tests can be broadly categorized as functional tests or unit tests. Functional tests exercise the complete program with various inputs.
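A smoke test's job is only to confirm the build does not catastrophically fail, which can be sketched in a few lines. The `Game` class and its `start` method here are hypothetical stand-ins; a real build verification test would launch the actual game binary.

```python
# Minimal smoke-test sketch. Game is a stand-in for the build under test
# (an assumption for illustration, not a real API).
class Game:
    def __init__(self):
        self.main_menu = None

    def start(self) -> bool:
        # In a real test this would boot the build; here we simulate it.
        self.main_menu = "loaded"
        return True

def smoke_test(game: Game) -> bool:
    """Pass only if the build launches and reaches the main menu."""
    return game.start() and game.main_menu == "loaded"

print(smoke_test(Game()))  # True
```

The test deliberately checks almost nothing beyond "it boots": depth belongs in functional and regression passes, while the smoke test gates whether those passes are worth starting at all.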