This document discusses negative testing strategies for test automation. It begins by defining negative testing as testing with invalid or unexpected inputs to show how a system does not work. It then gives examples of how writing negative tests improves quality, coverage, and stability, and makes applications more robust. Specific strategies are outlined, such as discussing negative acceptance criteria with users, separating tests into functional groups, and using feature flags to enable and disable features for different test types. The conclusion emphasizes shifting to a more positive mindset about negative testing to achieve better outcomes.
5. • One of the market leaders in low-code platforms
• Helps organizations build and improve apps at lightning speed (months turn into weeks)
• In-house cloud support (and support for major cloud vendors)
• Based out of Rotterdam, Boston, and London
Mendix
6. Disclaimer
"Quality is value to some person" - Gerald (Jerry) M. Weinberg
"Testing is questioning a product in order to evaluate it" - James Bach
7. • What is negative testing?
• The light-bulb moment
• Why is negative testing needed?
• How I write (better) negative tests: inspired by true events
• Recap: helpful practices
• Conclusion
• Q&A
Agenda
8. "Testing the system by giving invalid data" - an accepted answer on Stack Overflow
"Tests aimed at showing that a component or system does not work. Negative testing is related to the tester's attitude rather than a specific test approach or test design technique, e.g., testing with invalid input values or exceptions." - ISTQB Glossary
Some negative test scenarios for testing a pen:
• Change the medium to write on
• Replace the refill with an empty one
• Put the pen in liquid and verify it still writes
What is negative testing?
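The pen scenarios above translate directly to code. Below is a minimal sketch of paired positive and negative tests, using a hypothetical `parse_amount()` helper for a donation form (the function name and rules are illustrative assumptions, not from the talk):

```python
def parse_amount(raw: str) -> float:
    """Parse a donation amount; reject invalid or non-positive input."""
    value = float(raw)  # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError(f"amount must be positive: {value}")
    return value

def rejects(raw: str) -> bool:
    """Return True if parse_amount rejects the input (the negative path)."""
    try:
        parse_amount(raw)
        return False
    except ValueError:
        return True

# Positive test: valid input produces the expected output.
assert parse_amount("10.50") == 10.50

# Negative tests: each invalid input must be rejected, not silently accepted.
assert all(rejects(bad) for bad in ["", "abc", "-5", "0"])
```

The negative tests do not prove the parser works; they show where it refuses to work, which is exactly the ISTQB framing quoted above.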
9. • It is different from positive testing
• Fragile and not straightforward
• Too improbable to some
• Sometimes the term is used as a guide-word heuristic
• There is nothing negative in negative testing
The light-bulb moment
10. • Prevents the application from crashing
• Finds defects → improved quality
• Increases test coverage
• More stable and reliable application
• Ensures the application handles both good and bad data well
Why is negative testing needed?
[Diagram: Good input → Process → Good output; Bad input → Process → OK output]
11. How I write (better) negative tests:
inspired by true events
12. • Ask end users for examples of negative scenarios
• Makes the application robust
• Sparks productive discussions within the team
• May result in new business rule(s) and/or requirements
• Also talk in terms of what the product should not do...
Discuss negative acceptance criteria too
16. • The classic question applies here
• Depends on risk and frequency of use
• If costly to maintain → consider ROI
• Technically challenging → implement the feature differently
• Write test-friendly code → less fragile tests
What and what not to automate
17. • Should have a clear test objective
• Should verify a user behavior
• Group tests into functional slices:
  ◦ Separate tests to check routing to different banks
  ◦ Separate tests to check payment-method decisioning
• Group tests based on type:
  ◦ granular functional tests
  ◦ integration tests
  ◦ E2E/chain tests
Separate tests with clear test objective
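The functional slices above can be sketched as separate tests, each with one objective. `route_bank()` and `choose_method()` are hypothetical stand-ins for the bank-routing and payment-method-decisioning features named on the slide:

```python
def route_bank(iban: str) -> str:
    """Route a payment to a bank based on the IBAN country code."""
    return {"NL": "ING", "DE": "DeutscheBank"}.get(iban[:2], "DefaultBank")

def choose_method(amount: float) -> str:
    """Decide the payment method based on the amount."""
    return "iDEAL" if amount < 1000 else "BankTransfer"

# Slice 1: routing tests check routing only.
def test_routing_known_country():
    assert route_bank("NL91ABNA0417164300") == "ING"

def test_routing_unknown_country():  # the negative case for this slice
    assert route_bank("XX00") == "DefaultBank"

# Slice 2: decisioning tests check decisioning only.
def test_method_small_amount():
    assert choose_method(50) == "iDEAL"

# Run the slices (a test runner would normally discover these).
for t in (test_routing_known_country, test_routing_unknown_country,
          test_method_small_amount):
    t()
```

When a routing test fails, nothing about decisioning is implicated, which keeps the test objective readable from the test name alone.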
23. Scenario: Check cancel-payment functionality in an online donation application
Set up:
• Execute a headless test automation script
• Trigger a database script to populate the data
Actual test: test only the payment-cancelling feature
Test set up can be a hack
[Flow: Fill in donor details → Choose mode of payment → Cancel payment]
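The "set up can be a hack" idea above can be sketched as follows. An in-memory dict stands in for the database the slide's setup script populates, and `cancel_payment()` is a hypothetical stand-in for the feature under test:

```python
payments = {}  # stand-in for the application's data store

def seed_pending_payment(payment_id: str):
    """Hack setup: insert a pending payment directly, skipping the UI
    flow (donor details, payment-method choice) entirely."""
    payments[payment_id] = {"status": "pending"}

def cancel_payment(payment_id: str) -> str:
    """The feature under test: cancel a pending payment."""
    if payments.get(payment_id, {}).get("status") != "pending":
        raise ValueError("only pending payments can be cancelled")
    payments[payment_id]["status"] = "cancelled"
    return payments[payment_id]["status"]

# Actual test: only the cancel step is exercised.
seed_pending_payment("don-42")
assert cancel_payment("don-42") == "cancelled"
```

The hack lives only in the setup; the assertion still goes through the real cancel path, so the test objective stays honest.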
24. Negative testing exposes loose ends
Positive scenario of a successful payment in an online donation system:
• Comments box is an optional field
• So, it is often left unfilled
Negative scenario of spamming the application:
• The same comments box is a key component
• The team may agree to change the behavior → only available to authenticated users
26. • Negative tests can result in false positives
False positives
A positive test like:
  auth_user.should have_comments('.t-helper-textarea')
• Will pass if the comments section is available
• Will fail if the comments section is removed/changed
A negative test like:
  guest_user.should_not have_comments('.t-helper-textarea')
• Will pass if the comments section is removed
• Will also pass if the class is renamed to t-helper-comments in the DOM → a false positive
27. • Balance DRY (Don't Repeat Yourself) with its opposite
• Write adjacent helper methods
Possible solutions to false positives
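One way to read "adjacent helper methods" is a guard that re-validates the selector before the negative assertion. Below is a sketch of that idea; `page_has()` is a hypothetical DOM query (a stand-in for something like Capybara's `has_css?`), and the selector mirrors the comments-box example above:

```python
def page_has(dom: set, selector: str) -> bool:
    """Stand-in for a real DOM query against a rendered page."""
    return selector in dom

COMMENTS = ".t-helper-textarea"

def assert_comments_hidden_for_guest(auth_dom: set, guest_dom: set):
    # Adjacent positive check first: prove the selector is still valid
    # by finding it on a page where it SHOULD exist...
    assert page_has(auth_dom, COMMENTS), "selector stale or renamed?"
    # ...and only then assert its absence for the guest. If the class
    # were renamed in the DOM, the guard above fails loudly instead of
    # the negative test silently passing.
    assert not page_has(guest_dom, COMMENTS)

assert_comments_hidden_for_guest({".t-helper-textarea"}, set())
```

Pairing the positive and negative checks costs some repetition (the anti-DRY trade-off from the slide) but removes the false-positive failure mode.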
28. Scenario: User is able to add and delete a contribution cause
Technical assertion
Functionally un-automatable!
31. Scenario: User is able to add and delete a donation cause
Customized assertion
Customized assertions
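A customized assertion as used above can be sketched like this: instead of a technical check (e.g., a row count or a raw selector), the assertion speaks in domain terms and fails with a domain-level message. All names here are illustrative:

```python
causes = ["education", "health"]  # stand-in for the app's cause list

def delete_cause(name: str):
    """The feature under test: remove a donation cause."""
    causes.remove(name)

def assert_cause_absent(name: str):
    """Customized assertion: states the functional expectation and
    produces a readable failure message when it is violated."""
    assert name not in causes, f"cause {name!r} should have been deleted"

delete_cause("health")
assert_cause_absent("health")
```

When this fails, the message names the business object that survived deletion, rather than reporting a mismatched count.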
32. Scenario: Check deletion of a particular object (split shapes) in a modeling tool
Test should not test "test data"
Say NO to these kinds of tests / Say YES to these kinds of tests
33. Scenario: Validate the length of the donor's first name
Implementation impacts negative test automation
Explicit tests
No tests
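The "explicit tests" vs. "no tests" contrast above can be illustrated as follows. If the UI caps input with a `maxlength` attribute, an over-long first name can never reach the server through the UI, so there is nothing to automate at that layer; a server-side rule, by contrast, invites explicit boundary tests. `validate_first_name()` and the limit are assumed for illustration:

```python
MAX_LEN = 30  # hypothetical server-side limit on first-name length

def validate_first_name(name: str) -> bool:
    """Server-side rule: non-empty and within the length limit."""
    return 0 < len(name) <= MAX_LEN

# Explicit boundary tests against the server-side rule:
assert validate_first_name("A" * MAX_LEN)            # at the limit: accepted
assert not validate_first_name("A" * (MAX_LEN + 1))  # over the limit: rejected
assert not validate_first_name("")                   # empty: rejected
```

The implementation choice, not the tester's wish list, decides which of these negative tests exist.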
34. • The examples are phrased as what customers don't want
• Convert the examples to positive behavior
• A sample scenario can be:
  Given we've gone live with the front page
  When the president of the USA resigns on the same day
  Then the site shouldn't go down under the weight of people reading that news
• Negative scenarios are powerful - often related to interesting stories
Negative non-functional scenarios in BDD
35. • "Positive test bias" exists in the TDD approach
Negative testing in TDD
[Charts: Number of test cases - 71.33% positive TCs, 28.67% negative TCs; Number of defects found - 28.85% from positive TCs, 71.15% from negative TCs]
Source: "Effects of Negative Testing on TDD: An Industrial Experiment" by Adnan Causevic, Rakesh Shukla, Sasikumar Punnekkat
36. Feature flag in negative testing
A feature flag (or feature toggle) is the ability to easily turn features of an application on and off.
Usage of feature flags:
• Early access
• Run A/B tests
• Newbie vs. power user
Example
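A feature flag can be sketched as a named switch read at runtime. The flag name, config source, and page strings below are illustrative assumptions:

```python
FLAGS = {"tabbed_donor_view": False}  # stand-in for a real flag store

def is_enabled(flag: str) -> bool:
    """Read a flag, defaulting to off for unknown names."""
    return FLAGS.get(flag, False)

def render_donor_page() -> str:
    """Behavior branches on the flag state."""
    if is_enabled("tabbed_donor_view"):
        return "donor and donation details in separate tabs"
    return "donor and donation details in a single view"

assert render_donor_page() == "donor and donation details in a single view"
FLAGS["tabbed_donor_view"] = True  # e.g., early access or an A/B variant
assert render_donor_page() == "donor and donation details in separate tabs"
```

Because both branches ship at once, tests need to cover both flag states, which is where the next slides pick up.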
37. • A test to check the donor and donation details are available in different tabs only with the feature flag
• A test to check the donor and donation details are not available in tabbed view without the feature flag
• Separate test(s) for detailed checks of the tabbed views
Feature flag is live
38. • A test to check the donor and donation details are available in different tabs only with the feature flag
• A test to check the donor and donation details are not available in tabbed view without the feature flag
• Separate test(s) for detailed checks of the tabbed views
Feature flag is removed
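The flag-dependent tests described on these two slides can be sketched as below: one test per flag state, plus the detailed tab checks kept separate. When the flag is later removed, only the flag-off test is deleted; the detailed checks survive unchanged. `donor_view()` is a hypothetical stand-in for the feature:

```python
def donor_view(tabbed: bool) -> list:
    """Render the donor page; the flag selects tabbed or single view."""
    return ["donor tab", "donation tab"] if tabbed else ["single view"]

def test_details_tabbed_with_flag():           # positive, flag on
    assert donor_view(tabbed=True) == ["donor tab", "donation tab"]

def test_details_not_tabbed_without_flag():    # negative, flag off
    assert "donor tab" not in donor_view(tabbed=False)

def test_tab_contents_in_detail():
    # Separate, detailed checks of the tabbed view, independent of
    # the flag lifecycle.
    assert len(donor_view(tabbed=True)) == 2

for t in (test_details_tabbed_with_flag,
          test_details_not_tabbed_without_flag,
          test_tab_contents_in_detail):
    t()
```

Keeping the detailed checks out of the flag tests means removing the flag touches only the tests that mention it.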
39. ✓ Discuss both positive and negative A/C
✓ Write test-friendly code - less fragile tests
✓ Separate tests with clear test objectives
✓ Focus on the test; the set-up can be a hack
✓ Negative testing exposes loose ends
✓ Balance DRY with its opposite; write adjacent helper methods
✓ Customized assertions
✓ Test functionality, not test data
✓ Implementation impacts negative test automation
✓ Give a positive twist to negative non-functional BDD scenarios
✓ Positive test bias in TDD
✓ Feature flags in negative testing
Recap: helpful practices
40. ✗ Testing strategy is based on "positive" testing
✗ Teams map tests to requirements
✗ Customers face issues related to untested conditions
✗ Customers use the product in unusual ways
✓ Testing a product against requirements is necessary but not enough
✓ It's more of a mindset change - everyone in the team should be involved
✓ More negative testing will lead to more positive outcomes
Time to get positive about negative testing