Roberto Casadei 2013-05-30
Notes taken from:
Effective Unit Testing: A guide for Java developers
Effective Unit Testing
Testing
Expressing and validating assumptions and intended behavior
of the code
Checking what code does against what it should do
Tests help us catch mistakes
Tests help us shape our design to actual use
Tests help us avoid gold-plating by being explicit about what
the required behavior is
The biggest value of writing a test lies not in the resulting test
but in what we learn from writing it
The value of having tests
First step: (automated) unit tests as a quality tool
Helps to catch mistakes
Safety net against regression
failing the build process when a regression is found
Second step: unit tests as a design tool
Informs and guides the design of the code towards its actual purpose
and use
From design-code-test to test-code-refactor (i.e. TDD), a.k.a. red-green-refactor
The quality of test code itself affects productivity
Test-Driven Development (TDD)
Direct results:
Usable code
Lean code, as production code only implements what's required by the
scenario it's used for
Sketching a scenario into executable code is a design activity
A failing test gives you a clear goal
Test code becomes a client for production code, expressing your needs
in form of a concrete example
By writing only enough code to make the test pass, you keep your
design simple and fit-for-purpose
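The red-green-refactor cycle can be sketched in miniature. Here plain Java stands in for a JUnit test, and Greeter/GreeterTest are hypothetical names: the test is written first as a client of the code (red), then Greeter gets the minimal implementation that makes it pass (green), and cleanup happens while the test stays green (refactor).

```java
// Step 1 (red): GreeterTest is written first and fails because Greeter
// does not exist yet.
// Step 2 (green): Greeter gets only what the test demands - no extra
// parameters, no configuration.
// Step 3 (refactor): clean up while the test keeps passing.

class Greeter {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}

class GreeterTest {
    public static boolean greetsByName() {
        Greeter greeter = new Greeter();          // arrange
        String greeting = greeter.greet("Ada");   // act
        return "Hello, Ada!".equals(greeting);    // assert
    }

    public static void main(String[] args) {
        if (!greetsByName()) throw new AssertionError("greeting mismatch");
    }
}
```

The failing test is the "clear goal" mentioned above: until `greet` exists and returns the expected string, the build stays red.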
Behaviour-Driven Development (BDD)
Born as a correction of TDD vocabulary
“Test” word as source for misunderstandings
Now, commonly integrated with business analysis and
specification activities at requirements level
Acceptance tests as examples that anyone can read
Not just tests but good tests (1)
Readability
Maintainability
Test-code organization and structure
Not just structure but a useful structure
Good mapping with your domain and your abstractions
What matters is whether the structure of your code helps you locate the
implementation of higher-level concepts quickly and reliably
So, pay attention to:
Relevant test classes for task at hand
Appropriate test methods for those classes
Lifecycle of objects in those methods
Not just tests but good tests (2)
It should be clear what your tests are actually testing
Do not blindly trust the names of the tests
The goal is not 100% coverage → testing the “right
things”
A test that has never failed is of little value – it's probably not
testing anything
A test should have only one reason to fail
because we want to know why it failed
Not just tests but good tests (3)
Test isolation is important
Be extra careful when your tests depend on things such as:
Time, randomness, concurrency, infrastructure, pre-existing data, persistence,
networking
Examples of measures:
Test doubles
Keep test code and the resources it uses together
Making tests set up the context they need
Use in-memory database for integration tests that require persistence
In order to rely on your tests, they need to be repeatable
Test Doubles
Test doubles
Objects to be substituted for the real implementation for testing
purposes
Replacing the code around what you want to test to gain full control of its
context/environment
Essential for good test automation
Allowing isolation of code under test
From code it interacts with, its collaborators, and dependencies in general
Speeding up test execution
Making random behavior deterministic
Simulating particular conditions that would be difficult to create
Observing state & interaction otherwise invisible
Kinds of test doubles
Stubs: unusually short things
Fake objects: do it without side effects
Test spies: reveal information that otherwise would be
hidden
Mocks: test spies configured to behave in a certain way
under certain circumstances
Stubs
(noun) def. “a truncated or unusually short thing”
A stub is a simple implementation that stands in for the real implementation
e.g. An object with methods that do nothing or return a default
value
Best suited for cutting off irrelevant collaborators
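Such a stub can be hand-rolled in a few lines. In this sketch, ExchangeRates and PriceConverter are illustrative names (not from the book); the stub returns a canned value so the test can ignore the real rate service entirely.

```java
// The collaborator we want to cut off in tests.
interface ExchangeRates {
    double rate(String from, String to);
}

// Stub: a trivial implementation that stands in for the real one,
// always returning a fixed rate.
class FixedExchangeRates implements ExchangeRates {
    public double rate(String from, String to) { return 2.0; }
}

// Code under test, which only cares that *some* rate comes back.
class PriceConverter {
    private final ExchangeRates rates;
    PriceConverter(ExchangeRates rates) { this.rates = rates; }
    double convert(double amount, String from, String to) {
        return amount * rates.rate(from, to);
    }
}

class PriceConverterTest {
    public static boolean convertsUsingRate() {
        PriceConverter converter = new PriceConverter(new FixedExchangeRates());
        return converter.convert(10.0, "EUR", "USD") == 20.0;
    }

    public static void main(String[] args) {
        if (!convertsUsingRate()) throw new AssertionError();
    }
}
```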
Fake objects
Replicating the behavior of the real thing without the side
effects and other consequences of using the real thing
Fast alternative for situations where the real thing is
difficult or cumbersome to use
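A classic fake is an in-memory repository. The UserRepository/User shapes below are assumed for illustration: the fake provides real store-and-retrieve behavior, but none of the real thing's consequences (no database, no files, no network).

```java
import java.util.HashMap;
import java.util.Map;

class User {
    final long id;
    final String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

interface UserRepository {
    void save(User user);
    User findById(long id);
}

// Fake: working behavior backed by a plain map instead of a database.
class InMemoryUserRepository implements UserRepository {
    private final Map<Long, User> users = new HashMap<>();
    public void save(User user) { users.put(user.id, user); }
    public User findById(long id) { return users.get(id); }
}

class FakeDemo {
    public static String savedName() {
        UserRepository repo = new InMemoryUserRepository();
        repo.save(new User(1L, "Alice"));
        return repo.findById(1L).name;
    }

    public static void main(String[] args) {
        if (!"Alice".equals(savedName())) throw new AssertionError();
    }
}
```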
Test spies
Built to record what happens when using them
E.g. they are useful when none of the objects passed in
as arguments to certain operations can reveal through
their API what you want to know
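A spy can be sketched as follows (Notifier and OrderService are illustrative names): the spy records every call so the test can inspect an interaction that is otherwise invisible through the API.

```java
import java.util.ArrayList;
import java.util.List;

interface Notifier {
    void notify(String recipient, String message);
}

// Spy: remembers every call for later verification by the test.
class SpyNotifier implements Notifier {
    final List<String> sent = new ArrayList<>();
    public void notify(String recipient, String message) {
        sent.add(recipient + ": " + message);
    }
}

// Code under test; its notification side effect is not observable
// through any return value.
class OrderService {
    private final Notifier notifier;
    OrderService(Notifier notifier) { this.notifier = notifier; }
    void placeOrder(String customer) {
        notifier.notify(customer, "order received");
    }
}

class SpyDemo {
    public static List<String> messagesAfterOrder() {
        SpyNotifier spy = new SpyNotifier();
        new OrderService(spy).placeOrder("bob");
        return spy.sent;  // the spy reveals the hidden interaction
    }

    public static void main(String[] args) {
        if (!messagesAfterOrder().equals(List.of("bob: order received")))
            throw new AssertionError();
    }
}
```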
Mock objects
Mocks are test spies that specify the expected interaction
together with the behavior that results from them
E.g. A mock for UserRepository interface might be told to …
return null when findById() is invoked with param 123, and
return a given User instance when called with 124
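The UserRepository example can be hand-rolled as below (in practice a mocking library such as Mockito would generate this; the shape of User is assumed). The mock is configured with canned, per-argument behavior: null for id 123, a given User for id 124.

```java
class User {
    final long id;
    User(long id) { this.id = id; }
}

interface UserRepository {
    User findById(long id);
}

class MockDemo {
    // Hand-rolled configuration: behave differently per argument.
    public static UserRepository configuredMock(User known) {
        return id -> (id == 124) ? known : null;
    }

    public static void main(String[] args) {
        User known = new User(124);
        UserRepository mock = configuredMock(known);
        if (mock.findById(123) != null) throw new AssertionError();
        if (mock.findById(124) != known) throw new AssertionError();
    }
}
```

A full mock would additionally verify, like a spy, that the expected interactions actually took place; this sketch shows only the configured-behavior half.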
Choosing the right test double
As usual, it depends
Rule of thumb: stub queries; mock actions
Structuring unit tests
Arrange-act-assert
Arrange your objects and collaborators
Make them work (trigger an action)
Make assertions on the outcome
BDD evolves it into given-when-then
Given a context
When something happens
Then we expect a certain outcome
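The arrange-act-assert structure might look like this in plain Java (ShoppingCart is a hypothetical class under test; the comments mark the three phases):

```java
import java.util.ArrayList;
import java.util.List;

class ShoppingCart {
    private final List<Integer> prices = new ArrayList<>();
    void add(int priceInCents) { prices.add(priceInCents); }
    int total() { return prices.stream().mapToInt(Integer::intValue).sum(); }
}

class CartTest {
    public static boolean totalSumsAllItems() {
        // Arrange (given): set up the object under test and its state
        ShoppingCart cart = new ShoppingCart();
        cart.add(250);
        cart.add(150);
        // Act (when): trigger the behavior
        int total = cart.total();
        // Assert (then): check the outcome
        return total == 400;
    }

    public static void main(String[] args) {
        if (!totalSumsAllItems()) throw new AssertionError("total should sum all items");
    }
}
```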
Check behavior, not implementation
A test should test..
just one thing, and..
test it well, while..
communicating its intent clearly
What's the desired behavior you want to verify?
What's just an implementation detail?
Test Smells
Readability
Why
Accidental complexity adds cognitive load
Goal
Reading test code shouldn't be hard work
How
The intent and purpose of test code should be explicit or easily deducible
Consider
Level of abstraction
Single Responsibility Principle (also applies to tests)
Readability smells (1)
Primitive assertions
Assertions that use a level of abstraction that is too low
E.g. Testing structural details of results
Twin of the primitive obsession code smell (which refers to using primitive types to
represent higher-level concepts)
Also the abstraction level of the testing API matters
General advice: keep a single level of abstraction in test methods
Hyperassertions
Assertions that are too broad
make it difficult to identify the intent and essence of the test
may fail if small details change, thus making it difficult to find out why
Approach: remove irrelevant details + divide-et-impera
Readability smells (2)
Incidental details
The test intent is mixed up with nonessential information
Approach
Extract nonessential information into private helpers and setup methods
Give things appropriate, descriptive names
Strive for a single level of abstraction in a test method
Setup sermon
Similar to Incidental details but focuses on the setup of a test's fixture (= the context in which a
given test executes), i.e. on the @Before and @BeforeClass (setup) methods
Magic numbers
Generally, literal values do not communicate their purpose well
Approach: replace literals with constants whose informative names make their purpose explicit
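A minimal before/after sketch (SECONDS_PER_DAY is an illustrative constant): the named constant states why the number matters, where a bare 86400 would not.

```java
class MagicNumbers {
    // After: the name carries the meaning a bare literal would hide.
    static final int SECONDS_PER_DAY = 86_400;

    public static int daysToSeconds(int days) {
        // Before: return days * 86400;  <- why 86400?
        return days * SECONDS_PER_DAY;
    }

    public static void main(String[] args) {
        if (daysToSeconds(2) != 172_800) throw new AssertionError();
    }
}
```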
Readability smells (3)
Split personality
When a test embodies multiple tests in itself
A test should only check one thing and check it well
so that what's wrong could be easily located
Approach: divide-et-impera
Split logic
Test code (logic or data) is scattered in multiple places
Approach:
Inline the data/logic into the test that uses it
Maintainability
Test code requires quality (as production code)
Maintainability of tests
is related to test readability
is related to structure
Look for
test smells that add cognitive load
test smells that make for a maintenance nightmare
test smells that cause erratic failures
Maintainability smells (1)
Duplication
needless repetition of concepts or their representations
all “copies” need to be synchronized
Examples:
Literal duplication → extract variables
Structural duplication (same logic operating on different data instances) → extract methods
Sometimes, it may be better to leave some duplication in favor of better readability
Conditional logic
can be hard to understand and error-prone
Control structures can be essential in test helpers but, in test methods, these structures tend to be a major
distraction
Thread.sleep()
It slows down your tests; prefer synchronization mechanisms such as count-down latches or barriers
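The latch-based replacement can be sketched as follows, assuming the code under test signals completion from a background thread; the test proceeds the moment the work finishes instead of sleeping "long enough".

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class LatchDemo {
    public static boolean workerCompletes() {
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            // ... background work under test would run here ...
            done.countDown();  // signal completion
        }).start();
        try {
            // Wait deterministically, with a safety timeout instead of a sleep.
            return done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        if (!workerCompletes()) throw new AssertionError("worker never finished");
    }
}
```

Unlike Thread.sleep(), the latch never waits longer than necessary and fails loudly (via the timeout) rather than intermittently.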
Maintainability smells (2)
Flaky tests
Tests that fail intermittently
Does the behavior depend on time/concurrency/network/…?
When you have a source for trouble, you can
1) Avoid it 2) Control it 3) Isolate it
Unportable file paths
If possible, use relative paths (e.g. evaluated against the project's root dir)
You could also put resources on Java's classpath and look them up via
getClass().getResource(filename).getFile()
Persistent temp files
Even though you should try to avoid using physical files altogether if not essential, remember to
delete temp files during teardown
Maintainability smells (3)
Pixel perfection
Refers to tests that assert against (hardcoded) low-level details even
though the test is semantically at a higher level
you may want a fuzzy match instead of a perfect match
From the Parameterized-Test pattern to a Parameterized Mess
Some frameworks might not allow you
to trace a test failure back to the specific data set causing it
to express data sets in a readable and concise way
Lack of cohesion in test methods
→ each test in a test case should use the same test fixture
Trustworthiness
We need to trust our tests so that we can feel confident
in evolving/modifying/refactoring code
Look for test code that delivers a false sense of security
Misleading you to think everything is fine when it's not
Trustworthiness smells (1)
Commented-out tests
Try to understand and validate their purpose, or delete them
Misleading comments
May deliver false assumptions
Do not comment what the test does, as the test code should show that clearly and promptly
Instead, comments explaining the rationale may be useful
Never-failing tests
Have no value
E.g. forgetting fail() in a try{}catch{}
Shallow promises
Tests that do much less than what they say they do
Trustworthiness smells (2)
Lowered expectations
Tests asserting for loose conditions (vague assertions, …) give a false sense of security
→ raise the bar by making the assertions more specific/precise
Platform prejudice
A failure to treat all platforms equally
Measures: different tests for different platforms
Conditional test
It's a test that's hiding a secret conditional within a test method, making the test logic
different from what its name would suggest
Platform prejudice is an example (the specific test depends on the platform)
As a rule of thumb, all branches in a test method should have a chance to fail
Some advanced stuff
Testable design
Design decisions can foster or hinder testability
Principles supporting testable design
Modularity
SOLID
Single responsibility principle → a class should have only a single responsibility
Open/closed principle → software entities should be open for extension, but closed for modification
– you can change what a class does without changing its source code
Liskov substitution principle → objects in a program should be replaceable with instances of their
subtypes without altering the correctness of that program
Interface segregation principle → many client-specific interfaces are better than one general-purpose
interface
Dependency inversion principle → as a way for depending on abstractions rather than on concretions
– great for testability!
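Dependency inversion in miniature (TimeSource and ReportService are illustrative names): the service depends on an abstraction rather than on a concrete time source, so a test can substitute a fixed "now" with no waiting and no flakiness.

```java
import java.time.Instant;

// The abstraction the service depends on.
interface TimeSource {
    Instant now();
}

class ReportService {
    private final TimeSource clock;
    ReportService(TimeSource clock) { this.clock = clock; }  // depend on abstraction
    String stamp() { return "generated at " + clock.now(); }
}

class DipDemo {
    public static void main(String[] args) {
        // In a test, substitute a fixed time source for the real clock.
        Instant fixed = Instant.parse("2013-05-30T00:00:00Z");
        ReportService service = new ReportService(() -> fixed);
        if (!service.stamp().equals("generated at 2013-05-30T00:00:00Z"))
            throw new AssertionError();
    }
}
```

In production code the same service would receive a TimeSource backed by the system clock; the class itself never knows the difference.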
Testability issues
restricted access
private/protected methods
inability to observe the outcome (e.g. side effects), as with void methods
inability to substitute parts of an implementation
inability to substitute a collaborator
inability to replace some functionality
Guidelines for testable design
Avoid complex private methods
Avoid final methods
Avoid static methods (if you foresee a need to stub them)
Use new with care
it hardcodes the implementation class → use IoC if possible
Avoid logic in constructors
Avoid the Singleton pattern
Favor composition over inheritance
Wrap external libraries
Avoid service lookups (factory classes)
as the collaborator is obtained internally to the method ( service = MyFactory.lookupService() ), it may be
difficult to replace the service
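The lookup problem and its injection-based fix, sketched with illustrative names (MyFactory echoes the snippet above): the lookup version hardwires the collaborator inside the method, while the injected version lets a test pass in any double.

```java
interface Service {
    String handle(String request);
}

// Hard to test: the collaborator is obtained internally, so a test
// cannot substitute it.
class HardToTest {
    String process(String request) {
        Service service = MyFactory.lookupService();  // hardwired lookup
        return service.handle(request);
    }
}

// Easy to test: the collaborator is injected through the constructor.
class EasyToTest {
    private final Service service;
    EasyToTest(Service service) { this.service = service; }
    String process(String request) {
        return service.handle(request);
    }
}

class MyFactory {
    static Service lookupService() { return req -> "real:" + req; }
}

class LookupDemo {
    public static void main(String[] args) {
        // A test can hand EasyToTest any double it likes:
        EasyToTest sut = new EasyToTest(req -> "stubbed:" + req);
        if (!sut.process("x").equals("stubbed:x")) throw new AssertionError();
    }
}
```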
JUnit 4
JUnit4 basics
Package: org.junit
Test classes are POJO classes
Annotations
@Test (org.junit.Test)
@Before: marked methods are executed before each test method run
@After: marked methods are executed after each test method run
Using assertions & matchers
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
So that in your test methods you can write something like
assertThat(true, is(not(false)));
Parameterized-Test pattern in JUnit
Mark the test class with
@RunWith(org.junit.runners.Parameterized.class)
Define private fields and a constructor that accepts, in order, your parameters
public MyParamTestCase(int k, String name) { this.k=k; … }
Define a method that returns all your parameter data
@org.junit.runners.Parameterized.Parameters
public static Collection<Object[]> data() {
return Arrays.asList(new Object[][] { { 10, "roby" }, … });
}
Define a @Test method that works against the private fields that are defined to
contain the parameters.