SDL Proprietary and Confidential
How to improve your unit tests?
Peter Modos
12/12/2014
Agenda
○ Intro
○ Why do we need unit tests
○ Good unit tests
○ Common mistakes
○ How to measure unit test quality
○ Ideas to improve tests
Why talk about unit tests?
○ because there are plenty of bad unit tests out there
• and some were written by us
Why do we need unit tests?
We do NOT write unit tests to
○ meet management’s expectation: have X% test coverage
Image source: http://www.codeproject.com/KB/cs/649815/codecoverage.png
We do NOT write unit tests to
○ slow down product development
Image source: https://c1.staticflickr.com/3/2577/4166166862_788f971667.jpg
We do NOT write unit tests to
○ break the build often
Image source: http://number1.co.za/wp-content/uploads/2014/12/oops-jenkins-f2e522ba131e713d5f0f2da19a7fb7da.png
We do write unit tests to
○ verify the proper behavior of our software
○ be confident when we change something
○ have live documentation of the expected functionality
○ guide us while writing new code (TDD)
Image source: http://worldofdtcmarketing.com/wp-content/uploads/2013/12/change-same.jpg
Good unit tests
○ properly test the whole functionality of the target
○ focus only on the target class
○ use only the public interface of the target class
• do not depend on implementation details of the target
Good unit tests
Image source: http://zeroturnaround.com/wp-content/uploads/2013/01/PUZZLE-1.jpg
○ SUT: System Under Test
• public interface
• implementation
○ public collaborators
• constructor or other method param
○ private collaborators
• implementation detail
• hidden dependency (e.g. static method call)
Never depend on the grey parts!
Good unit tests - maintainable
○ written for the future
○ easy to read
• good summary of expected functionality (documentation of the target)
• good abstraction (separating what is tested from how it is tested)
○ easy to modify
Good unit tests - test failure
○ failures are always reproducible
○ easy to find the reason for a failure
• informative error messages
○ fail when we change the behavior of code
○ fail locally: in the test class where functionality is broken
Good unit tests - test execution
○ 100% deterministic
○ fast test run
• quick feedback after a code change
• thousands of tests in a few seconds
○ arbitrary order of execution
○ no side-effects and dependency on other tests
○ environment independent
○ do not use external dependencies
• files, DB, network, other services
Common mistakes
○ no tests
○ slow
→ not executed
○ non-deterministic
→ not executed
→ red build is tolerated
Mistakes - part of the behavior is not tested
○ only the happy execution path is tested
• missing error cases
• exceptions from dependencies are not tested
○ edge cases are not tested
• empty collection, negative number, etc.
○ the returned value of the target method is not verified
○ interactions with collaborators are not properly tested
• e.g. returned values, correctness of parameters
○ high coverage, but no real verification
Mistakes - dependency on internals of target
○ accessing internal state
• increasing visibility of private fields
• creating extra getters
• using reflection
○ increasing visibility of private methods
• because the test is too complex
Mistakes - hard to maintain
○ difficult to understand the tests
• what is tested exactly
• how the test is configured and executed
○ a lot of duplication in test methods
• too many details of irrelevant parts in the main test method
○ difficult to follow changes to the target
• interface change
• implementation change
Mistakes - lack of isolation
○ between test cases
• dependency between tests (shared state, static dependencies)
• one test failure makes other tests fail
○ hard to handle the dependencies of the target class
→ a sign of design issues in the target
• implicit dependencies
• dependency chain
• need to configure static dependencies
How to measure unit test quality?
Measuring the quality of unit tests
○ code coverage measurement tools
○ mutation testing of the unit tests
• Pitest (https://github.com/hcoles/pitest)
○ code review
○ the next developer working on them later
• or you 🙂
Ideas to improve tests
Listen to the tests!
○ why is it difficult to test?
• change the tested code accordingly
○ usually a sign of code smells
• tight coupling
• too much responsibility
• something is too big, too complex (class, method)
• too much state change is possible (instead of immutability)
• etc.
Use test doubles
○ to avoid dependency on collaborators’ real implementation
• different opinions about how much to use
• mockist vs classical testing
• a good summary at http://martinfowler.com/articles/mocksArentStubs.html
○ usage of a mocking library
• speeds up test case writing
• enables behavior-driven testing (vs. state-based testing)
○ but you can write your own stubs as well (see the sketch below)
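A minimal sketch of a hand-written stub; the PriceService name is hypothetical, not from this deck:

interface PriceService {
    int priceOf(String productId);
}

// hand-written stub: a canned answer, no mocking library needed
class StubPriceService implements PriceService {
    private final int cannedPrice;

    StubPriceService(int cannedPrice) {
        this.cannedPrice = cannedPrice;
    }

    @Override
    public int priceOf(String productId) {
        return cannedPrice; // same answer for every product
    }
}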
Mockist mindset
○ using mocks for all collaborators (dependencies)
○ clearly reduces the scope of the test
• a change in a collaborator’s implementation will not influence the test
○ easier setup, full control over dependencies
• dependencies are passed with DI (preferably as constructor parameters)
○ drawbacks
• integration of real objects is not tested at the unit level
• a change in implementation might result in changes to the tests
– but only when collaborators change
○ recommended library
• Mockito: https://github.com/mockito/mockito (see the sketch below)
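A minimal sketch of the mockist style with JUnit 4 and Mockito 1.x (the era of this deck); Repository and Database are hypothetical names:

import static org.hamcrest.Matchers.equalTo;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class RepositoryTest {

    interface Database { String load(String key); }

    // the target: receives its only collaborator via the constructor (DI)
    static class Repository {
        private final Database database;
        Repository(Database database) { this.database = database; }
        String get(String key) { return database.load(key); }
    }

    @Mock private Database databaseMock;

    @Test
    public void returnsValueFromDatabase_when_keyIsRequested() {
        when(databaseMock.load("key")).thenReturn("value");   // set up the test condition

        String result = new Repository(databaseMock).get("key");

        assertThat(result, equalTo("value"));                 // verify the returned value
        verify(databaseMock).load("key");                     // verify the interaction
    }
}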
Write tests first - TDD
○ many benefits
• you think about the functionality of the target first
• drives the design
– what dependencies are needed
– API of dependencies
○ very difficult to write tests afterwards
• strong influence of existing code
• you will try to cover existing execution branches
• quite often the design is not (easily) testable
Organizing tests
○ usually one test class per target class
○ one test method tests exactly one interaction/scenario towards the target
○ test methods might be grouped by functionality/tested method
• by using inner classes
• by same method name prefix
Naming test methods
○ let the name tell what is tested
• a clear summary of what is tested in that scenario
○ do not worry if the name gets long
• better than an explanation in comments
○ using underscores might increase readability
○ the concrete format is less important, just be consistent
○ no ‘test’ prefix is needed
• an outdated JUnit convention
Naming test methods
○ <what happens>_when_<conditions of test scenario>
○ <what happens>_for_<method name>_when_<conditions>
○ <method name>_with_<conditions>_does_<what happens>
Naming test methods - example
@RunWith(Enclosed.class)
public class InMemoryAppTestRunRepositoryTest {

    public static class GetAppTestRunMethod {
        @Test
        public void returnsAnEmptyOptional_when_testRunWasNotStoredWithTheRequestedId() {}
        @Test
        public void returnsProperTestRun_when_testRunWasStoredWithTheRequestedId() {}
        @Test
        public void returnsLastStoredTestRun_when_testRunWasStoredMultipleTimesWithTheRequestedId() {}
    }

    public static class GetAppTestRunsMethod {
        @Test
        public void returnsNoTestRuns_when_noTestRunsWereStored() {}
        @Test
        public void returnsLatestVersionOfAllStoredTestRuns_in_undefinedOrder_when_multipleTestRunsWereStored() {}
    }
}
Structure of a test method
@Test
public void name() {
    <setup test conditions>
    <call tested method(s) with meaningful parameters>
    <verify returned value (if any)>
    <verify expected interactions with collaborators (if needed)>
}
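A concrete instance of this structure, sketched with hypothetical names (ShoppingCart and Item are illustrative; static imports as in the Mockito sketch above, plus java.util.ArrayList/List):

static class Item {
    final String name;
    final int price;
    Item(String name, int price) { this.name = name; this.price = price; }
}

static class ShoppingCart {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
    int totalPrice() {
        int sum = 0;
        for (Item item : items) sum += item.price;
        return sum;
    }
}

@Test
public void returnsSumOfItemPrices_when_multipleItemsWereAdded() {
    ShoppingCart cart = new ShoppingCart();   // setup test conditions
    cart.add(new Item("book", 10));
    cart.add(new Item("pen", 2));

    int total = cart.totalPrice();            // call the tested method

    assertThat(total, equalTo(12));           // verify the returned value
    // no collaborators in this scenario, so no interaction verification needed
}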
Content of test methods
○ keep test methods short and abstract
• important to see what is tested
• how it is tested can be hidden in helper methods
○ extract all unimportant details to helper methods
• prefer more specific helper methods with descriptive names and fewer params
• these can call more generic methods to avoid duplication
○ but all important activities and parameters shall be explicit in the test method
• call to the tested method(s)
• concrete scenario specific part of test setup
• verification of expected behavior
Verification in test methods
○ types of verification (see the sketches below)
• verify returned value of the tested method
• verify expected state change of the target object
• verify interaction with a collaborator
○ don’t have to repeat all verifications in all tests
• different tests can concentrate on different parts
• although if verification is compact, it won’t hurt to repeat
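Sketches of the last two types (the returned-value case appeared in the sketch above); Order and AuditLog are hypothetical names, and auditLogMock is assumed to be a @Mock field:

@Test
public void marksOrderAsShipped_when_shipIsCalled() {     // state change of the target
    Order order = new Order(auditLogMock);

    order.ship();

    assertThat(order.isShipped(), equalTo(true));
}

@Test
public void notifiesAuditLog_when_orderIsShipped() {      // interaction with a collaborator
    Order order = new Order(auditLogMock);

    order.ship();

    verify(auditLogMock).orderShipped(order);
}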
Interactions with collaborators
○ important to distinguish between
• setting up test conditions
– we want a collaborator to return a given value / throw an exception
• verifying expected interactions
– verify calls only when there are no returned values
– order of interactions is usually important
○ parameters of calls
• always expect concrete param values when they are relevant (most of the time)
– expect the same object if you have the reference in the test
– use matchers if you don’t have the reference but know important properties of the parameter (see the sketch below)
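A sketch separating condition setup from verification, with hypothetical Checkout, PaymentGateway, and Mailer types; eq() and startsWith() are Mockito’s built-in matchers (imports as in the Mockito sketch above, plus org.mockito.Matchers):

@RunWith(MockitoJUnitRunner.class)
public class CheckoutTest {

    interface PaymentGateway { boolean charge(String orderId, int amount); }
    interface Mailer { void sendConfirmation(String orderId, String message); }

    static class Checkout {
        private final PaymentGateway gateway;
        private final Mailer mailer;
        Checkout(PaymentGateway gateway, Mailer mailer) { this.gateway = gateway; this.mailer = mailer; }
        void completeOrder(String orderId, int amount) {
            if (gateway.charge(orderId, amount)) {
                mailer.sendConfirmation(orderId, "Order confirmed: " + orderId);
            }
        }
    }

    @Mock private PaymentGateway gatewayMock;
    @Mock private Mailer mailerMock;

    @Test
    public void sendsConfirmation_with_theOrderId_when_paymentSucceeds() {
        when(gatewayMock.charge("o-42", 100)).thenReturn(true);   // setting up a test condition

        new Checkout(gatewayMock, mailerMock).completeOrder("o-42", 100);

        // concrete value where it is relevant, matcher where only a property matters
        verify(mailerMock).sendConfirmation(eq("o-42"), startsWith("Order confirmed"));
    }
}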
Defining collaborators
○ we need an explicit reference to all collaborators
• to be able to set up test conditions
• to be able to verify interactions
• if collaborators are created by the target object, we have no control over them
○ use dependency injection to pass them to the target object
• as constructor parameter (preferred)
• as setter parameter
– dangerous because collaborators might change / not be initialized
○ if collaborators must be created by the target
• pass a factory object instead of instantiating the dependency directly
• we can mock the factory and control the returned collaborator (see the sketch below)
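A sketch of the factory approach; Connection, ConnectionFactory, and Downloader are hypothetical names:

interface Connection {
    byte[] read();
}

interface ConnectionFactory {
    Connection open(String url);
}

class Downloader {
    private final ConnectionFactory connectionFactory;

    Downloader(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;   // injected instead of calling `new` inside
    }

    byte[] fetch(String url) {
        // the target still creates its collaborator, but through the factory,
        // so a test can mock the factory and control the returned Connection:
        //   when(connectionFactoryMock.open(url)).thenReturn(connectionMock);
        return connectionFactory.open(url).read();
    }
}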
Value objects in tests
○ inputs to the given test
• constructor parameters of tested object
• parameters of the tested method
○ play important role in the expected behavior
• influence returned value
• influence interactions with collaborators
○ often worth creating new classes for the entities
• instead of having Strings for many parameters
• enjoy the benefits of compile-time type safety
○ if you have too many parameters
• group related entities together in a new class
Value objects in tests
○ creation of value objects (sketched below)
• use well-named constants for very simple types/values
– simple objects, Strings, dummy numbers
• extract creation of instances to methods when their state is important
– when their state is read
– easier to change when constructor changes
– easier to change when you want to switch between mocked/real instances
• use mock fields when their state is not important
– when only their reference is used during interactions
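The three styles above, sketched as test-class fields and helpers with hypothetical User/Address/UserState types:

// well-named constant: a very simple value, content irrelevant beyond identity
private static final String ANY_USER_NAME = "anyUserName";

// mock field: only the reference is used during interactions
@Mock private Address addressMock;

// creation method: used when the object’s state is actually read by the target
private User createActiveUserNamed(String name) {
    return new User(name, UserState.ACTIVE, addressMock);
}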
Matchers
○ can be applied for
• verification of returned values
• verification of an interaction: expected parameters
• setting up test conditions: for which parameters do you want a given answer
○ useful when
• you don’t have a reference to an expected object
• you want to verify only some aspects of the object
• you want to allow any value for an irrelevant parameter
○ recommended library
• Hamcrest (https://github.com/hamcrest)
• you can compose matchers and build your own (see the sketch below)
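A sketch of a custom, composable matcher; TestRun is a hypothetical type borrowed from the naming example’s domain:

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

// verifies only the aspects the test cares about and
// produces an informative message on failure
static Matcher<TestRun> successfulRunWithId(final String expectedId) {
    return new TypeSafeMatcher<TestRun>() {
        @Override
        protected boolean matchesSafely(TestRun run) {
            return run.isSuccessful() && expectedId.equals(run.getId());
        }

        @Override
        public void describeTo(Description description) {
            description.appendText("a successful test run with id ").appendValue(expectedId);
        }
    };
}

// usage: assertThat(run, successfulRunWithId("run-1"));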
Avoid duplication in test classes
○ minimize the number of places we have to change after a
• change in constructor/method parameters
• method renaming
• splitting a class
○ extract common logic to
• test helper classes
• base test classes
• builder / utility classes in production code
Example test case
The deck showed the same test on six consecutive slides, each one highlighting a different part: test condition setup, the tested method, verification, public collaborators, created collaborators, and created value objects. It appears once here, with the helper methods from the created-collaborators and created-value-objects slides included:

@RunWith(MockitoJUnitRunner.class)
public static class RunToolMethod extends FvtRunnableTest {
    @Test
    public void startsFvtProcess_with_properParamsTowardsTheStagingServer_on_targetFasQaTools_then_returnsSuccessResultAfterProcessFinishes_when_noLiveServersInDeployment() {
        // test condition setup
        FasTestDeployment deployment = createTestDeploymentWithServers(stagingServerMock, testerServerMock);
        FvtRunnable runnable = createFvtRunnable(configMock, deployment);
        fasQaToolClientFactoryReturnsFvtClientFor(testerServerMock, fasQaToolClientMock);
        paramsConverterReturnsParamsForConfigAndServer(configMock, stagingServerMock, processParamsMock);

        // tested method
        TestResult result = runnable.runTool(progressMock);

        // verification: returned value, then interactions in order
        assertThat(result.isSuccessful(), equalTo(true));
        InOrder inOrder = inOrder(fasQaToolClientMock);
        inOrder.verify(fasQaToolClientMock).startTool(processParamsMock);
        inOrder.verify(fasQaToolClientMock).waitUntilToolTerminates();
    }

    // created collaborators: the mocked factory hands back a controlled client
    private void fasQaToolClientFactoryReturnsFvtClientFor(TesterServer testerServer, FasQaToolClient returnedClient) {
        fasQaToolClientFactoryReturnsClientFor(testerServer, PROCESS_FVT, returnedClient);
    }

    // created value objects: the mocked converter returns the prepared params
    private void paramsConverterReturnsParamsForConfigAndServer(FvtConfig config, Server targetServer,
            Properties returnedProcessParams) {
        when(paramsConverterMock.createProcessParams(config, targetServer)).thenReturn(returnedProcessParams);
    }
}

// public collaborators are passed in via the constructor
protected FvtRunnable createFvtRunnable(FvtConfig config, FasTestDeployment testDeployment) {
    return new FvtRunnable(fasQaToolClientFactoryMock, paramsConverterMock, config, testDeployment);
}
Example for helper methods

public static class StartToolMethod extends FasQaToolClientTest {
    @Test
    public void startsProperProcess_on_daClient_with_passedParams() throws TestRunException {
        …
        client.startTool(toolParamsMock);
        verifyProcessWasStartedOnDaClient(INSTANCE_NAME, TOOL_NAME, toolParamsMock);
    }

    private void verifyProcessWasStartedOnDaClient(String expectedInstanceName, String expectedProcess, Properties expectedParams) {
        verifyProcessCommandWasExecutedOnDaClient(expectedInstanceName, expectedProcess, VERB_START, expectedParams);
    }
}

public static class StopToolMethod extends FasQaToolClientTest {
    @Test
    public void stopsProperProcess_on_daClient() throws TestRunException {
        …
        client.stopTool();
        verifyProcessWasStoppedOnDaClient(INSTANCE_NAME, TOOL_NAME);
    }

    private void verifyProcessWasStoppedOnDaClient(String expectedInstanceName, String expectedProcess) {
        verifyProcessCommandWasExecutedOnDaClient(expectedInstanceName, expectedProcess, VERB_STOP, NO_PARAMS);
    }
}

protected void verifyProcessCommandWasExecutedOnDaClient(String expectedInstanceName, String expectedProcess, String expectedVerb,
        Properties expectedParams) {
    verify(daClientMock).invokeProcessCommand(expectedInstanceName, expectedProcess, expectedVerb, expectedParams);
}
Control the uncontrollable
○ you won’t have full control if your tested class
• uses Random
• uses time-related functionality (e.g. sleep(), now())
• is used by multiple threads
○ solution: do not use them directly
• Random: pass as parameter, maybe hide behind an interface
• time: move all time-related functionality behind an interface (see the sketch below)
– production code: use System time, real sleep
– test code: own time, non-blocking sleep
• multithreading
– use Runnable, Callable and Executor framework
– test separately the main activity in single thread
– custom test implementation of collaborators to control concurrent environment
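A sketch of the time interface idea; Clock, SystemClock, and FakeClock are illustrative names (Java 8’s java.time.Clock follows the same pattern):

interface Clock {
    long nowMillis();
    void sleep(long millis) throws InterruptedException;
}

// production implementation: real system time, real (blocking) sleep
class SystemClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
    public void sleep(long millis) throws InterruptedException { Thread.sleep(millis); }
}

// test implementation: fully controlled and non-blocking
class FakeClock implements Clock {
    private long now;
    public long nowMillis() { return now; }
    public void sleep(long millis) { now += millis; }   // just advances the fake time
}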
Recommended reading
books
Growing Object-Oriented Software Guided by Tests
(Steve Freeman, Nat Pryce)
The Art of Unit Testing: with Examples in .NET
(Roy Osherove)
xUnit Test Patterns: Refactoring Test Code
(Gerard Meszaros)
Test Driven Development: By Example
(Kent Beck)
Recommended reading
blogs
Steve Freeman's blog
http://www.higherorderlogic.com
Kevin Rutherford's blog
http://silkandspinach.net
Google Testing on the Toilet
http://googletesting.blogspot.nl/search/label/TotT
Guide: Writing Testable Code (from Google)
http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
… and plenty others