The document discusses agile testing practices like continuous integration, test automation, and behavior driven development. It emphasizes delivering working, tested software frequently through smaller batches and providing early feedback. Automating tests is part of the definition of done to enable continuous integration. Quality assurance experts focus on riskier areas like performance and security testing rather than manual testing. Behavior driven development uses examples and acceptance tests to drive development and automation. Technical debt can be addressed incrementally by automating tests for new features and riskier areas first.
2. So We’ve Gone Kanban w/ some Feature Teams…
So we’re talking about…
• Whole Team Approach – WIP limits keep us focused together on delivering Working, Tested, Clean Software
• Delivering and testing smaller stories much more frequently
All Rights Reserved- AgileSparks
7. Early Feedback – The Goal and the conflict…
[Chart: “Improve – Reduce Testing Overhead”. Compares the ideal batch size under Traditional Processes, with an Ideal Batch W/O Automation, and with an Ideal Batch WITH Great Automation. Smaller batches give earlier feedback (cheaper to change) and lower testing overhead.]
• Even without reducing testing overhead, it is usually more cost-effective to reduce batch size
• Aim to reduce testing overhead so you can reduce batch size even more and be even more cost-effective
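The two bullets above can be made concrete with a toy cost model. This is an illustrative sketch, not from the deck: all numbers are hypothetical, and the model simply assumes each batch pays a fixed testing/integration overhead while delay cost grows with the square of batch size (feedback arrives later for everything in a big batch).

```python
# Illustrative batch-size cost model (all numbers hypothetical).
# Each release batch pays a fixed testing overhead; delay cost grows
# with the square of batch size because feedback arrives later.

def release_cost(features, batch_size, overhead_per_batch, delay_cost_per_feature):
    """Total cost of shipping `features` in batches of `batch_size`."""
    batches = -(-features // batch_size)  # ceiling division
    overhead = batches * overhead_per_batch
    # each feature in a batch waits, on average, half the batch to ship
    delay = batches * delay_cost_per_feature * (batch_size ** 2) / 2
    return overhead + delay

# Even with unchanged overhead, smaller batches can win overall:
big   = release_cost(100, batch_size=50, overhead_per_batch=40, delay_cost_per_feature=1)
small = release_cost(100, batch_size=10, overhead_per_batch=40, delay_cost_per_feature=1)

# Reducing testing overhead (e.g. via automation) makes even smaller batches pay off:
tiny  = release_cost(100, batch_size=2, overhead_per_batch=4, delay_cost_per_feature=1)
```

Under these assumed numbers, `small < big` (smaller batches already win at the same overhead), and `tiny < small` only because the per-batch overhead was cut, which is exactly the deck's second bullet.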
9. Continuous Integration (CI) – the backbone of Agile testing
– The practice of integrating early and often
• Avoid the pitfalls of "integration hell"
– "Stop and fix"
– Always have a working system based on the latest code
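The "stop and fix" idea can be sketched as a tiny CI gate script. This helper is hypothetical (not from the deck): it runs each integration step in order and goes red on the first failure, so the mainline stays a working system based on the latest code.

```shell
# Sketch of a "stop and fix" CI gate (hypothetical helper, not from the deck):
# run every integration step in order and go red on the first failure.
ci_gate() {
  for step in "$@"; do
    if ! sh -c "$step" >/dev/null 2>&1; then
      echo "RED: '$step' failed - stop and fix"
      return 1
    fi
  done
  echo "GREEN: safe to integrate"
}
```

A real setup would call it from the CI server on every check-in, e.g. `ci_gate "make build" "make test"`.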
2010 Copyright AgileSparks. All rights reserved.
10. Continuous integration in the XP framework
11. But how do we get enough coverage for the Continuous Integration system?
12. We automate tests as part of the Definition of Done
Well, duh…
13. But what might happen then?
14. Pop Quiz – Blocked/Impeded Card
• What does this mean?
Full story at http://yuvalyeret.com/2010/08/03/finding-the-right-dev-to-test-ratio-when-working-in-kanban/
15. Pop Quiz
[Kanban board: the Test column is full (at its WIP limit) with a lot of WIP, Dev Done is almost full, and the downstream Test Done column is empty – a "bubble".]
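The pull rule behind that board state can be sketched in a few lines. Column names and limits below are illustrative, not from the deck: a column may start new work only while it is under its WIP limit, and it pulls only items the upstream column has finished.

```python
# Minimal pull-system sketch (column names and limits are illustrative).
# A full Test column blocks pulling from upstream, and nothing finishes
# downstream -> the empty "bubble" seen on the pop-quiz board.

columns = {  # name: (current WIP, WIP limit)
    "Dev":      (3, 3),
    "Dev Done": (2, 3),
    "Test":     (3, 3),    # full: the board's bottleneck
    "Done":     (0, None), # no limit on Done
}

def can_pull(into, upstream_ready):
    """May column `into` pull, given `upstream_ready` finished items upstream?"""
    wip, limit = columns[into]
    return upstream_ready > 0 and (limit is None or wip < limit)
```

With these numbers, `can_pull("Test", 2)` is false (Test is at its limit), and `can_pull("Done", 0)` is false because nothing from Test is done and waiting, which is exactly why the downstream bubble forms.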
16. What LIMITED WIP Does
WIP Limit! Can't start new DEV work!
Dev: "How can I help current stories?"
Testers:
• Fix open defects on our stories
• Help us with a blocker
• Help us automate tests for this current story
17. What LIMITED WIP Does
Dev: "How can I help you be more efficient?"
Testers:
• Automate setups and test data
• Improve Dev Done quality! – less retesting for us
• Half of our work is not core test work. Maybe you can take some of it, or help us reduce waste there
• Help us do ATDD, so you can develop based on our test expectations and also offload some automation effort from us
• Come pair with us – you'll probably see things from our perspective and have some ideas how to help!
19. Automate at the right level
[Test automation pyramid – cost rises and ROI falls as you move up the layers:]
• Manual / UI – 5%
• Acceptance (Service/API) – 15%
• Unit Testing – 80%
http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
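The pyramid's 80/15/5 mix can be checked mechanically against a real suite. This is a sketch with made-up counts and layer names, assuming only that you can count tests per layer:

```python
# Sketch: compare a suite's actual mix against the pyramid's target
# (80% unit / 15% service / 5% UI). Counts are illustrative.

TARGET = {"unit": 0.80, "service": 0.15, "ui": 0.05}

def pyramid_shape(counts):
    """Return each layer's share of the total test count."""
    total = sum(counts.values())
    return {layer: counts[layer] / total for layer in counts}

shape = pyramid_shape({"unit": 800, "service": 150, "ui": 50})
top_heavy = shape["ui"] > TARGET["ui"]  # the "ice-cream cone" warning sign
```

A suite where `top_heavy` comes out true is carrying its coverage in the slowest, most expensive layer, the "forgotten layer" problem the linked post describes.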
20. If we only had limited QA capacity, what would we focus on? What would we enable Devs to do?
[Agile Testing Quadrants diagram – Business Facing (top) vs. Technology Facing (bottom); Supports Programming (left) vs. Critiques Product (right):]
• Q2 (business facing, supports programming): Functional tests; story tests/examples/ATDD; simulations; prototypes – automated & manual
• Q3 (business facing, critiques product): Exploratory tests; usability tests; user acceptance tests – manual
• Q1 (technology facing, supports programming): Unit tests (TDD); component/integration tests; code quality tools; continuous integration – automated
• Q4 (technology facing, critiques product): Non-functional system tests; performance tests; load tests; security and other "-bility" testing – automated tools
21. The UNIQUE role of the QA engineers
• Being Champions of the Product and the Customer/User.
• Specializing in Performance/Security/Load/etc.
• Shining a light on where to focus quality efforts by analyzing risk probability and impact.
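The third bullet is a simple ranking exercise. Here is a sketch with hypothetical areas and scores, assuming the common risk model of probability times impact:

```python
# Sketch of "shining a light on where to focus": rank areas by
# risk = probability x impact. Areas and scores are hypothetical.

areas = {                 # area: (probability 1-5, impact 1-5)
    "checkout flow":  (4, 5),
    "report styling": (3, 1),
    "security/login": (3, 5),
    "performance":    (4, 4),
}

def ranked_by_risk(areas):
    """Highest-risk areas first: these get the QA engineers' attention."""
    return sorted(areas, key=lambda a: areas[a][0] * areas[a][1], reverse=True)
```

The highest-scoring areas are where manual exploratory depth and specialist testing pay off; the lowest can rely on the automated suite.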
22. Quality OVER Quantity – QA expertise SUPPORTING delivery
[Diagram: several Delivery Teams of Software Engineers, with a small group of Automation / Expert Test Engineers placed in risky "Hot Spots" across the teams.]
24. Meet the family
• Specification By Example
• Behavior Driven Development – http://dannorth.net/introducing-bdd/
Very important – "Step away from the tools":
http://lizkeogh.com/2011/03/04/step-away-from-the-tools/
25. Let's look at a concrete workflow using SpecFlow (pragmatic BDD for .NET)
http://www.specflow.org/specflow/workflow.aspx
26. Step 1 – Write a Feature (using Gherkin language)
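A feature at this step might look like the following. The feature itself is a hypothetical example (the deck does not show one); it uses the plain Given/When/Then keywords of the Gherkin language the slide names.

```gherkin
# Hypothetical feature file (illustrative example, not from the deck)
Feature: Calculator addition
  In order to avoid mistakes
  As a user
  I want to be told the sum of two numbers

  Scenario: Add two numbers
    Given I have entered 50 into the calculator
    And I have entered 70 into the calculator
    When I press add
    Then the result should be 120 on the screen
```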
27. Step 2 – Watch it Fail
28. Step 3 – Implement Step definitions
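SpecFlow generates its bindings in C# using `[Given]`/`[When]`/`[Then]` attributes; since C# is outside this document's examples, here is an analogous Python sketch of the same shape, continuing the hypothetical calculator feature. The `Calculator` domain object is really built in steps 4–6; a working stand-in is inlined here only so the sketch is self-contained.

```python
# Step-definition sketch (Python analogue of SpecFlow's C# bindings;
# the Calculator stand-in anticipates the domain built in steps 4-6).

class Calculator:                         # stand-in domain object
    def __init__(self):
        self.entries, self.result = [], None
    def enter(self, n):
        self.entries.append(n)
    def add(self):
        self.result = sum(self.entries)

class Steps:
    """One method per Gherkin step of the scenario."""
    def __init__(self):
        self.calc = Calculator()
    def given_i_have_entered(self, number):   # Given I have entered <n>
        self.calc.enter(number)
    def when_i_press_add(self):               # When I press add
        self.calc.add()
    def then_result_should_be(self, expected):  # Then the result should be <n>
        assert self.calc.result == expected
```

Running the scenario then means calling the step methods in the order the feature file dictates.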
29. Step 4 – Create Domain Skeleton
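The domain skeleton at this step is just enough structure for the step definitions to call, with the behavior deliberately unimplemented so the next run fails. A sketch, continuing the hypothetical calculator:

```python
# Domain-skeleton sketch (hypothetical Calculator example): compiles and
# wires up, but the behavior is left out so the scenario fails next run.

class Calculator:
    def __init__(self):
        self.entries = []
        self.result = None

    def enter(self, number):
        self.entries.append(number)

    def add(self):
        raise NotImplementedError("domain behavior comes in Step 6")
```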
30. Step 5 – Watch it fail
31. Step 6 – Implement Domain Functionality
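Step 6 fills in the skeleton's behavior so the failing scenario can finally go green. Same hypothetical calculator as the earlier sketches:

```python
# Step 6 sketch: implement the domain behavior the skeleton left out,
# so the previously failing scenario passes (hypothetical example).

class Calculator:
    def __init__(self):
        self.entries = []
        self.result = None

    def enter(self, number):
        self.entries.append(number)

    def add(self):
        self.result = sum(self.entries)  # the behavior Step 4 deferred
```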
32. Optional Step 6a – Unit-level TDD
[Flowchart: Add a test → Write some code → Run tests → Tests passed? No: write more code and run again. Yes: Refactor → Development finished? No: add the next test.]
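The loop above in code, on a made-up function (the example is hypothetical, not from the deck): add a failing test, write just enough code to pass, refactor, repeat.

```python
# Unit-level TDD micro-cycle sketch (hypothetical example):
# each test below was added first, watched fail, then made to pass.

def test_total_of_empty_cart_is_zero():     # loop 1: add a test
    assert cart_total([]) == 0

def test_total_sums_line_prices():          # loop 2: add the next test
    assert cart_total([(2, 10.0), (1, 5.0)]) == 25.0

def cart_total(lines):                      # simplest code that passes both
    """Sum qty * price over (qty, price) line items."""
    return sum(qty * price for qty, price in lines)

# "Run tests": once green, refactor and start the next loop.
test_total_of_empty_cart_is_zero()
test_total_sums_line_prices()
```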
33. Step 7 – Iterate steps 5+6 until scenario passes
34. Step 8 – Iterate steps 2-7 until Feature Passes
35. But how do we get started? We have so much legacy!
36. Time to talk about Technical Debt…
38. Check out a recent great presentation…
http://www.slideshare.net/ehendrickson/lfmf-tales-of-test-automation-fa
39. How to Start
[Diagram: four snapshots (1–4) of the same Legacy System over time. Stage 1: Sanity and risky areas get covered while New Features ship with New Test Coverage. Stages 2–3: each round of New Features adds more New Test Coverage and Refactored Code. Stage 4: most of the system is covered.]
Legend:
• Frightening code
• Low quality code covered by automated tests
• High quality code covered by automated tests
Approach:
• Apply automation to incrementally repair touch points as new features are added.
• Manual regression testing of affected areas only, after risk analysis.
• Automate Sanity and risky areas with an independent team.
40. References
http://bit.ly/testisdeadGTAC11
http://gojko.net/2012/05/08/redefining-software-quality/
Editor's Notes
Current state – we wait a long time until we can verify.
With big features everything is harder – time to define, to stabilize, to control variance, to test, to verify, to reproduce…
Symptoms: Our features/user stories are too big to fit into one iteration – we need LONGER iterations. We need a long time to nail down the design for this. Our PSP for this iteration is a high-level design…
Solution? Effective user story analysis to create Minimum Marketable Features (MMF).
Design: either do all design up front, or have a growing evolutionary design.
Everyone works on the highest priority – EVEN if outside their comfort zone. Need to improve collective code ownership; developers need to feel safe to work everywhere in the team's codebase.
Increased time until we can test, and increased complexity to install. (Otherwise the same symptoms and solutions as the previous note.)
The problem with small stories: continuous testing and per-story re-test setup cost, in a compressed cycle compared to waterfall.
Symptom: everything slows down ("we don't have time to develop anything, we test 50% of the time"). Done is not really Done. PSP == Potentially S#!*ty Product.
Need to minimize the cost of work repeated every iteration – enable effective small batches.
Empty after Testing, Development is in done, Testing is busy – a bottleneck in Testing. This is a classic bottleneck in an R&D team. Testing is at its work-in-progress limit, meaning it cannot take on more work. Acceptance has no work in progress – what we call a "bubble". Development is at its limit as well. Nothing from Testing is DONE and waiting to be pulled, which explains why Acceptance has a bubble.
Automation – not just test automation! How can we help you spend more time actually testing (compared to setup and other wastes)? (http://theoryofconstraints.blogspot.com/2007/06/toc-stories-2-blue-light-creating.html) How often do we need to retest? Why? ATDD drives better code into testing, as well as offloading some testing work. Agree on "READY for Testing" criteria for stories; set up the relevant team rules and processes.
Q1: Test-Driven Development. Measures the "internal quality" of the system. Small unit tests drive the design of small parts of the code; larger integration tests drive system design. Automated, in the same language, done by programmers; run often and at every check-in.
Q2: Story-Driven Development. Measures the "external quality" of the system. Story tests drive higher-level system design. Tests are more functional, with more examples and combinations; they cover boundary conditions, error handling, and presentation.
Q3: Some tests in Q2 may be incorrect or incomplete – the team may misunderstand, or the agile customer may omit or forget something. Emulate how a real user would use the system. Manual testing – requires a human. Exploratory/session-based testing.
Q4: Performance, robustness, security, etc. Sometimes not given enough focus on agile teams; often requires specialized tools and expertise.
* Tests on the right-hand side feed into stories/tasks for the left-hand side.