TDD talk

Presentation made to CoastNerds November 2010 by Scott Wallace and Robert Dyball. Entitled "Test Driven Development - A Testing Journey", it describes the path we've started in agile development through test last, test first, TDD, ATDD and on to BDD. (The principles in the presentation apply to any language.)

Slide notes
  • Ascertain who is here, where are we all?
    Who is doing TDD?
    Who has tried TDD?
    Who has never tried TDD?
  • RD
    Intro: (insert hug picture). Who are we?

    “hey I’m on your team”, says Scott

    I was thinking – we use the ‘Testing Journey’ subtitle of the preso to tell our – and many others’ – story, from:
    Unit testing
    Test Last
    Test First
    Test Driven
    BDD
    ATDD
    Highlighting that we are somewhere between test last and TDD. Looking at doing ATDD in our migration – i.e. doing the mapping testing first – whadaya think?

    Keep the preso high level, not technical, as the big issue is “selling” TDD to management and development, and devs not giving up.
  • SW Brian Marick’s Agile Testing Matrix

    Some context – brief discussion of each quadrant, etc.

    Introduce unit testing for the next slide...
  • RD:
    Basically it’s a piece of code we write to test another, single unit of code.

    One thing to note is automated unit tests; automation saves the drudgery of repeated manual runs.
  • SW: add code example (see the sketch after these notes)

    Arrange all necessary preconditions and inputs.
    Act on the object or method under test.
    Assert that the expected results have occurred.

    Leads to tests that are easier to:
    read,
    follow,
    understand,
    maintain.
    http://www.arrangeactassert.com/why-and-what-is-arrange-act-assert/


    http://c2.com/cgi/wiki?ArrangeActAssert
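
    A minimal Arrange-Act-Assert sketch in JUnit 4 (the Account class and its deposit/getBalance members are hypothetical, only there to illustrate the three steps):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountTests {

        @Test
        public void deposit_emptyAccount_balanceEqualsDepositedAmount() {
            // Arrange: set up the object under test and its inputs
            Account account = new Account();

            // Act: exercise the behaviour we care about
            account.deposit(100);

            // Assert: verify the expected result
            assertEquals(100, account.getBalance());
        }
    }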
  • SW
  • SW
  • RD

    You can easily see the intent of the test, and if there’s a failure, what failed
  • SW
    Similar to the example before – I check the state – the balance – of the Account after acting on it.

    Use real objects if possible and a double if it's awkward to use the real thing.
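
    A sketch of a state-based test, again with a hypothetical Account class; the assertions only inspect the resulting state (the balances), not how the objects talked to each other:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountTransferTests {

        @Test
        public void transferTo_sufficientFunds_movesAmountBetweenAccounts() {
            // Arrange: real objects are cheap here, so no doubles are needed
            Account source = new Account(200);
            Account target = new Account(0);

            // Act
            source.transferTo(target, 50);

            // Assert: state verification - check the balances after the act
            assertEquals(150, source.getBalance());
            assertEquals(50, target.getBalance());
        }
    }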
  • SW
    So it actually tests HOW an object interacts with its collaborators

    Good for testing things like FileSystem, Data Access, Network Access etc...

    More tied to the internal implementation of the System Under Test.

    There is an example on the next page we can talk about...
  • SW
  • SW
    Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
    Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
    Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
    Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.


    From - http://martinfowler.com/articles/mocksArentStubs.html
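
    A sketch of an interaction test using a Mockito mock (Mockito is on the Java tools list later; the AuditLog collaborator and this Account constructor are hypothetical):

    import org.junit.Test;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    public class AccountAuditTests {

        @Test
        public void withdraw_largeAmount_notifiesTheAuditLog() {
            // Arrange: the real audit log is awkward to use, so stand in a mock
            AuditLog auditLog = mock(AuditLog.class);
            Account account = new Account(10000, auditLog);

            // Act
            account.withdraw(9000);

            // Assert: interaction verification - did the account call its collaborator?
            verify(auditLog).largeWithdrawal(9000);
        }
    }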
  • SW

    Also frameworks are available to help with Test Doubles –

    e.g.
    .Net – RhinoMocks, NMock
    Java – jMock, EasyMock, Mockito
    PHP, Ruby, dynamic languages?????
  • RD What is this? - Code first, test last.

    Why not do test last? What happens? Keep listening…

    For further reading, there are plenty of examples, e.g.:
    http://stephenwalther.com/blog/archive/2009/04/08/test-after-development-is-not-test-driven-development.aspx
  • RD

    Not good enough …

    Doesn’t promote a testable design: “Why would I bother to go back and change the code to be easily testable?”
    Is undisciplined: “I don’t feel like writing tests for that stuff.”
    Falls victim to ego: “My code is great, why should I bother writing tests?”
    Appears unnecessary: “I already proved the code works!”
    Clashes with deadlines: “Never mind the tests, we just have to ship!”
    Limits coverage: there is about a 70% practical coverage ceiling for test-after.
    Doesn’t afford adequate refactoring: 30% and more of your app will inhibit refactoring and degrade in quality.
    Results in difficult, slower, integration-like tests: integration tests are harder to maintain, comprehend and isolate failure in.
    Isn’t very enjoyable: few people enjoy writing unit tests; in contrast, many claim they love TDD.
    Questionable return on value: TAD is expensive!
    Does nothing to advance the craft: TAD is haphazard and undisciplined.


    ref: http://agileinaflash.blogspot.com/2009/02/why-pout-aka-tad-sucks.html
    and: http://langrsoft.com/blog/2008/07/realities-of-test-after-development-aka.html

  • SW

    It wasn’t long before people doing some form of TAD got tired of writing their code, writing their tests, and then finding their time consumed in the debugger – and losing time fiddling with code and tests alternately.

    Test first means just that: 1. write the test, 2. write the code, rinse and repeat …

    It usually isn’t too long before people doing some form of TAD become more interested in TFD as they get less interested in debugging.
    While TFD is vastly better than TAD, TFD can result in a collection of very well tested but disjointed chunks of code, once various parts are assembled into the whole.

    See: http://xprogramming.com/articles/testfirstguidelines/

  • SW
  • RD
  • RD

    Cited here: http://blog.gdinwiddie.com/2010/02/14/testability-good-design/
    From here: http://tech.groups.yahoo.com/group/testdrivendevelopment/message/32320?l=1
  • RD

    Writing a test that embodies the requirements means you have had to grasp the requirements and then express them in code.
    When requirements change, so can your tests – and then you see whether the code keeps up.

    Image: http://verydemotivational.files.wordpress.com/2010/07/demotivational-posters-are-we-there-yet.jpg
  • sw
  • SW
  • RD
    Fast: Mind-numbingly fast, as in hundreds or thousands per second.
    Isolated: The test isolates a fault clearly.
    Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
    Self-verifying: The test is unambiguously pass-fail.
    Timely: Produced in lockstep with tiny code changes
  • SW
  • SW
    Summary shrink to notes

    Martin Fowler continues “You continue cycling through these three steps, one test at a time, building up the functionality of the system. Writing the test first, what XPE2 calls Test First Programming, provides two main benefits. Most obviously it's a way to get SelfTestingCode, since you can only write some functional code in response to making a test pass. The second benefit is that thinking about the test first forces you to think about the interface to the code first. This focus on interface and how you use a class helps you separate interface from implementation.
    The most common way that I hear to screw up TDD is neglecting the third step. Refactoring the code to keep it clean is a key part of the process, otherwise you just end up with a messy aggregation of code fragments. (At least these will have tests, so it's a less painful result than most failures of design.)” ref: http://www.martinfowler.com/bliki/TestDrivenDevelopment.html
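
    A sketch of one red-green-refactor cycle (the Greeter class is hypothetical and exists only to show the three steps):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class GreeterTests {

        // Step 1 (red): write this test first; it fails because Greeter doesn't exist yet.
        @Test
        public void greet_givenAName_returnsHelloName() {
            assertEquals("Hello, Ada", new Greeter().greet("Ada"));
        }
    }

    // Step 2 (green): write just enough production code to make the test pass.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Step 3 (refactor): with the bar green, remove duplication and improve names in
    // both test and production code, re-running the test after every small change.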
  • SW

    Introduce refactoring.

    Refactoring is commonly forgotten? Reference? Own Slide?
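
    A small before/after sketch of one refactoring (extract method) on hypothetical invoice-printing code; the external behaviour, and therefore the passing tests, stay the same:

    import java.util.List;

    // Before: the total calculation is buried inside a longer method.
    class InvoicePrinter {
        void print(List<Integer> lineAmounts) {
            int total = 0;
            for (int amount : lineAmounts) {
                total += amount;
            }
            System.out.println("Total: " + total);
        }
    }

    // After: same behaviour, with the calculation extracted and given a name.
    class RefactoredInvoicePrinter {
        void print(List<Integer> lineAmounts) {
            System.out.println("Total: " + total(lineAmounts));
        }

        private int total(List<Integer> lineAmounts) {
            int total = 0;
            for (int amount : lineAmounts) {
                total += amount;
            }
            return total;
        }
    }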

  • Rd

    Perhaps show example (see the sketch below).
    Developers tend to be slack with documentation; TDD will give you documented examples of your code, and it can become your documentation.
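
    A sketch of how the [MethodName]_[StateUnderTest]_[ExpectedBehavior] naming standard from the earlier slide makes tests read as documentation (the Account behaviour shown is hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Reading the test names alone documents how Account is expected to behave.
    public class AccountBehaviourTests {

        @Test
        public void deposit_positiveAmount_increasesBalanceByThatAmount() {
            Account account = new Account(0);
            account.deposit(25);
            assertEquals(25, account.getBalance());
        }

        @Test(expected = IllegalArgumentException.class)
        public void deposit_negativeAmount_throwsIllegalArgumentException() {
            new Account(0).deposit(-5);
        }
    }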

  • Rd

    Stats from “Nagappan, Maximilien, Bhat and Williams (Microsoft Research, IBM Research, North Carolina State University). Empirical Software Engineering journal 2008, http://research.microsoft.com/en-us/projects/esm/nagappan_tdd.pdf” as cited by http://www.allankelly.net/static/presentations/ABC2010_HowMuchQuality.pdf


    Arguments against TDD are often “too much code, too many tests”, etc., yet (see the figures) it’s less than you might expect.

    Microsoft ran an internal comparison of TDD and non-TDD teams and found that changing from non-TDD to TDD resulted in 15% to 35% longer development time, but
    one team had 2.6x fewer defects (per KLOC) at the cost of a 35% increase in development time, and another team had 4.2x fewer defects for a 15% increase in development time.
    I.e., while the extra development time pushed out the ship date, the lower defect count far outweighed the development time that TDD required.
    see: "Evaluating the Efficacy of Test-Driven Development - Industrial Case Studies", by Thirumalesh Bhat and Nachiappan Nagappan.
    ref: http://blogs.msdn.com/b/cse/archive/2006/11/14/evaluating-the-efficacy-of-test-driven-development-industrial-case-studies.aspx


    See “On the Sustained Use of a Test-Driven Development Practice at IBM”, ref: http://www.agile2007.com/downloads/proceedings/006_On%20the%20Sustained%20Use_860.pdf
    The study followed TDD over a five-year period (though not always test first) and found that sustained use of TDD reduced the rate of increase in cyclomatic complexity that will inevitably occur. They achieved a 40% error reduction in the first release, and said “...the results of a longitudinal case study of an IBM team that has been practicing TDD for ten releases over a five-year period. Our results indicate that the use of TDD can aid a team in developing a higher quality product. The quality improvement was not only evident in our metrics but also to the developers and to the product testers. ...”

    In the book “Test-Driven Development: An Empirical Evaluation of Agile Practice” by Lech Madeyski, p. 217:
    “The main result is that Test First programmers produce a code that is significantly less coupled.” It also found that a lower CBO (Coupling Between Object classes) measure “suggests better modularization (i.e., a more modular design), easier reuse as well as testing, and hence better architecture …”


    ITEA Agile did a study of TDD code quality and gauged adoption time and difficulty.
    75% of developers found it took 2 weeks to “get up to speed”, but then most developers said they would prefer to continue TDD in the future.
    Reference: http://www.agile-itea.org/public/deliverables/ITEA-AGILE-D5.2.10_v1.0.pdf



  • RD

    See "Realizing quality improvement through test driven development: results and experiences of four industrial teams“, Nachiappan Nagappan & E. Michael Maximilien & Thirumalesh Bhat & Laurie Williams,
    Ref: http://research.microsoft.com/en-us/projects/esm/nagappan_tdd.pdf
    p299 states:

    “Start TDD from the beginning of projects. Do not stop in the middle and claim it doesn’t work. Do not start TDD late in the project cycle when the design has already been decided and majority of the code has been written. TDD is best done incrementally and continuously.
    – For a team new to TDD, introduce automated build test integration towards the second third of the development phase—not too early but not too late. If this is a “Greenfield” project, adding the automated build test towards the second third of the development schedule allows the team to adjust to and become familiar with TDD. Prior to the automated build test integration, each developer should run all the test cases on their own machine.
    – Convince the development team to add new tests every time a problem is found, no matter when the problem is found. By doing so, the unit test suites improve during the development and test phases.
    – Get the test team involved and knowledgeable about the TDD approach. The test team should not accept new development release if the unit tests are failing.
    – Hold a thorough review of an initial unit test plan, setting an ambitious goal of having the highest possible (agreed upon) code coverage targets.
    – Constantly running the unit tests cases in a daily automatic build (or continuous integration); tests run should become the heartbeat of the system as well as a means to track progress of the development. This also gives a level of confidence to the team when new features are added.
    – Encourage fast unit test execution and efficient unit test design. Test execution speed is very important since when all the tests are integrated, the complete execution can become quite long for a reasonably-sized project and when using constant test executions. Tests results are important early and often; they provide feedback on the current state of the system. Further, the faster the execution of the tests the more likely developers themselves will run the tests without waiting for the automated build tests results. Such constant execution of tests by developers may also result in faster unit tests additions and fixes.
    – Share unit tests. Developers’ sharing their unit tests, as an essential practice of TDD, helps identify integration issues early on.
    – Track the project using measurements. Count the number of test cases, code coverage, bugs found and fixed, source code count, test code count, and trend across time, to identify problems and to determine if TDD is working for you.
    – Check morale of the team at the beginning and end of the project. Conduct periodical and informal surveys to gauge developers’ opinions on the TDD process and on their willingness to apply it in the future.”
  • SW
    Source: http://agileinaflash.blogspot.com/2009/03/tdd-process-smells.html

    Using code coverage as a goal. If you practice test-driven development, you should be getting close to 100% coverage on new code without even looking at a coverage tool. Existing code, that's another story. How do we shape up a system with low coverage? Insisting solely on a coverage number can lead to a worse situation: Coverage comes up quickly by virtue of lots of poorly-factored tests; changes to the system break lots of tests simultaneously; some tests remain broken, destroying most of the real value in having an automated test suite.
    No green bar in the last ~10 minutes. One of the more common mis-interpretations of TDD is around test size. The goal is to take the shortest step that will generate actionable feedback. Average cycle times of ten minutes or more suggest that you're not learning what it takes to incrementally grow a solution. If you do hit ten minutes, learn to stop, revert to the last green bar, and start over, taking smaller steps.
    Not failing first. Observing negative feedback affirms that any assumptions you've made are correct. One of the best ways to waste time is skip getting red bars with each TDD cycle. I've encountered numerous cases where developers ran tests under a continual green bar, yet meanwhile their code was absolutely broken. Sometimes it's as dumb as running tests against the wrong thing in Eclipse.
    Not spending comparable amounts of time on refactoring step. If you spend five minutes on writing production code, you should spend several minutes refactoring. Even if your changes are "perfect," take the opportunity to look at the periphery and clean up a couple other things.
    Skipping something too easy (or too hard) to test. "That's just a simple getter, never mind." Or, "that's an extremely difficult algorithm, I have no idea how to test it, I'll just give up." Simple things often mask problems; maybe that's not just a "simple getter" but a flawed attempt at lazy initialization. And difficult code is often where most of the problems really are; what value is there in only testing the things that are easy to test? Changes are most costly in complex areas; we look for tests to clamp down on the system and help keep its maintenance costs reasonable.
    Organizing tests around methods, not behavior. This is a rampant problem with developers first practicing TDD. They'll write a single testForSomeMethod, provide a bit of context, and assert something. Later they'll add to that same test code that represents calling someMethod with different data. Of course a comment will explain the new circumstance. This introduces risk of unintentional dependencies between the cases; it also makes things harder to understand and maintain.
    Not writing the tests first! By definition, that's not TDD, yet novice practitioners easily revert to the old habit of writing production code without a failing test. So what if they do? Take a look at Why TAD Sucks for some reasons why you want to write tests first.
  • RD
    Image credit: http://thebreakthrough.org/blog/2009/03/want_to_save_the_world_make_cl.shtml
  • SW
    Talk about Quadrant 2 – the automated Story Tests, Acceptance tests
  • SW
    EG:
    It is the difference between “when I add a new post to my blog, the new post shows up on my homepage” and “calling the create new post method on the blog controller saves a new post and the new post is passed to the homepage view when the home controller’s index action is called”

    RD: get Dan North’s quote/defn

    Dan North describes BDD as “writing software that matters” [in The RSpec Book] and outlines 3 principles:
    1. Enough is enough: do as much planning, analysis, and design as you need, but no more.
    2. Deliver stakeholder value: everything you do should deliver value or increase your ability to do so.
    3. It’s a behavior: everyone involved should have the same way of talking about the system and what it does.
    BDD in its grandest sense is about communication and viewing your software as a system with behaviour. BDD tools such as RSpec and Cucumber strive to enable you to describe the behavior of your software in a very understandable way: understandable to everyone involved.

    quoted from : http://www.engineyard.com/blog/2009/cucumber-introduction/
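
    A sketch of the blog example as a Gherkin scenario with Java step definitions (assuming cucumber-jvm; the Blog and HomePage classes are hypothetical):

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertTrue;

    // blog.feature (the stakeholder-readable part):
    //   Scenario: New posts appear on the homepage
    //     Given I have a blog
    //     When I add a new post titled "Hello"
    //     Then the post "Hello" shows up on my homepage

    public class BlogSteps {

        private Blog blog;         // hypothetical domain object
        private HomePage homePage; // hypothetical view/page object

        @Given("I have a blog")
        public void iHaveABlog() {
            blog = new Blog();
        }

        @When("I add a new post titled {string}")
        public void iAddANewPostTitled(String title) {
            blog.addPost(title);
        }

        @Then("the post {string} shows up on my homepage")
        public void thePostShowsUpOnMyHomepage(String title) {
            homePage = new HomePage(blog);
            assertTrue(homePage.showsPost(title));
        }
    }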
  • SW

  • Rd
    It’s about coming up with the scenarios, examples of how the feature will work – end to end

  • RD
    Remember this from before?

    Credit: “Test Driven Development: Ten Years Later” Presented by Michael Feathers and Steve Freeman
    Ref: http://www.infoq.com/presentations/tdd-ten-years-later
    Excellent presentation on history of TDD, leading into ATDD

  • RD
    Now we see ATDD
  • RD
    Ref: http://janetgregory.blogspot.com/2010/08/atdd-vs-bdd-vs-specification-by-example.html
  • SW/RD

    Big move to utilise tools from the Ruby space, such as Cucumber, in .Net development.
  • References:

    http://martinfowler.com/articles/mocksArentStubs.html
    http://xunitpatterns.com/
  • Transcript

    • 1. TDD A testing journey
    • 2. Who are we?
    • 3. Unit Tests • A unit test is a piece of code (usually a method) that invokes another piece of code and checks the correctness of some assumptions afterward. • If the assumptions turn out to be wrong, the unit test has failed. • A “unit” is a method or function. Roy Osherove – The Art of Unit Testing
    • 4. AAA: Arrange all necessary preconditions and inputs. Act on the object or method under test. Assert that the expected results have occurred.
    • 5. Standards we use
      Object to be tested → Object to create on the testing side
      Project → Create a test project named [ProjectUnderTest].Tests
      Class → For each class, create at least one test class named [ClassName]Tests
      Method → For each method, create at least one test method named [MethodName]_[StateUnderTest]_[ExpectedBehavior]
      Taken from Roy Osherove – The Art of Unit Testing
    • 6. State Testing • State-based testing (also called state verification) determines whether the exercised method worked correctly by examining the state of the system under test and its collaborators (dependencies) after the method is exercised. Roy Osherove – The Art of Unit Testing
    • 7. Interaction Testing • ...testing how an object sends input to or receives input from other objects—how that object interacts with other objects. Roy Osherove – The Art of Unit Testing
    • 8. Test Doubles: Dummies, Fakes, Stubs, Mocks
    • 9. Read this: martinfowler.com/articles/mocksArentStubs.html
    • 10. Test Last: Test-Last or Test-After Development (TAD), aka “POUT” – Plain Old Unit Testing • Step 1: write the code • Step 2: write the test • Repeat from Step 1 until …
    • 11. Doesn’t promote testable design Is undisciplined Falls victim to ego Appears unnecessary Clashes with deadlines Limits coverage inadequate refactoring hard slow, integration-like tests Isn’t very enjoyable Questionable return on value Doesn’t advance the craft
    • 12. Test First • Step 1: write the test • Step 2: write the code • Repeat from Step 1 until …
    • 13. What is TDD? “Test Driven Development (TDD) is a design technique that drives the development process through testing.” Martin Fowler
    • 14. “One reasonable definition of good design is testability. It is hard to imagine a software system that is both testable and poorly designed. It is also hard to imagine a software system that is well designed but also untestable.” – Robert “Uncle Bob” Martin “There appears to be a synergy between testability (at the unit level) and good design. If you aim for testability and make some good choices, design gets better. If you aim for good design, your design becomes more testable. ” – Michael Feathers What is TDD?
    • 15. TDD is about requirements Are We There Yet?
    • 16. TDD in 3 Cards... • From: • http://blog.objectmentor.com/articles/2008/03/06/tdd-on-three-index-cards
    • 17. The TDD cycle: Write a failing test → Make the test pass → Refactor
    • 18. Refactor? “Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior” • http://www.refactoring.com/
    • 19. TDD benefits - Documentation - Feedback cycle - Ever-expanding regression suite - No fear of change - Examples of how to use the code
    • 20. Some TDD stats (columns: IBM drivers, Microsoft Windows, Microsoft MSN, Microsoft Visual Studio)
      Defect density (non-TDD): W, X, Y, Z
      Defect density (with TDD): 61% of W, 38% of X, 24% of Y, 9% of Z
      Increased time (with TDD): 15-20%, 25%, 15%, 20%
    • 21. How to start? “The hardest thing is to start” - Bug driven tests note: code may not be testable - Testing legacy code (or trying to) - Start TDD with small projects - Pair programming - Katas /Dojos
    • 22. TDD Process Smells: Code coverage as a goal; No green bar in the last 10 minutes; Not failing first; Not spending comparable time on refactoring; Skipping something too easy or too hard; Testing methods, not behaviours; Not writing tests first
    • 23. What’s next?
    • 24. Remember This?
    • 25. BDD - Executable Acceptance Criteria - Given, When, Then BDD - Testing Intent “when I add a new post to my blog, the new post shows up on my homepage” TDD – Testing Code “calling the create new post method on the blog controller saves a new post and the new post is passed to the homepage view when the home controller’s index action is called” http://blog.robustsoftware.co.uk/2009/11/what-is-bdd-behaviour-driven-design.html
    • 26. Cucumber http://cukes.info/
    • 27. ATDD - Acceptance Test Driven Development - Start with the acceptance test -
    • 28. ATDD - Acceptance Test Driven - Different than BDD? - Start with the acceptance test - Examples,
    • 29. ATDD - Acceptance Test Driven - Different than BDD? - Start with the acceptance test - Examples,
    • 30. Tools?
      .Net – Unit testing: NUnit, xUnit, MSTest...; Mocks: Rhino Mocks, NSubstitute, TypeMock Isolator; BDD: NBehave
      Java – Unit testing: jUnit; Mocks: jMock, EasyMock, Mockito; BDD: JBehave
      PHP – Unit testing: PHPUnit, SimpleTest; Mocks: PHPUnit, SimpleTest, Mockery; BDD: ??
      Ruby – Unit testing: built-in library??; Mocks: Mocha??; BDD: Cucumber, RSpec
