Test Driven Development - 09/2009

Notes

  • TDD is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximise the logic that is in testable library code, using fakes and mocks to represent the outside world.
    Management support is essential. Without the entire organization believing that TDD is going to improve the product, management will feel that time spent writing tests is wasted.
  • The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or that are themselves prone to failure, are expensive to maintain. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. It is possible to write tests that are cheap and easy to maintain, for example by reusing error strings, and this should be a goal during the 'Refactor' phase described above.
    The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. These original tests therefore become increasingly precious as time goes by. If a poor architecture, a poor design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.
  • DO NOT IGNORE/DISABLE FAILING TESTS
  • Unexpected gaps in test coverage may exist or occur for a number of reasons. Perhaps one or more developers on the team were not committed to the TDD strategy and did not write tests properly, or perhaps some sets of tests were invalidated, deleted or disabled, accidentally or on purpose, during later work. If this happens, the confidence that a large set of TDD tests lends to further fixes and refactorings will actually be misplaced. Alterations may be made that result in no test failures when in fact bugs are being introduced and remain undetected.
  • If, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify these input parameters. If the developer misinterprets the requirements specification for the module being developed, both the tests and the code will be wrong.
  • That false sense of security can result in fewer additional QA activities, such as integration testing and compliance testing.
  • In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a class, which may belong to a base/super class, abstract class or derived/child class.
  • Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
  • Forces a requirements discussion before writing a bunch of code that doesn't make the customer happy.

Transcript

  • 1. TEST DRIVEN DEVELOPMENT DPUG - 09/08/2009 - Jason Ragsdale 1
  • 2. What is TDD? Test-driven development (TDD) is a software development technique that relies on the repetition of a very short development cycle: 1. First write a failing automated test case that defines a desired improvement or new function 2. Then produce code to pass that test 3. Finally refactor the new code to acceptable standards. 2
  • 3. The Three Laws of TDD 1. Don’t write any code unless it is to make a failing test pass. 2. Don’t write any more of a unit test than is sufficient to fail. 3. Don’t write more code than is sufficient to pass the one failing unit test. 3
  • 4. The Development Cycle 4
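    As a concrete sketch of one pass through the cycle (assuming PHPUnit from the links slide; the HelloWorld class echoes the examples slide, but the code here is illustrative, not taken from the deck):

        <?php
        // Red: write just enough of a test to fail (HelloWorld does not exist yet).
        class WhenTestingHelloWorld extends PHPUnit_Framework_TestCase {
            /** @test */
            public function shouldGreetTheWorld() {
                $hello = new HelloWorld();
                $this->assertEquals('Hello, World!', $hello->greet());
            }
        }

        // Green: write just enough code to make the failing test pass.
        class HelloWorld {
            public function greet() {
                return 'Hello, World!';
            }
        }

        // Refactor: clean up the new code and the test while keeping everything green.
        ?>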
  • 5. Benefits The first goal is to make the test pass. Subsequent users have a greater level of trust in the code. Executable Documentation. 5
  • 6. Limitations Some Code is Hard to Test. Don’t Test That Code, Minimize that Code. Put the important code in a library and test that code. 6
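    One way to follow that advice (an illustrative sketch, not code from the deck: Mailer, FakeMailer and WelcomeNotifier are hypothetical names) is to keep the logic in a small library class and hand it a fake in place of the hard-to-test dependency:

        <?php
        interface Mailer {
            public function send($address, $body);
        }

        // Hand-written fake standing in for the outside world (a real mail server).
        class FakeMailer implements Mailer {
            public $sent = array();
            public function send($address, $body) {
                $this->sent[] = array($address, $body); // record instead of emailing
            }
        }

        // The testable library code: all of the logic, none of the infrastructure.
        class WelcomeNotifier {
            private $mailer;
            public function __construct(Mailer $mailer) {
                $this->mailer = $mailer;
            }
            public function notify($address) {
                $this->mailer->send($address, 'Welcome aboard!');
            }
        }

        class WhenTestingWelcomeNotifier extends PHPUnit_Framework_TestCase {
            /** @test */
            public function shouldSendOneWelcomeMail() {
                $mailer = new FakeMailer();
                $notifier = new WelcomeNotifier($mailer);
                $notifier->notify('user@example.com');
                $this->assertEquals(1, count($mailer->sent));
            }
        }
        ?>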
  • 7. Limitations Management support is essential. Without the entire organization believing that TDD is going to improve the product, management will feel that time spent writing tests is wasted. 7
  • 8. Limitations Badly written tests are expensive to maintain. 8
  • 9. Limitations The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore these original tests become increasingly precious as time goes by. If a poor architecture, a poor design or a poor testing strategy leads to a late change that makes dozens of existing tests fail, it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage. 9
  • 10. Limitations Unexpected gaps in test coverage may exist or occur for a number of reasons. One or more developers in a team was not so committed to the TDD strategy and did not write tests properly. Some sets of tests have been invalidated, deleted or disabled accidentally or on purpose during later work. Alterations may be made that result in no test failures when in fact bugs are being introduced and remaining undetected. 10
  • 11. Limitations Unit tests created in a TDD environment are typically created by the developer who will also write the code that is being tested. The tests may therefore share the same blind spots with the code. 11
  • 12. Limitations The high number of passing unit tests may bring a false sense of security 12
  • 13. Unit Tests A unit is the smallest testable part of an application. 13
  • 14. Unit Tests
        <?php
        // PHPUnit-style test class; the @test annotation lets the
        // BDD-style method name be picked up as a test.
        class WhenTestingAdder extends PHPUnit_Framework_TestCase {
            /** @test */
            public function shouldAddValues() {
                $adder = new Adder();
                $this->assertEquals(2, $adder->add(1, 1));
                $this->assertEquals(3, $adder->add(1, 2));
                $this->assertEquals(4, $adder->add(2, 2));
                $this->assertEquals(0, $adder->add(0, 0));
                $this->assertEquals(-3, $adder->add(-1, -2));
                $this->assertEquals(0, $adder->add(-1, 1));
                $this->assertEquals(2222, $adder->add(1234, 988));
            }
        }
        ?>
    14
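    A minimal Adder that would make the test above pass (the deck does not show the implementation; this sketch assumes the simple two-argument add() used in the assertions):

        <?php
        class Adder {
            // Return the sum of two numbers; just enough code to satisfy the test.
            public function add($a, $b) {
                return $a + $b;
            }
        }
        ?>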
  • 15. Integration Tests Individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. 15
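    For contrast with the unit test above, an integration-style test wires real units together instead of faking them (illustrative sketch; Calculator is a hypothetical module built on the Adder shown earlier):

        <?php
        // A module that depends on Adder rather than duplicating its logic.
        class Calculator {
            private $adder;
            public function __construct(Adder $adder) {
                $this->adder = $adder;
            }
            public function sum(array $values) {
                $total = 0;
                foreach ($values as $value) {
                    $total = $this->adder->add($total, $value);
                }
                return $total;
            }
        }

        class WhenIntegratingCalculatorAndAdder extends PHPUnit_Framework_TestCase {
            /** @test */
            public function shouldSumValuesUsingTheRealAdder() {
                // No fakes: both units are combined as they would be in production.
                $calculator = new Calculator(new Adder());
                $this->assertEquals(6, $calculator->sum(array(1, 2, 3)));
            }
        }
        ?>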
  • 16. System Tests Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. Falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. 16
  • 17. Ok Now What? No enhancements without defined requirements. You cannot write tests for items without requirements. 17
  • 18. Examples HelloWorld TicTacToe 18
  • 19. Q&A 19
  • 20. Links PHPUnit http://www.phpunit.de/ Three Rules of TDD http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd TDD - Wikipedia http://en.wikipedia.org/wiki/Test-driven_development BDD - Wikipedia http://en.wikipedia.org/wiki/Behavior_Driven_Development http://www.phpspec.org/ 20