2. What is TDD?
Test-driven development (TDD) is a software development technique
that relies on the repetition of a very short development cycle:
1. First write a failing automated test case that defines a desired
improvement or new function
2. Then produce code to pass that test
3. Finally refactor the new code to acceptable standards.
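The three steps above can be sketched with Python's built-in unittest (the deck links PHPUnit, but the red-green-refactor rhythm is the same in any xUnit framework; `slugify` is a hypothetical example function):

```python
import unittest

# Step 1 (Red): write a failing test that pins down the desired behavior.
# Before 'slugify' is written, running this test fails, which is the point.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (Green): write just enough code to make that one test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (Refactor): tidy the implementation while the test stays green.
```

Running the test before `slugify` exists gives the required failure; the minimal implementation turns it green; refactoring then happens under the test's protection.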
3. The Three Laws of TDD
1. Don’t write any code unless it is to make a failing test pass.
2. Don’t write any more of a unit test than is sufficient to fail.
3. Don’t write more code than is sufficient to pass the one failing unit test.
5. Benefits
The first goal is simply to make the test pass.
Subsequent users have a greater level of trust in the code.
The tests serve as executable documentation.
6. Limitations
Some code is hard to test.
Don't test that code; minimize that code.
Put the important logic in a library and test it there.
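One way to minimize hard-to-test code is to pull the decision logic out of the I/O shell so the outside world can be faked in tests, as the editor's notes suggest. A minimal Python sketch (all names here are hypothetical):

```python
# Hard-to-test code mixes logic with the outside world (clocks, I/O).
# Instead, keep the logic pure and inject the environment.

def choose_greeting(hour):
    """Pure library logic: trivial to test exhaustively."""
    return "Good morning" if hour < 12 else "Good afternoon"

def greet(clock, output):
    """Thin shell touching the real world: kept too small to need tests."""
    output(choose_greeting(clock()))

# In tests, fake the clock rather than depending on the real time:
greet(lambda: 9, print)  # prints "Good morning"
```

The untested shell stays a one-liner, while all the branching lives in `choose_greeting`, where a unit test can reach it.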
7. Limitations
Management support is essential.
Without the entire organization believing that TDD is going to improve the
product, management will feel that time spent writing tests is wasted.
9. Limitations
The level of coverage and testing detail achieved during repeated TDD
cycles cannot easily be re-created at a later date.
Therefore these original tests become increasingly precious as time goes
by.
If a poor architecture, a poor design or a poor testing strategy leads to a
late change that makes dozens of existing tests fail, it is important that
they are individually fixed.
Merely deleting, disabling or rashly altering them can lead to undetectable
holes in the test coverage.
10. Limitations
Unexpected gaps in test coverage may exist or occur for a number of
reasons.
One or more developers on the team were not committed to the TDD
strategy and did not write tests properly.
Some sets of tests have been invalidated, deleted or disabled
accidentally or on purpose during later work.
Alterations may be made that result in no test failures when in fact bugs
are being introduced and remain undetected.
11. Limitations
Unit tests created in a TDD environment are typically created by the
developer who will also write the code that is being tested.
The tests may therefore share the same blind spots with the code.
16. System Tests
Testing conducted on a complete, integrated system to evaluate the
system’s compliance with its specified requirements.
Falls within the scope of black box testing, and as such, should require no
knowledge of the inner design of the code or logic.
17. OK, Now What?
No enhancements without defined requirements.
You cannot write tests for features that have no requirements.
20. Links
PHPUnit
http://www.phpunit.de/
Three Rules of TDD
http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd
TDD - Wikipedia
http://en.wikipedia.org/wiki/Test-driven_development
BDD - Wikipedia
http://en.wikipedia.org/wiki/Behavior_Driven_Development
http://www.phpspec.org/
Editor's Notes
TDD is difficult to use in situations where full functional tests are required to determine success or failure. Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximise the logic that is in testable library code, using fakes and mocks to represent the outside world.
Management support is essential. Without the entire organization believing that TDD is going to improve the product, management will feel that time spent writing tests is wasted.
The tests themselves become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings or which are themselves prone to failure, are expensive to maintain. There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the 'Refactor' phase described above.
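The "reuse of error strings" mentioned above can be as simple as a shared constant, so a change to the message touches one line instead of every test. A hypothetical Python sketch:

```python
import unittest

# One definition of the message; the code and its tests both reference it,
# so the wording can change without breaking any test.
EMPTY_CART_ERROR = "cart must not be empty"

def checkout(items):
    if not items:
        raise ValueError(EMPTY_CART_ERROR)
    return sum(items)

class TestCheckout(unittest.TestCase):
    def test_empty_cart_is_rejected(self):
        with self.assertRaises(ValueError) as ctx:
            checkout([])
        # Reuse the constant instead of hard-coding the string here.
        self.assertEqual(str(ctx.exception), EMPTY_CART_ERROR)
```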
DO NOT IGNORE/DISABLE FAILING TESTS
Unexpected gaps in test coverage may exist or occur for a number of reasons. Perhaps one or more developers on a team were not committed to the TDD strategy and did not write tests properly; perhaps some sets of tests have been invalidated, deleted or disabled, accidentally or on purpose, during later work. If this happens, the confidence that a large set of TDD tests lends to further fixes and refactorings will actually be misplaced. Alterations may be made that result in no test failures when in fact bugs are being introduced and remain undetected.
If, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify these input parameters. If the developer misinterprets the requirements specification for the module being developed, both the tests and the code will be wrong.
This can result in fewer additional Q.A. activities, such as integration testing and compliance testing.
In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a class, which may belong to a base/super class, abstract class or derived/child class.
Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
Forces a requirements discussion before writing a bunch of code that doesn't make the customer happy.