Test-driven software development

Timo Suomela
September 13, 2006
Seminar paper
UNIVERSITY OF HELSINKI

Author: Timo Suomela
Title: Test-driven software development
Month and year: September 13, 2006
Number of pages: 11 pages

Abstract: This document is a short introduction to the process of test-driven software development.

Classification (ACM CCS): D.2.5 [Testing and debugging]

Keywords: test-driven development, mock objects, continuous integration
Contents

1 Introduction
2 The test-driven development cycle
  2.1 Test list
  2.2 Red/green/refactor
  2.3 Benefits of TDD
3 Unit test case design
  3.1 Mock objects
  3.2 Dependency injection
4 Continuous integration
  4.1 Single source repository
  4.2 Automated build
  4.3 Self-testing code
  4.4 The master build
5 Summary
References
1 Introduction

Test-driven software development (often abbreviated TDD) has been popularized by Kent Beck in association with eXtreme Programming [Bec04], one of several agile software development processes. Although TDD is one of the cornerstones of XP, it can easily be adopted in any iterative software development process without adopting any of the other practices of XP.

TDD is far more than a simple testing technique. It is primarily a programming technique that has the nice side effect of ensuring that the code being developed is thoroughly unit tested. TDD only covers structural testing (white-box testing); traditional testing techniques are still required to cover such aspects as functional testing and user acceptance testing.

The starting premise of TDD is the same as with any other testing technique. A test is successful only if it helps to uncover one or more defects in the system being developed. When a test fails, progress has been made, since a problem has been identified and can now be fixed. More importantly, these tests provide a clear measure of progress once they no longer fail. As long as the test cases are carefully selected, TDD increases confidence that the system being developed meets the requirements defined for it.

Properly implemented, TDD achieves 100 percent statement coverage, something that traditional testing techniques do not guarantee. Although full statement coverage is not the best indicator of the quality of the testing process, it increases confidence in the code base, since each line of code is exercised at least once during a test run.
2 The test-driven development cycle

The complete paradigm of test-driven software development has been summarized by Kent Beck in two rules [Bec04]:

1. New business code is written only when an automated test has failed.
2. Any duplicate code is eliminated by refactoring.

These are two simple rules, but they generate complex individual and group behaviour with technical implications such as the following:

1. The design process is organic; design decisions are made based on feedback provided by running code.
2. Each developer must write his or her own test cases, because it is impractical to wait 20 times a day for a dedicated tester to write the tests.
3. The development environment must provide rapid response to small changes.
4. The design of the software system being developed must consist of many highly cohesive, loosely coupled components to make testing easier.

2.1 Test list

Before starting to code a new feature or modifying an existing one, a list of tests must be compiled. Since it is practically impossible to design a test suite that will find every single flaw in a non-trivial piece of software, the selection process must be planned carefully. The selection of tests is beyond the scope of this paper, but it follows the same rules as in traditional testing. According to Newkirk and Vorontsov, this process should take about 15-20 minutes for a feature that is estimated to take about 4 hours to implement [NeV04]. After the list has been compiled, each test is implemented and then crossed off the list. Once every test has been implemented and successfully run, the new feature or modification is complete. A sketch of such a list appears below.
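As a concrete illustration, a test list can be recorded directly in code as a set of empty test stubs, to be implemented one at a time. The following minimal sketch uses JUnit 4; the bounded stack feature and all test names are invented for illustration and do not come from [NeV04]:

```java
import org.junit.Test;

// Hypothetical test list for a bounded Stack class, captured as empty
// JUnit 4 test stubs. Each stub receives a real body when its turn
// comes in the red/green/refactor cycle, and is then "crossed off".
public class StackTestList {
    @Test public void newStackIsEmpty() { }
    @Test public void pushMakesStackNonEmpty() { }
    @Test public void popReturnsLastPushedElement() { }
    @Test public void popOnEmptyStackThrowsException() { }
    @Test public void pushBeyondCapacityThrowsException() { }
}
```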
2.2 Red/green/refactor

The process of implementing each test in the test list is defined by red, green, refactor. The goal of this process is to work in small verifiable steps that provide immediate feedback. The steps are described by Newkirk and Vorontsov as follows [NeV04]:

1. Adding a new test. In TDD the implementation of each new feature begins with writing a test case, usually using a framework from the xUnit family of frameworks. The developer selects a test from the test list discussed above.

2. Running all tests and seeing the new one fail. This step may seem unnecessary at first. The purpose of running all tests at this point is to validate the new test. If the code passes the new test case without requiring any changes, the test case is obviously flawed and requires revising. The new test must also fail for the expected reason. Testing frameworks usually employ visual cues when reporting success or failure of a test. At this point the new test is colored red.

3. Implementing the new feature. In this step the new feature is implemented by writing code that is 'good enough'. The aim is not to write perfect code, but to pass the test. The code will be improved in a later step. This ensures that no untested code is written.

4. Running all tests and seeing them succeed. If all test cases pass, the developer has positive confirmation that the new feature is implemented correctly and that all previously implemented and tested features still work. The testing framework will show a green flag for each passed test. If a test fails, the developer must go back to the previous step, add or change some code, and run the tests again.

5. Refactoring the code to remove duplication. The last step involves cleaning up the code. By frequently running all test cases the developer can be confident that the code meets the same requirements before and after refactoring.

The cycle is repeated until all test cases on the original test list are implemented and passed. A sketch of one pass through the cycle follows.
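To illustrate a single pass, consider the first item on the hypothetical stack test list above. A minimal JUnit 4 sketch (all class and method names invented for illustration) might look like this:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class StackTest {
    // Step 1: the test is written first. With no Stack class in the
    // code base yet, this does not even compile, which counts as the
    // "red" state (step 2 confirms the failure is the expected one).
    @Test
    public void newStackIsEmpty() {
        Stack stack = new Stack();
        assertTrue(stack.isEmpty());
    }
}

// Step 3: code that is just 'good enough' to turn the bar green.
// Returning a constant looks ridiculous, but later tests on the list
// (push, pop) will force a real implementation, and any duplication
// introduced along the way is removed in the refactoring step.
class Stack {
    boolean isEmpty() {
        return true;
    }
}
```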
The process described above forces the developer to work in very small steps. The steps are so small they may even seem ridiculous to the uninitiated. However, the smaller the steps are, the faster the developer gets feedback about a mistake he or she has made, in the form of a failed test. The alternative to developing in small steps is to make a lot of changes to the code and then run the tests. If a test fails, backtracking to find out which changes caused the failure will take a great amount of time. Working in smaller steps will also decrease the need for a debugger, since the developer gets immediate feedback after each change and therefore will know exactly where to look in case of a failure. A nice side effect of this is increased confidence in the quality of the code.

2.3 Benefits of TDD

In a TDD environment, developers themselves are responsible for writing the tests that verify the quality of their code. Traditionally the roles of developer and tester have been separated to increase the chances of a test suite finding faults in the system under test. A kind of 'blindness' may prevent the developer from writing tests for the edge cases he or she did not consider when coding a new feature or modifying an existing one. Fowler acknowledges this risk [FoF99]: it is easy for a developer to overlook errors in his or her own code, but he considers the value of the fast turn-around of the TDD approach to be greater than the value of having separate testers. TDD does not cover the whole range of testing during the software development cycle; system tests and acceptance tests must still be designed, written and executed independently of the coding effort.

In a controlled experiment conducted to evaluate the effectiveness of TDD, Erdogmus et al. divided a group of participating undergraduate students in two [EMT05]. The experiment group used TDD to develop a piece of software; the control group used a more conventional development technique, writing tests after the actual implementation. Both groups followed an incremental process, adding new features one at a time and regression testing them. According to the study, students in the TDD group tended to write more tests on average. Students who wrote more tests also tended to be more productive. Additionally, the quality of the code increased linearly with the number of written tests, independent of the development strategy employed.
3 Unit test case design

The idea of TDD in itself is language agnostic. The principles can be applied in any iterative software development process, but the main focus is on modern object-oriented languages like Java, C# and Python. In a TDD environment, all unit test cases must be repeatable and must not depend on or interfere with each other. It must be possible to change the order in which the test cases are executed, and to repeatedly execute a single test case without any manual setup before the start of the test case or cleanup after the test has finished. According to Beck, good unit tests must run fast, in isolation, and use real data (e.g. a copy of production data) [Bec02].

3.1 Mock objects

In a modular software system, each module provides one or more services. Some of these services are consumed by other modules in the same system to provide other, more complex services. The dependency on other modules makes isolating the module under test a non-trivial task, especially if one of the referred modules is a complex system in itself. An example of such a complex auxiliary system is a database or a network resource like a web service. Sometimes the actual implementation of a referred module is not available at coding time and testing must be done against a set of interfaces. In many cases the task of setting up the modules needed to execute a single test case may involve a lot of time-consuming, fragile effort, such as running services or placing hardware in a known state.

Mock objects provide a way to deal elegantly with this type of situation in an object-oriented language [Ham04]. A mock object is a simulation of a real object. Mock objects implement the interface of the real object and can be set up to behave in a predictable way. Mock objects also validate that the code using them does so in the expected order. The unit under test must call the methods of a mock object in the expected order, passing in the expected parameters. This verification capability is what separates mock objects from the traditional stubs used in top-down integration testing [Bin99].
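A mock object can be hand-rolled without any framework. The following sketch (all interface and class names are invented for illustration) replaces a slow external mail service with a mock that records calls, so the test can verify that the unit under test used the collaborator with the expected parameters:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The collaborator is referred to via an interface, so a simulation
// can stand in for the real (slow, external) implementation.
interface MailService {
    void send(String recipient, String body);
}

// Hand-rolled mock: records every call so the test can verify that
// the unit under test called it as expected.
class MockMailService implements MailService {
    final java.util.List<String> calls = new java.util.ArrayList<String>();
    public void send(String recipient, String body) {
        calls.add(recipient + ":" + body);
    }
}

// The unit under test; in production it would be wired to a real
// SMTP-backed implementation of MailService.
class OrderProcessor {
    private final MailService mail;
    OrderProcessor(MailService mail) { this.mail = mail; }
    void confirm(String customer) {
        mail.send(customer, "Your order has shipped.");
    }
}

public class OrderProcessorTest {
    @Test
    public void confirmationMailIsSentToTheCustomer() {
        MockMailService mock = new MockMailService();
        new OrderProcessor(mock).confirm("alice@example.org");
        // Verification: exactly one call, with the expected parameters.
        assertEquals(1, mock.calls.size());
        assertEquals("alice@example.org:Your order has shipped.",
                     mock.calls.get(0));
    }
}
```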
3.2 Dependency injection

Dependency injection (sometimes referred to as 'inversion of control') is a fairly modern design principle that is humbling in its sheer simplicity [Fow04]. Rather than letting the components (instances of classes) of a software system declare explicit dependencies, e.g. by instantiating collaborating objects or looking them up using the locator pattern, they refer to them via interfaces only. The concrete implementations are 'injected' into the components by an assembly mechanism (sometimes called an inversion of control container) using setter methods. Paired with mock objects, the principle of dependency injection makes isolating a unit under test trivial.
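A minimal sketch of setter injection (all names invented for illustration): the component declares only an interface, and whoever assembles the system, an IoC container or a test case, injects the concrete implementation.

```java
// The report generator depends on a data source via interface only;
// it never instantiates or looks up a concrete implementation itself.
interface CustomerDao {
    java.util.List<String> findAllNames();
}

class ReportGenerator {
    private CustomerDao dao;

    // The concrete implementation is 'injected' from the outside,
    // here via a setter method, by the assembly mechanism.
    public void setCustomerDao(CustomerDao dao) {
        this.dao = dao;
    }

    public String generate() {
        return "Customers: " + dao.findAllNames().size();
    }
}

class Assembly {
    public static void main(String[] args) {
        ReportGenerator generator = new ReportGenerator();
        // In production the assembler injects a real, database-backed
        // DAO; a unit test injects a mock instead, isolating the unit
        // under test without touching the database.
        generator.setCustomerDao(new CustomerDao() {
            public java.util.List<String> findAllNames() {
                return java.util.Arrays.asList("Alice", "Bob");
            }
        });
        System.out.println(generator.generate()); // prints "Customers: 2"
    }
}
```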
4 Continuous integration

In the classical V-model of software development, unit testing (often referred to as module testing) commences once the coding phase has been completed (Figure 1). The focus of unit testing is to eliminate errors in the functionality a single module provides. The tested modules are then fitted together in a process called integration testing, to eliminate errors in the collaboration of two or more modules.

Figure 1: The V-model of classic software testing

Barbey, in his master's thesis, has pointed out that unit testing can be considered a special form of integration testing, the integrated units being the methods or functions of the module under test [Bar97]. From this point of view, integration testing completely covers the structural testing effort. Since integration is not a single event but a process, continuous integration, as Fowler points out [FoF99], is a set of practices that has been around in one form or another for a long time. Although it is a fundamental part of the eXtreme Programming paradigm, it can be (and often is) adopted without even considering the other practices of XP.

4.1 Single source repository

The first requirement for successful continuous integration is a source code repository that is well known to the development team. In addition to the source code, the repository must contain everything that is required to compile the source code and run the unit tests. This includes build files, third-party libraries, test scripts, database schemas, etc. Only a minimal set of tools should be required to check out the source code from the repository and build it
on a virgin machine. Typical examples of things not present in the repository are components that are large, stable and/or difficult to install, such as a compiler, a database manager or a web server. The repository should not contain any artifacts that are products of the build process, i.e. executables. The source code repository should contain a single mainline (or main branch) for the project currently being developed. Each developer should work on the mainline; other branches should be used only to fix bugs in older, already released versions of the project.

4.2 Automated build

Building a project from its sources and getting it to run on a development machine can be a very complicated task; e.g. a typical n-tiered web application requires a database for the backend, a web server for the frontend and often even a middleware server. Like many tasks in the software development process, the build process can, and should be, automated. The more commands a developer has to remember to build and deploy a piece of software, the more likely he or she is to make a mistake. Automated build systems are nothing new; the *nix world has profited from the make tool for decades. Modern examples of automated build environments are Ant and Maven for Java, and NAnt and MSBuild for the .NET framework. Each of these tools can be configured to compile the source code, install a database schema, run all tests and deploy the system, all by issuing a single command. The complete build of a system can take an enormous amount of time, so it is crucial that the selected build tool is capable of analyzing the source code after a set of changes and re-building only the necessary artifacts. A good build script will also let the developer skip selected subgoals.

4.3 Self-testing code

An important part of each build is the execution of the test cases written by the developers. Fowler calls these tests build verification tests (BVTs) [FoF99]. If all tests succeed, the build is considered successful. If one or more tests fail, the build is considered broken. Of course self-testing code is no silver bullet, since even successful tests do not prove the absence of bugs. A minimal sketch of such a verification step appears below.
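The build verification step can be as simple as a small program that runs the whole unit test suite and signals failure to the build tool through its exit code. A minimal sketch using the JUnit 4 runner API (the test classes named are the hypothetical ones from the earlier sketches):

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

// Runs the build verification tests (BVTs). A non-zero exit code
// tells the build tool that the build is broken.
public class BuildVerification {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(
                StackTest.class, OrderProcessorTest.class);
        System.out.println(result.getRunCount() + " tests, "
                + result.getFailureCount() + " failures");
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}
```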
4.4 The master build

The purpose of the master build is to find integration problems in a multi-developer environment as early as possible. The build daemon periodically checks the source code repository and determines whether any new changes have been committed since the last build. If there is new code in the repository, it starts building, using the following steps:

1. Make a full checkout from the repository.
2. Invoke a build script that compiles the source code.
3. Run all unit tests.
4. If no tests fail, the build is considered successful and the source code is labeled with a running build number in the repository.
5. Inform the developers of the status of the build, e.g. via email.

The time needed to compile the source code may become an issue if the frequency of the master builds is high. This issue is resolved either by reducing the frequency or by employing an incremental compilation strategy, compiling only the source code that has actually changed. Even when using an incremental compiler, a complete build should be run at least once a day.
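A build daemon implementing these steps is essentially a polling loop around the version control and build tools. The sketch below is a heavily simplified illustration under assumed conventions; the script name, polling interval and notification mechanism are placeholders, not an integration with any particular tool:

```java
import java.util.concurrent.TimeUnit;

// Skeleton of a master build daemon: poll the repository, and when
// new commits appear, check out, build, test and report.
public class BuildDaemon {

    public static void main(String[] args) throws Exception {
        while (true) {
            // Has anything been committed since the last build?
            if (repositoryHasChanged()) {
                // Steps 1-3: full checkout, compile, run all unit tests.
                boolean ok = run("./checkout-and-build.sh");
                // Steps 4-5: label on success, notify either way.
                notifyDevelopers(ok ? "build successful" : "BUILD BROKEN");
            }
            TimeUnit.MINUTES.sleep(10);
        }
    }

    static boolean repositoryHasChanged() {
        // Placeholder: e.g. compare the repository head revision with
        // the revision recorded for the last successful build.
        return true;
    }

    static boolean run(String command) throws Exception {
        Process p = new ProcessBuilder(command).inheritIO().start();
        return p.waitFor() == 0; // exit code 0 means the BVTs passed
    }

    static void notifyDevelopers(String status) {
        // Placeholder: e.g. send an email to the team mailing list.
        System.out.println(status);
    }
}
```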
5 Summary

Test-driven software development is a test-first approach to software development. No business code is written before a repeatable unit test case has been written to assert the validity of the feature. The moment all tests go from red to green, coding stops. After each fail-code-pass iteration the maintainability of the code is increased by systematically factoring out duplicated code. Combined with continuous integration, test-driven software development provides fast feedback for the programmer and allows the development process to react faster to changing requirements.
References

Bar97  Barbey, S., Test selection for specification-based unit testing of object-oriented software based on formal specifications. Master's thesis, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, 1997.

Bec02  Beck, K., Test-Driven Development: By Example. Addison-Wesley Professional, 2002.

Bec04  Beck, K., Extreme Programming Explained: Embrace Change, Second Edition. Addison-Wesley Professional, 2004.

Bin99  Binder, R. V., Testing Object-Oriented Systems. Addison-Wesley, 1999.

EMT05  Erdogmus, H., Morisio, M. and Torchiano, M., On the effectiveness of the test-first approach to programming. IEEE Transactions on Software Engineering, 31,3(2005), pages 226-237.

FoF99  Fowler, M. and Foemmel, M., Continuous integration, 1999. http://martinfowler.com/articles/originalContinuousIntegration.html. [10.9.2006]

Fow04  Fowler, M., Dependency injection, 2004. http://martinfowler.com/articles/injection.html. [10.9.2006]

Ham04  Hamill, P., Unit Test Frameworks. O'Reilly, 2004.

NeV04  Newkirk, J. W. and Vorontsov, A. A., Test-Driven Development in Microsoft .NET. Microsoft Press, 2004.