Test Driven Development and Quality Improvement


  • The requirements can be prioritized according to their importance for the customer and their difficulty to achieve.
    Later they can be divided into sets of requirements, taking into account the workload the members of the development team can handle and possible implementation dependencies.
  • Test case. An individual, particular test.
    Setup. The preparation of a test is its setup, and it is run before each test.
    Teardown. When a test finishes, a method called teardown is run to perform cleanup actions.
  • An approach is to reuse the unit tests with different setup and teardown methods.
  • A ubiquitous language contains the terms which will be used to define the behaviour of a system.
    A feature is subsequently realised by user stories, which provide the context of the features delivered by a system.
    What is the role of the user in the user story?
    What feature does the user want?
    What benefit does the user gain if the system provides the feature?
  • BDD suggests that code should be part of the system’s documentation, which is in line with the agile values. Code should be readable, and specification should be part of the code.
    Testing code is specification code.
    In BDD all scenarios should be run automatically, which means acceptance criteria should be imported and analysed automatically.
    The classes implementing the scenarios will read the plain text scenario specifications and execute them.
  • Find all commit log entries made after the target release, and before the following release.
    Search those entries for an instance of the string “bug” or “fix” and examine each of the resulting set of entries manually.
  • JFreeChart's p-value of 0.084 indicates there is less than 9% probability that the observed negative correlation occurred by chance.

    1. 1. Test Driven Development and Quality Improvement Carlos Solis, John Noll, and Xiaofeng Wang Carlos.Solis@lero.ie Lero© 2010
    2. 2. Contents
       • Test Driven Development
       • Unit testing
       • Code coverage
       • The analysis of two open source projects
       Lero© 2010
    3. 3. Waterfall model Lero© 2010
    4. 4. Prioritizing the requirements Lero© 2010
    5. 5. Iterative and incremental development [Diagram: an initial requirements list feeds cycles 1..n; each cycle-i performs analysis-i, design-i, implementation-i, and testing-i.] Lero© 2010
    6. 6. Interlinked failures • The newer code can cause a failure in the code developed in previous iterations. Lero© 2010
    7. 7. Iterative and incremental development [Diagram: each cycle-i performs analysis of set-i, design-i, implementation-i, and then testing-i, testing-i-1, …, testing-1, so earlier functionality is re-tested in every cycle.] Lero© 2010
    8. 8. Test Driven Development Test Driven Development is an evolutionary approach that relies on very short development cycles and the practices of writing automated tests before writing functional code, refactoring and continuous integration. Lero© 2010
    9. 9. Agile methods • Agile methods are the most popular iterative and incremental software methods. Lero© 2010
    10. 10. Testing scope by granularity • Unit test. Do functions, classes and modules work as expected? • Integration Test. Do modules interact and work with other modules as expected? • Acceptance Test. Does the system work as expected? Lero© 2010
    11. 11. White box testing • In white box testing the tester has access to the code under test, and often needs to write test code. • Unit tests and integration tests are white box tests. Lero© 2010
    12. 12. Black box testing An acceptance test is a black box test: it tests the functionality of a system or module without knowledge of its internal structure. Lero© 2010
    13. 13. Test Driven Development [Diagram: from the initial requirements list, each cycle takes requirements set-i through acceptance test-i, design-i, unit test-i, implementation-i, and refactoring.] Lero© 2010
    14. 14. Unit testing • A unit test tests a small portion of code, usually methods and classes, in an independent way. Lero© 2010
    15. 15. Unit testing in JUnit

       public class WikiPageTest extends TestCase {
           protected WikiPage fixture = null;

           protected void setUp() throws Exception {
               setFixture(ShywikisFactory.createWikiPage());
           }

           protected void tearDown() throws Exception {
               setFixture(null);
           }

           protected void setFixture(WikiPage fixture) {
               this.fixture = fixture;
           }
       }

       Lero© 2010
    16. 16. Unit testing in JUnit

       public void testAddVersion() {
           WikiPage wp = this.fixture;
           User user1 = ShywikisFactory.createUser();
           user1.setName("user1");
           String body = "Hi!";
           Version actual = wp.getActual();
           assertNull(actual);
           Version newVer = wp.addVersion(user1, body);
           actual = wp.getActual();
           assertEquals(actual, newVer);
           assertEquals(user1, actual.getUser());
           assertEquals(body, actual.getBody());
           …

       Lero© 2010
    17. 17. Unit testing in JUnit

       public void testVersionsOrder() {
           …
           for (i = 0; i < 5; i++) {
               body = "body" + i;
               wp.addVersion(user, body);
           }
           Iterator<Version> it = wp.getVersions().iterator();
           i = 0;
           while (it.hasNext()) {
               Version ver = it.next();
               assertEquals("user" + i, ver.getUser().getName());
               assertEquals("body" + i, ver.getBody());
               i++;
           }
       }

       Lero© 2010
    18. 18. Independence: Mock Objects Mock objects are objects that simulate the behavior of other objects in a controlled way. Mock objects have the same interface as the objects they simulate. Lero© 2010
    19. 19. Independence: Mock Objects

       public class HBWikiPagePerfomer implements WikiPagePerfomer {
           Session session;

           public Version createWikiPage(String name, String user) {
               …
               WikiPage wp = ShywikisFactory.createWikiPage();
               wp.setName(name);
               session.save(wp);
               …
           }
       }

       Lero© 2010
    20. 20. Example: Mock Objects

       public class SessionMock implements Session {
           public void cancelQuery() throws HibernateException { }

           public Connection close() throws HibernateException {
               return null;
           }
           …
           public Serializable save(Object arg0) throws HibernateException {
               return null;
           }
       }

       Lero© 2010
    21. 21. Example: Mock Objects

       protected void setUp() throws Exception {
           HBWikiPagePerfomer hbp = new HBWikiPagePerfomer();
           hbp.setSession(new SessionMock());
           setFixture(hbp);
       }

       public void testCreateWikiPage() {
           HBWikiPagePerfomer hbp = getFixture();
           assertNull(hbp.createWikiPage("wp1", "user1"));
       }

       Lero© 2010
    22. 22. Complex tests: Mock Objects [Diagram: a financial-system client talking to a Bloomberg server over a TCP/IP communication layer using the Bloomberg Trade Protocol (Init, getReport(X), Bye messages).] How can we test this? Lero© 2010
    23. 23. Code coverage Test coverage measures the degree to which code is exercised by a set of automated tests: how complete a set of tests is in terms of how many instructions, branches, or blocks are covered when the unit tests are executed. Lero© 2010
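The idea behind statement coverage can be illustrated with a toy sketch (this is hypothetical instrumentation, not how a real tool such as JaCoCo or Cobertura works internally): each instrumented statement records its id when it runs, and coverage is the fraction of ids hit by the tests.

```java
import java.util.HashSet;
import java.util.Set;

public class CoverageSketch {
    static final Set<Integer> hit = new HashSet<>();
    static final int TOTAL = 3; // statements instrumented in abs()

    // Hand-instrumented method: each statement records its id before running.
    static int abs(int x) {
        hit.add(1);                           // statement 1: the branch check
        if (x < 0) { hit.add(2); return -x; } // statement 2: negative branch
        hit.add(3); return x;                 // statement 3: positive branch
    }

    public static void main(String[] args) {
        abs(5); // a test that never exercises the negative branch
        System.out.println(hit.size() + " of " + TOTAL + " statements covered");
    }
}
```

Because the test never calls `abs` with a negative argument, statement 2 is never hit, so a coverage report would flag the negative branch as untested.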
    24. 24. Code coverage Lero© 2010
    25. 25. Code coverage Lero© 2010
    26. 26. Integration testing Integration testing is about testing the interacting modules of an application. There are frameworks for integration testing such as DbUnit, HttpUnit, etc. Lero© 2010
    27. 27. Integration testing In some cases unit test frameworks can be used to perform integration and acceptance tests. For integration testing, the unit tests would use real objects (a test database, for example) instead of mock objects. Lero© 2010
    28. 28. Integration Test An approach is to reuse the unit tests with different setup and teardown methods.

       protected void setUp() throws Exception {
           Session session = sessionFactory.openSession("test.cfg");
           HBWikiPagePerfomer hbp = new HBWikiPagePerfomer();
           hbp.setSession(session);
           setFixture(hbp);
       }

       Lero© 2010
    29. 29. Automated Test Driven Development [Diagram: for requirements set-i, write automated acceptance test-i, then design-i, unit test-i, implementation-i, and refactor; loop until the acceptance tests pass (OK/NOK).] Lero© 2010
    30. 30. Acceptance test An acceptance test proves that a requirement does what its specification says; therefore, if the test passes, the customer accepts that the requirement is fully implemented. Lero© 2010
    31. 31. Acceptance test Each requirement has a set of specific cases or scenarios, and each scenario has a set of tests. A requirement is accepted if the tests of all its scenarios pass. Lero© 2010
    32. 32. Acceptance Test [Diagram: Fit simulates user input at the user interface, while unit and integration tests exercise the controller and business/model objects directly.] Lero© 2010
    33. 33. Acceptance test There are automated acceptance testing frameworks such as Cucumber, JBehave, Fit and FitNesse. Lero© 2010
    34. 34. User Stories and Scenarios using BDD ubiquitous language and plain text Lero© 2010
    35. 35. Automated Acceptance Testing and Readable Behaviour Oriented Specification Code Lero© 2010
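The idea of readable, behaviour-oriented specification code can be sketched as a tiny plain-text scenario interpreter. This is a hypothetical illustration of the mechanism, not the API of a real framework like JBehave or Cucumber: each "Given/When/Then" line is matched to a registered step and executed.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ScenarioRunner {
    // Maps a plain-text step to the code that realises it.
    private final Map<String, Runnable> steps = new LinkedHashMap<>();

    void step(String text, Runnable action) { steps.put(text, action); }

    // Reads the plain-text scenario and executes each matching step.
    void run(List<String> scenario) {
        for (String line : scenario) {
            Runnable action = steps.get(line.trim());
            if (action == null) throw new IllegalStateException("Pending step: " + line);
            action.run();
        }
    }

    public static void main(String[] args) {
        ScenarioRunner runner = new ScenarioRunner();
        final int[] versions = {0}; // toy state standing in for a wiki page

        runner.step("Given an empty wiki page", () -> versions[0] = 0);
        runner.step("When the user adds a version", () -> versions[0]++);
        runner.step("Then the page has 1 version", () -> {
            if (versions[0] != 1) throw new AssertionError("expected 1 version");
        });

        runner.run(Arrays.asList(
            "Given an empty wiki page",
            "When the user adds a version",
            "Then the page has 1 version"));
        System.out.println("scenario passed");
    }
}
```

The scenario text itself stays readable by the customer, while the registered steps are the executable specification code.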
    36. 36. The adoption problem • Developers have to learn to write effective unit tests. These tests help to reduce bugs and lead to better software designs. • Is it better to have more testers, or to let developers learn TDD? Ask about accountability. Lero© 2010
    37. 37. The adoption problem • Adopt it progressively: first automated unit testing, later automated acceptance and integration testing. • If the organizational structure follows the waterfall, there will be resistance to adopting it. Lero© 2010
    38. 38. Our research: What is its effect on quality? [Diagram: the TDD cycle (acceptance test-i, design-i, unit test-i, implementation-i, refactor; start the next cycle when the tests pass).] Lero© 2010
    39. 39. Our research Automated testing has a positive effect on the quality of code in an OSS context. Lero© 2010
    40. 40. Projects analysed Lero© 2010
    41. 41. Approach • Open source projects with automated tests and well documented bug repositories. • Bug density: instead of using the bugs reported, we used the modifications made in the source code repository after the release date. Each file in a project has an associated number of post-release modifications. • The projects' tests were executed and the test coverage calculated. Lero© 2010
    42. 42. Approach • The final step was to analyze the data to see if there is a relationship between the test coverage and the number of post release modifications of the files. Lero© 2010
    43. 43. Bug density [Timeline diagram: bug or fix commits between the previous release date and the release date.] Increment the defect count for each file associated with entries that were determined to be actual bug fixes. Lero© 2010
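The counting step described on this slide can be sketched as follows. The entry format and class name are illustrative assumptions; the slides do not specify how the commit log is represented, only that entries mentioning "bug" or "fix" increment a per-file defect count.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DefectCount {
    // Each entry is a {commitMessage, filePath} pair taken from the
    // commit log between two releases (hypothetical representation).
    public static Map<String, Integer> count(List<String[]> entries) {
        Map<String, Integer> counts = new HashMap<>();
        for (String[] e : entries) {
            String msg = e[0].toLowerCase();
            // Match the string search described on the earlier slide.
            if (msg.contains("bug") || msg.contains("fix")) {
                counts.merge(e[1], 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
            new String[]{"Fix NPE in renderer", "ChartPanel.java"},
            new String[]{"Add new axis option", "Axis.java"},
            new String[]{"bug 123: wrong legend", "ChartPanel.java"});
        System.out.println(count(log));
    }
}
```

In the actual study the resulting entries were also examined manually to confirm they were real bug fixes; the string match alone over-approximates.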
    44. 44. Bug Density Lero© 2010
    45. 45. Code coverage and defect density Lero© 2010
    46. 46. JFreeChart Lero© 2010
    47. 47. OrangeHRM Lero© 2010
    48. 48. Files with post-release modifications: OrangeHRM Lero© 2010
    49. 49. Files with post-release modifications: JFreeChart Lero© 2010
    50. 50. Correlation Lero© 2010
    51. 51. Result analysis • If Spearman's rank correlation coefficient is negative, there is a negative correlation between test coverage and post-release fix density; in other words, higher coverage may mean lower fix rates. • The significance of this correlation is measured by the p-value: the probability that the correlation would be observed when the null hypothesis is true. Lero© 2010
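Spearman's rank correlation, as used here, can be computed by ranking both variables and applying the standard formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)). The data below is invented for illustration (and ties are not handled, for simplicity); it is not the study's data.

```java
import java.util.Arrays;

public class Spearman {
    // Assign ranks 1..n to the values (no tie handling in this sketch).
    static double[] ranks(double[] v) {
        Integer[] idx = new Integer[v.length];
        for (int i = 0; i < v.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(v[a], v[b]));
        double[] r = new double[v.length];
        for (int i = 0; i < v.length; i++) r[idx[i]] = i + 1;
        return r;
    }

    // rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), d = rank difference per item
    static double rho(double[] x, double[] y) {
        double[] rx = ranks(x), ry = ranks(y);
        double sumD2 = 0;
        for (int i = 0; i < x.length; i++) {
            double d = rx[i] - ry[i];
            sumD2 += d * d;
        }
        int n = x.length;
        return 1 - 6 * sumD2 / (n * (double) (n * n - 1));
    }

    public static void main(String[] args) {
        // Toy data: files with higher coverage have fewer post-release fixes.
        double[] coverage = {90, 75, 60, 40, 20};
        double[] fixes    = { 1,  2,  3,  5,  8};
        System.out.println(rho(coverage, fixes));
    }
}
```

For this toy data the ranks are perfectly inverted, so rho is -1.0; the real projects showed much weaker negative correlations.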
    52. 52. Result analysis • There is a small negative correlation for each project. • For JFreeChart, a p-value of 0.0993 means that there is less than a 10% probability that the observed negative correlation occurred by chance. • The OrangeHRM data have a p-value of 0.887, meaning the observed relationship is very likely due to chance. Lero© 2010
    53. 53. Result analysis • Defects are not distributed evenly across all files. When the analysis is limited to files that experienced post-release fixes, the negative correlation is larger for both projects. • The p-values are 0.084 for JFreeChart and 0.0364 for OrangeHRM. • As such, there is reason to reject the null hypothesis with some caution, and conclude that increased statement coverage might improve post-release defect density. Lero© 2010
    54. 54. Future work • We think we have to normalize using the pre-release modifications of each file. • We would then calculate the correlation between (post-mr / pre-mr) and file coverage. Lero© 2010
    55. 55. Thank you Questions? Lero© 2010