Lecture 18
    Presentation Transcript

    • The Testing Process CS320 – Fundamentals of Software Engineering
    • What is Testing?
      • IEEE:
        • “The process of operating a system or component under specified condition, observing or recording the results, and making an evaluation of some aspect of the system or component.”
      • Specified Condition = Test Cases
    • What is testing?
      • Rick Craig and Stefan Jaskiel:
        • “Testing is a concurrent lifecycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested”
      • Includes planning, analysis, and design that lead to test cases
    • What is testing?
      • Testing is the process of comparing “what is” with “what ought to be”
    • Boris Beizer’s Five Levels
      • Level 0 – No difference between testing and debugging
      • Level 1 – Purpose is to show software works
      • Level 2 – Purpose is to show that software doesn’t work (tough cases, boundary values)
      • Level 3 – Purpose is not to prove the software works, but to reduce the perceived risk of it not working to an acceptable value
      • Level 4 – A mental discipline that results in low-risk software (make software more testable from inception)
    • Challenges
      • Not enough time
      • Too many input combinations
      • Difficult to determine the expected result of each test
      • Non-existent or rapidly changing requirements
      • No training
      • No tool support
      • Management doesn't understand testing
    • Test Cases
      • Need to be designed:
        • Systematic approach
        • Documentation & Planning
        • Goal and purpose
    • Test Case
      • Inputs
      • Expected Outputs
      • Order of Execution
      • (and Testing Criteria): how do I know when I’m done testing?
    • Inputs
      • Data
        • Files
        • Interfacing Systems (keyboards, mice …)
        • Databases
    • Outputs
      • Expected Results (oracle)
        • Program, process or data
        • Regression test suites
        • Validation data
        • Purchase test suites
        • Existing program
    • Order of Execution
      • Cascading test cases (build on each other)
        • Ex. Create record in database
          • Use record
          • Delete record etc.
      • Independent test cases (order does not matter)
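A quick sketch in plain Java of the distinction above. The RecordStore class and its methods are hypothetical, standing in for a real database layer:

```java
import java.util.HashMap;
import java.util.Map;

// Contrasts cascading test cases (steps build on shared state) with an
// independent test case (builds all the state it needs itself).
public class CascadingExample {
    static class RecordStore {
        private final Map<Integer, String> rows = new HashMap<>();
        void create(int id, String value) { rows.put(id, value); }
        String read(int id) { return rows.get(id); }
        void delete(int id) { rows.remove(id); }
        int size() { return rows.size(); }
    }

    // Cascading: each step depends on the state left by the previous one,
    // so the steps must run in this exact order.
    public static boolean cascadingScenario() {
        RecordStore db = new RecordStore();
        db.create(1, "alice");                      // step 1: create record
        boolean used = "alice".equals(db.read(1));  // step 2: use record
        db.delete(1);                               // step 3: delete record
        return used && db.size() == 0;
    }

    // Independent: can run in any order relative to other tests.
    public static boolean independentScenario() {
        RecordStore db = new RecordStore();
        db.create(42, "bob");
        return "bob".equals(db.read(42));
    }

    public static void main(String[] args) {
        System.out.println(cascadingScenario());   // true
        System.out.println(independentScenario()); // true
    }
}
```

Cascading suites are shorter to write but fragile: a failure in step 1 invalidates every later step, which is why the slides contrast them with independent tests.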
    • Testing Criteria
      • When do I stop testing:
        • Ex1: When I cover all def use pairs
        • Ex2: All paths
    • Types of Testing
      • Black Box
        • Reqs and spec, no knowledge of internal structure
      • White Box
        • Based on internal paths, structure
          • Ex1 - definition use pairs
    • Testing Levels
      • Unit Testing : smallest piece
      • Integration Testing : assemble pieces into larger units and test
      • System Testing : all of the software, hardware, user manuals, etc.
      • Acceptance Testing : testing that, when completed, results in delivering the software to the customer
    • Impossible to Test Everything
        int scale(int j) {
            j = j - 1;   // should be j = j + 1
            j = j / 30000;
            return j;
        }
      • Input range -32768 to 32767 = 65,536 possible test cases; just 6 test cases reveal the problem:
      • -30001, -30000, -1, 0, 29999, 30000
    • Unit testing with JUnit
    • One minute summary
      • ‘Code that isn’t tested doesn’t work’
        • Why test?
      • Code that isn’t regression tested suffers from code rot (breaks eventually)
        • Why regression test?
      • A unit testing framework helps make unit and regression testing more efficient
    • What is unit testing?
      • Testing is often divided into categories such as:
        • Unit testing
          • Testing a ‘unit’ of code, usually a class
        • Integration testing
          • Testing a module of code (e.g. a package)
        • Application testing
          • Testing the code as the user would see it (black box)
    • What is a testing framework?
      • A test framework provides reusable test functionality which:
        • Is easier to use (e.g. don’t have to write the same code for each class)
        • Is standardized and reusable
        • Provides a base for regression tests
    • Why formalize unit testing?
      • Unit testing has often been done, but in an ad hoc manner
        • E.g. adding a main method to a class, which runs tests on the class
      • Axiom:
        • Code that isn’t tested doesn’t work
        • ‘ If code has no automated test case written for it to prove that it works, it must be assumed not to work.’ (Hightower and Lesiecki)
    • Why use a testing framework?
      • Each class must be tested when it is developed
      • Each class needs a regression test
      • Regression tests need to have standard interfaces
      • Thus, we can build the regression test when building the class and have a better, more stable product for less work
    • Regression testing
      • New code and changes to old code can affect the rest of the code base
        • ‘Affect’ sometimes means ‘break’
      • I need to run tests on the old code, to verify it works – these are regression tests
      • Regression testing is required for a stable, maintainable code base
    • Regression Testing
      • Regression testing - retesting code to check for errors due to changes
      • Rothermel and Harrold (1994) provide a simple definition for regression testing ( T0 = original tests):
        • Identify changes to the software
        • Evaluate which tests from set T0 are no longer valid and create a subset T1
        • Use T1 to test software
        • Identify if any parts of the system have not been tested adequately (based upon test adequacy criteria) and generate a new set of tests T2
        • Use T2 to test software
    • Regression Testing
      • Rothermel and Harrold built a framework of several retest selection techniques (1996)
        • Minimization
          • Linear equations
        • Coverage
          • ID changed definition-use pairs in control flow graphs
        • Safe techniques
          • Control dependency graphs
    • Background: Regression Testing
      • Leung and White classify test cases according to reuse (1989).
        • Reusable test cases cover unmodified code (kept but not used)
        • Retestable test cases cover modified code (used)
        • Obsolete test cases cover removed code (deleted)
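Leung and White's classification can be sketched as a small routine. The coverage and change sets below are hypothetical examples, not from the paper:

```java
import java.util.Set;

// A test case is obsolete if it covers removed code, retestable if it
// covers modified code, and reusable otherwise (Leung and White, 1989).
public class ClassifyTests {
    public static String classify(Set<String> covered,
                                  Set<String> modified,
                                  Set<String> removed) {
        for (String unit : covered) {
            if (removed.contains(unit)) return "obsolete";   // delete
        }
        for (String unit : covered) {
            if (modified.contains(unit)) return "retestable"; // re-run
        }
        return "reusable";                                    // keep, skip
    }

    public static void main(String[] args) {
        // Hypothetical change sets for this release.
        Set<String> modified = Set.of("Account.withdraw");
        Set<String> removed  = Set.of("Account.legacyAudit");

        System.out.println(classify(Set.of("Account.deposit"), modified, removed));     // reusable
        System.out.println(classify(Set.of("Account.withdraw"), modified, removed));    // retestable
        System.out.println(classify(Set.of("Account.legacyAudit"), modified, removed)); // obsolete
    }
}
```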
    • Regression Testing
      • Briand et al. (TR 2004) use Use Cases, Sequence Diagrams, and Class Diagrams to regression test code.
      • They create a mapping between design changes and code changes in order to classify test cases.
      • Uses Leung and White approach to classify test cases
      • Makes assumptions regarding how the UML is used:
        • (1) assumes all possible use-case scenarios can be realized by Sequence Diagrams
        • (2) assumes that each test case maps to a Sequence Diagram Scenario
    • Regression testing and refactoring
      • ‘Refactoring is a technique to restructure code in a disciplined way.’ (Martin Fowler)
      • Refactoring is an excellent way to break code.
      • Regression testing allows developers to refactor safely – if the refactored code passes the test suite, it works
    • Running automated tests
      • The real power of regression tests happens when they are automated
        • This requires they report pass/fail results in a standardized way
      • Can set up jobs to
        • Clean & check out latest build tree
        • Run tests
        • Put results on a web page & send mail (if tests fail)
      • JUnit & Ant have code to do all of this
    • Some background
      • eXtreme programming (XP) is a programming methodology that stresses (among other things) testing and refactoring
      • The Apache community provides open-source software, including the Jakarta project (server-side Java tools, e.g. Tomcat (servlet engine))
      • Ant is a build tool (like make)
        • Part of the Jakarta project
        • Is becoming the de facto standard for java projects
        • Is written in Java and uses XML to specify build targets
    • What is JUnit?
      • JUnit is a regression testing framework written by Erich Gamma and Kent Beck
      • It is found at www.junit.org
      • It consists of classes that the developer can extend to write a test
      • Notably: junit.framework.TestCase and related structure to run and report the tests
    • JUnit..
      • JUnit helps the programmer:
        • Define and execute tests and test suites
        • Formalize requirements and clarify architecture
        • Write and debug code
        • Integrate code and always be ready to release a working version
    • Installing JUnit and CppUnit
      • For JUnit
        • Download (and unpack) the distribution
        • Put the jar file on your classpath
      • For CppUnit
        • Download (and unpack) the distribution
        • On linux, run make (configure; make; make install)
        • Bob Goddard figured out the required switches on Solaris; contact Bob, Pete Brodsky, or me for details
    • Tool integration
      • Java IDEs
        • Netbeans - http://www.netbeans.org/index.html
          • Claims to include JUnit integration
          • It’s free
        • Idea - http://www.intellij.com/idea/
          • Includes the JUnit jar & is JUnit savvy
          • Also supports ant
          • Not free, but not expensive ($400-$600)
    • What JUnit does
      • JUnit runs a suite of tests and reports results
      • For each test in the test suite:
        • JUnit calls setUp()
          • This method should create any objects you may need for testing
    • What JUnit does…
        • JUnit calls one test method
          • The test method may comprise multiple test cases; that is, it may make multiple calls to the method you are testing
          • In fact, since it’s your code, the test method can do anything you want
          • The setUp() method ensures you entered the test method with a fresh set of objects; what you do with them is up to you
        • JUnit calls tearDown()
          • This method should remove any objects you created
    • Creating a test class in JUnit
      • Define a subclass of TestCase
      • Override the setUp() method to initialize object(s) under test.
      • Override the tearDown() method to release object(s) under test.
      • Define one or more public testXXX() methods that exercise the object(s) under test and assert expected results.
      • Define a static suite() factory method that creates a TestSuite containing all the testXXX() methods of the TestCase.
      • Optionally define a main() method that runs the TestCase in batch mode.
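The recipe above can be sketched as code. MiniTestCase below is a hypothetical stand-in for junit.framework.TestCase, used only so this example runs without junit.jar on the classpath; the lifecycle it models (setUp, then the test method, then tearDown) is the same one JUnit drives.

```java
// A self-contained sketch of the TestCase recipe. MiniTestCase is NOT the
// real JUnit class; it is a minimal stand-in for illustration only.
public class MiniTestCaseDemo {

    static abstract class MiniTestCase {
        protected void setUp() { }     // override to build the fixture
        protected void tearDown() { }  // override to release resources

        // Mirrors JUnit's rule: runTest() { setUp(); <run the test>; tearDown(); }
        protected boolean run(Runnable test) {
            setUp();
            try {
                test.run();
                return true;           // test passed: no error thrown
            } catch (AssertionError e) {
                return false;          // test failed
            } finally {
                tearDown();            // always runs, like finally
            }
        }
    }

    // Hypothetical class under test.
    static class Counter {
        int count = 0;
        int increment() { return ++count; }
    }

    static class CounterTest extends MiniTestCase {
        Counter counter;

        protected void setUp() { counter = new Counter(); } // fresh fixture per test

        boolean testIncrement() {
            return run(() -> {
                if (counter.increment() != 1) throw new AssertionError();
                if (counter.increment() != 2) throw new AssertionError();
            });
        }
    }

    public static boolean demo() {
        return new CounterTest().testIncrement();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```

With JUnit installed you would extend junit.framework.TestCase instead, and the framework supplies this lifecycle plus the suite()/main() plumbing for you.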
    • Fixtures
      • A fixture is just some code you want run before every test
      • You get a fixture by overriding the method
        • protected void setUp() { …}
      • The general rule for running a test is:
        • protected void runTest() { setUp(); <run the test> tearDown(); }
        • so we can override setUp and/or tearDown, and that code will be run prior to or after every test case
    • Implementing setUp() method
      • Override setUp () to initialize the variables, and objects
      • Since setUp() is your code, you can modify it any way you like (such as creating new objects in it)
      • Reduces the duplication of code
    • Implementing the tearDown() method
      • In most cases, the tearDown() method doesn’t need to do anything
        • The next time you run setUp() , your objects will be replaced, and the old objects will be available for garbage collection
        • Like the finally clause in a try-catch-finally statement, tearDown() is where you would release system resources (such as streams)
    • The structure of a test method
      • A test method doesn’t return a result
      • If the tests run correctly, a test method does nothing
      • If a test fails, it throws an AssertionFailedError
      • The JUnit framework catches the error and deals with it; you don’t have to do anything
    • Test suites
      • In practice, you want to run a group of related tests (e.g. all the tests for a class)
      • To do so, group your test methods in a class which extends TestCase
      • Running suites we will see in examples
    • assertX methods
      • static void assertTrue(boolean test)
      • static void assertFalse(boolean test)
      • assertEquals(expected, actual)
      • assertSame(Object expected, Object actual)
      • assertNotSame(Object expected, Object actual)
      • assertNull(Object object)
    • assertX methods
      • assertNotNull(Object object)
      • fail()
      • All of the above may take an optional String message as the first argument, for example: static void assertTrue(String message, boolean test)
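The assertX methods all boil down to "do nothing on success, throw on failure". Below is an illustrative sketch of that behavior in plain Java, not JUnit's actual source (JUnit throws its own AssertionFailedError; AssertionError stands in here):

```java
// Illustrative re-implementations of a few JUnit-style assertX methods.
public class MiniAssert {
    public static void assertTrue(String message, boolean test) {
        if (!test) throw new AssertionError(message); // fail: throw
    }

    public static void assertTrue(boolean test) {
        assertTrue("expected true", test);            // pass: do nothing
    }

    public static void assertEquals(Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError("expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void fail(String message) {
        throw new AssertionError(message);            // unconditional failure
    }

    public static void main(String[] args) {
        assertTrue(1 + 1 == 2);            // passes silently
        assertEquals("abc", "ab" + "c");   // passes silently
        try {
            fail("forced failure");
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This is why a passing test method "does nothing": every assertion that holds simply returns, and the framework only hears about the ones that don't.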
    • Organize The Tests
      • Create test cases in the same package as the code under test
      • For each Java package in your application, define a TestSuite class that contains all the tests for validating the code in the package
      • Define similar TestSuite classes that create higher-level and lower-level test suites in the other packages (and sub-packages) of the application
      • Make sure your build process includes the compilation of all tests
    • JUnit framework
    • Writing a test
      • Directions can be found in the JUnit cookbook - http://junit.sourceforge.net/doc/cookbook/cookbook.htm
      • Summary
        • Create a subclass of TestCase
        • Write methods which run your tests, calling them test<Foo>
        • Call a test runner with your test
        • When you want to check a value, call an assert method (e.g. assertTrue()) and pass a condition that is true if the test succeeds
    • Example: Counter class
      • For the sake of example, we will create and test a trivial “counter” class
        • The constructor will create a counter and set it to zero
        • The increment method will add one to the counter and return the new value
        • The decrement method will subtract one from the counter and return the new value
    • Example: Counter class
      • We write the test methods before we write the code
        • This has the advantages described earlier
        • Depending on the JUnit tool we use, we may have to create the class first, and we may have to populate it with stubs (methods with empty bodies)
      • Don’t be alarmed if, in this simple example, the JUnit tests are more code than the class itself
    • JUnit tests for Counter
        public class CounterTest extends junit.framework.TestCase {
            Counter counter1;

            public CounterTest() { } // default constructor

            protected void setUp() { // creates a (simple) test fixture
                counter1 = new Counter();
            }

            protected void tearDown() { } // no resources to release
    • JUnit tests for Counter…
            public void testIncrement() {
                assertTrue(counter1.increment() == 1);
                assertTrue(counter1.increment() == 2);
            }

            public void testDecrement() {
                assertTrue(counter1.decrement() == -1);
            }
        } // end of class from the last slide
    • The Counter class itself
        public class Counter {
            int count = 0;

            public int increment() { return ++count; }
            public int decrement() { return --count; }
            public int getCount() { return count; }
        }
    • Result
      • We will see the results with the help of the tool
      • When? Now!
    • Why JUnit
      • Allows you to write code faster while increasing quality
      • Elegantly simple
      • Tests check their own results and provide immediate feedback
      • Tests are inexpensive
      • Increases the stability of software
      • Developer tests
      • Written in Java
      • Free
      • Gives a proper understanding of unit testing
    • Problems with unit testing
      • JUnit is designed to call methods and compare the results they return against expected results
        • This ignores:
          • Programs that do work in response to GUI commands
          • Methods that are used primarily to produce output
    • Problems with unit testing…
      • I think heavy use of JUnit encourages a “functional” style, where most methods are called to compute a value, rather than to have side effects
        • This can actually be a good thing
        • Methods that just return results, without side effects (such as printing), are simpler, more general, and easier to reuse
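A small illustration of the point above, with hypothetical methods: the version that returns its result can be asserted against directly, while the version that only prints cannot.

```java
// Contrast a side-effecting method with a "functional" one that
// returns its result, which is what JUnit-style testing favors.
public class FunctionalStyle {
    // Hard to unit test: the result goes straight to System.out.
    public static void printTotal(int[] xs) {
        int total = 0;
        for (int x : xs) total += x;
        System.out.println("Total: " + total);
    }

    // Easy to unit test: the result is returned; the caller decides
    // whether to print it, store it, or assert on it.
    public static int total(int[] xs) {
        int t = 0;
        for (int x : xs) t += x;
        return t;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(total(data)); // 6
        printTotal(data);                // Total: 6
    }
}
```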
    • JUnit add-ons
      • JUnit has a number of add-ons written for it (your mileage may vary)
      • Examples
        • Code generators – make guesses at what tests are needed
        • Load testers
        • Code coverage
        • Testers for thread safety
        • ‘ Helpful classes’ (e.g. RecursiveTestSuite is a TestSuite that will recursively walk through the directory structure looking for subclasses of TestCase and will add them.)
    • Cactus, HttpUnit, Mock Objects
      • Cactus (from jakarta) is a simple test framework for unit testing server-side java code (Servlets, EJBs, Tag Libs, Filters, ...).
      • HttpUnit emulates the relevant portions of browser behavior to allow automated testing (on sourceforge)
      • Mock objects emulate objects from other parts of the application, but have controlled behaviour
    • Resources
      • Web sites
        • Junit - http://www.junit.org
        • CppUnit - http://cppunit.sourceforge.net/
        • Ant - http://jakarta.apache.org/ant/index.html