
OCAT: Object Capture based Automated Testing (ISSTA 2010)


ISSTA 2010 presentation


  1. OCAT: Object Capture based Automated Testing. Hojun Jaygarl, Carl K. Chang (Iowa State University); Sunghun Kim (The Hong Kong University of Science and Technology); Tao Xie (North Carolina State University). ISSTA 2010.
  2. Problem: Generating object inputs is hard in object-oriented (OO) unit testing.
  3. Automated Test Generation (ATG): automatically generate test inputs for a unit, reducing manual effort in OO unit testing.
  4. Two Main Types of ATG Techniques. Direct object construction: directly assign values to the fields of an object instance (e.g., Korat [ISSTA'02]). Method sequence generation: generate method sequences that can produce an object instance under construction (e.g., Randoop [ICSE'07], Pex [TAP'08]).
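To make the contrast concrete, here is a minimal Java sketch (our illustration, not code from the paper) of the two styles on a hypothetical `BankAccount` class: direct construction writes field state via reflection, while sequence generation reaches the same state through public calls.

```java
import java.lang.reflect.Field;

// Hypothetical class under test (not from the paper); invariant: balance >= 0.
class BankAccount {
    private int balance;
    public void deposit(int amount) { balance += amount; }
    public int getBalance() { return balance; }
}

public class AtgStyles {
    // Direct object construction (Korat-style): assign field values directly,
    // bypassing the public API. A real tool would enumerate valuations that
    // satisfy the class invariant; here we just set one field.
    static BankAccount byDirectConstruction(int value) throws Exception {
        BankAccount a = new BankAccount();
        Field f = BankAccount.class.getDeclaredField("balance");
        f.setAccessible(true);
        f.setInt(a, value);                  // write the field state directly
        return a;
    }

    // Method sequence generation (Randoop/Pex-style): reach the same state
    // through a sequence of public constructor and method calls.
    static BankAccount byMethodSequence(int value) {
        BankAccount a = new BankAccount();   // sequence step 1: constructor
        a.deposit(value);                    // sequence step 2: mutator call
        return a;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(byDirectConstruction(42).getBalance()); // prints 42
        System.out.println(byMethodSequence(42).getBalance());     // prints 42
    }
}
```

Both styles produce the same object state; the trade-off is that direct construction needs invariants to avoid invalid states, while sequence generation needs a sequence that actually reaches the state.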
  5. Results of the State of the Art in ATG. Pex: automatically generates test inputs via dynamic symbolic execution; 21% branch coverage [Thummalapenta et al. FSE'09] on QuickGraph, a C# graph library. Randoop: automatically generates method sequences, random but feedback-directed; 58% branch coverage [Thummalapenta et al. FSE'09], and 45% according to our evaluation on Apache Commons Collections 3.2.
  6. Causes of Not-Covered Branches. We ran Randoop on three projects (Apache Commons, XML Security, and JSAP), randomly selected 10 source-code files from each subject, and investigated the causes of the uncovered branches:

| Cause of not-covered branches | # of branches | Explanation |
|---|---|---|
| Insufficient object | 135 (46.3%) | Unable to generate the desirable object instances required to cover certain branches. |
| String comparison | 61 (20.9%) | Difficult to randomly find a desirable string to satisfy such constraints, since the input space of string values is huge. |
| Container object access | 39 (13.4%) | Not easy to create a container of a certain size with the necessary elements. |
| Array comparison | 25 (8.6%) | Not easy to create an array of a certain size with the necessary elements. |
| Exception branches | 18 (6.1%) | These branches belong to exception-handling code that handles run-time errors. |
| Environmental setting | 9 (3.1%) | Hard to control environment variables and file-system structure. |
| Non-deterministic branch | 4 (1.3%) | Hard to handle multi-threading and user interactions. |

The first four causes together account for almost 90% of the not-covered branches.
  7. Not-Covered Branch Example: checks the validity of "doc".
  8. Limitations of Current Approaches. Korat: requires manual effort for writing class invariants and value domains. Pex: uses built-in simple heuristics for generating fixed sequences, which are often ineffective. Randoop: a random approach cannot generate the relevant sequences that produce desirable object instances.
  9. Idea: a practical approach (high coverage) that reflects real usage with an easier process: Object Capture based Automated Testing (OCAT).
  10. OCAT Overview.
  11. Capturing Process: instrument the target program, then capture objects from program execution and from system-test execution.
  12. Instrumentation: bytecode instrumentation.
  13. Serialization: obtain the type and concrete state of each captured object, and prune redundant objects using a concrete-state representation for state-equality checking [Xie et al. ASE'04].
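As a rough sketch of what the capture-and-prune step might look like: instrumented bytecode inserts calls to a hook like the one below at method entries. The class and method names (`ObjectCapturer`, `capture`, `stateOf`) are our assumptions, and the concrete-state string is a deliberately crude, non-recursive stand-in for the state representation of [Xie et al. ASE'04].

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative captured type (hypothetical, not from the paper).
class Account {
    int balance;
    Account(int b) { balance = b; }
}

public class ObjectCapturer {
    private static final Set<String> seenStates = new LinkedHashSet<>();
    private static final List<Object> captured = new ArrayList<>();

    // Instrumentation would insert capture(arg) at method entry for each
    // object-typed argument; state-equal duplicates are pruned here.
    public static void capture(Object o) {
        if (seenStates.add(stateOf(o))) {
            captured.add(o);
        }
    }

    // Crude concrete-state representation: class name plus shallow field
    // values. A real implementation would recurse into reachable objects.
    static String stateOf(Object o) {
        if (o == null) return "null";
        StringBuilder sb = new StringBuilder(o.getClass().getName());
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                sb.append('|').append(f.getName()).append('=').append(f.get(o));
            } catch (IllegalAccessException e) {
                sb.append('|').append(f.getName()).append("=?");
            }
        }
        return sb.toString();
    }

    public static int capturedCount() { return captured.size(); }

    public static void main(String[] args) {
        capture(new Account(10));
        capture(new Account(10));   // state-equal to the first one: pruned
        capture(new Account(20));
        System.out.println(capturedCount()); // prints 2
    }
}
```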
  14. Serialized Object Example: a list that holds bank accounts.
  15. Object Generation: feed captured objects to an existing automated method sequence generation technique. Pipeline: execution → object capturing → captured objects → method sequence generation → generated sequences and generated objects.
  16. Usage of Captured Objects: captured objects are de-serialized and used as test inputs. Direct usage: a captured instance of A is passed to foo(A), which returns a new, evolved instance of A. Indirect usage: a captured instance of A is passed to bar(A), which returns a new instance of B.
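The two usage patterns can be sketched as follows; the types and method names (`Doc`, `foo`, `bar`) are illustrative stand-ins, not the paper's code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical captured type standing in for a de-serialized object.
class Doc {
    List<String> nodes = new ArrayList<>();
}

public class CapturedUsage {
    // Direct usage: the unit under test consumes a captured Doc and yields an
    // evolved Doc instance, which can itself serve as a new test input.
    static Doc foo(Doc d) {
        d.nodes.add("evolved");
        return d;
    }

    // Indirect usage: the captured Doc is only a stepping stone; the call
    // produces an instance of a different type (here, an Integer summary).
    static Integer bar(Doc d) {
        return d.nodes.size();
    }

    public static void main(String[] args) {
        Doc captured = new Doc();          // stands in for a de-serialized object
        captured.nodes.add("a");
        Doc evolved = foo(captured);       // evolved object of the same type
        Integer derived = bar(evolved);    // new object of a different type
        System.out.println(derived);       // prints 2
    }
}
```

Both results, the evolved `Doc` and the derived `Integer`, enlarge the pool of objects available to later generated sequences.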
  17. Method Sequence with Captured Objects.
  18. Evaluation: Setup.
  19. Evaluation: Captured Objects. Q1: How much can OCAT improve code coverage through captured object instances? Branch coverage improved by 19.0% from a baseline of 45.2%.
  20. Evaluation: Captured Objects. Branch coverage improved by 28.5% and 17.3% from baselines of 29.6% and 54.1%, respectively.
  21. Object Mutation.
  22. Approach: Object Mutation (MTT), illustrated on AbstractReferenceMap.
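A rough sketch of reflection-based object mutation under our own simplifications (this is not OCAT's actual mutation operator set): perturb the fields of a captured instance so the mutant may drive not-yet-covered branches. The captured type below is hypothetical, loosely echoing the AbstractReferenceMap example.

```java
import java.lang.reflect.Field;

// Illustrative captured state (hypothetical, not the real AbstractReferenceMap).
class RefMapState {
    boolean purgeValues = false;
    int keyType = 0;
    String name = "captured";
}

public class ObjectMutator {
    // Mutate in place: negate booleans, nudge ints (0 -> 1, n -> -n), and null
    // out references. A real mutator would apply richer, type-aware operators.
    static void mutate(Object o) {
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            Class<?> t = f.getType();
            try {
                if (t == boolean.class) {
                    f.setBoolean(o, !f.getBoolean(o));
                } else if (t == int.class) {
                    int v = f.getInt(o);
                    f.setInt(o, v == 0 ? 1 : -v);
                } else if (!t.isPrimitive()) {
                    f.set(o, null);   // crude reference mutation
                }
            } catch (IllegalAccessException e) {
                // skip fields we cannot access
            }
        }
    }

    public static void main(String[] args) {
        RefMapState s = new RefMapState();
        mutate(s);
        System.out.println(s.purgeValues + " " + s.keyType + " " + s.name);
        // prints: true 1 null
    }
}
```

Each mutant is then fed back as a test input; mutants that trigger new branches are kept.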
  23. Evaluation: Mutated Objects. Q2: How much can mutated object instances further improve code coverage?
  24. Evaluation: Total. Branch coverage improved by 25.5% on average across the three subjects (32.7%, 22.9%, and 20.9%), with a maximum of 32.7%.
  25. Why Is It a Feasible Idea? Captured objects reflect real usage and have the potential to be desirable inputs that achieve new branch coverage; capturing objects is easy.
  26. Discussion. Object Capturing Process. Problem: OCAT's coverage depends on the captured objects; capturing objects is an easy process, but we still need to capture "good-enough" objects. Captured Objects and Software Evolution. Problem: software evolves and objects change, so captured object instances may become obsolete and no longer valid.
  27. Discussion. Branches to Cover. Problem: more than 20% of branches are still not covered. Possible remedies: cross-system object capturing (objects captured from system A can be used for system B); stronger static analysis (currently we use a simple static analysis); and an iterative process (the two phases, object generation and object mutation, can be applied iteratively).
  28. Threats to Validity. The software under test might not be representative: our three subjects may yield better or worse OCAT coverage than other software projects. Our object capturing relies on existing tests: the OCAT coverage reported in this paper depends on the quality and quantity of the existing tests.
  29. Conclusions. Problem: it is hard to generate desirable object instances in OO unit testing. The OCAT approach: capture objects from program execution, generate more objects with method sequence generation, and directly mutate captured objects to try to cover the not-yet-covered branches.
  30. Conclusions. Results: OCAT helps Randoop achieve high branch coverage, 68.5% on average, an improvement of 25.5% (maximum 32.7%) over the 43.0% achieved by Randoop alone. Future work: enhance the static-analysis component, apply captured objects to other state-of-the-art techniques (e.g., parameterized unit testing, concolic testing, and model checking), and release the tool.
  31. Thank you!