# Partitioned Based Regression Verification

Proposition of an alternative approach to regression testing.


### Partition-Based Regression Verification

1. Partition-Based Regression Verification [1]
   PRESENTED BY: AUNG THU RHA HEIN (5536871), BOONYA SUWANMANE (5436284), NATTACHART TAMKITTIKHUN (5637378)
   [1] MARCEL BÖHME, BRUNO C. D. S. OLIVEIRA, ABHIK ROYCHOUDHURY, SCHOOL OF COMPUTING, NATIONAL UNIVERSITY OF SINGAPORE; PUBLISHED AT ICSE'13, SAN FRANCISCO, USA
2. Outline
   - Introduction
   - Partition-Based Regression Verification
   - Empirical Study
   - Results and Analysis
   - Threats to Validity
   - Related Works
   - Discussion & Conclusion
3. Introduction – Software Regression
   - Software regression: software bugs that occur after changes to software functionality
   - Regression testing: selective retesting of a software system
   - Regression verification: verifying the correctness of a program relative to its earlier versions
4. Introduction – Motivation
   - Verify the absence of regression errors across software versions
   - Propose a more practical approach than classical regression verification
5. Introduction – Problem Statement (classical regression verification)
   - Requires specifications
   - The verification process is time-consuming
   - Verification is often only partial, with no intermediate guarantees
6. Introduction – Research Contributions
   - Introduces partition-based regression verification (PRV)
   - Proposes a differential partitioning technique
   - Provides an alternative to regression test generation techniques
7. Introduction – PRV
   - A gradual approach to regression verification based on the exploration of differential partitions
   - Verifies inputs partition by partition
   - Shares the advantages of regression verification and regression testing
8. Introduction – Differential Partitions
   - Paths are computed by symbolic execution
   - Requires deterministic program execution
9. Partition-Based Regression Verification
   - Emphasis: a regression error may interrupt or terminate execution
   - In the PRV experiments, execution continues and the error is returned as output
10. PRV: A) Computing Differential Partitions
    - Based on detecting the input values of the test suite that lead to a regression error
    - Output: a set of test cases and a set of path conditions
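The partitioning step can be sketched concretely. A toy illustration with hypothetical versions `p` and `p_changed`, where brute-force enumeration of a small input domain stands in for the symbolic execution the paper actually uses:

```python
# Toy sketch of differential partitioning (hypothetical versions;
# brute-force enumeration stands in for symbolic execution).

def p(x):            # original version P
    if x > 10:
        return x * 2
    return x + 1

def p_changed(x):    # changed version P'
    if x > 10:
        return x * 3   # changed statement
    return x + 1

def differential_partitions(domain):
    """Group inputs by (path condition, outputs-equal); a partition
    whose outputs differ between the versions is difference-revealing."""
    partitions = {}
    for x in domain:
        path = "x>10" if x > 10 else "x<=10"
        same = p(x) == p_changed(x)
        partitions.setdefault((path, same), []).append(x)
    return partitions

parts = differential_partitions(range(21))
# inputs 0..10 take the unchanged path, so both versions agree;
# inputs 11..20 reach the changed statement and reveal a difference
```

Each partition groups inputs that take the same path and agree (or disagree) in output, which is the sense in which PRV can verify inputs partition by partition.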
11. PRV: B) Computing Reachability Conditions
    - Based on the detected conditions
    - Output: a condition over the input values that classifies the change as either reachable or unreachable
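As a toy illustration of a reachability condition (the change site and its guards below are hypothetical, not from the paper):

```python
# Sketch of a reachability condition. The changed statement sits behind
# two guards, so its reachability condition is the conjunction of the
# branch conditions on the path to it.

def reaches_change(x):
    """Concrete execution: does input x reach the changed statement?"""
    if x > 0:
        if x % 2 == 0:
            return True   # change site reached
    return False

def reachability_condition(x):
    """The symbolic condition a PRV-style analysis would derive."""
    return x > 0 and x % 2 == 0

# Inputs violating the condition are guaranteed not to reach the
# change (the "unreachable" case).
```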
12. PRV: C) Computing Propagation Conditions
    - Based on detecting where the differential program states converge
    - Output: the statement instance at line Ni (i is the line number)
13. PRV: D) Computing Difference Conditions
    - Based on detecting where the differential program states converge
    - Algorithm: the same as C), plus a value-comparison step that checks whether the converged outputs differ
    - Output: a set of changed statement instances at line Ni whose outputs differ at the convergence line
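A minimal sketch of how a difference condition relates to the convergence point (hypothetical versions; real PRV derives the condition symbolically rather than by executing both versions):

```python
# Sketch of propagation to a convergence point and the resulting
# difference condition (illustrative versions, not from the paper).

def orig(x):                # original version P
    y = x + 1 if x < 0 else x * 2
    return y                # convergence point: both versions return here

def changed(x):             # changed version P'
    y = x + 1 if x < 0 else x * 2 + 1   # changed statement
    return y

def difference_condition(x):
    """True iff the differential state propagates to the convergence
    point with different values, i.e. the change is observable."""
    return orig(x) != changed(x)

# Here the difference condition simplifies to x >= 0: for x < 0 the
# differential states converge with equal values (no observable change).
```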
14. PRV: E) Generating Adjacent Test Cases
    - Based on detecting where the differential program states converge
    - Algorithm: if an adjacent condition has been deleted, check whether the program still computes an output beyond the existing condition; if a condition has been added, replaced, or reordered, check whether the program computes an output; if it does, determine whether the outputs are the same or different at the convergence statement
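The adjacent-test-case idea can be sketched as negating the last conjunct of the explored path condition and solving for a new input (a brute-force search below stands in for a constraint solver; all names and conditions are illustrative):

```python
# Sketch of adjacent test-case generation: keep the prefix of the
# explored path condition, negate its last conjunct, and solve.

def solve(pred, domain=range(-50, 51)):
    """Return some input satisfying pred, or None (toy solver)."""
    for x in domain:
        if pred(x):
            return x
    return None

# Path condition of the test case explored so far: (x > 10) and (x < 20)
explored = [lambda x: x > 10, lambda x: x < 20]

def adjacent_test_case(path):
    """Keep all but the last conjunct, negate the last one."""
    prefix, last = path[:-1], path[-1]
    return solve(lambda x: all(c(x) for c in prefix) and not last(x))

t = adjacent_test_case(explored)   # satisfies x > 10 and not (x < 20)
```

The new input drives execution down the neighboring path, so the partitions adjacent to the one just verified get explored next.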
15. PRV: F) Theorems
    - In practice, the absence of regression errors can be guaranteed for all inputs to the same extent as symbolic execution can guarantee the absence of program errors. Specifically, the authors assume deterministic program execution.
16. Empirical Study
    - Evaluates the relative efficiency of PRV and discusses its practicability based on the authors' experience
    - Does not prove the scalability of PRV, which suffers from the same limitations as symbolic execution
    - However, PRV can benefit from optimizations such as domain reduction, parallelization, and better search strategies
17. Empirical Study – Setup and Infrastructure
    - Built into the authors' dynamic backward slicing tool JSlice
    - Differential partitions are explored in a breadth-first manner, starting from the same initial input, within 5 minutes unless stated otherwise
    - Every version of the same subject uses the same test driver to construct the necessary inputs
    - Subject programs are analyzed on a desktop computer with an Intel 3 GHz quad-core processor and 4 GB of memory
18. Empirical Study – Subject Programs
    - Subject programs were chosen according to 2 criteria: they represent a variety of evolving programs, and they are discussed in related work (which allows comparison with the authors' own experimental results)
    - There are 83 versions of programs ranging from 20 to almost 5000 lines of code
    - Some are derived by seeding faults into the original versions (mutants); some are real versions that were committed to a version control system
19. Empirical Study – Subject Programs (slide shows the table of subject programs)
20. Empirical Study – Subject Programs
    - The authors compare their empirical results with those of the references discussing regression verification and regression test generation
    - No empirical results are available for the regression test generation techniques and differential symbolic execution
    - All programs are tested as whole programs, except for Apache CLI, whose command-line component was tested for regression
21. Empirical Study – Research Questions
    - RQ1: How efficiently does PRV find the first input that exposes a semantic difference?
    - RQ2: How efficiently does PRV find the first input that exposes software regression?
    - RQ3: How practical is PRV in an example usage scenario?
22. Results and Analysis – RQ1: Efficiency – Semantic Difference
    - Two aspects are measured when searching for the first difference-revealing input:
      - Average time (runs exceeding 5 minutes are not included)
      - Mutation score: the fraction of versions for which a difference-revealing input can be found within 5 minutes
23. Results and Analysis – RQ1: Efficiency – Semantic Difference (slide shows the results table)
24. Results and Analysis – RQ1: Efficiency – Semantic Difference
    - Answer to RQ1: PRV generates a difference-revealing test case on average for 21% more version pairs in 41% less time than the eXpress-like approach that analyzes only the changed version P'.
25. Results and Analysis – RQ2: Efficiency – Software Regression
    - In practice, not every difference-revealing test case reveals software regression
    - A difference-revealing test case can be checked formally or informally against the programmer's expectation
26. Results and Analysis – RQ2: Efficiency – Software Regression (slide shows the results table)
27. Results and Analysis – RQ2: Efficiency – Software Regression
    - Answer to RQ2: PRV generates a regression-revealing test case on average for 48% more version pairs in 63% less time than the eXpress-like approach that analyzes only the changed version P'.
28. Results and Analysis – RQ3: Practicability – Usage Scenario: Apache CLI
    - Apache CLI is used to evaluate PRV in a practical usage scenario
    - PRV generates difference-revealing test cases within a bound of 20 minutes for every version pair
    - A developer checks these test cases for regression and relates the regression-revealing test cases to changes that semantically interfere
29. Results and Analysis – RQ3: Practicability – Usage Scenario: Apache CLI
    - Answer to RQ3: For the evolution of Apache CLI over 6 years, tests generated as witnesses of the differential behavior of 2 successive versions suggest an average progression of 49%, regression of 18%, and intermediate semantic changes of 33% towards the latest revision.
30. Threats to Validity
    - Main threat to internal validity: the implementation of PRV into JSlice; mitigated by using the same implementation to gather results for the DART-like and eXpress-like approaches
    - Main threat to external validity: the generalization of the results; the limited choice and number of subjects does not suggest generalizability
    - The subjects serve mainly as a comparison to relevant works and give an idea of the practicability of PRV
31. Related Works
    - Regression Verification (RV): based on semantic equivalence; time-consuming; no intermediate guarantees
    - Differential Symbolic Execution (DSE): based on symbolic summaries; less scalable
    - Regression Test Generation (RTG): constructs sample inputs that can expose software regression
32. Discussion & Conclusion
    - Introduces the differential partitioning technique
    - Enables partial verification while retaining regression guarantees
    - Detects regression errors more efficiently
33. Thank you. Questions?