Testing survey by_directions

Questions Unsolved

1. What is the current plan?
   a) Collect and summarize the ideas in my notebook.
   b) Xinming Wang's classification of code omission!!! (This has the same goal as Prof. Zhou's small program. Wang's method will not be good for all subcategories.)
   c) Read the papers we are likely to cite.
   d) What is the framework of their implementation?
   e) Work out the design and list every possible experimental method.
   f) Complete the experiments one by one and provide evidence.
2. What is the goal?
   a) Thesis proposal: Our Approach

1. Evaluation methods and datasets
   a) Siemens
   b) Unix
   c) Are there other datasets?
   d) Which program suites are common for Java?
      i. NanoXML
2. Get familiar with faults and classify them
   a) Coincidental correctness
   b) Code omission
   c) Multiple faults
3. Get familiar with runs of test cases
   a) Classification of run information
      i. sum{covered statements}
      ii. Coverage
      iii. Execution counts
      iv. Trace
      v. Semantics
      vi. Slice
      vii. State
      viii. Predicates
      ix. Symbolic execution
      x. PDG
      xi. AST
      xii. CFG
   b) Draw statistics charts: distribution, variance, mean.
4. Get familiar with test cases
   a) How do we measure the distance or similarity between test cases?
      i. Input
      ii. Coverage
      iii. Number of covered statements
      iv. Similarity of coverage counts
   b) How do we judge whether one test case is more likely to reveal a fault?
      i. Evaluation of test cases.
5. Learn the tools
   a) gcov
   b) weka
   c) Eclipse plugins
6. Directions to work on
   a) Fault localization for socket communication
   b) Fault localization in loops (e.g., <=3 written as <3)
   c) Fault localization in recursion
   d) Test-suite adequacy (the paper saying only about 20 correct test cases are needed)
   e) Improve results overall by proposing a new formula
   f) Label the different kinds of faults
   g) Propose a formula targeted at one special kind of fault
   h) Remove some of the similar test cases
   i) Build a logical model after clustering (cluster using run.covered_statements.length)
   j) Logical-combination coverage of predicate clauses? The essence of program coverage information is the branch coverage of conditional statements.
   k) The fault lies in the backward slice of the suspicious predicate clause.
   l) The statements not executed by a failed run are all correct statements.
7. Summary of the current directions
   a) Runs
      i. Assign a weight to each run.
      ii. Cluster the runs.
      iii. Remove some extremely similar runs.
      iv. Remove the runs that are very likely coincidentally correct (remove the passed runs closest to a failed run).
      v. Apply set operations to the statements covered by runs: intersection (high weight), union (low weight), complement (negative weight). Categories: Passed-Covered, Passed-Uncovered, Failed-Covered, Failed-Uncovered.
   b) Logical-combination coverage of predicates, then slicing (a combination of CBFL and slicing). This is because the essence of coverage lies in the conditional statements (although the outcome of a condition is affected by the preceding assignment statements).
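The four coverage-spectrum categories listed under item 7 (Passed-Covered, Passed-Uncovered, Failed-Covered, Failed-Uncovered) can be computed directly from run data. A minimal sketch, with invented runs standing in for real gcov output:

```python
# Sketch of the four coverage-spectrum counts from item 7(a)(v):
# for each statement, how many passed/failed runs cover it or miss it.
# The runs below are hypothetical; real data would come from gcov traces.

runs = [
    {"covered": {1, 2, 3}, "passed": True},
    {"covered": {1, 3, 4}, "passed": True},
    {"covered": {1, 2, 4}, "passed": False},
]
statements = {1, 2, 3, 4}

def spectrum(runs, statements):
    counts = {s: {"pc": 0, "pu": 0, "fc": 0, "fu": 0} for s in statements}
    for run in runs:
        for s in statements:
            covered = s in run["covered"]
            if run["passed"]:
                counts[s]["pc" if covered else "pu"] += 1
            else:
                counts[s]["fc" if covered else "fu"] += 1
    return counts

print(spectrum(runs, statements))
```

The weighted set operations of item 7 (intersection, union, complement) all reduce to arithmetic over these four per-statement counters.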
Questions Solved

Questions
1. What is the background?
2. What assumptions are these approaches based on?
3. Can you tell me what the best approach in this area is? Who proposed it?
4. Can you list some motivating examples for the approach?
5. The ideas are trivial. What is the biggest challenge in these approaches?
6. What is the approach's IPO (Input-Process-Output)? Can you give me an example?
7. What are the paper's contributions?
8. The result is better. Can you explain why? What is different from related work?
9. How does one evaluate in this area, including methods, benchmarks, and convincing reasons?
10. Can you find the design space of this area?
11. What can we learn from the author's survey?
12. Can we make some breakthroughs? What is our future work?

Test

Note template (one per paper):
[]
Annotation
Keyword
Abstract
Background
Motivation
Solution
Contribution
Evaluation
Total Pages | Value | Understanding | Last Read
Question: Method/Means | Characterization | Evaluation
Result: Technique | Analytic Model
Validation: Analysis | Persuasion | Experience
Test Case Generation

[McM04] Search-based software test data generation: a survey

McMinn, P. (2004), Search-based software test data generation: a survey. Software Testing, Verification and Reliability, 14: 105-156.

Annotation
The paper gives us a fairly comprehensive overview of search-based test generation. The author first introduces the motivation for automated testing, as well as the problems researchers have to face. In the second chapter of the paper, several general search techniques are introduced. Through the next four chapters, the author classifies the different types of search-based test generation. The classification is based on the different types of testing, namely structural testing, functional testing, grey-box testing, and non-functional testing. The author also classifies each testing type in more detail, with a number of comprehensive examples. The classification is impressive and helps greatly in understanding the position of each piece of research.

Keyword
search-based software engineering; automated software test data generation;

Abstract

Background
The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years.
Motivation
Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that, in general, test data generation is an undecidable problem.

Solution
Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high-level frameworks, which utilize heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey-box properties, for example safety constraints; and also non-functional properties, such as worst-case execution time.

Contribution
This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas.

Total Pages: 52 | Value: High | Understanding: Normal | Last Read: 2010.09.24
Question: Characterization | Result: Analytic Model | Validation: Persuasion

[Edv99] A survey on automatic test data generation

Jon Edvardsson. A survey on automatic test data generation. In Proceedings of the Second Conference on Computer Science and Engineering in Linköping (October 1999), pp. 21-28.

Annotation
A program-based test data generator is one component used to automate software testing. The paper begins by showing the architecture of a typical test data generator system and some basic concepts, such as control flow graph, basic block, and branch predicate. In the next chapter, the author classifies test data generators into four kinds: static and dynamic test data generation, random test data generation, goal-oriented test data generation, and path-oriented test data generation. The author also discusses some problems of test data generation, which involve arrays and pointers, objects, loops, modules, infeasible paths, constraint satisfaction, and oracles.

Keyword
Program-based Test Generation

Abstract

Outline
1. Introduction
2. Basic Concepts
3. An Automatic Test Data Generator System
   a) The Test Data Generator
      i. Static and Dynamic Test Data Generation
      ii. Random Test Data Generation
      iii. Goal-Oriented Test Data Generation
      iv. Path-Oriented Test Data Generation
   b) The Path Selector's path criteria
      i. Statement coverage
      ii. Branch coverage
      iii. Condition coverage
      iv. Multiple-condition coverage
      v. Path coverage
4. Problems of Test Data Generation
   a) Arrays and Pointers
   b) Objects
   c) Loops
   d) Modules
   e) Infeasible Paths
   f) Constraint Satisfaction
   g) Oracle
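Branch coverage, one of the path criteria listed above, is also the usual target of the search-based techniques McMinn surveys: a fitness function (the branch distance) guides a local search toward an input that takes the desired branch. A toy sketch, with an invented program under test and fitness function:

```python
# Toy illustration of search-based test data generation (cf. [McM04]):
# hill-climb an integer input until it satisfies the target branch.
# The program under test and the fitness function are invented examples.

def program_under_test(x):
    if x == 42:          # target branch we want a test input for
        return "target"
    return "other"

def branch_distance(x):
    # Distance for the predicate "x == 42": zero when satisfied,
    # otherwise proportional to how far the input is from satisfying it.
    return abs(x - 42)

def hill_climb(start, max_steps=1000):
    x = start
    for _ in range(max_steps):
        if branch_distance(x) == 0:
            return x
        # Move to the neighbor with the smaller branch distance.
        x = min((x - 1, x + 1), key=branch_distance)
    return None

test_input = hill_climb(start=0)
print(test_input, program_under_test(test_input))  # 42 target
```

Real search-based generators use the same shape with richer neighborhoods and metaheuristics (simulated annealing, genetic algorithms) in place of this naive climb.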
Background
In order to reduce the high cost of manual software testing, and at the same time to increase the reliability of the testing process, researchers and practitioners have tried to automate it. One of the most important components in a testing environment is an automatic test data generator: a system that automatically generates test data for a given program.

Motivation
The focus of this article is program-based generation, where the generation starts from the actual programs.

Solution
In this article I present a survey on the automatic test data generation techniques that can be found in current literature.
Contribution
Basic concepts and notions of test data generation, as well as how a test data generator system works, are described. Problems of automatic generation are identified and explained. Finally, important and challenging future research topics are presented.

Total Pages: 8 | Value: Normal | Understanding: Normal | Last Read: 2010.09.24
Question: Characterization | Result: Analytic Model | Validation: Persuasion

[GGJ+10] Test generation through programming in UDITA

Milos Gligoric, Tihomir Gvero, Vilas Jagannath, Sarfraz Khurshid, Viktor Kuncak, Darko Marinov. Test generation through programming in UDITA. Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, ICSE 2010, Cape Town, South Africa, 1-8 May 2010.

Annotation
Generating test inputs for complex data structures is time-consuming and results in test suites that are of poor quality and difficult to reuse. The authors present a new language for describing tests, UDITA, a Java-based language with non-deterministic choice operators and an interface for generating linked structures. From this work we can learn the trade-offs in this area: how easy the specification is to write, how fast tests are generated (efficiency), how good the tests are (effectiveness), and how complex the tests are.

Keyword
test input generation; specification-based;

Abstract

Background
The consequences of software bugs become more severe, while widely adopted testing tools offer little support for test generation.

Motivation
Practical applications of these techniques were largely limited to testing units of code much smaller than a hundred thousand lines, or to generating input values much simpler than representations of Java programs. This means these techniques cannot generate inputs with complex data structures.

Solution
The authors present an approach for describing tests using non-deterministic test generation programs. They introduce UDITA, a Java-based language with non-deterministic choice operators and an interface for generating linked structures. Furthermore, they describe new test generation algorithms and implement their approach on top of Java PathFinder (JPF).

Contribution
1. A new language for describing tests
2. New test generation algorithms
3. Implementation
4. Evaluation

Evaluation
The authors evaluated UDITA with four sets of experiments: three for black-box testing and one for white-box testing. The first set of experiments, on six data structures (DAG, HeapArray, NQueens, RBTree, SearchTree, and SortedList), compares against base JPF test generation. The second set, on testing refactoring engines, compares UDITA with ASTGen. The third set uses UDITA to test parts of the UDITA implementation itself. For white-box testing, the fourth set compares UDITA with symbolic execution in Pex. The experiments show that test generation using UDITA is faster and leads to test descriptions that are easier to write than in previous frameworks.

Total Pages: 10 | Value: Normal | Understanding: Normal | Last Read: 2010-10-06
Question: Method/Means | Result: Technique | Validation: Analysis, Experience

[GGJ+09] On test generation through programming in UDITA

M. Gligoric, T. Gvero, V. Jagannath, S. Khurshid, V. Kuncak, and D. Marinov. On test generation through programming in UDITA. Technical Report LARA-REPORT-2009-05, EPFL, Sep. 2009.

Total Pages: 14 | Value: Normal | Understanding: Normal | Last Read: 2010-10-06
Question: Method/Means | Result: Technique | Validation: Analysis, Experience

Annotation
This is the technical report version of [GGJ+10], which offers more references, links, and graphs, without the page limit.

[BKM02] Korat: Automated testing based on Java predicates

Boyapati, C., Khurshid, S., and Marinov, D. 2002. Korat: automated testing based on Java predicates. In Proceedings of the 2002 ACM SIGSOFT International Symposium on Software Testing and Analysis (Roma, Italy, July 22-24, 2002). ISSTA 02. ACM, New York, NY, 123-133.

Annotation
A novel framework for test generation is proposed in this paper. Korat uses the method precondition, written in JML, to automatically generate nonisomorphic test cases. The key techniques in Korat are monitoring the predicate's executions, pruning portions of the search space using structural invariants, and generating only nonisomorphic inputs. Evaluation in this area usually involves the time of generation and the correctness and effectiveness of the generated tests.
Keyword
specification-based testing

Abstract

Background
Manual software testing and test data generation are labor-intensive processes. Korat uses specification-based testing.

Motivation
Can we use preconditions to generate test cases and postconditions to check the correctness of outputs?

Solution
Korat exhaustively explores the bounded input space of the predicate. However, Korat also monitors the predicate's executions and prunes portions of the search space. Korat uses the Java Modeling Language (JML) for specifications.

Contribution
1. A technique for automatic test case generation: given a predicate and a bound on the size of its inputs, Korat generates all nonisomorphic inputs for which the predicate returns true.
2. Korat uses backtracking to systematically explore the bounded input space of the predicate.
3. Korat monitors the accesses that the predicate makes to all the fields of the candidate input to prune large portions of the search space.

Evaluation
This paper presents Korat's performance, then compares Korat with the Alloy Analyzer for test case generation. The benchmarks are BinaryTree, HeapArray, LinkedList, TreeMap, HashSet, and AVTree. Some of them come from the standard Java libraries. The comparison with the Alloy Analyzer includes the number of structures and the time to generate them.
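Korat's bounded-exhaustive, generate-and-filter idea can be sketched in drastically simplified form. The sketch below omits Korat's actual contributions (field-access monitoring and isomorphism breaking), and the strict-sortedness invariant is an invented stand-in for a repOk predicate:

```python
from itertools import product

# Drastically simplified sketch of Korat's idea: enumerate a bounded
# input space and keep the candidates for which the repOk-style
# predicate returns true. Korat's real contribution -- pruning via
# monitored field accesses and isomorphism breaking -- is omitted here.

def rep_ok(candidate):
    # Invented invariant: strictly sorted sequence (a stand-in for a
    # structural invariant such as the binary-search-tree property).
    return all(a < b for a, b in zip(candidate, candidate[1:]))

def bounded_inputs(values, max_size):
    for size in range(max_size + 1):
        yield from product(values, repeat=size)

valid = [c for c in bounded_inputs((0, 1, 2), 2) if rep_ok(c)]
print(valid)  # [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
```

The difference between this naive enumeration and Korat is precisely the pruning: Korat skips whole regions of the candidate space that the predicate never even looked at.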
Total Pages: 11 | Value: High | Understanding: Well | Last Read: 2010.10.06
Question: Method/Means | Result: Technique | Validation: Analysis

[KM04] TestEra: Specification-Based Testing of Java Programs Using SAT

Sarfraz Khurshid, Darko Marinov. TestEra: Specification-Based Testing of Java Programs Using SAT. Automated Software Engineering 11(4): 403-434, 2004.

Annotation
This paper proposes a framework for automated specification-based testing of Java programs. Instead of JML [BKM02], the authors take Alloy to express the specification of the pre- and post-conditions of a method. Since Alloy is a first-order declarative language, the authors attempt to use a SAT solver to generate the test cases. The key idea behind TestEra is to automate the testing of Java programs, requiring only that the structural invariants of inputs and the correctness criteria for the methods be formally specified.

Keyword
test generation

Abstract

Background
TestEra is a framework for automated specification-based testing of Java programs.
Motivation
The search space is huge and nonisomorphism is hard to handle. In addition, enumeration of structurally complex data is not efficient.

Solution
TestEra requires as input a Java method (in source code or byte code), a formal specification of the pre- and post-conditions of that method, and a bound that limits the size of the test cases to be generated, expressed in Alloy, a first-order declarative language based on sets and relations. Using the method's pre-condition, TestEra automatically generates all nonisomorphic test inputs up to the given bound. It executes the method on each test input, and uses the method's post-condition as an oracle to check the correctness of each output. Because the specification is first-order, the authors use SAT solvers to help solve the problem. The key idea behind TestEra is to automate the testing of Java programs, requiring only that the structural invariants of inputs and the correctness criteria for the methods be formally specified. (The framework is shown in a figure in the paper.)

Evaluation
For each case study, the authors collect the method under test, a representative input size, and the phase 1 (i.e., input generation) and phase 2 (i.e., correctness checking) statistics of TestEra's checking for that size. The case studies include singly linked lists, red-black trees, INS (Information Network System), and the Alloy-alpha Analyzer.

Total Pages: 32 | Value: Normal | Understanding: Normal | Last Read: 2010.10.06
Question: Method/Means | Result: Technique | Validation: Experience
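TestEra's checking loop, where the precondition drives input generation and the postcondition is the oracle, can be sketched as follows. The method under test and both conditions are invented, and a brute-force input list stands in for TestEra's Alloy/SAT enumeration:

```python
# Sketch of the TestEra checking loop: the precondition filters inputs
# and the postcondition serves as the oracle. The method under test and
# both conditions are invented; TestEra itself expresses them in Alloy
# and enumerates nonisomorphic inputs with a SAT solver.

def precondition(xs):
    return sorted(xs) == list(xs)          # input list must be sorted

def method_under_test(xs, x):
    # Deliberately buggy insertion: appends instead of inserting.
    return list(xs) + [x]

def postcondition(result):
    return sorted(result) == result        # output must stay sorted

def check(inputs):
    failures = []
    for xs, x in inputs:
        if not precondition(xs):
            continue                       # skip inputs outside the spec
        if not postcondition(method_under_test(xs, x)):
            failures.append((xs, x))
    return failures

print(check([([1, 3], 2), ([1, 3], 4), ([3, 1], 0)]))  # [([1, 3], 2)]
```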
Symbolic execution

[Kin76] Symbolic execution and program testing

King, J. C. 1976. Symbolic execution and program testing. Commun. ACM 19, 7 (Jul. 1976), 385-394.

Annotation
This is the most cited paper on symbolic execution. The author introduces the basic notions of this program analysis technique. The main difficulty in symbolic execution is conditional branch-type statements. The paper takes a simple PL/I-style programming language to analyze the difficulty in detail. Using two typical examples, the author introduces a symbolic execution system based on the symbolic execution tree and a strategy for solving the conditional branch problem. Furthermore, the paper discusses program proving based on symbolic execution. Symbolic execution accepts symbolic inputs and produces symbolic formulas as output. The execution semantics is changed for symbolic execution, but neither the language syntax nor the individual programs written in the language are changed.

Keyword
symbolic execution; program testing

Abstract

Background
Instead of supplying the normal inputs to a program (e.g. numbers), symbolic execution supplies symbols representing arbitrary values. The execution proceeds as in a normal execution except that values may be symbolic formulas over the input symbols.

Motivation
The difficult, interesting issues arise during the symbolic execution of conditional branch-type
statements.

Solution
A particular system called EFFIGY, which provides symbolic execution for program testing and debugging, is also described. It interpretively executes programs written in a simple PL/I-style programming language. It includes many standard debugging features, the ability to manage and to prove things about symbolic expressions, a simple program testing manager, and a program verifier.

Evaluation
A brief discussion of the relationship between symbolic execution and program proving is also included.

Total Pages: 10 | Value: High | Understanding: Normal | Last Read: 2010.10.06
Question: Method/Means | Result: Technique | Validation: Persuasion

[DJDM09] ReAssert: Suggesting Repairs for Broken Unit Tests

Brett Daniel, Vilas Jagannath, Danny Dig, Darko Marinov. "ReAssert: Suggesting Repairs for Broken Unit Tests," pp. 433-444, 2009 IEEE/ACM International Conference on Automated Software Engineering, 2009.

Annotation
Changes to software cause tests to fail. This is the first published paper to suggest repairs to failing tests' code. The key challenge in repairing tests is to retain as much of the original test logic as possible. The authors propose several repair strategies: Replace Assertion Method, Invert Relational Operator, Replace Literal in Assertion, Replace with Related Method, Trace Declaration-Use Path, Accessor Expansion, Surround with Try-Catch, and Custom Repair Strategies. Notice that the repair only changes the test code (e.g. the code based on JUnit), not the code under test.
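The "Replace Literal in Assertion" strategy listed above can be illustrated with a toy sketch. Real ReAssert rewrites JUnit source code; this invented fragment only computes the suggested repair from an observed assertion failure:

```python
# Sketch of ReAssert's "Replace Literal in Assertion" strategy: when an
# equality assertion fails, suggest replacing the expected literal with
# the observed value, keeping the rest of the test logic intact. Real
# ReAssert rewrites JUnit source; this toy works on (expected, actual).

def suggest_repair(assertion_name, expected, actual):
    if expected == actual:
        return None                         # assertion passes, no repair
    return (f"replace literal {expected!r} with {actual!r} "
            f"in {assertion_name}")

# The failing "test" below is invented for illustration.
observed = 2 + 2
print(suggest_repair("assertEquals", 5, observed))
```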
Keyword
Software testing; Software maintenance

Abstract

Background
Developers often change software in ways that cause tests to fail. When this occurs, developers must determine whether failures are caused by errors in the code under test or in the test code itself. In the latter case, developers must repair failing tests or remove them from the test suite.

Motivation
Repairing tests is time-consuming but beneficial, since removing tests reduces a test suite's ability to detect regressions. Fortunately, simple program transformations can repair many failing tests automatically.

Solution
We present ReAssert, a novel technique and tool that suggests repairs to failing tests' code that cause the tests to pass. Examples include replacing literal values in tests, changing assertion methods, or replacing one assertion with several. If the developer chooses to apply the repairs, ReAssert modifies the code automatically.
Contribution
This paper makes contributions in idea, technique, tool, and evaluation.

Evaluation
First, we describe two case studies in which researchers used ReAssert to repair failures in their evolving software.
Second, we perform a controlled user study to evaluate whether ReAssert's suggested repairs
match developers' expectations.
Third, we assess ReAssert's ability to suggest repairs for failures in open-source projects, considering both manually written and automatically generated test suites.

Total Pages: 12 | Value: Normal | Understanding: Normal | Last Read: 2010.10.06
Question: Method/Means | Result: Technique | Validation: Persuasion

[PV09] A survey of new trends in symbolic execution for software testing and analysis

Corina S. Păsăreanu, Willem Visser. A survey of new trends in symbolic execution for software testing and analysis. STTT 11(4): 339-353, 2009.

Annotation
Symbolic execution is an analysis technique that takes a program as input and outputs a symbolic execution tree. A comprehensive overview of symbolic execution is given. Using some simple and classical examples, the authors first introduce the basic notions and challenges of symbolic execution. Secondly, the trend toward combining concrete and symbolic execution is discussed. Thirdly, the authors describe how researchers have tried to solve the scalability issues raised by large programs, which remain the main obstacle to widespread application of symbolic execution. Furthermore, the authors give an overview of the applications of symbolic execution techniques, such as test case generation, proving program properties, and static detection of runtime errors. In the "future directions" part, the authors discuss the main obstacles and possible solutions in this area, e.g. new heuristic searches, extending the abstraction of programs, and powerful decision procedures for combinations of theories.

Keyword
symbolic execution; survey
Abstract

Background
Symbolic execution is a well-known program analysis technique which represents program inputs with symbolic values instead of concrete, initialized data, and executes the program by manipulating program expressions involving the symbolic values.

Motivation
Symbolic execution was proposed over three decades ago, but recently it has found renewed interest in the research community, due in part to the progress in decision procedures, the availability of powerful computers, and new algorithmic developments.

Solution
We provide here a survey of some of the new research trends in symbolic execution, with particular emphasis on applications to test generation and program analysis.

Contribution
We first describe an approach that handles complex programming constructs such as input recursive data structures, arrays, as well as multithreading. Furthermore, we describe recent hybrid techniques that combine concrete and symbolic execution to overcome some of the inherent limitations of symbolic execution, such as handling native code or the availability of decision procedures for the application domain. We follow with a discussion of techniques that can be used to limit the (possibly infinite) number of symbolic configurations that need to be analyzed for the symbolic execution of looping programs. Finally, we give a short survey of interesting new applications, such as predictive testing, invariant inference, program repair, analysis of parallel numerical programs, and differential symbolic execution.

Evaluation

Total Pages: 15 | Value: High | Understanding: Normal | Last Read: 2010.10.06
Question: Characterization | Result: Analytic Model | Validation: Persuasion

[KPV03] Generalized symbolic execution for model checking and testing

Khurshid, S., Păsăreanu, C. S., and Visser, W. 2003. Generalized symbolic execution for model checking and testing. In Proceedings of the 9th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (Warsaw, Poland, April 07-11, 2003). H. Garavel and J. Hatcliff, Eds. Lecture Notes in Computer Science. Springer-Verlag, Berlin, Heidelberg, 553-568.

Annotation
This paper proposes one of the early approaches focusing on symbolic execution of concurrent programs and complex data structures. It presents a novel framework based on a two-fold generalization of symbolic execution. First, the paper defines a source-to-source translation to instrument programs, which enables standard model checkers to perform symbolic execution. Second, to handle dynamically allocated structures, method preconditions, data, and concurrency, the paper gives a novel symbolic execution algorithm.

Keyword
symbolic execution

Abstract

Background
Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable.
Motivation
We need to automate the checking of such systems, which are concurrent and manipulate complex data structures.

Solution
We provide a two-fold generalization of traditional symbolic execution based approaches. First, we define a source to source translation to instrument a program, which enables standard model checkers to perform symbolic execution of the program. Second, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity), data (e.g., integers and strings) and concurrency.

Contribution
1. To address the state space explosion problem.
2. To achieve modularity.
3. To check strong correctness properties of concurrent programs.
4. To exploit the model checker's built-in capabilities.

Evaluation
By introducing the implementation and illustrating two applications of the framework, the authors argue for the applicability of this approach.

Total Pages: 16 | Value: High | Understanding: Well | Last Read: 2010.10.09
Question: Method/Means | Result: Technique | Validation: Persuasion
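The core mechanism these symbolic execution papers build on, exploring both sides of each branch while accumulating a path condition, can be sketched as follows. The analyzed program is hand-encoded as a path table, and a brute-force search over a small domain stands in for a real decision procedure:

```python
# Minimal sketch of the path-condition idea from symbolic execution
# (cf. [Kin76]): each program path carries a conjunction of branch
# conditions. Here the paths of an invented program are hand-enumerated
# as lambdas, and a brute-force search over a small integer domain
# stands in for a real decision procedure such as a SAT/SMT solver.

def symbolic_paths():
    # Program under analysis:
    #   if x > 10:    return "big"
    #   elif x == 3:  return "three"
    #   else:         return "small"
    return [
        ([lambda x: x > 10], "big"),
        ([lambda x: not (x > 10), lambda x: x == 3], "three"),
        ([lambda x: not (x > 10), lambda x: x != 3], "small"),
    ]

def solve(path_condition, domain=range(-20, 21)):
    # Toy decision procedure: find any concrete witness in the domain.
    for x in domain:
        if all(c(x) for c in path_condition):
            return x
    return None

for pc, outcome in symbolic_paths():
    print(outcome, solve(pc))
```

Solving each path condition yields one concrete test input per feasible path, which is exactly how symbolic execution drives test case generation.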
[PV04] Verification of Java programs using symbolic execution and invariant generation

C. S. Păsăreanu, W. Visser. Verification of Java Programs Using Symbolic Execution and Invariant Generation. Lecture Notes in Computer Science, Vol. 2989, pp. 164-181, 2004.

Annotation
Software verification is recognized as an important and difficult problem; model checking in particular suffers from the state-explosion problem and can only deal with closed systems. This paper proposes a framework that uses method specifications and loop invariants to solve the problem. The paper also illustrates some non-trivial examples that benefit from the more powerful approximation techniques.

Keyword
symbolic execution; method specifications; loop invariants

Abstract

Background
Software verification is recognized as an important and difficult problem.

Motivation
Model checking typically can only deal with closed systems, and it suffers from the state-explosion problem.

Solution
In order to solve the state-explosion problem, we present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form
of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically.

Contribution
1. A novel verification framework that combines symbolic execution and model checking.
2. A new method for iterative invariant generation.
3. A series of (small) non-trivial Java examples showing the merits of our method.

Evaluation
Using some non-trivial Java examples, we compare our work with the invariant generation method presented in another paper [C. Flanagan and S. Qadeer. Predicate abstraction for software verification. In Proc. POPL, 2002].

Total Pages: 18 | Value: Normal | Understanding: Normal | Last Read: 2010.10.09
Question: Method/Means | Result: Technique | Validation: Persuasion

Fault Localization

[WD10] Software Fault Localization

W. Eric Wong, Vidroha Debroy. "Software Fault Localization," IEEE Reliability Society 2009 Annual Technology Report, January 2010.

Annotation
This article gives a fairly comprehensive overview of software fault localization. After introducing the basic notions and classical approaches to fault localization, the article classifies the advanced fault localization techniques as follows: Static, Dynamic, and Execution Slice-Based
Techniques, Program Spectrum-based Techniques, Statistics-based Techniques, Program State-based Techniques, Machine Learning-based Techniques, etc. Furthermore, important aspects of fault localization are given, namely effectiveness, efficiency, and robustness; the impact of test cases; faults introduced by missing code; and programs with multiple bugs. These could be regarded as the design space for future work.

Keyword
Fault Localization

Abstract

Background
Regardless of the effort spent on developing a computer program, it may still contain bugs. In fact, the larger and more complex a program, the higher the likelihood of it containing bugs.

Motivation
It is always challenging for programmers to effectively and efficiently remove bugs, while not inadvertently introducing new ones at the same time.

Solution
Automatic fault localization techniques can guide programmers to the locations of faults with minimal human intervention.

Total Pages: 6 | Value: High | Understanding: Well | Last Read: 2010.10.10
Question: Characterization | Result: Analytic Model | Validation: Experience
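Among the program spectrum-based techniques surveyed above, Tarantula is the classic example; its suspiciousness score can be sketched directly from the per-statement pass/fail coverage counts (the counts used here are hypothetical):

```python
# Sketch of the Tarantula suspiciousness score used by spectrum-based
# fault localization: statements executed mostly by failing runs score
# close to 1, statements executed only by passing runs score 0.

def tarantula(failed_cov, total_failed, passed_cov, total_passed):
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0:
        return 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

# Statement covered by all 2 failing runs and 1 of 8 passing runs:
print(tarantula(2, 2, 1, 8))   # ~0.889, highly suspicious
# Statement covered only by passing runs:
print(tarantula(0, 2, 4, 8))   # 0.0
```

Ranking all statements by this score gives the examination order that Tarantula-style tools report to the programmer.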
Web

[ADT+10] Practical fault localization for dynamic web applications

Artzi, S., Dolby, J., Tip, F., and Pistoia, M. 2010. Practical fault localization for dynamic web applications. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (Cape Town, South Africa, May 01-08, 2010). ICSE 10. ACM, New York, NY, 265-274.

Annotation
In this paper, an automatic fault localization technique is proposed: the first that fully finds and localizes malformed HTML errors in Web applications that execute PHP code on the server side. The technique is based on the authors' previous work [3, 4] on combined concrete and symbolic execution for Web applications written in PHP, and it does not need an upfront test suite. Furthermore, the paper defines a suspiciousness rating for statements in web applications with the use of an output mapping from statements. However, the given suspiciousness definition is a bit magical, with a suspiciousness rating of 1.1 against a Tarantula suspiciousness rating of 0.5.

Keyword
Fault Localization
Abstract

Background
Web applications are typically written in a combination of several programming languages. As with any program, programmers make mistakes and introduce faults, resulting in Web-application crashes and malformed dynamically generated HTML pages. Malformed HTML errors may seem trivial, and indeed many of them are at worst minor annoyances.

Motivation
Previous fault-localization techniques need an upfront test suite, and there is no fully automatic tool that finds and localizes malformed HTML errors in Web applications that execute PHP code on the server side.

Solution
We leverage combined concrete and symbolic execution and several fault-localization techniques to create a uniquely powerful tool for localizing faults in PHP applications. The tool automatically generates tests that expose failures, and then automatically localizes the faults responsible for those failures.

Contribution
1. We present an approach for fault localization that uses combined concrete and symbolic execution to generate a suite of passing and failing tests.
2. We demonstrate that automated techniques for fault localization are effective at localizing real faults in open-source PHP applications.
3. We present 6 fault localization techniques that combine variations on the Tarantula algorithm.
4. We implemented these 6 techniques in Apollo.

Evaluation
This evaluation aims to answer two questions:
1. How effective is the Tarantula fault localization technique in the domain of PHP web applications?
2. How effective is Tarantula when combined with the use of an output mapping and/or when modeling the outcome of conditional expressions, as described in Section 4?
The benchmarks are faqforge, webchess, schoolmate, and timeclock, and the 6 techniques that combine the variations are used in the experiments.

Note: The authors did not know the locations of the faults and needed to localize them manually. Manually localizing and fixing faults is a very time-consuming task, so they limited themselves to 20 faults in each of the subject programs.

Total Pages: 10 | Value: High | Understanding: Well | Last Read: 2010.10.09
Question: Method/Means | Result: Technique | Validation: Analysis

[AKD+10] Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit-State Model Checking

Shay Artzi, Adam Kieżun, Julian Dolby, Frank Tip, Danny Dig, Amit Paradkar, Michael D. Ernst. "Finding Bugs in Web Applications Using Dynamic Test Generation and Explicit-State Model Checking," IEEE Transactions on Software Engineering, pp. 474-494, 2010.

Annotation
This paper enhances the tools and methods of the authors' previous work [AKD+08]. By implementing a form of explicit-state software model checking, the paper tries to handle user input options that are created dynamically by a web application, which includes keeping track of parameters that are transferred from one script to the next.

Keyword
test generation; symbolic execution; explicit-state model checking
  • 29. Abstract
Background
Web script crashes and malformed dynamically generated web pages are common errors, and they seriously impact the usability of web applications.
Motivation
Current tools for web-page validation cannot handle the dynamically generated pages that are ubiquitous on today's Internet.
In the previous work, we did not yet supply a solution for handling user-input options that are created dynamically by a web application, which includes keeping track of parameters that are transferred from one script to the next, either by persisting them in the environment or by sending them as part of the call.
Solution
We present a dynamic test-generation technique for the domain of dynamic web applications. The technique utilizes both combined concrete and symbolic execution and explicit-state model checking. It generates tests automatically, runs the tests capturing logical constraints on inputs, and minimizes the conditions on the inputs to failing tests, so that the resulting bug reports are small and useful.
Contribution
1. The technique utilizes both combined concrete and symbolic execution and explicit-state model checking.
2. We adapt the established technique of dynamic test generation, based on combined concrete and symbolic execution.
3. We created a tool, Apollo.
4. We evaluated our tool by applying it to 6 real web applications.
5. We present a detailed classification of the faults found by Apollo.
Evaluation
The evaluation methods are almost the same as in the previous work [AKD+08].
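The combined concrete and symbolic (concolic) test-generation loop that both Apollo papers rely on can be caricatured as follows. This is a toy sketch under heavy assumptions: the program, the trace format, and the brute-force "solver" are all ours; the real tool works on PHP and uses a proper constraint solver.

```python
def program(x, trace):
    # Program under test, instrumented to record (condition, taken) pairs.
    # Conditions are kept in a tiny solvable form: (kind, constant).
    taken = x > 10
    trace.append((("gt", 10), taken))
    if taken:
        taken2 = x == 42
        trace.append((("eq", 42), taken2))
        if taken2:
            raise RuntimeError("crash")  # the fault a generated test exposes
    return "ok"

def solve(constraints):
    # Brute-force "solver": find an int satisfying all (condition, taken) pairs.
    for candidate in range(-100, 101):
        if all((candidate > c if kind == "gt" else candidate == c) == taken
               for (kind, c), taken in constraints):
            return candidate
    return None

def generate_tests(entry, seed):
    # Run a concrete input, record its path, then negate each recorded
    # condition in turn and solve for an input reaching the unexplored path.
    tests, worklist, seen = [], [seed], set()
    while worklist:
        x = worklist.pop()
        trace = []
        try:
            entry(x, trace)
            outcome = "pass"
        except RuntimeError:
            outcome = "fail"
        tests.append((x, outcome))
        for i in range(len(trace)):
            prefix = trace[:i] + [(trace[i][0], not trace[i][1])]
            key = tuple(prefix)
            if key in seen:
                continue
            seen.add(key)
            new_input = solve(prefix)
            if new_input is not None:
                worklist.append(new_input)
    return tests
```

Starting from the seed input 0, the loop discovers an input that crosses both branch conditions (x == 42) and reports it as a failing test.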
  • 30. Total Pages: 17 | Value: Normal | Understanding: Normal | Last Read: 2010.10.10
Question: Method/Means | Result: Technique | Validation: Analysis
[AKD+08] Finding bugs in dynamic web applications
Artzi, S., Kiezun, A., Dolby, J., Tip, F., Dig, D., Paradkar, A., and Ernst, M. D. 2008. Finding bugs in dynamic web applications. In Proceedings of the 2008 International Symposium on Software Testing and Analysis (Seattle, WA, USA, July 20-24, 2008). ISSTA 08. ACM, New York, NY, 261-272.
Annotation
A framework of test generation for web applications is proposed in this paper. The technique is based on combined concrete and symbolic execution. The authors also present the failure-detection algorithm and the path-constraint minimization algorithm.
Keyword
symbolic execution; dynamic analysis; test generation
Abstract
Background
Web script crashes and malformed dynamically generated Web pages are common errors, and they seriously impact the usability of Web applications.
Motivation
Current tools for Web-page validation cannot handle the dynamically generated pages that are ubiquitous on today's Internet.
  • 31. Solution
In this work, we apply a dynamic test-generation technique, based on combined concrete and symbolic execution, to the domain of dynamic Web applications. The technique generates tests automatically, uses the tests to detect failures, and minimizes the conditions on the inputs exposing each failure, so that the resulting bug reports are small and useful in finding and fixing the underlying faults. Our tool Apollo implements the technique for PHP. Apollo generates test inputs for the Web application, monitors the application for crashes, and validates that the output conforms to the HTML specification.
Contribution
1. We adapt the established technique of dynamic test generation, based on combined concrete and symbolic execution, to the domain of Web applications.
2. We created a tool, Apollo.
3. We evaluated our tool by applying it to real Web applications and comparing the results with random testing.
Evaluation
The authors designed the experiments to answer the following research questions:
1. How many faults can Apollo find, and of what varieties?
2. How effective is the fault-localization technique of Apollo compared to alternative approaches such as randomized testing, in terms of the number and severity of discovered faults and the line coverage achieved?
3. How effective is our minimization in reducing the size of input parameter constraints and
  • 32. failure-inducing inputs?
For the evaluation, the authors selected the following four open-source PHP programs: faqforge, webchess, schoolmate, phpsysinfo.
Total Pages: 11 | Value: High | Understanding: Normal | Last Read: 2010.10.10
Question: Method/Means | Result: Technique | Validation: Analysis
Test Execution
Test Optimization
[DGM10] On test repair using symbolic execution
Daniel, B., Gvero, T., and Marinov, D. 2010. On test repair using symbolic execution. In Proceedings of the 19th International Symposium on Software Testing and Analysis (Trento, Italy, July 12-16, 2010). ISSTA 10. ACM, New York, NY, 207-218.
Annotation
When a program is changed, its test code may become out of date, which can cause regression tests to fail. The paper proposes a technique based on symbolic execution to repair such tests. The authors analyze symbolic execution of .NET code using a tool named Pex. This paper enhances the solutions of [DJDM09]: it fixes several failures that ReAssert could not repair, or that it could have repaired in a better way. The authors describe modifications of expected values, expected object comparisons, and conditional expected values as examples.
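The "stale expected literal" repair at the heart of the technique can be sketched roughly as follows. This is an assumption-laden toy: in the real technique the candidate literals come from Pex's symbolic execution, whereas here we fake the search with plain enumeration, and all names are ours.

```python
# Sketch: a regression test broke only because its expected literal is stale.
# Symbolic test repair searches for a replacement literal that makes the
# test pass again.

def repair_literal(test_passes, candidate_literals):
    """test_passes(lit) reruns the test with `lit` as the expected value."""
    for lit in candidate_literals:
        if test_passes(lit):
            return lit  # the repaired expected value
    return None         # no literal fixes it: not a stale-literal failure

# Example: the method under test now returns 7, but the test still expects 5.
def method_under_test():
    return 7

repaired = repair_literal(lambda lit: method_under_test() == lit, range(100))
```

When no candidate makes the test pass, the failure is not repairable by literal replacement, which is exactly the class of failures the paper's other repair strategies target.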
  • 33. Keyword
test repair; symbolic execution
Abstract
Background
When developers change a program, regression tests can fail not only due to faults in the program but also due to out-of-date test code that does not reflect the desired behavior of the program.
Motivation
Repairing tests manually is difficult and time consuming.
Solution
We recently developed ReAssert, a tool that can automatically repair broken unit tests, but only if they lack complex control flow or operations on expected values.
Contribution
This paper introduces symbolic test repair, a technique based on symbolic execution, which can overcome some of ReAssert's limitations.
Evaluation
We reproduce experiments from earlier work and find that symbolic test repair improves upon previously reported results both quantitatively and qualitatively. We also perform new experiments which confirm the benefits of symbolic test repair and also show surprising similarities in test failures for open-source Java and .NET programs. Our experiments use Pex, a powerful symbolic-execution engine for .NET, and we find that Pex provides over half of the repairs possible from the theoretically ideal symbolic test repair.
Q1: How many failures can be repaired by replacing literals in test code? That is, if we had an ideal way to discover literals, how many broken tests could we repair?
Q2: How do literal replacement and ReAssert compare? How would an ideal literal-replacement strategy affect ReAssert's ability to repair broken tests?
  • 34. Q3: How well can existing symbolic execution discover appropriate literals? Can symbolic execution produce literals that would cause a test to pass?
Java: Checkstyle, JDepend, JFreeChart, Lucene, PMD, XStream
.NET: AdblockIE, CSHgCmd, Fudge-CSharp, GCalExchangeSync, Json.NET, MarkdownSharp, NerdDinner, NGChart, NHaml, ProjectPilot and SharpMap
Total Pages: 11 | Value: Normal | Understanding: Normal | Last Read: 2010.09.25
Question: Method/Means | Result: Technique | Validation: Analysis
[HO09] MINTS: A general framework and tool for supporting test-suite minimization
Hwa-You Hsu; Orso, A., "MINTS: A general framework and tool for supporting test-suite minimization," Software Engineering, 2009. ICSE 2009. IEEE 31st International Conference on, pp. 419-429, 16-24 May 2009.
Annotation
This is the first published paper that attempts to handle multi-criteria test-suite minimization problems. The approach models multi-criteria minimization as binary ILP problems and then leverages ILP solvers to compute optimal solutions to such problems.
Note the difference and relation between minimization criteria and minimization policies.
Keyword
test-suite minimization
  • 35. Abstract
Background
Test-suite minimization techniques aim to eliminate redundant test cases from a test suite based on some criteria, such as coverage or fault-detection capability.
Motivation
Most existing test-suite minimization techniques have two main limitations: they perform minimization based on a single criterion and produce suboptimal solutions.
Solution
In this paper, we propose a test-suite minimization framework that overcomes these limitations by allowing testers to (1) easily encode a wide spectrum of test-suite minimization problems, (2) handle problems that involve any number of criteria, and (3) compute optimal solutions by leveraging modern integer linear programming solvers.
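The binary-ILP view of test-suite minimization that MINTS encodes can be sketched as follows: one 0/1 variable per test, minimize the number of selected tests subject to keeping every covered requirement covered. A real implementation hands this to an ILP solver; this hypothetical sketch brute-forces a tiny single-criterion instance for illustration.

```python
from itertools import combinations

def minimize(suite):
    """suite: dict test_name -> set of covered requirements.
    Returns a minimum-cardinality subset preserving total coverage."""
    required = set().union(*suite.values())
    tests = list(suite)
    for k in range(1, len(tests) + 1):          # smallest subsets first
        for subset in combinations(tests, k):
            covered = set().union(*(suite[t] for t in subset))
            if covered == required:
                return set(subset)              # first hit is optimal in size
    return set(tests)
```

For example, `minimize({"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3}})` keeps only `t3`. The brute force is exponential; the point of MINTS is precisely that off-the-shelf ILP solvers handle much larger multi-criteria instances.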
  • 36. Contribution
1. A general test-suite minimization framework that handles minimization problems involving any number of criteria and can produce optimal solutions to such problems.
2. A prototype tool that implements the framework, can interface seamlessly with a number of different ILP solvers, and is freely available.
3. An empirical study in which we evaluate the approach using a wide range of programs, test cases, minimization problems, and solvers.
Evaluation
In the evaluation, the authors investigated the following research questions:
1. How often can MINTS find an optimal solution for a test-suite minimization problem in a reasonable time?
2. How does the performance of MINTS compare with the performance of a heuristic approach?
3. To what extent does the use of a specific solver affect the performance of the approach?
Note that the authors consider one absolute minimization criterion and three relative minimization criteria. They also consider eight different minimization policies: seven weighted and one prioritized.
The benchmark is the Siemens suite plus three additional programs with real faults: flex, LogicBlox, and Eclipse.
Total Pages: 11 | Value: Normal | Understanding: Normal | Last Read: 2010.10.10
Question: Method/Means | Result: Technique | Validation: Analysis
[WHLM95] Effect of test set minimization on fault detection effectiveness
Wong, W. E., Horgan, J. R., London, S., and Mathur, A. P. 1995. Effect of test set minimization on fault detection effectiveness. In Proceedings of the 17th International Conference on Software Engineering (Seattle, Washington, United States, April 24-28, 1995). ICSE 95. ACM, New York, NY, 41-50.
  • 37. Annotation
Keyword
Abstract
Background
Size and code coverage are important attributes of a set of tests.
Motivation
A program P is executed on elements of the test set T. Can we observe the fault-detecting capability of T for P? Which T induces code coverage on P according to some coverage criterion? Is it the size of T or the coverage of T on P that determines the fault-detection effectiveness of T for P? While keeping coverage constant, what is the effect on fault detection of reducing the size of a test set?
Solution
We report results from an empirical study using the block and all-uses criteria as the coverage measures.
Contribution
Evaluation
Total Pages | Value | Understanding | Last Read
  • 38. Question: Method/Means, Evaluation, Characterization | Result: Technique, Analytic Model | Validation: Analysis, Persuasion, Experience
Test Adequacy Criterion
Mutant Testing
[LJT+10] Is operator-based mutant selection superior to random mutant selection?
Zhang, L., Hou, S., Hu, J., Xie, T., and Mei, H. 2010. Is operator-based mutant selection superior to random mutant selection? In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (Cape Town, South Africa, May 01-08, 2010). ICSE 10. ACM, New York, NY, 435-444.
Annotation
Mutant selection is used to reduce the expense of compiling and executing too many mutants. Much research on mutant selection is operator-based. This paper addresses the question of whether operator-based mutant selection is really superior to random selection. Through an empirical study of three operator-based mutant-selection techniques (i.e., Offutt et al.'s 5 mutation operators [31], Barbosa et al.'s 10 mutation operators [4], and Siami Namin et al.'s 28 mutation operators [37]) and two random techniques, the research indicates that operator-based mutant selection is not superior.
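The three selection strategies being compared can be contrasted in a small sketch. This is illustrative only: a "mutant" here is just an (operator, location) record rather than a program variant, and the function names are ours.

```python
import random

def operator_based_select(mutants, sufficient_ops):
    # Keep only mutants generated by a chosen "sufficient" operator subset.
    return [m for m in mutants if m[0] in sufficient_ops]

def one_round_random_select(mutants, n, rng):
    # Sample n mutants uniformly from the whole mutant pool.
    return rng.sample(mutants, n)

def two_round_random_select(mutants, n, rng):
    # Round 1: pick a mutation operator uniformly;
    # round 2: pick one of that operator's remaining mutants.
    pool = {}
    for m in mutants:
        pool.setdefault(m[0], []).append(m)
    selected = []
    while len(selected) < n and pool:
        op = rng.choice(sorted(pool))
        m = pool[op].pop(rng.randrange(len(pool[op])))
        if not pool[op]:
            del pool[op]
        selected.append(m)
    return selected
```

Note how the two-round technique deliberately flattens the operator distribution: operators that generate many mutants are no more likely to be sampled than rare ones.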
  • 39. Keyword
Abstract
Background
Due to the expense of compiling and executing a large number of mutants, it is usually necessary to select a subset of mutants to substitute for the whole set of generated mutants in mutation testing and analysis. Most existing research on mutant selection has focused on operator-based mutant selection, i.e., determining a set of sufficient mutation operators and selecting mutants generated with only this set of mutation operators. Recently, researchers began to leverage statistical analysis to determine sufficient mutation operators using execution information of mutants.
Motivation
However, whether mutants selected with these sophisticated techniques are superior to randomly selected mutants remains an open question.
Solution
In this paper, we empirically investigate this open question by comparing three representative operator-based mutant-selection techniques with two random techniques. Our empirical results show that operator-based mutant selection is not superior to random mutant selection. These results also indicate that random mutant selection can be a better choice and that mutant selection on the basis of individual mutants is worthy of further investigation.
Contribution
Our study empirically evaluates three recent operator-based mutant-selection techniques (i.e., Offutt et al. [31], Barbosa et al. [4], and Siami Namin et al. [37]) against random mutant selection for mutation testing.
Our study produces the first empirical results concerning the stability of operator-based mutant selection and random mutant selection for mutation testing.
Besides the random technique studied previously (referred to as the one-round random technique in this paper), our study also investigates another random technique involving two steps to select each mutant (referred to as the two-round random technique in this paper).
  • 40. The subjects used in our study are larger than those used in previous studies of random mutant selection. To the best of our knowledge, due to the extreme expense of experimenting with mutant-selection techniques, the Siemens programs are by far the largest subjects used in studies of mutant selection [37].
Evaluation
Total Pages: 10 | Value: High | Understanding: Normal | Last Read: 2010.09.26
Question: Characterization | Result: Analytic Model | Validation: Analysis
[ST10] From behaviour preservation to behaviour modification: constraint-based mutant generation
Annotation
This paper presents a mutant-generation approach that generates mutants which are both syntactically and semantically correct. The authors build this approach from several constraint-based methods. Using accessibility constraints, the introduction or deletion of entities, and type constraints, the approach not only generates mutants but also rejects mutants. The authors also applied the technique to several open-source programs, such as JUnit, JHotDraw, Draw2D, Jaxen and HTMLParser.
Keyword
Mutation Analysis
Abstract
Background
This paper is about mutation generation. The authors' approach builds on their prior work on
  • 41. constraint-based refactoring tools, and works by negating behaviour-preserving constraints.
Motivation
The efficacy of mutation analysis depends heavily on its capability to mutate programs in such a way that they remain executable and exhibit deviating behaviour. Whereas the former requires knowledge about the syntax and static semantics of the programming language, the latter requires at least some understanding of its dynamic semantics, i.e., how expressions are evaluated.
Solution
We present an approach that is knowledgeable enough to generate only mutants that are both syntactically and semantically correct and likely to exhibit non-equivalent behaviour.
Evaluation
As a proof of concept we present an enhanced implementation of the Access Modifier Change operator for Java programs, whose naive implementations create huge numbers of mutants that do not compile or leave behaviour unaltered. While we cannot guarantee that our generated mutants are non-equivalent, we can demonstrate a considerable reduction in the number of vain mutant generations, leading to substantial temporal savings.
Total Pages: 10 | Value: High | Understanding: Normal | Last Read: 2010.09.26
Question: Method/Means | Result: Technique | Validation: Analysis, Persuasion
[JH09] An analysis and survey of the development of mutation testing
Yue Jia, Mark Harman (September 2009). "An Analysis and Survey of the Development of Mutation Testing" (PDF). CREST Centre, King's College London, Technical Report TR-09-06.
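The core mechanic the survey covers, applying a mutation operator to produce a faulty program variant and checking whether a test kills it, can be shown in a tiny sketch (our own toy example, using relational-operator replacement on a Python AST; real mutation tools cover many operators and languages):

```python
import ast

class RelationalOpMutator(ast.NodeTransformer):
    # ROR-style operator: replace every `>=` with `>`.
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op
                    for op in node.ops]
        return node

SRC = "def is_adult(age):\n    return age >= 18\n"

def make_mutant(src):
    # Parse, mutate the AST, and compile the mutated function.
    tree = RelationalOpMutator().visit(ast.parse(src))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return namespace["is_adult"]

# is_adult(18) is True in the original but False in the mutant, so any test
# exercising the boundary input 18 "kills" this mutant.
mutant = make_mutant(SRC)
```

Equivalent mutants, the undecidable problem discussed in Part 4 of the survey, are exactly the variants for which no input can distinguish mutant from original this way.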
  • 42. Annotation
The paper gives an overview of Mutation Testing. The authors first introduce not only the basic notions but also the history and applications of Mutation Testing. In the second part, the fundamental hypotheses, the process, and the problems in theoretical research are discussed. In the third part, techniques in Mutation Testing are classified into two types: reduction of the generated mutants (which combines "do fewer" and "do faster") and reduction of the execution cost (which corresponds to "do faster"). Detecting whether a program and one of its mutants are equivalent is a known undecidable problem; this problem is discussed in Part 4. In the fifth part, the authors classify the applications of mutation testing into program mutation and specification mutation, and more detailed statistics are shown. In Part 6 and Part 7, empirical evaluations and tools using mutation testing are gathered and listed. In the last part, the authors identify five important avenues for research: a need for high-quality higher-order mutants, a need to reduce the equivalent mutant problem, a preference for semantics over syntax, an interest in achieving a better balance between cost and value, and a pressing need to generate test cases to kill mutants.
Keyword
mutation testing
Abstract
Outline
1. Introduction
2. The theory of mutation testing
3. Cost reduction techniques
4. Equivalent mutant detection techniques
5. The application of mutation testing
6. Empirical evaluation
7. Tools supporting mutation testing
8. Future trends
9. Conclusion
Background
Mutation Testing is a fault-based software testing technique that has been widely studied for over three decades.
  • 43. Motivation
The literature on Mutation Testing has contributed a set of approaches, tools, developments and empirical results which have not been surveyed in detail until now.
Solution
This paper provides a comprehensive analysis and survey of Mutation Testing. The paper also presents the results of several development-trend analyses.
Evaluation
These analyses provide evidence that Mutation Testing techniques and tools are reaching a state of maturity and applicability, while the topic of Mutation Testing itself is the subject of increasing interest.
Total Pages: 32 | Value: High | Understanding: Well | Last Read: 2010.09.27
Question: Characterization | Result: Analytic Model | Validation: Analysis, Experience
High-Dimensional Clustering
[HK99] Optimal Grid-Clustering: Towards Breaking the Curse of Dimensionality in High-Dimensional Clustering
Alexander Hinneburg, Daniel A. Keim, Optimal Grid-Clustering: Towards Breaking the Curse of Dimensionality in High-Dimensional Clustering, Proceedings of the 25th International Conference on Very Large Data Bases, pp. 506-517, September 07-10, 1999.
  • 44. Annotation
Keyword
High-Dimensional Clustering
Abstract
Background
Many applications require the clustering of large amounts of high-dimensional data. In addition, high-dimensional data often contains a significant amount of noise, which causes additional effectiveness problems.
Motivation
The comparison reveals that condensation-based approaches (such as BIRCH or STING) are the most promising candidates for achieving the necessary efficiency, but it also shows that basically all condensation-based approaches have severe weaknesses with respect to their effectiveness in high-dimensional space.
Solution
To overcome these problems, we develop a new clustering technique called OptiGrid, which is based on constructing an optimal grid partitioning of the data. The optimal grid partitioning is determined by calculating the best partitioning hyperplanes for each dimension (if such a partitioning exists) using certain projections of the data.
Evaluation
We perform a series of experiments on a number of different data sets from CAD and molecular biology. A comparison with one of the best known algorithms (BIRCH) shows the superiority of our new approach.
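The grid-partitioning intuition can be caricatured in a much-simplified sketch: project the points onto each axis, cut at a low-density gap between denser regions, and recurse. This is only an assumption-laden toy of ours; the real OptiGrid chooses optimal hyperplanes from general projections using density estimates, not empty histogram bins.

```python
def partition(points, dims, bins=10, min_size=2):
    # Recursively split the point set at an empty histogram bin in any
    # dimension; point sets with no such gap are reported as one cluster.
    if len(points) <= min_size:
        return [points]
    for dim in dims:
        vals = [p[dim] for p in points]
        lo, hi = min(vals), max(vals)
        if hi == lo:
            continue
        width = (hi - lo) / bins
        counts = [0] * bins
        for v in vals:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        for i in range(1, bins - 1):
            if counts[i] == 0:                   # low-density gap: cut here
                cut = lo + (i + 0.5) * width
                left = [p for p in points if p[dim] <= cut]
                right = [p for p in points if p[dim] > cut]
                return (partition(left, dims, bins)
                        + partition(right, dims, bins))
    return [points]  # no useful cut found: treat as a single cluster
```

For two well-separated 2-D blobs, a single axis-parallel cut already recovers both clusters; the paper's contribution is making such cuts effective when the separation only shows up in the right projections of a high-dimensional, noisy space.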
  • 45. Total Pages: 12 | Value: High | Understanding: Bad | Last Read: 2010.10.23
Question: Method/Means, Evaluation | Result: Technique | Validation: Analysis
[KKZ09] Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering
Kriegel, H., Kröger, P., and Zimek, A. 2009. Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans. Knowl. Discov. Data 3, 1 (Mar. 2009), 1-58.
Annotation
Keyword
Abstract
Outline
INTRODUCTION
 a) Sample Applications of Clustering High-Dimensional Data
    i. Gene Expression Analysis
    ii. Metabolic Screening
    iii. Customer Recommendation Systems
    iv. Text Documents
  • 46. b) Finding Clusters in High-Dimensional Data
    i. The main challenge for clustering here is that different subsets of features are relevant for different clusters; that is, the objects cluster in subspaces of the data space, but the subspaces of the clusters may vary.
    ii. A common way to overcome problems of high-dimensional data spaces, where several features are correlated or only some features are relevant, is to perform feature selection before performing any other data mining task.
    iii. Unfortunately, such feature selection or dimensionality-reduction techniques cannot be applied to clustering problems.
    iv. Instead of a global approach to feature selection, a local approach accounting for the local feature relevance and/or local feature correlation problems is required.
Background
As a prolific research area in data mining, subspace clustering and related problems have induced a vast quantity of proposed solutions.
Motivation
However, many publications compare a new proposition, if at all, with one or two competitors, or even with a so-called "naïve" ad hoc solution, but fail to clarify the exact problem definition. As a consequence, even if two solutions are thoroughly compared experimentally, it will often remain unclear whether both solutions tackle the same problem or, if they do, whether they agree in certain tacit assumptions and how such assumptions may influence the outcome of an algorithm.
Solution
In this survey, we try to clarify: (i) the different problem definitions related to subspace clustering in general; (ii) the specific difficulties encountered in this field of research; (iii) the varying assumptions, heuristics, and intuitions forming the basis of different approaches; and (iv) how several prominent solutions tackle different problems.
Evaluation
Total Pages: 58 | Value: High | Understanding: Normal | Last Read: 2010.10.24
  • 47. Question: Characterization | Result: Analytic Model | Validation: Persuasion
