ICSE10b.ppt
Presentation Transcript

    • AURA: A Hybrid Approach to Identify Framework Evolution

      Wei Wu (1), Yann-Gaël Guéhéneuc (1), Giuliano Antoniol (2), and Miryung Kim (3)

      (1) Ptidej Team, DGIGL, École Polytechnique de Montréal, Canada
      (2) SOCCER Lab, DGIGL, École Polytechnique de Montréal, Canada
      (3) ECE, The University of Texas at Austin, USA

      wuwei@iro.umontreal.ca, yann-gael.gueheneuc@polymtl.ca,
      antoniol@ieee.org, miryung@ece.utexas.edu
    • Framework Evolution

      Compilation Error

          final Graphics2D g2 = ...;
          final Rectangle2D dataArea = ...;
          final CategoryPlot plot = ...;
          final ChartRenderingInfo info = ...;
          ...
          final AbstractCategoryItemRenderer renderer =
              new BarRenderer().initialise(g2, dataArea, plot, info);
          ...

      Typical Questions

      The developer would wonder:
      ▶ did the method signature change?
      ▶ if not, how should she now initialise a renderer:
        ▶ with one equivalent method?
        ▶ with many new methods?
        ▶ or just not that way anymore?
    • Outline

      Introduction
      ▶ Examples
      ▶ Requirements
      ▶ Previous Work

      Approach
      ▶ Background
      ▶ Algorithm

      Evaluation
      ▶ Background
      ▶ Indicators
      ▶ Comparisons

      Conclusion

      Bibliography
    • Introduction: Examples (1/4)

      JHotDraw

          package CH.ifa.draw.application;
          public class DrawApplication ... {
            protected JMenu createEditMenu() {
              ...
              Version 5.2:  menu.add(new CutCommand("Cut", view()),
              Version 5.2:           new MenuShortcut('x'));
              Version 5.3:  menu.add(new UndoableCommand(
              Version 5.3:             new CutCommand("Cut", this)),
              Version 5.3:           new MenuShortcut('x'));
              ...

      (1) CutCommand.CutCommand(DrawingView...) →
          CutCommand.CutCommand(Alignment, DrawingEditor) and
          UndoableCommand.UndoableCommand(Command)
    • Introduction: Examples (2/4)

      JEdit

          package org.gjt.sp.jedit;
          public class GUIUtilities {
            public static JMenu loadMenu(...) {
              Version 4.1:  DirectoryMenu.DirectoryMenu(...);
              Version 4.1:  MarkersMenu.MarkersMenu();
              Version 4.1:  RecentDirectoriesMenu.RecentDirectoriesMenu(
              Version 4.2:  EnhancedMenu.EnhancedMenu(...);
              ...

      (1) DirectoryMenu.DirectoryMenu(...) and
          MarkersMenu.MarkersMenu() and
          RecentDirectoriesMenu.RecentDirectoriesMenu( →
          EnhancedMenu.EnhancedMenu(...)
    • Introduction: Examples (3/4)

      JFreeChart

          package org.jfree.data;
          public class DefaultBoxAndWhiskerDataset {
            Version 0.9.11:  public static ... createNumberArray(...) {
            Version 0.9.11:    ...
            Version 0.9.11:  }
            Version 0.9.11:  ...
            Version 0.9.12:  ...

      (1) DefaultBoxAndWhiskerDataset.createNumberArray( → ∅
    • Introduction: Examples (4/4)

      Eclipse

          public ReplaceEdit[] getModifications(String source) {
            ...
            Version 3.1:  return Indents.getChangeIndentEdits(...);
            Version 3.3:  return IndentManipulation.getChangeIndentEdits(
          }
          Version 3.1:  class Indents {
          Version 3.3:  class IndentManipulation {
            void getChangeIndentEdits(...) {
              ...
              Version 3.1:  int length = Indents.computeIndentLength(...);
              Version 3.3:  int length = this.indexOfIndent(...);
              ...

      (1) Indents.getChangeIndentEdits(...) →
          IndentManipulation.getChangeIndentEdits(...)
      (2) Indents.computeIndentLength(...) →
          IndentManipulation.indexOfIndent(...)
    • Introduction: Requirements

      Main Requirements

      We want an approach that offers:
      ▶ one-to-many replacements (JHotDraw);
      ▶ many-to-one replacements (JEdit);
      ▶ simply-deleted methods (JFreeChart);
      ▶ cascade replacements (Eclipse).

      Additional Requirements

      We also want an approach that is:
      ▶ automatic and does not require framework developers' involvement (FDI);
      ▶ system/context independent ⇒ no thresholds.
    • Introduction: Previous Work

      Features
      Approaches           One-to-many  Many-to-one  Simply-deleted  No FDI  Fully      No
                           Rules        Rules        Rules                   Automatic  Thresholds
      Chow et al. [1]      ×            ×            ✓               ×       ×          ✓
      SemDiff [2]          ✓            ✓            ✓               ✓       ×          ×
      Godfrey et al. [3]   ✓            ✓            ✓               ✓       ×          ×
      CatchUp! [4]         ×            ×            ×               ×       ✓          ✓
      M. Kim et al. [5]    ×            ✓            ✓               ✓       ✓          ×
      S. Kim et al. [6]    ×            ×            ×               ✓       ✓          ×
      Schäfer et al. [8]   ×            ×            ×               ✓       ✓          ×
      Diff-CatchUp [9]     ✓            ✓            ✓               ✓       ×          ×
      AURA                 ✓            ✓            ✓               ✓       ✓          ✓
    • Approach: Background (1/4)

      Inspiration

      AURA uses:
      ▶ call dependency, as in previous work by:
        ▶ Dagenais and Robillard [2];
        ▶ Godfrey et al. [3];
        ▶ Schäfer et al. [8];
      ▶ text similarity, as in previous work by:
        ▶ M. Kim et al. [5];
        ▶ S. Kim et al. [6];
        ▶ Xing and Stroulia [9].
    • Approach: Background (2/4)

      Assumptions

      We only consider method(s) in the same framework, excluding other
      vendors' frameworks.

      We only use the old and new releases of a framework, not any of its
      client programs.

      A target method t can be replaced by:
      ▶ no method, if it has been simply deleted;
      ▶ one or more candidate methods, set {c}.

      A rule is a conjunction of method replacements, while M. Kim et al.'s
      change-rules [5] are groups of similar changes.
    • Approach: Background (3/4)

      Underlying Techniques

      We use transitive call dependency analyses:
      ▶ let A be the number of anchors, i.e., methods calling target
        methods; then the confidence value is:

            CV(t, c) = A(t, c) / A(t);

      ▶ let KR(t) be the key-replacement method of t, i.e., the method most
        similar to t among the candidate replacement methods whose names
        are equal to t's or with CV(t, c) = 100%; then the support value is:

            S(t, c) = |{ m | m ∈ {all the methods in the new release}
                             ∧ m → KR(t) ∧ m → c }|.
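The confidence value CV(t, c) = A(t, c) / A(t) from this slide can be sketched in Java as below. This is an illustrative helper, not AURA's actual implementation: anchors are represented as plain strings, and `cv` simply measures which fraction of t's anchors also call candidate c in the new release.

```java
import java.util.Set;

// Sketch of the confidence value CV(t, c) = A(t, c) / A(t):
// the fraction of t's anchors (methods calling target method t in the
// old release) that call candidate replacement c in the new release.
public class ConfidenceValue {

    // anchorsOfTarget: methods calling t in the old release, A(t).
    // anchorsCallingCandidate: methods calling c in the new release.
    static double cv(Set<String> anchorsOfTarget,
                     Set<String> anchorsCallingCandidate) {
        if (anchorsOfTarget.isEmpty()) return 0.0;
        // A(t, c): anchors of t that also call c.
        long both = anchorsCallingCandidate.stream()
                .filter(anchorsOfTarget::contains)
                .count();
        return (double) both / anchorsOfTarget.size();
    }

    public static void main(String[] args) {
        Set<String> anchorsOfT = Set.of("m1", "m2", "m3", "m4");
        Set<String> callersOfC = Set.of("m1", "m2");
        // 2 of t's 4 anchors also call c
        System.out.println(cv(anchorsOfT, callersOfC));
    }
}
```

A CV of 100% (every anchor of t now calls c) is what qualifies c as a key-replacement candidate in the definition above.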
    • Approach: Background (4/4)

      Underlying Techniques

      We use Lawrie et al.'s technique [7] to tokenize and compare method
      signatures:
      ▶ most of return types, declaring classes, method names, and
        parameter lists;
      ▶ Levenshtein distance;
      ▶ longest common subsequence (LCS).
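Of the measures listed, Levenshtein distance is the easiest to sketch. The following is a generic textbook implementation (two-row dynamic programming), not AURA's exact code; it is shown here only to make the text-similarity ingredient concrete.

```java
// Minimal Levenshtein edit distance: the number of single-character
// insertions, deletions, and substitutions needed to turn a into b.
public class Levenshtein {

    static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        // e.g. comparing method names across releases, as in the Eclipse
        // example: computeIndentLength vs. indexOfIndent
        System.out.println(distance("computeIndentLength", "indexOfIndent"));
    }
}
```

In practice the distance would be applied to the tokens produced by Lawrie et al.'s signature tokenization rather than to raw identifiers.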
    • Approach: Algorithm (1/2)

      Steps

      Our algorithm divides into six steps (see pages 329–330):
      1. Global Data Set Generation: generates the target method set, the
         anchor set, and the candidate replacement method set.
      2. Target Methods Classification: divides methods into those called
         by an anchor and the others.
      3. Candidate Replacement Method Set Generation: uses call dependency
         to build the sets.
      4. Confidence Value Computation: builds one-to-one, many-to-one, and
         one-to-many rules.
      5. Text Similarity Only Rule Generation: uses text similarity for
         methods not called by anchors.
      6. Simply-deleted Method Rule Identification: builds the list of
         deleted methods.
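Step 2 can be sketched as a simple partition. The call-graph representation below (a map from method to its callers) and all names are hypothetical, not AURA's data structures; the point is only that targets with no anchor fall through to the text-similarity-only step.

```java
import java.util.*;

// Sketch of step 2 (Target Methods Classification): split target methods
// into those called by at least one anchor (handled by call-dependency
// analysis) and the rest (handled later by text similarity only).
public class TargetClassification {

    // callers maps each target method to the set of methods calling it.
    static Map<Boolean, List<String>> classify(
            List<String> targets, Map<String, Set<String>> callers) {
        Map<Boolean, List<String>> parts = new HashMap<>();
        parts.put(true, new ArrayList<>());   // anchored targets
        parts.put(false, new ArrayList<>());  // targets with no caller
        for (String t : targets) {
            boolean anchored = !callers.getOrDefault(t, Set.of()).isEmpty();
            parts.get(anchored).add(t);
        }
        return parts;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> callers = Map.of(
            "Indents.computeIndentLength",
            Set.of("Indents.getChangeIndentEdits"));
        var parts = classify(
            List.of("Indents.computeIndentLength", "Foo.unusedHelper"),
            callers);
        System.out.println(parts.get(true));   // anchored targets
        System.out.println(parts.get(false));  // text-similarity-only targets
    }
}
```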
    • Approach: Algorithm (2/2)

      Implementation

      AURA is implemented in Java, as an Eclipse plug-in, available at
      http://www.ptidej.net/downloads/experiments/icse10b.
    • Evaluation: Background

      Hypothesis

      AURA will find more relevant change rules than the previous
      approaches with comparable precision, i.e., it will have a better
      recall than, and a similar precision to, those of the previous
      approaches.
    • Evaluation: Indicators (1/2)

      Precision and Recall

          Precision = |{relevant rules} ∩ {retrieved rules}| / |{retrieved rules}|

          Recall    = |{relevant rules} ∩ {retrieved rules}| / |{relevant rules}|

      Problem

      We cannot know {relevant rules} without an expensive and error-prone
      manual analysis.
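The two definitions above translate directly into set operations. The sketch below represents rules as strings for illustration; it assumes non-empty relevant and retrieved sets, as in the slide's definitions.

```java
import java.util.HashSet;
import java.util.Set;

// Precision and recall exactly as defined on the slide, over sets of
// rules (represented here as strings).
public class Indicators {

    static double precision(Set<String> relevant, Set<String> retrieved) {
        return (double) intersect(relevant, retrieved).size() / retrieved.size();
    }

    static double recall(Set<String> relevant, Set<String> retrieved) {
        return (double) intersect(relevant, retrieved).size() / relevant.size();
    }

    private static Set<String> intersect(Set<String> a, Set<String> b) {
        Set<String> s = new HashSet<>(a);
        s.retainAll(b);
        return s;
    }

    public static void main(String[] args) {
        Set<String> relevant = Set.of("r1", "r2", "r3", "r4");
        Set<String> retrieved = Set.of("r1", "r2", "r5");
        System.out.println(precision(relevant, retrieved)); // 2 correct of 3 retrieved
        System.out.println(recall(relevant, retrieved));    // 2 correct of 4 relevant
    }
}
```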
    • Evaluation: Indicators (2/2)

      Δs of Precision (P) and Recall (R)

      We introduce the manually-built set:

          {correct rules}_X = {relevant rules}_X ∩ {retrieved rules}_X

      Then:

          ΔP(A, B) = (Precision_A − Precision_B) / Precision_B
                   = (|{correct rules}_A| × |{retrieved rules}_B|)
                     / (|{retrieved rules}_A| × |{correct rules}_B|) − 1

          ΔR(A, B) = (Recall_A − Recall_B) / Recall_B
                   = (|{correct rules}_A| − |{correct rules}_B|)
                     / |{correct rules}_B|
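The delta formulas reduce to the set sizes shown above, which is what makes them computable without knowing {relevant rules}. A sketch, with purely illustrative numbers (not the paper's results):

```java
// The delta formulas from the slide, in terms of the set sizes they
// reduce to. correctA/correctB are |{correct rules}_X|,
// retrievedA/retrievedB are |{retrieved rules}_X|.
public class Deltas {

    static double deltaP(int correctA, int retrievedA,
                         int correctB, int retrievedB) {
        return (double) correctA * retrievedB
               / ((double) retrievedA * correctB) - 1.0;
    }

    static double deltaR(int correctA, int correctB) {
        return (double) (correctA - correctB) / correctB;
    }

    public static void main(String[] args) {
        // Illustrative: approach A finds 120 correct rules, B finds 100,
        // so A's recall is 20% higher relative to B's.
        System.out.println(deltaR(120, 100));
    }
}
```

Note that ΔR needs only the correct-rule counts because both recalls share the same (unknown) denominator |{relevant rules}|, which cancels out.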
    • Evaluation: Comparisons (1/6)

      Considered Rules

      We noticed that large numbers of target methods are deleted in new
      releases: on average, 31.93% of AURA rules are simply-deleted rules
      ⇒ we include simply-deleted rules.

      We convert many-to-one rules into as many one-to-one rules.
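The many-to-one conversion mentioned above is mechanical: a rule {t1, …, tn} → c becomes n one-to-one rules ti → c. The `Rule` record and `split` helper below are hypothetical types for illustration, not part of AURA.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of converting a many-to-one rule {t1, t2, ...} -> c into
// one-to-one rules t1 -> c, t2 -> c, ..., for a fair comparison with
// approaches that only report one-to-one rules.
public class RuleSplitter {

    record Rule(String target, String replacement) {}

    static List<Rule> split(List<String> targets, String replacement) {
        List<Rule> rules = new ArrayList<>();
        for (String t : targets) rules.add(new Rule(t, replacement));
        return rules;
    }

    public static void main(String[] args) {
        // JEdit's many-to-one example from the Examples slides:
        List<Rule> rules = split(
            List.of("DirectoryMenu.DirectoryMenu(...)",
                    "MarkersMenu.MarkersMenu()"),
            "EnhancedMenu.EnhancedMenu(...)");
        System.out.println(rules.size()); // one rule per target method
    }
}
```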
    • Evaluation: Comparisons (2/6)

      Subject Systems

      We reuse systems analysed in previous work to benefit from the sets
      {correct rules}_X built by their authors.

          Systems               Releases  # Methods
          JFreeChart            0.9.11    4,751
                                0.9.12    5,197
          JHotDraw              5.2       1,486
                                5.3       2,265
          JEdit                 4.1       2,773
                                4.2       3,547
          Struts                1.1       5,973
                                1.2.4     6,111
          org.eclipse.jdt.core  3.1       35,439
          org.eclipse.jdt.ui    3.3       47,237
    • Evaluation: Comparisons (3/6)

      Small/Medium Systems

      We benefit from existing sets {correct rules}_X and compare
      precision, recall, and deltas.

          Systems        Indicators       AURA    M. Kim et al. [5]   ΔR       ΔP
          JHotDraw       # Correct rules  97      81                  19.49%   -6.69%
          5.2–5.3        Precision        92.38%  99.00%
          JEdit          # Correct rules  356     217                 64.29%   -13.78%
          4.1–4.2        Precision        80.18%  93.00%
          JFreeChart     # Correct rules  155     88                  75.86%   3.50%
          0.9.11–0.9.12  Precision        80.73%  78.00%

          Systems        Indicators       AURA    Schäfer et al. [8]  ΔR       ΔP
          JHotDraw       # Correct rules  97      88                  10.23%   4.98%
          5.2–5.3        Precision        92.38%  88.00%
          Struts         # Correct rules  129     66                  95.49%   11.50%
          1.1–1.2.4      Precision        96.56%  85.70%

          Total precision of AURA: 88.25%
          Average ΔR: 53.07%; average ΔP: -0.10%
    • Evaluation: Comparisons (4/6)

      Large System

      |{retrieved rules}_AURA| > 4,500 ⇒ we use the number of errors in
      scope, i.e., the number of compilation errors related to method and
      type replacements:
      ▶ to build {correct rules}_AURA;
      ▶ to reuse {correct rules}_SemDiff.

          Systems                   Indicators         AURA     SemDiff [2]
          org.eclipse.jdt.debug.ui  # Errors in Scope  4
          3.1–3.3                   # Found Rules      4        4
                                    # Correct Rules    4        4
          Mylyn                     # Errors in Scope  2
          0.5–2.0                   # Found Rules      2        2
                                    # Correct Rules    1        2
          JBossIDE                  # Errors in Scope  8
          1.5–2.0                   # Found Rules      8        8
                                    # Correct Rules    8        8
          Precision                                    92.86%   ≤ 100.00%
    • Evaluation: Comparisons (5/6)

      Threats to Validity

      The following threats affect our comparisons:
      ▶ construct validity: errors in the algorithm and bias in the manual
        validation of AURA rules
        ⇒ implementation and data available on-line;
      ▶ internal validity: confounding factors that could affect the
        results of the study
        ⇒ exploratory study and systematic comparison of AURA with the
        previous approaches.
    • Evaluation: Comparisons (6/6)

      Threats to Validity

      The following threats also affect our comparisons:
      ▶ conclusion validity: relation between the treatment and the
        outcome of the study
        ⇒ unbiased measures and data provided by the authors of previous
        approaches;
      ▶ reliability validity: possibility of replicating the study
        ⇒ details in the paper, and studied systems and data available
        on-line;
      ▶ external validity: possibility to generalize the results of the
        study
        ⇒ five systems of different sizes and domains, evaluated by
        previous approaches.
    • Conclusion (1/4)

      Strengths

      AURA has the following advantages over previous work:
      ▶ combination of call-dependency and text-similarity analyses;
      ▶ multi-iteration algorithm;
      ▶ three text-similarity measures;
      ▶ many-to-one, one-to-many, and simply-deleted rules;
      ▶ no thresholds.
    • Conclusion (2/4)

      Limitations

      Still, AURA has the following limitations:
      ▶ it cannot detect one-to-many and many-to-one change rules for
        target methods called by no other method;
      ▶ it only generates change rules for methods;
      ▶ it assumes no major changes to the internal implementation of
        anchors.
    • Conclusion (3/4)

      Future Work

      Consequently, future work includes:
      ▶ analyzing systems in other programming languages;
      ▶ developing heuristics that generate change rules for types and
        fields using inheritance relations and polymorphism;
      ▶ combining AURA with approaches that use other matching techniques;
      ▶ presenting AURA results as first-order relational logic rules, as
        introduced by M. Kim et al. [5];
      ▶ performing usability studies of the efficacy of AURA.
    • Conclusion (4/4)

      Any questions, comments?
    • Bibliography (1/2)

      [1] K. Chow and D. Notkin. Semi-automatic update of applications in
          response to library changes. In ICSM '96: Proceedings of the 1996
          International Conference on Software Maintenance, page 359,
          Washington, DC, USA, 1996. IEEE Computer Society.
      [2] B. Dagenais and M. P. Robillard. Recommending adaptive changes
          for framework evolution. In ICSE '08: Proceedings of the 30th
          International Conference on Software Engineering, pages 481–490,
          New York, NY, USA, 2008. ACM.
      [3] M. W. Godfrey and L. Zou. Using origin analysis to detect merging
          and splitting of source code entities. IEEE Transactions on
          Software Engineering, 31(2):166–181, 2005.
      [4] J. Henkel and A. Diwan. CatchUp!: capturing and replaying
          refactorings to support API evolution. In ICSE '05: Proceedings
          of the 27th International Conference on Software Engineering,
          pages 274–283, New York, NY, USA, 2005. ACM.
      [5] M. Kim, D. Notkin, and D. Grossman. Automatic inference of
          structural changes for matching across program versions. In ICSE
          '07: Proceedings of the 29th International Conference on Software
          Engineering, pages 333–343, Washington, DC, USA, 2007. IEEE
          Computer Society.
      [6] S. Kim, K. Pan, and E. J. Whitehead, Jr. When functions change
          their names: automatic detection of origin relationships. In
          WCRE '05: Proceedings of the 12th Working Conference on Reverse
          Engineering, pages 143–152, Washington, DC, USA, 2005. IEEE
          Computer Society.
    • Bibliography (2/2)

      [7] D. Lawrie, H. Feild, and D. Binkley. Syntactic identifier
          conciseness and consistency. In Sixth IEEE International Workshop
          on Source Code Analysis and Manipulation, pages 139–148,
          Sept. 2006.
      [8] T. Schäfer, J. Jonas, and M. Mezini. Mining framework usage
          changes from instantiation code. In ICSE '08: Proceedings of the
          30th International Conference on Software Engineering, pages
          471–480, New York, NY, USA, May 2008. ACM.
      [9] Z. Xing and E. Stroulia. API-evolution support with
          Diff-CatchUp. IEEE Transactions on Software Engineering,
          33(12):818–836, December 2007.