Keywords: API, software evolution, recommender, multi-objective optimisation

MOFAE: Multi-objective Optimization Approach to Framework API Evolution
Wei Wu (1,2), Yann-Gaël Guéhéneuc (1), Giuliano Antoniol (2)
(1) Ptidej Team, (2) SOCCER Lab, DGIGL, École Polytechnique de Montréal, Canada

Problem
• Framework API changes may break client programs after an upgrade.
• Very few frameworks provide adaptation rules.
• The adaptation process is time consuming.

Existing Approaches
Existing approaches use different features:
• Call dependency: SemDiff, Schäfer et al., Beagle, HiMa, AURA, Halo
• Text similarity: Kim et al., Beagle, AURA, Halo
• Software design model: Diff-CatchUp
• Software metrics: Beagle
• Comments: HiMa, Halo

Limitations
• They are semi-automatic.
• Single-feature approaches depend on their feature.
• Hybrid approaches must decide which feature to trust.

Multi-objective Optimization
Multi-objective optimization (MOOP) is the process of finding solutions to problems with potentially conflicting objectives.

MOFAE
1. A recommendation system that models framework API evolution as a MOOP problem.
2. It uses four features: call-dependency similarity, method-signature text similarity, inheritance-tree similarity, and method-comment similarity.
3. It selects the candidates that no other candidate is better than in all the features.
4. It sorts the selected candidates by the number of features in which they are the best (an illustrative sketch of steps 3 and 4 appears at the end of this page).

Objectives
• Call-dependency similarity: confidence value and support
• Comment similarity: Longest Common Subsequence (LCS)
• Method-signature text similarity: LCS, Levenshtein Distance (LD), and Method-level Distance (MD)
• Inheritance similarity: LCS over inheritance-tree strings, e.g., p1:P1.p2:P2.p1:PI1.p1:I1.p2:PI2.p2:I2

Output
• Single-recommendation approaches return one candidate per missing API.
• Multi-recommendation approaches, such as MOFAE, return a ranked list of candidates.

Effort Analysis
• Total effort = Search Effort (SE) + Evaluation Effort (EE)
• Smax: a maximum number of tries

Evaluation
• #C: number of correct recommendations
• |T|: total number of missing APIs
• #Posavg: average position of the correct replacement in the recommendation list
• #Sizeavg: average recommendation-list size

Results
• Recommendation list sizes: average 3.7, median 2.2, maximum 8.
• Correct replacement positions: 87% of the correct recommendations are ranked first, and 99% are in the top three.

Conclusion
• Comparison with previous approaches: MOFAE can detect 18% more correct change rules than previous works.
• Effort saving: 31%.

Acknowledgements
This work was supported by the Fonds québécois de la recherche sur la nature et les technologies and the Natural Sciences and Engineering Research Council of Canada.
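The sketch below is a minimal illustration, not taken from the poster, of how steps 3 and 4 of MOFAE could be realised: keep the non-dominated (Pareto-optimal) candidates over the four similarity features, then rank them by the number of features in which they score best. The Candidate class, the feature identifiers, the normalised scores, and the helper functions are assumptions made for illustration only.

# Illustrative sketch (assumed names and scores), not MOFAE's actual implementation.
from dataclasses import dataclass, field

# The four features named on the poster; these identifiers are assumptions.
FEATURES = ("call_dependency", "signature_text", "inheritance", "comment")

@dataclass
class Candidate:
    name: str
    # feature -> similarity score, higher is better (normalisation assumed)
    scores: dict = field(default_factory=dict)

def dominates(a: Candidate, b: Candidate) -> bool:
    # a dominates b if a is at least as good on every feature
    # and strictly better on at least one.
    return (all(a.scores[f] >= b.scores[f] for f in FEATURES)
            and any(a.scores[f] > b.scores[f] for f in FEATURES))

def pareto_front(candidates):
    # Step 3: keep the candidates that no other candidate is better than
    # in all the features (the non-dominated set).
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

def rank(front, candidates):
    # Step 4: sort the selected candidates by the number of features
    # in which they hold the best score over all candidates.
    best = {f: max(c.scores[f] for c in candidates) for f in FEATURES}
    def wins(c):
        return sum(1 for f in FEATURES if c.scores[f] == best[f])
    return sorted(front, key=wins, reverse=True)

# Usage with two hypothetical replacement candidates for one missing API method:
cands = [
    Candidate("Foo.bar(int)", {"call_dependency": 0.9, "signature_text": 0.7,
                               "inheritance": 0.8, "comment": 0.6}),
    Candidate("Foo.baz()", {"call_dependency": 0.5, "signature_text": 0.8,
                            "inheritance": 0.4, "comment": 0.3}),
]
# Neither candidate dominates the other, so both stay on the front;
# "Foo.bar(int)" is ranked first because it is best in 3 of the 4 features.
print([c.name for c in rank(pareto_front(cands), cands)])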