Cser13.ppt
Keywords: API, software evolution, recommender, multi-objective optimisation

Presentation Transcript

MOFAE: Multi-objective Optimization Approach to Framework API Evolution
Wei Wu (1,2), Yann-Gaël Guéhéneuc (1), Giuliano Antoniol (2)
Ptidej Team (1), SOCCER Lab (2), DGIGL, École Polytechnique de Montréal, Canada

Problem
Framework API changes may break client programs after an upgrade. Very few frameworks provide adaptation rules, and the adaptation process is time-consuming.

Existing Approaches
Existing approaches use different features:
  • Call dependency: SemDiff, Schäfer et al., Beagle, HiMa, AURA, Halo
  • Text similarity: Kim et al., Beagle, AURA, Halo
  • Software design model: Diff-CatchUp
  • Software metrics: Beagle
  • Comments: HiMa, Halo

Limitations
  • All approaches are semi-automatic.
  • Single-feature approaches depend on the feature they use.
  • Hybrid approaches: which feature to trust?

Multi-objective Optimization
Multi-objective optimization (MOOP) is the process of finding solutions to problems with potentially conflicting objectives.

MOFAE
  1. A recommendation system that models framework API evolution as a MOOP problem.
  2. It uses four features: call-dependency similarity, method-signature text similarity, inheritance-tree similarity, and method-comment similarity.
  3. It selects the candidates that no other candidate is better than in all the features, i.e., the Pareto-optimal candidates (a selection sketch follows the transcript).
  4. It sorts the selected candidates by the number of features in which they are the best.

Objectives
  • Call-dependency similarity: confidence value and support.
  • Method-signature text similarity: LCS, Levenshtein Distance (LD), and Method-level Distance (MD).
  • Inheritance similarity: LCS over inheritance-tree strings such as p1:P1.p2:P2.p1:PI1.p1:I1.p2:PI2.p2:I2.
  • Comment similarity: Longest Common Subsequence (LCS).
(A sketch of the LCS- and LD-based measures follows the transcript.)

Output
A ranked list of candidate replacements for each missing API.

Effort Analysis
Total effort = Search Effort (SE) + Evaluation Effort (EE), with Smax a maximum number of tries (an illustrative computation follows the transcript).

Evaluation
Single-recommendation approaches:
  • #C: number of correct recommendations
  • |T|: total number of missing APIs
Multi-recommendation approaches:
  • #Posavg: average position of the correct replacement
  • #Sizeavg: average recommendation list size

Results
  • Recommendation list sizes: average 3.7, median 2.2, maximum 8.
  • Correct replacement positions: 87% of correct recommendations are ranked first; 99% are in the top three.

Conclusion
Compared with previous approaches, MOFAE detects 18% more correct change rules and saves 31% of the effort.

Acknowledgements
This work was supported by the Fonds québécois de la recherche sur la nature et les technologies and the Natural Sciences and Engineering Research Council of Canada.
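
A minimal sketch of the selection and ranking steps (3 and 4) above, assuming each candidate carries four numeric feature scores where higher is better. The candidate names and score values are hypothetical illustrations, not MOFAE's actual scoring functions.

```python
def dominates(a, b):
    """True if scores `a` are at least as good as `b` in every feature
    and strictly better in at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Step 3: keep the candidates that no other candidate dominates."""
    return {name: s for name, s in candidates.items()
            if not any(dominates(t, s)
                       for other, t in candidates.items() if other != name)}

def rank(front):
    """Step 4: sort by the number of features in which a candidate is best."""
    n = len(next(iter(front.values())))
    best = [max(s[i] for s in front.values()) for i in range(n)]
    wins = {name: sum(s[i] == best[i] for i in range(n))
            for name, s in front.items()}
    return sorted(front, key=wins.get, reverse=True)

# Hypothetical scores: (call dependency, signature text, inheritance, comment)
candidates = {
    "Widget.getId()":  (0.9, 0.8, 0.7, 0.6),
    "Widget.getOid()": (0.5, 0.4, 0.3, 0.2),  # dominated, filtered out
    "Panel.getName()": (0.6, 0.9, 0.6, 0.5),
}
print(rank(pareto_front(candidates)))  # ['Widget.getId()', 'Panel.getName()']
```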
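
The Objectives above name the Longest Common Subsequence (LCS) and Levenshtein Distance (LD) among the similarity measures. Below are textbook implementations of both; how MOFAE normalizes the raw values into similarities is an assumption here, and the inheritance-tree strings simply mirror the poster's p1:P1.p2:P2... encoding.

```python
def lcs_length(a, b):
    """Length of the Longest Common Subsequence of sequences a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j-1] + 1, prev[j-1] + (x != y)))
        prev = cur
    return prev[len(b)]

# Comment similarity: LCS normalized by the longer text (assumed scheme).
c1, c2 = "returns the widget id", "return the id of the widget"
print(lcs_length(c1, c2) / max(len(c1), len(c2)))

# Signature text similarity: Levenshtein distance turned into a similarity.
s1, s2 = "getWidgetId(int)", "getWidgetIdentifier(int)"
print(1 - levenshtein(s1, s2) / max(len(s1), len(s2)))

# Inheritance similarity: LCS over dot-separated inheritance-tree strings.
t1 = "p1:P1.p2:P2.p1:PI1.p1:I1.p2:PI2.p2:I2".split(".")
t2 = "p1:P1.p2:P2.p1:I1.p2:I2".split(".")
print(lcs_length(t1, t2) / max(len(t1), len(t2)))
```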
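
The Effort Analysis above names SE, EE, Smax, #C, and |T|, but the transcript does not carry the formulas, so the sketch below is only an assumed illustration of how the two effort terms could combine; none of the numbers or formulas are MOFAE's.

```python
# Assumed accounting: the poster defines Effort = SE + EE and names the
# terms below, but these formulas and values are illustrative guesses.

S_MAX = 10                     # Smax: assumed cap on manual search tries
TOTAL_MISSING = 100            # |T|: total number of missing APIs (made up)
ranks_of_correct = [1] * 87 + [3] * 12  # #C = 99, at assumed list positions

# EE: reading a recommendation list down to the correct entry costs one
# evaluation per inspected entry.
evaluation_effort = sum(ranks_of_correct)

# SE: an API with no correct recommendation falls back to manual search,
# costing up to Smax tries.
search_effort = (TOTAL_MISSING - len(ranks_of_correct)) * S_MAX

print("total effort:", search_effort + evaluation_effort)
```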