On Evaluating Code Recommender Systems for API Usages
1. On Evaluating Recommender Systems for API Usages
Marcel Bruch, Thorsten Schäfer, Mira Mezini
Fachbereich Informatik | Software Technology Group | bruch@cs.tu-darmstadt.de
International Workshop on Recommendation Systems for Software Engineering
4. Sample Usage of a Code Recommender
Incomplete Code (Text widget; Calls: ???) → «create» → Query ("Calls for Type = Text?") → «send» → Code Recommender → «predict» → Recommendations (Calls: <init>, setText, setFont, setLayoutData) → «inspect & use», «refine» → Resulting Code (Text widget; Calls: <init>, setText, setLayoutData)
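A minimal sketch of the «predict» step above, assuming a simple frequency-based recommender (the class and method names are illustrative, not from the paper): given the calls already made on a Text widget, it ranks the calls that most often co-occur with them in previously observed usages.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical frequency-based call recommender: each observed usage is the
// set of method calls made on one Text widget in existing code.
public class CallRecommender {
    private final List<Set<String>> observedUsages;

    public CallRecommender(List<Set<String>> observedUsages) {
        this.observedUsages = observedUsages;
    }

    /** Rank calls not yet made, by frequency among usages matching the current calls. */
    public List<String> recommend(Set<String> currentCalls) {
        Map<String, Long> freq = observedUsages.stream()
            .filter(u -> u.containsAll(currentCalls))   // keep usages matching the query
            .flatMap(Set::stream)
            .filter(c -> !currentCalls.contains(c))     // recommend only missing calls
            .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        return freq.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Set<String>> usages = List.of(
            Set.of("<init>", "setText", "setLayoutData"),
            Set.of("<init>", "setText", "setFont", "setLayoutData"),
            Set.of("<init>", "setLayoutData"));
        CallRecommender r = new CallRecommender(usages);
        // query: the incomplete code already calls <init> and setText
        System.out.println(r.recommend(Set.of("<init>", "setText")));
        // prints [setLayoutData, setFont]
    }
}
```

Real systems such as the one evaluated here use richer models than raw co-occurrence counts, but the query/recommendation interface is the same.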
5. General Process of Training & Evaluating Code Recommenders
Code → 1. Code Analysis → Observations, split into Training Data and Test Data
Training Data → 2. Training → Recommender
Test Data → 3. Automated Query Creation → Queries and Expected Recommendations
Queries → 4. Query Phase → Recommendations
Recommendations vs. Expected Recommendations → Performance Measures → 5. Automated Reports and Visualization
Legend: artifacts vs. processing steps
6. Sample Evaluation Scenario
Test Data (Text widget; Calls: <init>, setText, setLayoutData) → 3. Automated Query Creation → «create» Query ("Calls for Type = Text?") → «send» → Recommender (from 2. Training) → 4. Query Phase → «predict» Recommendations (Calls: <init>, setText, setFont, setLayoutData) → «compare» against Expected Recommendations (Calls: <init>, setText, setLayoutData) → 5. Reports
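Step 3, the automated query creation, can be sketched as follows (a hedged illustration; the class and method names are assumptions, not the paper's implementation): each usage in the test data is split into a partial query and the held-back calls that serve as the expected recommendations.

```java
import java.util.*;

// Hypothetical sketch of automated query creation: split an observed usage
// into the calls given to the recommender and the calls it should predict.
public class QueryCreator {
    public record Query(Set<String> given, Set<String> expected) {}

    /** Keep the first k calls as the query; the remaining calls become the expected result. */
    public static Query split(List<String> usage, int k) {
        Set<String> given = new LinkedHashSet<>(usage.subList(0, k));
        Set<String> expected = new LinkedHashSet<>(usage.subList(k, usage.size()));
        return new Query(given, expected);
    }

    public static void main(String[] args) {
        // a test-data usage of a Text widget, as in the scenario above
        List<String> usage = List.of("<init>", "setText", "setLayoutData");
        Query q = split(usage, 1);
        System.out.println("query=" + q.given() + " expected=" + q.expected());
        // prints query=[<init>] expected=[setText, setLayoutData]
    }
}
```

Because both query and expected result come mechanically from existing code, no manual labeling is needed, which is what makes large, objective test beds feasible.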
June 6, 2009 | To ease framework understanding, tools have been developed that analyze existing framework instantiations, extract API usage patterns, and present them to the user. However, detailed quantitative evaluations of such recommender systems are lacking. In this paper we present an automated evaluation process that extracts queries and expected results from existing code bases. This enables the objective validation of recommender systems on large test beds by means of precision and recall measures.