iFL: An Interactive Environment for Understanding Feature Implementations
1. iFL: An Interactive Environment for Understanding Feature Implementations
Shinpei Hayashi, Katsuyuki Sekine, and Motoshi Saeki
Department of Computer Science, Tokyo Institute of Technology, Japan
ICSM 2010 ERA, 14 Sep. 2010
2. Abstract
We have developed iFL
− An environment for program understanding
− Interactively supports the understanding of feature implementations using a feature location technique
− Can reduce understanding costs
3. Background
Program understanding is costly
− Extending or fixing an existing feature requires understanding the implementation of the target feature
− A dominant part of maintenance costs [Vestdam 04]
Our focus: feature/concept location (FL)
− Locating/extracting the code fragments which implement a given feature/concept
[Vestdam 04]: "Maintaining Program Understanding – Issues, Tools, and Future Directions", Nordic Journal of Computing, 2004.
4. FL Example (Search-based)
A new maintainer wants to understand a feature of a scheduler application: converting input time strings to schedule objects.
With search-based FL, the maintainer searches the source code with a query such as "schedule time", and the FL technique returns matching methods, e.g.:
public Time(String hour, …) { …… }
public void createSchedule() { …… }
public void updateSchedule(…) { …… }
The maintainer then reads these methods for understanding.
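The search step above can be sketched as a plain keyword match over method names. This is a minimal illustration of search-based FL, not iFL's actual implementation; the class name and method list are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Minimal sketch of search-based feature location:
// return the methods whose names contain any query keyword.
public class SimpleFeatureLocator {
    public static List<String> locate(String query, List<String> methodNames) {
        List<String> hits = new ArrayList<>();
        for (String method : methodNames) {
            String lower = method.toLowerCase(Locale.ROOT);
            for (String keyword : query.toLowerCase(Locale.ROOT).split("\\s+")) {
                if (lower.contains(keyword)) {
                    hits.add(method);
                    break; // one matching keyword is enough
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> methods = List.of("Time", "createSchedule", "updateSchedule", "initWindow");
        // "schedule time" matches Time, createSchedule, and updateSchedule.
        System.out.println(locate("schedule time", methods));
    }
}
```

Real FL techniques rank matches by score rather than returning a flat list; scoring is sketched on the static-analysis slide below.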
5. Problem 1: How to Find Appropriate Queries?
Constructing appropriate queries requires rich knowledge of the implementation
− Times: time, date, hour/minute/second
− Images: image, picture, figure
In practice, developers try several keywords for FL through trial and error
6. Problem 2: How to Fix FL Results?
Complete (optimum) FL results are rare
− Due to the accuracy of the FL technique used
− Due to individual differences in which code is appropriate
Compared with the optimum result (the code fragments that should be understood), an FL result may miss necessary code (false negatives) and include unnecessary code (false positives).
7. Our Solution: Feedback
We added two feedback processes to the FL loop:
− Query input (e.g., "schedule") → feature location (calculating scores) → a ranked result (1st: ScheduleManager.addSchedule(), 2nd: EditSchedule.inputCheck(), …)
− Selection and understanding of code fragments → relevance feedback (addition of hints) and updating queries → another FL run
The loop finishes when the user judges that he/she has read all the necessary code fragments.
8. Query Expansion
Wide query for the initial FL
− By expanding the query to its synonyms
Narrow query for subsequent FLs
− By using concrete identifiers in the source code
Example: the 1st FL expands the query "schedule" via a thesaurus (synonyms such as agenda, plan, list, time, and date) into a wide query such as "schedule* date*"; the 2nd FL narrows the query to "schedule time" using concrete identifiers found in a code fragment of the FL result:
public void createSchedule() {
  …
  String hour = …
  Time time = new Time(hour, …);
  …
}
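The widening step can be sketched as a lookup in a synonym table. The table below is a hypothetical stand-in for WordNet, which iFL uses as its thesaurus.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of query expansion: widen the initial query by keeping each
// keyword and appending its thesaurus synonyms. The synonym table here
// is a small hypothetical stand-in for WordNet.
public class QueryExpander {
    static final Map<String, List<String>> THESAURUS = Map.of(
        "schedule", List.of("agenda", "plan", "list"),
        "time", List.of("date", "hour"));

    public static List<String> expand(String query) {
        List<String> expanded = new ArrayList<>();
        for (String keyword : query.toLowerCase().split("\\s+")) {
            expanded.add(keyword); // keep the original keyword
            expanded.addAll(THESAURUS.getOrDefault(keyword, List.of()));
        }
        return expanded;
    }

    public static void main(String[] args) {
        // "schedule" widens to its synonyms for the initial FL.
        System.out.println(expand("schedule"));
    }
}
```

The narrowing step is the reverse: subsequent queries reuse exact identifiers (e.g., "time") taken from fragments the user has already inspected, so no expansion is applied.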
9. Relevance Feedback
Improving FL results by user feedback
− A hint is added when a selected code fragment is judged relevant or irrelevant to the feature
− Feedback is propagated to other fragments using dependencies
Example: the fragments in the i-th FL result have scores 1, 2, 9, and 6; after one fragment is marked relevant, propagation along dependencies yields the scores 1, 8, 11, and 6 in the (i+1)-th result.
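The propagation idea can be sketched as follows. The boost amounts (2 for the marked fragment, 1 for each fragment reached by a dependency) are illustrative only, not iFL's actual formula.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of relevance feedback: marking a fragment relevant raises its
// own score and propagates a smaller boost along its dependency edges.
// The boost values (2.0 and 1.0) are illustrative, not iFL's formula.
public class RelevanceFeedback {
    // dependencies: each int[]{from, to} is a dependency edge between fragments
    public static double[] markRelevant(double[] scores, int fragment,
                                        List<int[]> dependencies) {
        double[] updated = Arrays.copyOf(scores, scores.length);
        updated[fragment] += 2.0; // direct boost to the marked fragment
        for (int[] dep : dependencies) {
            if (dep[0] == fragment) {
                updated[dep[1]] += 1.0; // propagated boost to a dependent fragment
            }
        }
        return updated;
    }

    public static void main(String[] args) {
        double[] scores = {1, 2, 9, 6};
        List<int[]> deps = List.of(new int[]{2, 1}); // fragment 2 depends on fragment 1
        // Fragment 2 is judged relevant; fragment 1 also gains score.
        System.out.println(Arrays.toString(markRelevant(scores, 2, deps)));
    }
}
```

Marking a fragment irrelevant would work symmetrically, subtracting score instead of adding it.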
10. Supporting Tool: iFL
Implemented as an Eclipse plug-in
− For static analysis: Eclipse JDT
− For dynamic analysis: Reticella [Noda 09]
− For a thesaurus: WordNet
The iFL core obtains syntactic information from JDT, execution traces and dependencies from Reticella, and synonym information from WordNet.
22. Evaluation Results
No effect in S3
− Because the non-interactive approach was already sufficient for understanding there, not because of a fault in the interactive approach

     # Correct  # FL        Interactive  Non-interactive  Δ Costs  # Query    Overheads
     Events     Executions  Costs        Costs                     Updatings
S1   19         5           20           31               0.92     1          2
S2   7          5           8            10               0.67     1          1
S3   1          2           2            2                0.00     1          0
S4   10         6           10           13               1.00     0          2
S5   3          6           6            15               0.75     3          2
J1   10         4           20           156              0.93     10         2
J2   4          6           18           173              0.92     14         3
23. Conclusion
Summary
− We developed iFL: it interactively supports the understanding of feature implementations using FL
− iFL reduced understanding costs in 6 out of 7 cases
Future Work
− Evaluation++: on larger-scale projects
− Feedback++: more efficient relevance feedback
• Observing code browsing activities in the IDE
25. The FL Approach
Based on search-based FL, combining static and dynamic analyses
− Static analysis: matches the query (e.g., "schedule") and hints against the source code, yielding methods with their scores
− Dynamic analysis: executes the source code with a test case, yielding an execution trace (events with dependencies)
− The events are then evaluated using the method scores, yielding events with their scores (the FL result)
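One simple way to read the combination step is that each trace event inherits the static score of the method it executes. This is an assumption for illustration, not the slide's exact formula; the method names and scores are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch of combining the analyses: each event in the execution trace is
// scored with the statically computed score of the method it executes
// (0 for methods the static analysis did not score). Illustrative only.
public class EventScorer {
    public static int[] scoreEvents(List<String> trace, Map<String, Integer> methodScores) {
        int[] scores = new int[trace.size()];
        for (int i = 0; i < trace.size(); i++) {
            scores[i] = methodScores.getOrDefault(trace.get(i), 0);
        }
        return scores;
    }

    public static void main(String[] args) {
        List<String> trace = List.of("loadSchedule", "initWindow", "createSchedule");
        Map<String, Integer> scores = Map.of("loadSchedule", 20, "createSchedule", 21);
        // initWindow has no static score, so its event scores 0.
        System.out.println(Arrays.toString(scoreEvents(trace, scores)));
    }
}
```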
26. Static Analysis: Evaluating Methods
Matching queries to identifiers
The query "schedule time" is expanded via the thesaurus (schedule, agenda; time, date), and each match contributes to the basic score (BS) of a method: 20 for a match in the method name and 1 for a match in a local variable name.
Example: createSchedule() below gets BS = 21 (20 for the method name matching "schedule", plus 1 for the local variable time matching "time"):
public void createSchedule() {
  …
  String hour = …
  Time time = new Time(hour, …);
  …
}
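The BS computation can be sketched directly from the weights on the slide (20 per keyword match in the method name, 1 per match in a local variable name); the code structure itself is a hypothetical simplification of iFL's JDT-based analysis.

```java
import java.util.List;
import java.util.Locale;

// Sketch of the basic score (BS): a query keyword matching the method
// name scores 20; a keyword matching a local variable name scores 1.
// The weights come from the slide; everything else is illustrative.
public class BasicScore {
    public static int score(String methodName, List<String> localVars, List<String> keywords) {
        int bs = 0;
        for (String kw : keywords) {
            String k = kw.toLowerCase(Locale.ROOT);
            if (methodName.toLowerCase(Locale.ROOT).contains(k)) {
                bs += 20; // match in the method name
            }
            for (String var : localVars) {
                if (var.toLowerCase(Locale.ROOT).contains(k)) {
                    bs += 1; // match in a local variable name
                }
            }
        }
        return bs;
    }

    public static void main(String[] args) {
        // createSchedule() with locals hour and time, query "schedule time":
        // "schedule" matches the method name (20), "time" matches a local (1).
        System.out.println(score("createSchedule", List.of("hour", "time"),
                                 List.of("schedule", "time")));
    }
}
```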
27. Dynamic Analysis
Extracting execution traces and their dependencies by executing the source code with a test case
Execution trace:
e1: loadSchedule()
e2: initWindow()
e3: createSchedule()
e4: Time()
e5: ScheduleModel()
e6: updateList()
Dependencies (method invocation relations): e1 invokes e2, e3, and e6; e3 in turn invokes e4 and e5.
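Invocation dependencies can be recovered from a trace of enter/exit events with a call stack: when a method is entered while another is still active, the active method is its caller. This is a generic sketch of the idea, not Reticella's implementation; the event format is hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of deriving method-invocation dependencies from an execution
// trace of "enter <method>" / "exit <method>" events using a call stack.
public class TraceAnalyzer {
    public static List<String> dependencies(List<String> events) {
        List<String> edges = new ArrayList<>(); // "caller->callee"
        Deque<String> stack = new ArrayDeque<>();
        for (String event : events) {
            if (event.startsWith("enter ")) {
                String method = event.substring(6);
                if (!stack.isEmpty()) {
                    edges.add(stack.peek() + "->" + method); // top of stack is the caller
                }
                stack.push(method);
            } else if (event.startsWith("exit ")) {
                stack.pop();
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        List<String> trace = List.of(
            "enter loadSchedule",
            "enter initWindow", "exit initWindow",
            "enter createSchedule",
            "enter Time", "exit Time",
            "exit createSchedule",
            "exit loadSchedule");
        System.out.println(dependencies(trace));
    }
}
```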