Mining Software Repositories (MSR) 2010 presentation: Predicting the Severity of a Reported Bug

Transcript

  • 1. Predicting the Severity of a Reported Bug. Ahmed Lamkanfi, Serge Demeyer (Ansymo) | Emanuel Giger (s.e.a.l.) | Bart Goethals (ADReM). Proceedings of the 2010 7th IEEE Working Conference on Mining Software Repositories (MSR), p. 1-10.
  • 2. The severity of a bug is important: it is a critical factor in deciding how soon the bug needs to be fixed, i.e. when prioritizing bugs.
  • 3. Priority is business.
  • 4. Severity is technical.
  • 5. ✓ Severity varies: trivial, minor, normal, major, critical and blocker; clear guidelines exist to classify the severity of bug reports.
    ✓ Both a short and a longer description of the problem.
    ✓ Bugs are grouped according to products and components, e.g. plug-ins and bookmarks are components of the product Firefox (a sketch of such a report follows below).
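A Bugzilla-style report with the properties just listed might look like the following Python sketch; the field names and values here are illustrative assumptions, not the exact Bugzilla schema.

    # Hypothetical shape of a bug report record (field names assumed).
    report = {
        "product":    "Firefox",
        "component":  "Bookmarks",   # bugs are grouped per product/component
        "severity":   "critical",    # trivial, minor, normal, major, critical, blocker
        "short_desc": "Crash when importing bookmarks",
        "long_desc":  "Steps to reproduce: import a large bookmarks file; "
                      "the browser locks up and then segfaults.",
    }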
  • 6. Can we accurately predict the severity of a reported bug by analyzing its textual descriptions? Also the following questions:
    ➡ Potential indicators?
    ➡ Short versus long description?
    ➡ Per component versus cross-component?
  • 7. Approach
  • 8. We use text mining to classify bug reports (a minimal sketch follows below):
    • Bayesian classifier: based on the probabilistic occurrence of words
    • training and evaluation period
    • in first instance, per component
    The classifier separates non-severe bugs (trivial, minor) from severe bugs (major, critical, blocker); reports with the default severity (normal) are left undecided.
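A minimal sketch of this setup in Python with scikit-learn; the paper does not prescribe a particular toolkit, and the toy reports below are invented for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def to_label(severity):
        """Map severities to the two classes; 'normal' stays undecided."""
        if severity in ("trivial", "minor"):
            return "non-severe"
        if severity in ("major", "critical", "blocker"):
            return "severe"
        return None  # 'normal' (default) is left out of training

    reports = [  # toy training data, invented for illustration
        ("typo in bookmarks dialog title", "trivial"),
        ("inconsistent padding in preferences page", "minor"),
        ("browser hangs on startup and must be killed", "critical"),
        ("segfault when loading large pages", "blocker"),
        ("crash and data loss after reboot", "major"),
        ("cosmetic glitch in the titlebar", "trivial"),
    ]
    texts  = [t for t, s in reports if to_label(s)]
    labels = [to_label(s) for t, s in reports if to_label(s)]

    # Bag-of-words counts; the Bayesian classifier learns per-class word probabilities.
    vectorizer = CountVectorizer()
    clf = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

    print(clf.predict(vectorizer.transform(["editor freezes with a deadlock"])))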
  • 9. Evaluation of the approach:
    ✓ precision and recall (a sketch of the computation follows below)
    ✓ cases drawn from the open-source community: Mozilla, Eclipse and GNOME
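Precision and recall are computed per class on the held-out evaluation period. A sketch with hypothetical labels, using scikit-learn's metrics:

    from sklearn.metrics import precision_recall_fscore_support

    y_true = ["severe", "severe", "non-severe", "non-severe", "severe"]
    y_pred = ["severe", "non-severe", "non-severe", "non-severe", "severe"]

    # precision = TP / (TP + FP), recall = TP / (TP + FN), per class
    prec, rec, _, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=["non-severe", "severe"])
    for cls, p, r in zip(["non-severe", "severe"], prec, rec):
        print(f"{cls}: precision={p:.3f} recall={r:.3f}")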
  • 10. Results
  • 11. How does the basic approach perform? (per component, using the short description)

        component            Non-severe           Severe
                             precision  recall    precision  recall
        Mozilla: Layout      0.701      0.785     0.752      0.653
        Mozilla: Bookmarks   0.692      0.703     0.698      0.687
        Eclipse: UI          0.707      0.633     0.668      0.738
        Eclipse: JDT-UI      0.653      0.714     0.685      0.621
        GNOME: Calendar      0.828      0.783     0.794      0.837
        GNOME: Contacts      0.767      0.706     0.728      0.785
  • 12. What keywords are good indicators of severity? (stemmed terms)

        Component                 Non-severe                             Severe
        Mozilla Firefox: General  inconsist, favicon, credit, extra,     fault, machin, reboot, reinstal,
                                  consum, licens, underlin, typo,        lockup, seemingli, perman,
                                  inspector, titlebar                    instantli, segfault, compil
        Eclipse: JDT UI           deprec, style, runnabl, system, cce,   hang, freez, deadlock, thread,
                                  tvt35, whitespac, node, put, param     slow, anymor, memori, tick,
                                                                         jvm, adapt
        GNOME: Mailer             mnemon, outbox, typo, pad, relat,      deadlock, sigsegv, snapshot,
                                  follow, titl, high, caus, acceler,     segment, core, unexpectedli,
                                  decod, reflec                          build, loop
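One way to recover indicative terms like these from the trained classifier is to rank words by the gap between their per-class log probabilities; this continues the earlier sketch ('clf' and 'vectorizer' as defined there), and the paper's exact term-ranking procedure may differ. The stemmed tokens above (e.g. 'freez', 'deprec') suggest a Porter-style stemmer was applied during tokenization.

    import numpy as np

    words = np.array(vectorizer.get_feature_names_out())
    sev = list(clf.classes_).index("severe")
    non = list(clf.classes_).index("non-severe")
    # feature_log_prob_[i, j] is log P(word j | class i)
    score = clf.feature_log_prob_[sev] - clf.feature_log_prob_[non]

    print("severe indicators:    ", words[np.argsort(score)[-5:]])
    print("non-severe indicators:", words[np.argsort(score)[:5]])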
  • 13. How does the approach perform when using the longer description?

        component                 Non-severe           Severe
                                  precision  recall    precision  recall
        Mozilla: Layout           0.583      0.961     0.890      0.314
        Mozilla: Bookmarks        0.536      0.963     0.820      0.166
        Mozilla: Firefox general  0.578      0.948     0.856      0.308
        Eclipse: UI               0.548      0.976     0.892      0.197
        Eclipse: JDT-UI           0.547      0.973     0.881      0.195
        Eclipse: JDT-Text         0.570      0.988     0.955      0.257
  • 14. How does the approach perform when combining bugs from different components?

        component  Non-severe           Severe
                   precision  recall    precision  recall
        Mozilla    0.704      0.750     0.733      0.685
        Eclipse    0.693      0.553     0.628      0.755
        GNOME      0.817      0.737     0.760      0.835

    ✓ A much larger training set is necessary: ± 2000 reports instead of ± 500 per severity!
  • 15. Conclusions:
    ✓ It is possible to predict the severity of a reported bug.
    ✓ The short description is the better source for predictions.
    ✓ The cross-component approach works, but requires more training samples.
