Cues for Better Scent in Debugging
Published on

Talk at SVV - University of Luxembourg
Transcript

  • 1. Cues for Better Scent in Debugging. Rui Abreu, Dept. of Informatics Engineering, University of Porto, Portugal
  • 2. History: the birth of debugging. Your guess?
  • 3. Thanks to Alex Orso
  • 4. Can we do this automatically? Thanks to Alex Orso
  • 5. Diagnostic Performance
  • 6. Are we done? • The best-performing techniques still require the tester to inspect 10% of the code... • 100 LOC → 10 LOC • 10,000 LOC → 1,000 LOC • 1,000,000 LOC → 100,000 LOC
  • 7. Metrics. Are we measuring the right thing? • rank-based • PDG-based
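The slide contrasts rank-based and PDG-based metrics without spelling out a formula. As an illustration of the rank-based idea behind spectrum-based fault localization, here is a minimal sketch using the Ochiai similarity coefficient, a suspiciousness metric common in this literature (the exact metric is an assumption on my part, not something the slide states):

```python
# Sketch of rank-based, spectrum-based fault localization.
# Each test contributes a coverage vector (which components it executed)
# and an outcome (failed or passed); components are ranked by the Ochiai
# coefficient: exec_failed / sqrt(total_failed * (exec_failed + exec_passed)).
import math

def ochiai_ranking(spectra, outcomes):
    """spectra: list of per-test coverage vectors (1 = component executed).
    outcomes: list of per-test results (True = test failed).
    Returns (score, component) pairs, most suspicious first."""
    n_components = len(spectra[0])
    total_failed = sum(outcomes)
    scores = []
    for c in range(n_components):
        exec_failed = sum(1 for cov, fail in zip(spectra, outcomes) if cov[c] and fail)
        exec_passed = sum(1 for cov, fail in zip(spectra, outcomes) if cov[c] and not fail)
        denom = math.sqrt(total_failed * (exec_failed + exec_passed))
        scores.append((exec_failed / denom if denom else 0.0, c))
    return sorted(scores, reverse=True)

# Three tests over four components; component 1 is executed by both
# failing tests and by neither passing test, so it should rank first.
spectra = [[1, 1, 0, 1],
           [0, 1, 1, 0],
           [1, 0, 1, 1]]
outcomes = [True, True, False]
ranking = ochiai_ranking(spectra, outcomes)  # component 1 gets score 1.0
```

A rank-based evaluation then measures how far down this ranking the developer must look before reaching the actual fault, which is exactly the "inspect 10% of the code" effort figure quoted on the previous slide.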
  • 8. Case Studies (NXP/PSC)
  • 9. Human studies. A. Orso et al. observed that there is a lack of:
  • 10. Why do we need human studies? • Do developers follow the ranking? • Does perfect bug understanding exist? • How can we quantify isolation efforts?
  • 11. Ecosystem in need. Better Cues for Debugging – a framework. Wide adoption will only be possible if there is a framework which provides: • testing functionalities • debugging capabilities • integration in an IDE. Check it out at www.gzoltar.org
  • 12. Interested? • Do you want to try it out? • We are always interested in receiving feedback • Email José Carlos Campos to participate: jose.carlos.campos@fe.up.pt • Thanks!
  • 13. Conclusions • History of debugging • Spectrum-based reasoning • Human studies
  • 14. Open Research Questions • Can we automatically decide if a test fails? • Using program invariants • A sort of replacement for asserts in JUnit tests • Can we automatically suggest fixes? • Other intuitive visualisations? • How do we reduce the overall overhead? • Can we apply these principles to Web/Mobile environments? • Self-healing: architecture-based run-time fault localization (NSF project with CMU)
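The first research question above, deciding automatically whether a test fails by using program invariants instead of hand-written asserts, can be sketched as follows. This is a hypothetical toy (the slide names no tool or algorithm): it infers a simple range invariant from the outputs of passing runs, Daikon-style, and flags later runs that violate it.

```python
# Hypothetical sketch of an invariant-based test oracle: learn a simple
# [min, max] invariant over a function's return value from passing runs,
# then use violations of that invariant (rather than an explicit assert)
# to decide that a later run has failed.
def learn_range_invariant(observed_values):
    """Infer a [min, max] invariant from values seen in passing runs."""
    return min(observed_values), max(observed_values)

def violates(invariant, value):
    """True if the value falls outside the learned range."""
    low, high = invariant
    return not (low <= value <= high)

# Outputs observed across passing runs of some function under test:
passing_outputs = [0, 3, 7, 10]
inv = learn_range_invariant(passing_outputs)  # (0, 10)

# A later run returning 42 falls outside the learned range, so it is
# flagged as a likely failure; a run returning 5 is not.
flagged = violates(inv, 42)      # True
not_flagged = violates(inv, 5)   # False
```

Real invariant detectors infer far richer properties (linear relations, non-nullness, ordering), but the oracle idea is the same: the learned invariants stand in for the asserts a developer would otherwise have to write.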
  • 15. Show time