Cues for Better Scent in Debugging
Talk at SVV - University of Luxembourg


Transcript

  • 1. Cues for Better Scent in Debugging. Rui Abreu, Dept. of Informatics Engineering, University of Porto, Portugal
  • 2. History: the birth of debugging. Your guess?
  • 3. Thanks to Alex Orso
  • 4. Can we do this automatically? Thanks to Alex Orso
  • 5. Diagnostic Performance
  • 6. Are we done? The best-performing techniques still require the tester to inspect about 10% of the code:
       • 100 LOC → 10 LOC
       • 10,000 LOC → 1,000 LOC
       • 1,000,000 LOC → 100,000 LOC
  • 7. Metrics: are we measuring the right thing?
       • rank-based (a sketch of a rank-based metric follows)
       • PDG-based
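
    To make the rank-based option concrete: spectrum-based fault localization records, per test, which components were covered (the program spectrum) and whether the test passed or failed, then ranks components by a similarity coefficient. The sketch below computes the Ochiai coefficient, ef / sqrt((ef + nf) * (ef + ep)), where ef/ep count failed/passed tests covering a component and nf counts failed tests that do not cover it. It is a minimal self-contained illustration with made-up spectra, not the GZoltar implementation.

        import java.util.*;

        /** Minimal sketch of rank-based spectrum fault localization (Ochiai). */
        public class OchiaiRanking {

            /**
             * spectra[t][c] is true iff test t covered component c;
             * failed[t] is true iff test t failed.
             * Returns components sorted by descending suspiciousness.
             */
            static List<Map.Entry<Integer, Double>> rank(boolean[][] spectra, boolean[] failed) {
                int numComponents = spectra[0].length;
                int totalFailed = 0;
                for (boolean f : failed) if (f) totalFailed++;

                Map<Integer, Double> score = new LinkedHashMap<>();
                for (int c = 0; c < numComponents; c++) {
                    int ef = 0, ep = 0;   // failed/passed tests that cover component c
                    for (int t = 0; t < spectra.length; t++) {
                        if (spectra[t][c]) {
                            if (failed[t]) ef++; else ep++;
                        }
                    }
                    int nf = totalFailed - ef;   // failed tests that do NOT cover c
                    double denom = Math.sqrt((double) (ef + nf) * (ef + ep));
                    score.put(c, denom == 0.0 ? 0.0 : ef / denom);
                }
                List<Map.Entry<Integer, Double>> ranking = new ArrayList<>(score.entrySet());
                ranking.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
                return ranking;
            }

            public static void main(String[] args) {
                // Hypothetical spectra: 3 tests over 4 components; only test 2 fails.
                boolean[][] spectra = {
                    {true,  true,  false, false},   // test 0 (passes)
                    {false, true,  true,  false},   // test 1 (passes)
                    {false, true,  false, true}     // test 2 (fails)
                };
                boolean[] failed = {false, false, true};
                for (Map.Entry<Integer, Double> e : rank(spectra, failed)) {
                    System.out.printf("component %d: %.2f%n", e.getKey(), e.getValue());
                }
            }
        }

    On these made-up spectra, component 3 ranks first (score 1.00) because it is covered only by the failing test, while component 1, covered by every test, scores 0.58; the tester would inspect components in that order.
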
  • 8. Case Studies (NXP/PSC)
  • 9. Human studies. A. Orso et al. observed that there is a lack of:
  • 10. Why do we need human studies?
       • Do developers follow the ranking?
       • Does perfect bug understanding exist?
       • How can we quantify isolation efforts?
  • 11. Ecosystem in need. Wide adoption of better cues for debugging will only be possible if there is a framework which provides:
       • testing functionalities
       • debugging capabilities
       • integration in an IDE
       Check it out at www.gzoltar.org
  • 12. Interested? Do you want to try it out?
       • We are always interested in receiving feedback
       • Email José Carlos Campos to participate: jose.carlos.campos@fe.up.pt
       Thanks!
  • 13. Conclusions
       • History of debugging
       • Spectrum-based reasoning
       • Human studies
  • 14. Open research questions
       • Can we automatically decide if a test fails?
         • Using program invariants
         • A sort of replacement for asserts in JUnit tests (see the sketch after this list)
       • Can we automatically suggest fixes?
       • Other intuitive visualisations?
       • How do we reduce the overall overhead?
       • Can we apply these principles to Web/Mobile environments?
       • Self-healing: architecture-based run-time fault localization (NSF project with CMU)
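
    As a hedged sketch of the first question above: instead of a hand-written expected value, a test could be judged against invariants inferred from passing runs (Daikon-style). Everything below is hypothetical: the RangeInvariant class, its bounds, and applyDiscount are illustrative stand-ins, not part of GZoltar or JUnit.

        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        /** Sketch: deciding test failure with an inferred invariant instead of a hand-written assert. */
        public class InvariantOracleTest {

            /** Hypothetical invariant inferred from passing runs: a plausible output range. */
            static class RangeInvariant {
                final double lo, hi;
                RangeInvariant(double lo, double hi) { this.lo = lo; this.hi = hi; }
                boolean holds(double observed) { return lo <= observed && observed <= hi; }
            }

            // Imagine these bounds were mined from previously passing executions.
            private final RangeInvariant discountInvariant = new RangeInvariant(0.0, 100.0);

            private double applyDiscount(double price, double percent) {
                return price - price * percent / 100.0;   // toy unit under test
            }

            @Test
            public void discountStaysWithinLearnedBounds() {
                double result = applyDiscount(80.0, 25.0);
                // No hand-written expected value: the learned invariant decides pass/fail.
                assertTrue("invariant violated: " + result, discountInvariant.holds(result));
            }
        }

    The pass/fail verdict then comes from the learned invariant rather than an explicit assert, which is the kind of mechanism that would let a framework decide automatically whether a test fails.
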
  • 15. Show time