Are Automated Debugging Techniques Actually Helping Programmers?
  • Comment: It is nice to see UNIVAC FLIT mentioned. It is still used to prototype and debug UNISYS OS 2200 ClearPath systems.
Speaker Notes
  • Just ask this guy.
  • Maybe you have some questions…
  • Cons
  • 58,300 papers on automated debugging… 1,170 this year! (Google Scholar)
  • Both linear and exhaustive?
  • Do two things
  • Intuitively, mental model, etc.
  • 37% of the visits jumped more than one position and, on average, each jump skipped 10 positions. Each participant zigzagged, with an average of 10.3 zigzags and an overall range between 1 and 36 zigzags. Sometimes participants would scan the ranked list to find a statement that might confirm their hypothesis about the cause of the failure; other times they skipped statements that did not appear relevant.
  • Or are we blind to the bugs in front of us?
  • Infographics, N=X
  • Can we leverage hypotheses?
  • Does perfect bug understanding exist? Are automated debugging tools faster?
  • Siemens: 138, 346, 299, 297, 402, 483, 512; SIR: 62 subjects (229)

Presentation Transcript

  • Are Automated Debugging Techniques Actually Helping Programmers?
    Chris Parnin
    Georgia Tech
    @chrisparnin (twitter)
    Alessandro (Alex) Orso
    Georgia Tech
    @alexorso (twitter)
  • Finding bugs can be hard…
  • Automated debugging to the rescue!
    I’ll help you find the location of the bug!
  • How it works (Ranking-Based)
    I have calculated the most likely location of the bug!
    Give me a failing program.
    Calculating…
    Here is your ranked list of statements.
  • How it works (Ranking-Based)
    I have calculated the most likely location of the bug!
    Give me input.
    But how does a programmer use a ranked list of statements?
    Calculating…
    Here is your ranked list of statements.
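For concreteness, here is a minimal sketch of how a ranking-based (spectrum-based) technique can score and rank statements. The Ochiai formula and the coverage data below are illustrative assumptions, not the specific tool used in the study:

    import java.util.*;

    public class SuspiciousnessRanking {
        // Ochiai suspiciousness: failed(s) / sqrt(totalFailed * (failed(s) + passed(s))),
        // where failed(s)/passed(s) count the failing/passing tests that executed s.
        static double ochiai(int failed, int passed, int totalFailed) {
            double denom = Math.sqrt((double) totalFailed * (failed + passed));
            return denom == 0 ? 0.0 : failed / denom;
        }

        public static void main(String[] args) {
            int totalFailed = 2;
            // Hypothetical coverage data: statement -> {passing hits, failing hits}
            Map<String, int[]> coverage = new LinkedHashMap<>();
            coverage.put("StdXMLParser.java:438", new int[]{1, 2});
            coverage.put("StdXMLParser.java:202", new int[]{5, 2});
            coverage.put("StdXMLParser.java:159", new int[]{9, 1});

            // Print statements from most to least suspicious: the "ranked list".
            coverage.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                    (Map.Entry<String, int[]> e) ->
                        -ochiai(e.getValue()[1], e.getValue()[0], totalFailed)))
                .forEach(e -> System.out.printf("%-24s suspiciousness=%.2f%n",
                    e.getKey(), ochiai(e.getValue()[1], e.getValue()[0], totalFailed)));
        }
    }

Statements covered mostly by failing tests float to the top; the open question the slides raise next is how a programmer actually consumes that list.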
  • Conceptual Model
    1)
    2)
    3)
    4)

    Here is a list of places to check out
    Ok, I will check out your suggestions one by one.
  • Conceptual Model
    1)
    2)
    3)
    4)

    Found the bug!
  • A Skeptic
    Does the conceptual model make sense?
    Have we evaluated it?
  • Let’s see…
    Over 50 years of research on automated debugging.
    2001. Statistical Debugging
    1999. Delta Debugging
    1981. Weiser. Program Slicing
    1962. Symbolic Debugging (UNIVAC FLIT)
  • Did you see anything?
  • Only 5 papers have evaluated
    automated debugging techniques
    with actual programmers.
    • Most find no benefit
    • Most done on programs < 100 LOC
  • More generally, two points
    Techniques rely on
    two strong assumptions
  • Assumption #1: Perfect bug understanding must also exist when using an automated tool.
    Do you see a bug?
  • Assumption #2
    Programmer inspects statements linearly and exhaustively until finding bug.
    Is this realistic?
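Assumption #2 can be stated computationally: if inspection really is linear and exhaustive, the cost of debugging is just the rank of the faulty statement. A minimal sketch, with a made-up ranked list:

    import java.util.List;

    public class LinearInspectionModel {
        // Under Assumption #2, effort = 1-based rank of the fault, because the
        // programmer checks entries one by one from the top of the list.
        static int statementsInspected(List<String> rankedList, String faultyStatement) {
            return rankedList.indexOf(faultyStatement) + 1;
        }

        public static void main(String[] args) {
            List<String> ranked = List.of("s17", "s42", "s3", "s8", "s25"); // hypothetical
            System.out.println("Statements inspected before finding the bug: "
                + statementsInspected(ranked, "s8")); // prints 4
        }
    }

Most evaluations of automated debugging score techniques exactly this way; the study asks whether real programmers behave like this at all.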
  • How can we evaluate the conceptual model?
    How could we measure the benefit of an automated tool?
  • Conceptual model: What if we gave a developer a list of statements to inspect?
    How would they use the list?
    Would they be able to see the bug after visiting it?
    Is ranking important?
  • Benefit: What if we evaluate programmers with and without automated debuggers?
    >
    ?
    We could also observe what works and what doesn’t.
  • Study Setup
    34 Developers
    2 Debugging Tasks
    Automated debugging tool
  • Study Setup
    Participants:
    34 developers
    MS/PhD students
    Different levels of expertise (low, medium, high)
  • Study Setup
    Software subjects:
    Tetris (2.5 KLOC)
    NanoXML (4.5 KLOC)
  • Study Setup
    Tools:
    Traditional debugger
    Eclipse ranking plugin
    (logged activity)
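The plugin's source is not shown in the talk; the following is only a sketch of the kind of activity log such a plugin might keep (one line per statement visit), with hypothetical names throughout:

    import java.io.*;
    import java.time.Instant;

    public class NavigationLogger {
        private final PrintWriter log;

        // Appends one CSV line per visit: timestamp, rank position, statement id.
        public NavigationLogger(String path) throws IOException {
            log = new PrintWriter(new FileWriter(path, true), true); // append, autoflush
        }

        public void recordVisit(int rankPosition, String statementId) {
            log.printf("%s,%d,%s%n", Instant.now(), rankPosition, statementId);
        }
    }

A log of this shape is enough to reconstruct the jump and zigzag patterns reported in the results.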
  • Study Setup
    Tasks:
    Debugging fault
    30 minutes per task
    Questionnaire at end
  • Bugs
    Bug #1: Pressing the rotate key causes
    the square figure to move up!
  • Bugs
    When running the NanoXML program (main is in class Parser1_vw_v1), the following exception is thrown:
    Exception in thread "main" net.n3.nanoxml.XMLParseException:
    XML Not Well-Formed at Line 19: Closing tag does not match opening tag: `ns:Bar' != `:Bar'
    at net.n3.nanoxml.XMLUtil.errorWrongClosingTag(XMLUtil.java:497)
    at net.n3.nanoxml.StdXMLParser.processElement(StdXMLParser.java:438)
    at net.n3.nanoxml.StdXMLParser.scanSomeTag(StdXMLParser.java:202)
    at net.n3.nanoxml.StdXMLParser.processElement(StdXMLParser.java:453)
    at net.n3.nanoxml.StdXMLParser.scanSomeTag(StdXMLParser.java:202)
    at net.n3.nanoxml.StdXMLParser.scanData(StdXMLParser.java:159)
    at net.n3.nanoxml.StdXMLParser.parse(StdXMLParser.java:133)
    at net.n3.nanoxml.Parser1_vw_v1.main(Parser1_vw_v1.java:50)
     
    The input, testvm_22.xml, contains the following input xml document:
    <Foo a="test">
      <ns:Bar>
        <Blah x="1" ns:x="2"/>
      </ns:Bar>
    </Foo>
    Bug #2: Exception on input xml document.
  • Study Setup: Groups
  • Study Setup: Groups
    A
    B
  • Study Setup: Groups
    C
    D
    Rank
    Rank
  • Results
  • How do developers use a ranked list?
    Low performers did follow the list.
    Survey says they searched through statements.
    37% of visits jumped, skipping 10 positions on average.
    Navigation pattern zigzagged (avg. 10 zigzags).
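A sketch of how the jump and zigzag counts could be computed from a logged sequence of rank positions. The definitions assumed here (a jump moves more than one position; a zigzag is a reversal of direction) are inferred from the slides, and the visit sequence is made up:

    public class NavigationAnalysis {
        public static void main(String[] args) {
            int[] visits = {1, 2, 14, 3, 4, 20, 5}; // hypothetical rank positions, in visit order
            int jumps = 0, skipped = 0, zigzags = 0;
            for (int i = 1; i < visits.length; i++) {
                int delta = visits[i] - visits[i - 1];
                if (Math.abs(delta) > 1) {            // a jump skips positions
                    jumps++;
                    skipped += Math.abs(delta) - 1;
                }
                if (i >= 2) {
                    int prev = visits[i - 1] - visits[i - 2];
                    if (prev * delta < 0) zigzags++;  // direction reversed: a zigzag
                }
            }
            System.out.printf("jumps=%d, avg positions skipped=%.1f, zigzags=%d%n",
                jumps, jumps == 0 ? 0.0 : (double) skipped / jumps, zigzags);
        }
    }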
  • Is perfect bug understanding realistic?
    Only 1 out of 10 programmers who clicked on the bug stopped the investigation.
    The others spent, on average, ten minutes continuing the investigation.
  • Are automated tools speeding up debugging?
    =
    Traditional
    Automated group
    No

  • Are automated tools speeding up debugging?
    No

    =
    Traditional
    Automated group
    Rank
    No
    =

    Traditional
    Automated group
  • Are automated tools speeding up debugging?
    =
    Traditional
    Automated group
    No

  • But… Stratifying Participants
    High Performers
    Medium Performers
    Low Performers






  • Significant difference for “experts”
    High Performers
    On average, 5 minutes faster
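What the stratified comparison amounts to, as a sketch: average task-completion times per performance stratum. The times below are illustrative numbers chosen to mirror the reported 5-minute difference, not the study's data, and the real analysis used significance tests rather than bare means:

    import java.util.Arrays;

    public class StratifiedComparison {
        static double mean(double[] xs) {
            return Arrays.stream(xs).average().orElse(Double.NaN);
        }

        public static void main(String[] args) {
            // Hypothetical completion times (minutes), high-performer stratum only
            double[] withTool    = {12, 14, 11, 13};
            double[] traditional = {18, 17, 19, 16};
            System.out.printf("high performers: tool=%.1f min, traditional=%.1f min, diff=%.1f min%n",
                mean(withTool), mean(traditional), mean(traditional) - mean(withTool));
        }
    }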


  • Are automated tools speeding up debugging?
    No

    =
    Traditional
    Automated group
    Yes!

    Experts
    Experts
    >
    Traditional
    Automated group
  • Observations
    Developers searched through statements.
    Developers without the tool fixed symptoms (not the problem).
    Developers wanted explanations rather than recommendations.
  • Future directions
  • Moving beyond fault space reduction
    We can keep building better tools.
    But we can’t keep abstracting away the human.
  • Performing further studies
    Do different granularities work better for inspection? Documents? Methods?
    How do different interfaces or visualizations impact the technique?
    Do other automated debugging techniques fare any better?
  • How do developers use a ranked list?
    Is perfect bug understanding realistic?
    Are Automated Debugging Tools Helpful?
    Human studies, human studies, human studies!
  • 35 years of Scientific Progress
    1969: 800,000 miles
    2004: 64,000,000 miles
  • 30 years of Scientific Progress
    1981: 63.5 LOC (median of 4 programs)
    2011: 352 LOC (median of 8 programs)