mVerify™
A Million Users in a Box®


 Software Test Patterns:
Successes and Challenges

     Robert V. Binder

   Software QS-TAG 08
       Nuremberg
    November 8, 2007
Overview


 Why I Used Test Patterns
 What Have We Learned?
 Are Patterns Necessary?
 Test Design Patterns, 2.0
 Q&A




Test Design Patterns
 Software testing, c. 1995
    A large and fragmented body of knowledge
    Few ideas about testing OO software
 Challenges
    Re-interpret wealth of knowledge for OO
    Address unique OO considerations
    Systematic presentation
    Uniform analytical framework
 Patterns looked like a useful schema
    Existing templates didn’t address unique testing issues
Some Footprints

1995  Design Patterns              2003  Briand’s Experiments
1995  Beizer, Black Box Testing    2003  Dot Net Test Objects
1995  Firesmith PLOOT              2003  Microsoft Patterns Group
1995  McGregor                     2004  Java Testing Patterns
1999  TOOSMPT                      2005  JUnit Anti Patterns
2000  Tutorial Experiment          2007  Test Object Anti Patterns
2001  POST Workshops (4)           2007  Software QS-TAG


Ten Years After …

 Many new design patterns for hand-crafted test
  automation
     Elaboration of Incremental Test Framework (e.g. JUnit)
     Platform-specific or application-specific
     Narrow scope
   Few new test design patterns
   No new oracle patterns
   Attempts to generate tests from design patterns
   To date ~9,500 copies of TOOSMPT
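The “Incremental Test Framework” that JUnit elaborates can be shown in miniature. The sketch below is a hypothetical minimal version in Python, not JUnit’s actual API: a base class discovers `test_` methods, wraps each in fixture setup/teardown, and accumulates pass/fail results. New tests are added incrementally as subclasses, without touching the framework.

```python
# Minimal sketch of the Incremental Test Framework pattern
# (the structure JUnit elaborates). All names are illustrative.

class TestCase:
    """Base class: each test_ method exercises the SUT and checks an oracle."""
    def setup(self):     # fixture creation before each test
        pass
    def teardown(self):  # fixture cleanup after each test
        pass

    def run(self):
        results = []
        for name in dir(self):
            if name.startswith("test_"):
                self.setup()
                try:
                    getattr(self, name)()
                    results.append((name, "pass"))
                except AssertionError:
                    results.append((name, "fail"))
                finally:
                    self.teardown()
        return results

# The suite grows incrementally: new subclasses and test_ methods
# extend it without changing the framework above.
class StackTest(TestCase):
    def setup(self):
        self.stack = []

    def test_new_stack_is_empty(self):
        assert len(self.stack) == 0

    def test_push_then_pop_returns_item(self):
        self.stack.append(42)
        assert self.stack.pop() == 42

results = StackTest().run()
```

Running `StackTest().run()` reports each test independently; a failing assertion in one test does not stop the others, which is the core of the pattern.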


Limits on Acceptance

 Proliferation of templates
 Confusion: process vs. implementation vs. testing
 “Me-too” forms – idiosyncratic heuristics lacking
  hooks for conceptual integration
 Pattern adoption rituals
 Finger versus the moon
 “Code-first” (I code, therefore I am)
 Perceived cost of secondary modeling too high


What Have We Learned?
 Effective for articulation of insight and practice
    Requires discipline to develop
    Supports research and tool implementation
 Do not “work out of the box”
    Requires discipline in application
    Enabling factors
 Irrelevant to the uninterested, undisciplined
    Low incremental benefit
    Readily available substitutes
 Broadly influential, but not compelling

Innovators and Scribes

 A paradox
   Successful innovators don’t follow patterns, they
    create them
   Without application, there cannot be any patterns to
    discover




Wright and the Prairie School

 Wright’s “style” is a pattern language.
 “A building should appear to grow easily
 from its site. Design gently sloping roofs
 low-pitch hipped, unbroken … Keep
 proportions low. Use suppressed heavy
 chimneys. Build sheltering overhangs.
 Include low terraces. Construct garden
 walls that reach out. … Group windows in a
 rhythmic way. Use casement windows, not
 double-hung, guillotine-style windows.”

            Frank Lloyd Wright, In the Cause of Architecture, 1908

Robie House: Chicago, 1908
Wright and the Prairie School
 Wright’s own pattern language doesn’t include a key
  element of its effectiveness
    Prospect and refuge: where one can see without being seen




Heurtley House, Oak Park, 1902

Are Patterns Necessary?

 We don’t need Patterns
   for a pattern language
   for elegant solutions

 We do need Patterns
   for a trustworthy conceptual map
   to efficiently identify, share, and use solutions




21st Century Software Challenges

 Systems of Systems
   Exponential increase in complexity and impact
   Testing is the only approach that can assure
    system-scope reliability
   Realistic model-based test generation and evaluation
    is the only testing strategy that can scale for SoS
 Testing depends on software creation
   Must test the SUT, not models or patterns
   Second modeling effort is too expensive
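The model-based strategy claimed above can be shown at toy scale: a behavior model both generates the test inputs and serves as the oracle, so no second, hand-maintained expected-results artifact is needed. The two-state turnstile model and `Turnstile` SUT below are invented for illustration:

```python
# Toy model-based testing sketch: a finite-state model generates
# event sequences and also serves as the oracle.
# The turnstile model and SUT are invented for illustration.
from itertools import product

EVENTS = ("coin", "push")
MODEL = {  # (state, event) -> next state
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

class Turnstile:
    """The SUT: must be tested itself, not just its model."""
    def __init__(self):
        self.state = "locked"
    def fire(self, event):
        if event == "coin":
            self.state = "unlocked"
        elif event == "push":
            self.state = "locked"
        return self.state

def expected_state(seq, start="locked"):
    """Oracle: walk the model to predict the final state."""
    state = start
    for event in seq:
        state = MODEL[(state, event)]
    return state

def run_generated_tests(length=3):
    """Generate every event sequence of the given length and
    compare SUT behavior against the model's prediction."""
    failures = []
    for seq in product(EVENTS, repeat=length):
        sut = Turnstile()
        for event in seq:
            actual = sut.fire(event)
        if actual != expected_state(seq):
            failures.append(seq)
    return failures

failures = run_generated_tests()
```

An empty `failures` list means the SUT conformed to the model on every generated sequence; the point of the slide is that only this kind of generation and evaluation scales to SoS complexity.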



The Hardware Strategy

 Design equals implementation
 Formal design models with substantial automated
  support (MATLAB, MatrixX, Cadence …)
 Development testing must be (and is) mostly
  automatic

 Could a practical analog be found for software?



Test Design Patterns, 2.0
 Establish a common descriptive framework
    Choose a template
    http://www.taxonomywarehouse.com
 Establish value-creating applications
    Test automation
       Design
       Benchmarking
    Research
       What can be learned from classification?
 Align with Model-Driven Architecture
    Testable model criteria
    OMG Standards
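A common descriptive framework amounts to a fixed set of named fields that every cataloged pattern must fill in. One sketch of such a template as a Python dataclass, with field names loosely following the TOOSMPT-style pattern template (the names and the sample entry are one plausible choice, not a standard):

```python
# Sketch of a uniform test-design-pattern template as a data structure.
# Field names loosely follow the TOOSMPT-style template; they are a
# proposal for a descriptive framework, not an established standard.
from dataclasses import dataclass, field

@dataclass
class TestDesignPattern:
    name: str
    intent: str            # what kind of bugs the pattern targets
    context: str           # when it applies (test scope, kind of SUT)
    fault_model: str       # why the targeted bugs occur
    strategy: str          # how the test suite is produced
    oracle: str            # how expected results are determined
    entry_criteria: str    # what must hold before applying the pattern
    exit_criteria: str     # coverage/adequacy goal that ends testing
    known_uses: list = field(default_factory=list)

# Hypothetical catalog entry, paraphrasing a state-based pattern.
round_trip = TestDesignPattern(
    name="Round-trip Scenario Test",
    intent="Exercise every transition of a state-based class at least once",
    context="Class/cluster scope; behavior modeled as a state machine",
    fault_model="Missing, incorrect, or extra transitions and actions",
    strategy="Derive a transition tree from the state model; each path is a test",
    oracle="Resultant state and actions checked against the state model",
    entry_criteria="A validated state model of the SUT exists",
    exit_criteria="All transition paths in the tree exercised",
)
```

With a shared template like this, benchmarking and classification research become mechanical: patterns can be diffed field by field, and gaps (e.g. an empty `oracle`) become visible.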
Q&A


