Testability: Factors and Strategy
Published on October 28, 2010: Keynote, Google Test Automation Conference, Hyderabad, India.
Transcript

  • 1. TESTABILITY: FACTORS AND STRATEGY. Robert V. Binder, rvbinder@gmail.com. Google Test Automation Conference, Hyderabad, October 28, 2010. © 2010, Robert V. Binder
  • 2. Overview
    • Why testability matters
    • White box testability
    • Black box testability
    • Test automation
    • Strategy
    • Q&A
  • 3. WHY TESTABILITY MATTERS
  • 4. Testability Economics
    Testability defines the limits on producing complex systems with an acceptable risk of costly or dangerous defects.
    • Assumptions
      • Sooner is better
      • Escapes are bad
      • Fewer tests means more escapes
      • Fixed budget – how to optimize
    • Efficiency: average tests per unit of effort
    • Effectiveness: average probability of killing a bug per unit of effort
    • Higher testability: more, better tests at the same cost
    • Lower testability: fewer, weaker tests at the same cost
  • 5. What Makes an SUT Testable?
    • Controllability
      • What do we have to do to run a test case?
      • How hard (expensive) is it to do this?
      • Does the SUT make it impractical to run some kinds of tests?
      • Given a testing goal, do we know enough to produce an adequate test suite?
      • How much tooling can we afford?
    • Observability
      • What do we have to do to determine test pass/fail?
      • How hard (expensive) is it to do this?
      • Does the SUT make it impractical to fully determine test results?
      • Do we know enough to decide pass/fail/DNF?
  • 6. [Diagram: a detailed map of testability factors: representation (specification-based, traceable, feasible oracles; IEEE 830 and IEEE 1016), implementation (structure, fault sensitivity, encapsulation, polymorphism, inheritance, complexity, concurrency, exception handling, memory management, non-determinism), built-in test, test suite and configuration management, test tools (drivers, comparators, instrumentors, test generators, incident trackers, static code analyzers), and test process (staff capability, training, motivation, accountability, integrated V&V), all feeding into test strategy.]
  • 7. Testability Factors: [Diagram: Representation, Implementation, Built-in Test, Test Suite, Test Tools, and Test Process surrounding Testability.]
  • 8. Examples
    • GUIs. Controllability: impossible without an abstract widget set/get, brittle with one; latency; dynamic widgets, specialized widgets. Observability: cursor for structured content; noise and non-determinism in non-text output; proprietary lock-outs.
    • OS Exceptions. Controllability: 100s of OS exceptions to catch, hard to trigger. Observability: silent failures.
    • Objective-C. Controllability: test drivers can't anticipate objects defined on the fly. Observability: instrumenting objects on the fly.
    • Mocks with DBMS. Controllability: too rich to mock. Observability: DBMS has better logging.
    • Multi-tier CORBA. Controllability: getting all DOs into the desired state. Observability: tracing message propagation.
    • Cellular Base Station. Controllability: RF loading/propagation for a varying base station population. Observability: proprietary lock-outs; never "off-line".
  • 9. WHITE BOX TESTABILITY
  • 10. White Box Testability: diminished by complexity and non-deterministic dependencies (NDDs); improved by PCOs, built-in test, state helpers, and good structure.
  • 11. The Anatomy of a Successful Test
    • To reveal a bug, a test must
      • Reach the buggy code
      • Trigger the bug
      • Propagate the incorrect result
      • The incorrect result must be observed
      • The incorrect result must be recognized as such

        int scale(int j) {
            j = j - 1;   // should be j = j + 1
            j = j / 30000;
            return j;
        }

    Out of 65,536 possible values for j, only six will produce an incorrect result: -30001, -30000, -1, 0, 29999, and 30000.
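    A minimal sketch (mine, not from the slides) of a test that completes all five steps for scale( ). It assumes a 16-bit integer, which is what the slide's count of 65,536 possible values implies, and picks one of the six revealing inputs so the wrong result is propagated, observed, and recognized by the assertion.

        #include <cassert>
        #include <cstdint>

        // Hypothetical reconstruction of the slide's faulty function, using a
        // 16-bit integer to match the 65,536-value input space.
        int16_t scale(int16_t j) {
            j = j - 1;      // defect: should be j = j + 1
            j = j / 30000;
            return j;
        }

        int main() {
            // j = 30000 reaches the faulty statement, triggers the defect, and
            // propagates it to the return value: buggy 29999/30000 = 0,
            // correct 30001/30000 = 1. The assertion observes and recognizes it.
            assert(scale(30000) == 1);
            return 0;
        }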
  • 12. What Diminishes Testability?
    • Non-deterministic Dependencies
      • Race conditions
      • Message latency
      • Threading
      • CRUD shared and unprotected data
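    To make the last point concrete, a small example of my own (not from the talk): two threads CRUD-ing shared, unprotected data. The final value differs from run to run, so any assertion on it is flaky; the non-deterministic dependency, not the test, is the limiting factor.

        #include <iostream>
        #include <thread>

        int counter = 0;                         // shared, unprotected data

        void work() {
            for (int i = 0; i < 100000; ++i)
                ++counter;                       // unsynchronized read-modify-write (data race)
        }

        int main() {
            std::thread a(work), b(work);
            a.join();
            b.join();
            std::cout << counter << '\n';        // rarely 200000, and different on each run
            return 0;
        }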
  • 13. What Diminishes Testability?
    • Complexity
      • Essential
      • Accidental
  • 14. What Improves Testability?
    • Points of Control and Observation (PCOs)
    • State test helpers
    • Built-in Test
    • Well-structured code
  • 15. PCOs
    • Point of Control or Observation
      • Abstraction of any kind of interface
      • See TTCN-3 for a complete discussion
    • What does a tester/test harness have to do to activate SUT components and aspects?
      • Components are easier
      • Aspects are more interesting, but typically not directly controllable
    • What does a tester/test harness have to do to inspect SUT state or traces to evaluate a test?
      • Traces are easier, but often noisy or not sufficient
      • Embedded state observers are effective, but expensive or polluting
      • Aspects are often critical, but typically not directly observable
    • Design for testability: determine requirements for aspect-oriented PCOs
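    One way to picture a PCO in code, as an illustrative sketch only (the interface and names below are mine, not the talk's or TTCN-3's): an abstract test port the harness uses both to drive the SUT and to watch what comes back.

        #include <optional>
        #include <string>

        struct Stimulus    { std::string interface; std::string payload; };
        struct Observation { std::string interface; std::string event; };

        // A PCO abstracts one interface of the SUT for the test harness.
        class PointOfControlAndObservation {
        public:
            virtual ~PointOfControlAndObservation() = default;
            virtual void send(const Stimulus& s) = 0;            // control: inject a stimulus
            virtual std::optional<Observation> receive() = 0;    // observation: next event, if any
        };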
  • 16. State Test Helpers
    • Set state
    • Get current state
    • Logical State Invariant Functions (LSIFs)
    • Reset
    • All actions controllable
    • All events observable
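    A minimal sketch of what these helpers might look like on the TwoPlayerGame class used on the next two slides; the enum and member names are assumptions of mine, for illustration only.

        enum class GameState { GameStarted, Player1Served, Player2Served, Player1Won, Player2Won };

        class TwoPlayerGame {
        public:
            // ... production interface (p1_Start( ), p1_WinsVolley( ), ...) omitted ...

            // State test helpers, typically compiled into test builds only:
            void testSetState(GameState s, int p1, int p2) { state_ = s; p1Score_ = p1; p2Score_ = p2; } // set state
            GameState testGetState() const { return state_; }                                            // get current state
            bool isPlayer1Served() const { return state_ == GameState::Player1Served; }                  // LSIF
            bool isPlayer2Served() const { return state_ == GameState::Player2Served; }                  // LSIF
            void testReset() { state_ = GameState::GameStarted; p1Score_ = p2Score_ = 0; }               // reset

        private:
            GameState state_ = GameState::GameStarted;
            int p1Score_ = 0, p2Score_ = 0;
        };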
  • 17. Implementation and Test Models. [Diagram: UML class specifications for TwoPlayerGame and ThreePlayerGame (constructors, pN_Start( ), pN_WinsVolley( ), pN_AddPoint( ), pN_IsWinner( ), pN_IsServer( ), pN_Points( ), destructor) beside their state models: alpha, Game Started, Player N Served, Player N Won, omega, with transitions guarded on [pN_Score( ) < 20] / [pN_Score( ) == 20] and actions pN_AddPoint( ) and simulateVolley( ).]
  • 18. Test Plan and Test Size
    • K events
    • N states
    • With LSIFs: K × N tests
    • No LSIFs: K × N³ tests
    [Diagram: transition tree for ThreePlayerGame over 17 numbered events, from ThreePlayerGame( ), p1_Start( ), p2_Start( ), and p3_Start( ) through pN_WinsVolley( ) with guards [pN_Score( ) < 20] and [pN_Score( ) == 20], pN_IsWinner( ), and ~( ), spanning the states alpha, Game Started, Player N Served, Player N Won, and omega.]
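    For scale: reading K = 17 events from the diagram's numbering and counting roughly N = 9 states (alpha, Game Started, three Served, three Won, omega), which are my counts rather than figures stated on the slide, the LSIF-equipped design needs on the order of 17 × 9 = 153 tests, while the design without LSIFs needs on the order of 17 × 9³ ≈ 12,400.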
  • 19. Built-in Test
    • Asserts
    • Logging
    • Design by Contract
    • No Code Left Behind pattern
    • Percolation pattern
  • 20. No Code Left Behind
    • Forces
      • How to have extensive BIT without code bloat?
      • How to have consistent, controllable
        • Logging
        • Frequency of evaluation
        • Options for handling detection
      • Define once, reuse globally?
    • Solution
      • Dev adds an Invariant function to each class
      • Invariant calls InvariantCheck, which evaluates the necessary state
      • InvariantCheck function with global settings
      • Selective run-time evaluation
      • const and inline idiom for no object code in production
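    A rough sketch of how this could look in C++, under one reasonable reading of the slide; the names (InvariantCheck, BitConfig) and the NDEBUG switch are my assumptions, not the talk's code.

        #include <cstdio>
        #include <cstdlib>

        struct BitConfig {                 // global settings shared by every invariant check
            bool enabled = true;           // selective run-time evaluation
            bool logOnly = true;           // option for handling a detection: log vs. abort
        };
        inline BitConfig g_bitConfig;      // define once, reuse globally

        inline void InvariantCheck(bool ok, const char* what) {
            if (!g_bitConfig.enabled || ok) return;
            std::fprintf(stderr, "invariant violated: %s\n", what);   // consistent logging
            if (!g_bitConfig.logOnly) std::abort();
        }

        class Account {
        public:
            void deposit(long cents) { balance_ += cents; invariant(); }

            // const + inline: when the check is compiled out (NDEBUG), an optimizing
            // build leaves no object code behind for the invariant in production.
            inline void invariant() const {
        #ifndef NDEBUG
                InvariantCheck(balance_ >= 0, "Account: balance_ >= 0");
        #endif
            }

        private:
            long balance_ = 0;
        };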
  • 21. Percolation Pattern
    • Enforces Liskov Substitutability
    • Implement with No Code Left Behind
    [Class diagram: Base with foo( ), bar( ) and protected invariant( ), fooPre( ), fooPost( ), barPre( ), barPost( ); Derived1 adds fum( ) and Derived2 adds fee( ), each overriding the invariant and pre/post functions.]
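    A sketch of one way to wire the pattern up, under my own assumptions about the wrapper/hook split (the slide's diagram shows only the class members, not their bodies): the public operation runs the percolated contract checks, and each Derived hook calls up to Base, so preconditions are OR-ed while postconditions and invariants are AND-ed down the hierarchy, which is what keeps subclasses Liskov-substitutable.

        #include <cassert>

        class Base {
        public:
            virtual ~Base() = default;
            void foo() {                       // public operation runs the percolated contract
                assert(fooPre());
                fooImpl();
                assert(fooPost() && invariant());
            }
        protected:
            virtual void fooImpl() { /* base behavior */ }
            virtual bool invariant() const { return true; }   // base class invariant
            virtual bool fooPre()  const { return true; }     // base precondition
            virtual bool fooPost() const { return true; }     // base postcondition
        };

        class Derived1 : public Base {
        protected:
            void fooImpl() override { /* refined behavior */ }
            // Percolation: a subclass may only weaken the precondition (OR) and must
            // keep or strengthen the postcondition and invariant (AND).
            bool fooPre()  const override { return Base::fooPre()  || localPre();  }
            bool fooPost() const override { return Base::fooPost() && localPost(); }
            bool invariant() const override { return Base::invariant() && localInv(); }
        private:
            bool localPre()  const { return true; }   // Derived1's own clauses (placeholders)
            bool localPost() const { return true; }
            bool localInv()  const { return true; }
        };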
  • 22. Well-structured Code
    • Many well-established principles
    • Significant for testability:
      • No cyclic dependencies
      • Don't allow build dependencies to leak over structural and functional boundaries (levelization)
      • Partition classes and packages to minimize interface complexity
  • 23. BLACK BOX TESTABILITY
  • 24. Black Box Testability: diminished by size, nodes, variants, and weather; improved by a test model, an oracle, and automation.
  • 25. System Size
    • How big is the essential system?
      • Feature points
      • Use cases
      • Singly invocable menu items
      • Command/sub-commands
    • Computational strategy
      • Transaction processing
      • Simulation
      • Video games
    • Storage
      • Number of tables/views
  • 26. Network Extent
    • How many independent nodes?
      • Client/server
      • Tiers
      • Peer to peer
    • Minimum Spanning Tree
      • Must have at least one of each online
    [Image: MS-TPSO, Transaction Processing System Overview]
  • 27. Variants
    • How many configuration options?
    • How many platforms supported?
    • How many localization variants?
    • How many "editions"?
    • Combinational coverage
      • Try each option with every other at least once (pair-wise)
      • Pair-wise worst case: product of option sizes
      • May be reduced with good tools
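    To put numbers on that (my arithmetic, using the configuration from the "More is Less" slide below): with 20 options of five values each, exhaustive combination would need 5^20 ≈ 9.5 × 10^13 configurations, while pair-wise coverage needs at least 5 × 5 = 25 tests, and a good pair-wise tool typically gets by with a few dozen.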
  • 28. Weather – Environmental
    • The SUT has elements that cannot be practically established in the lab
      • Cellular base station
      • Expensive server farm
      • Competitor/aggressor capabilities
    • Internet – you can never step in the same river twice
    • Test environment not feasible
      • Must use the production/field system for test
      • Cannot stress the production/field system
  • 29. More is Less
    • Other things being equal, a larger system is less testable
      • Same budget spread thinner
      • Reducing system scope improves testability
    • Try to partition into independent subsystems: additive versus multiplicative extent of test suite
    • For example:
      • 10,000 Feature Points
      • 6 network node types
      • 20 options, five variables each
      • Must test when financial markets are open
    • How big is that?
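    A back-of-envelope reading of my own, not the slide's: if the subsystems cannot be isolated, the extents multiply rather than add. Even exercising each of the 10,000 feature points once per node type and once per pair-wise configuration pair would be on the order of 10,000 × 6 × 25 = 1,500,000 test runs, before any input, timing, or market-hours variation. The next slide's galaxy image makes the same point visually.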
  • 30. [Image: the M66 galaxy, 95,000 light years across; 1 pixel ≈ 400 light years; the solar system ≈ 0.0025 LY.]
  • 31. Understanding Improves Testability
    • Primary source of system knowledge
      • Documented, validated?
      • Intuited, guessed?
      • IEEE 830
    • Test Model
      • Different strokes
      • Test-ready or hints?
    • Oracle
      • Computable or judgment?
      • Indeterminate?
  • 32. TEST AUTOMATION
  • 33. Automation Improves Testability
    • Bigger systems need many more tests
    • Intellectual leverage
      • Huge coverage space
      • Repeatable
      • Scale up functional suites for load test
    • What kind of automation?
      • Harness
        • Developer harness
        • System harness
      • Capture-replay
      • Model-based
        • Generation
        • Evaluation
  • 34. Test Automation Envelope. [Chart: Reliability (Effectiveness), from 1 Nine to 5 Nines, plotted against Productivity: Tests/Hour (Efficiency), from 1 to 10,000; regions labeled Manual, Scripting, Bespoke, Model-Based, and a Model-Based Vision at the high end of both axes. Source: mVerify Corporation.]
  • 35. STRATEGY
  • 36. How to Improve Testability?
    • WBT = BIT + State Helpers + PCOs + Structure – Complexity – NDDs
      (maximize the first four terms, minimize the last two)
    • BBT = Model + Oracle + Harness – Size – Nodes – Variants – Weather
      (maximize the first three terms, minimize the last four)
  • 37. Who Owns Testability Drivers?
    [Table mapping each driver to its owners among Architects, Devs, and Test:
    • BBT drivers: Model, Oracle, Harness, PCOs, Size, Variants, Nodes, Weather
    • WBT drivers: BIT, State Helpers, Good Structure, Complexity, NDDs]
    • Testers typically don't own or control the work that drives testability
  • 38. Strategy
    [2 × 2 grid: White Box Testability (low to high) across, Black Box Testability (low to high) up:
    • High BBT, low WBT: Emphasize Functional Testing
    • High BBT, high WBT: Incentivize Repeaters
    • Low BBT, low WBT: Emphasize Code Coverage
    • Low BBT, high WBT: Manage Expectations]
  • 39. Q&A
  • 40. Media Credits
    • Robert V. Binder. Design for testability in object-oriented systems. Communications of the ACM, Sept 1994.
    • Diomidis D. Spinellis. Internet Explorer Call Tree.
    • Unknown. Dancing Hamsters.
    • Jackson Pollock. Autumn Rhythms.
    • Brian Ferneyhough. Plötzlichkeit for Large Orchestra.
    • Robert V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools.
    • Microsoft. MS-TPSO, Transaction Processing System Overview.
    • NASA. Hubble Space Telescope, M66 Galaxy.
    • Holst. The Planets – Uranus. Peabody Concert Orchestra, Hajime Teri Murai, Music Director.
    • mVerify Corporation. Test Automation Envelope.