Software Testing: Models, Patterns, Tools

Overview of Software Testing, Guest Lecture at University of Illinois at Chicago. November 10, 2010

Transcript

  • 1. Software Testing: Models, Patterns, Tools. Guest Lecture, UIC CS 540, November 16, 2010. Robert V. Binder
  • 2. Overview
    • Test design pattern fly-by
    • Levels of testing
    • Case study: CBOE Direct
    • Q&A
  • 3. TEST DESIGN PATTERNS
  • 4. Test Design Patterns
    • Software testing, c. 1995
      – A large and fragmented body of knowledge
      – Few ideas about testing OO software
    • Challenges
      – Re-interpret wealth of knowledge for OO
      – Address unique OO considerations
      – Systematic presentation
      – Uniform analytical framework
    • Patterns looked like a useful schema
      – Existing templates didn't address unique testing issues
  • 5. Some Footprints
    – 1995 Design Patterns
    – 1995 Beizer, Black Box Testing
    – 1995 Firesmith PLOOT
    – 1995 McGregor
    – 1999 TOOSMPT
    – 2000 Tutorial Experiment
    – 2001 POST Workshops (4)
    – 2003 Briand's Experiments
    – 2003 Dot Net Test Objects
    – 2003 Microsoft Patterns Group
    – 2004 Java Testing Patterns
    – 2005 JUnit Anti Patterns
    – 2007 Test Object Anti Patterns
    – 2007 Software QS-TAG
  • 6. Test Design Patterns
    • Pattern schema for test design
      – Methods
      – Classes
      – Package and System Integration
      – Regression
      – Test Automation
      – Oracles
  • 7. Test Design Patterns
    • Pattern schema for test design
      – Name/Intent
      – Context
      – Fault Model
      – Strategy
      – Entry Criteria
      – Exit Criteria
      – Consequences
      – Known Uses
      – Test Model
      – Test Procedure
      – Oracle
      – Automation
  • 8. Test Design Patterns
    • Method Scope
      – Category-Partition
      – Combinational Function
      – Recursive Function
      – Polymorphic Message
    • Class/Cluster Scope
      – Invariant Boundaries
      – Modal Class
      – Quasi-Modal Class
      – Polymorphic Server
      – Modal Hierarchy
  • 9. Modal Class: Implementation and Test Models
    [Slide shows class diagrams and statecharts for TwoPlayerGame and ThreePlayerGame. TwoPlayerGame offers pN_Start( ), pN_WinsVolley( ), pN_IsWinner( ), pN_IsServer( ), and pN_Points( ) for players 1 and 2, with private pN_AddPoint( ); ThreePlayerGame extends it with the corresponding p3_ operations. The statecharts run from α through Game Started, Player N Served, and Player N Won to ω, with pN_WinsVolley( ) transitions guarded by [pN_Score( ) < 20] and [pN_Score( ) == 20].]
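
The Modal Class pattern derives test sequences directly from a statechart like the one above. As a minimal sketch of what such a test looks like in code, the JUnit 4 test below drives the TwoPlayerGame API from the class diagram through one conforming path and one sneak path; the int return type of p1_Points( ) and the IllegalStateException response to an illegal event are assumptions read into the diagram, not facts from the slide.

```java
import static org.junit.Assert.*;
import org.junit.Test;

// Sketch of a Modal Class test for the TwoPlayerGame statechart above.
public class TwoPlayerGameModalTest {

    // Conforming path: alpha -> Game Started -> Player 1 Served (loop) -> Player 1 Won.
    @Test
    public void player1WinsAfterTwentyOneVolleys() {
        TwoPlayerGame game = new TwoPlayerGame();   // alpha -> Game Started
        game.p1_Start();                            // Game Started -> Player 1 Served
        assertTrue(game.p1_IsServer());

        // p1_WinsVolley( ) [p1_Score( ) < 20] loops in Player 1 Served.
        for (int volley = 0; volley < 20; volley++) {
            game.p1_WinsVolley();
        }
        assertEquals(20, game.p1_Points());
        assertFalse(game.p1_IsWinner());

        game.p1_WinsVolley();                       // [p1_Score( ) == 20] -> Player 1 Won
        assertTrue(game.p1_IsWinner());
    }

    // Sneak path: an event that is illegal in the Game Started state.
    // The expected response (an exception) is an assumption; the pattern only
    // requires that the response match whatever the fault model specifies.
    @Test(expected = IllegalStateException.class)
    public void cannotWinVolleyBeforeServing() {
        new TwoPlayerGame().p1_WinsVolley();
    }
}
```
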
  • 10. Test Plan and Test Size
    • K events
    • N states
    • With LSIFs – K × N tests
    • No LSIFs – K × N³ tests
    [Slide shows the resulting test tree for ThreePlayerGame, covering events 1–17 (ThreePlayerGame( ), p1_Start( ), p2_Start( ), p3_Start( ), the guarded pN_WinsVolley( ) variants, pN_IsWinner( ), and ~( )) against the states alpha, Game Started, Player 1/2/3 Served, Player 1/2/3 Won, and omega.]
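
For a sense of scale, the arithmetic below plugs illustrative numbers into the two formulas, treating the 17 numbered events as K and the 9 statechart states (including alpha and omega) as N. These counts are my reading of the diagram, not figures given on the slide.

```latex
% Illustrative only: K = 17 events and N = 9 states read off the ThreePlayerGame model.
\[ \text{with LSIFs: }    K \cdot N     = 17 \times 9     = 153 \text{ tests} \]
\[ \text{without LSIFs: } K \cdot N^{3} = 17 \times 9^{3} = 12{,}393 \text{ tests} \]
```
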
  • 11. Test Design Patterns
    • Subsystem Scope
      – Class Associations
      – Round-Trip Scenarios
      – Mode Machine
      – Controlled Exceptions
    • Reusable Components
      – Abstract Class
      – Generic Class
      – New Framework
      – Popular Framework
  • 12. Test Design Patterns
    • Intra-class Integration
      – Small Pop
      – Alpha-Omega Cycle
    • Integration Strategy
      – Big Bang
      – Bottom up
      – Top Down
      – Collaborations
      – Backbone
      – Layers
      – Client/Server
      – Distributed Services
      – High Frequency
  • 13. Test Design Patterns
    • System Scope
      – Extended Use Cases
      – Covered in CRUD
      – Allocate by Profile
    • Regression Testing
      – Retest All
      – Retest Risky Use Cases
      – Retest Profile
      – Retest Changed Code
      – Retest Within Firewall
  • 14. Test Oracle Patterns
    • Smoke Test
    • Judging
      – Testing By Poking Around
      – Code-Based Testing
      – Post Test Analysis
    • Pre-Production
    • Built-in Test
    • Gold Standard
      – Custom Test Suite
      – Random Input Generation
      – Live Input
      – Parallel System
    • Reversing
    • Simulation
    • Approximation
    • Regression
    • Voting
    • Substitution
    • Equivalency
  • 15. Test Automation Patterns
    • Test Case Implementation
      – Test Case/Test Suite Method
      – Test Case/Test Suite Class
      – Catch All Exceptions
    • Test Control
      – Server Stub
      – Server Proxy
    • Test Drivers
      – TestDriver Super Class
      – Percolate the Object Under Test
      – Symmetric Driver
      – Subclass Driver
      – Private Access Driver
      – Test Control Interface
      – Drone
      – Built-in Test Driver
  • 16. Test Automation Patterns
    • Test Execution
      – Command Line Test Bundle
      – Incremental Testing Framework (e.g. JUnit)
      – Fresh Objects
    • Built-in Test
      – Coherence idiom
      – Percolation
      – Built-in Test Driver
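
As a concrete (but minimal) illustration of the Test Case/Test Suite Class and Incremental Testing Framework entries above, the sketch below uses JUnit 4, the framework the slide names as an example. The member class names are placeholders for whatever test case classes exist.

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Test Case/Test Suite Class sketch: each test case is a class, a suite class
// composes them, and suites can themselves be nested to integrate clusters
// incrementally. The two member classes named here are placeholders.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    TwoPlayerGameModalTest.class,
    ThreePlayerGameModalTest.class
})
public class GameClusterTestSuite {
    // No body needed: the annotations define the suite.
}
```
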
  • 17. Percolation Pattern
    • Enforces Liskov Substitutability
    • Implement with No Code Left Behind
    [Slide shows a class diagram: Base declares foo( ) and bar( ) plus protected invariant( ), fooPre( ), fooPost( ), barPre( ), and barPost( ); Derived1 adds fum( ) with fumPre( )/fumPost( ), Derived2 adds fee( ) with feePre( )/feePost( ), and each subclass overrides the inherited operations and assertion methods.]
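
A minimal Java sketch of how Percolation can be implemented, following the structure of the diagram above: every class supplies invariant( ) and per-method pre/post check methods, and each override calls the inherited check so that evaluation percolates up to the root, enforcing substitutability at run time. The concrete contracts here (x >= 0, amount > 0, the x <= 100 bound) are invented for illustration; run with java -ea to enable the assertions.

```java
// Percolation sketch: contract checks are ordinary protected methods, and each
// override delegates to the superclass version so checking percolates upward.
class Base {
    protected int x;

    public void foo(int amount) {
        assert fooPre(amount) : "foo() precondition violated";
        x += amount;
        assert fooPost() && invariant() : "foo() postcondition or invariant violated";
    }

    protected boolean invariant()        { return x >= 0; }
    protected boolean fooPre(int amount) { return amount > 0; }
    protected boolean fooPost()          { return x >= 0; }
}

class Derived1 extends Base {
    // An invariant may only be strengthened: AND the local condition with the
    // inherited one, which percolates the check up to Base.
    @Override
    protected boolean invariant() {
        return super.invariant() && x <= 100;
    }

    // A precondition may only be weakened: OR the local condition with the
    // inherited one.
    @Override
    protected boolean fooPre(int amount) {
        return super.fooPre(amount) || amount == 0;
    }
}
```
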
  • 18. Ten Years After …
    • Many new design patterns for hand-crafted test automation
      – Elaboration of Incremental Test Framework (e.g. JUnit)
      – Platform-specific or application-specific
      – Narrow scope
    • Few new test design patterns
    • No new oracle patterns
    • Attempts to generate tests from design patterns
    • To date, 10,000+ copies of TOOSMPT
  • 19. What Have We Learned?
    • Test patterns are effective for articulation of insight and practice
      – Requires discipline to develop
      – Supports research and tool implementation
    • Do not "work out of the box"
      – Requires discipline in application
      – Enabling factors
    • Irrelevant to the uninterested, undisciplined
      – Low incremental benefit
      – Readily available substitutes
    • Broadly influential, but not compelling
  • 20. TEST AUTOMATION LEVELS
  • 21. What is good testing?
    • Value creation (not technical merit)
      – Effectiveness (reliability/quality increase)
      – Efficiency (average cost per test)
    • Levels
      – 1: Testing by poking around
      – 2: Manual Testing
      – 3: Automated Test Script/Test Objects
      – 4: Model-based
      – 5: Full Test Automation
    • Each level: 10x improvement
  • 22. Level 1: Testing by Poking Around
    • Manual "exploratory" testing of the system under test
      – Low coverage
      – Not repeatable
      – Can't scale
      – Inconsistent
  • 23. Level 2: Manual Testing
    • Manual test design/generation produces manual test input for the system under test; test setup and test results evaluation are likewise manual
      – 1 test per hour
      – Not repeatable
  • 24. Level 3: Automated Test Script
    • Manual test design/generation feeds test script programming; scripts drive the system under test, followed by test results evaluation
      – 10+ tests per hour
      – Repeatable
      – High change cost
  • 25. Level 4: Automated Model-based
    • Model-based test design/generation feeds automatic test execution against the system under test, followed by test results evaluation
      – 1,000+ tests per hour
      – High fidelity
  • 26. Level 5: Total Automation
    • Model-based test design/generation, automatic test execution, automated test setup, and automated test results evaluation
      – 10,000 TPH (tests per hour)
  • 27. MODEL-BASED TESTING OF CBOE DIRECT
  • 28. CBOE Direct®
    • Electronic technology platform built and maintained in-house by the Chicago Board Options Exchange (CBOE)
      – Multiple trading models configurable by product
      – Multiple matching algorithms (options, futures, stocks, warrants, single stock futures)
      – Best features of screen-based trading and floor-based markets
    • Electronic trading on CBOE, the CBOE Futures Exchange (CFX), the CBOE Stock Exchange (CBSX), and others
    • As of April 2008:
      – More than 188,000 listed products
      – More than 3.8 billion industry quotes handled from OPRA on a peak day
      – More than two billion quotes on a peak day
      – More than 684,000 orders on a peak day
      – More than 124,000 peak quotes per second
      – Less than 5 ms response time for quotes
  • 29. Development
    • Rational Unified Process
      – Six development increments, 3 to 5 months
      – Test design/implementation parallel with app dev
    • Three+ years; version 1.0 live Q4 2001
    • About 90 use cases, 650 KLOC Java
    • CORBA/IDL distributed objects
    • Java (services and GUI), some XML
    • Oracle DBMS
    • HA Sun server farm
    • Many legacy interfaces
  • 30. Test Models Used
    • Extended Use Case
      – Defines feature usage profile
      – Input conditions, output actions
    • Mode Machine
      – Use case sequencing
    • Invariant Boundaries
    • "Stealth Requirements Engineering"
  • 31. Behavior Model
    • Extended Use Case pattern: a decision table with five test-case variants (columns 1–5)
      – Test input conditions (variable/object and value/state, e.g. Widget 1 = Query, Widget 2 = Set Time, Widget 3 = DEL, Host Name = Pick Valid, Host Name = Enter Host Name); the logic combinations control automatic test input generation and data selection
      – Required actions (variable/interface and value/result, e.g. Host Name Display, Host Time Display, CE Time Display, Error Message) for automatic test result checking
      – Relative frequency per variant (0.35, 0.20, 0.30, 0.10, 0.05); the usage profile controls the statistical distribution of test cases
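
As a rough sketch of how the relative-frequency row can drive generation (illustrative Java, not the Prolog simulator actually used), a weighted sampler picks test-case variants so that the generated suite follows the usage profile; the variant "names" are just the column numbers from the table.

```java
import java.util.NavigableMap;
import java.util.Random;
import java.util.TreeMap;

// Samples Extended Use Case variants in proportion to their relative frequency.
public class UsageProfileSampler {
    private final NavigableMap<Double, Integer> cumulative = new TreeMap<>();
    private final Random random = new Random();
    private double total = 0.0;

    public void addVariant(int variant, double relativeFrequency) {
        total += relativeFrequency;
        cumulative.put(total, variant);   // store the running total as the upper bound
    }

    /** Returns a variant number, drawn with probability proportional to its frequency. */
    public int nextVariant() {
        double r = random.nextDouble() * total;
        return cumulative.higherEntry(r).getValue();
    }

    public static void main(String[] args) {
        UsageProfileSampler profile = new UsageProfileSampler();
        profile.addVariant(1, 0.35);
        profile.addVariant(2, 0.20);
        profile.addVariant(3, 0.30);
        profile.addVariant(4, 0.10);
        profile.addVariant(5, 0.05);

        // In a real generator each choice would expand into concrete input
        // conditions and expected actions; here we just print the choices.
        for (int i = 0; i < 20; i++) {
            System.out.println("Run test variant " + profile.nextVariant());
        }
    }
}
```
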
  • 32. Load Model
    • Vary input rate over time, following any quantifiable pattern
      – Arc
      – Flat
      – Internet Fractal
      – Negative ramp
      – Positive ramp
      – Random
      – Spikes
      – Square wave
      – Waves
    [Chart: actual "Waves" load profile]
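
As a small illustration of one of these shapes (a positive ramp), the sketch below spreads an event budget over fixed intervals so the input rate grows linearly. The interval count and event budget are made-up numbers, not values from the project.

```java
// Positive-ramp load profile sketch: the event rate increases linearly across
// equal-length intervals while the total number of events stays fixed.
public class PositiveRampProfile {

    /** Events to submit in interval i of n, for a linearly increasing rate. */
    static long eventsInInterval(int i, int n, long totalEvents) {
        // Weight interval i by (i + 1); weights 1..n sum to n(n+1)/2.
        double weight = (i + 1) / (n * (n + 1) / 2.0);
        return Math.round(totalEvents * weight);
    }

    public static void main(String[] args) {
        long total = 100_000;   // illustrative event budget for one run
        int intervals = 10;     // illustrative number of time slices
        for (int i = 0; i < intervals; i++) {
            System.out.printf("interval %d: %d events%n",
                    i + 1, eventsInInterval(i, intervals, total));
        }
    }
}
```
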
  • 33. MBT Challenges/Solutions
    • One-time sample not effective, but fresh test suites too expensive → Simulator generates a fresh, accurate sample on demand
    • Too expensive to develop expected results → Oracle generates expected results on demand
    • Too many test cases to evaluate → Comparator automates checking
    • Profile/requirements change → Incremental changes to rule base
    • SUT interfaces change → Common agent interface
  • 34. Simulator
    • Discrete event simulation of user behavior
    • 25 KLOC, Prolog
      – Rule inversion
      – "Speaks"
    • Load profile
      – Time domain variation
      – Orthogonal to operational profile
    • Each event assigned a "port" and submit time
  • 35. Test Environment
    • Simulator, etc. on a typical desktop
    • Dedicated, but reduced, server farm
    • Live data links
    • ~10 client workstations for automatic test agents
      – Adapter for each System Under Test (SUT) interface
      – Test agents execute independently
    • Distributed processing/serialization challenges
      – Loosely coupled, best-effort strategy
      – Embed server-side serialization monitor
  • 36. Automated Run Evaluation
    • Post-process evaluation
    • Oracle accepts output of simulator
    • About 500 unique rules (20 KLOC Prolog)
    • Verification
      – Splainer: result/rule backtracking tool (Prolog, 5 KLOC)
      – Rule/run coverage analyzer
    • Comparator (Prolog, 3 KLOC)
      – Extract transaction log
      – Post-run database state
      – End-to-end invariants
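
The comparator itself was roughly 3 KLOC of Prolog; purely as an illustration of the post-process idea, the Java sketch below replays an extracted transaction log and flags violations of one invented end-to-end invariant (total fills never exceed the ordered quantity). The record layout and the invariant are assumptions, not details of CBOE Direct.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Post-process invariant check over a transaction log (illustrative only).
public class EndToEndInvariantCheck {

    // Hypothetical log record: event type, order id, and quantity.
    record LogRecord(String type, String orderId, long quantity) { }

    /** Returns the ids of orders whose total fills exceed the ordered quantity. */
    static List<String> overfilledOrders(List<LogRecord> log) {
        Map<String, Long> ordered = new HashMap<>();
        Map<String, Long> filled = new HashMap<>();
        for (LogRecord r : log) {
            if (r.type().equals("NEW_ORDER")) {
                ordered.merge(r.orderId(), r.quantity(), Long::sum);
            } else if (r.type().equals("FILL")) {
                filled.merge(r.orderId(), r.quantity(), Long::sum);
            }
        }
        return filled.entrySet().stream()
                .filter(e -> e.getValue() > ordered.getOrDefault(e.getKey(), 0L))
                .map(Map.Entry::getKey)
                .toList();
    }
}
```
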
  • 37. Daily Test Process
    • Plan each day's test run
      – Load profile, total volume
      – Configuration/operational scenarios
    • Run simulator
      – 100,000 events per hour
      – FTP event files to test agents
    • Test agents submit
    • Run oracle/comparator
    • Prepare bug reports
    • 1,000 to 750,000 unique tests per day
  • 38. Technical Achievements
    • AI-based user simulation generates test suites
    • All inputs generated under operational profile
    • High-volume oracle and evaluation
    • Every test run unique and realistic (about 200)
    • Evaluated functionality and load response with fresh tests
    • Effective control of many different test agents (COTS/custom, Java/4Test/Perl/SQL/proprietary)
  • 39. Technical Problems
    • Stamp coupling
      – Simulator, Agents, Oracle, Comparator
    • Re-factoring rule relationships, Prolog limitations
    • Configuration hassles
    • Scale-up constraints
    • Distributed schedule brittleness
    • Horn Clause Shock Syndrome
  • 40. Results
    • Revealed about 1,500 bugs over two years
      – ~5% showstoppers
    • Five-person team, huge productivity increase
    • Achieved proven high reliability
      – Last pre-release test run: 500,000 events in two hours, no failures detected
      – No production failures
  • 41. Q&A
