
Model-Based Testing: Why, What, How


Juniper Networks Ignite! Testing Conference. Sunnyvale, California, November 9, 2011.
Overview of model-based testing. Two case studies. Thumbnail introduction to fee-based and free MBT tools.


Transcript

  • 1. Model-Based Testing: Why, What, How. Bob Binder, System Verification Associates. Juniper Systems Testing Conference, November 9, 2011.
  • 2. Overview
    – What is Model-Based Testing?
    – Testing Economics
    – Case Studies: Automated Derivatives Trading; Microsoft Protocol Interoperability
    – Product Thumbnails
    – Real Testers of …
    – Q&A
  • 3. Why?
    – For Juniper: reduce cost of testing, reduce time to market, reduce cost of quality, increase competitive advantage.
    – For you: focus on the System Under Test (SUT), not test hassles; an engineering discipline with a rigorous foundation; enhanced effectiveness and prestige; the future of testing.
  • 4. WHAT IS MODEL-BASED TESTING?
  • 5. “All Testing is Model-Based”
    – Patterns for test design: methods, classes, package and system integration, regression, test automation, oracles.
    – 35 patterns, each a test meta-model.
  • 6. What is a Test Model? [Diagram: the SUT design model – TwoPlayerGame and ThreePlayerGame class diagrams with their UML state machines (states such as Game Started, Player 1/2/3 Served, Player 1/2/3 Won; events such as p1_Start( ) and p1_WinsVolley( ), guarded by the player’s score) – shown alongside the test model derived from it with the Mode Machine test design pattern.]
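To show what such a test model can look like in code rather than UML, here is a minimal sketch of the TwoPlayerGame state machine as a transition table. The state, event, and guard names follow the slide; the Python layout itself is an assumption for illustration, not part of the original deck.

```python
# Minimal sketch of the TwoPlayerGame state machine as a test model.
# State, event, and guard names follow the slide; the structure is illustrative.
ALPHA, GAME_STARTED, P1_SERVED, P2_SERVED, P1_WON, P2_WON, OMEGA = (
    "alpha", "GameStarted", "Player1Served", "Player2Served",
    "Player1Won", "Player2Won", "omega")

# (current state, event, guard) -> next state
# Guards refer to the server's score before the volley, as in the diagram.
TRANSITIONS = {
    (ALPHA,        "TwoPlayerGame()", None):           GAME_STARTED,
    (GAME_STARTED, "p1_Start()",      None):           P1_SERVED,
    (GAME_STARTED, "p2_Start()",      None):           P2_SERVED,
    (P1_SERVED,    "p1_WinsVolley()", "p1_Score<20"):  P1_SERVED,
    (P1_SERVED,    "p1_WinsVolley()", "p1_Score==20"): P1_WON,
    (P1_SERVED,    "p2_WinsVolley()", None):           P2_SERVED,
    (P2_SERVED,    "p2_WinsVolley()", "p2_Score<20"):  P2_SERVED,
    (P2_SERVED,    "p2_WinsVolley()", "p2_Score==20"): P2_WON,
    (P2_SERVED,    "p1_WinsVolley()", None):           P1_SERVED,
    (P1_WON,       "p1_IsWinner()",   None):           P1_WON,   # returns TRUE
    (P1_WON,       "~()",             None):           OMEGA,
    (P2_WON,       "p2_IsWinner()",   None):           P2_WON,   # returns TRUE
    (P2_WON,       "~()",             None):           OMEGA,
}
```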
  • 7. Model-based Test Suite
    – N+ Strategy: start at α; follow transition paths; stop at ω or at an already-visited state; three loop iterations; assumes a state observer; try all sneak paths.
    – [Diagram: the numbered transitions of the ThreePlayerGame model (ThreePlayerGame( ), p1/p2/p3_Start( ), p1/p2/p3_WinsVolley( ) with score guards, p1/p2/p3_IsWinner( ), ~( )) and the resulting N+ test suite tree from alpha through Game Started and the Served/Won states to omega.]
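For illustration only, here is a small sketch of the kind of graph traversal the N+ strategy implies: start at alpha, extend a path along each outgoing transition, and end a branch at omega or at a state that has already been expanded. Real N+ testing also exercises loop iterations and sneak paths, which this sketch omits; the function and data-structure names are assumptions.

```python
from collections import defaultdict

def round_trip_paths(transitions, alpha="alpha", omega="omega"):
    """Enumerate test sequences: breadth-first over the state graph,
    terminating a branch at omega or at a state already expanded."""
    # Group outgoing transitions by source state.
    out = defaultdict(list)
    for (src, event, guard), dst in transitions.items():
        out[src].append((event, guard, dst))

    tests, frontier, expanded = [], [[(None, None, alpha)]], set()
    while frontier:
        path = frontier.pop(0)
        state = path[-1][2]
        if state == omega or state in expanded:
            tests.append(path[1:])          # drop the synthetic first step
            continue
        expanded.add(state)
        for step in out[state]:
            frontier.append(path + [step])
    return tests

# Example: generate the path skeleton for the TwoPlayerGame model above.
# for test in round_trip_paths(TRANSITIONS):
#     print(" -> ".join(f"{e}{'[' + g + ']' if g else ''}" for e, g, _ in test))
```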
  • 8. Automated Model-based Testing
    – Software that represents an SUT so that test inputs and expected results can be computed: a useful abstraction of SUT aspects; algorithmic test input generation; algorithmic expected result generation; many possible data structures and algorithms.
    – SUT interface for control and observation: abstraction is critical; generated and/or hand-coded.
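To make “algorithmic expected result generation” concrete, here is a minimal sketch of an executable model for the TwoPlayerGame example: just enough state to predict what the SUT should report. The class and method names are illustrative assumptions, not part of the deck.

```python
class TwoPlayerGameModel:
    """Executable model: just enough state to predict the SUT's expected results."""
    def __init__(self):
        self.score = {1: 0, 2: 0}
        self.winner = None

    def wins_volley(self, player):
        # Mirrors the guarded transitions: the volley won at 20 points ends the game.
        if self.winner is None:
            if self.score[player] == 20:
                self.winner = player
            self.score[player] += 1

    def expected_points(self, player):
        return self.score[player]

    def expected_is_winner(self, player):
        return self.winner == player

# Replaying one generated input sequence against both this model and the SUT
# (through an adapter) lets the model's predictions serve as the test oracle.
```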
  • 9. How MBT Improves Quality [Diagram: Develop Requirements (reveals missing, incorrect requirements) → Model (reveals ambiguous, missing, contradictory, incorrect, obscured, incomplete requirements; model errors and omissions) → Generate Inputs (test sequences) and Expected Outputs (test oracle) → Control/Observe the SUT → Evaluate (bugs, reliability estimate). Coverage is measured against requirements, model, and code. Stobie et al., © 2010 Microsoft. Adapted with permission.]
  • 10. Typical Test Configuration [Diagram: the test suite runs on a test suite host OS and drives the System Under Test on its own OS; a control agent and adapters sit between them, communicating over a transport on each side.]
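As an illustration of the adapter’s control/observe role in this configuration, here is a minimal Python sketch. The interface names and the run loop are assumptions for illustration, not the API of any particular MBT tool.

```python
from abc import ABC, abstractmethod

class Adapter(ABC):
    """Maps abstract model actions onto concrete SUT stimuli (control) and
    lifts concrete SUT responses back to the model's vocabulary (observation)."""

    @abstractmethod
    def control(self, action: str, **params) -> None:
        """Send one abstract test step to the SUT over its transport."""

    @abstractmethod
    def observe(self) -> dict:
        """Collect the SUT response and abstract it for comparison."""

def run(test_sequence, expected_results, adapter):
    """Drive the SUT through the adapter and compare each observation
    with the expected result computed from the model."""
    verdicts = []
    for (action, params), expected in zip(test_sequence, expected_results):
        adapter.control(action, **params)
        actual = adapter.observe()
        verdicts.append(("pass" if actual == expected else "fail", action))
    return verdicts
```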
  • 11. Typical MBT Environment [Diagram: the MBT tool draws on requirements and design databases and feeds the bug database; the generated test suite, control agent, and adapters drive the System Under Test and its code stack over transports; the test suite host and test manager sit alongside the development environment and configuration management.]
  • 12. TESTING ECONOMICS
  • 13. Show Me the Money. How much of this … for one of these?
  • 14. Testing by Poking Around (manual “exploratory” testing of the System Under Test)
    – Pros: no tooling costs; no testware costs; quick start; opportunistic; qualitative feedback.
    – Cons: subjective, wide variation; low coverage; not repeatable; can’t scale; inconsistent.
  • 15. Manual Testing (manual test design, test setup, test input, and results evaluation against the System Under Test)
    – Pros: flexible, no SUT coupling; systematic coverage; no tooling costs; no testware cost; usage validation.
    – Cons: about 1 test per hour; usually not repeatable; not scalable; inconsistent; tends toward “sunny day” tests.
  • 16. Hand-coded Test Driver (manual test design plus test driver programming against the System Under Test)
    – Pros: 10+ tests per hour; repeatable; predictable; consistent; supports Continuous Integration and TDD.
    – Cons: tooling costs; testware costs; brittle, high maintenance cost; short half-life; technology focus.
  • 17. Model-based Testing (modeling, automated generation, automated setup and execution against the System Under Test)
    – Pros: 1,000+ tests per hour; maintain the model, not testware; intellectual control; explore complex spaces; consistent coverage.
    – Cons: tooling costs; training costs; paradigm shift; still need manual and coded tests.
  • 18. Test Automation Envelope [Chart: reliability (effectiveness) from 1 nine to 5 nines versus productivity in tests per hour (efficiency) from 1 to 10,000; manual testing sits at the low end, automated drivers in the middle, and model-based testing at the high end of both axes.]
  • 19. CASE STUDY: REAL TIME DERIVATIVES TRADING
  • 20. Real Time Derivatives Trading
    – “Screen-based trading” over a private network: 3 million transactions per hour; 15 billion dollars per day.
    – Six development increments over 3 years, 3 to 5 months per iteration; the testing cycle shadows the development increments.
    – QA staff test productivity: one test per hour.
  • 21. System Under Test
    – Unified Process; about 90 use cases, 650 KLOC of Java.
    – CORBA/IDL distributed object model; HA Sun server farm; multi-host Oracle DBMS.
    – Many interfaces: GUI (trading floor); many high-speed program trading users; many legacy inputs/outputs.
  • 22. MBT: Challenges and Solutions
    – A one-time sample is not effective, but fresh test suites are too expensive → the simulator generates a fresh, accurate sample on demand.
    – Too expensive to develop expected results → the oracle generates expected results on demand.
    – Too many test cases to evaluate → the comparator automates checking.
    – Profile/requirements change → incremental changes to the rule base.
    – SUT interfaces change → a common agent interface.
  • 23. Test Input Generation
    – Simulation of users: use case profile; 50 KLOC of Prolog.
    – Load profile: time-domain variation, orthogonal to the event profile.
    – Each generated event is assigned a “port” and a submit time.
    – 1,000 to 750,000 unique tests for a 4-hour session.
    – [Charts: event counts per event type on a log scale, and events per second over session time in seconds.]
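The sampling idea behind the simulator can be shown in a short sketch. The real generator was roughly 50 KLOC of Prolog; this Python fragment, with an invented event profile and parameters, only illustrates drawing events under a use-case profile and stamping each with a port and submit time.

```python
import random

# Illustrative use-case profile: relative frequency of each event type (assumed values).
PROFILE = {"new_order": 0.55, "cancel": 0.20, "modify": 0.15, "quote_request": 0.10}

def generate_session(n_events, ports, events_per_second, seed=None):
    """Draw events under the operational profile and assign each a port and a
    submit time consistent with a (flat) load profile."""
    rng = random.Random(seed)
    names, weights = zip(*PROFILE.items())
    events, clock = [], 0.0
    for _ in range(n_events):
        clock += rng.expovariate(events_per_second)   # inter-arrival time
        events.append({
            "event": rng.choices(names, weights)[0],  # profile-weighted choice
            "port": rng.choice(list(ports)),
            "submit_time": round(clock, 3),
        })
    return events

# e.g. generate_session(10_000, ports=range(1, 33), events_per_second=800, seed=1)
```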
  • 24. Automated Evaluation
    – Oracle: processes all test inputs; about 500 unique rules; generates the end-of-session “book”.
    – Comparator: compares the SUT “book” to the oracle “book”.
    – Verification: “Splainer” rule backtracking; rule/run coverage analyzer.
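A minimal sketch of the comparator idea on this slide: diff the oracle’s end-of-session book against the SUT’s book. The dictionary layout and field names are assumptions, not the project’s actual data format.

```python
def compare_books(oracle_book, sut_book):
    """Position-by-position comparison of the oracle's end-of-session book
    with the SUT's book; keys and values here are illustrative."""
    mismatches = []
    for key in oracle_book.keys() | sut_book.keys():
        expected, actual = oracle_book.get(key), sut_book.get(key)
        if expected != actual:
            mismatches.append({"position": key, "expected": expected, "actual": actual})
    return mismatches

# verdict = "pass" if not compare_books(oracle_book, sut_book) else "fail"
```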
  • 25. Test Harness [Diagram: the simulator, oracle, Splainer, and comparator each connect to the SUT test run through adapters; the run produces test verdict reports.]
  • 26. Technical Achievements
    – AI-based user simulation generates test suites; all inputs generated under an operational profile.
    – High-volume oracle and evaluation.
    – Every test run unique and realistic (about 200 runs).
    – Evaluated functionality and load response with fresh tests.
    – Effective control of many different test agents (COTS/custom; Java/4Test/Perl/SQL/proprietary).
  • 27. Technical Problems
    – Stamp coupling among simulator, agents, oracle, and comparator.
    – Re-factoring rule relationships; Prolog limitations.
    – Configuration hassles; scale-up constraints; distributed schedule brittleness.
    – Horn Clause Shock Syndrome.
  • 28. Results
    – Revealed about 1,500 bugs over two years; 5% showstoppers.
    – Five-person team, huge productivity increase: 1 TPH versus 1,800 TPH.
    – Achieved proven high reliability: the last pre-release test run executed 500,000 events in two hours with no failures detected; no production failures.
    – Abandoned by successor QA staff.
  • 29. CASE STUDY: MICROSOFT PROTOCOL INTEROPERABILITY
  • 30. Challenges
    – Prove interoperability to a Federal Judge and court-appointed scrutineers.
    – Validation of the documentation, not the as-built implementation.
    – Is each TD all a third party needs to develop a client that interoperates with an existing service, or a service that interoperates with existing clients?
    – Only use over-the-wire messages.
  • 31. Microsoft Protocols
    – Remote APIs for services: Remote Desktop, Active Directory, File System, Security, many others.
    – All product groups: Windows Server, Office, Exchange, SQL Server, others.
    – 500+ protocols.
  • 32. Microsoft Technical Document (TD)
    – Protocols are published as “Technical Documents”, one TD per protocol.
    – Black-box spec, no internals; all data and behavior specified with text.
  • 33. Published Technical Docs: http://msdn.microsoft.com/en-us/library/cc216513(PROT.10).aspx
  • 34. Validating Interoperability with MBT [Diagram: Technical Document → analysis → test requirements and data/behavior specification statements → modeling → model, which approximates a third-party implementation and validates consistency with the actual Windows implementation. Model assertions generate and check the responses of actual Windows services (Windows Server 2000, 2003, 2008) through a model-based test suite and test execution. Stobie et al., © 2010 Microsoft. Adapted with permission.]
  • 35. Protocol Quality Assurance Process (TD v1 … TD vn; roles: authors, test suite developers, reviewers)
    – Study: scrutinize the TD and define the test strategy; review: TD ready? strategy OK?
    – Plan: complete test requirements and a high-level test plan; review: test requirements OK? plan OK?
    – Design: complete the model and adapters; review: model OK? adapter OK?
    – Final: generate and run the test suite, prepare user documentation; review: coverage OK? test code OK?
  • 36. Productivity
    – “On average, model-based testing took 42% less time than hand-coding tests.”
    – Average hours per test requirement, by task: document review 1.1; test requirement extraction 0.8; model authoring 0.5; traditional test coding 0.6; adapter coding 1.2; test case execution 0.6; final adjustments 0.3; total, all phases 5.1.
    – Threshold result: nearly all requirements had fewer than three tests; much greater gain for full coverage. (Grieskamp et al.)
  • 37. Results
    – Published 500+ TDs, ~150,000 test requirements.
    – 50,000+ bugs, most identified before tests were run.
    – Many Plugfests, many 3rd-party users; high-interest test suites released as open source.
    – Met all regulator requirements, on time; the Judge closed the DOJ anti-trust case on May 12, 2011.
    – ~20 MSFT product teams now using Spec Explorer.
  • 38. TOOL THUMBNAILS. All product or company names mentioned herein may be trademarks or registered trademarks of their respective owners.
  • 39. CertifyIt (Smartesting)
    – Model: use cases, OCL; custom test stereotypes; keyword/action abstraction.
    – Notation: UML 2, OCL, custom stereotypes, UML Testing Profile.
    – UML support: yes.
    – Requirements traceability: interfaces to DOORS, HP QC, others.
    – Generation: constraint solver selects a minimal set of boundary values.
    – Oracle: postconditions in OCL; computed result for each test point.
    – Adapter: natural-language option; HP GUI drivers.
    – Typical SUT: financial, smart card.
    – Notable: top-down formally defined behavior; data stores; GUI model.
  • 40. Conformiq Designer (Conformiq)
    – Model: state machines with coded events/actions.
    – Notation: statecharts, Java.
    – UML support: yes.
    – Requirements traceability: integrated requirements, traceability matrix.
    – Generation: graph traversal: state, transition, 2-switch.
    – Oracle: model postconditions; any custom function.
    – Adapter: output formatter; TTCN and user-defined.
    – Typical SUT: telecom, embedded.
    – Notable: timers; parallelism and concurrency; on-the-fly mode.
  • 41. MaTeLo (All4Tec)
    – Model: state machine with transition probabilities (Markov); data domains, event timing.
    – Notation: decorated state machine.
    – UML support: no.
    – Requirements traceability: integrated requirements and trace matrix; import from DOORS, others.
    – Generation: most likely path, user defined, all transitions, Markov simulation; subset or full model.
    – Oracle: user conditions; Matlab and Simulink.
    – Adapter: EXAM mappers; Python output formatter.
    – Typical SUT: hardware-in-the-loop; automotive, rail.
    – Notable: many standards-based device interfaces; supports software reliability engineering.
  • 42. Automatic Test Generation (IBM/Rational)
    – Model: sequence diagrams, flow charts, statecharts, codebase.
    – Notation: UML, SysML, UML Testing Profile.
    – UML support: yes.
    – Requirements traceability: DOORS integration; design model traceability.
    – Generation: parses generated C++ to generate test cases; reaches states, transitions, operations, and events for modeled classes.
    – Oracle: user code.
    – Adapter: user code, merge generation.
    – Typical SUT: embedded.
    – Notable: part of a systems engineering tool chain.
  • 43. Spec Explorer (Microsoft)
    – Model: C# classes with “action” method pre/postconditions; regular expressions define a “machine” of classes/actions.
    – Notation: C#.
    – UML support: sequence diagrams.
    – Requirements traceability: API for logging user-defined requirements.
    – Generation: for any machine, a constraint solver finds a feasible short or long path of actions; generates C# test code.
    – Oracle: action postconditions; any custom function.
    – Adapter: user code.
    – Typical SUT: Microsoft protocols, APIs, products.
    – Notable: pairwise data selection; on-the-fly mode; use any .NET capability.
  • 44. T-VEC/RAVE (T-VEC)
    – Model: Boolean system with data boundaries; SCR types and modules; hierarchic modules.
    – Notation: SCR-based tabular definition; accepts Simulink.
    – UML support: no.
    – Requirements traceability: RAVE requirements management; interface to DOORS, others.
    – Generation: constraint solver identifies test points.
    – Oracle: solves constraints for expected values.
    – Adapter: output formatter; HTML, C++, Java, Perl, others.
    – Typical SUT: aerospace, DoD.
    – Notable: Simulink for input, oracle, and model checking; MCDC model coverage; non-linear and real-valued constraints.
  • 45. Close Cousins
    – Data generators: grammar-based; pairwise/combinatoric; fuzzers.
    – TTCN-3 compilers; load generators; model checkers; model-driven development tool chains.
  • 46. REAL TESTERS OF …
  • 47. MBT User Survey
    – Part of the 1st Model-based Testing User Conference; offered to many other tester communities.
    – In progress; preliminary analysis of responses to date.
    – https://www.surveymonkey.com/s/JSJVDJW
  • 48. MBT Users, SUT Domain [Bar chart, 0% to 40%: Gaming, Social Media, Other, Supercomputing, Communications, Software Infrastructure, Embedded, Transaction Processing.]
  • 49. MBT Users, Company Size [Bar chart, 0% to 35%, by number of employees: 1-10, 11-100, 101-500, 501-1000, 1001-10000, 10000+.]
  • 50. MBT Users, Software Process [Bar chart, 0% to 25%: Other, Ad Hoc, Waterfall, Spiral, Incremental, XP/TDD, CMMI level 2+, Agile.]
  • 51. How Used? [Charts: stage of adoption (Evaluation, Pilot Project, Rollout, Routine use; 0% to 60%) and tool provider (In House, Open Source, Commercial; 0% to 80%).]
  • 52. What is the Overall MBT Role? [Charts: scope at which MBT is used (Unit, Component, System; 0% to 80%) and overall test effort by testing mode (Manual, Hand-coded, Model-based; 25% to 40%).]
  • 53. How Long to be Proficient? Median: 100 hours. [Histogram of hours of training/use to become proficient: 1-40, 80-120, 160+; 0% to 50%.]
  • 54. How Bad are Common Problems? [Chart rating each problem as worse than expected, not an issue, or better than expected: misses bugs; can’t integrate with other test assets; developing SUT interfaces too hard; inadequate coverage; developing test models is too difficult; oracle ineffective; too difficult to update model; model “blows up”.]
  • 55. MBT Effect on Time, Cost, Quality? [Chart: percent change from baseline, better versus worse, for bugs escaped, overall testing costs, and overall testing time (e.g., 36% reported fewer escaped bugs, 0% more bugs); reported values include 36%, 35%, 28%, 23%, 18%, and 0%.]
  • 56. MBT Traction [Pie charts: “Overall, how effective is MBT?” (Not at all 0%, Slightly 4%, Moderately 21%, with the balance Very or Extremely) and “How likely are you to continue using MBT?” (No effect 4%, Slightly 13%, with the balance Moderately or Extremely).]
  • 57. CONCLUSIONS
  • 58. What Have We Learned?
    – Test engineering with a rigorous foundation.
    – Global best practice; broad applicability.
    – Mature commercial offerings; many proof points.
    – Commitment and planning necessary.
    – 10x to 1,000x improvement possible.
  • 59. Q&A. rvbinder@gmail.com
  • 60. Image Credits. Unless noted below, all content herein Copyright © Robert V. Binder, 2011.
    – Pensive Boy: Resource Rack, http://sites.google.com/site/resourcerack/mental
    – Isoquant Chart: MA Economics Blog, http://ma-economics.blogspot.com/2011/09/optimum-factor-combination.html
    – Derivatives Trading Floor: Money Mavens, http://medillmoneymavens.com/2009/02/11/cboe-and-cbot-a-story-in-two-floors/
    – E. Barrett Prettyman US Federal Courthouse: Earth in Pictures, http://www.earthinpictures.com/world/usa/washington,_d.c./e._barrett_prettyman_united_states_courthouse.html
    – Server Room: 1U Server Rack, http://1userverrack.net/2011/05/03/server-room-4/
    – Utility Knife: Marketing Tenerife, http://marketingtenerife.com/marketing-tools-in-tenerife/
    – Software Tester: IT Career Coach, http://www.it-career-coach.net/2010/02/14/the-job-of-software-testing-quality-assurance-career
    – Conclusion: European Network and Information Security Agency (ENISA), http://www.enisa.europa.eu/media/news-items/summary-of-summer-school/image/image_view_fullscreen
