Experience with a Profile-based Automated Testing Environment

Industry Track, ISSRE 2003, November 18, 2003. Five Levels of Automated Testing; case study of level 4 model-based testing.



  1. mVerify™ "A Million Users in a Box"™
     Experience with a Profile-based Automated Testing Environment
     Presented at ISSRE 2003, November 18, 2003
     Robert V. Binder, mVerify Corporation, www.mVerify.com

  2. 2. OverviewLevels of Automated TestingSystem Under TestApproachObservations © 2003 mVerify Corporation 2
  3. Musa's Observation
     "Testing driven by an operational profile is very efficient because it identifies failures (and hence the faults causing them), on average, in order of how often they occur. This approach rapidly increases reliability ... because the failures that occur most frequently are caused by the faulty operations used most frequently." (IEEE Software, March 1993)

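The mechanism behind Musa's argument can be made concrete with a short sketch of my own (not from the talk): draw each test operation at random in proportion to its profile probability, so the most frequently used operations, and therefore the most frequently triggered failures, are exercised most often. The operation names and probabilities below are hypothetical; Java is used only for illustration.

    // Minimal sketch: weighted random selection of the next test operation
    // from an operational profile. Names and probabilities are hypothetical.
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class OperationalProfile {
        private final Map<String, Double> probabilities = new LinkedHashMap<>();
        private final Random random;

        public OperationalProfile(long seed) {
            this.random = new Random(seed); // fixed seed makes a run reproducible
        }

        public void add(String operation, double probability) {
            probabilities.put(operation, probability);
        }

        /** Draws one operation; higher-probability operations are drawn more often. */
        public String nextOperation() {
            double r = random.nextDouble();
            double cumulative = 0.0;
            String last = null;
            for (Map.Entry<String, Double> e : probabilities.entrySet()) {
                cumulative += e.getValue();
                last = e.getKey();
                if (r < cumulative) {
                    return e.getKey();
                }
            }
            return last; // guard against rounding error in the cumulative sum
        }

        public static void main(String[] args) {
            OperationalProfile profile = new OperationalProfile(42L);
            profile.add("enterOrder", 0.60);    // hypothetical usage frequencies
            profile.add("cancelOrder", 0.25);
            profile.add("queryPosition", 0.15);
            for (int i = 0; i < 5; i++) {
                System.out.println(profile.nextOperation());
            }
        }
    }
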
  4. Promise of Profile-Based Testing
     • Tester's point of view versus the reliability analyst's
       • Maximize reliability within a fixed budget
       • Measurement of reliability is not the primary goal
     • Profile-based testing is optimal when
       • Available information is already used
       • Resources must be allocated to test a complex SUT
     • Many significant practical obstacles

  5. Testing by Poking Around
     Manual "exploratory" testing of the system under test:
     • Not effective
     • Low coverage
     • Not repeatable
     • Can't scale

  6. Manual Testing
     Manual test design/generation, manual test input, test setup, and test results evaluation against the system under test.
     • 1 test per hour
     • Not repeatable

  7. Automated Test Script
     Manual test design/generation feeds test script programming; scripts drive the system under test.
     • 10+ tests per hour
     • Repeatable results
     • Brittle

  8. Automated Generation/Agent
     Model-based test design/generation with automatic test execution against the system under test.
     • 1,000+ tests per hour
     • High fidelity
     • Evaluation limited

  9. Full Test Automation
     Model-based test design/generation, automated test setup, automatic test execution, and automated test results evaluation against the system under test.
     • Advanced Mobile App Testing Environment
     • Q3 2005

  10. Application Under Test
     • E-commerce/securities market: screen-based trading over a private network
     • 3 million transactions per hour
     • 15 billion dollars per day
     • 3 years of development; version 1.0 live Q4 2001

  11. Development Process/Environment
     • Rational Unified Process
     • About 90 use cases, 600 KLOC of Java
     • Java (services and GUI), some XML
     • Oracle DBMS
     • Many legacy interfaces
     • CORBA/IDL distributed object model
     • High-availability Sun server farm
     • Dedicated test environment

  12. Profile-based Testing Approach
     • Executable operational profile
     • Simulator generates realistic, unique test suites
     • Loosely coupled automated test agents
     • Oracle/comparator automatically evaluate results
     • Supports integration, functional, and stress testing

  13. Model-based Testing
     • Profile alone insufficient
     • Extended Use Case
       • RBSC test methodology
       • Defines feature usage profile
       • Input conditions, output actions
     • Mode Machine
     • Invariant Boundaries

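As an illustration only (the RBSC Extended Use Case notation itself is not reproduced here), one way to hold a feature usage profile as data is a small decision-table-like structure pairing input conditions and expected output actions with a relative frequency. All names and values below are hypothetical.

    // Minimal sketch, assuming a decision-table representation of an
    // extended use case; field names and example data are hypothetical.
    import java.util.List;

    public class ExtendedUseCase {
        /** One variant (row) of an extended use case decision table. */
        record Variant(List<String> inputConditions,
                       List<String> outputActions,
                       double relativeFrequency) { }

        public static void main(String[] args) {
            // Hypothetical "Enter Order" use case with two variants.
            List<Variant> enterOrder = List.of(
                new Variant(List.of("account is open", "price within limit band"),
                            List.of("order accepted", "acknowledgement sent"),
                            0.90),
                new Variant(List.of("account is open", "price outside limit band"),
                            List.of("order rejected", "reject message sent"),
                            0.10));

            // The relative frequencies define the feature usage profile.
            double total = enterOrder.stream()
                                     .mapToDouble(Variant::relativeFrequency)
                                     .sum();
            System.out.println("Total relative frequency = " + total);
        }
    }
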
  14. Simulator
     • Discrete event simulation
       • Generates any distribution from a pseudo-random number source
     • Prolog implementation (50 KLOC)
       • Rule inversion
     • Load profile
       • Time domain variation
       • Orthogonal to the operational profile
     • Each event assigned a "port" and submit time

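The actual simulator was about 50 KLOC of Prolog; the Java sketch below only illustrates the discrete-event idea: draw exponential inter-arrival times from a pseudo-random source, let a load profile vary the event rate over the time domain independently of which operations are chosen, and assign each event a "port" and submit time. The rates, operations, and port count are made up.

    // Minimal sketch of discrete-event generation with a time-varying load
    // profile. All concrete values are hypothetical, not from the project.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class EventGenerator {
        record Event(String operation, int port, double submitTimeSeconds) { }

        public static void main(String[] args) {
            Random rng = new Random(7L);
            String[] operations = {"enterOrder", "cancelOrder", "queryPosition"};
            int ports = 4;              // number of test agent "ports"
            double clock = 0.0;         // simulated time, in seconds
            double runLength = 60.0;
            List<Event> events = new ArrayList<>();

            while (clock < runLength) {
                // Load profile: events per second varies over the time domain,
                // independently of which operations are drawn (orthogonal to
                // the operational profile).
                double ratePerSecond = (clock < 30.0) ? 5.0 : 20.0;

                // Exponential inter-arrival time from a uniform pseudo-random draw.
                clock += -Math.log(1.0 - rng.nextDouble()) / ratePerSecond;

                // Stand-in for a draw from the operational profile.
                String op = operations[rng.nextInt(operations.length)];
                events.add(new Event(op, rng.nextInt(ports), clock));
            }
            System.out.println(events.size() + " events generated");
        }
    }
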
  15. Test Environment
     • Simulator generates interface-independent content
     • Adapters for each SUT interface
       • Formats for the test agent API
       • Generates script code
     • Test agents execute independently
     • Distributed processing/serialization challenges
       • Loosely coupled, best-effort strategy
       • Embed server-side serialization monitor

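A minimal sketch of the adapter idea, with hypothetical names: the simulator emits interface-independent event content, and one adapter per SUT interface reformats it into whatever its test agent consumes, for example a script line. This is not the project's actual agent API.

    // Minimal sketch of per-interface adapters over interface-independent
    // simulator output. Types, names, and the output format are assumptions.
    import java.util.Map;

    public class AdapterSketch {
        /** Interface-independent event content as produced by the simulator. */
        record SimEvent(String operation, Map<String, String> fields, double submitTime) { }

        /** One adapter per SUT interface / test agent API. */
        interface AgentAdapter {
            String format(SimEvent event);   // e.g. a script line the agent can execute
        }

        /** Hypothetical adapter for an agent that executes pipe-delimited script lines. */
        static class ScriptAgentAdapter implements AgentAdapter {
            @Override
            public String format(SimEvent e) {
                return e.submitTime() + "|" + e.operation() + "|" + e.fields();
            }
        }

        public static void main(String[] args) {
            AgentAdapter adapter = new ScriptAgentAdapter();
            SimEvent event = new SimEvent("enterOrder",
                    Map.of("symbol", "XYZ", "qty", "100"), 12.5);
            System.out.println(adapter.format(event));
        }
    }
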
  16. Automated Run Evaluation
     • Oracle accepts output of the simulator
     • About 500 unique rules
     • Verification
       • Splainer: result/rule backtracking tool
       • Rule/run coverage analyzer
     • Comparator
       • Extracts transaction log
       • Post-run database state
       • End-to-end invariant
     • Stealth requirements engineering

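The oracle/comparator held roughly 500 rules; the sketch below shows only the shape of one end-to-end invariant check, again with hypothetical data: the net position per symbol replayed from the extracted transaction log must match the post-run database state. The real implementation was rule-based (Prolog), not Java.

    // Minimal sketch of one end-to-end invariant check between a transaction
    // log and post-run database state. Data and field names are hypothetical.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RunComparator {
        record Trade(String symbol, int signedQuantity) { }   // + buy, - sell

        /** Replays the transaction log into expected net positions per symbol. */
        static Map<String, Integer> expectedPositions(List<Trade> log) {
            Map<String, Integer> positions = new HashMap<>();
            for (Trade t : log) {
                positions.merge(t.symbol(), t.signedQuantity(), Integer::sum);
            }
            return positions;
        }

        public static void main(String[] args) {
            List<Trade> transactionLog = List.of(
                    new Trade("XYZ", 100), new Trade("XYZ", -40), new Trade("ABC", 25));
            Map<String, Integer> postRunDbState = Map.of("XYZ", 60, "ABC", 25);

            Map<String, Integer> expected = expectedPositions(transactionLog);
            boolean invariantHolds = expected.equals(postRunDbState);
            System.out.println("End-to-end invariant holds: " + invariantHolds);
        }
    }
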
  17. Overall Process
     • Six development increments
       • 3 to 5 months each
       • Test design/implementation in parallel with application development
     • Plan each day's test run
       • Load profile
       • Total volume
       • Configuration/operational scenarios

  18. Daily Test Process
     • Run simulator
       • 100,000 events per hour
       • FTP event files to test agents
     • Start SUT
     • Test agents automatically start at scheduled time
     • Extract results
     • Run oracle/comparator
     • Prepare bug reports

  19. Problems and Solutions
     • One-time sample not effective, but fresh test suites too expensive → Simulator generates a fresh, accurate sample on demand
     • Too expensive to develop expected results → Oracle generates expected results on demand
     • Too many test cases to evaluate → Comparator automates checking
     • Profile/requirements change → Incremental changes to the rule base
     • SUT interfaces change → Common agent interface

  20. Technical Achievements
     • AI-based user simulation generates test suites
     • All inputs generated under an operational profile
     • High-volume oracle and evaluation
     • Every test run unique and realistic (about 200 runs)
     • Evaluated functionality and load response with fresh tests
     • Effective control of many different test agents (COTS/custom; Java/4Test/Perl/SQL/proprietary)

  21. Problems
     • Stamp coupling
       • Simulator, Agents, Oracle, Comparator
     • Re-factoring rule relationships; Prolog limitations
     • Configuration hassles
     • Scale-up constraints
     • Distributed schedule brittleness
     • Horn Clause Shock Syndrome

  22. Results
     • Revealed about 1,500 bugs over two years
       • 5% showstoppers
     • Five-person team, huge productivity increase
     • Achieved proven high reliability
       • Last pre-release test run: 500,000 events in two hours, no failures detected
       • Bonus: 1M-event run closed a big affiliate deal
       • No production failures

