tarot.brunel.ac.uk

  • Speaker note: Illustration of the increase in the size of software, which leads, in principle, to an exponential increase in the necessary testing effort. There is a real danger that testing methods (and, more generally, V&V methods) cannot keep pace with software development, since modern development tools make it ever easier to build larger systems with relatively less effort. Automation of testing is necessary; MBT is one such technique.

    1. Model-Based Testing
       Jan Tretmans, jan.tretmans@esi.nl
       Embedded Systems Institute, Eindhoven, NL
       Radboud University Nijmegen, NL
    2. Overview
       • Software testing
       • Model-based testing
       • Model-based testing with labelled transition systems
       • Testing of component-based systems
       • Model-based testing of components
    3. Software Testing: What? How? Who? Sorts?
    4. Paradox of Software Testing
       Testing is:
       • important
       • much practiced
       • 30-50% of project effort
       • expensive
       • time critical
       • not constructive (but sadistic?)
       But also:
       • ad hoc, manual, error-prone
       • hardly any theory / research
       • no attention in curricula
       • not cool: "if you're a bad programmer you might be a tester"
       Attitude is changing:
       • more awareness
       • more professional
    5. The Triangle Program [Myers]
       "A program reads three integer values. The three values are interpreted as representing the lengths of the sides of a triangle. The program prints a message that states whether the triangle is scalene, isosceles, or equilateral."
       Write a set of test cases to test this program.
    6. The Triangle Program [Myers]: test cases for:
       • valid scalene triangle?
       • valid equilateral triangle?
       • valid isosceles triangle?
       • 3 permutations of the previous?
       • side = 0?
       • negative side?
       • one side is the sum of the others?
       • 3 permutations of the previous?
       • one side larger than the sum of the others?
       • 3 permutations of the previous?
       • all sides = 0?
       • non-integer input?
       • wrong number of values?
       • for each test case: is the expected output specified?
       • is behaviour checked after the output was produced?
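As a concrete reference point, here is a minimal sketch of the triangle program (a hypothetical implementation, not from the slides) against which the test cases above can be run:

```python
def triangle(a, b, c):
    """Classify three integer side lengths; reject inputs that form no triangle."""
    if not all(isinstance(v, int) for v in (a, b, c)):
        return "invalid"                      # non-integer input
    x, y, z = sorted((a, b, c))
    if x <= 0 or x + y <= z:                  # covers side = 0, negative side,
        return "invalid"                      # and one side >= sum of the others
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"
```

Note how several test cases from the slide map onto the same branch: a zero side, a negative side, and "one side is the sum of the others" all hit the `invalid` branch, which is exactly why an output-based test set must probe each of them separately.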
    7. The Triangle Program [Myers]: test cases for:
       • is it fast enough?
       • doesn't it use too much memory?
       • is it learnable?
       • is it usable for the intended users?
       • is it secure?
       • does it run on different platforms?
       • is it portable?
       • is it easily modifiable?
       • is the availability sufficient?
       • is it reliable?
       • does it comply with relevant laws?
       • doesn't it harm other applications?
       • . . . . .
    8. Testing: A Definition
       Software testing is:
       • a technical process,
       • performed by executing / experimenting with a product,
       • in a controlled environment, following a specified procedure,
       • with the intent of measuring one or more characteristics / the quality of the software product,
       • by demonstrating the deviation of the actual status of the product from the required status / specification.
    9. Testing and Software Quality
       • Quality: the totality of characteristics of an entity (IS 9126) that bear on its ability to satisfy stated and implied needs
       • Testing: measuring the quality of a product; obtaining confidence in the quality of a product
       • How to specify quality? explicit / implicit / legal requirements
       • How to classify quality?
       • How to quantify quality?
       • How to measure quality?
       • How to obtain good tests?
    10. Quality: There is More than Testing
        [diagram: QUALITY surrounded by static analysis, reviewing, inspection, walk-through, debugging, certification, CMM, quality control, requirement management, software process, verification, and testing]
    11. Sorts of Testing
        Many different types and sorts of testing:
        • functional testing, acceptance testing, duration testing,
        • performance testing, interoperability testing, unit testing,
        • black-box testing, white-box testing,
        • regression testing, reliability testing, usability testing,
        • portability testing, security testing, compliance testing,
        • recovery testing, integration testing, factory test,
        • robustness testing, stress testing, conformance testing,
        • developer testing, acceptance testing, production testing,
        • module testing, system testing, alpha test, beta test,
        • third-party testing, specification-based testing, ………
    12. Sorts of Testing
        [diagram: a cube with three dimensions: level of detail (unit, module, integration, system), accessibility (white box, black box), and characteristics (functionality, reliability, usability, efficiency, maintainability, portability)]
        Other dimensions:
        • phases in development
        • who does it
        • goal of testing
        • . . . . .
    13. Model-Based Testing
    14. Towards Model-Based Testing: Trends
        • Increase in complexity, and quest for higher-quality software
        • More abstraction
          - less detail
          - model-based development; OMG's UML, MDA
        • Checking quality
          - practice: testing: ad hoc, too late, expensive, takes a lot of time
          - research: formal verification: proofs, model checking, . . . . with disappointing practical impact
        Software bugs / errors cost the US economy yearly: $59.5 billion (~ €50 billion) (www.nist.gov); $22 billion could be eliminated.
    15. Towards Model-Based Testing: Trends
        Testing effort grows exponentially with system size: testing cannot keep pace with the growth in complexity and size of systems.
        • x : [0..9]: 10 ways that it can go wrong, 10 combinations of inputs to check
        • x, y : [0..9]: 100 ways that it can go wrong, 100 combinations of inputs to check
        • x, y, z : [0..9]: 1000 ways that it can go wrong, 1000 combinations of inputs to check
    16. Model-Based Testing
        • Model-based testing has the potential to combine
          - practice: testing
          - theory: formal methods
        • Model-based testing:
          - testing with respect to a (formal) model / specification: state model, pre/post, CSP, Promela, UML, Spec#, . . . .
          - promises better, faster, cheaper testing:
            · algorithmic generation of tests and test oracles: tools
            · formal and unambiguous basis for testing
            · measuring the completeness of tests
            · maintenance of tests through model modification
    17. Types of Testing
        [diagram: the testing cube of slide 12 again: level of detail (unit, module, integration, system), accessibility (white box, black box), and characteristics (functionality, reliability, usability, efficiency, maintainability, portability)]
    18. Model-Based Testing: Formal Models
        • Use mathematics to model relevant parts of software
        • Precise, formal semantics: no room for ambiguity or misinterpretation
        • Allow formal validation of, and reasoning about, systems
        • Amenable to tools: automation
        • Examples: Z, Temporal Logic, First-order Logic, SDL, LOTOS, Promela, Labelled Transition Systems, Finite State Machines, Process Algebra, UML, ......
    19. Automated Model-Based Testing
        [diagram: a test generation tool derives test cases (e.g. in TTCN) from the model; a test execution tool runs them against the IUT, yielding pass or fail]
        • sound: IUT conforms-to model ⇒ IUT passes the tests
        • exhaustive: IUT passes the tests ⇒ IUT conforms-to model
    20. A Model-Based Development Process
        [diagram: from informal ideas (informal world) via validation to a formalizable specification, design, and model-based code / realization (world of models), with formal verification between models and testing towards the real world]
    21. Model-Based Testing: Formal Specification-Based Functional Testing
        • Testing the functional behaviour of a black-box implementation under test (IUT) with respect to a model s in a well-defined language, based on a formal definition of correctness
        • The specification / model is the basis for testing and is assumed to be correct
    22. Approaches to Model-Based Testing
        Several modelling paradigms:
        • Finite State Machines
        • Pre/post-conditions
        • Labelled Transition Systems (the focus here)
        • Programs as Functions
        • Abstract Data Type testing
        • . . . . . . .
    23. Model-Based Testing with Labelled Transition Systems
    24. Model-Based Testing with Transition Systems
        The general scheme instantiated with transition systems:
        • specification s ∈ LTS, implementation i ∈ IOTS
        • correctness: i ioco s
        • test generation: gen : LTS → P(TTS)
        • test execution: test runs t || i, leading to pass or fail
        • sound and exhaustive: i passes all tests in gen(s) ⇔ i ioco s
    25. Model-Based Testing with LTS
        Involves:
        • specification model
        • implementation IUT + models of IUTs
        • correctness: imp
        • test cases
        • test generation
        • test execution
        • test result analysis
    26. Model-Based Testing with LTS (overview slide repeated)
    27. Labelled Transition Systems
        A labelled transition system is a 4-tuple ⟨S, L, T, s0⟩ with
        • S: states
        • L: actions / labels (e.g. ConReq, ConConf, Data, Discon)
        • T ⊆ S × (L ∪ {τ}) × S: transitions
        • s0 ∈ S: initial state
    28. Labelled Transition Systems
        [diagram: example transition systems over the actions a, b, and c]
    29. Labelled Transition Systems
        Example over L = {dub, kwart, coffee, tea, soup}:
        • a single transition: S0 -dub-> S1
        • transition composition: S0 =dub coffee=> S3
        • "kwart tea" is an executable sequence from S0; "kwart soup" is a non-executable sequence
        • LTS(L): all transition systems over L
    30. Labelled Transition Systems
        • Sequences of observable actions: traces(s) = { σ ∈ L* | s =σ⇒ }
        • Reachable states: s after σ = { s' | s =σ⇒ s' }
        For the example (S0 -dub-> S1 -coffee-> S3 and S0 -dub-> S2 -tea-> S4):
        • traces(s) = { ε, dub, dub coffee, dub tea }
        • s after dub = { S1, S2 }
        • s after dub tea = { S4 }
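Both definitions are directly executable on the finite example above; a small sketch (the set-of-triples representation and the function names are mine, not the slides'):

```python
# The example LTS: S0 -dub-> S1, S0 -dub-> S2, S1 -coffee-> S3, S2 -tea-> S4.
T = {("S0", "dub", "S1"), ("S0", "dub", "S2"),
     ("S1", "coffee", "S3"), ("S2", "tea", "S4")}

def after(states, sigma):
    """s after sigma: the states reachable by executing the action sequence sigma."""
    for a in sigma:
        states = {s2 for (s1, l, s2) in T if s1 in states and l == a}
    return states

def traces(s0):
    """traces(s): all executable action sequences (finite, acyclic LTS assumed)."""
    result, frontier = {()}, [((), s0)]
    while frontier:
        sigma, s = frontier.pop()
        for (s1, l, s2) in T:
            if s1 == s:
                result.add(sigma + (l,))
                frontier.append((sigma + (l,), s2))
    return result
```

For instance, `after({"S0"}, ("dub",))` yields `{"S1", "S2"}`, matching the slide: the machine is nondeterministic, so "s after dub" is a set of states, not a single state.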
    31. Representation of LTS
        • Explicit: ⟨ {S0,S1,S2,S3}, {dub,coffee,tea}, { (S0,dub,S1), (S1,coffee,S2), (S1,tea,S3) }, S0 ⟩
        • Transition tree / graph
        • Language / behaviour expression: S := dub ; ( coffee ; stop [] tea ; stop )
    32. Labelled Transition Systems
        [diagram: the transition systems denoted by the behaviour expressions stop, a ; b ; stop, a ; stop [] b ; stop, and P where P := a ; P]
    33. Labelled Transition Systems
        [diagram: the transition systems denoted by a ; stop ||| b ; stop, a ; b ; stop [] a ; c ; stop, a ; stop [] τ ; c ; stop, and Q where Q := a ; ( b ; stop ||| Q )]
    34. Example: Parallel Composition
        [diagram: a machine M (dub then coffee, or kwart then tea) composed with a user U (dub then coffee); in M || U only the dub-coffee behaviour remains: M || U -dub-> M' || U' -coffee-> ...]
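The composition on this slide can be sketched as follows, assuming LOTOS-style full synchronization (an action happens only if both sides can do it; state names are illustrative):

```python
# Machine M: accepts dub then gives coffee, or accepts kwart then gives tea.
M = {"m0": [("dub", "m1"), ("kwart", "m2")],
     "m1": [("coffee", "m0")],
     "m2": [("tea", "m0")]}

# User U: inserts dub, takes coffee, and stops.
U = {"u0": [("dub", "u1")],
     "u1": [("coffee", "u2")],
     "u2": []}

def compose_step(m, u):
    """Transitions of M || U from the pair state (m, u): only joint actions."""
    return [(l, (m2, u2))
            for (l, m2) in M[m]
            for (l2, u2) in U[u] if l == l2]
```

From `("m0", "u0")` only `dub` is possible: `kwart` is blocked because U never offers it, which is exactly why the tea branch disappears in M || U.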
    35. Model-Based Testing with LTS (overview slide repeated)
    36. Equivalences on Labelled Transition Systems: Observable Behaviour
        "Some transition systems are more equal than others."
        [diagram: systems with the same actions but different branching structure and τ-steps; which of them should count as equal?]
    37. Comparing Transition Systems
        Suppose an environment interacts with the systems:
        • the environment tests the system as a black box by observing and actively controlling it;
        • the environment acts as a tester.
        Two systems are equivalent if they pass the same tests.
    38. Comparing Transition Systems
        S1 ≈ S2 ⇔ ∀ e ∈ E : obs(e, S1) = obs(e, S2)
    39. Trace Equivalence
        s1 ≈tr s2 ⇔ traces(s1) = traces(s2), where traces(s) = { σ ∈ L* | s =σ⇒ }
    40. (Completed) Trace Equivalence
        [diagram: examples showing that completed-trace equivalence ≈ctr distinguishes systems that trace equivalence ≈tr identifies]
    41. Equivalences on Transition Systems
        From weak to strong, each induced by a notion of observation:
        • trace equivalence: observing sequences of actions
        • completed trace equivalence: observing sequences of actions and their end
        • failures (= testing) equivalence: testing an LTS with another LTS
        • failure trace (= refusal) equivalence: testing an LTS with another LTS, and trying again (continuing) after failure
        • (weak) bisimulation: testing an LTS with another LTS, with undo, copy, and repeat as often as you like
        • strong bisimulation: now you also need to observe τ's
        • isomorphism
    42. Equivalences: Examples
        [diagram: four example systems p, q, r, s over the actions a, b, c, d, related by some of the equivalences above but not others]
    43. Preorders on Transition Systems
        Suppose an environment interacts with the black-box implementation i and with the specification s:
        • i correctly implements s if every observation of i can be related to an observation of s
    44. Preorders on Transition Systems
        i ⊑ s ⇔ ∀ e ∈ E : obs(e, i) ⊆ obs(e, s), with i ∈ LTS, s ∈ LTS
    45. Trace Preorder
        i ≤tr s ⇔ traces(i) ⊆ traces(s)
        [diagram: candy-machine examples ordered (and not ordered) by ≤tr]
    46. Model-Based Testing with LTS (overview slide repeated)
    47. Input-Output Transition Systems
        Actions are split into inputs and outputs, e.g. L_I = { ?dub, ?kwart } and L_U = { !coffee, !tea }:
        • inputs (dub, kwart): from user to machine; the initiative lies with the user; the machine cannot refuse them
        • outputs (coffee, tea): from machine to user; the initiative lies with the machine; the user cannot refuse them
        • L_I ∩ L_U = ∅, L_I ∪ L_U = L
    48. Input-Output Transition Systems
        IOTS(L_I, L_U) ⊆ LTS(L_I ∪ L_U): an IOTS is an LTS with inputs and outputs in which inputs are always enabled:
        for all states s, for all inputs ?a ∈ L_I : s =?a⇒
        [diagram: the candy machine with ?dub / ?kwart self-loops added to every state]
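The input-enabledness condition is easy to check mechanically; a sketch (the dict representation and self-loop placement are mine):

```python
L_I = {"?dub", "?kwart"}

# Candy machine with input self-loops added so that no state refuses an input.
T = {"s0": [("?dub", "s1"), ("?kwart", "s2")],
     "s1": [("!coffee", "s0"), ("?dub", "s1"), ("?kwart", "s1")],
     "s2": [("!tea", "s0"), ("?dub", "s2"), ("?kwart", "s2")]}

def is_input_enabled(trans, inputs):
    """True iff every state enables every input in L_I."""
    return all(inputs <= {l for (l, _) in succ} for succ in trans.values())
```

Dropping, say, the `("?kwart", "s1")` self-loop makes the check fail: the system is then still an LTS over L_I ∪ L_U, but not an IOTS.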
    49. Example Models
        [diagram: several input-enabled candy machines combining ?dub and ?kwart with the outputs !coffee, !tea, and !choc]
    50. Preorders on Input-Output Transition Systems
        • implementation i ∈ IOTS(L_I, L_U), specification s ∈ LTS(L_I, L_U)
        • imp ⊆ IOTS(L_I, L_U) × LTS(L_I, L_U)
        • observing IOTSs, where system inputs interact with environment outputs, and vice versa
    51. Correctness: Implementation Relation ioco
        i ioco s ⇔def ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s after σ)
        where
        • p is quiescent, written δ(p), iff p can perform no output or internal action: ∀ !x ∈ L_U ∪ {τ} : p cannot do !x
        • out(P) = { !x ∈ L_U | p -!x->, p ∈ P } ∪ { δ | δ(p), p ∈ P }
        • Straces(s) = { σ ∈ (L ∪ {δ})* | s =σ⇒ }
        • p after σ = { p' | p =σ⇒ p' }
    52. Correctness: Implementation Relation ioco
        i ioco s ⇔def ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s after σ)
        Intuition: i ioco-conforms to s iff
        • if i produces output x after trace σ, then s can produce x after σ
        • if i cannot produce any output after trace σ, then s cannot produce any output after σ (quiescence δ)
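For finite, τ-free, acyclic systems this intuition can be turned into an executable check; a sketch of my own simplification (δ is written "delta", and only suspension traces reachable in s are explored), not the slides' tooling:

```python
def out(trans, states):
    """out(P): the enabled outputs, plus delta for each quiescent state."""
    res = set()
    for s in states:
        outs = {l for (l, _) in trans[s] if l.startswith("!")}
        res |= outs if outs else {"delta"}
    return res

def step(trans, states, label):
    return {s2 for s in states for (l, s2) in trans[s] if l == label}

def ioco(i, s, i0, s0):
    """Check out(i after sigma) ⊆ out(s after sigma) along s's traces."""
    frontier = [({i0}, {s0})]
    while frontier:
        I, S = frontier.pop()
        if not out(i, I) <= out(s, S):
            return False
        for l in {l for st in S for (l, _) in s[st]}:
            I2, S2 = step(i, I, l), step(s, S, l)
            if I2:                      # empty I2: the inclusion holds vacuously
                frontier.append((I2, S2))
    return True
```

With a specification "?dub then !coffee", an implementation that can also answer !tea is rejected (forbidden output), and one that stays silent after ?dub is rejected too, because its quiescence δ is not in out(s after ?dub).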
    53. Implementation Relation ioco
        [diagram: candy-machine implementations compared by ioco against a specification s; some conform, some do not]
    54. Implementation Relation ioco
        i ioco s ⇔def ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s after σ)
        Example: an equation solver for y² = x
        • specification s: after ?x with x ≥ 0, output !√x; after ?x with x < 0, a new input ?y may follow
        • implementation i: may also answer !-√x
        [diagram: i ioco s holds, but s ioco i does not]
    55. Model-Based Testing with LTS (overview slide repeated)
    56. Test Cases
        A model of a test case is a transition system with:
        • labels in L ∪ {θ}, where θ is the 'quiescence' observation label
        • a tree structure; 'finite' and deterministic
        • sink states pass and fail
        • from each state: either one input !a together with all outputs ?x, or all outputs ?x together with θ
        (Note the mirrored view: the test supplies !dub or !kwart and observes ?coffee or ?tea.)
    57. Model-Based Testing with LTS (overview slide repeated)
    58. Test Generation
        i ioco s ⇔def ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s after σ)
        [diagram: after a trace σ, out(s after σ) = { !x, !y, δ } while out(i after σ) = { !x, !z, δ }; the test observes all outputs, out(test after σ) = L_U ∪ {θ}; allowed outputs lead on, forbidden outputs (here !z) lead to fail]
    59. Test Generation Algorithm
        To generate a test case t(S) from a transition system specification s, with S a set of states (initially S = s0 after ε), apply the following steps recursively, non-deterministically:
        1. end the test case: the verdict pass
        2. supply an input !a (for ?a enabled in S): continue with t(S after ?a); outputs arriving meanwhile are handled as in step 3
        3. observe all outputs: an allowed output !x ∈ out(S) (or allowed δ) continues with t(S after !x); a forbidden output !y ∉ out(S) (or forbidden δ) leads to fail
    60. Test Generation Example
        [diagram: a test case generated from the candy-machine specification s: supply !dub, then observe all outputs and θ; allowed observations continue or end in pass, forbidden ones (e.g. ?choc) end in fail]
    61. Test Generation Example
        Equation solver for y² = x. To cope with non-deterministic behaviour, tests are not linear traces but trees:
        • supply !4: observing ?2 or ?-2 gives pass; otherwise fail
        • supply !9: observing ?3 or ?-3 gives pass; otherwise fail
    62. Model-Based Testing with LTS (overview slide repeated)
    63. Test Execution
        Test execution = all possible parallel executions (test runs) of a test t with an implementation i, going to state pass or fail.
        Test run: t || i =σ⇒ pass || i'  or  t || i =σ⇒ fail || i'
        [inference rules, reconstructed: if t -a-> t' and i -a-> i', then t || i -a-> t' || i'; if i -τ-> i', then t || i -τ-> t || i'; if t -θ-> t' and i is quiescent (δ), then t || i -θ-> t' || i]
    64. Test Execution Example
        Two test runs of the test t with the implementation i (which, after ?dub, gives !tea or !choc):
        • t || i =dub tea⇒ pass || i': this run passes
        • t || i =dub choc⇒ fail || i'': this run fails
        Hence i fails t.
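The two runs can be enumerated mechanically. A sketch that collects the verdicts of all maximal runs of t || i for this example (the tree and LTS encodings are mine; θ-observations are omitted for brevity):

```python
# Implementation i: after ?dub it nondeterministically gives !tea or !choc.
impl = {"i0": [("?dub", "i1")],
        "i1": [("!tea", "i2"), ("!choc", "i3")],
        "i2": [], "i3": []}

# Test t: supply dub, then observe; tea yields pass, choc and coffee yield fail.
test = ("?dub", {"!tea": "pass", "!choc": "fail", "!coffee": "fail"})

def verdicts(node, states):
    """Verdicts of all possible test runs of node || impl from 'states'."""
    if isinstance(node, str):           # a sink state: pass or fail
        return {node}
    stim, obs = node
    states = {s2 for s in states for (l, s2) in impl[s] if l == stim}
    res = set()
    for s in states:
        for (l, s2) in impl[s]:
            if l.startswith("!"):       # each possible output is one run
                res |= verdicts(obs[l], {s2})
    return res
```

Here `verdicts(test, {"i0"})` contains both verdicts, and since a failing run exists, i fails t: an implementation passes a test only if every possible run passes.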
    65. Validity of Test Generation
        For every test t generated with the algorithm we have:
        • Soundness: t will never fail with a correct implementation:
          i ioco s implies i passes t
        • Exhaustiveness: each incorrect implementation can be detected with some generated test:
          if i does not ioco-conform to s, then ∃ t : i fails t
    66. Model-Based Testing with LTS (overview slide repeated, now with correctness: ioco)
    67. Model-Based Testing with LTS (overview slide repeated)
    68. Some Model-Based Testing Approaches and Tools
        AETG, Agatha, Agedis, Autolink, Conformiq, Cooper, Cover, G∀st, Gotcha, Leirios, Phact/The Kit, QuickCheck, Reactis, RT-Tester, SaMsTaG, SpecExplorer, Statemate, STG, TestComposer, TestGen (Stirling), TestGen (INT), TGV, TorX, TorXakis, Uppaal-Tron, Tveda, . . . . . .
    69. A Tool for Transition Systems Testing: TorX
        • On-the-fly test generation and test execution
        • Implementation relation: ioco
        • Mainly applicable to reactive / state-based systems
        • Specification languages: LOTOS, Promela, FSP, Automata
        [diagram: TorX offers inputs to and observes outputs from the IUT, checks the outputs against the specification, and reports pass, fail, or inconclusive; the user guides it manually or it runs automatically]
    70. TorX Case Studies
        • Conference Protocol (academic)
        • EasyLink TV-VCR protocol (Philips)
        • Cell Broadcast Centre component (LogicaCMG)
        • "Rekeningrijden" Payment Box protocol (Interpay)
        • V5.1 Access Network protocol (Lucent)
        • Easy Mail Melder (LogicaCMG)
        • FTP Client (academic)
        • "Oosterschelde" storm surge barrier control (LogicaCMG)
        • DO/DG dose control (ASML/Tangram)
        • Laser interface (ASML/Tangram)
        • The new Dutch electronic passport (Min. of Int. Aff.)
    71. TorX Case Study Lessons
        • model construction
          - difficult: missing or bad specs
          - leads to detection of design errors
          - not supported by an integrated environment yet
          - research on test-based modelling
        • adapter / test environment
          - development is cumbersome
          - specific for each system
        • longer and more flexible tests
          - full automation: test generation + execution + analysis
        • no notion of test selection or specification coverage yet
          - only random coverage or user-guided test purposes
    72. Model-Based Testing, Verification, and the Test Assumption
    73. Formal Testing with Transition Systems
        • specification s ∈ LTS; generated tests T_s ⊆ TTS, with gen : LTS → P(TTS)
        • concrete IUT ∈ IMPS, modelled by some i_IUT ∈ IOTS
        • passes : IOTS × TTS → {pass, fail}; exec : TESTS × IMPS → P(OBS)
        • Test assumption: ∀ IUT ∈ IMPS . ∃ i_IUT ∈ IOTS . ∀ t ∈ TTS : IUT passes t ⇔ i_IUT passes t
        • Proof of soundness and exhaustiveness: ∀ i ∈ IOTS . ( ∀ t ∈ gen(s) . i passes t ) ⇔ i ioco s
    74. Comparing Transition Systems: An Implementation and a Model
        IUT ≈ i_IUT ⇔ ∀ e ∈ E : obs(e, IUT) = obs(e, i_IUT)
    75. Formal Testing: Test Assumption
        Test assumption: ∀ IUT . ∃ i_IUT ∈ MOD . ∀ t ∈ TEST : IUT passes t ⇔ i_IUT passes t
    76. Completeness of Formal Testing
        • IUT passes T_s ⇔def ∀ t ∈ T_s . IUT passes t
        • Definition: IUT confto s ⇔ i_IUT imp s
        • Proof obligation: ∀ i ∈ MOD : ( ∀ t ∈ T_s . i passes t ) ⇔ i imp s
        Then:
        IUT confto s ⇔ i_IUT imp s (definition)
                     ⇔ ∀ t ∈ T_s . i_IUT passes t (proof obligation)
                     ⇔ ∀ t ∈ T_s . IUT passes t (test assumption)
                     ⇔ IUT passes T_s
        Hence: IUT passes T_s ⇔ IUT confto s
    77. Variations of ioco
    78. Variations on a Theme
        • i ioco s ⇔ ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s after σ)
        • i ≤ior s ⇔ ∀ σ ∈ (L ∪ {δ})* : out(i after σ) ⊆ out(s after σ)
        • i ioconf s ⇔ ∀ σ ∈ traces(s) : out(i after σ) ⊆ out(s after σ)
        • i ioco_F s ⇔ ∀ σ ∈ F : out(i after σ) ⊆ out(s after σ)
        • i uioco s ⇔ ∀ σ ∈ Utraces(s) : out(i after σ) ⊆ out(s after σ)
        • i mioco s: multi-channel ioco
        • i wioco s: non-input-enabled ioco
        • i eco e: environmental conformance
        • i sioco s: symbolic ioco
        • i (r)tioco s: (real) timed ioco (Aalborg, Twente, Grenoble, Bordeaux, . . . .)
        • i ioco_r s: refinement ioco
        • i hioco s: hybrid ioco
        • i qioco s: quantified ioco
        • . . . . . .
    79. Underspecification: uioco
        i ioco s ⇔ ∀ σ ∈ Straces(s) : out(i after σ) ⊆ out(s0 after σ)
        [diagram: a specification with states s0, s1, s2 and transitions labelled ?a, ?b, !x, !y, !z]
        • out(s0 after ?b) = ∅, but ?b ∉ Straces(s): underspecification, so anything is allowed after ?b
        • out(s0 after ?a ?a) = { !x }, and ?a ?a ∈ Straces(s); but from s2, ?a ?a is underspecified: should anything be allowed after ?a ?a?
    80. Underspecification: uioco
        i uioco s ⇔ ∀ σ ∈ Utraces(s) : out(i after σ) ⊆ out(s0 after σ)
        where Utraces(s) = { σ ∈ Straces(s) | for every decomposition σ = σ1 ?a σ2 and every s' with s =σ1⇒ s', also s' =?a⇒ }
        Now s is underspecified in s2 for ?a: anything is allowed there. Moreover, i ioco s implies i uioco s.
    81. Underspecification: uioco
        Alternatively, uioco can be characterised by completing the specification with a chaos process for the underspecified inputs: every missing input transition leads to a chaos state that allows any behaviour over L_I ∪ L_U.
    82. 82. Testing of Component-Based Systems
    83. 83. Component-Based Development <ul><li>Software ....... </li></ul><ul><li>is very complex </li></ul><ul><li>has many quality issues </li></ul><ul><li>is very expensive to develop </li></ul><ul><li>but we increasingly depend it ! </li></ul><ul><li>Contribution to a solution: Component-Based Development </li></ul><ul><li>„ Lego“ approach: building components and combining components into systems </li></ul><ul><li>divide and conquer </li></ul><ul><li>common in many other engineering disciplines: construction, electronics, cars </li></ul>
    84. 84. Component-Based Development component component component component component component system
    85. 85. <ul><li>Potential benefits </li></ul><ul><li>independence of component en system development </li></ul><ul><ul><li>master complexity </li></ul></ul><ul><ul><li>third party development, outsourcing </li></ul></ul><ul><ul><li>specialization </li></ul></ul><ul><li>reuse </li></ul><ul><ul><li>standard components </li></ul></ul><ul><ul><li>reduce system development cost and time </li></ul></ul><ul><ul><li>improve quality </li></ul></ul><ul><li>substitution </li></ul><ul><ul><li>choice and replacements from other suppliers </li></ul></ul><ul><ul><li>update components </li></ul></ul>Component-Based Development
    86. 86. <ul><li>Prerequisites </li></ul><ul><li>“ good“ components </li></ul><ul><ul><li>precise specification </li></ul></ul><ul><ul><li>correct implementation </li></ul></ul><ul><li>infrastructure and glue to connect components </li></ul><ul><li>standardization of interfaces </li></ul><ul><ul><li>at component supplier side, and </li></ul></ul><ul><ul><li>at componenent user side </li></ul></ul><ul><li>“ Good“ components“ </li></ul><ul><li>independent from system </li></ul><ul><li>reusable : components that fit in many systems </li></ul><ul><li>substitutable : system must work with substituted components </li></ul>Component-Based Development
    87. 87. <ul><li>system : (autonomous) entity interacting with other entities (= environment) </li></ul><ul><li>system boundary : common frontier between system and environment </li></ul><ul><li>function of system: what system is intended to do </li></ul><ul><li>behaviour : what the system does to implement its function = sequence of states and actions </li></ul>Components and Systems environment system
    88. 88. <ul><li>structure : internal composition that enables system to generate behaviour </li></ul><ul><ul><li>set of components </li></ul></ul><ul><ul><li>with possible interactions </li></ul></ul><ul><li>component : yet another system </li></ul>Components and Systems environment component A component B system
    89. 89. <ul><li>user : part of the environment who or that is interested in the function of the system = role of a system </li></ul><ul><li>service : behaviour of a system as perceived and received by its user </li></ul><ul><li>provider : role of system in delivering service </li></ul><ul><li>system can simultaneously be provider and user </li></ul>Components and Systems environment component A component B user system
    90. 90. <ul><li>service interface : part of the boundary where service is delivered </li></ul><ul><li>use interface : part of the boundary of the user at which the user receives service </li></ul><ul><li>service specification : specification of all possible behaviours that shall be perceivable at the service interface </li></ul>Components and Systems environment component A component B user service interface service interface use interface use interface system
    91. 91. Testing Component-Based Systems <ul><li>Traditional approaches </li></ul><ul><li>bottom-up (with drivers) </li></ul><ul><li>top-down (with stubs) </li></ul><ul><li>big-bang </li></ul><ul><li>mixed </li></ul>(diagram: system of components A and B; test setups with stubs and drivers for the individual components) <ul><li>Testing of </li></ul><ul><li>components </li></ul><ul><li>interactions (integration) </li></ul><ul><li>whole system </li></ul><ul><li>where later detection of errors is more costly </li></ul>
    92. 92. <ul><li>Testing challenges </li></ul><ul><li>reusability : testing in many/all different environments (cf. Ariane) </li></ul><ul><li>substitutability : system must accept substitute components </li></ul><ul><li>independence : </li></ul><ul><ul><li>component tester does not know use environment </li></ul></ul><ul><ul><li>integration tester has no code (black-box), no expertise about component, cannot repair and maintain component </li></ul></ul><ul><li>Functionality is compositional, to a certain extent </li></ul><ul><li>But emerging and non-functional properties are not : </li></ul><ul><ul><li>performance, memory usage, reliability, security </li></ul></ul><ul><li>Components developed once , but tested often ? </li></ul>Testing Component-Based Systems
    93. 93. Testing Component-Based Systems <ul><li>Testing of components A and B </li></ul><ul><li>testing B: </li></ul><ul><ul><li>does B provide the correct service </li></ul></ul><ul><li>testing A : </li></ul><ul><ul><li>does A provide the correct service </li></ul></ul><ul><ul><li>does A use the service of B correctly </li></ul></ul><ul><li>Disadvantages of traditional approaches </li></ul><ul><li>bottom-up: A is not tested with different B </li></ul><ul><li>top-down: B is not tested with different A </li></ul>(diagram: components A and B in a system, each with its provided service, A using the service of B)
    94. 94. Model-Based Testing of Components
    95. 95. Model-Based Testing of Components <ul><li>Testing component A </li></ul><ul><li>ideal: </li></ul><ul><ul><li>access to all interfaces </li></ul></ul><ul><ul><li>formal specification model of complete behaviour at all interfaces </li></ul></ul><ul><ul><li>coordinated testing at all interfaces based on formal model, e.g. with ioco </li></ul></ul><ul><li>often in practice: </li></ul><ul><ul><li>only specification of provided service available </li></ul></ul><ul><ul><li>unspecified how component should use other components </li></ul></ul>(diagram: tester with driver and stub around the IUT, component A)
    96. 96. Model-Based Testing of Components <ul><li>Testing component A </li></ul><ul><li>specification of A only prescribes provided service of A, typically </li></ul><ul><ul><li>what A provides to its user </li></ul></ul><ul><ul><li>what A expects from its user </li></ul></ul><ul><li>ideal for bottom-up testing at service interface of A </li></ul><ul><li>but not for testing of A in isolation </li></ul>(diagram: components A and B in a system; A is tested against a model of A)
    97. 97. Model-Based Testing of Components <ul><li>Testing component A </li></ul><ul><li>suppose also such a specification of B is available that prescribes </li></ul><ul><ul><li>what B provides to its user = A </li></ul></ul><ul><ul><li>what B expects from its user = A </li></ul></ul><ul><li>use specification of B to test whether A behaves following expectations of B, using what B provides </li></ul><ul><li>model of B as “intelligent stub“ </li></ul><ul><li>requires something other than ioco ! </li></ul>(diagram: component A tested against a model of A at its provided interface and against a model of B at its used interface)
    98. 98. Model-Based Testing of Components <ul><li>Formalization of components </li></ul><ul><li>(in a client-server like setting) </li></ul><ul><li>labelled transition systems </li></ul><ul><ul><li>what are labels ? </li></ul></ul><ul><ul><li>input enabled ? </li></ul></ul><ul><li>labels </li></ul><ul><ul><li>all interactions between components must be modelled atomically as labels </li></ul></ul><ul><ul><ul><li>method calls </li></ul></ul></ul><ul><ul><ul><li>method returns </li></ul></ul></ul><ul><ul><ul><li>methods called </li></ul></ul></ul><ul><ul><ul><li>methods returned </li></ul></ul></ul><ul><ul><ul><li>(assuming no side-effects) </li></ul></ul></ul>(diagram: method invocations between the IUT component and its peer; each method call, method return, method called, and method returned is modelled as a separate atomic label)
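To make the label idea concrete, here is a minimal, hypothetical sketch in Python: a toy component A whose every method call and return, at both its provided and its used interface, is one atomic transition label. All state, method, and function names below are invented for illustration; they do not come from the slides or from any tool.

```python
# Hypothetical sketch: each method call and each method return is one
# atomic transition label. '?' marks an action the component receives,
# '!' one it performs; all names are invented for this illustration.

# Component A at its provided interface (a client calls `lookup`) and
# its used interface (A calls `query` on a component B):
component_A = {
    "idle":    {"?call_lookup":   "busy"},     # provided method is called
    "busy":    {"!call_query":    "waiting"},  # A invokes B's method
    "waiting": {"?return_query":  "done"},     # B's return is received
    "done":    {"!return_lookup": "idle"},     # A returns to its client
}

def run(lts, state, trace):
    """Follow a trace of labels; return the final state, or None if the
    trace is not possible from the given state."""
    for label in trace:
        state = lts.get(state, {}).get(label)
        if state is None:
            return None
    return state
```

For instance, `run(component_A, "idle", ["?call_lookup", "!call_query"])` reaches the state where A is waiting for B's return, while a call attempted in the wrong state yields `None`.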
    99. 99. Model-Based Testing of Components specifications: ideal: s A ∈ LTS( L I , L U ); practice: s A ∈ LTS( L I p , L U p ) and s B ∈ LTS( L I u , L U u ), with L I = L I p ∪ L I u = provided method calls ∪ used method returns, and L U = L U p ∪ L U u = provided method returns ∪ used method calls (diagram: tester around IUT component A, exchanging method calls and returns over L I p , L U p , L I u , L U u )
    100. 100. Model-Based Testing of Components Input-enabledness : ∀ s of IUT , ∀ ?a ∈ L I : s =?a⇒ ? No ! For method returns ? (diagram: tester around IUT component A, exchanging method calls and returns over L I p , L U p , L I u , L U u )
    101. 101. Implementation Relation wioco i uioco s =def ∀ σ ∈ Utraces ( s ) : out ( i after σ ) ⊆ out ( s after σ ) in ( s after σ ) = { a? ∈ L I | s after σ must a? } i wioco s =def ∀ σ ∈ Utraces ( s ) : out ( i after σ ) ⊆ out ( s after σ ) and in ( i after σ ) ⊇ in ( s after σ ) (u)ioco domain-extended, conservatively, for non-input-enabled implementations: wioco s after σ must a? =def ∀ s’ : s =σ⇒ s’ implies s’ =a?⇒
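As an illustration only, a bounded-depth check of the two wioco conditions on finite, deterministic, τ-free transition systems might be sketched as below. The `LTS` class, the label conventions, and the `wioco` function are invented for this sketch and are not taken from any existing tool; Utraces is approximated here by the bounded traces of the (deterministic) specification.

```python
# Hypothetical sketch of the wioco check on finite, deterministic,
# tau-free labelled transition systems. Labels starting with '?' are
# inputs, labels starting with '!' are outputs; DELTA marks quiescence.
# All names are invented for this illustration.

DELTA = "delta"

class LTS:
    def __init__(self, transitions, init):
        self.trans = transitions          # state -> {label: next_state}
        self.init = init

    def after(self, trace):
        """State reached after a trace, or None if the trace is impossible."""
        s = self.init
        for a in trace:
            if a == DELTA:
                if self.out_labels(s):    # delta only in quiescent states
                    return None
                continue                  # quiescence does not change state
            if a not in self.trans.get(s, {}):
                return None
            s = self.trans[s][a]
        return s

    def out_labels(self, s):
        return {a for a in self.trans.get(s, {}) if a.startswith("!")}

    def in_labels(self, s):
        return {a for a in self.trans.get(s, {}) if a.startswith("?")}

    def out(self, s):
        """out(.) includes quiescence when no output is possible."""
        return self.out_labels(s) or {DELTA}

def wioco(impl, spec, max_depth=6):
    """Check out(i after sigma) subset of out(s after sigma) and
    in(i after sigma) superset of in(s after sigma), over bounded traces."""
    def explore(trace, depth):
        s_spec, s_impl = spec.after(trace), impl.after(trace)
        if s_spec is None or s_impl is None:
            return True                   # trace not in the spec's traces
        if not impl.out(s_impl) <= spec.out(s_spec):
            return False                  # forbidden output (or quiescence)
        if not impl.in_labels(s_impl) >= spec.in_labels(s_spec):
            return False                  # required input not accepted
        if depth == 0:
            return True
        labels = spec.in_labels(s_spec) | spec.out(s_spec)
        return all(explore(trace + [a], depth - 1) for a in labels)
    return explore([], max_depth)
```

For example, against a specification that offers !coffee after ?dub, an implementation that produces !tea after ?dub is rejected, because !tea is not among the specified outputs.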
    102. 102. <ul><li>Test implementation of A : I A </li><li>with respect to model of A : M A </li><li>according to implementation relation wioco </li><li>I A wioco M A =def ∀ σ ∈ Utraces ( M A ) : out ( I A after σ ) ⊆ out ( M A after σ ) </li><li>and in ( I A after σ ) ⊇ in ( M A after σ ) </li></ul>Implementation Relation wioco (diagram: component A in a system with component B, tested against a model of A with wioco)
    103. 103. Implementation Relation wioco (diagram: example vending-machine transition systems over labels ?dub, ?kwart, !coffee, !tea, !choc, each marked as ioco or not ioco with respect to specification s)
    104. 104. Implementation Relation wioco (diagram: example vending-machine transition systems over labels ?dub, ?kwart, !coffee, !tea, !choc, each marked as wioco or not wioco with respect to specification s)
    105. 105. Implementation Relation wioco (diagram: further example transition systems marked as wioco or not wioco with respect to specification s)
    106. 106. Implementation Relation wioco (diagram: further example transition systems marked as wioco or not wioco with respect to specification s)
    107. 107. Implementation Relation eco i uioco s =def ∀ σ ∈ Utraces ( s ) : out ( i after σ ) ⊆ out ( s after σ ) in ( e after σ ) = { a? ∈ L I | e after σ must a? } i eco e =def ∀ σ ∈ Utraces ( e ) ∩ L * : uit ( i after σ ) ⊆ in ( e after σ ) [ and in ( i after σ ) ⊇ uit ( e after σ ) ] uit ( i after σ ) = out ( i after σ ) ∖ { δ } (u)ioco for testing with a specification of the environment: eco e after σ must a? =def ∀ e’ : e =σ⇒ e’ implies e’ =a?⇒
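Analogously, a hypothetical bounded-depth sketch of the two eco conditions: every call the component may make must be an input its environment model accepts, and the component must accept every reply the environment may send. The representation and all names are invented for this sketch; quiescence toward the environment is treated as always allowed, matching uit stripping δ.

```python
# Hypothetical sketch of the eco check between a component and an
# environment model, both finite deterministic LTSs as plain dicts.
# '?' marks inputs and '!' marks outputs from each side's own
# perspective; quiescence is always allowed. Names are illustrative.

def step(trans, state, label):
    """Successor state under a label, or None if not enabled."""
    return trans.get(state, {}).get(label)

def outs(trans, s):   # uit(.): output labels only, delta always allowed
    return {a for a in trans.get(s, {}) if a.startswith("!")}

def ins(trans, s):
    return {a for a in trans.get(s, {}) if a.startswith("?")}

def flip(label):
    """An output '!a' of one side is the input '?a' of the other."""
    return ("?" + label[1:]) if label.startswith("!") else ("!" + label[1:])

def eco(impl, i0, env, e0, depth=6):
    """uit(i after sigma) subset of in(e after sigma), and
    in(i after sigma) superset of uit(e after sigma), bounded depth."""
    def explore(si, se, d):
        # every call the component may make must be one e accepts
        if outs(impl, si) - {flip(a) for a in ins(env, se)}:
            return False
        # every reply e may send must be one the component accepts
        if outs(env, se) - {flip(a) for a in ins(impl, si)}:
            return False
        if d == 0:
            return True
        for a in ins(env, se) | outs(env, se):   # follow traces of e
            ni = step(impl, si, flip(a))
            ne = step(env, se, a)
            if ni is not None and ne is not None:
                if not explore(ni, ne, d - 1):
                    return False
        return True
    return explore(i0, e0, depth)
```

For instance, a component that calls `query` on a database model that accepts `?query` and answers `!result` passes, while a component that calls an unoffered `delete` is rejected.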
    108. 108. <ul><li>Test implementation of A : I A </li><li>with respect to model of B : M B </li><li>according to implementation relation eco </li></ul>Implementation Relation eco (diagram: component A tested with wioco against a model of A and with eco against a model of B) I A eco M B =def ∀ σ ∈ Utraces ( M B ) ∩ L * : uit ( I A after σ ) ⊆ in ( M B after σ ) and in ( I A after σ ) ⊇ uit ( M B after σ )
    109. 109. Implementation Relation eco (diagram: an environment model e and example vending-machine transition systems over labels ?dub, ?kwart, !coffee, !tea, !choc)
    110. 110. Implementation Relation eco (diagram: example transition systems marked as eco or not eco with respect to environment model e)
    111. 111. Implementation Relation eco (diagram: further example transition systems marked as eco or not eco with respect to environment model e)
    112. 112. Test Generation Algorithm for eco Algorithm: to generate a test case t ( E ) from a transition system specification E , with E ⊆ S a set of states ( initially E = e 0 after ε ), apply the following steps recursively, non-deterministically: ( 1 ) end test case: pass ; ( 2 ) supply input !a , with !a ∈ uit ( E ) : continue with t ( E after !a ) ; meanwhile an allowed output ?x ∈ in ( E ) continues with t ( E after ?x ) , a forbidden output ?y ∉ in ( E ) gives fail ; ( 3 ) observe all outputs: an allowed output ?x ∈ in ( E ) continues with t ( E after ?x ) , a forbidden output ?y ∉ in ( E ) gives fail
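A toy rendering of this recursive, non-deterministic scheme, with an invented dict representation for test cases; this is a sketch of the three choices only, not the algorithm of any existing tool, and all names are made up.

```python
# Hypothetical sketch of eco test generation from a deterministic
# environment model (a dict: state -> {label: next_state}). '?' labels
# are calls the environment accepts, '!' labels are stimuli it may
# send. The nested-dict test representation is invented.
import random

def gen_test(env, state, depth):
    """Recursively, non-deterministically build a test tree:
    either end with pass, stimulate the IUT, or observe its calls."""
    moves = env.get(state, {})
    stimuli = [a for a in moves if a.startswith("!")]
    allowed = [a for a in moves if a.startswith("?")]
    if depth == 0 or not moves or random.random() < 0.2:
        return "pass"                                   # choice 1: end test
    if stimuli and random.random() < 0.5:
        a = random.choice(stimuli)                      # choice 2: stimulate
        return {"stimulate": a,
                "then": gen_test(env, moves[a], depth - 1)}
    # choice 3: observe all outputs of the IUT; a call the environment
    # accepts continues the test, any other call yields fail
    return {"observe": {a: gen_test(env, moves[a], depth - 1)
                        for a in allowed},
            "otherwise": "fail"}
```

Running the generator repeatedly yields different test cases from the same model, which is exactly the non-determinism the algorithm prescribes.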
    113. 113. Test Generation for eco (diagram: worked example in which a test case t generated from environment model e is executed against implementations i and i‘ , with pass and fail verdicts)
    114. 114. <ul><li>Testing with wioco and eco can be performed concurrently but it still is partial testing ! </li></ul><ul><li>coordination and dependence between actions at both interfaces is not tested </li></ul><ul><li>Example: A shall look up information in a database (= component B); A queries B but does not use the returned information; instead it invents the information itself. </li></ul><ul><li>This is not the “ideal” testing of component A in isolation, but only a step in that direction ! </li></ul><ul><li>More research required ........ </li></ul>wioco and eco (diagram: IUT component A with a tester applying model-based wioco and model-based eco)
    115. 115. Compositional Testing i 1 ioco s 1 and i 2 ioco s 2 ⟹ ? i 1 || i 2 ioco s 1 || s 2 (diagram: example transition systems over labels ?but, !x, !y, ?ok, ?err illustrating the question)
    116. 116. Compositional Testing i 1 ioco s 1 and i 2 ioco s 2 ⟹ i 1 || i 2 ioco s 1 || s 2 If s 1 , s 2 input enabled - s 1 , s 2 ∈ IOTS - then ioco is preserved !
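For intuition about the composition i 1 || i 2 used here, a hypothetical sketch of parallel composition of two finite deterministic transition systems: they synchronize on a declared set of shared action names (an output of one with the matching input of the other) and interleave everything else. The dict representation and all names are invented for this illustration.

```python
# Hypothetical sketch of parallel composition i1 || i2 of two finite
# deterministic LTSs given as dicts (state -> {label: next_state}).
# Labels over names in `shared` synchronize (complementary !/? pairs);
# all other labels interleave. Representation is illustrative only.

def flip(label):
    """The output '!a' of one side is the input '?a' of the other."""
    return ("?" + label[1:]) if label.startswith("!") else ("!" + label[1:])

def compose(t1, i1, t2, i2, shared):
    """Product transition relation, keyed by reachable state pairs."""
    trans, todo, seen = {}, [(i1, i2)], {(i1, i2)}
    while todo:
        s1, s2 = todo.pop()
        moves = {}
        for a, n in t1.get(s1, {}).items():
            if a[1:] in shared:
                partner = t2.get(s2, {}).get(flip(a))
                if partner is not None:
                    moves[a] = (n, partner)       # synchronized step
            else:
                moves[a] = (n, s2)                # component 1 moves alone
        for a, n in t2.get(s2, {}).items():
            if a[1:] not in shared:
                moves[a] = (s1, n)                # component 2 moves alone
        trans[(s1, s2)] = moves
        for nxt in moves.values():
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return trans
```

With a first component that accepts ?but and emits !x, and a second that consumes ?x and emits !y, composing over the shared name x gives a product in which the button press interleaves and the x-exchange is a single synchronized step.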
    117. 117. Concluding <ul><li>Testing can be formal, too (M.-C. Gaudel, TACAS'95) </li></ul><ul><ul><li>Testing shall be formal, too </li></ul></ul><ul><li>A test generation algorithm is not just another algorithm : </li></ul><ul><ul><li>Proof of soundness and exhaustiveness </li></ul></ul><ul><ul><li>Definition of test assumption and implementation relation </li></ul></ul><ul><li>For labelled transition systems : </li></ul><ul><ul><li>( u / w ) ioco , eco for expressing conformance between imp and spec </li></ul></ul><ul><ul><li>sound and exhaustive test generation algorithms </li></ul></ul><ul><ul><li>tools generating and executing tests: TGV , TestGen , Agedis , TorX , . . . . </li></ul></ul>
    118. 118. <ul><li>Model based formal testing can improve the testing process : </li></ul><ul><li>model is precise and unambiguous basis for testing </li></ul><ul><ul><li>design errors found during validation of model </li></ul></ul><ul><li>longer, cheaper, more flexible, and provably correct tests </li></ul><ul><ul><li>easier test maintenance and regression testing </li></ul></ul><ul><li>automatic test generation and execution </li></ul><ul><ul><li>full automation : test generation + execution + analysis </li></ul></ul><ul><li>extra effort of modelling compensated by better tests </li></ul>Perspectives
    119. 119. Thank You
