Model-based Testing


  1. Agenda — Introduction; MDSE – Model-Driven Software Engineering; Model-based testing (• Model your system • Test coverage criteria and test case generation • Test case execution); A model-based testing tool; Research project.
     Title slide — Model-based Testing, Ana Paiva, apaiva@fe.up.pt, www.fe.up.pt/~apaiva. Model-Driven Software Engineering, MAP-I, 11/11/2008.
     Testing — Testing is the activity performed for evaluating product quality, and for improving it, by identifying defects and problems [SWEBOK 2004]. It consists of the dynamic verification of the behaviour of a program on a finite set of test cases, suitably selected from the usually infinite execution domain, against the expected behaviour. Testing executes software in order to detect failures and then tries to identify and fix the faults that cause them.
     Fault vs Failure — By definition, a fault is a structural imperfection in a software system that may lead to the system eventually failing. A fault is the cause of a failure: it is the execution of faults in software that causes failures. Testers look for failures and then try to identify the faults that cause those failures.
  2. Software quality properties — External properties (the user's perspective): • Satisfaction: subjective view (pleasant, comfortable, intuitive, consistent) • Reliability: refers to the errors a user can make when using the system • Learnability: time taken to learn how to use the system • Efficiency: how efficient a user can be when using the system. Internal properties (the software engineer's perspective): • Code: readable, style, … • Architecture: influences the degree of manageability and scalability • Run-time efficiency: related to the complexity of the algorithms • Correctness: verification (meets the specification) and validation (meets the users' requirements). [GC96]
     General test (V&V) strategies — Manual vs automatic; Static vs dynamic; White-box vs black-box; Error/fault-based vs specification-based; API vs UI testing; Functional vs non-functional (usability, performance, robustness).
     Different kinds of tests: the V-Model — (figure relating development phases to the corresponding test levels).
     Agenda — Introduction; Model-based testing (• Model your system • Test coverage criteria and test case generation • Test case execution); A model-based testing tool; Research project.
  3. Model-based testing — Model-based testing is software testing in which test cases are derived, in whole or in part, from a model that describes some (usually functional) aspects of the system under test (SUT) [UL07]. Model-based testing uses a model program to generate test cases or to act as an oracle, and includes both • offline testing and • on-the-fly (online) testing. An oracle is the authority that provides the correct result used to judge the outcome of a test case – whether the test passed or failed. [A-site]
     Why do we need MBT? — Unit testing is not enough: we need meaningful sequences of action calls that provide systematic behavioural coverage. In the software industry, model-based testing is being adopted as an integrated part of the testing process; in academia, several model-based testing conferences have been started; and several commercial tools have been developed for model-based testing.
     Model-based testing: advantages and disadvantages — Advantages: • higher degree of automation (test case generation) • allows more exhaustive testing • good for correctness/functional testing • the model can easily be adapted to changes. Disadvantages: • requires a formal specification/model • test case explosion problem • test case generation has to be controlled appropriately to produce a test suite of manageable size • small changes to the model can result in a totally different test suite • time to analyse failed tests (model, SUT, adaptor code).
     Model-based testing process — 1. Modelling: map the system under test (SUT) to a software model. 2. Test case generation: derive an abstract test suite from the model (input data; expected outputs (oracle); requirements traceability matrix; model coverage). 3. Test case concretization: turn the abstract test suite into a concrete test suite (SUT coverage). 4. Test case execution. 5. Analysis: bug report.
  4. Model-based testing challenges — Effort & guidelines; Test adequacy; Test case explosion; Model-to-implementation gap. (Same process figure: modelling, test case generation, concretization, execution, analysis.) [UL07]
     Model your system — Design the model to meet your testing goals. Choose the right level of abstraction (which aspects of the SUT you want to test); you can have a many-to-many relationship between the operations of your model and the operations of the SUT. Choose a notation for modelling. Once the model is written, ensure that it is accurate (validate and verify your model).
     Models: build or borrow? — Reuse the development models (100% reuse): too much detail; they usually do not describe the dynamic behaviour of the SUT; neither abstract enough nor precise. Models used to generate code, or reverse-engineered models: lack of independence (implementation and test cases are derived from the same source code). Develop test models from scratch (0% reuse): maximum level of independence. [UL07]
     Notations — Pre/post (or model-based), e.g. VDM, Z, Spec#. Transition-based, e.g. FSM, Petri nets. Behaviour-based (or history-based), e.g. CSP, MSC. Property-based (or functional), e.g. OBJ. Hybrid approaches, e.g. RAISE. UML with OCL? (pre/post, Set, OrderedSet, Bag, Sequence, and associations among classes.) [UL07]
  5. Pre/Post: VDM++ — Stack example:
        class Stack
        instance variables
          stack : seq of int := [];
          -- inv ...
        operations
          Stack : () ==> ()
          Stack() == stack := []
          post stack = [];
          Push : int ==> ()
          Push(i) == stack := [i] ^ stack
          post stack = [i] ^ ~stack;
          Pop : () ==> ()
          Pop() == stack := tl stack
          pre stack <> []
          post stack = tl ~stack;
          Top : () ==> int
          Top() == return (hd stack)
          pre stack <> []
          post RESULT = hd stack and stack = ~stack;
        end Stack
     Transition-based: FSM — Choosing the set of states is a critical step. (Figure: two states, "len s = 0" and "len s != 0", with a Push(k) transition from the empty to the non-empty state and Push/Pop transitions on the non-empty state.) Statecharts are another option.
     Behaviour-based: CSP —
        Stack(< >)   = push?x:E -> Stack(<x>)
        Stack(<y>^s) = push?x:E -> Stack(<x>^<y>^s)
                     | pop!y    -> Stack(s)
        or, equivalently:
        Stack()  = push?x ; Stack(x)
        Stack(x) = push?y ; Stack(y) ; Stack(x)
                 | pop!x  ; Stack()
     Property-based: OBJ —
        Spec: Stack;
        Extend Nat by
        Sorts: Stack;
        Operations:
          newstack: → Stack
          push: Stack × Nat → Stack
          pop:  Stack → Stack
          top:  Stack → Nat
        Variables: s: Stack; n: Nat
        Axioms:
          pop(newstack)  = newstack;
          top(newstack)  = zero;
          pop(push(s,n)) = s;
          top(push(s,n)) = n;
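To connect the pre/post style to executable code, the sketch below re-expresses the VDM++ stack model in plain Python, turning the pre- and postconditions into assertions so the object can serve as a test oracle. This is only an illustration under that assumption; the class and method names are made up and nothing here is produced by VDM++ tooling.

```python
# Minimal sketch of the VDM++ stack model as an executable oracle.
# The VDM++ pre-state "~stack" becomes the local variable `old`.

class StackModel:
    def __init__(self):
        self.stack = []                                 # stack : seq of int := []

    def push(self, i):
        old = list(self.stack)
        self.stack = [i] + self.stack
        assert self.stack == [i] + old                  # post stack = [i] ^ ~stack

    def pop(self):
        assert self.stack != [], "pre: stack <> []"     # pre stack <> []
        old = list(self.stack)
        self.stack = self.stack[1:]
        assert self.stack == old[1:]                    # post stack = tl ~stack

    def top(self):
        assert self.stack != [], "pre: stack <> []"     # pre stack <> []
        old = list(self.stack)
        result = self.stack[0]
        assert result == old[0] and self.stack == old   # post RESULT = hd stack
        return result


if __name__ == "__main__":
    m = StackModel()
    m.push(1); m.push(2)
    assert m.top() == 2
    m.pop()
    assert m.top() == 1
```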
  6. Choosing a notation — Pre/post (for modelling complex data) and transition-based (for modelling control) are the most common notations used in model-based testing processes. Whatever notation you choose, it has to be a formal language with precise semantics, so that you can write models accurate enough to be used as test oracles.
     Agenda — State of the art; Model-based testing (• Model your system • Test coverage criteria and test case generation • Test case execution); A model-based testing tool; Research project.
     Test suite — Key point: in this context, a test suite contains the sequences of actions to perform, the input data and the expected results. The best test suite is the smallest one capable of finding the maximum number of bugs. A good test suite should combine good code coverage with good requirements (or specification) coverage.
     Coverage criteria and analysis — Coverage criteria help you generate a good test suite and determine when to stop testing; but, above all, your expert knowledge of the SUT is the key factor for success. Coverage analysis aims to measure the extent to which a given verification activity has achieved its objectives; it can be used to evaluate the quality of the test suite and also to determine when to stop the verification process. It is usually expressed as a percentage referring to the accomplished part of an activity. [UL07]
  7. Coverage criteria — 1. Structural coverage criteria: aim to exercise the code or the model with respect to some coverage goal. 2. Data coverage criteria: aim to cover the input data space of an operation or transition. 3. Fault-based criteria: aim to generate test suites appropriate for detecting specific kinds of faults. 4. Requirements coverage criteria: aim to ensure that each requirement is tested. 5. Others. [UL07]
     1. Structural coverage criteria — Statement coverage (SC): every executable statement is invoked at least once. Decision coverage (DC): expression outcomes are tested for true and false (e.g., (A or B) tested with TF and FF). Condition coverage (CC): each condition within the expression takes all possible outcomes (e.g., (A or B) tested with TF and FT). Decision/condition coverage (D/CC): combines the two previous criteria (e.g., (A or B) tested with TT and FF). Modified condition/decision coverage (MC/DC): each condition independently affects the outcome of the decision (e.g., (A or B) tested with TF, FT and FF). Multiple condition coverage (MCC): tests every possible combination of inputs, i.e. 2^n tests for a decision with n inputs (most of the time unfeasible). [HVCKR01] A sketch of these test-vector sets appears after this slide group.
     1. Structural coverage criteria (subsumption, strongest first) — Multiple condition coverage (MCC); Modified condition/decision coverage (MC/DC); Full predicate coverage (FPC); Decision/condition coverage (D/CC); Decision coverage (DC) and Condition coverage (CC); Statement coverage (SC). [UL07]
     1. Structural coverage criteria for FSMs (strongest first) — All-Paths; All-One-Loop-Paths; All-Transition-Pairs; All-Round-Trips; All-Configurations; All-Loop-Free-Paths; All-Transitions; All-States. [UL07]
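As a concrete reading of the (A or B) example on the structural-coverage slide, the following Python sketch lists the test-vector sets named there for each criterion and prints what each set exercises. It is an illustration only, not a coverage-measurement tool; the helper names are made up.

```python
# Test-vector sets for the decision (A or B), as named on the slide.
from itertools import product

def decision(a, b):
    return a or b

criteria = {
    "DC (decision)":            [(True, False), (False, False)],  # outcomes T and F
    "CC (condition)":           [(True, False), (False, True)],   # each condition T and F
    "D/CC":                     [(True, True), (False, False)],   # both of the above
    "MC/DC":                    [(True, False), (False, True), (False, False)],
    "MCC (multiple condition)": list(product([True, False], repeat=2)),  # all 2^n combos
}

for name, vectors in criteria.items():
    outcomes = {decision(a, b) for a, b in vectors}    # decision outcomes reached
    a_values = {a for a, _ in vectors}                 # values taken by condition A
    b_values = {b for _, b in vectors}                 # values taken by condition B
    print(f"{name:26} outcomes={outcomes} A={a_values} B={b_values}")
```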
  8. 2. Data coverage criteria — Choose the variables' values from their domains: • one value (most of the time not enough); • a smart choice of values: equivalence class partitioning (e.g., type-based), boundary value analysis, random value generation, goal-oriented methods (e.g., AI planning), path-oriented methods (note that these criteria select data values only – they do not generate a test sequence of actions); • all values (most of the time unfeasible). [UL07]
     2. Equivalence class partitioning — A partition of some set S is a set of non-empty subsets SS1, ..., SSn such that any SSi and SSj are disjoint and the union of all the SSi equals S. If a defect is detected by one member of a class, it is expected that the same defect would be detected by any other element of the same class. (Figure: the input domain divided into classes i1, i2, i3, i4.) See the value-selection sketch after this slide group.
     2. Boundary value analysis — Boundary value analysis tests the boundary conditions of equivalence classes by choosing boundary input values. The technique is based on the knowledge that input values at, or just beyond, the boundaries of the input domain tend to cause errors in the system. Test cases for the classes x < 0 and x >= 0: class x < 0, arbitrary value x = -10; class x >= 0, arbitrary value x = 100; on the boundary, x = 0; below and above the boundary, x = -1 and x = 1. "Bugs lurk in corners and congregate at boundaries." [Boris Beizer, "Software Testing Techniques"]
     2. Goal-oriented — Goal-oriented methods try to drive the system into a given goal by two different approaches: the chaining approach and the assertion-oriented approach. The first tries to find a path to the execution of a given goal node based on data-dependence analysis; the second tries to find any path to an assertion that does not hold.
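The value-selection idea behind equivalence class partitioning and boundary value analysis can be sketched as below, using the x < 0 / x >= 0 classes and the representative values quoted on the slides; the function names and the choice of 0 as the boundary are assumptions made only for this example.

```python
# Data-coverage value selection: one representative per equivalence class,
# plus boundary values on and around the class border.

def equivalence_representatives():
    # one arbitrary value per class, as on the slide
    return {"x < 0": -10, "x >= 0": 100}

def boundary_values(boundary=0):
    # on the boundary, and just below / just above it
    return [boundary - 1, boundary, boundary + 1]

def select_test_inputs():
    values = set(equivalence_representatives().values())
    values.update(boundary_values(0))
    return sorted(values)

if __name__ == "__main__":
    print(select_test_inputs())   # [-10, -1, 0, 1, 100]
```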
  9. 2. Path-oriented — E.g., symbolic testing: it replaces program variables by symbols and calculates constraints that represent the possible symbolic execution paths. When a program variable is changed during execution, the new value is expressed as a constraint over the symbolic variables. A constraint solver can then be used to find, when possible, concrete values that cause the execution of the path described by each constraint. Example (x = X, y = Y):
        int x, y;
        if (x > y) {            X >? Y          -- else: [X<=Y] END
          x = x + y;            [X>Y] x = X+Y
          y = x - y;            [X>Y] y = X+Y-Y = X
          x = x - y;            [X>Y] x = X+Y-X = Y
          if (x - y > 0)        [X>Y] Y-X >? 0   -- else: [X>Y, Y-X<=0] END
            assert false;       [X>Y, Y-X>0] END
        }
     [W03]
     3. Fault-based criteria — Mutation testing: mutation techniques introduce small changes (faults) into the original specification by applying mutation operators (e.g., to arithmetic operators). The changed specifications are called mutants. The goal is to construct test cases that distinguish each mutant from the original by producing different results; when that happens, the test case is said to have killed the mutant. A good test case should be capable of killing mutants: if it can detect the small differences generated by the mutation operators, it can be expected to be good at finding real faults. The rate of killed mutants (after removing mutants that are equivalent to the original code) gives an indication of the rate of undetected defects that may remain in the original code. One of the problems of mutation testing is that the technique by itself does not generate test data. A small sketch of this loop follows this slide group.
     4. Requirements coverage criteria — Requirements coverage is usually less systematic, and requirements usually do not contain the full specification of the system's behaviour. However, at least two approaches try to make it more systematic: • record the requirements inside the behaviour model (as annotations on several parts of the model) so that the test generation process can ensure that all requirements have been tested; • formalize each requirement and then use that formal expression as a test selection criterion to drive the automated generation of one or more tests from the behaviour model. Example: a coverage criterion measuring the degree to which use cases or scenarios were tested. Scenarios describe how the system and the user should interact to achieve a specific goal; they usually refer to common usages of the system and may not be a full description of its behaviour.
     5. Model checking — Whenever a property, expressed in temporal logic, does not hold in a system described as an FSM, model checking tries to generate a counter-example. When a counter-example is produced, it can be used as a test case – a sequence of transitions in the FSM with inputs and expected outputs. To be effective as a test-case generation technique, the properties about the system should be described in such a way that the counter-examples produced from them can be used as test cases. (Figure: a system model (FSM) and the property G(x -> F y) are fed to a model checker, which answers yes or no; a "no" answer comes with a counter-example such as x = T,T,F,F,...; y = F,F,F,T,....) [ABM98]
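A minimal sketch of the mutation-testing loop described above: a single mutation operator is applied to a tiny function and the test suite is checked for whether it kills the mutant. The function, the mutant and the tests are invented for illustration; a real mutation tool would generate many mutants automatically.

```python
# Mutation testing in miniature: one mutant, one test suite, one verdict.

def original(x, y):
    return x + y

def mutant(x, y):          # mutation operator applied: '+' replaced by '-'
    return x - y

test_suite = [
    ((2, 0), 2),           # does NOT kill the mutant: 2 + 0 == 2 - 0
    ((2, 3), 5),           # kills the mutant: 2 + 3 != 2 - 3
]

def killed(mutant_fn, tests):
    # a mutant is killed if some test observes a result different from the expected one
    return any(mutant_fn(*args) != expected for args, expected in tests)

if __name__ == "__main__":
    assert all(original(*args) == expected for args, expected in test_suite)
    print("mutant killed:", killed(mutant, test_suite))   # True
    # mutation score = killed mutants / (total mutants - equivalent mutants)
```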
  10. 5. Test case generation from property-based models — Common techniques used to generate test cases from these specifications are rewriting and constraint solving. Given a set of expressions (logical assertions or equivalence relations) and the set of variables within those expressions, constraint-solving techniques try to find an instantiation of the variables that reduces the expressions to true – e.g., [X>Y, Y-X>0] = impossible!
     5. Test case generation from behaviour-based models — Analysis of the execution traces to generate test cases. A trace in CSP is a finite sequence of events, e.g. push?x; push?y; pop!x; push?x. Another example of test case generation from CSP specifications is illustrated in [BS02]: the goal is to test the Universal Mobile Telecommunications System (UMTS). They start by constructing a transition graph with all possible interleavings and parallel tasks; a test driver then computes all paths through this graph, which are used as test sequences.
     Test case generation overview — Depends on the characteristics of the specification: • Pre/post (e.g., VDM, Z, Spec#): partition testing; generation of an FSM from the model (e.g., Spec Explorer). • Transition-based (e.g., FSM, EFSM, statecharts): traversal algorithms (state and transition coverage); model checking (algorithms based on state exploration; mutation analysis). • Property-based (e.g., OBJ): rewriting rules; constraint solving. • Behaviour-based (e.g., CSP): trace analysis.
     Agenda — State of the art; Model-based testing (• Model your system • Test coverage criteria and test case generation • Test case execution); A model-based testing tool; Research project.
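To illustrate the constraint-solving step, the sketch below brute-forces a small finite domain in place of a real constraint solver and shows that the path condition [X>Y, Y-X>0] from the symbolic-execution example has no solution. The domain bounds and helper names are assumptions made for the example.

```python
# Brute-force "constraint solving" over a small domain: find an instantiation
# of (x, y) that makes every constraint true, or report that none exists.
from itertools import product

def solve(constraints, domain):
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return (x, y)
    return None                           # no satisfying instantiation in the domain

domain = range(-10, 11)
feasible   = [lambda x, y: x > y]                            # leads into the 'then' branch
infeasible = [lambda x, y: x > y, lambda x, y: y - x > 0]    # [X>Y, Y-X>0]

if __name__ == "__main__":
    print(solve(feasible, domain))     # some pair with x > y, e.g. (-9, -10)
    print(solve(infeasible, domain))   # None: "impossible!"
```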
  11. Test suite "concretization" — Abstract test cases live at the level of abstraction of the specification; the SUT lives at the concrete level. The gap can be bridged in three ways: a) adaptation – an adapter sits between the abstract test cases and the SUT; b) transformation – abstract test cases are transformed into concrete test scripts that run directly against the SUT; c) mixed – a combination of both. [UL07]
     Test suite execution — "Lock-step" mode: results are compared after each step. "Batch-oriented" mode: the test suite is run as a whole at the specification level, and the expected results are kept in memory for later comparison with the results obtained from executing the implementation (which happens at a different time). One advantage of the batch-oriented mode is that the model only needs to be executed once, not every time the test cases are executed; the main drawback is the additional memory needed to keep the expected results. "On-the-fly testing" combines test case generation and execution in a single algorithm and executes each operation in lock-step at both levels, comparing results after each execution step. A lock-step execution sketch follows this slide group.
     Agenda — State of the art; Model-based testing (• Model your system • Test coverage criteria and test case generation • Test case execution); A model-based testing tool: Spec Explorer; Research project.
     System testing with model programs — Spec Explorer. Microsoft Research (FSE): Colin Campbell, Wolfgang Grieskamp, Yuri Gurevich, Lev Nachmanson, Wolfram Schulte, Nikolai Tillmann, Margus Veanes. (Figure: system design and system test.)
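A possible shape of the adaptation option combined with lock-step execution, sketched in Python: an adapter maps each abstract action to a concrete SUT call, and the verdict is computed after every step. ConcreteStack stands in for the real implementation; all names here are illustrative assumptions, not part of any particular tool.

```python
# Lock-step execution of an abstract test case through an adapter.

class ConcreteStack:                      # stand-in for the real SUT (assumption)
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def pop(self): del self._items[-1]
    def top(self): return self._items[-1]

def run_lock_step(abstract_test_case, sut):
    """abstract_test_case: list of (action, args, expected_result) triples."""
    adapter = {"Push": sut.push, "Pop": sut.pop, "Top": sut.top}
    for action, args, expected in abstract_test_case:
        actual = adapter[action](*args)
        if actual != expected:            # verdict computed after each step
            return f"FAIL at {action}{args}: expected {expected}, got {actual}"
    return "PASS"

if __name__ == "__main__":
    test_case = [("Push", (1,), None), ("Push", (2,), None),
                 ("Top", (), 2), ("Pop", (), None), ("Top", (), 1)]
    print(run_lock_step(test_case, ConcreteStack()))
```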
  12. Model-based testing — (Figure.) The model generates the test cases and provides the expected results to the test oracle; the implementation provides the actual results when the test cases are run; the oracle compares them and gives the user pass/fail information.
     System testing — Traditional unit testing is not enough: a single unit may work properly in isolation, while incorrect interaction between units may cause serious security/reliability failures. System-level testing requires a model of the system behaviour. Behaviour is often reactive/nondeterministic (the implementation is multi-threaded or distributed), and the state space is typically infinite (objects, unbounded values), so traditional FSM-based testing does not work.
     Microsoft approach — Behaviour is described by model programs: • written in Spec# • they describe reactive system behaviour as interface automata • alternating refinement is used for conformance. Model exploration, validation and model-based testing are provided by the Spec Explorer tool developed at MSR, which • supports scenario control • provides offline as well as online testing • views test cases as game strategies • checks violations of alternating refinement. [CGNSTV05]
     How it works — Modelling: define an (infinite) transition system TP through a model program P. Exploration: reduce TP to a finite test graph G. Test generation: generate test cases from G. Test execution: run the test cases using the model as the oracle. To avoid state-space explosion, exploration, test generation and execution can be combined into a single on-the-fly algorithm.
  13. Modelling — In Spec#/Spec Explorer: • states are mappings of variables to values (ASM states, i.e. first-order structures) • state comes from variables containing values • compound data types are possible (sets, maps, sequences, etc.) • objects are identities.
     Exploration — Exploration is the process of unfolding a rule-based model program into a transition system: • the initial state is given by the initial assignment to the model variables • actions move the system from state to state • actions are defined by method invocations • preconditions of methods and model invariants define the action-enabling conditions • the transition function is defined by method execution.
     State space of a stack model — Stack model in Spec#:
        var Seq<int> content = Seq{};
        [Action]
        public void Push(int x)
        { content = Seq{x} + content; }
        [Action]
        public void Pop()
          requires !content.IsEmpty;
        { content = content.Tail; }
        [Action]
        public int Top()
          requires !content.IsEmpty;
        { return content.Head; }
     (Figure: the explored state space with states [], [0], [k], [0,k], [l,k] and Push(0), Push(k), Push(l), Pop and Top/value transitions.)
     Controlling exploration (1) — In general the transition system is infinite, but we can impose limits. Goal: create a state space of manageable size that satisfies a given testing goal. Two main tasks: • restrict the action parameter domains to interesting values • restrict the state space to interesting states. Note: the two tasks are not necessarily independent! A bounded-exploration sketch follows this slide group.
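The bounded-exploration idea can be sketched as follows: the stack model program is unfolded into a finite transition system by restricting the Push parameter domain and filtering states by size, in the spirit of "controlling exploration". This is a simplified stand-in for what Spec Explorer does, with made-up function names and bounds.

```python
# Bounded exploration of the stack model program into a finite transition system.
# States are tuples: immutable snapshots of the model variable `content`.

def enabled_actions(state, param_domain, max_size):
    actions = []
    if len(state) < max_size:                       # state filter: bound the stack size
        for x in param_domain:                      # restricted parameter domain
            actions.append((("Push", x), (x,) + state))
    if state:                                       # precondition: stack not empty
        actions.append((("Pop",), state[1:]))
        actions.append((("Top",), state))           # Top does not change the state
    return actions

def explore(initial=(), param_domain=(0, 1), max_size=2):
    states, transitions, frontier = {initial}, [], [initial]
    while frontier:
        s = frontier.pop()
        for action, t in enabled_actions(s, param_domain, max_size):
            transitions.append((s, action, t))
            if t not in states:
                states.add(t)
                frontier.append(t)
    return states, transitions

if __name__ == "__main__":
    states, transitions = explore()
    print(len(states), "states,", len(transitions), "transitions")
```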
  14. Controlling exploration (2) — The following techniques are used in Spec Explorer: • state filters • stopping conditions • state groupings • search order • enabling conditions on actions. Usually a combination of all (or some) of these techniques is needed to achieve the desired effect.
     State groupings — One or more state groupings may be defined. A grouping G is a sequence of state-based expressions g1, …, gk. Two states s and t are in the same group G, or G-equivalent, if gi(s) = gi(t) for 1 ≤ i ≤ k. A G-group is the set of all G-equivalent states. This is similar to predicate abstraction in model checking (when the grouping expressions are Boolean). Groupings are also used for viewing.
     Main purpose of groupings — View all states in a G-group as being the same: a way to define "what is an interesting state" from the testing point of view. Example: content.Size in the stack model.
     Test case generation — FSM generation by bounded exploration from a model of the SUT (Spec# or AsmL): bounds plus coverage criteria are fed to Spec Explorer, which produces an FSM and then a test suite. Set bounds: state filters; additional preconditions; restriction of the domains; equivalence classes; stop conditions; scenario actions. Choose coverage criteria: full transition coverage; shortest path; random walk. (Figure: the stack FSM with states "len s = 0" and "len s != 0", Push(k) and Pop transitions.) A transition-coverage generation sketch follows this slide group.
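For the "full transition coverage" criterion mentioned above, here is one possible (and deliberately naive) generation sketch over a small explored FSM grouped by stack size: repeatedly pick an uncovered transition and build a shortest path from the initial state that ends in it. The FSM and the algorithm are illustrative assumptions, not Spec Explorer's actual algorithm.

```python
# Test-case generation by full transition coverage over a small explored FSM.
from collections import deque

# (source_group, action, target_group), groups = stack size 0, 1 or 2
transitions = [
    (0, "Push", 1), (1, "Push", 2), (1, "Pop", 0),
    (2, "Pop", 1), (1, "Top", 1), (2, "Top", 2),
]

def shortest_path(start, goal_edge):
    """BFS over the transition relation; returns a path ending in goal_edge."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        for edge in transitions:
            s, a, t = edge
            if s != state:
                continue
            if edge == goal_edge:
                return path + [edge]
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [edge]))
    return None

def generate_tests(initial=0):
    """Keep adding paths until every transition is covered by some test case."""
    uncovered, tests = set(transitions), []
    while uncovered:
        target = next(iter(uncovered))
        path = shortest_path(initial, target)
        if path is None:                 # unreachable edge: drop it and move on
            uncovered.discard(target)
            continue
        uncovered -= set(path)
        tests.append([action for (_, action, _) in path])
    return tests

if __name__ == "__main__":
    for test in generate_tests():
        print(" -> ".join(test))
```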
  15. Test generation — The result of exploration is a finite transition system. This connects us to the world of model-based testing: most formal approaches to testing in the literature are based on transition systems, and traversal underlies most techniques for generating tests from automata.
     Test execution — Generated test cases are executed against the implementation under test; the model acts as the oracle.
     Demo: stack — (Figure: the explored stack state space with states [], [0], [k], [0,k], [l,k] and Push/Pop/Top transitions, together with the grouped view "len s = 0" / "len s != 0".)
     Online testing of reactive systems (component view) — Controllable actions flow from the constrained model through the refinement checker to the IUT wrapper and the IUT; observable actions flow back from the IUT to the checker; an object mapping relates model objects to implementation objects; the outcome is a pass/fail verdict.
  16. Example: Chat service — Controllable actions: Login, Send. Observable actions: Receive. (Figure: clients Bob and Mary interacting through a chat server, e.g. b.Send("hi"), m.Send("hello"), b.Send("bye"), m.Receive(b,"hi"), b.Receive(m,"hello"), m.Receive(b,"bye").)
     Refinement relation — Conformance between the model and the system under test is alternating refinement. (Figure: Spec and System traces of Send/Rcv events related through the refinement checker, the IUT wrapper and the object mapping.)
     Example: valid chat scenario — The service should guarantee local consistency of message delivery (FIFO with respect to the sender). Send is a controllable action; Receive is an observable action. (Figure: message sequence chart between Bob, the server and Mary over time.)
     Model program: Chat service — State:
        class Client {
          bool entered = false;
          Map<Client, Seq<string>> unreceivedMsgs = Map{};
        }
     Actions:
        class Client {
          // Controllable:
          void Send(string message)
            requires entered;
          {
            foreach (Client c in enumof(Client), c != this, c.entered)
              c.unreceivedMsgs[this] += Seq{message};
          }
          // Observable:
          void Receive(Client sender, string message)
            requires sender != this &&
                     unreceivedMsgs[sender].Length > 0 &&
                     unreceivedMsgs[sender].Head == message;
          { unreceivedMsgs[sender] = unreceivedMsgs[sender].Tail; }
          …
        }
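The chat model program's state and enabling conditions can be mirrored in Python as below, so the FIFO-per-sender requirement becomes an executable oracle: Send updates the per-sender queues, and Receive is only enabled when the observed message is at the head of the corresponding queue. Class and method names are assumptions made for the sketch; this is not the Spec# model itself.

```python
# Python mirror of the chat model: Send is controllable, Receive is observable.

class ClientModel:
    instances = []                       # plays the role of enumof(Client)

    def __init__(self, name):
        self.name = name
        self.entered = False
        self.unreceived = {}             # sender -> list of undelivered messages
        ClientModel.instances.append(self)

    def enter(self):
        self.entered = True

    def send(self, message):             # controllable action; requires entered
        assert self.entered
        for c in ClientModel.instances:
            if c is not self and c.entered:
                c.unreceived.setdefault(self, []).append(message)

    def receive_enabled(self, sender, message):
        # enabling condition of the observable Receive action
        queue = self.unreceived.get(sender, [])
        return sender is not self and queue and queue[0] == message

    def receive(self, sender, message):  # observable action checked by the tester
        assert self.receive_enabled(sender, message), "FIFO-per-sender violated"
        self.unreceived[sender].pop(0)

if __name__ == "__main__":
    bob, mary = ClientModel("Bob"), ClientModel("Mary")
    bob.enter(); mary.enter()
    bob.send("hi"); bob.send("bye"); mary.send("hello")
    mary.receive(bob, "hi")              # ok: head of Bob's queue
    bob.receive(mary, "hello")           # ok
    mary.receive(bob, "bye")             # ok: FIFO with respect to Bob
```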
  17. Online testing — For large reactive systems, deriving an exhaustive test suite is infeasible, so test generation and test execution are merged into one process. Online testing is a form of model-based stress testing. Purely random exploration does not work: use scenario control to direct online testing. An online-testing loop sketch follows this slide group.
     Example of scenario control: chat initialization — Use a parameterized scenario action Start: invoking Start(n) produces a sequence of n Create() invocations followed by n Enter() invocations (e.g., Start(2) expands to Create()/c0; Create()/c1; c0.Enter(); c1.Enter()).
        [Action(Kind=Scenario)]
        void Start(int nrOfMembers)
          requires enumof(Client) == Set{};
        {
          Seq<Client> clients = Seq{};
          // 1 -- the given number of clients are created
          for (int i = 0; i < nrOfMembers; i++)
            clients += Seq{Create()};
          // 2 -- all clients enter the session
          for (int i = 0; i < nrOfMembers; i++)
            clients[i].Enter();
        }
     Demo: chat — (Figure: message sequence chart between Bob, the server and Mary over time.)
     Experiences — Most tool features were driven by the demands of internal users at Microsoft (mostly testers). Models help discover more bugs during modelling (design bugs) than during testing – and testers do not get credit for finding those bugs! During testing, models help discover deep system-level bugs that manual test scenarios miss; such bugs are hard to understand and fix.
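An on-the-fly (online) testing loop in the spirit described above can be sketched as follows: at each step a random enabled controllable action is chosen, applied to both the model and the SUT, and the observed result is compared immediately. The StackSUT stand-in, its seeded bug and the step budget are assumptions made only so the loop has something to find.

```python
# On-the-fly testing loop: generation and execution merged into one process.
import random

class StackSUT:                      # stand-in implementation under test (assumption)
    def __init__(self): self.items = []
    def push(self, x): self.items.append(x)
    def pop(self): del self.items[-1]
    def top(self):
        # seeded bug: wrong answer whenever the stack holds exactly two items
        return -999 if len(self.items) == 2 else self.items[-1]

def online_test(steps=500, rng=None):
    """Pick an enabled action at random, run it on model and SUT, compare results."""
    rng = rng or random.Random()
    model, sut = [], StackSUT()
    for step in range(steps):
        enabled = [("Push", rng.randint(0, 3))]
        if model:                        # Pop and Top require a non-empty stack
            enabled += [("Pop", None), ("Top", None)]
        action, arg = rng.choice(enabled)
        if action == "Push":
            model.insert(0, arg); sut.push(arg)
        elif action == "Pop":
            model.pop(0); sut.pop()
        else:                            # Top: the model provides the expected value
            expected, actual = model[0], sut.top()
            if expected != actual:
                return f"step {step}: conformance failure, Top expected {expected}, got {actual}"
    return "pass"

if __name__ == "__main__":
    # With the seeded bug, a 500-step random run reports a failure with very high
    # probability; against a correct SUT the loop would simply print "pass".
    print(online_test())
```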
  18. Experiences (cont.) — Bugs appear in both models and implementations (the ratio is roughly 50-50). Code coverage is a poor measure for testing concurrent software: often a single execution thread provides the same coverage. New versions of an implementation usually require only local changes to the models, whereas manual tests often need to be rewritten completely. Most test harnesses at Microsoft are built in managed code.
     NModel – http://www.codeplex.com/NModel — NModel is a model-based analysis and testing framework for model programs written in C#. It provides software support for the book "Model-based Software Testing and Analysis with C#", Cambridge University Press, 2007; see http://staff.washington.edu/jon/modeling-book/ for more information about the book. The tool is built on .NET but has been used to test distributed C++ applications.
     References — Main references:
     [P07] Ana C. R. Paiva, PhD thesis entitled "Automated Model-Based Testing of Graphical User Interfaces": www.fe.up.pt/~apaiva/PhD/PhDGUITesting.pdf
     [UL07] Mark Utting and Bruno Legeard, "Practical Model-Based Testing: A Tools Approach", Morgan Kaufmann, Elsevier, 2007.
     Other references:
     [N00] N. Nyman, "Using Monkey Test Tools", STQE – Software Testing and Quality Engineering Magazine, 2000.
     [UTF-site] Unit testing frameworks: www.nunit.org; www.junit.org
     [JMWU91] R. Jeffries, J. R. Miller, C. Wharton, and K. M. Uyeda, "User Interface Evaluation in the Real World: A Comparison of Four Techniques", 1991.
     [GC96] C. Gram and G. Cockton, "Design Principles for Interactive Software", Chapman & Hall, ISBN 0412724707, 1996.
     [W03] James A. Whittaker, "How to Break Software: A Practical Guide to Testing", ISBN 0201796198.
     [HVCKR01] J. Hayhurst, D. S. Veerhusen, J. J. Chilenski, and L. K. Rierson, "A Practical Tutorial on Modified Condition/Decision Coverage", NASA/TM-2001-210876, 2001.
     [A-site] Alan Hartman – http://www.agedis.de/documents/ModelBasedTestGenerationTools_cs.pdf
     Model-Based Testing papers – www.geocities.com/model_based_testing/online_papers.htm
     References (cont.) — Spec Explorer and Spec# references:
     [BLS04] M. Barnett, K. R. M. Leino, and W. Schulte, "The Spec# Programming System: An Overview", CASSIS'04 – International Workshop on Construction and Analysis of Safe, Secure and Interoperable Smart Devices, Marseille, 2004.
     [CGNSTV05] C. Campbell, W. Grieskamp, L. Nachmanson, W. Schulte, N. Tillmann, and M. Veanes, "Testing Concurrent Object-Oriented Systems with Spec Explorer", FM 2005.
     [FSE-site] FSE web site: research.microsoft.com/foundations
     [SE-site] Spec Explorer download: research.microsoft.com/SpecExplorer/
     Additional reading:
     [BS02] J. Bredereke and B.-H. Schlingloff, "An Automated, Flexible Testing Environment for UMTS", in Proceedings of the IFIP 14th International Conference on Testing Communicating Systems XIV, 2002.
     [ABM98] P. E. Ammann, P. E. Black, and W. Majurski, "Using Model Checking to Generate Tests from Specifications", in Proceedings of the 2nd IEEE International Conference on Formal Engineering Methods (ICFEM'98), Brisbane, Australia, 1998.
  19. END — Questions?
