Assessing Model-Based Testing: An Empirical Study Conducted in Industry

  • This diagram shows the overview of our approach. I would just like to give you a quick overview; on the following slides we’ll go into the details of the process.
    The first step is to analyze the requirements and supporting documentation of the system, which can include existing test cases. This information is used to manually build the model of the system that you would like to test.
    In the next step, the tester has to map the model’s states and transitions to a test execution framework. In the hello-world example, we would have to build a test execution framework that can interact with the buttons of the hello-world program.

    In step 3, the tester automatically generates abstract test cases from the model; an abstract test case is essentially a list of states and transitions from the model.
    To obtain executable test cases, the abstract test cases then have to be instantiated, which means that for each state and transition the associated actions from the test execution framework are embedded in the test case. (A rough sketch of steps 1 through 4 follows these notes.)

    In step 5 the tests are executed, and the results are analyzed in step 6 in order to identify issues in the system.
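    To make steps 1 through 4 concrete, here is a minimal, self-contained sketch in Java. The hello-world model, the transition names, and the printed "actions" are illustrative assumptions only; they stand in for a real model and a real test execution framework.

      // Steps 1-4 in miniature: a hand-built model, abstract test cases as
      // transition sequences, and instantiation via a mapping from transition
      // names to concrete actions of the test execution framework.
      import java.util.*;

      public class HelloWorldMbtSketch {

          // Step 1: a manually built model - states and labeled transitions.
          static Map<String, Map<String, String>> model = Map.of(
              "Start",    Map.of("clickHello", "Greeting"),
              "Greeting", Map.of("clickReset", "Start"));

          // Step 2: map each transition label to a concrete action of the
          // (hypothetical) test execution framework.
          static Map<String, Runnable> actions = Map.of(
              "clickHello", () -> System.out.println("driver: click 'Hello' button"),
              "clickReset", () -> System.out.println("driver: click 'Reset' button"));

          // Step 3: generate abstract test cases - every path up to a bounded length.
          static List<List<String>> abstractTests(String state, int depth) {
              List<List<String>> paths = new ArrayList<>();
              if (depth == 0) { paths.add(new ArrayList<>()); return paths; }
              for (Map.Entry<String, String> t : model.get(state).entrySet()) {
                  for (List<String> rest : abstractTests(t.getValue(), depth - 1)) {
                      List<String> path = new ArrayList<>();
                      path.add(t.getKey());
                      path.addAll(rest);
                      paths.add(path);
                  }
              }
              return paths;
          }

          // Step 4: instantiate an abstract test case by running the mapped actions.
          static void instantiateAndRun(List<String> abstractTest) {
              abstractTest.forEach(step -> actions.get(step).run());
          }

          public static void main(String[] args) {
              for (List<String> test : abstractTests("Start", 3)) {
                  System.out.println("abstract test: " + test);
                  instantiateAndRun(test);
              }
          }
      }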
  • Assessing Model-Based Testing: An Empirical Study Conducted in Industry

    1. Assessing Model-Based Testing - an Empirical Study Conducted in Industry
       Christoph Schulze, Dharmalingam Ganesan, Mikael Lindvall, Rance Cleaveland (Fraunhofer CESE, Maryland)
       Daniel Goldman (Global Net Services Inc. (GNSI), Maryland)
       ICSE 2014 (SEIP Track)
       © 2014 Fraunhofer USA, Inc. Center for Experimental Software Engineering
    2. The Big Picture of our Experiment: Manual Testing vs. Model-based Testing (MBT)
    3. Model-based Testing (MBT): Brief Overview
       • Generate test cases using models (built for testing)
         – Incrementally model the software under test based on requirements
           • Usage behavior, expected system response
         – Models are state machines in this work (other advanced notations exist)
           • Every path through the model is a test case
         – Manual work
           • Model construction and maintenance
           • Mapping of model elements to concrete instructions (see the sketch after this slide)
           • Analysis of test case failures
         – Automatic
           • Test case generation
           • Test case execution and verdict
       • MBT fits many types of systems and types of test cases
         – Web, APIs, xUnit (e.g. JUnit, CUnit, etc.)
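       A rough illustration of how model elements can be mapped to concrete instructions for a web SUT: each model transition name is bound to a method of an adapter class built on Selenium WebDriver, and an instantiated test case simply calls those methods in the order prescribed by the abstract test case. The Selenium and JUnit calls below are real APIs, but the adapter, its methods, the locators, and the URL are hypothetical examples, not the actual test infrastructure used in this study.

       import static org.junit.Assert.assertTrue;
       import org.junit.Test;
       import org.openqa.selenium.By;
       import org.openqa.selenium.WebDriver;
       import org.openqa.selenium.firefox.FirefoxDriver;

       /** Hypothetical adapter: one method per transition of the SUT model. */
       class SutAdapter {
           private final WebDriver driver;
           SutAdapter(WebDriver driver) { this.driver = driver; }

           void openAddForm()       { driver.findElement(By.id("add")).click(); }
           void fillTitle(String s) { driver.findElement(By.name("title")).sendKeys(s); }
           void save()              { driver.findElement(By.id("save")).click(); }
           boolean inReviewState()  { return driver.getPageSource().contains("Pending review"); }
       }

       /** What one instantiated (concrete) test case could look like as a JUnit test. */
       public class GeneratedAddTest {
           @Test
           public void addRecordPath() {
               WebDriver driver = new FirefoxDriver();
               try {
                   driver.get("http://sut.example/");    // placeholder URL
                   SutAdapter sut = new SutAdapter(driver);
                   sut.openAddForm();                    // transition: openAddForm
                   sut.fillTitle("sample record");       // transition: fillTitle
                   sut.save();                           // transition: save
                   assertTrue(sut.inReviewState());      // expected target state
               } finally {
                   driver.quit();
               }
           }
       }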
    4. We have applied MBT to
       • Embedded Flight Software
       • Ground systems
       • Database-driven systems
       • Architecture styles:
         – Pub-Sub-based systems
         – Client-Server-based systems
       • API-level:
         – Middleware and Operating System wrappers
       → This presentation: MBT of Web-based systems
    5. Goal of the Project
       • Evaluate the costs and benefits of MBT as compared to completely manual testing
       • Compare the effectiveness and efficiency of MBT and manual testing methods
         – Compare the number of detected issues
         – Compare the effort
       • Observe differences between manual and automated testing
    6. Commercial System Under Test (SUT)
       • Used by the customers of the U.S. FDA
         – Allows researchers to exchange findings of laboratory analyses regarding foodborne illnesses
       • General functionality:
         – Add/Edit data
         – Review data
         – Search data
         – Sort data into a tree structure
    7. System Under Test – Web Interface
    8. Experimental Set-up
       • Two testers
         – One tester used MBT (@ Fraunhofer)
         – The other tester used a completely manual approach (@ GNSI)
       • Neither tester had prior experience with the SUT
         – The manual tester had 3.5 years of testing experience
         – The MBT tester had 2 years of MBT-based testing experience
       • Both testers were given the same artifacts
         – Use cases, requirements, analysts, and the SUT
       • Two versions
         – Version v1: GUI inherited from a previous contractor
         – Version v2: new GUI front-end
    9.-15. Overview of the MBT Approach (the process diagram is built up step by step over these seven slides; legend: Manual, Automation Support, Fully Automated)
    16.-19. Overview of the Manual Approach (diagram built up over these four slides)
        • No testing tool was used by the manual tester
        • The tester manually:
          – entered data
          – clicked on buttons
          – compared actual results to expected results
    20. Classification of Issue Types
        • Business Logic Issues
          – e.g. functional issues
        • Field Validation
          – e.g. field-length violations are not handled correctly
        • Naming Discrepancies
          – e.g. “Lab or Organization” instead of “Organization”
        • Field Discrepancies
          – e.g. extra/missing fields
        • Usability Issues
          – e.g. broken layout
    21. Issues Found in Version 1 and Version 2

        Category               MBT            Manual         Union
        Business Logic         22             12             27
        Field Validation       6              1              6
        Naming Discrepancies   0              5              5
        Extra Fields           1              6              6
        Usability              7              5              9
        Total                  36 (24 + 12)   29 (17 + 12)   53 (17 + 12 + 24)

        Only MBT: 24    Only Manual: 17    Both: 12
    22. Observation 1: MBT better than Manual
        • Business Logic (MBT 22 vs. Manual 12)
          – Manual testing was accidentally uneven
          – Focused on some parts but missed others
        • Field Validation (MBT 6 vs. Manual 1)
          – MBT always tests the limits of all fields
        • Usability (MBT 7 vs. Manual 5)
          – Systematic use of the system by MBT is good for finding usability issues
    23. Observation 2: Manual better than MBT
        • Field Discrepancies (MBT 1 vs. Manual 6)
        • Naming Discrepancies (MBT 0 vs. Manual 5)
        • Why were they missed by MBT?
          – The models focused on functional issues
        • MBT found more severe issues than manual testing
          – See the paper for the definition of severity levels
    24. Model and Test Infrastructure Metrics
        • Model for add/edit/approve features
          – States: 166
          – Transitions: 250
        • Model for user roles / page access
          – States: 21
          – Transitions: 30
        • Size of the test infrastructure
          – ~2,500 lines of code
          – Most of the code is very simple: filling in forms, reading from forms, and validating the data (a sketch of such a helper follows this slide)
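        To illustrate the kind of code that makes up most of the test infrastructure ("filling in forms, reading from forms and validating the data"), here is a minimal sketch. The Selenium WebDriver calls are real, but the helper class, its method names, and the convention of locating fields by their name attribute are assumptions for the example, not the infrastructure built for this study.

        import java.util.Map;
        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;

        /** Illustrative form helper: fill fields from a map, read them back, validate. */
        public class FormHelper {
            private final WebDriver driver;
            public FormHelper(WebDriver driver) { this.driver = driver; }

            // Fill each input whose 'name' attribute matches a key of the map.
            public void fill(Map<String, String> values) {
                values.forEach((field, value) -> {
                    WebElement input = driver.findElement(By.name(field));
                    input.clear();
                    input.sendKeys(value);
                });
            }

            // Read back the current value of a single named field.
            public String read(String field) {
                return driver.findElement(By.name(field)).getAttribute("value");
            }

            // Validate that what the page shows matches what the test expects.
            public boolean matches(Map<String, String> expected) {
                return expected.entrySet().stream()
                               .allMatch(e -> e.getValue().equals(read(e.getKey())));
            }
        }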
    25. Generated Test Suite Metrics
        • 100 automatically generated test cases, divided into scenarios:
          – add method: 20
          – edit method: 20
          – add/edit method: 20
          – approve method: 10
          – table of contents: 10
          – mix of the above scenarios: 20
        • Average length: ~580 lines of code per test case
    26. Preparation Effort (in person-hours)

        Task                               MBT     Manual
        Requirement Elicitation            16.5    16
        Modeling                           24      N/A
        Implementing Test Infrastructure   87      N/A
        Test Case Development              N/A     16
        Total                              127.5   32

        Why was the test infrastructure for MBT so expensive?
        • Had to develop utilities to programmatically interact with the web browser
        • Same cost for setting up the automated test case execution framework
        • Creating models and generating test cases is the smaller cost
        • Limitations of Selenium (a sketch of one such hand-rolled utility follows this slide):
          – Is the table sorted?
          – How many rows are in the table, etc.?
          – File uploading and native Windows controls, etc.
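        The Selenium limitations listed above (checking whether a table is sorted, counting its rows) are the kind of checks the team had to build on top of WebDriver. A minimal sketch of such a utility is shown below; the CSS selectors and helper names are hypothetical, and a production version would also have to handle paging, empty cells, and non-text comparisons.

        import java.util.List;
        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;

        /** Illustrative checks for an HTML results table. */
        public class TableChecks {

            // Count the data rows of the table located by the given CSS selector.
            public static int rowCount(WebDriver driver, String tableCss) {
                return driver.findElements(By.cssSelector(tableCss + " tbody tr")).size();
            }

            // Verify that the text of the given 1-based column is in ascending order.
            public static boolean isColumnSorted(WebDriver driver, String tableCss, int column) {
                List<WebElement> cells = driver.findElements(
                    By.cssSelector(tableCss + " tbody tr td:nth-child(" + column + ")"));
                for (int i = 1; i < cells.size(); i++) {
                    if (cells.get(i - 1).getText().compareToIgnoreCase(cells.get(i).getText()) > 0) {
                        return false;
                    }
                }
                return true;
            }
        }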
    27. Effort Breakdown for Two Versions

        Effort (V1)
        Task              MBT    Manual
        Test Execution    N/A    26
        Issue Analysis    4      N/A
        Total             4      26

        Effort (V2)
        Task                               MBT    Manual
        Adapting the test infrastructure   6      N/A
        Test Execution                     N/A    7
        Issue Analysis                     2      N/A
        Total                              8      7

        Overall Effort
        Task             MBT      Manual
        Overall Effort   139.5    65
    28. Benefits of Manual Testing
        • Limited initial investment
        • No coding experience is necessary
        • Exploratory testing is possible
        • Good at bringing the system to a particular state and testing around it
        • Easy to characterize test case failures
        • No problem at all when the GUI changes
    29. Drawbacks of Manual Testing
        • Tester gets tired and ends testing early
          – Unclear stopping criteria
        • Time-consuming to test all corner cases
        • Test execution takes longer
    30. Benefits of MBT
        • Business logic can be encoded in testing models (a precise spec of the system)
        • Well-defined stopping criteria
          – Various model coverage metrics (a sketch of a simple one follows this slide)
        • Generated test cases can be (and are) reused
          – Applied the same set of tests to multiple versions
          – Could reuse the tests for a modified version with moderate changes to the testing infrastructure
          – Pays off in the long run (great for regression testing)
        • Several corner-case issues were detected
          – Manual testing missed many of them
          – It is tedious and time-consuming to check corner cases manually
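        As an illustration of the "model coverage metrics" used as stopping criteria, the sketch below computes transition coverage: the fraction of model transitions exercised by a generated test suite. The model representation (a set of transition ids) and the way test cases are recorded as transition lists are assumptions made for the example, not the notation used by the study's tooling.

        import java.util.*;

        /** Illustrative transition-coverage metric for a generated test suite. */
        public class TransitionCoverage {

            // Fraction of model transitions that appear in at least one test case.
            public static double coverage(Set<String> allTransitions,
                                          List<List<String>> testCases) {
                Set<String> covered = new HashSet<>();
                testCases.forEach(covered::addAll);   // collect every executed transition
                covered.retainAll(allTransitions);    // ignore anything not in the model
                return (double) covered.size() / allTransitions.size();
            }

            public static void main(String[] args) {
                Set<String> model = Set.of("clickHello", "clickReset", "clickQuit");
                List<List<String>> suite = List.of(
                    List.of("clickHello", "clickReset"),
                    List.of("clickHello", "clickQuit"));
                System.out.printf("transition coverage: %.0f%%%n",
                                  100 * coverage(model, suite));   // prints 100%
            }
        }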
    31. Drawbacks of MBT
        • Trade-off between completeness of the models and time spent
          – Finding the right level of abstraction
        • Analysis of test failures is not always easy
          – Long and random test cases
          – Multiple tests could fail for the same reason
        • Managing data for test cases is not always easy
          – Data-intensive systems
          – Managing the state of the database
    32. Interested in further details?
        • Read the paper:
          – MBT for beginners
          – GNSI business context and SUT
          – Detailed design of the experiment
          – Detailed definition of the severity of issues
          – Lessons learned
          – Threats to validity
          – Related work
        • Send your questions/comments:
          – dganesan@fc-md.umd.edu
    33. Conclusion
        • Performing an empirical study in industry is difficult but possible (with management support)
          – Too many versions and changes during the study
        • MBT and manual testing find different types of issues
        • MBT is expensive to start but pays off after a couple of versions:
          – The test infrastructure/driver for MBT is the bottleneck
          – Changes in the GUI break the concrete tests
        • MBT is better at detecting functional issues and (most of) the corner cases
    34. Acknowledgements
        • GNSI management
          – Ori Reiss
          – Pino Marinelli
        • GNSI engineers, testers, and analysts
          – Jangho Ki
          – Anjana Sreeram
          – Prashant Pandya
          – Eyal Rand
