Juniper Networks Ignite! Testing Conference. Sunnyvale California, November 9, 2011.
Overview of model-based testing. Two case studies. Thumbnail introduction to fee and free MBT tools.
A software testing practice that follows the principles of agile software development is called agile testing.
Agile is an iterative development methodology in which requirements evolve through collaboration between the customer and self-organizing teams, aligning development with customer needs.
This document provides an overview of test automation using Cucumber and Calabash. It discusses using Cucumber to write automated test specifications in plain language and Calabash to execute those tests on Android apps. It outlines the environments, tools, and basic steps needed to get started, including installing Ruby and DevKit, creating Cucumber feature files, and using Calabash APIs to automate user interactions like tapping, entering text, and scrolling. The document also explains how to run tests on an Android app and generate an HTML report of the results.
Regression testing is testing performed after changes to a system to detect whether new errors were introduced or old bugs have reappeared. It should be done after changes to requirements, new features added, defect fixes, or performance improvements. There are various strategies for regression testing including re-running all tests, test selection, test prioritization, and focusing on areas like frequently failing tests or recently changed code. While regression testing helps ensure system quality, managing large test suites over time poses challenges in minimizing tests while achieving coverage. Automating regression testing can help address these challenges.
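One of the strategies above, test prioritization, can be sketched in a few lines. This is a minimal illustration, not taken from the document; the test names and pass/fail history are hypothetical.

```python
# Sketch: prioritize regression tests by recent failure history, so the
# tests most likely to fail run first.

def prioritize(tests, failure_history, recent_runs=5):
    """Order tests so those that failed most often in their last
    `recent_runs` executions come first."""
    def recent_failures(name):
        history = failure_history.get(name, [])
        return sum(1 for passed in history[-recent_runs:] if not passed)
    return sorted(tests, key=recent_failures, reverse=True)

# Hypothetical history: True = passed, False = failed.
history = {
    "test_login":    [True, False, False, True, False],  # frequently failing
    "test_checkout": [True, True, True, True, True],     # stable
    "test_search":   [True, True, False, True, True],
}
ordered = prioritize(["test_login", "test_checkout", "test_search"], history)
print(ordered)  # test_login first (3 recent failures), test_checkout last
```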
Software testing is an important phase of the software development process that evaluates the functionality and quality of a software application. It involves executing a program or system with the intent of finding errors. Some key points:
- Software testing is needed to identify defects, ensure customer satisfaction, and deliver high quality products with lower maintenance costs.
- It is important for different stakeholders like developers, testers, managers, and end users to work together throughout the testing process.
- There are various types of testing like unit testing, integration testing, system testing, and different methodologies like manual and automated testing. Proper documentation is also important.
- Testing helps improve the overall quality of software but can never prove that there are no remaining defects.
The document outlines an automation testing syllabus covering software development lifecycles, the role of testers, types of testing, test techniques, test cases, test plans, bugs, Java concepts, production tools, load testing, test management tools, and real-world manual testing projects. Key topics include waterfall, agile, and scrum models; unit, integration, and regression testing; black box and white box techniques; test plans; bug tracking; Java fundamentals; and tools like JUnit, Selenium, JIRA, LoadRunner, and QTP. The syllabus aims to equip students with the skills needed for both manual and automation testing.
This document provides an overview of agile testing. It discusses what agile testing is, common agile testing strategies and stages, principles of agile testing, advantages such as reduced time and money and regular feedback, challenges like compressed testing cycles and minimal time for planning, and concludes that communication between teams is key to agile testing success. The agile testing life cycle involves four stages: iteration 0 for initial setup, construction iterations for ongoing testing, release for deployment, and production for maintenance. Principles include testing moving the project forward, testing as a continuous activity, everyone on the team participating in testing, and reducing feedback loops.
Testing involves finding errors in a program. The goal is to assume a program contains errors and test to find as many as possible. Different testing techniques include white box testing by developers and black box testing by testers. Testing levels include unit, integration, system, and user acceptance testing. Developers and testers have different goals: developers want code to work, while testers try to make code fail. Good development practices from a tester's view include developers running their own acceptance tests, fixing bugs, writing helpful error messages, and not artificially adding bugs. Good relationships between project managers, developers, and testers help ensure quality.
These slides summarize key concepts about software testing strategies from the book "Software Engineering: A Practitioner's Approach". The slides cover topics such as unit testing, integration testing, regression testing, object-oriented testing, and debugging. The overall strategic approach to testing outlined in the slides is to begin with "testing in the small" at the component level and work outward toward integrated system testing. Different testing techniques are appropriate at different stages of development.
Testing metrics provide objective measurements of software quality and the testing process. They measure attributes like test coverage, defect detection rates, and requirement changes. There are base metrics that directly capture raw data like test cases run and results, and calculated metrics that analyze the base metrics, like first run failure rates and defect slippage. Tracking these metrics throughout testing provides visibility into project readiness, informs management decisions, and identifies areas for improvement. Regular review and interpretation of the metrics is needed to understand their implications and make changes to the development lifecycle.
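The distinction between base and calculated metrics can be shown with a short sketch. The counts and formulas here are illustrative assumptions, not figures from the document.

```python
# Sketch: base metrics capture raw data; calculated metrics are derived
# from them. All numbers below are hypothetical.

base = {
    "tests_run": 200,
    "tests_failed_first_run": 30,
    "defects_found_in_test": 45,
    "defects_found_in_production": 5,
}

# Calculated metric: fraction of tests that failed on their first run.
first_run_failure_rate = base["tests_failed_first_run"] / base["tests_run"]

# Calculated metric: fraction of defects that slipped past testing.
defect_slippage = base["defects_found_in_production"] / (
    base["defects_found_in_test"] + base["defects_found_in_production"]
)

print(f"First-run failure rate: {first_run_failure_rate:.1%}")  # 15.0%
print(f"Defect slippage:        {defect_slippage:.1%}")         # 10.0%
```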
(1) The document discusses software testing and provides an introduction to various testing techniques.
(2) It discusses the challenges of software testing including the large input space, different execution paths, and coincidental correctness. Testing aims to find bugs early and is part of quality assurance.
(3) The document then provides short glossaries defining key testing terms like test case, test suite, oracle, and fault model. It also discusses the V-Model and different testing levels from unit to system testing.
This document discusses agile testing processes. It outlines that agile is an iterative development methodology where requirements evolve through collaboration. It also discusses that testers should be fully integrated team members who participate in planning and requirements analysis. When adopting agile, testing activities like planning, automation, and providing feedback remain the same but are done iteratively in sprints with the whole team responsible for quality.
This is my complete introductory course for Software Test Automation. If you need full training that includes different automation tools (Selenium, JMeter, Burp, SoapUI, etc.), feel free to contact me by email (amraldo@hotmail.com) or by mobile (+201223600207).
This document provides an overview of test-driven development (TDD). It defines TDD as a technique for building software where tests are written before code to guide development. The key aspects of TDD covered are:
- Writing tests first before code, which helps improve design and ensures tests are written.
- The TDD mantra of Red-Green-Refactor, where tests initially fail (Red), code is written to pass tests (Green), then code is refactored to improve design.
- An example case study of a large Java project developed using TDD that has over 20,000 lines of unit tests providing over 90% test coverage.
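The Red-Green-Refactor cycle described above can be illustrated with Python's standard unittest module; the leap_year() function is a hypothetical example, not from the case study.

```python
# Sketch of one Red-Green-Refactor cycle with unittest.
import unittest

# Green step: just enough code to make the failing tests pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # Red step: these tests are written first and fail until leap_year()
    # is implemented; Refactor then cleans up the code while keeping
    # them green.
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(leap_year(2000))

suite = unittest.TestLoader().loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("green" if result.wasSuccessful() else "red")  # prints "green"
```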
The document outlines a test strategy for an agile software project. It discusses testing at each stage: release planning, sprints, a hardening sprint, and release. Key points include writing test cases during planning and sprints, different types of testing done during each phase including unit, integration, feature and system testing, retrospectives to improve, and using metrics like burn downs and defect tracking to enhance predictability. The overall strategy emphasizes testing early and often throughout development in short iterations.
Agile Testing: The Role of the Agile Tester, by Declan Whelan
This presentation provides an overview of the role of testers on agile teams.
In essence, the differences between testers and developers should blur so that the focus is on the whole team completing stories and delivering value.
Testers can add more value on agile teams by contributing earlier and moving from defect detection to defect prevention.
A test case is a set of conditions or variables under which a tester will determine whether a software system is working correctly. Test cases are often written as test scripts and collected into test suites. Characteristics of good test cases include being simple, clear, concise, complete, non-redundant, and having a reasonable probability of catching errors. Test cases should be developed to verify specific requirements or designs and include both positive and negative cases.
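A small, concrete rendering of these ideas: test cases expressed as conditions and expected results, collected into a suite with both positive and negative cases. The discount() function and its 10%-off-at-100 rule are hypothetical.

```python
# Sketch: test cases as data (id, input, expected result), covering
# positive, negative, and boundary conditions for a hypothetical unit.

def discount(order_total):
    """10% off orders of 100 or more, otherwise no discount."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total

test_suite = [
    ("TC01_below_threshold",  99.99,  99.99),   # negative: no discount
    ("TC02_at_threshold",    100.00,  90.00),   # boundary value
    ("TC03_above_threshold", 200.00, 180.00),   # positive case
]

for case_id, given, expected in test_suite:
    actual = discount(given)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{case_id}: {status}")
```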
User interfaces can be modeled in a technology-agnostic way using Conceptual User Interface Patterns. This talk shows how to take advantage of this approach and how to generate code for different devices and technologies.
Testing is the process of identifying bugs and ensuring software meets requirements. It involves executing programs under different conditions to check specification, functionality, and performance. The objectives of testing are to uncover errors, demonstrate requirements are met, and validate quality with minimal cost. Testing follows a life cycle including planning, design, execution, and reporting. Different methodologies like black box and white box testing are used at various levels from unit to system. The overall goal is to perform effective testing to deliver high quality software.
This presentation explains what functional testing entails and why it is important for enhancing an application's quality. It covers functional testing services and functional testing types, including smoke testing, sanity testing, and regression testing.
The document discusses Object Oriented Design and Analysis using the Rational Unified Process (RUP). RUP is an iterative software development process framework for building object-oriented systems. It is comprised of four phases - Inception, Elaboration, Construction, and Transition. Within each phase are iterative cycles of requirements analysis, design, implementation, testing and feedback. The goal is to produce high-quality software that meets user needs within schedule and budget.
The document discusses various topics related to software testing including:
1. Software testing helps improve software quality by testing conformance to requirements and is important to uncover errors before delivery to customers.
2. Testing involves specialists at different stages from early development through delivery and includes unit testing of individual components, integration testing of combined components, and system testing of the full system.
3. Proper testing methods include black box testing of inputs/outputs, white box testing of code structures, and testing at different levels from units to full system as well as by independent third parties.
Unit testing involves testing individual units or components of code to ensure they work as intended. It focuses on testing small, isolated units of code to check functionality and edge cases. Benefits include faster debugging, development and regression testing. Guidelines for effective unit testing include keeping tests small, automated, independent and focused on the code's public API. Tests should cover a variety of inputs including boundaries and error conditions.
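The guidelines above — small, automated, independent tests covering boundaries and error conditions — can be sketched as follows; the clamp() function is a hypothetical unit under test.

```python
# Sketch: unit tests that are small, independent, and focused on the
# public API, including boundary inputs and an error condition.
import unittest

def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampTests(unittest.TestCase):
    def test_value_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_lower_boundary(self):      # boundary input
        self.assertEqual(clamp(-1, 0, 10), 0)

    def test_above_upper_boundary(self):      # boundary input
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_invalid_range_raises(self):      # error condition
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

suite = unittest.TestLoader().loadTestsFromTestCase(ClampTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```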
The document discusses various aspects of software testing including definitions, principles, objectives, types and processes. It defines testing as "the process of executing a program with the intent of finding errors". The key principles discussed are that testing shows presence of bugs but not their absence, exhaustive testing is impossible, early testing is beneficial, and testing must be done by an independent party. The major types of testing covered are unit testing, integration testing and system testing.
The document discusses software testing concepts like verification, validation, whitebox testing, and blackbox testing. Verification ensures the product satisfies specifications, while validation ensures it meets customer requirements. Whitebox testing uses internal knowledge to test code, while blackbox testing treats the system as a black box without internal knowledge. The document also covers different types of testing like unit, integration, and functional testing.
Unit testing involves individually testing small units or modules of code, such as functions, classes, or programs, to determine if they are fit for use. The goal is to isolate each part of a program and verify that it works as intended, helps reduce defects early in the development process, and improves code design. Unit testing is typically done by developers to test their code meets its design before integration testing.
How to Release Rock-Solid RESTful APIs and Ice the Testing BackBlob, by Bob Binder
REST APIs are a key enabling technology for the cloud. Mobile applications, service-oriented architecture, and the Internet of Things depend on reliable and usable REST APIs. Unlike browser, native, and mobile apps, REST APIs can only be tested with software that drives the APIs. Unlike developer-centric hand-coded unit testing, adequate testing of REST APIs is well suited to advanced automated testing.
As most web service applications are developed following an Agile process, effective testing must also avoid the "testing backblob," in which work to maintain hand-coded BDD-style test suites exceeds available time after a few iterations.
This talk presents a methodology for developing and testing REST APIs using model-based automation that has the beneficial side-effect of shrinking the testing backblob.
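The flavor of that model-based approach can be suggested with a toy sketch: a small state model of a REST resource from which call sequences are enumerated mechanically rather than hand-written. The /orders endpoints, states, and depth are all hypothetical.

```python
# Sketch: deriving REST API test sequences from a state model.
# Model: which call is valid in which resource state, and the state
# the call leads to.
transitions = {
    ("absent",  "POST /orders"):        "created",
    ("created", "GET /orders/{id}"):    "created",
    ("created", "PUT /orders/{id}"):    "created",
    ("created", "DELETE /orders/{id}"): "absent",
}

def generate_sequences(start, depth):
    """Enumerate every valid call sequence of 1..depth steps."""
    sequences = []
    frontier = [(start, [])]
    for _ in range(depth):
        next_frontier = []
        for state, path in frontier:
            for (source, call), target in transitions.items():
                if source == state:
                    new_path = path + [call]
                    sequences.append(new_path)
                    next_frontier.append((target, new_path))
        frontier = next_frontier
    return sequences

for seq in generate_sequences("absent", 3):
    print(" -> ".join(seq))
```

Even this tiny model yields 11 distinct call sequences at depth 3, including the delete-then-recreate path a hand-written happy-path suite would likely miss.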
Lessons Learned Validating 60,000 Pages of API Documentation, by Bob Binder
The document discusses lessons learned from validating over 60,000 pages of API documentation at Microsoft. It provides an overview of Microsoft's protocol quality assurance process, which included developing model-based test suites to validate technical documentation against actual Windows services. Key aspects of the process included requirements engineering to derive testable requirements from documentation statements, modeling protocol behavior, and using the Spec Explorer tool to automatically generate and execute test cases from the models. The process uncovered over 50,000 issues in the documentation, most before test execution, and helped close an antitrust case regarding Microsoft's interoperability documentation.
Model-Based Testing: Taking BDD/ATDD to the Next Level, by Bob Binder
Slides from presentation at the Chicago Quality Assurance Association, February 25, 2014.
Acceptance Test Driven Development (ATDD) and Behavior Driven Development (BDD) are well-established Agile practices that rely on the knowledge and intuition of testers, product owners, and developers to identify statements and then translate them into test suites. But the resulting test suites often cover only a small slice of happy-path behavior. And as a BDD specification and its associated test code base grow over time, the work to maintain it either crowds out new development and testing or, typically, is simply ignored. Either is high-risk. That's how Agile teams get eaten by the testing BackBlob.

Model-based testing is a tool-based approach that automates the creation of test cases. This presentation outlines the techniques and benefits of MBT and shows how model-based testing can address both problems. A detailed demo of Spec Explorer, a free model-based testing tool, shows how a model is constructed and used to create and maintain a test suite.
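The core MBT idea — explore a model exhaustively and replay each generated step against the implementation — can be suggested with a toy conformance check. The bounded counter, its limit, and the exploration depth are all hypothetical.

```python
# Sketch: model-based conformance testing. Every action sequence up to
# a fixed depth is generated from the model and replayed against the
# implementation, comparing states after each step.
from itertools import product

class BoundedCounter:                       # implementation under test
    def __init__(self, limit=3):
        self.value, self.limit = 0, limit

    def increment(self):
        if self.value < self.limit:
            self.value += 1

    def reset(self):
        self.value = 0

def model_step(state, action, limit=3):     # the model: expected behavior
    if action == "increment":
        return min(state + 1, limit)
    return 0                                 # reset

def conformance_test(limit=3, depth=4):
    """Replay every action sequence of length `depth` and compare the
    model's state with the implementation's state after each step."""
    for seq in product(["increment", "reset"], repeat=depth):
        impl, state = BoundedCounter(limit), 0
        for action in seq:
            getattr(impl, action)()
            state = model_step(state, action, limit)
            assert impl.value == state, (seq, action)
    return True

print(conformance_test())  # True: the implementation conforms to the model
```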
Keynote, ETSI Model-Based Testing User Conference. Tallinn, Estonia September 27, 2012.
High-level discussion of model-based testing and the trends driving software/system reliability. Explains how emergent behavior in complex systems ("dragon kings") causes catastrophic failures. My multi-dimensional testing strategy can reveal these hard-to-find bugs and failure modes, but this requires a better approach to model-based testing. Overview: Is software eating the world? Bugs, Black Swans, Dragon Kings. Multi-dimensional Testing. Challenges.
Popular Delusions, Crowds, and the Coming Deluge: End of the Oracle? by Bob Binder
Invited Talk at the 20th CREST Open Workshop, The Oracle Problem for Automated Software Testing. University College London. May 21, 2012.
Pragmatic innovations for test oracles, a new oracle taxonomy, characterization of test oracles, and challenges.
Invited Talk, ISSTA 2nd International Workshop on End-to-end Test Script Engineering
July 16, 2012, Minneapolis.
Limitations of xUnit testing frameworks; the MTS testing framework, which combined test objects with procedural aspects of TTCN.
Achieving Very High Reliability for Ubiquitous Information Technology Bob Binder
1) The document discusses achieving very high reliability for ubiquitous information technology through full test automation.
2) It outlines the new IT reality of growing usage, mobility, and need for high reliability of "five nines" or 99.999% uptime.
3) The strategy proposed is taking a full end-to-end testing approach through automated test generation and execution to achieve the reliability needed for ubiquitous IT to scale to millions of users.
The Tester’s Dashboard: Release Decision Support. Bob Binder
The document discusses metrics for supporting release decisions based on model-based testing. It describes using an operational profile to generate test cases, calculating model coverage metrics, using a reliability demonstration chart to assess risk, and measuring relative proximity to compare expected and actual failure rates. A case study applies these methods to a word processing app and missile defense system. Key observations are that model coverage ensures sufficient testing, reliability demonstration charts assume flat profiles which may be optimistic, and relative proximity indicates when failure intensities match expectations.
Performance Testing Mobile and Multi-Tier Applications. Bob Binder
Invited Talk, Chicago Quality Assurance Association, Chicago, June 26, 2007. Overview of performance testing strategy for handheld devices and multi-tier systems.
The document discusses lessons learned from testing object-oriented systems. It covers the state of the art in object-oriented test design, automation, and representation. It also examines the state of the practice, finding that the best organizations implement systematic testing at multiple scopes from classes to subsystems. With rigorous testing following design patterns, world-class quality below 0.025 defects per function point is achievable.
The document provides an overview of mVerify Corporation and its mobile testing solution called MTS. MTS aims to address the challenges of testing mobile applications by providing a platform that can simulate millions of users and configurations to thoroughly test apps. The solution slashes testing time and costs while improving reliability and performance. mVerify has seen early traction and seeks funding to further develop the platform and expand its customer base and sales.
Keynote, ISSRE-13, St. Malo, France, November 4, 2004.
Outline: 21st Century IT Trends, Mobile Technology Crisis, Test Effectiveness Levels, Level 4 Case Study, Reliability Arithmetic, Test Performance Envelope.
This document discusses factors that affect testability and strategies for improving testability. It defines testability as the ability to produce tests to verify complex systems. Higher testability allows for more effective testing with the same resources. The document identifies controllability and observability as the main factors that determine a system's testability. It provides examples of how characteristics like complexity, non-determinism, and lack of visibility into state diminish testability. Techniques for improving testability include adding points of control and observation, using state test helpers, building tests into the system, and designing for well-structured and deterministic code.
The document discusses mVerify's Test Objects framework for automated software testing. It was presented at the 2006 Google Test Automation Conference. The framework was influenced by TTCN-3 and XUnit testing frameworks and aims to generate test objects from models, support distributed testing across platforms, and make testing intuitive through one-click repetition and smart progress bars. A demo was presented to illustrate these capabilities.
WTS is a mobile systems verification tool that allows for end-to-end automated testing of wireless applications on thousands of simulated personal digital assistants (PDAs) from a single computer. It generates test cases for up to one million virtual users, simulating real-world user behavior, mobility patterns, and wireless conditions. Current versions support testing on 15 actual PDA models, with more in development. WTS' value proposition is that it can productize proven automated testing techniques to deliver high-fidelity, scalable testing of any wireless application on any device from a single system.
Invited Talk: C-SPIN, the Chicago Software Process Improvement Network. January 7, 2009, Schaumburg, Illinois. Overview of themes and concepts from ISSRE 2008.
Software Test Patterns: Successes and Challenges. Bob Binder
This document discusses the successes and challenges of using test patterns over the past 10 years. It describes how test patterns were useful for articulating testing insights and practices, but have not been widely adopted. Reasons for limited adoption include the proliferation of templates, confusion between different pattern types, and the perception that using patterns requires too much additional modeling effort. The document also suggests that while innovators create new patterns, those seeking existing patterns may be less influential. It argues that test patterns will remain important for building a conceptual framework for testing and efficiently sharing solutions, especially as software systems increase in complexity.
Model-Based Testing: Why, What, How
1. Model-Based Testing:
Why, What, How
Bob Binder
System Verification Associates
Juniper Systems Testing Conference
November 9, 2011
2. Overview
• What is Model-Based Testing?
• Testing Economics
• Case Studies
– Automated Derivatives Trading
– Microsoft Protocol Interoperability
• Product Thumbnails
• Real Testers of …
• Q&A
Model-Based Testing: What, Why, How 2
3. Why?
• For Juniper:
– Reduce cost of testing
– Reduce time to market
– Reduce cost of quality
– Increase competitive advantage
• For you:
– Focus on System Under Test (SUT), not test hassles
– Engineering discipline with rigorous foundation
– Enhanced effectiveness and prestige
– Future of testing
5. “All Testing is Model-Based”
• Patterns for test design
– Methods
– Classes
– Package and System Integration
– Regression
– Test Automation
– Oracles
• 35 patterns, each a test meta-model
6. What is a Test Model?
[Figure: side-by-side UML diagrams. Left, the SUT design model: class interfaces for TwoPlayerGame (TwoPlayerGame( ), p1_Start( ), p1_WinsVolley( ), p1_AddPoint( ), p1_IsWinner( ), p1_IsServer( ), p1_Points( ), the matching p2_* operations, and ~( )) and its subclass ThreePlayerGame (adding the p3_* operations). Right, the test model: the Mode Machine test design pattern applied to each class — statecharts running from α through Game Started and Player 1/2/3 Served to Player 1/2/3 Won and ω, with pN_WinsVolley( ) transitions guarded by [this.pN_Score( ) < 20] / [this.pN_Score( ) == 20], simulateVolley( ) actions, and pN_IsWinner( ) / return TRUE self-transitions.]
SUT Design Model / Test Model
7. Model-based Test Suite
• N+ Strategy
– Start at α
– Follow transition path
– Stop at ω or at an already-visited state
– Three loop iterations
– Assumes state observer
– Try all sneak paths
[Figure: the N+ test suite as a transition tree from alpha through Game Started, Player 1/2/3 Served, and Player 1/2/3 Won to omega, over the numbered events: 1 ThreePlayerGame( ), 2 p1_Start( ), 3 p2_Start( ), 4 p3_Start( ), 5 p1_WinsVolley( ), 6 p1_WinsVolley( ) [this.p1_Score( ) < 20], 7 p1_WinsVolley( ) [this.p1_Score( ) == 20], 8 p2_WinsVolley( ), 9 p2_WinsVolley( ) [this.p2_Score( ) < 20], 10 p2_WinsVolley( ) [this.p2_Score( ) == 20], 11 p3_WinsVolley( ), 12 p3_WinsVolley( ) [this.p3_Score( ) < 20], 13 p3_WinsVolley( ) [this.p3_Score( ) == 20], 14 p1_IsWinner( ), 15 p2_IsWinner( ), 16 p3_IsWinner( ), 17 ~( ). Starred nodes mark revisited states where branches stop.]
N+ Test Suite
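The N+ strategy above is, at bottom, a tree walk over the model's transition graph. As a rough illustration only (not Binder's actual tooling), the sketch below builds a round-trip path tree for a simplified TwoPlayerGame model; all state, event, and guard names are paraphrased from the slides, and guards are folded into distinct event labels so keys stay unique.

```python
from collections import deque

# Simplified TwoPlayerGame state model, paraphrased from the slide deck.
TRANSITIONS = {
    ("alpha", "TwoPlayerGame()"): "GameStarted",
    ("GameStarted", "p1_Start()"): "Player1Served",
    ("GameStarted", "p2_Start()"): "Player2Served",
    ("Player1Served", "p1_WinsVolley()[p1<20]"): "Player1Served",
    ("Player1Served", "p1_WinsVolley()[p1==20]"): "Player1Won",
    ("Player1Served", "p2_WinsVolley()"): "Player2Served",
    ("Player2Served", "p2_WinsVolley()[p2<20]"): "Player2Served",
    ("Player2Served", "p2_WinsVolley()[p2==20]"): "Player2Won",
    ("Player2Served", "p1_WinsVolley()"): "Player1Served",
    ("Player1Won", "p1_IsWinner()"): "Player1Won",
    ("Player1Won", "~()"): "omega",
    ("Player2Won", "~()"): "omega",
}

def round_trip_paths(transitions, start="alpha", end="omega"):
    """Build an N+-style round-trip path tree: take every transition once,
    ending a branch at the final state or at a state already in the tree
    ('stop if omega or visited'). Each finished branch is one test sequence."""
    outgoing = {}
    for (src, event), dst in transitions.items():
        outgoing.setdefault(src, []).append((event, dst))
    paths, visited = [], {start}
    queue = deque([(start, [])])
    while queue:
        state, events = queue.popleft()
        for event, dst in outgoing.get(state, []):
            trail = events + [event]
            if dst == end or dst in visited:
                paths.append(trail)      # branch ends: emit one test sequence
            else:
                visited.add(dst)         # first visit: keep extending
                queue.append((dst, trail))
    return paths

suite = round_trip_paths(TRANSITIONS)
# All 12 transitions of this model are exercised by 7 test sequences.
```

Sneak-path tests (slide 7's last bullet) would add the complement: every event fired in every state where the model says it must be rejected.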
8. Automated Model-based Testing
• Software that represents an SUT so that test
inputs and expected results can be computed
– Useful abstraction of SUT aspects
– Algorithmic test input generation
– Algorithmic expected result generation
– Many possible data structures and algorithms
• SUT interface for control and observation
– Abstraction critical
– Generated and/or hand-coded
10. Typical Test Configuration
[Diagram: the test suite drives an Agent via Test Suite Control; the Agent reaches the System Under Test through an Adapter; both sides communicate over a Transport layer. The test suite runs on the Test Suite Host OS, the SUT on its own SUT OS.]
11. Typical MBT Environment
[Diagram: the slide 10 configuration embedded in its surroundings — the MBT tool draws on a Requirements DB and Design DB; the Test Suite Control / Agent / Adapter / Transport path drives the System Under Test and its code stack; a Bug DB, Test Manager, the development environment, and configuration management surround the Test Suite Host, Test Host OS, and SUT OS.]
13. Show Me the Money
How much of this … for one of these?
14. Testing by Poking Around
Manual “exploratory” testing of the System Under Test.
+ No tooling costs; no testware costs; quick start; opportunistic; qualitative feedback.
– Subjective, wide variation; low coverage; not repeatable; can’t scale; inconsistent.
15. Manual Testing
Manual test design drives manual test setup, test input, and test results evaluation against the System Under Test.
+ Flexible, no SUT coupling; systematic coverage; no tooling costs; no testware costs; usage validation.
– 1 test per hour; usually not repeatable/repeated; not scalable; inconsistent; tends to “sunny day” tests.
16. Hand-coded Test Driver
Manual test design feeds test driver programming against the System Under Test.
+ 10+ tests per hour; repeatable; predictable; consistent; supports Continuous Integration and TDD.
– Tooling costs; testware costs; brittle, high-maintenance testware; short half-life; technology focus.
17. Model-based Testing
Modeling and automated generation feed automated setup and execution against the System Under Test.
+ 1000+ tests per hour; maintain the model (not testware); intellectual control; explore complex spaces; consistent coverage.
– Tooling costs; training costs; paradigm shift; still need manual and coded tests.
20. Real Time Derivatives Trading
• “Screen-based trading” over private network
– 3 million transactions per hour
– 15 billion dollars per day
• Six development increments
– 3 years
– 3 to 5 months per iteration
– Testing cycle shadows dev increments
• QA staff test productivity
– One test per hour
21. System Under Test
• Unified process
• About 90 use-cases, 650 KLOC Java
• CORBA/IDL distributed object model
• HA Sun server farm
• Multi-host Oracle DBMS
• Many interfaces
– GUI (trading floor)
– Many high speed program trading users
– Many legacy input/output
22. MBT: Challenges and Solutions
• A one-time sample is not effective, but fresh test suites are too expensive → a simulator generates a fresh, accurate sample on demand
• Too expensive to develop expected results → an oracle generates expected results on demand
• Too many test cases to evaluate → a comparator automates checking
• Profile/requirements change → incremental changes to the rule base
• SUT interfaces change → a common agent interface
23. Test Input Generation
• Simulation of users
– Use case profile
– 50 KLOC Prolog
• Load profile
– Time domain variation
– Orthogonal to event profile
• Each generated event assigned a “port” and submit time
• 1,000 to 750,000 unique tests for a 4 hour session
[Charts: a log-scale event-frequency profile (1 to 10,000,000 across 12 event classes) and a load profile of events per second (0 to 3,500) over a 25,000-second session.]
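As an illustration only — the project itself used roughly 50 KLOC of Prolog, and the action names, weights, and distributions below are invented — a profile-driven generator can be sketched as two orthogonal draws: the event class comes from the operational (use case) profile, and the submit time from the load profile.

```python
import random

# Hypothetical operational profile: relative frequency of each user action.
PROFILE = {"enter_order": 0.55, "cancel_order": 0.15, "quote": 0.25, "admin": 0.05}

def generate_session(n_events, ports, mean_gap_s, seed=None):
    """Draw one fresh test session: each event's class is sampled from the
    operational profile, while submit times follow an exponential
    inter-arrival load profile -- two orthogonal dimensions of variation."""
    rng = random.Random(seed)
    actions, weights = list(PROFILE), list(PROFILE.values())
    t, session = 0.0, []
    for _ in range(n_events):
        t += rng.expovariate(1.0 / mean_gap_s)          # load profile (time domain)
        session.append({
            "event": rng.choices(actions, weights)[0],  # event profile
            "port": rng.choice(ports),                  # where to submit it
            "submit_time": round(t, 3),
        })
    return session
```

Re-running with a new seed yields a fresh but statistically equivalent test suite, which is what made one-time samples unnecessary.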
24. Automated Evaluation
• Oracle
– Processes all test inputs
– About 500 unique rules
– Generates end of session “book”
• Comparator
– Compares SUT “book” to oracle “book”
• Verification
– “Splainer” rule backtracking
– Rule/Run coverage analyzer
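The comparator's job reduces to a keyed diff of the two books. A minimal sketch — the real books were end-of-session trading records, so the flat-dict shape and field names here are assumptions for illustration:

```python
def compare_books(oracle_book, sut_book):
    """Field-by-field comparison of the oracle's end-of-session 'book'
    with the SUT's; returns a verdict plus the differences for analysis."""
    diffs = []
    for key in sorted(set(oracle_book) | set(sut_book)):
        expected = oracle_book.get(key, "<missing>")
        actual = sut_book.get(key, "<missing>")
        if expected != actual:
            diffs.append((key, expected, actual))
    return ("pass" if not diffs else "fail"), diffs

verdict, diffs = compare_books(
    {"order-1": ("filled", 100)},   # what the oracle's rules predicted
    {"order-1": ("filled", 99)},    # what the SUT actually recorded
)
# verdict == "fail"; diffs names order-1 for Splainer-style rule backtracking
```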
25. Test Harness
[Diagram: the Simulator feeds test inputs through adapters to both the SUT and the Oracle; the Comparator checks the SUT run against the Oracle’s book and issues test verdict reports, with the Splainer available for rule backtracking.]
26. Technical Achievements
• AI-based user simulation generates test suites
• All inputs generated under operational profile
• High volume oracle and evaluation
• Every test run unique and realistic (about 200)
• Evaluated functionality and load response with
fresh tests
• Effective control of many different test agents
(COTS/custom, Java/4Test/Perl/SQL/proprietary)
28. Results
• Revealed about 1,500 bugs over two years
– 5% showstoppers
• Five person team, huge productivity increase
– 1 TPH versus 1,800 TPH
• Achieved proven high reliability
– Last pre-release test run: 500,000 events in two hours,
no failures detected
– No production failures
• Abandoned by successor QA staff
30. Challenges
• Prove interoperability to Federal Judge and
court-appointed scrutineers
• Validation of documentation, not as-built
implementation
• Is each TD all a third party needs to develop:
– A client that interoperates with an existing service?
– A service that interoperates with existing clients?
• Only use over-the-wire messages
31. Microsoft Protocols
• Remote API for a service
• All product groups
– Windows Server
– Office
– Exchange
– SQL Server
– Others
• 500+ protocols
– Remote Desktop
– Active Directory
– File System
– Security
– Many others
32. Microsoft Technical Document (TD)
• Publish protocols as “Technical Documents”
• One TD for each protocol
• Black-box spec – no internals
• All data and behavior specified with text
35. Protocol Quality Assurance Process
Phases (TD v1 → TD v2 → … → TD vn), with Authors, Test Suite Developers, and Reviewers each contributing:
• Study: scrutinize the TD; define test strategy. Review: TD ready? Strategy OK?
• Plan: complete test requirements; high-level test plan. Review: test requirements OK? Plan OK?
• Design: complete model; complete adapters. Review: model OK? Adapters OK? Test code OK?
• Final: generate & run test suite; prepare user documentation. Review: coverage OK?
36. Productivity
“On average, model-based testing took 42% less time than hand-coding tests” (Grieskamp et al.)
Average hours per test requirement:
– Document review: 1.1
– Test requirement extract: 0.8
– Model authoring: 0.5
– Traditional test coding: 0.6
– Adapter coding: 1.2
– Test case execution: 0.6
– Final adjustments: 0.3
– Total, all phases: 5.1
Threshold result:
• Nearly all requirements had fewer than three tests
• Much greater gain for full coverage
37. Results
• Published 500+ TDs, ~150,000 test requirements
• 50,000+ bugs, most identified before tests run
• Many Plugfests, many 3rd party users
• Released high interest test suites as open source
• Met all regulator requirements, on time
– Judge closes DOJ anti-trust case May 12, 2011
• ~20 MSFT product teams now using Spec Explorer
38. TOOL THUMBNAILS
All product or company names mentioned herein may be trademarks or registered
trademarks of their respective owners.
39. CertifyIt
Smartesting
Model: Use cases, OCL; custom test stereotypes; keyword/action abstraction
Notation: UML 2, OCL, custom stereotypes, UML Test Profile
UML Support: Yes
Requirements Traceability: Interface to DOORS, HP QC, others
Generation: Constraint solver selects minimal set of boundary values
Oracle: Post conditions in OCL; computed result for test point
Adapter: Natural language option; HP GUI drivers
Typical SUT: Financial, Smart Card
Notable: Top-down formally defined behavior; data stores; GUI model
40. Conformiq Designer
Conformiq
Model: State machines with coded events/actions
Notation: Statecharts, Java
UML Support: Yes
Requirements Traceability: Integrated requirements, traceability matrix
Generation: Graph traversal: state, transition, 2-switch
Oracle: Model post conditions; any custom function
Adapter: Output formatter; TTCN and user-defined
Typical SUT: Telecom, embedded
Notable: Timers; parallelism and concurrency; on-the-fly mode
41. MaTeLo
All4Tec
Model: State machine with transition probabilities (Markov); data domains, event timing
Notation: Decorated state machine
UML Support: No
Requirements Traceability: Integrated requirements and trace matrix; import from DOORS, others
Generation: Most likely path, user defined, all transitions, Markov simulation; subset or full model
Oracle: User conditions; Matlab and Simulink
Adapter: EXAM mappers; Python output formatter
Typical SUT: Hardware-in-the-loop; automotive, rail
Notable: Many standards-based device interfaces; supports software reliability engineering
42. Automatic Test Generation
IBM/Rational
Model: Sequence diagrams, flow charts, statecharts, codebase
Notation: UML, SysML, UML Testing Profile
UML Support: Yes
Requirements Traceability: DOORS integration; design model traceability
Generation: Parses generated C++ to generate test cases; reaches states, transitions, operations, events for modeled classes
Oracle: User code
Adapter: User code, merge generation
Typical SUT: Embedded
Notable: Part of systems engineering tool chain
43. Spec Explorer
Microsoft
Model: C# classes with “action” method pre/post conditions; regular expressions define a “machine” of classes/actions
Notation: C#
UML Support: Sequence diagrams
Requirements Traceability: API for logging user-defined requirements
Generation: For any machine, a constraint solver finds feasible short or long paths of actions; generates C# runtime
Oracle: Action post conditions; any custom function
Adapter: User code
Typical SUT: Microsoft protocols, APIs, products
Notable: Pairwise data selection; on-the-fly mode; use any .NET capability
44. T-Vec/RAVE
T-Vec
Model: Boolean system with data boundaries; SCR types and modules; hierarchic modules
Notation: SCR-based, tabular definition; accepts Simulink
UML Support: No
Requirements Traceability: RAVE requirements management; interface to DOORS, others
Generation: Constraint solver identifies test points
Oracle: Solves constraints for expected value
Adapter: Output formatter; HTML, C++, Java, Perl, others
Typical SUT: Aerospace, DoD
Notable: Simulink for input, oracle, model checking; MC/DC model coverage; non-linear and real-valued constraints
45. Close Cousins
• Data Generators
– Grammar based
– Pairwise, combinatoric
– Fuzzers
• TTCN-3 Compilers
• Load Generators
• Model Checkers
• Model-driven Development tool chains
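A grammar-based data generator, the first of the close cousins above, can be sketched in a few lines: expand a start symbol by randomly chosen production rules until only terminals remain. The toy arithmetic grammar is invented for illustration.

```python
import random

# Invented toy grammar for integer arithmetic expressions.
GRAMMAR = {
    "expr": [["num"], ["expr", "op", "num"]],
    "op":   [["+"], ["-"], ["*"]],
    "num":  [["0"], ["1"], ["42"]],
}

def generate(symbol="expr", rng=None, depth=0, max_depth=4):
    """Expand `symbol` via randomly chosen productions; past max_depth,
    force the first (non-recursive) rule to guarantee termination."""
    rng = rng or random.Random()
    if symbol not in GRAMMAR:
        return symbol  # terminal token
    rules = GRAMMAR[symbol]
    rule = rules[0] if depth >= max_depth else rng.choice(rules)
    return " ".join(generate(s, rng, depth + 1, max_depth) for s in rule)
```

Every generated string is a syntactically valid input, which is what distinguishes grammar-based generation from raw random fuzzing; mutating the rules or weights skews the data toward interesting cases.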
47. MBT User Survey
• Part of the 1st Model-Based Testing User Conference
– Offered to many other tester communities
• In progress
• Preliminary analysis of responses to date
• https://www.surveymonkey.com/s/JSJVDJW
48. MBT Users, SUT Domain
[Bar chart of respondents by SUT domain (scale 0%–40%): Transaction Processing, Embedded, Software Infrastructure, Communications, Supercomputing, Other, Social Media, Gaming]
50. MBT Users, Software Process
[Bar chart of respondents by software process (scale 0%–25%): Agile, CMMI level 2+, XP/TDD, Incremental, Spiral, Waterfall, Ad Hoc, Other]
51. How Used?
What stage of adoption? Who is the tool provider?
[Bar charts: stage of adoption (scale 0%–60%): Evaluation, Pilot Project, Rollout, Routine use; tool provider (scale 0%–80%): In House, Open Source, Commercial]
52. What is the Overall MBT Role?
At what scope is MBT used? What is the overall test effort for each testing mode?
[Bar charts: scope of MBT use (scale 0%–80%): System, Component, Unit; share of overall test effort (scale 25%–40%): Manual, Hand-coded, Model-based]
53. How Long to be Proficient?
Median: 100 hours
[Bar chart: hours of training/use to become proficient (scale 0%–50%): 1–40, 80–120, 160+]
54. How Bad are Common Problems?
Misses bugs
Can't integrate with other test assets
Developing SUT interfaces too hard
Inadequate coverage
Developing test models is too difficult
Oracle ineffective
Too difficult to update model
Model "blows up"
[Bar chart: each problem rated Worse than expected, Not an issue, or Better than expected; scale 0%–100%]
55. MBT Effect on Time, Cost, Quality?
Percent change from baseline: e.g., 35% fewer escaped bugs, 0% more bugs
[Bar chart: Better vs. Worse percent change for Bugs Escaped, Overall Testing Costs, and Overall Testing Time; reported values include 36%, 35%, 28%, 23%, 18%, and 0%]
56. MBT Traction
Overall, how effective is MBT? How likely are you to continue using MBT?
[Pie charts: effectiveness rated from Not at all (0%) to Extremely; likelihood of continuing rated from No effect (4%) to Extremely; other reported values include 4%, 13%, 21%, 38%, and 42%]
58. What Have We Learned?
• Test engineering with rigorous foundation
• Global best practice
• Broad applicability
• Mature commercial offerings
• Many proof points
• Commitment and planning necessary
• 10x to 1,000x improvement possible
59. Q&A
rvbinder@gmail.com