The document summarizes a series of talks on software testing. The talks will cover the different stages of the test life cycle, including: decomposing the system into independent units; building tests for formal models; establishing testability through test harnesses; automating tests using frameworks; and including tests in continuous integration. The document then provides more details about the second talk on building tests for formal models and describes the use of formal models in test design techniques.
2. 2
We are going to present a series of test-related talks. They will cover all stages of the test life cycle as follows:
•The first talk describes how to provide decomposition/abstraction of the system. Per Alexander the Great's "divide and rule," it is necessary to split the system into manageable, independent units for autonomous manual test generation.
•The next talk is about how to build tests for all practical software formal models. As Johann Goethe predicted, "It is easier to perceive error than to find truth" - software testing can only find errors in these models; it cannot prove the correctness of the system.
•The third one shows how to establish testability through a test harness. To paraphrase Archimedes, "Give me a lever and a fulcrum and I shall move the world" - give me a test harness and I shall verify the system.
•Talk number 4 describes how to implement test automation using a test framework. In the words of Johann Goethe, "He who lives must be prepared for changes" - automated cases survive only if they are maintainable and prepared for code changes.
•The last talk illustrates how to include tests in the Continuous Integration test framework. Ronald Reagan repeated the Russian proverb "Trust, but verify"; we repeat it as well: each new build has to be automatically verified by regression and new functional tests.
This course presents the second stage of the five mentioned above, "test design techniques," which shows how to build a complete test. That means the results of the first stage (decomposition and abstraction) are already in place. We start with the course preconditions, which are descriptions of that previous stage: decomposition and abstraction.
3. 3
Typical software applications usually include millions of lines of code. However, the test effort is not daunting, because it is not actually proportional to the size of the software but to the size of its independent modules.
The good news is that modern software follows this paradigm by using a layered architecture, where lower layers provide services for the higher ones through APIs. For instance, an embedded system consists of the following layers: drivers, OS, middleware, and applications.
Furthermore, each layer comprises independent services and features. Each software release is described by a set of incremental requirements and architecture documents; before development starts, the requirements are refined to a sufficiently detailed level.
This is usually where test planning and test generation start.
4. 4
When decomposition is done, the object is presented as a hierarchy of system documentation,
Where:
FRS - Functional Requirement Specification, which is defined by customers
Then
FTS - Feature Technical Specification, as an End2End, or ARD - Architectural Requirements Document
And finally
SSD - System Specification Document, as an LLD - Low Level Design, or SRD - System Requirement Document
In the common view, a test type is focused on a particular test objective. It is easy to give a definition for, say, regression, functional, sanity, or unit tests. But it is challenging, and there is no formal process, to extract the regression test cases from the functional test, or the sanity cases from the regression test.
In our view, test types simply reflect the existing documents: unit, functional, and sanity tests reflect the respective documents in the hierarchy, from the lowest level to the highest.
5. 5
A requirement is usually written in business terms, which is not the best format for writing test cases, because possible errors are not defined. Only subject matter experts can define tests, based on their expertise. Moreover, each tester will come up with their own unique set. Our goal is to provide an approach where every tester ends up with the same set of test cases for a given requirement.
6. 6
These are examples of informal requirements taken from the Internet as test interview questions. Informal presentation of requirements makes test design an art: each tester ends up with a different set of intuitive cases.
7. 7
When an abstraction is done, each requirement is presented as a formal model. Formal models, such as a condition, an algorithm, or a state machine, use formal definitions of error classes, and there are test design methods that guarantee the error coverage.
A tester should ensure that each requirement is presented as a formal model prior to building tests.
The goal is to transform the tester's test design skill from an art into a craft. The word 'craft' here is the opposite of art, not in the sense of being primitive and uncreative, but in the sense of being predictable and repeatable.
8. 8
I would like to start the course with two epigraphs that, from my point of view, convey the spirit and even the justification of the current course.
Whenever James Whittaker asked about the completion of a test plan, he would always get the answer "today or tomorrow." And this supports my view that, using formal test design techniques, all tests can be built in days for a new product release, based on the architecture and requirements documents. Hours to days, but not months.
The second saying belongs to Austin Schutz, the author of the Expect Perl package. He took only a small percentage of the Tcl original, but delivered almost all the functionality necessary for writing interactive scripts. I have tried to present a similar approach: propose simplified versions of test design methods that are good enough to cover the majority of implementation errors. We use the minimum theory of test design to cover most practical needs.
9. 9
Our approach to test design is based on these two of Dijkstra's axioms.
•To demonstrate object correctness in a convincing manner, the object must be usefully structured. "Usefully structured" reflects the necessity of using only formal models in test design techniques, and an abstraction of the requirements guarantees the usefulness.
•Testing shows the presence, not the absence, of bugs. It means that the test design techniques should be oriented toward error coverage.
Thus the formal model is not the purpose but the means to identify the errors that need to be found.
10. 10
Our experience shows that all the requirements we have worked with can be described with the following models:
"Atomic" models: the arithmetic, relational, and logical expressions
"Compound" models: an algorithm, a state machine, an instruction set, and a syntax
Note, however, that an object description can be hierarchical and recursive: some object elements can be presented as models themselves, and so on.
I would like to repeat that, in most cases, it is the tester's job to present the business-oriented requirements as a set of formal models.
11. 11
Each formal model is represented by a set of elements, their attributes, the connections between the elements, and the elements' functions.
The general error model is the 'element-swap fault,' where one of the object's elements is inadvertently swapped with another element of the same type. For example, it can be a replacement of one variable with another, one connection with another, one functional symbol with another, etc. These errors will not be caught by a syntax analyzer and have to be caught by the respective test cases.
12. 12
All test design techniques presented in this course will be derived from these two methods: Boundary Value Analysis and the Path Sensitization Technique.
Errors change the boundaries of the object.
Boundary Value Analysis requires selecting test cases at the edges of the equivalence classes and at the closest values around the edges. The boundary analysis provides a separation between correct and faulty model elements.
The Path Sensitization Technique requires choosing a path from the origin of the fault to the output and ensuring that the effect of the fault is propagated to the output. The Path Sensitization Technique allows us to see the error identified by the boundary analysis at the object's output interface.
13. 13
Based on the primary test design methods, we will describe how to build test cases for arithmetic, relational, and logical expressions ("atomic" models). We will then show how the two methods can be combined to create test design methods for "compound" models, such as an algorithm, a state machine, an instruction set, and a syntax. This picture is the content of our class.
14. 14
The following template is going to be used for each technique's presentation:
Object Model: a set of particular elements, the connections among them, and their functions
Error Model: the element distortions of the current model
Test design technique: a procedure for building test cases that identify all errors of the described types, and a procedure for building the minimum set of test cases
Example
15. 15
Arithmetic expression Model:
•Numeric constants and variables are arithmetic expressions
•If α and β are arithmetic expressions, then so are α + β, α / β, etc.
Error Model: α → β; α → constant; '+' → '-'; etc.
Each fault creates an output different from the output of the faultless object. The output of relational and logical expressions is binary, so distinguishing the correct expected result from the faulty ones requires more than two cases. Arithmetic expressions, on the other hand, return numerical (non-binary) values. This fact allows us to have the fewest number of test cases that can find all the errors. Our approach is to deliver the changes of each variable's value to the expression output. This way we can prove that the developers did not replace the variable name with another name or with a constant.
The test design technique includes 2 steps:
•Use two test cases where each variable is substituted with two different values.
•Add a test case for each division operator to verify that 'division by zero' returns an error message.
16. 16
Let's build a test for the arithmetic expression 'a + 45 / (b - c)'.
Three cases are needed to verify the arithmetic expression.
Note that the value of each variable changes in each of the three test cases.
The third case also verifies the correct behavior for 'division by 0'.
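A minimal sketch of these three cases in Python (the concrete values are illustrative, not from the slides; any pair of cases in which every variable changes its value would satisfy the technique, and pytest is assumed only for the expected-failure check):

```python
import pytest

def expr(a, b, c):
    return a + 45 / (b - c)

def test_case_1():            # every variable gets a first value
    assert expr(1, 10, 1) == 6.0   # 1 + 45/9

def test_case_2():            # every variable changes its value
    assert expr(2, 12, 3) == 7.0   # 2 + 45/9

def test_division_by_zero():  # one case per division operator: b - c == 0
    with pytest.raises(ZeroDivisionError):
        expr(0, 5, 5)
```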
17. 17
Relational expression
Model:
If α and β are arithmetic expressions, then α > β, α == β, α <= β, etc. are relational expressions.
Error Model: α → β; α → constant; '==' → '<='; etc.
Known methods for a relational expression are based on branch coverage. They require that each predicate ('<', '<=', '==', etc.) evaluate to both true and false. However, this approach always leaves some faults uncovered. For example, to verify the expression a > 4, the known techniques would choose two test cases:
a=3 (no) and a=5 (yes)
However, the fault '>' → '>=' will not be identified.
Test design technique:
Based on the boundary value analysis, three test cases are needed for each predicate symbol. The three test cases use values on and around the boundary.
18. 18
Here are the three test cases for the relational expression 'a > 4'.
The cases needed are a=3, a=4, a=5.
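A small sketch showing why the on-boundary case a=4 is the one that exposes the '>' → '>=' swap fault (the faulty variant is written out here only for illustration):

```python
# A correct and a faulty implementation ('>' swapped with '>=')
# differ only at the boundary value itself.

def correct(a):
    return a > 4

def faulty(a):
    return a >= 4   # the '>' -> '>=' swap fault

for a in (3, 4, 5):
    print(a, correct(a), faulty(a))
# a=3 and a=5 agree for both versions; only a=4 exposes the fault.
```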
19. 19
Logical expression Model:
•Logical constants and variables are logical expressions.
•Relational expressions are logical expressions.
•If α is a logical expression, then so is !α (not α).
•If α and β are logical expressions, then so are α ∧ β and α ∨ β.
Error Model: α → β; α → constant; '∧' → '∨'; etc.
The known ATPG (Automatic Test Pattern Generation) methods can be used to test a logical expression. These methods have been developed over the last 50 years for use with combinational and sequential circuits. Early test generation algorithms were the Boolean difference and the D-algorithm. Note that the size of circuits has grown from hundreds of gates (AND-OR elements) to hundreds of millions; each new method dealt with an exponential increase in circuit size. These methods need to be practical in terms of memory requirements and computation time. However, our case is different: we need a method for building test cases manually for logical expressions with fewer than 10 variables. In my opinion, Victor Danilov's graph model and the respective method greatly simplify the creation of a manual test.
20. 20
Graph Model:
•Remove all global logical negations using De Morgan's laws: !(a & b) ≡ !a || !b and !(a || b) ≡ !a & !b.
•Present each input variable by an edge.
•Beginning from the most deeply nested parentheses:
•present each OR operator by a parallel connection of edges;
•present each AND operator by a sequential connection of edges.
21. 21
Let's take, for example, the logical expression F = ((b or c) and d).
First of all, the variables are presented in the graph as edges.
'b or c' is presented by parallel edges.
(The sub-graph 'b or c' and the edge 'd') are presented by a sequential connection.
If a Boolean variable is 'TRUE', then its edge stays in the graph; if it is 'FALSE', then the respective edge is removed from the graph.
If, after removing all edges that are equal to 'FALSE', a pathway from the SP (start point) to the EP (end point) still exists, then the logical expression is 'TRUE'.
For example, if all variables are 'TRUE', then a connection between SP and EP exists and F = 'TRUE' as well.
If all variables are 'FALSE', then there is no connection between SP and EP and F = 'FALSE' as well.
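A minimal sketch of this evaluation rule, assuming a simple set-based representation of the graph's paths (the names are illustrative, not from the slides):

```python
# Graph model for F = ((b or c) and d): edges 'b' and 'c' are connected
# in parallel, followed sequentially by edge 'd'. F is TRUE iff a
# pathway of TRUE edges joins SP to EP.

def F(b, c, d):
    # parallel connection = OR, sequential connection = AND
    return (b or c) and d

paths = [{"b", "d"}, {"c", "d"}]   # each path: all its edges TRUE => F is TRUE

def eval_by_graph(values):
    kept = {name for name, v in values.items() if v}   # edges left in the graph
    return any(path <= kept for path in paths)         # does any full path survive?

values = {"b": False, "c": True, "d": True}
assert eval_by_graph(values) == F(**values)   # both give TRUE via path 'cd'
```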
22. 22
The previous slide illustrated how to calculate the resulting value of a logical expression: remove the edges that have the 'FALSE' value and observe whether a pathway from the SP to the EP remains. If a pathway exists, then 'F' is equal to 'TRUE'; otherwise, to 'FALSE'.
Now we understand that the graph adequately presents a logical expression.
Before presenting the test design method, we need to introduce two definitions:
Let's call a path a set of Boolean variables that, being equal to 'TRUE', forms a pathway from SP to EP. For instance, the path 'cd' means that c=1 and d=1. Another path is 'bd': b=1 and d=1. These paths make the expression F = 'TRUE'.
Let's call a cut a set of Boolean variables that, being equal to 'FALSE', does not allow the presence of any path. The presence of a cut makes the logical expression F equal to 'FALSE'. For example, the cut 'cb' means c=0 and b=0. Another cut is 'd': d=0.
23. 23
It is time to describe the test design technique for logical expressions. The method is based on an application that automatically builds test cases using the graph model. I wrote this application 40 years ago, when the graph model was introduced. The method creates the minimum number of test cases that covers all single faults. Here is a simplified version of this approach:
•Build graph paths from the SP to the EP that cover all edges. Each "path" becomes a test case when all edges that are not on the path (parallel edges) are cut (assigned the 'FALSE' value).
•Build graph cuts that cover all edges. Each "cut" becomes a test case when all edges that do not belong to the cut are assigned the value 'TRUE'.
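A sketch of these two steps applied to the earlier example F = ((b or c) and d), whose paths are 'bd' and 'cd' and whose cuts are 'bc' and 'd':

```python
paths = [{"b", "d"}, {"c", "d"}]   # cover all edges
cuts  = [{"b", "c"}, {"d"}]        # cover all edges

variables = ["b", "c", "d"]

def path_case(path):
    # path edges TRUE, everything else FALSE -> expected F is TRUE
    return {v: (v in path) for v in variables}, True

def cut_case(cut):
    # cut edges FALSE, everything else TRUE -> expected F is FALSE
    return {v: (v not in cut) for v in variables}, False

for case, expected in [path_case(p) for p in paths] + [cut_case(c) for c in cuts]:
    actual = (case["b"] or case["c"]) and case["d"]
    print(case, "expected:", expected, "ok" if actual == expected else "FAIL")
# Four cases cover all single stuck-at faults, versus 2**3 = 8 exhaustive cases.
```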
27. 27
This slide presents the test cases that verify our logical expression. Only seven test cases are needed to find all possible errors of this logical expression, whereas using all permutations of seven Boolean variables would require 128 cases (2 to the power of 7). Note:
•The path is not only a connection between SP and EP; it is the only connection - all parallel connections are cut. Therefore all 'stuck-at-0' faults (a variable replaced with the constant 'FALSE') will be visible for the edges that belong to the path. If such an error occurs (one of the variables on the path has the value 'FALSE' instead of 'TRUE'), then the output function will be set to 'FALSE' instead of 'TRUE' (the real value will differ from the expected one).
•The cut is the only separation between SP and EP; the rest of the edges (variables) are set to 'TRUE'. Therefore each 'stuck-at-1' fault (a variable replaced with the constant 'TRUE') will be identified for each edge (variable) that belongs to the cut.
28. 28
Note that nested relationships can exist - some functional blocks can be algorithms themselves.
29. 29
The existing structural approaches to test design (statement, branch, path coverage) do not guarantee the coverage of all possible implementation errors (see the example for the relational expression a>4 on slide 18 above). The proposed test design method includes two stages for each functional and conditional block:
•Design test cases for each expression.
•Use the Path Sensitization Technique to propagate the results to the output.
31. 31
The conditional block contains three relational expressions,
'A>5', 'B>0', 'B<9', that are encapsulated into a logical expression.
Let's name them X, Y, Z:
X = A>5
Y = B>0
Z = B<9
The logical expression X OR (Y AND Z) can be represented by the graph model.
First we build test cases for the logical expression and then extend the test with cases that verify the relational expressions.
33. 33
Based on the Path Sensitization Technique, we will use the logical expression test to propagate the relational and arithmetic expression results to the output of the algorithm.
Thus,
path 1 makes the output depend on the value of X and therefore allows us to verify the relational expression 'A>5';
path 2 makes the output depend on the values of Y and Z and therefore allows us to verify the relational expressions 'B>0' and 'B<9'.
Let's expand table 4 based on these facts.
Each relational expression requires three test cases:
X = A>5 needs test cases A=4; A=5; A=6
Y = B>0 needs test cases B=-1; B=0; B=1
Z = B<9 needs test cases B=8; B=9; B=10
To test the arithmetic expression (D/A), three test cases are needed:
(D=6; A=A1); (D=8; A=A2); (D=6; A=0), where A1 <> A2.
Let's incorporate the expression cases into the logical expression cases.
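A small sketch collecting the boundary cases listed above (the two distinct values A1=7 and A2=9 are chosen arbitrarily, since the slide requires only A1 <> A2):

```python
# Boundary cases per sub-expression of the conditional block.
# X = A>5, Y = B>0, Z = B<9; the block's condition is X OR (Y AND Z).

boundary_cases = {
    "X: A > 5": [{"A": 4}, {"A": 5}, {"A": 6}],
    "Y: B > 0": [{"B": -1}, {"B": 0}, {"B": 1}],
    "Z: B < 9": [{"B": 8}, {"B": 9}, {"B": 10}],
    # arithmetic expression D / A: two distinct A values plus division by zero
    "D / A":    [{"D": 6, "A": 7}, {"D": 8, "A": 9}, {"D": 6, "A": 0}],
}

for expr_name, cases in boundary_cases.items():
    print(expr_name, cases)
```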
34. 34
Let's take a look at the table.
The first path's output depends on the variable X. Since variable X represents the relational expression A>5, the first set of 3 test cases in the table is created to verify it.
The second path's output depends on the variables Y and Z. The second set of 3 test cases in the table is created to verify the relational expression B>0, and the third set of 3 cases verifies B<9, respectively.
Through this exercise we notice that all cuts are already implemented:
Cut 1 is represented by test case #4 or #5;
cut 2 is represented by test case #8 or #9, respectively.
Test cases #3 and #6 can be selected as the test set for the arithmetic expression 'D/A'. We need two values for 'D': let them be '8' and '6'.
We add test case #10 to verify the 'division by 0' case.
For all other cases, any value of variable 'D' can be selected, for instance '6'.
The final set of test cases is presented on the following slide.
36. 36
These are informal requirements. We will present them with an algorithm model.
37. 37
This algorithm presents an implementation example for the triangle problem.
The algorithm does not include blocks that verify the number and types of the input values (these checks are obvious).
The first block verifies that the input numbers do represent real triangle sides.
This is done by checking that the length of each side is smaller than the sum of the lengths of the two other sides.
The following slide presents a table that can be used as a test template.
Pause the presentation and take 10-15 minutes to fill in the table.
On the subsequent slide you will have a chance to compare your results with my solution for this problem.
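A minimal sketch of that first block, assuming the classic triangle classification for the rest of the algorithm (the classification names are illustrative; the slides show the algorithm only as a diagram):

```python
# Three inputs are valid triangle sides only if each side is strictly
# smaller than the sum of the other two.

def classify_triangle(a, b, c):
    if not (a < b + c and b < a + c and c < a + b):
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Boundary cases for one side: just below, on, and just above the boundary.
print(classify_triangle(3, 4, 6))   # scalene (6 < 3+4)
print(classify_triangle(3, 4, 7))   # not a triangle (7 == 3+4, on the boundary)
print(classify_triangle(3, 4, 8))   # not a triangle (8 > 3+4)
```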
39. 39
We proposed 35 test cases for the triangle test. Search online for other solutions and compare them with ours.
The presentation of the algorithm test design technique would not be complete if the algorithm's loops were not mentioned. Our example does not include loops. In our approach, however, the tests for a logical expression that controls the number of loop iterations would, based on the boundary conditions, require executing the loop zero, one, max, and max+1 times, exactly as the traditional test design method for algorithms requires.
40. 40
A State Machine model is represented by a set of states and a set of input conditions that transfer control between states and generate the respective outputs.
Error Model: connection errors; transfer and output function errors.
Assumption:
Each state is observable. For a state machine without observability, it is a challenge to identify the arrival state based on a series of outputs. However, a developer can easily implement a 'show' function that publishes the object's state, so it is realistic to assume that the state can be made observable.
41. 41
The test design method for the state machine is similar to the one for the algorithm:
•Implement the Path Sensitization Technique for each state and for each transfer between states.
•For each transfer function, use the test design method for logical conditions.
•For each output function, use the test design method for the respective output model (check all output message formats).
42. 42
Our example presents the simplified states of a TCP transaction. Various events define the transitions between states.
In our proposed solution, each test starts from the initial state #1.
In most cases, the subsequent test cases could be run from the state where the previous test case finished.
However, to increase the error resolution, it is preferable to make all test cases independent of each other and run each of them from the initial state.
Independence allows us to continue testing after finding an error, without skipping test cases.
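A minimal sketch of such an observable, independently testable state machine; the states and events below are hypothetical stand-ins, since the slide's TCP diagram is not reproduced here:

```python
class TcpLikeMachine:
    TRANSITIONS = {
        ("closed", "open"):     "listening",
        ("listening", "syn"):   "connected",
        ("connected", "close"): "closed",
    }

    def __init__(self):
        self.state = "closed"          # initial state #1

    def fire(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)

    def show(self):
        return self.state              # the 'show' function makes the state observable

# Each test case is independent: it starts from a fresh initial state.
m = TcpLikeMachine()
m.fire("open")
assert m.show() == "listening"        # verify the arrival state via 'show'
```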
43. 43
In this example we assume that some kind of 'show' function exists to report the arrival state.
The first three test cases check the condition 'A and B', the following two cases verify the condition 'C or D', and so on.
In general, output functions are more sophisticated than a simple state report.
Report format options include printouts, transactions, mails, etc. The boundary analysis can be used to test the various output reports.
44. 44
Instruction set
An Instruction Set model is a collection of commands, with each command represented by a name, input and output parameters, and a model that describes how the output results are derived from the input parameters.
Examples are the PDP-11 or IBM instruction sets. Another example is a messaging system instruction set: 'create buffer', 'fill buffer', 'send a message', etc.
Assumption: Each command is testable, meaning that it can be initiated from an external interface and its results can be observed on the external interface. If commands or APIs are not accessible from an external interface (in the case of embedded systems, for example), then the developers can create a test harness that provides access to all commands and APIs.
45. 45
Test Design Method:
•Use the Path Sensitization Technique for each command to create a "macro-instruction": a set of commands that allows us to set all the command's parameters and retrieve the results of its execution.
•Use the boundary analysis for each input parameter and output response.
•Use the respective test design method for the command's functions.
The next slide presents an example: an instruction set for a messaging service.
46. 46
Based on the Path Sensitization Technique, for each command, let's build the "macro-instruction" as a set of commands that allows us to set all the command's parameters and retrieve the results of its execution.
47. 47
All the necessary "macro-instructions" are presented in the following table.
We will not build the test cases, since we have already discussed the application of the boundary analysis technique.
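A sketch of what one such macro-instruction could look like; the in-memory service and the 'receive_message' observer are hypothetical, since the slides list only 'create buffer', 'fill buffer', and 'send a message':

```python
class FakeMessagingService:
    def __init__(self):
        self.sent = []
    def create_buffer(self, size):
        return bytearray(size)
    def fill_buffer(self, buf, payload):
        buf[:] = payload
    def send_message(self, buf):
        self.sent.append(bytes(buf))
    def receive_message(self):
        return self.sent.pop(0)

def send_message_macro(service, payload):
    buf = service.create_buffer(len(payload))  # set the command's input parameters
    service.fill_buffer(buf, payload)
    service.send_message(buf)                  # the command under test
    return service.receive_message()           # observable result on the interface

# Boundary analysis is then applied to the macro's parameters,
# e.g. an empty payload and a payload of maximum buffer size.
assert send_message_macro(FakeMessagingService(), b"hello") == b"hello"
```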
48. 48
The last model we are going to discuss is the Syntax model:
Syntax is a set of rules that describe the various elements of objects and how they form a correct structure. Each rule is often based on the definition of a preceding rule.
49. 49
The natural order of the syntax rules defines the test hierarchy.
Test Design Method:
•Start the test design from the first rule and continue based on the rule order.
•Use the boundary value analysis for all of a rule's elements.
•Use the respective test techniques for a rule's various expressions.
50. 50
Example: a fragment of the syntax of an XML file that describes the test results in a UC/TC hierarchy.
51. 51
This XML file describes the test results in the UC/TC hierarchy
52. 52
The table presents a fragment of the test where the content of the first column refers to the respective rule. In this test plan, the word "processed" can be read as "can be shown on the GUI of the test results web site". The first rule is "…The 'ucSet' element contains one or more Non-Empty Closed Elements 'uc'…" The key words here are "one or more". The boundary analysis requires us to use three test cases (lines 1, 2, 3), with none, one, and two 'uc' elements as the content of a Test Set. The conclusions are:
•The Path Sensitization Technique can be applied using the natural order of the rules.
•The boundary analysis is applied to the content of each rule.
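A small sketch of the three boundary inputs for this first rule (the inner content of each 'uc' element is elided as "...", since the slides show only a fragment of the syntax):

```python
# Boundary-value inputs for the rule "'ucSet' contains one or more
# non-empty closed 'uc' elements": none, one, and two 'uc' children.

cases = {
    "none (below the boundary, must be rejected)":
        "<ucSet></ucSet>",
    "one (on the boundary, must be accepted)":
        "<ucSet><uc>...</uc></ucSet>",
    "two (above the boundary, must be accepted)":
        "<ucSet><uc>...</uc><uc>...</uc></ucSet>",
}

for label, xml in cases.items():
    print(label, "->", xml)
```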
53. 53
Remember that this picture presented the content of our class. Based on the two primary test design methods (the Path Sensitization Technique and the Boundary Value Analysis), we described how to build test cases for arithmetic, relational, and logical expressions ("atomic" models), and then showed how the two can be combined to create test design methods for "compound" models, such as an algorithm, a state machine, an instruction set, and a syntax.
54. 54
The necessary condition for a test design is to guarantee the coverage of all implementation errors.
However, sometimes additional demands are placed on the test.
If we do not expect errors, for example when we run a regression test over and over again, we prefer to run a minimum set of cases to save execution time.
On the other hand, when we deal with new code, the goal is to identify each and every error separately, to save debugging time.
The latter goal (highest error resolution) is different from the goal of getting the smallest test set to run.
This is shown on the graphs.
Each regression test case is built to cover as many errors as possible.
Each 'new feature' test case is built to cover as few errors as possible.
The desired result is a one-to-one relationship between a test case and each fault.
However, some errors produce the same output and are indistinguishable.
They form a class of equivalent errors. The next slide presents the approach used to build these two test types.
55. 55
Let's build the two types of tests for the algorithm that first calculates 'A' as a disjunction of 'B' and 'C' and then calculates 'F' as a conjunction of 'A' and 'D'.
To build a regression test, use the graph model described earlier, which allows us to define the test as two paths and two cuts.
To build the test of the second type, the 'error identification test', let's first build the error tree, where all errors are ordered based on a 'masking' relationship.
It means that error f1 masks (does not allow us to see) all other errors.
The class of equivalent errors 'a1, b1, c1' covers the errors 'c0' and 'b0', etc.
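A sketch of the regression test of the first type for this algorithm, derived from the graph model's two paths and two cuts (F = (B or C) and D, so edges 'B' and 'C' are in parallel, followed by edge 'D'):

```python
def F(B, C, D):
    return (B or C) and D

regression_cases = [
    # path 'BD': B=1, parallel edge C cut to 0, D=1 -> expect TRUE
    ({"B": True,  "C": False, "D": True},  True),
    # path 'CD': C=1, parallel edge B cut to 0, D=1 -> expect TRUE
    ({"B": False, "C": True,  "D": True},  True),
    # cut 'BC': B=0, C=0, remaining edge D=1 -> expect FALSE
    ({"B": False, "C": False, "D": True},  False),
    # cut 'D': D=0, remaining edges B=1, C=1 -> expect FALSE
    ({"B": True,  "C": True,  "D": False}, False),
]

for inputs, expected in regression_cases:
    assert F(**inputs) == expected
```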
56. 56
Then we build test cases for each class of equivalent errors.
We do not discuss here the formal method of building the error tree and designing identifiers for each class.
We are just presenting the result.
The number of test cases is bigger in the second set, but the error resolution is higher as well.