The document discusses integration testing strategies and challenges for component-based systems. It describes various integration testing approaches like incremental, top-down, bottom-up, sandwich, and big-bang. Component-based integration poses special challenges due to dependencies between components. Approaches like thread-based and critical module prioritization are recommended to increase visibility and reduce risk. Metrics like the number of integration test sessions can quantify the effort required for different decomposition strategies.
This document discusses testing object-oriented software. It covers key characteristics of OO software like encapsulation and polymorphism that impact testing. It discusses unit testing at the class level and integration testing between classes. It also describes techniques for testing state-dependent behavior using state machines and structural testing using control flow graphs and definition-use pairs. Scaffolding with drivers, stubs and oracles is needed to test classes in isolation.
This document provides an overview of system integration testing. It defines integration testing as testing that combines software or hardware components to evaluate interactions between them. The document discusses different integration testing strategies like incremental, top-down, bottom-up, and sandwich approaches. It also covers key aspects of integration testing like drivers, stubs, granularity levels, and advantages of conducting integration testing.
1. The document describes various steps involved in integration and system testing during software development including acceptance test planning, system test planning, creating functional tests, integration and unit test planning, and generating oracles.
2. The main activities described are grouped into requirements elicitation, requirements specification, architectural design, detail design, unit coding, integration and delivery, and maintenance phases. Key activities associated with each phase are defined.
3. Unit, integration, and system testing are described as having different goals and execution procedures, with unit testing focusing on small units, integration testing on module interactions, and system testing on overall system behavior.
The document contains multiple choice questions related to software testing concepts and terms. It covers topics like different levels of testing (unit, integration, system, acceptance), test case design techniques, test metrics, software quality metrics, software configuration management, and agile methodologies.
The document provides questions and answers related to the ISTQB Advanced Level Certification exam. It discusses key principles for system testing, types of errors targeted by regression and integration testing, strategies for integration testing, guidelines for selecting paths in transaction flows, definitions of failure analysis, concurrency analysis, and performance analysis.
The document contains an ISTQB Foundation level exam sample paper with 40 multiple choice questions covering various topics in software testing. The questions test knowledge of topics like test stages, test tools, test coverage criteria, test techniques, incidents, reviews, and more. The paper provides answers to each question and includes links at the bottom for additional free testing resources, articles, and jobs.
The Maestro framework implemented by the validation group at Cirrus Logic provides GUI-based test automation and management for mixed signal validation. It leads to a 66% reduction in testing time through a modular structure with configuration files, a MATLAB GUI, and reusable validation scripts. Key benefits include abstracted test development and execution, standardized methodologies, and a system for monitoring and logging test results.
Software testing quiz questions and answers — RajendraG
This document contains a software testing quiz with 77 multiple choice questions covering various topics in software testing. The questions assess knowledge in areas such as test documentation, test types, quality management, testing levels, metrics, risks, and the software development life cycle. Correct answers are provided at the end. The quiz is intended to help individuals learn and evaluate their understanding of key concepts in software testing.
The document discusses the waterfall model of software development. It describes the stages of the waterfall process as requirements gathering, design, development, and testing. It notes that each stage requires specific skills and documentation. Testing mirrors the development stages in reverse order, from unit testing to acceptance testing. The waterfall model can take a long time due to non-overlapping stages and risks of requirements changing over time.
Software test management overview for managers — TJamesLeDoux
Software test management presentation given to the senior management of several Fortune 100 companies to aid them in planning their software development management efforts.
The document proposes a test automation hierarchy that allows for parallel testing during development. It recommends defining a hierarchy from subsystem to unit level, designing tests to cover all potential errors, and building a test harness to provide control and observation of the system under test. This approach aims to reuse tests across phases and support continuous integration.
Deployment of Debug and Trace for features in RISC-V Core — IRJET Journal
1) The document discusses verification and debugging techniques for RISC-V cores, specifically using instruction and data tracing.
2) It describes the phases of verification including test planning, testbench building, test writing, code coverage analysis, and debugging.
3) Debugging with tracing allows reconstructing the program flow by decoding traced instruction and data accesses and comparing them to the simulation flow to check for errors.
This document contains a 40 question practice exam for the Foundation Certificate in Software Testing. The questions cover a range of software testing topics including test stages, test coverage criteria, test techniques, test planning, and defect management. The goal of the exam is to assess knowledge of terminology, principles, and best practices related to software testing.
The document discusses software testing. It defines software testing as executing a program to find errors based on its requirements. The objectives of testing are to design tests that systematically uncover errors with minimal time and effort. Good tests have a high likelihood of finding errors, are non-redundant, and test specific parts of the program. There are different levels of testing including unit, integration, validation, and acceptance testing. White-box and black-box testing techniques derive test cases from a program's internal design or external requirements specifications, respectively.
The document discusses white box testing (WBT), which tests the internal structure and workings of an application. WBT involves understanding source code to design test cases that execute all paths and verify expected outputs. Key WBT techniques include control flow testing, data flow testing, branch testing, and path testing. WBT provides advantages like optimizing code and revealing hidden errors, but requires extensive knowledge of the application and may not test all conditions.
The document discusses various software testing techniques including black-box testing which focuses on inputs and outputs without seeing internal code, and white-box testing which considers internal logic and structures. Different levels of testing are covered from unit to acceptance testing. Strategies for effective test case design such as equivalence partitioning and boundary value analysis are also presented.
1. The document contains a sample question paper for an ISTQB certification with 35 multiple choice questions covering software testing topics like test stages, test tools, test techniques, test coverage, test management, and more.
2. The questions test knowledge of topics like the purposes of different test stages, test coverage criteria, test techniques like error guessing and boundary value analysis, test management activities, and definitions of terms like faults, failures, and incidents.
3. The answers provide explanations for choices in questions related to test objectives, benefits of early verification, and differences between related terms.
1. The document discusses different strategies for integration and system testing, including big bang, bottom-up, top-down, and sandwich testing.
2. It explains the key steps in integration testing such as selecting components, developing drivers/stubs, defining test cases, and executing tests.
3. The goals of system testing are outlined as functional testing, structure testing, performance testing, and acceptance testing.
1. Splitting testing into stages allows each stage to have a distinct purpose and makes testing easier to manage.
2. Regression testing benefits most from test capture and replay facilities as it involves rerunning existing tests.
3. A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
This document discusses integration and testing in product development. It notes that integration and testing are often optimized separately, but should be considered together. Integration involves combining components and enabling verification activities. The document argues that integration without subsequent testing is incomplete, as testing is needed to fully validate the integrated system works as intended. True optimization requires giving both integration and testing appropriate roles throughout the project lifecycle.
1. The document provides multiple choice questions related to software testing concepts and terms. It covers topics like test case design, test levels, defect management, risk analysis, test techniques and tools.
2. Several questions test knowledge of terms related to test coverage, test types, integration testing techniques, defect prioritization and analysis. Other topics assessed include test planning, test metrics, compatibility testing and quality perspectives.
3. The document contains 75 multiple choice questions to evaluate understanding of key software testing concepts and best practices. The breadth of topics covered provides a comprehensive skills assessment.
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. This software testing tutorial for beginners explains what software testing is, the different testing types and methods, and associated estimation techniques.
The document discusses various types and levels of software testing. It describes unit testing, integration testing, system testing, and acceptance testing. Unit testing involves testing individual program modules and is done by developers. Integration testing involves testing interfaces between modules. System testing tests the entire system against requirements. Acceptance testing is done by clients to determine if the system meets needs. The document also covers black-box and white-box testing strategies and techniques.
Software testing is a vital part of developing software: an investigation conducted to provide stakeholders with information about the quality of the product or service under test.
The document discusses various aspects of physical design in VLSI circuits. It describes the physical design cycle which involves transforming a circuit diagram into a layout through steps like partitioning, floorplanning, placement, routing, and compaction. It also discusses different design styles like full-custom, standard cell, and gate array. Full-custom design allows maximum flexibility but has higher complexity, while restricted models like standard cell and gate array simplify the design process at the cost of less optimization in the layout. Physical design aims to produce layouts that meet timing and area constraints.
This document discusses software testing and provides an overview of key testing concepts. It covers the differences between component testing and system testing, approaches for test case design including requirements-based testing and equivalence partitioning, and white-box testing techniques like path testing. Examples are provided for testing a binary search algorithm and a weather station object. The goal is to help discover defects through systematic testing at both the component and system level.
Assembly is often a neglected activity: human beings are clever, dexterous and adaptable, so all too often we leave the production worker to figure things out. Proper assembly planning would reduce risks, and save time and money. This presentation introduces the key concepts of assembly sequence generation and evaluation. It doesn’t promote any particular software tool, but provides an introduction to the types that exist, and makes a case for a systematic consideration of the assembly task. Review questions for learners are included.
Downloadable presentation in Powerpoint format, originally produced with Keynote.
The document discusses the software development life cycle (SDLC) and its various stages and models. It provides definitions of key SDLC terms like waterfall model, verification and validation testing, integration testing, and system testing. It also describes different testing types like unit testing, component testing, and acceptance testing that occur at different stages of the SDLC.
Similar to Module-5 Integration testing and System Testing.pdf (20)
2. Ch 21, slide 2
Learning objectives
• Understand the purpose of integration testing
  – Distinguish typical integration faults from faults that should be eliminated in unit testing
  – Understand the nature of integration faults and how to prevent as well as detect them
• Understand strategies for ordering construction and testing
  – Approaches to incremental assembly and testing to reduce effort and control risk
• Understand special challenges and approaches for testing component-based systems
6/28/2023
3. What is integration testing?
                        | Module test      | Integration test                          | System test
Specification:          | Module interface | Interface specs, module breakdown         | Requirements specification
Visible structure:      | Coding details   | Modular structure (software architecture) | — none —
Scaffolding required:   | Some             | Often extensive                           | Some
Looking for faults in:  | Modules          | Interactions, compatibility               | System functionality
Ch 21, slide 3
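The "Scaffolding required" row can be made concrete with a minimal C sketch (all names here are hypothetical, not taken from any real module): a driver exercises the module under test, and a stub stands in for a dependency that is not yet integrated.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Dependency of the module under test; the real implementation
   is not yet integrated, so the stub below replaces it. */
const char *lookup_user(int id);

/* Stub: returns canned answers instead of querying the real service. */
const char *lookup_user(int id) {
    return (id == 42) ? "alice" : "unknown";
}

/* Module under test: formats a greeting using the dependency. */
static void greet(int id, char *buf, size_t len) {
    snprintf(buf, len, "hello, %s", lookup_user(id));
}

/* Driver: calls the module under test and checks the result. */
static int drive(void) {
    char buf[32];
    greet(42, buf, sizeof buf);
    return strcmp(buf, "hello, alice") == 0;
}
```

Module testing needs only this much scaffolding ("some" in the table); integration testing often needs many such drivers and stubs at once, which is why the table calls it "often extensive".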
4. Integration versus Unit Testing
• Unit (module) testing is a necessary foundation
– Unit level has maximum controllability and visibility
– Integration testing can never compensate for inadequate unit
testing
• Integration testing may serve as a process check
– If module faults are revealed in integration testing, they signal
inadequate unit testing
– If integration faults occur in interfaces between correctly
implemented modules, the errors can be traced to module
breakdown and interface specifications
5. Integration Faults
• Inconsistent interpretation of parameters or values
– Example: Mixed units (meters/yards) in Martian Lander
• Violations of value domains, capacity, or size limits
– Example: Buffer overflow
• Side effects on parameters or resources
– Example: Conflict on (unspecified) temporary file
• Omitted or misunderstood functionality
– Example: Inconsistent interpretation of web hits
• Nonfunctional properties
– Example: Unanticipated performance issues
• Dynamic mismatches
– Example: Incompatible polymorphic method calls
6. Example: A Memory Leak
Apache web server, version 2.0.48
Response to normal page request on secure (https) port
static void ssl_io_filter_disable(ap_filter_t *f)
{
    bio_filter_in_ctx_t *inctx = f->ctx;
    inctx->ssl = NULL;
    inctx->filter_ctx->pssl = NULL;
}
No obvious error, but Apache leaked memory slowly (in normal use) or quickly (if exploited for a DOS attack).
7. Example: A Memory Leak
Apache web server, version 2.0.48
Response to normal page request on secure (https) port
static void ssl_io_filter_disable(ap_filter_t *f)
{
    bio_filter_in_ctx_t *inctx = f->ctx;
    SSL_free(inctx->ssl);
    inctx->ssl = NULL;
    inctx->filter_ctx->pssl = NULL;
}
The missing code is for a structure defined and created elsewhere, accessed through an opaque pointer.
8. Ch 21, slide 8
Example: A Memory Leak
Apache web server, version 2.0.48
Response to normal page request on secure (https) port
static void ssl_io_filter_disable(ap_filter_t *f)
{
    bio_filter_in_ctx_t *inctx = f->ctx;
    SSL_free(inctx->ssl);
    inctx->ssl = NULL;
    inctx->filter_ctx->pssl = NULL;
}
Almost impossible to find with unit testing. (Inspection and some dynamic techniques could have found it.)
9. Maybe you’ve heard ...
• Yes, I implemented module A, but I didn't test it thoroughly yet. It will be tested along with module B when that's ready.
10. Translation...
• Yes, I implemented module A, but I didn't test it thoroughly yet. It will be tested along with module B when that's ready.
• Translation: I didn't think at all about the strategy for testing. I didn't design module A for testability, and I didn't think about the best order to build and test modules A and B.
11. System Architecture
Integration Plan + Test Plan
• The integration test plan drives, and is driven by, the project "build plan"
– A key feature of the system architecture and project plan
12. Big Bang Integration Test
An extreme and desperate approach: test only after integrating all modules.
+ Does not require scaffolding (the only excuse, and a bad one)
– Minimum observability, diagnosability, efficacy, feedback
– High cost of repair
• Recall: the cost of repairing a fault rises as a function of the time between error and repair
13. Structural and Functional Strategies
• Structural orientation: modules constructed, integrated, and tested based on a hierarchical project structure
– Top-down, bottom-up, sandwich, backbone
• Functional orientation: modules integrated according to application characteristics or features
– Threads, critical module
14. Integration testing approaches
Common approaches to perform system integration testing:
Incremental
Top-down
Bottom-up
Sandwich
Big-bang
15. Drivers and Stubs
Driver: a program that calls the interface procedures of the module being tested and reports the results. A driver simulates a module that calls the module currently being tested.
Stub: a program that has the same interface procedures as a module called by the module being tested, but is simpler. A stub simulates a module called by the module currently being tested.
(Diagram: Driver —procedure call→ module under test —procedure call→ Stub.)
16. Example: A 3-Layer Design (Spreadsheet)
Layer I:   A = SpreadSheetView
Layer II:  B = EntityModel (data model), C = Calculator, D = CurrencyConverter
Layer III: E = BinaryFileStorage, F = XMLFileStorage, G = CurrencyDataBase
Call structure: A calls B, C, and D; B calls E and F; D calls G.
18. Bottom-up Testing Strategy
The subsystems in the lowest layer of the call hierarchy are tested individually.
Then the subsystems that call the previously tested subsystems are tested.
This is repeated until all subsystems are included.
Drivers are needed.
19. Bottom-up Testing Strategy
Test E; Test F; then Test B, E, F
Test G; then Test D, G
Test C
Finally: Test A, B, C, D, E, F, G
20. Pros and Cons of Bottom-Up Integration Testing
Con:
Tests the most important subsystem (user interface) last.
Drivers needed.
Pro:
No stubs needed.
Useful for integration testing of the following systems:
• Object-oriented systems
• Real-time systems
• Systems with strict performance requirements
21. Top-down Testing Strategy
Test the top layer or the controlling subsystem first.
Then combine all the subsystems that are called by the tested subsystems and test the resulting collection of subsystems.
Do this until all subsystems are incorporated into the test.
Stubs are needed to do the testing.
22. Top-down Integration
Layer I:        Test A
Layer I + II:   Test A, B, C, D
All layers:     Test A, B, C, D, E, F, G
23. Pros and Cons of Top-down Integration Testing
Pros:
Test cases can be defined in terms of the functionality of the system (functional requirements).
No drivers needed.
Cons:
Writing stubs is difficult: stubs must allow all possible conditions to be tested.
A large number of stubs may be required, especially if the lowest level of the system contains many methods.
Some interfaces are not tested separately.
24. Sandwich Testing Strategy
Combines the top-down strategy with the bottom-up strategy.
The system is viewed as having three layers:
A target layer in the middle
A layer above the target
A layer below the target
Testing converges at the target layer.
25. Sandwich Testing Strategy
Top-down: Test A; then Test A, B, C, D
Bottom-up: Test E and Test F, then Test B, E, F; Test G, then Test D, G
Converge: Test A, B, C, D, E, F, G
26. Pros and Cons of Sandwich Testing
Pro: top and bottom layer tests can be done in parallel.
Problem: does not test the individual subsystems and their interfaces thoroughly before integration.
Solution: modified sandwich testing.
27. Modified Sandwich Testing Strategy
Test in parallel:
Middle layer with drivers and stubs
Top layer with stubs
Bottom layer with drivers
Then test in parallel:
Top layer accessing middle layer (top layer replaces drivers)
Bottom layer accessed by middle layer (bottom layer replaces stubs)
28. Modified Sandwich Testing
Individually: Test A, Test B, Test C, Test D, Test E, Test F, Test G
Then: Test A, C; Test B, E, F; Test D, G
Finally: Test A, B, C, D, E, F, G
29. Continuous Testing
Continuous build:
Build from day one
Test from day one
Integrate from day one
System is always runnable
Requires integrated tool support:
Continuous build server
Automated tests with high coverage
Tool-supported refactoring
Software configuration management
Issue tracking
31. Steps in Integration Testing
The primary goal of integration testing is to identify failures with the (current) component configuration.
1. Based on the integration strategy, select a component to be tested. Unit test all the classes in the component.
2. Put the selected component together; do any preliminary fix-up necessary to make the integration test operational (drivers, stubs).
3. Test functional requirements: define test cases that exercise all use cases with the selected component.
4. Test the subsystem decomposition: define test cases that exercise all dependencies.
5. Test non-functional requirements: execute performance tests.
6. Keep records of the test cases and testing activities.
7. Repeat steps 1 to 6 until the full system is tested.
32. Thread ...
A "thread" is a portion of several modules that together provide a user-visible program feature.
(Diagram: a thread X running through Top and modules A and C.)
33. Thread ...
Integrating one thread, then another, etc., we maximize visibility for the user.
(Diagram: threads X and Y running through Top and modules A, B, and C.)
34. Thread ...
As in sandwich integration testing, we can minimize stubs and drivers, but the integration plan may be complex.
(Diagram: threads X and Y running through Top and modules A, B, and C.)
35. Critical Modules
• Strategy: Start with riskiest modules
– Risk assessment is necessary first step
– May include technical risks (is X feasible?), process risks (is
schedule for X realistic?), other risks
• May resemble thread or sandwich process in tactics for
flexible build order
– E.g., constructing parts of one module to test functionality in
another
• Key point is risk-oriented process
– Integration testing as a risk-reduction activity, designed to deliver any bad news as early as possible
36. Choosing a Strategy
• Functional strategies require more planning
– Structural strategies (bottom up, top down, sandwich) are
simpler
– But thread and critical modules testing provide better process
visibility, especially in complex systems
• Possible to combine
– Top-down, bottom-up, or sandwich are reasonable for relatively
small components and subsystems
– Combinations of thread and critical modules integration testing
are often preferred for larger subsystems
37. Integration Test Metrics
The number of integration test sessions for a decomposition tree is:
Sessions = nodes – leaves + edges
For SATM, there are 42 integration test sessions, which correspond to 42 separate sets of test cases.
For top-down integration, nodes – 1 stubs are needed.
For bottom-up integration, nodes – leaves drivers are needed.
For SATM, 32 stubs and 10 drivers are needed.
38. Call Graph-Based Integration
The basic idea is to use the call graph instead of the decomposition tree.
The call graph is a directed, labeled graph:
Vertices are program units, e.g., methods.
A directed edge joins a calling vertex to the called vertex.
An adjacency matrix is also used.
The technique does not scale well, although some insights are useful:
Nodes of high degree are critical.
42. Pair-Wise Integration
The idea behind pair-wise integration testing:
Eliminate the need for developing stubs/drivers by using actual code instead.
In order not to deteriorate the process into a big-bang strategy, restrict each testing session to just a pair of units in the call graph.
This results in one integration test session for each edge in the call graph.
44. Neighbourhood Integration
The neighbourhood of a node in a graph is the set of nodes that are one edge away from the given node.
In a directed graph: all the immediate predecessor nodes and all the immediate successor nodes of a given node.
Neighbourhood integration testing reduces the number of test sessions, but fault isolation is more difficult.
47. Pros and Cons of Call-Graph Integration
Aims to eliminate or reduce the need for drivers/stubs.
Development effort is a drawback.
Closer to a build sequence.
Neighbourhoods can be combined to create "villages".
Suffers from fault isolation problems, especially for large neighbourhoods.
48. Pros and Cons of Call-Graph Integration – 2
Redundancy: nodes can appear in several neighbourhoods.
Assumes that correct behaviour follows from correct units and correct interfaces, which is not always the case.
Call-graph integration is well suited to devising a sequence of builds with which to implement a system.
49. Path-Based Integration
Motivation: combine structural and behavioral types of testing for integration testing, as we did for unit testing.
Basic idea: focus on interactions among system units, rather than merely testing interfaces among separately developed and tested units.
Interface-based testing is structural, while interaction-based testing is behavioral.
50. Extended Concepts – 1
Source node: a program statement fragment at which program execution begins or resumes. For example, the first "begin" statement in a program, and also the statements immediately after nodes that transfer control to other units.
Sink node: a statement fragment at which program execution terminates. The final "end" in a program, as well as statements that transfer control to other units.
51. Extended Concepts – 2
Module execution path: a sequence of statements that begins with a source node and ends with a sink node, with no intervening sink nodes.
Message: a programming language mechanism by which one unit transfers control to another unit. Usually interpreted as subroutine invocations; the unit that receives the message always returns control to the message source.
52. MM-Path
An MM-path is an interleaved sequence of module execution paths and messages.
It describes sequences of module execution paths that include transfers of control among separate units.
MM-paths always represent feasible execution paths, and these paths cross unit boundaries.
There is no correspondence between MM-paths and DD-paths.
The intersection of a module execution path with a unit is the analog of a slice with respect to the MM-path function.
54. MM-Path Graph
Given a set of units, their MM-path graph is the directed graph in which:
Nodes are module execution paths.
Edges correspond to messages and returns from one unit to another.
The definition is with respect to a set of units; it directly supports composition of units and composition-based integration testing.
56. MM-Path Guidelines
How long, or deep, is an MM-path? What determines the end points?
Message quiescence occurs when a unit that sends no messages is reached (module C in the example).
Data quiescence occurs when a sequence of processing ends in the creation of stored data that is not immediately used (paths D1 and D2).
Quiescence points are natural endpoints for MM-paths.
57. MM-Path Metric
How many MM-paths are sufficient to test a system?
They should cover all source-to-sink paths in the set of units.
What about loops? Use condensation graphs to get directed acyclic graphs; this avoids an excessive number of paths.
58. Pros and Cons of Path-Based Integration
A hybrid of functional and structural testing:
Functional, in that MM-paths represent actions with input and output.
Structural, in how the paths are identified.
Avoids the pitfalls of purely structural testing.
Fairly seamless union with system testing: path-based integration is closely coupled with actual system behaviour.
Works well with OO testing.
No need for stub and driver development.
There is a significant effort involved in identifying MM-paths.
59. MM-Path Compared to Other Methods

Strategy                   Ability to test      Ability to test      Fault isolation
                           interfaces           co-functionality     resolution
Functional decomposition   Acceptable, can      Limited to pairs     Good, to faulty
                           be deceptive         of units             unit
Call-graph                 Acceptable           Limited to pairs     Good, to faulty
                                                of units             unit
MM-path                    Excellent            Complete             Excellent, to unit
                                                                     path level
60. Working Definition of Component
• Reusable unit of deployment and composition
– Deployed and integrated multiple times
– Integrated by different teams (usually)
• Component producer is distinct from component user
• Characterized by an interface or contract
• Describes access points, parameters, and all functional and non-functional behavior and conditions for using the component
• No other access (e.g., source code) is usually available
• Often larger grain than objects or packages
– Example: A complete database system may be a component
61. Components — Related Concepts
• Framework
• Skeleton or micro-architecture of an application
• May be packaged and reused as a component, with “hooks” or “slots”
in the interface contract
• Design patterns
• Logical design fragments
• Frameworks often implement patterns, but patterns are not frameworks.
Frameworks are concrete, patterns are abstract
• Component-based system
• A system composed primarily by assembling components, often
“Commercial off-the-shelf” (COTS) components
• Usually includes application-specific "glue code"
62. Component Interface Contracts
• Application programming interface (API) is distinct from
implementation
– Example: DOM interface for XML is distinct from many possible
implementations, from different sources
• Interface includes everything that must be known to use
the component
– More than just method signatures, exceptions, etc.
– May include non-functional characteristics like performance,
capacity, security
– May include dependence on other components
63. Challenges in Testing Components
• The component builder’s challenge:
– Impossible to know all the ways a component may be used
– Difficult to recognize and specify all potentially important
properties and dependencies
• The component user’s challenge:
– No visibility “inside” the component
– Often difficult to judge suitability for a particular use and context
64. Testing a Component: Producer View
• First: Thorough unit and subsystem testing
– Includes thorough functional testing based on application
program interface (API)
– Rule of thumb: Reusable component requires at least twice the
effort in design, implementation, and testing as a subsystem
constructed for a single use (often more)
• Second: Thorough acceptance testing
– Based on scenarios of expected use
– Includes stress and capacity testing
• Find and document the limits of applicability
65. Testing a Component: User View
• Not primarily to find faults in the component
• Major question: Is the component suitable for this
application?
– Primary risk is not fitting the application context:
• Unanticipated dependence or interactions with environment
• Performance or capacity limits
• Missing functionality, misunderstood API
– Risk high when using component for first time
• Reducing risk: Trial integration early
– Often worthwhile to build a driver to test model scenarios, long before actual integration
66. Adapting and Testing a Component
• Applications often access components through an adaptor, which can also be used by a test driver.
(Diagram: Application → Adaptor → Component.)
67. Summary
• Integration testing focuses on interactions
– Must be built on foundation of thorough unit testing
– Integration faults often traceable to incomplete or
misunderstood interface specifications
• Prefer prevention to detection, and make detection easier by imposing
design constraints
• Strategies tied to project build order
– Order construction, integration, and testing to reduce cost or
risk
• Reusable components require special care
– For component builder, and for component user
68. System, Acceptance, and Regression Testing
69. Learning objectives
• Distinguish system and acceptance testing
– How and why they differ from each other and from unit and
integration testing
• Understand basic approaches for quantitative
assessment (reliability, performance, ...)
• Understand interplay of validation and verification for
usability and accessibility
– How to continuously monitor usability from early design to
delivery
• Understand basic regression testing approaches
– Preventing accidental changes
70. System, Acceptance, and Regression Testing Compared

              System                   Acceptance                 Regression
Test for ...  Correctness, completion  Usefulness, satisfaction   Accidental changes
Test by ...   Development test group   Test group with users      Development test group
              Verification             Validation                 Verification
72. System Testing
• Key characteristics:
– Comprehensive (the whole system, the whole spec)
– Based on specification of observable behavior
• Verification against a requirements specification; not validation, and not opinions
– Independent of design and implementation
• Independence: avoid repeating software design errors in system test design
73. Independent V&V
• One strategy for maximizing independence: System (and
acceptance) test performed by a different organization
– Organizationally isolated from developers (no pressure to say
“ok”)
– Sometimes outsourced to another company or agency
• Especially for critical systems
• Outsourcing for independent judgment, not to save money
• May be additional system test, not replacing internal V&V
– Not all outsourced testing is IV&V
• Not independent if controlled by development organization
74. Independence without changing staff
• If the development organization controls system testing ...
– Perfect independence may be unattainable, but we can reduce
undue influence
• Develop system test cases early
– As part of requirements specification, before major design
decisions have been made
• Agile “test first” and conventional “V model” are both examples of
designing system test cases before designing the implementation
• An opportunity for “design for test”: Structure system for critical system
testing early in project
75. Incremental System Testing
• System tests are often used to measure progress
– System test suite covers all features and scenarios of use
– As project progresses, the system passes more and more
system tests
• Assumes a “threaded” incremental build plan: Features
exposed at top level as they are developed
76. Global Properties
• Some system properties are inherently global
– Performance, latency, reliability, ...
– Early and incremental testing is still necessary, but provides only estimates
• A major focus of system testing
– The only opportunity to verify global properties against actual
system specifications
– Especially to find unanticipated effects, e.g., an unexpected
performance bottleneck
77. Context-Dependent Properties
• Beyond system-global: Some properties depend on the
system context and use
– Example: Performance properties depend on environment and
configuration
– Example: Privacy depends both on system and how it is used
• Medical records system must protect against unauthorized use, and
authorization must be provided only as needed
– Example: Security depends on threat profiles
• And threats change!
• Testing is just one part of the approach
78. Establishing an Operational Envelope
• When a property (e.g., performance or real-time
response) is parameterized by use ...
– requests per second, size of database, ...
• Extensive stress testing is required
– varying parameters within the envelope, near the bounds, and
beyond
• Goal: A well-understood model of how the property varies
with the parameter
– How sensitive is the property to the parameter?
– Where is the “edge of the envelope”?
– What can we expect when the envelope is exceeded?
79. Stress Testing
• Often requires extensive simulation of the execution
environment
– With systematic variation: What happens when we push the
parameters? What if the number of users or requests is 10
times more, or 1000 times more?
• Often requires more resources (human and machine)
than typical test cases
– Separate from regular feature tests
– Run less often, with more manual control
– Diagnose deviations from expectation
• Which may include difficult debugging of latent faults!
81. Estimating Dependability
• Measuring quality, not searching for faults
– Fundamentally different goal than systematic testing
• Quantitative dependability goals are statistical
– Reliability
– Availability
– Mean time to failure
– ...
• Requires valid statistical samples from operational profile
– Fundamentally different from systematic testing
82. Statistical Sampling
• We need a valid operational profile (model)
– Sometimes from an older version of the system
– Sometimes from operational environment (e.g., for an
embedded controller)
– Sensitivity testing reveals which parameters are most important,
and which can be rough guesses
• And a clear, precise definition of what is being measured
– Failure rate? Per session, per hour, per operation?
• And many, many random samples
– Especially for high reliability measures
83. Is Statistical Testing Worthwhile?
• Necessary for ...
– Critical systems (safety critical, infrastructure, ...)
• But difficult or impossible when ...
– Operational profile is unavailable or just a guess
• Often for new functionality involving human interaction
– But we may factor critical functions from overall use to obtain a good model of
only the critical properties
– Reliability requirement is very high
• Required sample size (number of test cases) might require years of
test execution
• Ultra-reliability can seldom be demonstrated by testing
84. Process-based Measures
• Less rigorous than statistical testing
– Based on similarity with prior projects
• System testing process
– Expected history of bugs found and resolved
• Alpha, beta testing
– Alpha testing: Real users, controlled environment
– Beta testing: Real users, real (uncontrolled) environment
– May statistically sample users rather than uses
– Expected history of bug reports
86. Usability
• A usable product
– is quickly learned
– allows users to work efficiently
– is pleasant to use
• Objective criteria
– Time and number of operations to perform a task
– Frequency of user error
• blame user errors on the product!
• Plus overall, subjective satisfaction
87. Verifying Usability
• Usability rests ultimately on testing with real users —
validation, not verification
– Preferably in the usability lab, by usability experts
• But we can factor usability testing for process visibility —
validation and verification throughout the project
– Validation establishes criteria to be verified by testing, analysis,
and inspection
88. Factoring Usability Testing
Validation
(usability lab)
• Usability testing
establishes usability
check-lists
– Guidelines applicable
across a product line or
domain
• Early usability testing
evaluates “cardboard
prototype” or mock-up
Verification
(developers, testers)
• Inspection applies
usability check-lists to
specification and design
• Behavior objectively
verified (e.g., tested)
against interface design
89. Varieties of Usability Test
• Exploratory testing
– Investigate mental model of users
– Performed early to guide interface design
• Comparison testing
– Evaluate options (specific interface design choices)
– Observe (and measure) interactions with alternative interaction
patterns
• Usability validation testing
– Assess overall usability (quantitative and qualitative)
– Includes measurement: error rate, time to complete
90. Typical Usability Test Protocol
Select representative sample of user groups
– Typically 3-5 users from each of 1-4 groups
– Questionnaires verify group membership
Ask users to perform a representative sequence of tasks
Observe without interference (no helping!)
– The hardest thing for developers is to not help. Professional
usability testers use one-way mirrors.
Measure (clicks, eye movement, time, ...) and follow up with
questionnaire
91. Accessibility Testing
• Check usability by people with disabilities
– Blind and low vision, deaf, color-blind, ...
• Use accessibility guidelines
– Direct usability testing with all relevant groups is usually
impractical; checking compliance to guidelines is practical and
often reveals problems
• Example: W3C Web Content Accessibility Guidelines
– Parts can be checked automatically
– but manual check is still required
• e.g., is the “alt” tag of the image meaningful?
93. Regression
• Yesterday it worked, today it doesn’t
– I was fixing X, and accidentally broke Y
– That bug was fixed, but now it’s back
• Tests must be re-run after any change
– Adding new features
– Changing, adapting software to new conditions
– Fixing other bugs
• Regression testing can be a major cost of software
maintenance
– Sometimes much more than making the change
94. Basic Problems of Regression Test
• Maintaining test suite
– If I change feature X, how many test cases must be revised
because they use feature X?
– Which test cases should be removed or replaced? Which test
cases should be added?
• Cost of re-testing
– Often proportional to product size, not change size
– Big problem if testing requires manual effort
• Possible problem even for automated testing, when the test suite and
test execution time grows beyond a few hours
95. Test Case Maintenance
• Some maintenance is inevitable
– If feature X has changed, test cases for feature X will require
updating
• Some maintenance should be avoided
– Example: Trivial changes to user interface or file format should
not invalidate large numbers of test cases
• Test suites should be modular!
– Avoid unnecessary dependence
– Generating concrete test cases from test case specifications
can help
96. Obsolete and Redundant
• Obsolete: A test case that is no longer valid
– Tests features that have been modified, substituted, or removed
– Should be removed from the test suite
• Redundant: A test case that does not differ significantly
from others
– Unlikely to find a fault missed by similar test cases
– Has some cost in re-execution
– Has some (maybe more) cost in human effort to maintain
– May or may not be removed, depending on costs
97. Selecting and Prioritizing Regression Test Cases
• Should we re-run the whole regression test suite? If so,
in what order?
– Maybe you don’t care. If you can re-run everything automatically over lunch break, do it.
– Sometimes you do care ...
• Selection matters when
– Test cases are expensive to execute
• Because they require special equipment, or long run-times, or cannot
be fully automated
• Prioritization matters when
– A very large test suite cannot be executed every day Ch 22, slide 30
98. Code-based Regression Test Selection
• Observation: A test case can’t find a fault in code it
doesn’t execute
– In a large system, many parts of the code are untouched by
many test cases
• So: Only execute test cases that execute changed or new
code
(Diagram: Venn diagram of code regions "new or changed" and "executed by test case"; only test cases whose executed region overlaps the change are selected.)
99. Control-flow and Data-flow Regression Test Selection
• Same basic idea as code-based selection
– Re-run test cases only if they include changed elements
– Elements may be modified control flow nodes and edges, or
definition-use (DU) pairs in data flow
• To automate selection:
– Tools record elements touched by each test case
• Stored in database of regression test cases
– Tools note changes in program
– Check test-case database for overlap
100. Specification-based Regression Test Selection
• Like code-based and structural regression test case
selection
– Pick test cases that test new and changed functionality
• Difference: No guarantee of independence
– A test case that isn’t “for” changed or added feature X might
find a bug in feature X anyway
• Typical approach: Specification-based prioritization
– Execute all test cases, but start with those related to changed and added features
101. Prioritized Rotating Selection
• Basic idea:
– Execute all test cases, eventually
– Execute some sooner than others
• Possible priority schemes:
– Round robin: Priority to least-recently-run test cases
– Track record: Priority to test cases that have detected faults
before
• They probably execute code with a high fault density
– Structural: Priority for executing elements that have not been
recently executed
• Can be coarse-grained: features, methods, files, ...
102. Summary
• System testing is verification
– System consistent with specification?
– Especially for global properties (performance, reliability)
• Acceptance testing is validation
– Includes user testing and checks for usability
• Usability and accessibility require both
– Usability testing establishes objective criteria to verify
throughout development
• Regression testing repeated after each change
– After initial delivery, as software evolves