Module V:
Software Testing Strategies
A Strategic Approach to Software Testing
• Test Strategies for Conventional Software
• Validation Testing
• System Testing
Estimation for Software Projects
• The Project Planning Process
• Resources
• Decomposition Techniques
A strategy for software testing provides a road map that describes the steps to be conducted as
part of testing, when these steps are planned and then undertaken, and how much effort, time,
and resources will be required.
Any testing strategy must incorporate:
• Test planning
• Test-case design
• Test execution
• Resultant data collection
• Evaluation
A software testing strategy should be flexible enough to promote a customized testing
approach.
Testing is a set of activities that can be planned in advance and conducted systematically.
A template for software testing — a set of steps into which you can place specific test-case design techniques
and testing methods — should be defined for the software process.
A number of software testing strategies have been proposed in the literature.
A template for testing has the following generic characteristics:
•To perform effective testing, one should conduct effective technical reviews.
•Many errors will be eliminated before testing commences.
•Testing begins at the component level and works “outward” toward the integration of the entire computer-based system.
•Different testing techniques are appropriate for different software engineering approaches and at different points in time.
•Testing is conducted by the developer of the software and (for large projects) an independent test group.
•Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
Verification and Validation
• Software testing is one element of a broader topic that is often
referred to as verification and validation (V&V).
• Verification refers to the set of tasks that ensure that software
correctly implements a specific function.
• Validation refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements.
• Boehm states this another way:
• Verification: “Are we building the product right?”
• Validation: “Are we building the right product?”
Verification and Validation…
• Verification and validation includes a wide array of SQA activities:
• Technical reviews
• Quality and configuration audits
• Performance monitoring
• Simulation
• Feasibility study
• Documentation review
• Database review
• Algorithm analysis
• Development testing
• Usability testing
• Qualification testing
• Acceptance testing
• Installation testing
Organizing for Software Testing
•For every software project, there is an
inherent conflict of interest that occurs
as testing begins.
•The people who have built the software
are now asked to test the software.
Software Testing Strategy—The Big Picture
Software testing steps
Criteria for Completion of Testing
• A classic question arises every time software testing is
discussed:
• “When are we done testing? — how do we know that we’ve tested enough?”
• Ans: Every time the user executes a computer program,
the program is being tested.
• Ans: “You’re done testing when you run out of time or
you run out of money”.
Many strategies can be used to test software.
At one extreme, you wait until the system is fully constructed and then conduct tests on the whole system at once; this approach does not work well.
At the other extreme, you conduct tests on a daily basis, whenever any part of the system is constructed.
A testing strategy chosen by many software teams falls between these two extremes.
• It takes an incremental view of testing, beginning with the testing of individual program units, moving to
tests designed to facilitate the integration of the units (sometimes on a daily basis)
• Culminating with tests that exercise the constructed system.
Unit Testing
Integration Testing
Unit testing focuses verification effort on the smallest unit of software design -- the
software component or module.
Using the component-level design description as a guide, important control paths are
tested to uncover errors within the boundary of the module.
The relative complexity of tests and the errors those tests uncover is limited by the
constrained scope established for unit testing.
The unit test focuses on the internal processing logic and data structures within the
boundaries of a component.
Unit testing can be conducted in parallel for multiple components.
Unit Test Considerations
• The module interface is tested to ensure that information properly flows into and
out of the program unit under test.
• Local data structures are examined to ensure that data stored temporarily
maintains its integrity during all steps in an algorithm’s execution.
• All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
• All error-handling paths are tested.
• Data flow across a component interface is tested before any other testing is
initiated.
• If data do not enter and exit properly, all other tests are moot.
• Local data structures should be exercised and the local impact on global data
should be ascertained (if possible) during unit testing.
• Selective testing of execution paths is an essential task during the unit test.
• Test cases should be designed to uncover errors due to erroneous computations,
incorrect comparisons, or improper control flow.
• Boundary testing is one of the most important unit testing tasks.
• Software often fails at its boundaries.
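As a small illustration of boundary testing at the unit level, the Python sketch below (using the standard unittest module) exercises a hypothetical discount function exactly at and just beyond the limits of its valid 0–100 range; the function and its range are assumptions made only for this example.

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100.0)

class DiscountBoundaryTest(unittest.TestCase):
    """Boundary-value tests: exercise the limits that restrict processing."""

    def test_lower_boundary(self):
        self.assertEqual(discount(200.0, 0), 200.0)    # exactly at the lower bound

    def test_upper_boundary(self):
        self.assertEqual(discount(200.0, 100), 0.0)    # exactly at the upper bound

    def test_just_below_lower_boundary(self):
        with self.assertRaises(ValueError):
            discount(200.0, -1)                        # one step outside the range

    def test_just_above_upper_boundary(self):
        with self.assertRaises(ValueError):
            discount(200.0, 101)

if __name__ == "__main__":
    unittest.main()
```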
Unit Test
Unit Test Considerations…
• If error-handling paths are implemented, they must be tested.
• The potential errors that should be tested when error handling is
evaluated are:
• (1) Error description is unintelligible
• (2) Error noted does not correspond to error encountered
• (3) Error condition causes system intervention prior to error
handling
• (4) Exception-condition processing is incorrect
• (5) Error description does not provide enough information to assist
in the location of the cause of the error.
Unit Test Procedures
• Unit testing is normally considered as an adjunct to the coding step.
• The design of unit tests can occur before coding begins or after source
code has been generated.
• A review of design information provides guidance for establishing test
cases that are likely to uncover errors in each of the categories
discussed earlier.
• Each test case should be coupled with a set of expected results.
• Because a component is not a stand-alone program, driver and/or stub
software must often be developed for each unit test.
• A driver is nothing more than a “main program” that accepts test-
case data, passes such data to the component (to be tested), and
prints relevant results.
• Stubs serve to replace modules that are subordinate (invoked by)
the component to be tested.
• A stub or “dummy subprogram” uses the subordinate module’s
interface, may do minimal data manipulation, prints verification of
entry, and returns control to the module undergoing testing.
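A minimal sketch of a driver and a stub in Python is shown below; the component under test (compute_invoice), the subordinate tax-rate module it invokes, and the test data are hypothetical names chosen only to illustrate the roles described above.

```python
# Component under test: it normally calls a subordinate module to get a tax rate.
def compute_invoice(amount, tax_service):
    rate = tax_service.fetch_tax_rate()          # call into the subordinate module
    return round(amount * (1 + rate), 2)

# Stub: replaces the real subordinate module, uses its interface, does minimal
# data manipulation, prints verification of entry, and returns control.
class TaxServiceStub:
    def fetch_tax_rate(self):
        print("stub: fetch_tax_rate() entered")
        return 0.10                              # canned value instead of a real lookup

# Driver: a "main program" that accepts test-case data, passes it to the
# component under test, and prints relevant results.
def driver():
    test_cases = [(100.00, 110.00), (0.00, 0.00), (19.99, 21.99)]
    stub = TaxServiceStub()
    for amount, expected in test_cases:
        actual = compute_invoice(amount, stub)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: compute_invoice({amount}) = {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```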
After unit testing, a common question is: “If they all work individually, why do you doubt that they’ll work when we put them together?”
The problem is “putting them together”—interfacing.
• Data can be lost across an interface.
• One component can have an inadvertent, adverse effect on another.
• Subfunctions, when combined, may not produce the desired major function.
• Individually acceptable imprecision may be magnified to unacceptable levels.
• Global data structures can present problems.
Integration testing is a systematic technique for constructing the software architecture while at the same time
conducting tests to uncover errors associated with interfacing.
The objective is to take unit-tested components and build a program structure that has been dictated by design.
Incremental integration is the antithesis of the big bang approach, in which all components are combined at once and the entire program is tested as a whole.
A number of different incremental integration
strategies are discussed.
• Top-Down Integration.
• Bottom-Up Integration.
• Regression Testing.
• Smoke Testing.
• Integration Test Work Products.
Top-Down Integration.
• Top-down integration testing is an incremental approach to
the construction of the software architecture.
• Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module
(main program).
• Modules subordinate (and ultimately subordinate) to the
main control module are incorporated into the structure in
either a depth-first or breadth-first manner.
Top-Down Integration...
• The integration process is performed in a series
of five steps:
• 1. The main control module is used as a test
driver and stubs are substituted for all
components directly subordinate to the main
control module.
• 2. Depending on the integration approach
selected (i.e., depth or breadth-first),
subordinate stubs are replaced one at a time
with actual components.
• 3. Tests are conducted as each component is
integrated.
• 4. On completion of each set of tests,
another stub is replaced with the real
component.
• 5. Regression testing may be conducted to
ensure that new errors have not been
introduced.
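The sketch below walks a tiny, invented call hierarchy through these steps in Python; the main control module, its subordinates, and the stubs are assumptions for illustration, not part of any real system.

```python
# Hypothetical call hierarchy: main_control -> {read_input, process, report}.

def stub_read_input():
    print("stub: read_input")
    return [1, 2, 3]                                  # canned data

def stub_process(data):
    print("stub: process")
    return sum(data)                                  # trivial stand-in behaviour

def stub_report(result):
    print(f"stub: report({result})")

def real_process(data):
    return sum(x * x for x in data)                   # the real component, integrated later

# Step 1: the main control module is exercised directly, with every directly
# subordinate component replaced by a stub.
subordinates = {"read_input": stub_read_input,
                "process": stub_process,
                "report": stub_report}

def main_control():
    data = subordinates["read_input"]()
    result = subordinates["process"](data)
    subordinates["report"](result)
    return result

# Steps 2-4: replace one stub at a time with the actual component, testing as
# each component is integrated.
assert main_control() == 6                            # behaviour with all stubs in place
subordinates["process"] = real_process                # swap one stub for the real component
assert main_control() == 14                           # re-test after the integration step
# Step 5: earlier checks are re-run (regression) after every replacement.
```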
Bottom-Up Integration.
• Begins construction and testing with atomic modules (i.e., components at the
lowest levels in the program structure).
• Because components are integrated from the bottom up, the functionality
provided by components subordinate to a given level is always available and the
need for stubs is eliminated.
• A bottom-up integration strategy may be implemented with the following steps:
• 1. Low-level components are combined into clusters (sometimes called builds)
that perform a specific software subfunction.
• 2. A driver (a control program for testing) is written to coordinate test-case
input and output.
• 3. The cluster is tested.
• 4. Drivers are removed and clusters are combined moving upward in the
program structure.
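The Python sketch below mirrors these steps for one low-level cluster; the parser and validator components and the cluster driver are hypothetical names used purely for illustration.

```python
# Step 1: two low-level components are combined into a cluster (build) that
# performs one subfunction: turning raw text records into validated numbers.
def parse_record(line):
    return float(line.strip())

def validate_value(value, lower=0.0, upper=100.0):
    return lower <= value <= upper

def cluster_parse_and_validate(lines):
    values = [parse_record(line) for line in lines]
    return [v for v in values if validate_value(v)]

# Step 2: a driver (a control program for testing) coordinates test-case
# input and output for the cluster.
def cluster_driver():
    test_input = ["10.5\n", "99.9\n", "-3.0\n", "250.0\n"]
    expected = [10.5, 99.9]                      # out-of-range values filtered out
    actual = cluster_parse_and_validate(test_input)
    print("PASS" if actual == expected else f"FAIL: {actual}")

# Step 3: the cluster is tested via the driver.  Step 4 happens outside this
# sketch: the driver is removed and the cluster is combined with the modules
# above it in the program structure.
if __name__ == "__main__":
    cluster_driver()
```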
Bottom-Up Integration...
Regression Testing.
• Each time a new module is added as part of integration testing, the software changes.
• New data flow paths are established, new I/O may occur, and new control logic is invoked.
• Side effects associated with these changes may cause problems with functions that previously
worked flawlessly.
• In the context of an integration test strategy, regression testing is the re-execution of some subset
of tests that have already been conducted to ensure that changes have not propagated
unintended side effects.
• Regression testing helps to ensure that changes (due to testing or for other reasons) do not
introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of all test cases or using
automated capture/playback tools.
• Capture/playback tools enable the software engineer to capture test cases and results for
subsequent playback and comparison.
Regression Testing...
• The regression test suite contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected
by the change.
• Tests that focus on the software components that have been changed.
• As integration testing proceeds, the number of regression tests can grow quite
large.
• The regression test suite should be designed to include only those tests that
address one or more classes of errors in each of the major program functions.
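One lightweight way to keep the suite focused is to tag each test case with the program functions it exercises and re-run only the tests touched by a change. The Python sketch below shows this idea with invented test names and tags; it is not a prescription for any particular test framework or tool.

```python
# Each regression test case is tagged with the software functions it exercises.
REGRESSION_SUITE = {
    "test_login_basic":        {"authentication"},
    "test_login_lockout":      {"authentication", "audit_log"},
    "test_report_monthly":     {"reporting"},
    "test_report_export_csv":  {"reporting", "file_io"},
    "test_audit_trail":        {"audit_log"},
}

def select_regression_tests(changed_functions):
    """Return the subset of tests affected by the changed functions."""
    changed = set(changed_functions)
    return sorted(name for name, tags in REGRESSION_SUITE.items()
                  if tags & changed)

# After a change to the audit-log component, only the affected tests are re-run.
if __name__ == "__main__":
    print(select_regression_tests(["audit_log"]))
    # -> ['test_audit_trail', 'test_login_lockout']
```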
Smoke Testing.
• Smoke testing is an integration testing approach that is commonly used when product software is
developed.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess
the project frequently.
• The smoke-testing approach encompasses the following activities:
• 1. Software components that have been translated into code are integrated into a build.
• A build includes all data files, libraries, reusable modules, and engineered components that
are required to implement one or more product functions.
• 2. A series of tests is designed to expose errors that will keep the build from properly
performing its function.
• The intent should be to uncover “show-stopper” errors that have the highest likelihood of
throwing the software project behind schedule.
• 3. The build is integrated with other builds, and the entire product is smoke-tested daily.
• The integration approach may be top-down or bottom-up.
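In practice, the daily smoke test is often just a small script that builds the current product and runs a handful of “show-stopper” checks. The Python sketch below is a hypothetical wrapper; the build command and the smoke-test entry points are placeholders, not real project commands.

```python
import subprocess
import sys

# Placeholder commands -- in a real project these would be the actual build
# and smoke-test entry points.
BUILD_COMMAND = ["make", "build"]
SMOKE_CHECKS = [
    ["python", "-m", "pytest", "tests/smoke/test_startup.py", "-q"],
    ["python", "-m", "pytest", "tests/smoke/test_core_workflow.py", "-q"],
]

def run(command):
    """Run one command; return True if it succeeded."""
    result = subprocess.run(command)
    return result.returncode == 0

def daily_smoke_test():
    # 1. Integrate the components that have been translated into code into a build.
    if not run(BUILD_COMMAND):
        print("SMOKE TEST: build failed -- show-stopper")
        return False
    # 2. Run the tests designed to expose errors that keep the build from
    #    performing its function.
    for check in SMOKE_CHECKS:
        if not run(check):
            print(f"SMOKE TEST: failed -- {' '.join(check)}")
            return False
    print("SMOKE TEST: build passed")
    return True

if __name__ == "__main__":
    sys.exit(0 if daily_smoke_test() else 1)
```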
Smoke Testing…
• Smoke testing provides many benefits when it is applied to complex, time-critical
software projects:
• Integration risk is minimized.
• The quality of the end product is improved.
• Error diagnosis and correction are simplified.
• Progress is easier to assess.
Integration Test Work Products.
• An overall plan for integration of the software and a description of specific tests is documented in a Test
Specification.
• This work product incorporates a test plan and a test procedure and becomes part of the software
configuration.
• Testing is divided into phases and builds that address specific functional and behavioral characteristics of the
software.
• Each integration test phase delineates a broad functional category within the software and generally can
be related to a specific domain within the software architecture.
• Program builds (groups of modules) are created to correspond to each phase.
• A schedule for integration, the development of overhead software, and related topics are also discussed as part of the test
plan.
• Start and end dates for each phase are established and “availability windows” for unit-tested modules are
defined.
Integration Test Work Products…
• A brief description of overhead software (stubs and drivers) concentrates on characteristics that might
require special effort.
• Finally, the test environment and resources are described.
• Unusual hardware configurations, exotic simulators, and special test tools or techniques are a few of many
topics to be considered.
• The detailed testing procedure that is required to accomplish the test plan is described next.
• The order of integration and corresponding tests at each integration step are described.
• A listing of all test cases and expected results are also included.
• A history of actual test results, problems, or peculiarities is recorded in a Test Report that can be appended to
the Test Specification, if desired.
• Information contained in this section can be vital during software maintenance.
• Appropriate references and appendixes are also presented.
Validation testing begins at the culmination of integration testing, when individual components have been
exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and
corrected.
At the validation or system level, the distinction between different software categories disappears.
Testing focuses on user-visible actions and user-recognizable output from the system.
Validation can be defined in many ways, but a simple definition is that validation succeeds when software
functions in a manner that can be reasonably expected by the customer.
If a Software Requirements Specification has been developed, it describes all user-visible attributes of the
software and contains a Validation Criteria section that forms the basis for a validation-testing approach.
Validation-Test Criteria
Configuration Review
Alpha and Beta Testing
Software validation is achieved through a series of tests that demonstrate conformity with requirements.
A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that are
designed to ensure that:
• All functional requirements are satisfied.
• All behavioral characteristics are achieved.
• All content is accurate and properly presented.
• All performance requirements are attained.
• Documentation is correct.
• Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability).
If a deviation from specification is uncovered, a deficiency list is created.
A method for resolving deficiencies (acceptable to stakeholders) must be established.
An important element of the validation process is a configuration review.
The intent of the review is to ensure that all elements of the software configuration
have been properly developed, are catalogued, and have the necessary detail to
bolster the support activities.
The configuration review is sometimes called an audit.
It is virtually impossible for a software developer to foresee how the customer will really use a program.
• Instructions for use may be misinterpreted.
• Strange combinations of data may be used.
• Output that seemed clear to the tester may be unintelligible to a user in the field.
When custom software is built for one customer, a series of acceptance tests are conducted to enable the
customer to validate all requirements.
If software is developed as a product to be used by many customers, it is impractical to perform formal
acceptance tests with each one.
Most software product builders use a process called alpha and beta testing to uncover errors that only the end
user seems able to find.
Alpha Test
• The alpha test is conducted at the developer’s site by a representative group of end users.
• The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording errors
and usage problems.
• Alpha tests are conducted in a controlled environment.
Beta Test
• The beta test is conducted at one or more end-user sites.
• Unlike alpha testing, the developer generally is not present.
• The beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
• The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the
developer at regular intervals.
• As a result of problems reported during beta tests, the software team makes modifications and then prepares for release of the
software product to the entire customer base.
Customer acceptance test
• A variation on beta testing, called customer acceptance testing, is sometimes performed when custom software is
delivered to a customer under contract.
• The customer performs a series of specific tests in an attempt to uncover errors before accepting the software from the
developer.
• In some cases, acceptance testing can be very formal and encompass many days or even weeks of testing.
A classic system-testing problem is “finger pointing”.
This occurs when an error is uncovered, and the developers of different
system elements blame each other for the problem.
To anticipate potential interfacing problems, the software engineer should:
• (1) Design error-handling paths that test all information coming from other elements of the
system.
• (2) Conduct a series of tests that simulate bad data or other potential errors at the software
interface.
• (3) Record the results of tests to use as “evidence” if finger pointing does occur.
• (4) Participate in planning and design of system tests to ensure that software is adequately
tested.
Recovery Testing
Security Testing
Stress Testing
Performance Testing
Deployment Testing
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization,
checkpointing mechanisms, data recovery, and restart are evaluated for
correctness.
If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable limits.
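A recovery test of this kind can be sketched as: force a failure, wait for the system to restore service, and check that the recovery time stays within the acceptable limit. The Python outline below assumes hypothetical helpers (start_service, kill_service, is_healthy) that would wrap the real system, and an assumed acceptable limit; it is an illustration, not a finished harness.

```python
import time

# Hypothetical hooks into the system under test -- real implementations would
# start the software, force a failure, and probe a health endpoint.
def start_service(): ...
def kill_service(): ...
def is_healthy() -> bool: ...

MAX_ACCEPTABLE_RECOVERY_SECONDS = 30.0   # assumed limit for this sketch

def recovery_test():
    start_service()
    kill_service()                       # force the software to fail
    failed_at = time.monotonic()

    # Wait for automatic recovery (reinitialization, checkpoint restore, restart).
    while not is_healthy():
        if time.monotonic() - failed_at > MAX_ACCEPTABLE_RECOVERY_SECONDS:
            print("FAIL: recovery took too long")
            return False
        time.sleep(0.5)

    recovery_time = time.monotonic() - failed_at
    print(f"PASS: recovered in {recovery_time:.1f}s")
    return recovery_time <= MAX_ACCEPTABLE_RECOVERY_SECONDS

if __name__ == "__main__":
    recovery_test()
```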
Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
“The system’s security must be tested for invulnerability from frontal attack — but
must also be tested for invulnerability from flank or rear attack”.
Given enough time and resources, good security testing will ultimately penetrate a
system.
The role of the system designer is to make penetration cost more than the value of
the information that will be obtained.
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
For example:
•(1) Special tests may be designed that generate 10 interrupts per second, when one or two is the average rate.
•(2) Input data rates may be increased by an order of magnitude to determine how input functions will respond.
•(3) Test cases that require maximum memory or other resources are executed.
•(4) Test cases that may cause thrashing in a virtual operating system are designed.
•(5) Test cases that may cause excessive hunting for disk-resident data are created.
Essentially, the tester attempts to break the program.
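For instance, a stress test against a hypothetical process_request function might push requests at many times the expected rate, with larger-than-usual payloads, and watch for failures or runaway latency; the Python sketch below makes those assumptions explicit in its constants.

```python
import time

def process_request(payload):
    """Stand-in for the unit being stressed (a parser, a queue handler, ...)."""
    return sum(payload) / len(payload)

NORMAL_RATE = 10          # requests per second expected in normal operation (assumed)
STRESS_MULTIPLIER = 20    # demand resources in abnormal quantity and frequency

def stress_test(duration_seconds=5):
    target_rate = NORMAL_RATE * STRESS_MULTIPLIER
    interval = 1.0 / target_rate
    sent = failures = 0
    worst_latency = 0.0
    deadline = time.monotonic() + duration_seconds

    while time.monotonic() < deadline:
        started = time.monotonic()
        try:
            process_request(list(range(1000)))     # larger-than-usual payload
        except Exception:
            failures += 1
        worst_latency = max(worst_latency, time.monotonic() - started)
        sent += 1
        time.sleep(interval)

    print(f"sent={sent} failures={failures} worst_latency={worst_latency*1000:.1f}ms")

if __name__ == "__main__":
    stress_test()
```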
A variation of stress testing is a technique called sensitivity testing.
In some situations, a very small range of data contained within the bounds of valid data for a program may cause extreme and
even erroneous processing or profound performance degradation.
Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper
processing.
Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be assessed as tests are conducted.
However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
It is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion.
External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states
on a regular basis.
By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
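Software instrumentation for a performance test can be as simple as timing the critical operation and recording peak memory use. The Python sketch below does this with time.perf_counter and tracemalloc on a placeholder workload; it only illustrates the idea of instrumenting a measurement, not a complete performance harness.

```python
import time
import tracemalloc

def workload():
    """Placeholder for the operation whose run-time performance is measured."""
    return sorted(range(200_000), key=lambda x: -x)

def instrumented_run(repetitions=5):
    tracemalloc.start()
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)

    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"best   : {min(samples)*1000:.1f} ms")
    print(f"worst  : {max(samples)*1000:.1f} ms")
    print(f"average: {sum(samples)/len(samples)*1000:.1f} ms")
    print(f"peak traced memory: {peak/1_000_000:.1f} MB")

if __name__ == "__main__":
    instrumented_run()
```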
In many cases, software must execute on a variety of platforms and under more
than one operating system environment.
Deployment testing, sometimes called configuration testing, exercises the software
in each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software (e.g., “installers”) that will be used by customers, and all
documentation that will be used to introduce the software to end users.
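Deployment (configuration) testing is frequently driven by an environment matrix: the same test suite is executed once per target configuration. The Python glue below is hypothetical; the environment names, container images, and test command are placeholders rather than a real project layout.

```python
import subprocess

# Placeholder environment matrix -- each entry names a target configuration
# and the command used to run the test suite inside it.
ENVIRONMENTS = {
    "ubuntu-python3.11":  ["docker", "run", "--rm", "myapp-test:ubuntu",  "pytest", "-q"],
    "alpine-python3.11":  ["docker", "run", "--rm", "myapp-test:alpine",  "pytest", "-q"],
    "windows-python3.11": ["docker", "run", "--rm", "myapp-test:windows", "pytest", "-q"],
}

def deployment_test():
    results = {}
    for name, command in ENVIRONMENTS.items():
        print(f"--- running suite in {name} ---")
        results[name] = subprocess.run(command).returncode == 0
    for name, passed in results.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(results.values())

if __name__ == "__main__":
    deployment_test()
```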
Software project management begins with a set of activities that are collectively called project
planning.
Before the project can begin, the software team should estimate the work to be done, the resources
that will be required, and the time that will elapse from start to finish.
Once these activities are accomplished, the software team should establish a project schedule that
defines software engineering tasks and milestones, identifies who is responsible for conducting each
task, and specifies the intertask dependencies that may have a strong bearing on progress.
The objective of software project planning is to provide a framework that
enables the manager to make reasonable estimates of resources, cost, and
schedule.
In addition, estimates should attempt to define best-case and worst-case
scenarios so that project outcomes can be bounded.
Although there is an inherent degree of uncertainty, the software team
embarks on a plan that has been established as a consequence of these tasks.
Therefore, the plan must be adapted and updated as the project proceeds.
The three major categories of software engineering
resources:
•People
•Reusable software components
•The development environment (hardware and software tools).
Each resource is specified with four characteristics:
•Description of the resource.
•A statement of availability.
•Time when the resource will be required
•Duration of time that the resource will be applied.
The last two characteristics can be viewed as a time
window.
Availability of the resource for a specified window
must be established at the earliest practical time.
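These four characteristics map naturally onto a small record. A possible sketch in Python follows; the field names and the sample resource are assumptions made for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ResourceSpec:
    """One project resource, described by the four planning characteristics."""
    description: str          # description of the resource
    availability: str         # statement of availability
    required_from: date       # time when the resource will be required
    duration_weeks: int       # duration of time the resource will be applied

# required_from and duration_weeks together form the resource's time window.
senior_dba = ResourceSpec(
    description="Senior database engineer (part time)",
    availability="internal staff, shared with one other project",
    required_from=date(2024, 3, 1),
    duration_weeks=6,
)
```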
The planner begins by evaluating software scope and selecting the skills required to complete development.
Both organizational position (e.g., manager, senior software engineer) and specialty (e.g., telecommunications,
database, client-server) are specified.
For relatively small projects (a few person-months), a single individual may perform all software engineering tasks,
consulting with specialists as required.
For larger projects, the software team may be geographically dispersed across a number of different locations.
Hence, the location of each human resource is specified.
The number of people required for a software project can be determined only after an estimate of development effort
(e.g., person-months) is made.
Component-based software engineering (CBSE) emphasizes reusability—that is, the
creation and reuse of software building blocks.
Such building blocks, often called components, must be catalogued for easy reference,
standardized for easy application, and validated for easy integration.
Bennatan suggests four software resource categories that should be considered as planning
proceeds:
• Off-the-shelf components (existing software that can be acquired from a third party or from a past project).
• Full-experience components (existing specifications, designs, code, or test data developed for past projects that
are similar to the software to be built for the current project).
• Partial-experience components (existing specifications, designs, code, or test data developed for past projects
that are related to the software to be built for the current project but will require substantial modification).
• New components (components built by the software team specifically for the needs of the current project).
The environment that supports a software project, often called the software engineering environment (SEE), incorporates
hardware and software.
Hardware provides a platform that supports the tools (software) required to produce the work products that are an outcome of
good software engineering practice.
Because most software organizations have multiple constituencies that require access to the SEE, the planner must prescribe the time window
required for hardware and software and verify that these resources will be available.
When a computer-based system (incorporating specialized hardware and software) is to be engineered, the software team may
require access to hardware elements being developed by other engineering teams.
For example, software for a robotic device used within a manufacturing cell may require a specific robot (e.g., a robotic welder)
as part of the validation test step; a software project for advanced page layout may need a high-speed digital printing system at
some point during development.
Each hardware element must be specified as part of planning.
Software project estimation is a form of problem solving, and in most cases, the problem to be solved (i.e.,
developing a cost and effort estimate for a software project) is too complex to be considered in one piece.
For this reason, you decompose the problem, recharacterizing it as a set of smaller (and hopefully more
manageable) problems.
The decomposition approach can be applied from two different points of view:
•Decomposition of the problem.
•Decomposition of the process.
Estimation uses one or both forms of partitioning.
But before an estimate can be made, you must understand the scope of the software to be built and generate an estimate
of its “size.”
Software Sizing
Problem-Based Estimation
An Example of LOC-Based Estimation
An Example of FP-Based Estimation
Process-Based Estimation
An Example of Process-Based Estimation
Estimation with Use Cases
An Example of Estimation Using Use Case Points
Reconciling Estimates
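As a small illustration of the LOC-based approach listed above, suppose the decomposed functions are estimated to total about 33,000 lines of code, historical productivity is 620 LOC per person-month, and the burdened labor rate is $8,000 per person-month; these figures are assumptions chosen only to show the arithmetic.

```python
# Assumed figures for illustration only.
estimated_loc = 33_000        # sum of LOC estimates over the decomposed functions
productivity = 620            # LOC per person-month, from historical data
labor_rate = 8_000            # dollars per person-month (burdened)

effort_pm = estimated_loc / productivity          # person-months of effort
cost = effort_pm * labor_rate                     # estimated project cost
cost_per_loc = labor_rate / productivity          # dollars per line of code

print(f"effort  : {effort_pm:.0f} person-months")   # about 53 person-months
print(f"cost    : ${cost:,.0f}")                    # about $425,806
print(f"per LOC : ${cost_per_loc:.2f}")             # about $12.90 per line of code
```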
Module V - Software Testing Strategies.pdf

More Related Content

Similar to Module V - Software Testing Strategies.pdf

Software Testing Strategies
Software Testing StrategiesSoftware Testing Strategies
Software Testing StrategiesAlpana Bhaskar
 
Unit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeUnit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeHitesh Mohapatra
 
Software testing strategies And its types
Software testing  strategies And its typesSoftware testing  strategies And its types
Software testing strategies And its typesMITULJAMANG
 
Software Quality Assurance
Software Quality AssuranceSoftware Quality Assurance
Software Quality AssuranceSaqib Raza
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategiesHemanth Gajula
 
Fundamentals of software part 1
Fundamentals of software part 1Fundamentals of software part 1
Fundamentals of software part 1Siddharth Sharma
 
Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Tanzeem Aslam
 
Fundamentals of Software Engineering
Fundamentals of Software Engineering Fundamentals of Software Engineering
Fundamentals of Software Engineering Madhar Khan Pathan
 
Software testing
Software testingSoftware testing
Software testingAshu Bansal
 
Software testing & its technology
Software testing & its technologySoftware testing & its technology
Software testing & its technologyHasam Panezai
 
Mt s5 levels
Mt s5 levelsMt s5 levels
Mt s5 levelsTestingGeeks
 
Testing strategies,techniques & test case SE
Testing strategies,techniques & test case SETesting strategies,techniques & test case SE
Testing strategies,techniques & test case SEMeet1020
 
Software Testing Introduction (Part 3)
 Software Testing Introduction (Part 3) Software Testing Introduction (Part 3)
Software Testing Introduction (Part 3)Thapar Institute
 
Unit 6 final
Unit 6 finalUnit 6 final
Unit 6 finalsietkcse
 
Testing throughout the software life cycle
Testing throughout the software life cycleTesting throughout the software life cycle
Testing throughout the software life cycleMusTufa Nullwala
 
Automated testing overview
Automated testing overviewAutomated testing overview
Automated testing overviewAlex Pop
 
SE Group H.pptx
SE Group H.pptxSE Group H.pptx
SE Group H.pptxStudyvAbhi
 
SENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxSENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxMinsasWorld
 

Similar to Module V - Software Testing Strategies.pdf (20)

Software Testing Strategies
Software Testing StrategiesSoftware Testing Strategies
Software Testing Strategies
 
Unit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coeUnit iv-testing-pune-university-sres-coe
Unit iv-testing-pune-university-sres-coe
 
Software testing strategies And its types
Software testing  strategies And its typesSoftware testing  strategies And its types
Software testing strategies And its types
 
Software Quality Assurance
Software Quality AssuranceSoftware Quality Assurance
Software Quality Assurance
 
software testing strategies
software testing strategiessoftware testing strategies
software testing strategies
 
Fundamentals of software part 1
Fundamentals of software part 1Fundamentals of software part 1
Fundamentals of software part 1
 
Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.Software Testing Strategies ,Validation Testing and System Testing.
Software Testing Strategies ,Validation Testing and System Testing.
 
Testing fundamentals
Testing fundamentalsTesting fundamentals
Testing fundamentals
 
Fundamentals of Software Engineering
Fundamentals of Software Engineering Fundamentals of Software Engineering
Fundamentals of Software Engineering
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing & its technology
Software testing & its technologySoftware testing & its technology
Software testing & its technology
 
Mt s5 levels
Mt s5 levelsMt s5 levels
Mt s5 levels
 
Testing strategies,techniques & test case SE
Testing strategies,techniques & test case SETesting strategies,techniques & test case SE
Testing strategies,techniques & test case SE
 
Lec25
Lec25Lec25
Lec25
 
Software Testing Introduction (Part 3)
 Software Testing Introduction (Part 3) Software Testing Introduction (Part 3)
Software Testing Introduction (Part 3)
 
Unit 6 final
Unit 6 finalUnit 6 final
Unit 6 final
 
Testing throughout the software life cycle
Testing throughout the software life cycleTesting throughout the software life cycle
Testing throughout the software life cycle
 
Automated testing overview
Automated testing overviewAutomated testing overview
Automated testing overview
 
SE Group H.pptx
SE Group H.pptxSE Group H.pptx
SE Group H.pptx
 
SENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptxSENG202-v-and-v-modeling_121810.pptx
SENG202-v-and-v-modeling_121810.pptx
 

Recently uploaded

VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130
VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130
VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130Suhani Kapoor
 
Cosumer Willingness to Pay for Sustainable Bricks
Cosumer Willingness to Pay for Sustainable BricksCosumer Willingness to Pay for Sustainable Bricks
Cosumer Willingness to Pay for Sustainable Bricksabhishekparmar618
 
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai Doux
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai DouxDubai Call Girls Pro Domain O525547819 Call Girls Dubai Doux
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai Douxkojalkojal131
 
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...Amil baba
 
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun service
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun serviceCALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun service
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun serviceanilsa9823
 
The history of music videos a level presentation
The history of music videos a level presentationThe history of music videos a level presentation
The history of music videos a level presentationamedia6
 
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`dajasot375
 
VIP Kolkata Call Girl Gariahat 👉 8250192130 Available With Room
VIP Kolkata Call Girl Gariahat 👉 8250192130  Available With RoomVIP Kolkata Call Girl Gariahat 👉 8250192130  Available With Room
VIP Kolkata Call Girl Gariahat 👉 8250192130 Available With Roomdivyansh0kumar0
 
PORTAFOLIO 2024_ ANASTASIYA KUDINOVA
PORTAFOLIO   2024_  ANASTASIYA  KUDINOVAPORTAFOLIO   2024_  ANASTASIYA  KUDINOVA
PORTAFOLIO 2024_ ANASTASIYA KUDINOVAAnastasiya Kudinova
 
WAEC Carpentry and Joinery Past Questions
WAEC Carpentry and Joinery Past QuestionsWAEC Carpentry and Joinery Past Questions
WAEC Carpentry and Joinery Past QuestionsCharles Obaleagbon
 
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 night
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 nightCheap Rate Call girls Malviya Nagar 9205541914 shot 1500 night
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 nightDelhi Call girls
 
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...ankitnayak356677
 
Fashion trends before and after covid.pptx
Fashion trends before and after covid.pptxFashion trends before and after covid.pptx
Fashion trends before and after covid.pptxVanshNarang19
 
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Design Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryDesign Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryWilliamVickery6
 
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130Suhani Kapoor
 
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...babafaisel
 
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...Suhani Kapoor
 
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Service
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts ServiceCall Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Service
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Servicejennyeacort
 

Recently uploaded (20)

VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130
VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130
VIP Call Girls Service Kukatpally Hyderabad Call +91-8250192130
 
Cosumer Willingness to Pay for Sustainable Bricks
Cosumer Willingness to Pay for Sustainable BricksCosumer Willingness to Pay for Sustainable Bricks
Cosumer Willingness to Pay for Sustainable Bricks
 
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai Doux
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai DouxDubai Call Girls Pro Domain O525547819 Call Girls Dubai Doux
Dubai Call Girls Pro Domain O525547819 Call Girls Dubai Doux
 
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...
NO1 Famous Amil Baba In Karachi Kala Jadu In Karachi Amil baba In Karachi Add...
 
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun service
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun serviceCALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun service
CALL ON ➥8923113531 🔝Call Girls Aminabad Lucknow best Night Fun service
 
The history of music videos a level presentation
The history of music videos a level presentationThe history of music videos a level presentation
The history of music videos a level presentation
 
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`
Abu Dhabi Call Girls O58993O4O2 Call Girls in Abu Dhabi`
 
VIP Kolkata Call Girl Gariahat 👉 8250192130 Available With Room
VIP Kolkata Call Girl Gariahat 👉 8250192130  Available With RoomVIP Kolkata Call Girl Gariahat 👉 8250192130  Available With Room
VIP Kolkata Call Girl Gariahat 👉 8250192130 Available With Room
 
PORTAFOLIO 2024_ ANASTASIYA KUDINOVA
PORTAFOLIO   2024_  ANASTASIYA  KUDINOVAPORTAFOLIO   2024_  ANASTASIYA  KUDINOVA
PORTAFOLIO 2024_ ANASTASIYA KUDINOVA
 
WAEC Carpentry and Joinery Past Questions
WAEC Carpentry and Joinery Past QuestionsWAEC Carpentry and Joinery Past Questions
WAEC Carpentry and Joinery Past Questions
 
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 night
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 nightCheap Rate Call girls Malviya Nagar 9205541914 shot 1500 night
Cheap Rate Call girls Malviya Nagar 9205541914 shot 1500 night
 
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...
Raj Nagar Extension Call Girls 9711199012 WhatsApp No, Delhi Escorts in Raj N...
 
Fashion trends before and after covid.pptx
Fashion trends before and after covid.pptxFashion trends before and after covid.pptx
Fashion trends before and after covid.pptx
 
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Okhla Delhi 💯Call Us 🔝8264348440🔝
 
Design Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William VickeryDesign Portfolio - 2024 - William Vickery
Design Portfolio - 2024 - William Vickery
 
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
VIP Call Girls Service Bhagyanagar Hyderabad Call +91-8250192130
 
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...
Kala jadu for love marriage | Real amil baba | Famous amil baba | kala jadu n...
 
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Harsh Vihar (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...
VIP Russian Call Girls in Gorakhpur Deepika 8250192130 Independent Escort Ser...
 
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Service
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts ServiceCall Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Service
Call Girls In Safdarjung Enclave 24/7✡️9711147426✡️ Escorts Service
 

Module V - Software Testing Strategies.pdf

  • 2. A Strategic Approach to Software Testing • Test Strategies for Conventional Software • Validation Testing • System Testing Estimation for Software Projects • The Project Planning Process • Resources • Decomposition Techniques
  • 3.
  • 4. A strategy for software testing provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Any testing strategy must incorporate: • Test planning • Test-case design • Test execution • Resultant data collection • Evaluation A software testing strategy should be flexible enough to promote a customized testing approach.
  • 5. Testing is a set of activities that can be planned in advance and conducted systematically. A template for software testing — a set of steps into which can place specific test-case design techniques and testing methods — should be defined for the software process. A number of software testing strategies have been proposed in the literature. A template for testing have the following generic characteristics: •To perform effective testing, one should conduct effective technical reviews. •Many errors will be eliminated before testing commences. •Testing begins at the component level and works “outward” toward the integration of the entire computer-based system. •Different testing techniques are appropriate for different software engineering approaches and at different points in time. •Testing is conducted by the developer of the software and (for large projects) an independent test group. •Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
  • 6. Verification and Validation • Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). • Verification refers to the set of tasks that ensure that software correctly implements a specific function. • Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. • Boehm states this another way: • Verification: “Are we building the product right?” • Validation: “Are we building the right product?”
  • 7. Verification and Validation… • Verification and validation includes a wide array of SQA activities: • Technical reviews • Quality and configuration audits • Performance monitoring • Simulation • Feasibility study • Documentation review • Database review • Algorithm analysis • Development testing • Usability testing • Qualification testing • Acceptance testing • Installation testing
  • 8. Organizing for Software Testing •For every software project, there is an inherent conflict of interest that occurs as testing begins. •The people who have built the software are now asked to test the software.
  • 10. Software Testing Strategy—The Big Picture… Software testing steps
  • 11. Criteria for Completion of Testing • A classic question arises every time software testing is discussed: • “When to do testing? — how do we know that we’ve tested enough?” • Ans: Every time the user executes a computer program, the program is being tested. • Ans: “You’re done testing when you run out of time or you run out of money”.
  • 12. Many strategies can be used to test software. Fully constructed system – then conduct test – does not work. Conduct tests on a daily basis, whenever any part of the system is constructed. A testing strategy that is chosen by many software teams falls between the two extremes. • It takes an incremental view of testing, beginning with the testing of individual program units, moving to tests designed to facilitate the integration of the units (sometimes on a daily basis) • Culminating with tests that exercise the constructed system.
  • 14. Unit testing focuses verification effort on the smallest unit of software design -- the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and the errors those tests uncover is limited by the constrained scope established for unit testing. The unit test focuses on the internal processing logic and data structures within the boundaries of a component. Unit testing can be conducted in parallel for multiple components.
  • 15. Unit Test Considerations • The module interface is tested to ensure that information properly flows into and out of the program unit under test. • Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. • All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. • Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. • All error-handling paths are tested. • Data flow across a component interface is tested before any other testing is initiated. • If data do not enter and exit properly, all other tests are moot. • Local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing. • Selective testing of execution paths is an essential task during the unit test. • Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow. • Boundary testing is one of the most important unit testing tasks. • Software often fails at its boundaries. Unit Test
  • 16. Unit Test Considerations… • If error-handling paths are implemented, they must be tested. • The potential errors that should be tested when error handling is evaluated are: • (1) Error description is unintelligible • (2) Error noted does not correspond to error encountered • (3) Error condition causes system intervention prior to error handling • (4) Exception-condition processing is incorrect • (5) Error description does not provide enough information to assist in the location of the cause of the error.
  • 17. Unit Test Procedures • Unit testing is normally considered as an adjunct to the coding step. • The design of unit tests can occur before coding begins or after source code has been generated. • A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. • Each test case should be coupled with a set of expected results. • A component is not a stand-alone program, driver and/or stub software must often be developed for each unit test. • A driver is nothing more than a “main program” that accepts test- case data, passes such data to the component (to be tested), and prints relevant results. • Stubs serve to replace modules that are subordinate (invoked by) the component to be tested. • A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
  • 18. After Unit testing, “If they all work individually, why do you doubt that they’ll work when we put them together?” The problem is “putting them together”—interfacing. • Data can be lost across an interface. • One component can have an inadvertent, adverse effect on another. • Subfunctions, when combined, may not produce the desired major function. • Individually acceptable imprecision may be magnified to unacceptable level. • Global data structures can present problems. Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design. Incremental integration is the antithesis of the big bang approach.
  • 19. A number of different incremental integration strategies are discussed. • Top-Down Integration. • Bottom-Up Integration. • Regression Testing. • Smoke Testing. • Integration Test Work Products.
  • 20. Top-Down Integration. • Top-down integration testing is an incremental approach to the construction of the software architecture. • Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). • Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
  • 21. Top-Down Integration... • The integration process is performed in a series of five steps: • 1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module. • 2. Depending on the integration approach selected (i.e., depth or breadth-first), subordinate stubs are replaced one at a time with actual components. • 3. Tests are conducted as each component is integrated. • 4. On completion of each set of tests, another stub is replaced with the real component. • 5. Regression testing may be conducted to ensure that new errors have not been introduced.
  • 22. Bottom-Up Integration. • Begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). • Because components are integrated from the bottom up, the functionality provided by components subordinate to a given level is always available and the need for stubs is eliminated. • A bottom-up integration strategy may be implemented with the following steps: • 1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction. • 2. A driver (a control program for testing) is written to coordinate test-case input and output. • 3. The cluster is tested. • 4. Drivers are removed and clusters are combined moving upward in the program structure.
  • 24. Regression Testing. • Each time a new module is added as part of integration testing, the software changes. • New data flow paths are established, new I/O may occur, and new control logic is invoked. • Side effects associated with these changes may cause problems with functions that previously worked flawlessly. • In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects. • Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors. • Regression testing may be conducted manually, by re-executing a subset of all test cases or using automated capture/playback tools. • Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
  • 25. Regression Testing... • The regression test suite contains three different classes of test cases: • A representative sample of tests that will exercise all software functions. • Additional tests that focus on software functions that are likely to be affected by the change. • Tests that focus on the software components that have been changed. • As integration testing proceeds, the number of regression tests can grow quite large. • The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
  • 26. Smoke Testing. • Smoke testing is an integration testing approach that is commonly used when product software is developed. • Designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project frequently. • The smoke-testing approach encompasses the following activities: • 1. Software components that have been translated into code are integrated into a build. • A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions. • 2. A series of tests is designed to expose errors that will keep the build from properly performing its function. • The intent should be to uncover “show-stopper” errors that have the highest likelihood of throwing the software project behind schedule. • 3. The build is integrated with other builds, and the entire product is smoke-tested daily. • The integration approach may be top-down or bottom-up.
  • 27. Smoke Testing… • Smoke testing provides many benefits when it is applied to complex, time-critical software projects: • Integration risk is minimized. • The quality of the end product is improved. • Error diagnosis and correction are simplified. • Progress is easier to assess.
  • 28. Integration Test Work Products. • An overall plan for integration of the software and a description of specific tests is documented in a Test Specification. • This work product incorporates a test plan and a test procedure and becomes part of the software configuration. • Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. • Each of integration test phase delineates a broad functional category within the software and generally can be related to a specific domain within the software architecture. • Program builds (groups of modules) are created to correspond to each phase. • A schedule for integration, the development of overhead software, and related topics -- as part of the test plan. • Start and end dates for each phase are established and “availability windows” for unit-tested modules are defined.
  • 29. Integration Test Work Products… • A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. • Finally, the test environment and resources are described. • Unusual hardware configurations, exotic simulators, and special test tools or techniques are a few of many topics to be considered. • The detailed testing procedure that is required to accomplish the test plan is described next. • The order of integration and the corresponding tests at each integration step are described. • A listing of all test cases and expected results is also included. • A history of actual test results, problems, or peculiarities is recorded in a Test Report that can be appended to the Test Specification, if desired. • Information contained in the Test Report can be vital during software maintenance. • Appropriate references and appendixes are also presented.
  • 30. Validation testing begins at the culmination of integration testing, when individual components have been exercised, the software is completely assembled as a package, and interfacing errors have been uncovered and corrected. At the validation or system level, the distinction between different software categories disappears. Testing focuses on user-visible actions and user-recognizable output from the system. Validation can be defined in many ways, but a simple definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. If a Software Requirements Specification has been developed, it describes all user-visible attributes of the software and contains a Validation Criteria section that forms the basis for a validation-testing approach.
  • 32. Software validation is achieved through a series of tests that demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that are designed to ensure that: • All functional requirements are satisfied. • All behavioral characteristics are achieved. • All content is accurate and properly presented. • All performance requirements are attained. • Documentation is correct. • Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability). If a deviation from specification is uncovered, a deficiency list is created. A method for resolving deficiencies (acceptable to stakeholders) must be established.
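As a rough sketch of how a deficiency list can be derived, the example below (requirement IDs, requirement text, and test names are invented) marks a requirement as deficient when it has no validation test or when its test fails.

```python
REQUIREMENTS = {
    "FR-01": "System shall compute order totals including tax",
    "FR-02": "System shall export reports as PDF",
    "PR-01": "Search results shall be returned in under 2 seconds",
}

# Each validation test case names the requirement it demonstrates.
test_results = [
    {"case": "VT-101", "requirement": "FR-01", "passed": True},
    {"case": "VT-102", "requirement": "FR-02", "passed": False},
    # PR-01 has no test case yet -- also a deficiency.
]

def build_deficiency_list(requirements, results):
    tested  = {r["requirement"] for r in results}
    passed  = {r["requirement"] for r in results if r["passed"]}
    deficiencies = []
    for req_id, text in requirements.items():
        if req_id not in tested:
            deficiencies.append((req_id, "no validation test defined", text))
        elif req_id not in passed:
            deficiencies.append((req_id, "validation test failed", text))
    return deficiencies

for item in build_deficiency_list(REQUIREMENTS, test_results):
    print(item)
```

Each entry on the list would then be resolved through whatever deficiency-handling method the stakeholders have agreed on.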
  • 33. An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are catalogued, and have the necessary detail to bolster the support activities. The configuration review is sometimes called an audit.
  • 34. It is virtually impossible for a software developer to foresee how the customer will really use a program. • Instructions for use may be misinterpreted. • Strange combinations of data may be used. • Output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
  • 35. Alpha Test • The alpha test is conducted at the developer’s site by a representative group of end users. • The software is used in a natural setting with the developer “looking over the shoulder” of the users and recording errors and usage problems. • Alpha tests are conducted in a controlled environment. Beta Test • The beta test is conducted at one or more end-user sites. • Unlike alpha testing, the developer generally is not present. • The beta test is a “live” application of the software in an environment that cannot be controlled by the developer. • The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. • As a result of problems reported during beta tests, the software team makes modifications and then prepares for release of the software product to the entire customer base. Customer Acceptance Test • A variation on beta testing, called customer acceptance testing, is sometimes performed when custom software is delivered to a customer under contract. • The customer performs a series of specific tests in an attempt to uncover errors before accepting the software from the developer. • In some cases, acceptance testing can be very formal and encompass many days or even weeks of testing.
  • 36. A classic system-testing problem is “finger pointing.” This occurs when an error is uncovered and the developers of different system elements blame each other for the problem. To anticipate potential interfacing problems: • (1) Design error-handling paths that test all information coming from other elements of the system. • (2) Conduct a series of tests that simulate bad data or other potential errors at the software interface. • (3) Record the results of tests to use as “evidence” if finger pointing does occur. • (4) Participate in planning and design of system tests to ensure that software is adequately tested.
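Points (1) and (2) can be illustrated with a small sketch: an error-handling path that checks everything arriving from another system element, plus tests that deliberately feed it bad data. The interface, field names, and value ranges are hypothetical, and the tests assume pytest is available.

```python
import pytest

def receive_sensor_reading(payload):
    """Error-handling path for data arriving from another system element."""
    if not isinstance(payload, dict):
        raise ValueError("payload must be a mapping")
    try:
        value = float(payload["value"])
    except (KeyError, TypeError, ValueError):
        raise ValueError("missing or non-numeric 'value' field")
    if not -50.0 <= value <= 150.0:
        raise ValueError(f"reading {value} is outside the physical range")
    return value

# Interface tests that simulate bad data coming across the boundary:
@pytest.mark.parametrize("bad", [None, "garbage", {}, {"value": "n/a"}, {"value": 9999}])
def test_bad_data_is_rejected(bad):
    with pytest.raises(ValueError):
        receive_sensor_reading(bad)
```

Keeping the recorded test results as part of the test record supplies the “evidence” mentioned in point (3).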
  • 37. Types of system testing: • Recovery Testing • Security Testing • Stress Testing • Performance Testing • Deployment Testing
  • 38. Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
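An automated recovery test can be sketched as below: force a failure, then time how long the system takes to become healthy again and compare that against an acceptable limit. The service name, the systemctl commands, and the limit are illustrative.

```python
import subprocess
import time

MAX_ACCEPTABLE_RECOVERY_SECONDS = 30

def service_is_healthy():
    """Probe the system under test; replace with a real health check."""
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", "demo-service"]
    ).returncode == 0

def test_recovery_time():
    # Force the failure.
    subprocess.run(["systemctl", "kill", "demo-service"])
    started = time.monotonic()

    # Wait for automatic recovery (restart, checkpoint reload, etc.).
    while not service_is_healthy():
        if time.monotonic() - started > MAX_ACCEPTABLE_RECOVERY_SECONDS:
            raise AssertionError("recovery exceeded the acceptable limit")
        time.sleep(1)

    print(f"recovered in {time.monotonic() - started:.1f}s")
```

For manual recovery, the same measurement loop would instead record the elapsed repair time for comparison against the agreed MTTR.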
  • 39. Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. “The system’s security must be tested for invulnerability from frontal attack — but must also be tested for invulnerability from flank or rear attack”. Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make penetration cost more than the value of the information that will be obtained.
  • 40. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: •(1) Special tests may be designed that generate 10 interrupts per second, when one or two is the average rate. •(2) Input data rates may be increased by an order of magnitude to determine how input functions will respond. •(3) Test cases that require maximum memory or other resources are executed. •(4) Test cases that may cause thrashing in a virtual operating system are designed. •(5) Test cases that may cause excessive hunting for disk-resident data are created. Essentially, the tester attempts to break the program. A variation of stress testing is a technique called sensitivity testing. In some situations, a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
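Point (2), driving input rates an order of magnitude beyond normal, might be sketched as follows; the message-processing function and the rates are illustrative.

```python
import time

def process_message(msg):
    """Stand-in for the input function under test."""
    return msg.upper()

def stress_input_rate(messages_per_second, duration_seconds):
    sent = handled = 0
    interval = 1.0 / messages_per_second
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        sent += 1
        try:
            process_message(f"msg-{sent}")
            handled += 1
        except Exception:
            pass                      # a drop under overload is data, not a crash
        time.sleep(interval)
    return sent, handled

# If the normal rate is ~100 msg/s, stress it an order of magnitude higher:
# sent, handled = stress_input_rate(1000, 10)
```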
  • 41. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation. It is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a system, the tester can uncover situations that lead to degradation and possible system failure.
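A lightweight form of software instrumentation is sketched below: time an execution interval and sample peak memory allocation while a workload runs. The workload passed in stands for whatever scenario the performance test defines.

```python
import time
import tracemalloc

def run_instrumented(workload, *args):
    tracemalloc.start()
    t0 = time.perf_counter()

    result = workload(*args)

    elapsed = time.perf_counter() - t0
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"elapsed: {elapsed * 1000:.2f} ms, peak allocation: {peak / 1024:.1f} KiB")
    return result

# Example: run_instrumented(sorted, list(range(1_000_000, 0, -1)))
```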
  • 42. In many cases, software must execute on a variety of platforms and under more than one operating system environment. Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g., “installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users.
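One way to organize deployment testing is an explicit environment matrix, sketched below with invented platform and interpreter versions; in practice each combination runs on its own machine or CI runner rather than in a single loop.

```python
import itertools

PLATFORMS = ["ubuntu-22.04", "windows-2022", "macos-14"]
PYTHON_VERSIONS = ["3.10", "3.12"]

def deployment_matrix():
    """Every environment in which the product must be exercised."""
    return list(itertools.product(PLATFORMS, PYTHON_VERSIONS))

for platform, py in deployment_matrix():
    # Each runner would execute the full test suite plus the installer checks.
    print(f"[{platform} / python {py}] would run: python -m pytest tests/ --tb=short")
```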
  • 44. Software project management begins with a set of activities that are collectively called project planning. Before the project can begin, the software team should estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. Once these activities are accomplished, the software team should establish a project schedule that defines software engineering tasks and milestones, identifies who is responsible for conducting each task, and specifies the intertask dependencies that may have a strong bearing on progress.
  • 45. The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule. In addition, estimates should attempt to define best-case and worst-case scenarios so that project outcomes can be bounded. Although there is an inherent degree of uncertainty, the software team embarks on a plan that has been established as a consequence of these tasks. Therefore, the plan must be adapted and updated as the project proceeds.
  • 46. The three major categories of software engineering resources: • People • Reusable software components • The development environment (hardware and software tools). Each resource is specified with four characteristics: • Description of the resource. • A statement of availability. • Time when the resource will be required. • Duration of time that the resource will be applied. The last two characteristics can be viewed as a time window. Availability of the resource for a specified window must be established at the earliest practical time.
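The four characteristics map naturally onto a small record per resource, as in the sketch below; the concrete resources, dates, and durations are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PlannedResource:
    description: str        # what the resource is
    available: bool         # statement of availability
    needed_from: date       # when the resource will be required
    duration_weeks: int     # how long it will be applied

resources = [
    PlannedResource("Senior database engineer", True, date(2024, 3, 1), 8),
    PlannedResource("Load-testing lab (4 servers)", False, date(2024, 4, 15), 2),
]

for r in resources:
    window = f"from {r.needed_from} for {r.duration_weeks} weeks"
    print(f"{r.description}: available={r.available}, time window: {window}")
```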
  • 47. The planner begins by evaluating software scope and selecting the skills required to complete development. Both organizational position (e.g., manager, senior software engineer) and specialty (e.g., telecommunications, database, client-server) are specified. For relatively small projects (a few person-months), a single individual may perform all software engineering tasks, consulting with specialists as required. For larger projects, the software team may be geographically dispersed across a number of different locations. Hence, the location of each human resource is specified. The number of people required for a software project can be determined only after an estimate of development effort (e.g., person-months) is made.
  • 48. Component-based software engineering (CBSE) emphasizes reusability—that is, the creation and reuse of software building blocks. Such building blocks, often called components, must be catalogued for easy reference, standardized for easy application, and validated for easy integration. Bennatan suggests four software resource categories that should be considered as planning proceeds: • Off-the-shelf components (existing software that can be acquired from a third party or from a past project). • Full-experience components (existing specifications, designs, code, or test data developed for past projects that are similar to the software to be built for the current project). • Partial-experience components (existing specifications, designs, code, or test data developed for past projects that are related to the software to be built for the current project but will require substantial modification). • New components (components built by the software team specifically for the needs of the current project).
  • 49. The environment that supports a software project, often called the software engineering environment (SEE), incorporates hardware and software. Hardware provides a platform that supports the tools (software) required to produce the work products that are an outcome of good software engineering practice. Because most software organizations have multiple constituencies that require access to the SEE, the planner must prescribe the time window required for hardware and software and verify that these resources will be available. When a computer-based system (incorporating specialized hardware and software) is to be engineered, the software team may require access to hardware elements being developed by other engineering teams. For example, software for a robotic device used within a manufacturing cell may require a specific robot (e.g., a robotic welder) as part of the validation test step; a software project for advanced page layout may need a high-speed digital printing system at some point during development. Each hardware element must be specified as part of planning.
  • 50. Software project estimation is a form of problem solving, and in most cases, the problem to be solved (i.e., developing a cost and effort estimate for a software project) is too complex to be considered in one piece. For this reason, the problem is decomposed and recharacterized as a set of smaller (and, hopefully, more manageable) problems. The decomposition approach can be taken from two different points of view: • Decomposition of the problem. • Decomposition of the process. Estimation uses one or both forms of partitioning. But before an estimate can be made, you must understand the scope of the software to be built and generate an estimate of its “size” (a small worked example follows).
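A small worked example of problem decomposition with LOC-based sizing is sketched below; the decomposed functions, LOC estimates, productivity figure, and labor rate are illustrative numbers, not data from any real project.

```python
# Decompose the product into functions and estimate the size of each.
estimated_loc = {
    "user interface": 2300,
    "database management": 3400,
    "report generation": 4950,
}

total_loc = sum(estimated_loc.values())     # estimated size of the product
productivity_loc_per_pm = 620               # LOC per person-month, from historical data
labor_rate_per_pm = 8000                    # currency units per person-month

effort_pm = total_loc / productivity_loc_per_pm
cost = effort_pm * labor_rate_per_pm

print(f"Estimated size:   {total_loc} LOC")
print(f"Estimated effort: {effort_pm:.1f} person-months")
print(f"Estimated cost:   {cost:,.0f}")
```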
  • 51. • Software Sizing • Problem-Based Estimation • An Example of LOC-Based Estimation • An Example of FP-Based Estimation • Process-Based Estimation • An Example of Process-Based Estimation • Estimation with Use Cases • An Example of Estimation Using Use Case Points • Reconciling Estimates