SOFTWARE TESTING
Military Aviation Problems
• An F-18 crashed because of a missing
exception condition: an if ... then ... without
the else clause, for a case that was thought
could never arise.
• In simulation, an F-16 program bug caused
the virtual plane to flip over whenever it
crossed the equator, as a result of a missing
minus sign to indicate south latitude.
Therac-25 Radiation “Therapy”
• In Texas, 1986, a man received between 16,500
and 25,000 rads in less than 1 second, over an
area of about 1 cm.
• He lost his left arm, and died of complications 5
months later.
• In Texas, 1986, a man received at least 4,000 rads
in the right temporal lobe of his brain.
• The patient eventually died as a result of the
overdose.
Therac-25 Radiation
“Therapy” (Cont’d)
• In Washington, 1987, a patient received
8,000-10,000 rads instead of the prescribed
86 rads.
• The patient died of complications of the
radiation overdose.
Lack of proper software testing.
Software Testing
• One of the practical methods commonly used to detect
the presence of errors (failures) in a computer
program is to test it for a set of inputs.
[Figure: a set of inputs I1, I2, I3, …, In, … is fed to the program under test;
the obtained results are compared with the expected results to decide whether
the output is correct.]
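The figure describes the basic test loop: run the program on chosen inputs and compare what is obtained with what is expected. A minimal sketch of that loop, assuming a hypothetical program_under_test function (absolute value is used only as a stand-in):

```python
# A sketch of the test loop in the figure above.

def program_under_test(x: int) -> int:
    """Hypothetical unit under test: should return |x|."""
    return x if x >= 0 else -x

# (input, expected result) pairs: I1, I2, ..., In with their expected outputs.
test_cases = [(0, 0), (5, 5), (-3, 3), (-100, 100)]

for value, expected in test_cases:
    obtained = program_under_test(value)
    verdict = "PASS" if obtained == expected else "FAIL"
    print(f"input={value:5}  expected={expected:5}  obtained={obtained:5}  {verdict}")
```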
Objectives of testing
• Executing a program with the intent of finding
an error.
• To check whether the system meets the requirements
and can be executed successfully in the intended
environment.
• To check whether the system is “fit for purpose”.
• To check if the system does what it is expected
to do.
Testing facts
• A good test case is one that has a high probability
of finding a yet-undiscovered error.
• A successful test is one that uncovers a yet-
undiscovered error.
• A good test is not redundant.
• A good test should be “best of breed”.
• A good test should neither be too simple nor
too complex.
Goal of a software tester
• … to find bugs
• … as early in the software development
process as possible
• … and make sure they get fixed.
Advice: Be careful not to get caught in the
dangerous spiral of unattainable perfection.
LEVELS OF TESTING
Unit Testing
• Algorithms and logic
• Data structures (global and local)
• Interfaces
• Independent paths
• Boundary conditions
• Error handling
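A minimal unit-test sketch touching several of the concerns listed above (logic, boundary conditions, error handling). The function days_in_month is a hypothetical stand-in; a real test suite would target the project's own units:

```python
import unittest

def days_in_month(month: int) -> int:
    """Hypothetical unit under test (ignores leap years)."""
    if not 1 <= month <= 12:
        raise ValueError("month must be in 1..12")
    return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]

class DaysInMonthTest(unittest.TestCase):
    def test_typical_values(self):            # algorithms and logic
        self.assertEqual(days_in_month(2), 28)
        self.assertEqual(days_in_month(9), 30)

    def test_boundary_conditions(self):       # boundary conditions
        self.assertEqual(days_in_month(1), 31)    # lower boundary
        self.assertEqual(days_in_month(12), 31)   # upper boundary

    def test_error_handling(self):            # error handling
        with self.assertRaises(ValueError):
            days_in_month(0)
        with self.assertRaises(ValueError):
            days_in_month(13)

if __name__ == "__main__":
    unittest.main()
```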
Why Integration Testing Is
Necessary
• One module can have an adverse effect on
another
• Subfunctions, when combined, may not
produce the desired major function
• Individually acceptable imprecision in
calculations may be magnified to
unacceptable levels
Why Integration Testing Is
Necessary (cont’d)
• Interfacing errors not detected in unit testing
may appear
• Timing problems (in real-time systems) are
not detectable by unit testing
• Resource contention problems are not
detectable by unit testing
System testing
• SYSTEM TESTING is a level of testing that validates
the complete and fully integrated software product.
• The purpose of a system test is to evaluate the end-
to-end system specifications.
• Usually, the software is only one element of a larger
computer-based system.
• Ultimately, the software is interfaced with other
software/hardware systems.
• System Testing is actually a series of different tests
whose sole purpose is to exercise the full computer-
based system.
Acceptance Testing
• Similar to validation testing except that
customers are present or directly involved.
• Usually the tests are developed by the
customer
Alpha and Beta Testing
• It’s best to provide customers with an outline
of the things that you would like them to
focus on and specific test scenarios for them
to execute.
• Work with customers who are actively
involved, and commit to fixing the defects
that they discover.
Testing as a Process
Verification
1. Checks whether the software conforms to its
specifications.
2. Evaluating a software system to determine
whether the products of a given development
phase satisfy the conditions imposed at the
start of that phase.
3. Verification is usually associated with activities
such as inspections and reviews of the software
deliverables.
Validation
1. Checks whether the software meets the
customer's expectations and requirements.
2. The process of evaluating a software system or
component during or at the end of the development
process to determine whether it satisfies the
specified requirements.
3. Validation is usually associated with traditional
execution-based testing, i.e., exercising the code
with test cases.
Testing Axioms
• Axioms are rules or facts that are taken to be true.
• It is impossible to test a program completely.
• Software testing is a risk-based exercise.
• Testing can't show that bugs don't exist.
• The more bugs you find, the more bugs there are.
• The pesticide paradox: the more you test software,
the more immune it becomes to your tests.
• Not all the bugs that you find will be fixed.
Why Can’t Every Bug be Found?
• Too many possible paths.
• Too many possible inputs.
• Too many possible user environments.
Too Many Possible Paths
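One way to see why there are "too many paths": every independent if-statement doubles the number of distinct execution paths, so even a modest function has more paths than any test suite can exercise. A small illustrative sketch (the branch counts are purely hypothetical, chosen to show the growth):

```python
# Every independent if-statement doubles the number of distinct execution
# paths through a function; loops multiply the count further.

def count_paths(independent_branches: int) -> int:
    """Paths through straight-line code with n independent if-statements."""
    return 2 ** independent_branches

for n in (10, 20, 30, 40):
    print(f"{n} independent branches -> {count_paths(n):,} execution paths")
# 40 branches already give 1,099,511,627,776 paths: exhaustive path testing
# is infeasible even for small programs.
```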
Too Many Possible Inputs
• Programs take input in a variety of ways:
mouse, keyboard, and other devices.
• Both valid and invalid inputs must be tested.
• Most importantly, there is an infinite number
of input sequences to be tested.
Too Many Possible User
Environments
• Difficult to replicate the user’s combination of
hardware, peripherals, OS, and applications.
• Impossible to replicate a thousand-node network
to test networking software.
Cost of defects
Relative cost of bugs
“bugs found later cost more to fix”
• The cost to fix a bug increases exponentially
(roughly 10×), i.e., it increases tenfold at each
later stage at which the bug is found.
• E.g., a bug found during specification costs $1
to fix.
• … if found in design cost is $10
• … if found in code cost is $100
• … if found in released software cost is $1000
Test case design strategies
White box testing
Black box testing
Black Box Testing
• The technique of testing without having any
knowledge of the interior workings of the
application is called black-box testing.
• The tester is oblivious to the system architecture
and does not have access to the source code.
• The tester interacts with the system's user
interface by providing inputs and examining
outputs, without knowing how and where the
inputs are processed.
Advantages
• Well suited and efficient for large code segments.
• Code access is not required.
• Clearly separates the user's perspective from the
developer's perspective through visibly defined roles.
• Large numbers of moderately skilled testers can test
the application with no knowledge of the implementation,
programming language, or operating system.
Disadvantages
• Limited coverage, since only a selected number of test
scenarios is actually performed.
• Inefficient testing, because the tester has only limited
knowledge of the application.
• Blind coverage, since the tester cannot target specific
code segments or error-prone areas.
• Test cases are difficult to design.
White-Box Testing
• White-box testing is the detailed investigation of the
internal logic and structure of the code. White-box
testing is also called glass-box testing or open-box
testing.
• To perform white-box testing on an application, the
tester needs to know the internal workings of the
code.
• The tester needs to look inside the source code and
find out which unit/chunk of the code is behaving
inappropriately.
Advantages
• As the tester has knowledge of the source code, it
becomes very easy to find out which types of data can
help in testing the application effectively.
• It helps in optimizing the code.
• Extra lines of code, which can bring in hidden defects,
can be removed.
• Because of the tester's knowledge of the code, maximum
coverage is attained during test-scenario writing.
Disadvantages
• Because a skilled tester is needed to perform white-box
testing, the costs are increased.
• It is sometimes impossible to look into every nook and
corner to find hidden errors that may create problems,
as many paths will go untested.
• White-box testing is difficult to maintain, as it requires
specialized tools like code analyzers and debugging tools.
CODE COVERAGE, BRANCH COVERAGE, CONDITION COVERAGE
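A hedged sketch of what these three structural coverage criteria ask of a test suite, using a deliberately tiny function (grant_discount and its 10% / 100 threshold are invented purely for illustration):

```python
def grant_discount(is_member: bool, total: float) -> float:
    if is_member and total > 100:   # one decision made of two conditions
        return total * 0.9
    return total

# Code (statement) coverage: (True, 200) and (False, 50) execute every line.
# Branch coverage: the same two tests take the decision both ways.
# Condition coverage: each individual condition must be both True and False.
#   Because `and` short-circuits, (False, 50) never evaluates `total > 100`,
#   so an extra case such as (True, 50) is needed.
tests = [(True, 200), (False, 50), (True, 50)]
for is_member, total in tests:
    print(is_member, total, "->", grant_discount(is_member, total))
```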
Boundary Value Analysis
... -2 -1 0 | 1 ................. 12 | 13 14 15 ...
(invalid partition 1 | valid partition | invalid partition 2)
• The boundary between two partitions is the place
where the behavior of the application changes and
is not a real number itself.
• The number 0 is the maximum value in the first
(invalid) partition and the number 1 is the minimum
value in the valid partition; both are boundary values.
• In the example above there are boundary values at
0, 1 and 12, 13, and each should be tested.
• Take an example where a heater is turned on
if the temperature is 10 degrees or colder.
There are two partitions (temperature<=10,
temperature>10) and two boundary values to
be tested (temperature=10, temperature=11).
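A minimal sketch of the heater example as executable boundary-value checks; heater_should_turn_on is a hypothetical stand-in for the real control logic:

```python
def heater_should_turn_on(temperature: int) -> bool:
    """The heater turns on at 10 degrees or colder."""
    return temperature <= 10

# Test the values on either side of the point where behaviour changes.
assert heater_should_turn_on(10) is True    # boundary of the "on" partition
assert heater_should_turn_on(11) is False   # boundary of the "off" partition
# Values deep inside each partition are often added as sanity checks.
assert heater_should_turn_on(-5) is True
assert heater_should_turn_on(25) is False
print("boundary-value checks passed")
```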
Equivalence class partitioning
• Divides the input domain of the program into
classes of data from which test cases can be
derived.
• E.g., issuing a voter ID based on age:

S.No  Equivalence Partition   Type of Input  Test Data  Expected Result
1     Age above 18            Valid          24         Id will be given
2     Age below 18            Invalid        16         Id will not be given
3     Age as 0                Invalid        0          Warning message - invalid input
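The same partitions expressed as one representative check per equivalence class; issue_voter_id is a hypothetical stand-in for the real system under test:

```python
def issue_voter_id(age: int) -> str:
    """Hypothetical system under test."""
    if age <= 0:
        return "warning: invalid input"
    return "id issued" if age >= 18 else "id not issued"

# One representative value per equivalence class, as in the table above.
partitions = [
    ("age above 18 (valid)",   24, "id issued"),
    ("age below 18 (invalid)", 16, "id not issued"),
    ("age as 0 (invalid)",      0, "warning: invalid input"),
]

for description, test_data, expected in partitions:
    obtained = issue_voter_id(test_data)
    assert obtained == expected, f"{description}: expected {expected!r}, got {obtained!r}"
    print(f"{description:24} -> {obtained}")
```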
State based testing
Non-functional testing
• Non-functional testing is the testing of a software application
or system for its non-functional requirements: the way a system
operates, rather than the specific behaviours of that system.
• This is in contrast to functional testing, which tests against
functional requirements that describe the functions of a system
and its components.
• The names of many non-functional tests are often used
interchangeably because of the overlap in scope between
various non-functional requirements.
• For example, software performance is a broad term that
includes many specific requirements like reliability and
scalability.
Non-functional testing includes
• Baseline testing
• Compliance testing
• Documentation testing
• Endurance testing or reliability testing
• Load testing
• Localization testing and Internationalization testing
• Performance testing
• Recovery testing
• Resilience testing
• Security testing
• Scalability testing
• Stress testing
• Usability testing
• Volume testing
Volume testing
• Volume testing refers to testing a software application with a certain
amount of data.
• This amount can, in generic terms, be the database size or it could also
be the size of an interface file that is the subject of volume testing.
• For example, if you want to volume test your application with a specific
database size, you will expand your database to that size and then test
the application's performance on it.
• Another example could be when there is a requirement for your
application to interact with an interface file (could be any file such
as .dat, .xml); this interaction could be reading and/or writing on to/from
the file.
• You will create a sample file of the size you want and then test the
application's functionality with that file in order to test the performance.
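A hedged sketch of the interface-file scenario described above: generate a sample file of roughly the target size, then time the processing step. create_sample_file and process_interface_file are hypothetical stand-ins; a real volume test would exercise the actual application:

```python
import os
import time

def create_sample_file(path: str, target_bytes: int) -> None:
    """Write synthetic records until the file reaches roughly target_bytes."""
    with open(path, "w") as f:
        record_id = 0
        while f.tell() < target_bytes:
            f.write(f"{record_id},sample-record,2024-01-01\n")
            record_id += 1

def process_interface_file(path: str) -> int:
    """Hypothetical stand-in for the application's file-processing step."""
    with open(path) as f:
        return sum(1 for _ in f)

create_sample_file("volume_test.dat", target_bytes=10 * 1024 * 1024)  # ~10 MB
start = time.perf_counter()
records = process_interface_file("volume_test.dat")
print(f"processed {records} records in {time.perf_counter() - start:.2f}s")
os.remove("volume_test.dat")
```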
Performance testing
• In software quality assurance, performance testing is in general
a testing practice performed to determine how a system performs
in terms of responsiveness and stability under a particular
workload.
• It can also serve to investigate, measure, validate or verify other
quality attributes of the system, such as scalability, reliability and
resource usage.
• Performance testing, a subset of performance engineering, is
a computer science practice which strives to build performance
standards into the implementation, design and architecture of a
system.
• Performance testing types include: load testing, stress testing,
soak testing, spike testing, breakpoint testing, configuration
testing, isolation testing, and internet testing.
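A hedged sketch of a tiny load test: issue a batch of concurrent requests and summarise response times. handle_request is a hypothetical stand-in for the real endpoint; in practice a dedicated load-testing tool would drive the deployed system:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> float:
    """Return the response time of one simulated request."""
    start = time.perf_counter()
    sum(range(50_000))              # placeholder for real work / an HTTP call
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:   # simulated concurrent users
    latencies = sorted(pool.map(handle_request, range(200)))

print(f"median latency:   {statistics.median(latencies) * 1000:.2f} ms")
print(f"95th pct latency: {latencies[int(0.95 * len(latencies))] * 1000:.2f} ms")
```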
EFFICIENCY TESTING
• Definition: Efficiency testing measures the amount of resources required by
a program to perform a specific function. In software companies, the term is
also used to describe the effort put into developing the application and to
quantify user satisfaction.
• Description: How many resources are used, and how much of them is turned
into productive output, is an internal matter for a company. Efficiency
testing mainly focuses on resource availability, tools, people, time, type of
project, complexity, situation, customer requirements, etc. It is a measure
of how well the team does the required work to get useful output.
• Efficiency is one of these parameters. It can never be more than 100%:
by definition, efficiency is the ratio of output to input, expressed as a
percentage.
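The definition above as a one-line computation (the figures are illustrative only, and it assumes the useful output cannot exceed the input):

```python
def efficiency(useful_output: float, total_input: float) -> float:
    """Efficiency as the ratio of output to input, expressed as a percentage."""
    return (useful_output / total_input) * 100

print(f"{efficiency(useful_output=80, total_input=100):.1f}%")  # 80.0%
```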
Use Case Testing
• A use case in testing is a brief description
of a particular use of the software
application by an actor or user.
• Use cases are made on the basis of user
actions and the responses of the software
application to those user actions.
• Use cases are widely used in developing test
cases at the system or acceptance level.
• Use case testing is a software testing
technique that helps to identify test cases
that cover the entire system on a transaction-
by-transaction basis, from start to end.
• Test cases are the interactions between users
and the software application.
• Use case testing helps to identify gaps in a
software application that might not be found
by testing individual software components.
• Consider the first step of an end-to-end scenario for the
login functionality of a web application, where the actor
enters an email and password.
• In the next step, the system validates the password.
• Next, if the password is correct, access is granted.
• There can be an extension of this use case: if the
password is not valid, the system displays a message and
asks the user to retry.
• If the password is invalid four times, the system bans
the IP address.
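A hedged sketch of this use case as executable checks: the main flow (a valid password grants access) plus the extension (the fourth invalid attempt bans the IP address). LoginService is a hypothetical stand-in for the real application:

```python
class LoginService:
    """Hypothetical system under test for the login use case."""
    MAX_ATTEMPTS = 4

    def __init__(self, valid_password: str):
        self._password = valid_password
        self._failures = {}          # ip -> consecutive failed attempts
        self.banned_ips = set()

    def login(self, ip: str, email: str, password: str) -> str:
        if ip in self.banned_ips:
            return "banned"
        if password == self._password:
            self._failures[ip] = 0
            return "access granted"
        self._failures[ip] = self._failures.get(ip, 0) + 1
        if self._failures[ip] >= self.MAX_ATTEMPTS:
            self.banned_ips.add(ip)
            return "banned"
        return "invalid password, retry"

service = LoginService(valid_password="s3cret")
# Main flow: correct password grants access.
assert service.login("10.0.0.1", "user@example.com", "s3cret") == "access granted"
# Extension: three failures ask for a retry, the fourth bans the IP.
for _ in range(3):
    assert service.login("10.0.0.2", "user@example.com", "wrong") == "invalid password, retry"
assert service.login("10.0.0.2", "user@example.com", "wrong") == "banned"
print("use-case checks passed")
```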
Inspections
• An inspection is one of the most common sorts of review
practices found in software projects.
• The goal of the inspection is to identify defects.
• Commonly inspected work products include software
requirements specifications and test plans.
• In an inspection, a work product is selected for review and a
team is gathered for an inspection meeting to review the work
product.
• A moderator is chosen to moderate the meeting.
• Each inspector prepares for the meeting by reading the work
product and noting each defect.
• In an inspection, a defect is any part of the work product that
will keep an inspector from approving it.
Inspection process
• The stages in the inspection process are: Planning, Overview meeting,
Preparation, Inspection meeting, Rework, and Follow-up. The Preparation,
Inspection meeting, and Rework stages may be iterated.
• Planning: The inspection is planned by the moderator.
• Overview meeting: The author describes the background of the work
product.
• Preparation: Each inspector examines the work product to identify
possible defects.
• Inspection meeting: During this meeting the reader reads through the
work product, part by part and the inspectors point out the defects for
every part.
• Rework: The author makes changes to the work product according to the
action plans from the inspection meeting.
• Follow-up: The changes by the author are checked to make sure
everything is correct.
Inspection roles
• During an inspection the following roles are used.
• Author: The person who created the work product being
inspected.
• Moderator: This is the leader of the inspection. The
moderator plans the inspection and coordinates it.
• Reader: The person reading through the documents, one
item at a time. The other inspectors then point out defects.
• Recorder/Scribe: The person that documents the defects
that are found during the inspection.
• Inspector: The person that examines the work product to
identify possible defects.