SOFTWARE TESTING &
QUALITY ASSURANCE
CHAPTER 1 BASIC CONCEPTS AND PRELIMINARIES
Compiled by Samruddhi Sheth
 SOFTWARE:
 Software means computer instructions or data.
 A program running on a computer is software; it makes the hardware perform useful functions.
 Eg: Operating system.
 SOFTWARE TESTING:
 Software testing is the process of executing a program or application with the intent of finding software bugs.
 It can also be stated as the process of validating and verifying that a software
program or application or product meets the business and technical
requirements that guided its design and development.
 Role of testing:
 Testing plays an important role in achieving and assessing the quality of a
software product.
 Testing helps you to improve the quality of software when you test and
find defects in your program while development.
 Testing also helps to assess how good your system is when you perform
system-level tests.
 The activities of software testing assessment are divided into 2 categories:
 Static Analysis:
o Static analysis is the testing and evaluation of an application by
examining the code without executing the application.
o It examines all the possible execution paths and variable values, not
just those invoked during execution.
o Static analysis can reveal errors that may not manifest themselves
until weeks, months or years after release.
o It is based on the examination of a number of documents, namely
requirement documents, software models, design documents and
source code.
 Dynamic Analysis:
o Dynamic analysis is the testing and evaluation of an application during
runtime.
o It reveals subtle defects or vulnerabilities whose cause is too complex
to be discovered by static analysis.
WWW.LAZYSTUD.COM
Compiled by Samruddhi Sheth Page 3
o Dynamic analysis can play a role in security assurance, but its primary
goal is finding and debugging errors.
o A finite subset of inputs is selected carefully to reach a reliable
conclusion.
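To make the contrast concrete, here is a minimal, hypothetical Python sketch (the function and its defect are invented for illustration): static analysis finds the defect by reading every path in the source, while dynamic analysis observes it only when a triggering input is executed.

```python
# Hypothetical function with a defect on one path (illustration only).
def safe_divide(a, b):
    if b == 0:
        return a / b   # defect: the "guard" branch still divides by zero
    return a / b

# Static analysis: examine the source without executing it. Inspecting all
# paths (including the b == 0 branch) reveals the division by zero with no
# test input at all.

# Dynamic analysis: execute the program with selected inputs and observe.
assert safe_divide(10, 2) == 5.0    # this input never reaches the defect
try:
    safe_divide(10, 0)              # only this input activates the defect
except ZeroDivisionError:
    print("failure observed at runtime")
```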
 OBJECTIVES OF TESTING:
o It does work
 While implementing a program unit, the programmer may want to
test whether or not the unit works in normal circumstances.
 The programmer gets much confidence if the unit works to his
satisfaction.
 The same idea applies to the whole system as well.
 Once the system has been integrated, the developers may want to
test whether or not the system performs the basic functions.
 The objective of testing here is to show that the system works, rather
than to show that it does not.
o It does not work
 Once the programmer is satisfied that a unit works to a certain
degree, more tests are conducted with the objective of finding
faults in the unit.
 Here the idea is to try to make the unit fail.
o Reduces the risk of failure
 Most complex software systems contain faults, which cause the system
to fail from time to time.
 This failing from time to time gives rise to the notion of a failure rate.
 As faults are discovered and fixed while performing more and
more tests, the failure rate of a system generally decreases.
 Thus, a higher-level objective of performing tests is to bring the risk
of failure down to an acceptable level.
o Reduces the cost of testing
 The different kinds of costs associated with a test process include:
 The cost of designing, maintaining and executing test cases
 The cost of analyzing the result of executing each test case
 The cost of documenting the test cases
 The cost of actually executing the system and documenting it.
 TESTING ACTIVITIES:
o Identify the objective to be tested
 We need to identify “Why are we designing this test case?”
 A clear purpose must be associated with every test case.
WWW.LAZYSTUD.COM
Compiled by Samruddhi Sheth Page 4
o Select inputs
 Inputs are selected based on the software requirement specification
document, the source code, and the expected outcome.
 Test case inputs are selected keeping in mind the objective of
each test case.
o Compute the expected outcome
 This can be done from an overall, high level understanding of
the test objective and specification of the program under test.
o Set up the execution environment of the program
 Here, all the assumptions external to the program must be
satisfied.
o Execute the program
 The test engineer executes the program with the selected
input and observes the actual outcome of the program.
 To execute a test case, inputs may be provided to the
programs at different physical locations at different times.
o Analyze the test result
 The main task is to compare the actual outcome of the
program execution with the expected outcome.
 At the end of the analysis step, a test verdict is assigned to the
program.
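A minimal sketch of these activities in Python, using an invented program under test (`absolute`) and hand-computed expected outcomes; everything here is illustrative, not a prescribed framework:

```python
def absolute(x):
    # program under test (hypothetical)
    return x if x >= 0 else -x

# selected inputs paired with expected outcomes computed beforehand
test_cases = [((5,), 5), ((-3,), 3), ((0,), 0)]

for args, expected in test_cases:
    actual = absolute(*args)        # execute the program
    # analyze the result: compare actual with expected, assign a verdict
    verdict = "pass" if actual == expected else "fail"
    print(f"absolute{args} -> {actual}, expected {expected}: {verdict}")
```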
 ISSUES IN TESTING:
o Test planning and scheduling problem:
 They often occur when there is no separate test plan, but
rather highly incomplete and superficial summaries in other
planning documents.
 Test plans are often ignored once written, and test case
descriptions are often mistaken for overall test plans.
 The schedule of testing is often inadequate for the amount of
testing that should be performed especially when testing is
primarily manual.
o Stakeholder involvement and commitment problems:
 It includes having the wrong testing mindset, having unrealistic
expectations, and having stakeholders who are inadequately
committed to and supportive of the testing effort.
WWW.LAZYSTUD.COM
Compiled by Samruddhi Sheth Page 5
o Management related testing problems:
 It involves the impact of inadequate management.
 For example, management can fail to supply adequate test
resources.
o Test organizational and professionalism problems:
 It includes a lack of independence, unclear testing
responsibilities and inadequate testing expertise.
o Test process problems:
 They often occur when testing and engineering processes are
poorly integrated.
 Testing may not be adequately prioritized, so that one kind of
testing (for example, functional black box testing or white box
unit testing) is overemphasized at the expense of others.
o Test tools and environment problems:
 It includes over-reliance on manual testing.
 Often there are an insufficient number of test environments.
 Some of the test environments may also have poor quality.
 Moreover, the system and software under test may behave
differently during testing than during operation.
o Test communication problems:
 It primarily involves inadequate test documentation.
 These types of problems often occur when test documents are
not maintained or when communication about testing is
inadequate.
o Requirements related testing problems:
 They are related to the requirements that should be driving
testing.
 Often the requirements are ambiguous, missing, incomplete,
incorrect or unstable.
 Lower level requirements may be improperly derived from
their higher level sources.
 QUALITY:
 Quality is a measure of excellence, or a state of being free from defects,
deficiencies and significant variations.
 Quality and software quality cannot be defined in a single way, as everyone
has different perspectives and views.
 Quality is viewed as something defined by the customer, and thus the focus is on
customer-driven quality.
 Quality is a complex concept, since it means different things to different
people and is highly context dependent.
 A quality factor represents a behavioral characteristic of a system.
 Examples of high-level quality factors are correctness, reliability, efficiency,
testability, maintainability and reusability.
 SOFTWARE QUALITY:
 A definition by IEEE says that “the degree to which a component, system or a
process meets the specified requirements is software quality”.
 It is important to have an understanding that the work of software quality
begins before the testing phase and continues after the software is delivered.
 Highly modular software allows designers to put cohesive components in
one module, thereby improving the maintainability of the system.
 Kitchenham and Pfleeger have discussed 5 views of quality which are as
follows:
o Transcendental view:
 It states that the quality is something that can be recognized
through experience but can’t be defined in some tractable form.
 A good quality object stands out, and it is easily recognized.
o User view:
 This view is highly personalized.
 A product is of good quality if it satisfies a large number of users.
 It states that quality is concerned with the extent to which a
product meets a user’s needs and satisfactions.
 This view may encompass many subjective elements such as
usability, reliability and efficiency.
 Example: the opinion you form of a phone you have bought and
used is the user view of its quality.
o Manufacturing view:
 This view has its genesis in the manufacturing industry.
 The main idea of this view is conformance: does the product
satisfy the requirements or not?
 Products are manufactured right the first time so that
development and maintenance costs are reduced.
 Conformance to the requirements leads to uniformity in
products.
 Example: a single brand produces multiple phone models at very
different prices; each model is built to conform to its own
specification at minimum cost with the best achievable
performance.
o Product view:
 If a product is manufactured with good internal properties, then it
will have good external properties too.
 A daily-life example is a cell phone: when you go to buy one, you
check its specifications, assume that good internal properties will
yield good hardware behavior, and buy on that basis.
o Value based view:
 This represents the merging of two concepts: excellence and
worth.
 Quality is the measure of excellence whereas value is the measure
of worth.
 The central question is: how much is the customer willing to pay
for a certain level of quality?
 The value based view makes a trade-off between cost and quality.
 QUALITY ASSURANCE:
 Quality assurance is a way of preventing mistakes or defects in manufactured
products and avoiding problems when delivering solutions or services to
customers.
 ISO 9000 defines quality assurance as "a part of quality management focused
on providing confidence that quality requirements will be fulfilled".
 SOFTWARE ENGINEERING:
 Software Engineering is the study and application of engineering to the
design, development and maintenance of software.
 SOFTWARE QUALITY ASSURANCE:
 Software Quality Assurance consists of a means of monitoring the software
engineering processes and the methods used to ensure quality.
 SQA is organized into goals, commitments, abilities, activities, measurements
and verifications.
 EFFECTIVE QUALITY PROCESS:
 Every step in the development process must be performed to the highest
possible standard.
 An effective quality process must focus on:
o Paying close attention to the customer's requirements –
Example: the Android operating system was designed with user-friendliness
in mind and is now among the market leaders.
o Making efforts to continuously improve quality –
Example: Android keeps upgrading its OS for further improvement and
better quality. The Lollipop release is better than KitKat in some ways,
yet still leaves room for improvement.
o Integrating the measurement process with product design and
development.
o Pushing the quality concept down to the lowest level of the organization.
o Developing a system-level perspective with an emphasis on
methodology and process.
o Eliminating waste through continuous improvement –
Flaws are progressively eliminated to improve the performance of the
software. For example, in Lollipop you can continue the task you were
performing even while the phone is buzzing with an incoming call; this
facility wasn't available in KitKat.
 METRICS:
 A metric is a quantitative measure of the degree to which a system, component
or process possesses a given attribute.
 A software metric relates individual measures in some way.
 For example: the average number of errors found per person.
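A small sketch of how a metric relates individual measures; the data is invented for illustration:

```python
# individual measures: errors found by each person (illustrative data)
errors_found = {"alice": 12, "bob": 7, "carol": 11}

# the metric relates them into one number: average errors found per person
average = sum(errors_found.values()) / len(errors_found)
print(f"average errors found per person: {average:.1f}")   # 10.0
```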
 STATISTICAL QUALITY CONTROL (SQC):
 Statistical Quality Control is a discipline based on measurements and
statistics.
 Decisions are made and plans are developed based on the collection and
evaluation of data in the form of metrics (a measure of some property of the
software or its specifications), rather than on intuition and experience.
 The SQC methods use 7 basic quality management tools which are:
o Pareto analysis –
Pareto analysis is a statistical technique in decision-making used
for the selection of a limited number of tasks that produce a
significant overall effect. It uses the Pareto principle (also
known as the 80/20 rule): the idea that by doing 20% of the
work you can generate 80% of the benefit of doing the entire
job (see the computation sketch after this list).
o Cause-and-effect diagram –
It is also known as Ishikawa diagram that shows the causes of a
specific event.
o Flow chart –
It is a diagram that represents the workflow of the system or
software.
o Trend chart –
It is a graphical representation of time series data showing the curve
that reveals a general pattern of change.
o Histogram –
It is the graphical representation of the distribution of the data.
o Scatter diagram –
It is a type of mathematical diagram using Cartesian coordinates to
display values for two variables for a set of data.
o Control chart –
Also known as a Shewhart chart, it is a graph used to study how a
process changes over time and whether it remains within control limits.
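As promised above, a short Pareto-analysis sketch in Python over invented defect counts per module: sort the causes by contribution and accumulate until the "vital few" responsible for roughly 80% of the defects are identified.

```python
# invented defect counts per module (illustration only)
defects = {"ui": 5, "parser": 42, "network": 31, "auth": 8, "logging": 2}
total = sum(defects.values())

cumulative = 0
for module, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    share = 100 * cumulative / total
    print(f"{module:8s} {count:3d}   cumulative {share:5.1f}%")
    if cumulative / total >= 0.8:
        break   # the modules printed so far are the Pareto "vital few"
```

Here the two largest contributors (parser and network) account for about 83% of all defects, so improvement effort is focused there first.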
 The Shewhart cycle, popularized by Deming, is the plan-do-check-act (PDCA)
cycle described below.
 PLAN:
 Managers must evaluate the current process and make plans based on
any problems they find.
 They need to document all the current procedures, collect data and
identify problems.
 This information should then be studied and used to develop a plan for
improvement as well as specific measures to evaluate performance.
 DO:
 The next step in the cycle is implementing the plan.
 During the implementation process, managers should document all the
changes made and collect data for evaluation.
 CHECK:
 The next step is to check the data collected in the previous phase.
 The data is evaluated to see whether the plan is achieving the goals
established in the plan phase.
 ACT:
 The last phase of the cycle is to act on the basis of the results of the first
three phases.
 The best way to accomplish this is to communicate the results to the
other members in the company and then implement the new procedure
if it has been successful.
 After this, the cycle repeats.
 TOTAL QUALITY CONTROL (TQC):
 The key elements of TQC management are as follows:
o Quality comes first, not the short-term profits.
o The customer comes first, not the producer.
o Decisions are based on facts and data.
o Management is participatory and respectful of all employees.
 One of the innovative TQC methodologies developed in Japan is the Ishikawa,
or cause-and-effect, diagram.
 Kaoru Ishikawa found from statistical data that dispersion in product quality
came from four common causes, namely materials, machines, methods, and
measurements, known as the 4 Ms.
 Materials often differ when sources of supply or size requirements vary.
 Machines, or equipment, also function differently depending on variations in
their parts, and they operate optimally for only part of the time.
 Methods, or processes, cause even greater variations due to lack of training
and poor handwritten instructions.
 Finally, measurements also vary due to outdated equipment and improper
calibration.
Diagram: Ishikawa diagram (also called the fishbone or cause-and-effect diagram)
 VERIFICATION:
 Verification activities aim at confirming that one is “building the product
correctly”.
 Verification is the static practice of verifying documents, design, code and
program.
 It does not involve executing the code.
 Verification is the process of evaluating the intermediary work products of a
software development life cycle (documents developed during development,
such as the requirement specification, ER diagrams and test cases) to check
whether we are on the right track toward the final product.
 VALIDATION:
 Validation aims at confirming that one is “building the correct product”.
use.
 Validation is the process of evaluating the final product to check whether
software meets the customer’s expectation.
 It determines whether the software is fit for use.
 VERIFICATION vs. VALIDATION:

| Verification | Validation |
|---|---|
| 1. Verification is a static practice of verifying documents, design, code and program. | 1. Validation is a dynamic mechanism of validating and testing the actual product. |
| 2. It does not involve executing the code. | 2. It always involves executing the code. |
| 3. It is human-based checking of documents and files. | 3. It is computer-based execution of the program. |
| 4. Verification uses methods like inspections, reviews, walkthroughs and desk-checking. | 4. Validation uses methods like black box (functional) testing, gray box testing and white box (structural) testing. |
| 5. Verification checks whether the software conforms to the specifications. | 5. Validation checks whether the software meets the customer's expectations and requirements. |
| 6. It can catch errors that validation cannot catch. It is a low-level exercise. | 6. It can catch errors that verification cannot catch. It is a high-level exercise. |
| 7. Targets are the requirements specification, application and software architecture, high-level and complete design, and database design. | 7. Target is the actual product: a unit, a module, a set of integrated modules, and the final product. |
| 8. Verification is done by developers. | 8. Validation is carried out by testers. |
| 9. It generally comes first, before validation. | 9. It generally follows verification. |
 ERROR:
 It refers to the difference between the computed result and the expected
result.
 It is an undesirable deviation from the requirement.
 The mistake made by a programmer is an error; that is, an error represents a
mistake made by people.
 IEEE defines an error as a “human mistake that causes a fault”.
 An error can change the functionality of the program.
 “Error” is the term used on the developer's side.
 FAULT:
 The actual mistake in the code is fault.
 It is a condition that causes the software to fail to perform its required
function.
 IEEE defines fault as “Discrepancy in the code that causes failure”
 The fault is basically the underlying cause of a failure.
 It may stay undetected for a long time, until some event activates it.
 BUG:
 It is evidence of a fault in the program.
 A bug is a fault in the program which causes the program to behave in an
unintended manner.
 It doesn't stop the execution of the application, but it produces wrong
output.
 “Bug” is the term used on the tester's side.
 FAILURE:
 A failure occurs when the fault is executed.
 The inability of the system or component to perform its required functions
within the specified performance requirements is failure.
 If the product fails to fulfill the requirements, it is called failure.
 IEEE defines failure as “external behavior is incorrect”.
 DEFECT:
 A defect is an error in coding or logic that causes a program to malfunction or
to produce incorrect or unexpected results.
 A defect is said to be detected when a failure is observed.
 In software testing, a defect is considered to be anything that can hamper the
functioning or execution of the software application.
 Eg: an application hangs or stops responding all of a sudden.
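A minimal sketch tying these terms together (the function and its defect are invented): the programmer's mistake (error) leaves a wrong operator in the code (fault); the fault stays dormant on some inputs and produces a failure, the incorrect external behavior, on others.

```python
def rectangle_area(width, height):
    return width + height   # fault: '+' typed instead of '*' (the error)

print(rectangle_area(2, 2))   # prints 4 == 2 * 2: the fault is not activated
print(rectangle_area(3, 5))   # prints 8, expected 15: a failure is observed
```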
 TEST CASE:
 A test case is a document, which has a set of test data, preconditions,
expected results and postconditions, developed for a particular test scenario
in order to verify compliance against a specific requirement.
 A test case is a simple pair of <input, expected outcome>.
 A test case is meaningful only if it is possible to decide on the acceptability
of the result produced by the program under test.
 The expected outcome is computed before the program is executed with the
selected test inputs.
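A sketch of a test case as a structured <input, expected outcome> record with the fields listed above; the field names and values are illustrative only, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    objective: str                 # why this test case exists
    preconditions: list
    test_data: dict                # the selected inputs
    expected_result: str           # computed before execution
    postconditions: list = field(default_factory=list)

tc = TestCase(
    test_id="TC-001",
    objective="Verify that login rejects an empty password",
    preconditions=["user account exists"],
    test_data={"username": "alice", "password": ""},
    expected_result="login refused with an error message",
)
print(tc.test_id, "-", tc.objective)
```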
RELATIONSHIP BETWEEN ERROR, FAULT, BUG, FAILURE AND DEFECT
 TESTING LEVELS:
DIAGRAM: TESTING LEVELS
 Unit Testing:
 It is a level of the software testing process where individual units/components
of a software/system are tested.
 The purpose is to validate that each unit of the software performs as
designed.
 The goal of unit testing is to isolate each part of the program and show that
individual parts are correct in terms of requirements and functionality.
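A minimal unit-test sketch using Python's standard unittest module; `leap_year` is a hypothetical unit under test, exercised in isolation from the rest of the system:

```python
import unittest

def leap_year(year):
    # unit under test (hypothetical)
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```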
 Integration Testing :
 It is a level of the software testing process where individual units are
combined and tested as a group.
 Two common approaches to integration testing are:
 Top-down integration:
In this approach, the highest-level modules are tested first and
progressively lower-level modules are tested after that.
 Bottom-up integration:
This testing begins with unit testing, followed by tests of progressively
higher-level combinations of units called modules or builds.
 The purpose of this level of testing is to expose faults in the interaction
between integrated units.
 Below are a few types of integration testing:
o Big bang integration testing
o Top down
o Bottom up
o Functional incremental
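A small sketch of one top-down integration step, with all names invented: the high-level module is exercised first while a stub stands in for a lower-level module that has not been integrated yet.

```python
def price_lookup_stub(item_id):
    # stub: returns a canned value in place of the real pricing module
    return 10.0

def order_total(item_ids, price_lookup):
    # high-level module under test; its lower-level collaborator is injected
    return sum(price_lookup(i) for i in item_ids)

# exercise the interaction between the high-level module and its stub;
# later the stub is replaced by the real module and the test is rerun
assert order_total(["a", "b", "c"], price_lookup_stub) == 30.0
print("top-down integration step passed")
```

In bottom-up integration the roles reverse: the low-level module is tested first, with a simple driver calling it in place of the not-yet-written higher layers.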
 System Testing:
 It is a level of the software testing process where a complete, integrated
system/software is tested.
 The purpose of this test is to evaluate the system’s compliance with the
specified requirements.
 System testing is so important because of the following reasons:
o System testing is the first level of testing in which the application
is tested as a whole.
o The application is tested thoroughly to verify that it meets the
functional and technical specifications.
o The application is tested in an environment which is very close to the
production environment where the application will be deployed.
o System Testing enables us to test, verify and validate both the business
requirements as well as the Applications Architecture.
 Acceptance Testing:
 It is a level of the software testing process where a system is tested for
acceptability.
 The purpose of this test is to evaluate the system’s compliance with the
business requirements and assess whether it is acceptable for delivery.
 This is arguably the most important type of testing, as it is conducted by
the Quality Assurance Team, who will gauge whether the application meets
the intended specifications and satisfies the client's requirements.
 Acceptance testing may occur at more than just a single level, for example:
o A Commercial Off the shelf (COTS) software product may be
acceptance tested when it is installed or integrated.
o Acceptance testing of the usability of the component may be done
during component testing.
o Acceptance testing of a new functional enhancement may come before
system testing.
 WHITE BOX TESTING:
 White box testing is the detailed investigation of internal logic and structure
of the code.
 White box testing is also called glass box testing or open box testing.
 In order to perform white box testing on an application, the tester needs to
possess knowledge of the internal working of the code.
 The tester needs to have a look inside the source code and find out which
unit/chunk of the code is behaving inappropriately.
 Advantages:
 It helps in optimizing the code.
 Extra lines of code, which can bring in hidden defects, can be removed.
 Due to the tester's knowledge of the code, maximum coverage is
attained during test scenario writing.
 As the tester has knowledge of the source code, it becomes very easy
to find out which type of data can help in testing the application
effectively.
 Disadvantages:
 Due to the fact that a skilled tester is needed to perform white box
testing, the costs are increased.
 Sometimes it is impossible to look into every nook and corner to find
hidden errors, and many paths will go untested.
 It is difficult to maintain white box testing, as specialized tools
like code analyzers and debuggers are required.
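A white box sketch (the function is invented): because the tester can read the internal branches of `classify`, one input is chosen per branch so that every path through the code is exercised.

```python
def classify(score):
    if score < 0 or score > 100:
        return "invalid"
    elif score >= 50:
        return "pass"
    else:
        return "fail"

# one test input per internal branch, chosen by reading the code
assert classify(-1) == "invalid"   # covers the out-of-range branch
assert classify(75) == "pass"      # covers the score >= 50 branch
assert classify(30) == "fail"      # covers the remaining branch
print("all branches covered")
```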
 BLACK BOX TESTING:
 The technique of testing without having any knowledge of the interior
workings of the application is black box testing.
 The tester is oblivious to the system architecture and does not have to access
the source code.
 Typically when performing the black box test, a tester will interact with the
system’s user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.
 Advantages:
 Well suited and efficient for large code segments.
 Code access not required.
 Clearly separates the user's perspective from the developer's perspective
through visibly defined roles.
 Large numbers of moderately skilled testers can test the application
with no knowledge of implementation, programming language or
operating systems.
 Disadvantages:
 Limited coverage since only a selected number of test scenarios are
actually performed.
 Inefficient testing, since the tester has only limited knowledge
about the application.
 The test cases are difficult to design.
 Blind coverage, since the tester cannot target specific code segments
or error prone areas.
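A black box sketch: a hypothetical `grade` function is exercised purely through its interface, with boundary values chosen from an assumed specification ("0-100 is valid; 50 or more passes"). The body is shown only so the example runs; a black box tester would not read it.

```python
def grade(score):
    # implementation hidden from the black box tester
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 50 else "fail"

# boundary values taken from the specification, not from the code
for score, expected in [(-1, "invalid"), (0, "fail"), (49, "fail"),
                        (50, "pass"), (100, "pass"), (101, "invalid")]:
    assert grade(score) == expected, f"unexpected result for {score}"
print("specification-based boundary checks passed")
```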
 DIFFERENCE BETWEEN BLACK BOX AND WHITE BOX TESTING:

| Criteria | Black Box Testing | White Box Testing |
|---|---|---|
| Definition | A software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester. | A software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. |
| Levels applicable to | Mainly the higher levels of testing: acceptance testing, system testing. | Mainly the lower levels of testing: unit testing, integration testing. |
| Responsibility | Generally, independent software testers. | Generally, software developers. |
| Programming knowledge | Not required. | Required. |
| Implementation knowledge | Not required. | Required. |
| Basis for test cases | Requirement specifications. | Detailed design. |
| Sr | Black Box Testing | White Box Testing |
|---|---|---|
| 1 | Low granularity. | High granularity. |
| 2 | Internals NOT known. | Internals fully known. |
| 3 | Internals not required to be known. | Internal code of the application and database known. |
| 4 | Also known as opaque box testing, closed box testing, input-output testing, data-driven testing, behavioral testing and functional testing. | Also known as glass box testing, clear box testing, design-based testing, logic-based testing, structural testing and code-based testing. |
| 5 | It is done by end users (user acceptance testing); also done by testers and developers. | It is normally done by testers and developers. |
| 6 | Testing method where the system is viewed as a black box, the internal behavior of the program is ignored, and testing is based upon external specifications. | Internals are fully known. |
| 7 | Likely to be the least exhaustive of the three. | Potentially the most exhaustive of the three. |
| 8 | Requirements-based: test cases are based on the functional specifications, as internals are not known. | Ability to exercise code with a relevant variety of data. |
| 9 | Being specification-based, it would not suffer from the deficiency described for white box testing. | Since test cases are written based on the code, specifications missed out in coding would not be revealed. |
| 10 | Suited for functional/business domain testing. | Suited for all. |
| 11 | Not suited to algorithm testing. | Appropriate for algorithm testing. |
| 12 | Concerned with validating outputs for given inputs, the application being treated as a black box. | Facilitates structural testing; enables coverage of logic, statements, decisions, conditions, paths and control flow within the code. |
| 13 | Can test only by trial and error (data domains, internal boundaries and overflow). | Can determine and therefore test better: data domains, internal boundaries and overflow. |

More Related Content

What's hot

Project control and process instrumentation
Project control and process instrumentationProject control and process instrumentation
Project control and process instrumentationKuppusamy P
 
Software testing methods, levels and types
Software testing methods, levels and typesSoftware testing methods, levels and types
Software testing methods, levels and typesConfiz
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testingHaris Jamil
 
Software Engineering Layered Technology Software Process Framework
Software Engineering  Layered Technology Software Process FrameworkSoftware Engineering  Layered Technology Software Process Framework
Software Engineering Layered Technology Software Process FrameworkJAINAM KAPADIYA
 
Risk management(software engineering)
Risk management(software engineering)Risk management(software engineering)
Risk management(software engineering)Priya Tomar
 
Types of software testing
Types of software testingTypes of software testing
Types of software testingPrachi Sasankar
 
Constructive Cost Model - II (COCOMO-II)
Constructive Cost Model - II (COCOMO-II)Constructive Cost Model - II (COCOMO-II)
Constructive Cost Model - II (COCOMO-II)AmanSharma1172
 
Formal Approaches to SQA.pptx
Formal Approaches to SQA.pptxFormal Approaches to SQA.pptx
Formal Approaches to SQA.pptxKarthigaiSelviS3
 
Lect2 conventional software management
Lect2 conventional software managementLect2 conventional software management
Lect2 conventional software managementmeena466141
 
Software Engineering- Requirement Elicitation and Specification
Software Engineering- Requirement Elicitation and SpecificationSoftware Engineering- Requirement Elicitation and Specification
Software Engineering- Requirement Elicitation and SpecificationNishu Rastogi
 
Interface specification
Interface specificationInterface specification
Interface specificationmaliksiddique1
 
Software Testing Principles
Software Testing PrinciplesSoftware Testing Principles
Software Testing PrinciplesKanoah
 
OO Metrics
OO MetricsOO Metrics
OO Metricsskmetz
 

What's hot (20)

Project control and process instrumentation
Project control and process instrumentationProject control and process instrumentation
Project control and process instrumentation
 
Software Evolution
Software EvolutionSoftware Evolution
Software Evolution
 
Software testing methods, levels and types
Software testing methods, levels and typesSoftware testing methods, levels and types
Software testing methods, levels and types
 
Object oriented testing
Object oriented testingObject oriented testing
Object oriented testing
 
Software Engineering Layered Technology Software Process Framework
Software Engineering  Layered Technology Software Process FrameworkSoftware Engineering  Layered Technology Software Process Framework
Software Engineering Layered Technology Software Process Framework
 
Cohesion and coupling
Cohesion and couplingCohesion and coupling
Cohesion and coupling
 
Risk management(software engineering)
Risk management(software engineering)Risk management(software engineering)
Risk management(software engineering)
 
Software design
Software designSoftware design
Software design
 
Types of software testing
Types of software testingTypes of software testing
Types of software testing
 
Software Reliability
Software ReliabilitySoftware Reliability
Software Reliability
 
Software testing ppt
Software testing pptSoftware testing ppt
Software testing ppt
 
Constructive Cost Model - II (COCOMO-II)
Constructive Cost Model - II (COCOMO-II)Constructive Cost Model - II (COCOMO-II)
Constructive Cost Model - II (COCOMO-II)
 
Formal Approaches to SQA.pptx
Formal Approaches to SQA.pptxFormal Approaches to SQA.pptx
Formal Approaches to SQA.pptx
 
Lect2 conventional software management
Lect2 conventional software managementLect2 conventional software management
Lect2 conventional software management
 
Software Engineering- Requirement Elicitation and Specification
Software Engineering- Requirement Elicitation and SpecificationSoftware Engineering- Requirement Elicitation and Specification
Software Engineering- Requirement Elicitation and Specification
 
Interface specification
Interface specificationInterface specification
Interface specification
 
Software cost estimation
Software cost estimationSoftware cost estimation
Software cost estimation
 
Role-of-lexical-analysis
Role-of-lexical-analysisRole-of-lexical-analysis
Role-of-lexical-analysis
 
Software Testing Principles
Software Testing PrinciplesSoftware Testing Principles
Software Testing Principles
 
OO Metrics
OO MetricsOO Metrics
OO Metrics
 

Similar to Software Testing Basics - 40 Character Title

Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance Webtech Learning
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGSachin Pathania
 
EFFECTIVE TEST CASE DESING: A REVIEW
EFFECTIVE TEST CASE DESING: A REVIEWEFFECTIVE TEST CASE DESING: A REVIEW
EFFECTIVE TEST CASE DESING: A REVIEWJournal For Research
 
SOFTWARE TESTING
SOFTWARE TESTINGSOFTWARE TESTING
SOFTWARE TESTINGacemindia
 
Software_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdfSoftware_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdfAnupmaMunshi
 
Aim (A).pptx
Aim (A).pptxAim (A).pptx
Aim (A).pptx14941
 
An introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAn introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAnuraj S.L
 
Software Testing Interview Questions For Experienced
Software Testing Interview Questions For ExperiencedSoftware Testing Interview Questions For Experienced
Software Testing Interview Questions For Experiencedzynofustechnology
 
Lesson 8...Question Part 2
Lesson 8...Question Part 2Lesson 8...Question Part 2
Lesson 8...Question Part 2bhushan Nehete
 
Why is it important to hire an independent testing team for your development ...
Why is it important to hire an independent testing team for your development ...Why is it important to hire an independent testing team for your development ...
Why is it important to hire an independent testing team for your development ...App Sierra
 
Fundamental of testing
Fundamental of testingFundamental of testing
Fundamental of testingReginaKhalida
 
Software testing pdf
Software testing pdfSoftware testing pdf
Software testing pdfMounikaCh26
 
Testing Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfTesting Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfMuhammadShoaibHussai2
 
20MCE14_Software Testing and Quality Assurance Notes.pdf
20MCE14_Software Testing and Quality Assurance Notes.pdf20MCE14_Software Testing and Quality Assurance Notes.pdf
20MCE14_Software Testing and Quality Assurance Notes.pdfDSIVABALASELVAMANIMC
 
software testing and quality assurance .pdf
software testing and quality assurance .pdfsoftware testing and quality assurance .pdf
software testing and quality assurance .pdfMUSAIDRIS15
 

Similar to Software Testing Basics - 40 Character Title (20)

Software testing & Quality Assurance
Software testing & Quality Assurance Software testing & Quality Assurance
Software testing & Quality Assurance
 
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTINGWelingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
Welingkar_final project_ppt_IMPORTANCE & NEED FOR TESTING
 
EFFECTIVE TEST CASE DESING: A REVIEW
EFFECTIVE TEST CASE DESING: A REVIEWEFFECTIVE TEST CASE DESING: A REVIEW
EFFECTIVE TEST CASE DESING: A REVIEW
 
Stm unit1
Stm unit1Stm unit1
Stm unit1
 
SOFTWARE TESTING
SOFTWARE TESTINGSOFTWARE TESTING
SOFTWARE TESTING
 
Software_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdfSoftware_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdf
 
Aim (A).pptx
Aim (A).pptxAim (A).pptx
Aim (A).pptx
 
stm f.pdf
stm f.pdfstm f.pdf
stm f.pdf
 
An introduction to Software Testing and Test Management
An introduction to Software Testing and Test ManagementAn introduction to Software Testing and Test Management
An introduction to Software Testing and Test Management
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Software Testing Interview Questions For Experienced
Software Testing Interview Questions For ExperiencedSoftware Testing Interview Questions For Experienced
Software Testing Interview Questions For Experienced
 
Fundamentals of Testing Section 1/6
Fundamentals of Testing   Section 1/6Fundamentals of Testing   Section 1/6
Fundamentals of Testing Section 1/6
 
Lesson 8...Question Part 2
Lesson 8...Question Part 2Lesson 8...Question Part 2
Lesson 8...Question Part 2
 
Why is it important to hire an independent testing team for your development ...
Why is it important to hire an independent testing team for your development ...Why is it important to hire an independent testing team for your development ...
Why is it important to hire an independent testing team for your development ...
 
Fundamental of testing
Fundamental of testingFundamental of testing
Fundamental of testing
 
Software coding and testing
Software coding and testingSoftware coding and testing
Software coding and testing
 
Software testing pdf
Software testing pdfSoftware testing pdf
Software testing pdf
 
Testing Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdfTesting Slides 1 (Testing Intro+Static Testing).pdf
Testing Slides 1 (Testing Intro+Static Testing).pdf
 
20MCE14_Software Testing and Quality Assurance Notes.pdf
20MCE14_Software Testing and Quality Assurance Notes.pdf20MCE14_Software Testing and Quality Assurance Notes.pdf
20MCE14_Software Testing and Quality Assurance Notes.pdf
 
software testing and quality assurance .pdf
software testing and quality assurance .pdfsoftware testing and quality assurance .pdf
software testing and quality assurance .pdf
 

Software Testing Basics - 40 Character Title

  • 1. www.lazystud.com SOFTWARE TESTING & QUALITY ASSURANCE CHAPTER 1 BASIC CONCEPTS AND PRELIMINARIES Compiled by Samruddhi Sheth
  • 2. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 2 CHAPTER 1 BASIC CONCEPTS AND PRELIMINARIES  SOFTWARE :  Software means computer instructions or data.  A simple program in a computer is software which helps in functioning of the hardware.  Eg: Operating system.  SOFTWARE TESTING :  Software testing is a process of executing a program or application with the intent of finding the software bugs.  It can also be stated as the process of validating and verifying that a software program or application or product meets the business and technical requirements that guided its design and development.  Role of testing:  Testing plays an important role in achieving and assessing the quality of a software product.  Testing helps you to improve the quality of software when you test and find defects in your program while development.  Testing also helps to assess how good your system is when you perform system-level tests.  The activities of software testing assessment are divided into 2 categories:  Static Analysis: o Static analysis is the testing and evaluation of an application by examining the code without the executing the application. o It examines all the possible execution paths and variable values, not just those invoked during execution. o Static analysis can reveal errors that may not manifest themselves until weeks, months or years after release. o It is based on the examination of a number of documents, namely requirement documents, software models, design documents and source code.  Dynamic Analysis: o Dynamic analysis is the testing and evaluation of an application during runtime. o It reveals subtle defects or vulnerabilities whose cause is too complex to be discovered by static analysis.
  • 3. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 3 o Dynamic analysis can play a role in security assurance, but its primary goal is finding and debugging errors. o A finite subset of input is selected carefully to reach a reiable conclusion.  OBJECTIVES OF TESTING: o It does work  While implementing a program unit, the programmer may want to test whether or not the unit works in normal circumstances.  The programmer gets much confidence if the unit works to his satisfaction.  The same idea applies to the whole system as well.  Once the system has been integrated, the developers may want to test whether or not the system performs the basic functions.  The objective of testing is to show that the system works, rather than it does not. o It does not work  Once the programmer is satisfied that a unit works to a certain degree, more tests are conducted with the objective of finding faults in the unit.  Here the idea is to try to make the unit fail. o Reduces the risk of failure  Most of the complex system software contains faults which causes the system to fail from time to time.  This concept of failing from time to time gives rise to failure rate.  As faults are discovered and fixed while performing more and more tests, the failure rate of a system generally decreases.  Thus, higher level objective of performing tests is to bring down the risk of failing to an acceptable level. o Reduces the cost of testing  The different kinds of costs associated with a test process include:  The cost of designing, maintaining and executing test cases  The cost of analyzing the result of executing each test case  The cost of documenting the test cases  The cost of actually executing the system and documenting it.  TESTING ACTIVITIES: o Identify the objective to be tested  We need to identify “Why are we designing this test case?”  A clear purpose must be associated with every test case.
  • 4. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 4 o Select inputs  It is based as per mentioned in the software requirement specification document, the source code and the expected outcome.  Test case inputs are selected keeping in mind the objective of each test case. o Compute the expected outcome  This can be done from an overall, high level understanding of the test objective and specification of the program under test. o Setup the execution environment of the program  Here, all the assumptions external to the program must be satisfied. o Execute the program  The test engineer executes the program with the selected input and observes the actual outcome of the program.  To execute a test case, inputs may be provided to the programs at different physical locations at different times. o Analyze the test result  The main task is to compare the actual outcome of the program execution with the expected outcome.  At the end of the analysis step, a test verdict is assigned to the program.  ISSUES IN TESTING: o Test planning and scheduling problem:  They often occur when there is no separate test plan, but rather highly incomplete and superficial summaries in other planning documents.  Test plans are often ignored once they are already written and test case descriptions are often mistaken overall test plans.  The schedule of testing is often inadequate for the amount of testing that should be performed especially when testing is primarily manual. o Stakeholder involvement and commitment problems:  It includes having the wrong testing mindset, having unrealistic expectations and having stakeholders who are inadequate committed to and supporting of testing effort.
  • 5. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 5 o Management related testing problems:  It involves the impact of inadequate management.  For example, management can fail to supply adequate test resources. o Test organizational and professionalism problems:  It includes a lack of independence, unclear testing responsibilities and inadequate testing expertise. o Test process problems:  They often occur when testing and engineering processes are poorly integrated.  Testing may not be adequately prioritized so that functional testing, black box testing, or white box unit and unit testing may be over emphasized. o Test tools and environment problems:  It includes over reliance on manual testing.  Often there are an insufficient number of test environments.  Some of the test environments may also have poor quality.  Moreover, the system and software under test may behave differently during testing than during operation. o Test communication problems:  It primarily involves inadequate test documentation.  These types of problems often occur when test documents are not maintained or inadequate communication concerning testing is taking place. o Requirements related testing problems:  They are related to the requirement that should be driving testing.  Often the requirements are ambiguous, missing, incomplete, incorrect or unstable.  Lower level requirements may be improperly derived from their higher level sources.
  • 6. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 6  QUALITY  A measure of excellence or a state of being free from defects, deficiencies and significant variations is quality.  Quality and software quality are terms which can’t be defined as everyone has different perspectives and different views.  Quality is viewed as something defined by the customer and thus focus is on customer-driven quality.  Quality is a complex concept since it means different things to different people and is highly context dependent.  A quality factor represents the behavioral characteristic of a system.  Some factors of high quality factors are correctness, reliability, efficiency, testability, maintainability and reusability.  SOFTWARE QUALITY:  A definition by IEEE says that “the degree to which a component, system or a process meets the specified requirements is software quality”.  It is important to have an understanding that the work of software quality begins before the testing phase and continues after the software is delivered.  A highly modular software allows designers to put cohesive components in one module, thereby improving the maintainability of the system.  Kitchenham and Pfleeger have discussed 5 views of quality which are as follows: o Transcendental view:  It states that the quality is something that can be recognized through experience but can’t be defined in some tractable form.  A good quality object stands out, and it is easily recognized. o User view:  This view is highly personalized.  A product is of good quality if it satisfies a large number of users.  It states that quality is concerned with the extent to which a product meets a user’s needs and satisfactions.  This view may encompass many subject elements such as usability, reliability and efficiency.  Example: When you buy a phone and the reviews you have about the phone you’re holding in your hand becomes user view.
  • 7. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 7 o Manufacturing view:  This view has it’s genesis in the manufacturing industry.  The main idea of this view is the answer to the question whether it satisfies the requirements or not.  Products are manufactured right the first time so that the development and the maintenance cost is reduced.  Conformance to the requirements leads to the uniformity in products.  Example: One single brand produces multiple phones with some respective names and they do vary in cost a lot. This is because they try to fit in few of the specifications in the minimum cost with better performance. o Product view:  If a product is manufactured with good internal properties, then it will have good external properties too.  A daily life example can that be of a cell phone. When you go to buy a cell phone, you do check it’s specifications and then assume that it will surely provide a good hardware and thus you buy the same cell phone. o Value based view:  This represents the merging of two concepts: excellence and worth.  Quality is the measure of excellence whereas value is the measure of worth.  How much the customer is willing to pay for a certain level of quality?  The value based view makes a trade-off between cost and quality.  QUALITY ASSURANCE:  Quality assurance is a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers.  ISO 9000 defines quality assurance as "A part of quality management focused on providing confidence that quality requirements will be fulfilled"  SOFTWARE ENGINEERING:  Software Engineering is the study and application of engineering to design, develop and maintain the software.
  • 8. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 8  SOFTWARE QUALITY ASSURANCE:  Software Quality Assurance consists of a means of monitoring the software engineering processes and methods used to ensure quality  SQA is organized into goals, commitments, abilities, activities, measurements and verifications.  EFFECTIVE QUALITY PROCESS:  Every step in the development standard must be performed to the highest possible level.  An effective quality process must focus on: o Paying much attention to the customer’s requirements – Example: Android operating system was developed for user-friendly purpose which is now at the top level in the market. o Making efforts to continuously improve the quality – Example: Android operating system kept upgrading it’s OS for more improvements and a better quality. Taking into consideration the latest android OS Lollipop, is better than Kitkat in some ways; but yet a lot of improvements to be done. o Integrating measurement process with product design and development o Pushing the quality concept down to the lowest level of organization. o Developing a system level perspective with an emphasis on methodology and process. o Eliminating waste through continuous improvement – All the flaws are been tried to eliminate to excel the performance of the software. Such as in Lollipop, an android OS, you can perform the task you were performing, though your phone is buzzing with an incoming call. This facility wasn’t available in Kitkat.  METRICS:  A quantitative measure of degree to which a system, component or process possesses a given attribute.  Software metric relates the individual measure in some way.  For example: average number of errors found per person.  STATISTICAL QUALITY CONTROL (SQC):  Statistical Quality Control is a discipline based on measurements and statistics.
  • 9. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 9  Decisions are made and plans are developed based on the collection and evaluation of data in the form of metrics(a measure of some property of software or it’s specifications), rather than intuitions and experience.  The SQC methods use 7 basic quality management tools which are: o Pareto analysis – Pareto Analysis is a statistical technique in decision-making used for the selection of a limited number of tasks that produce significant overall effect. It uses the Pareto Principle (also known as the 80/20 rule) the idea that by doing 20% of the work you can generate 80% of the benefit of doing the entire job. o Cause-and-effect diagram – It is also known as Ishikawa diagram that shows the causes of a specific event. o Flow chart – It is a diagram that represents the workflow of the system or software. o Trend chart – It is a graphical representation of time series data showing the curve that reveals a general pattern of change. o Histogram – It is the graphical representation of the distribution of the data. o Scatter diagram – It is a type of mathematical diagram using Cartesian coordinates to display values for two variables for a set of data. o Control chart – Shewhart cycle.  Shewhart cycle was introduced by Deming which includes the plan-do-check- act cycle.
  • 10. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 10  PLAN:  Managers must evaluate the current process and make plans based on any problems they find.  They need to document all the current procedures, collect data and identify problems.  This information should then be studied and used to develop a plan for improvement as well as specific measures to evaluate performance.  DO:  The next step in the cycle is implementing the plan.  During the implementation process, managers should document all the changes made and collect data for evaluation.  CHECK:  The next step is to check the data collected in the previous phase.  The data is evaluated to see whether the plan is achieving the goals established in the plan phase.  ACT:  The last phase of the cycle is to act on the basis of the results of the first three phases.  The best way to accomplish this is to communicate the results to the other members in the company and then implement the new procedure if it has been successful.  After this the cycle repeats again.  TOTAL QUALITY CONTROL (TQC):  The key elements of TQC management are as follows: o Quality comes first, not the short-term profits. o The customer comes first, not the producer. o Decisions are based on facts and data. o Management is participatory and respectful of all employees.  One of the innovative TQC methodologies, developed in Japan is referred to as Ishikawa or cause-and-effect diagram.  Kaoru Ishikawa found from statistical data that dispersion in product quality came from four common causes, namely materials, machines, methods, and measurements, known as the 4 Ms.  Materials often differ when sources of supply or size requirements vary.  Machines, or equipment, also function differently depending on variations in their parts, and they operate optimally for only part of the time.
  • 11. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 11  Methods, or processes, cause even greater variations due to lack of training and poor handwritten instructions.  Finally, measurements also vary due to outdated equipment and improper calibration. Diagram: Ishikawa Diagram or Fishbone diagram or cause-and effect diagram  VERIFICATION:  Verification activities aim at confirming that one is “building the product correctly”  Verification is the static practice of verifying documents, design, code and program.  It does not involve executing the code.  Verification is the process of evaluating the intermediary work products (documents developed during the development phase like requirement specification, ER diagram, test cases etc) of a software development life cycle to check if we’re in a right track of creating the final product.  VALIDATION:  Validation aims at confirming that one is “building the correct product”  Validation activities help us in confirming that a product meets its intended use.  Validation is the process of evaluating the final product to check whether software meets the customer’s expectation.  It determines whether the software is fit for use.
  • 12. WWW.LAZYSTUD.COM Compiled by Samruddhi Sheth Page 12  VERIFICATION vs. VALIDATION: Verification Validation 1. Verification is a static practice of verifying documents, design, code and program. 1. Validation is a dynamic mechanism of validating and testing the actual product. 2. It does not involve executing the code. 2. It always involves executing the code. 3. It is human based checking of documents and files. 3. It is computer based execution of program. 4. Verification uses methods like inspections, reviews, walkthroughs, and Desk-checking etc. 4. Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing etc. 5. Verification is to check whether the software conforms to specifications. 5. Validation is to check whether software meets the customer expectations and requirements. 6. It can catch errors that validation cannot catch. It is low level exercise. 6. It can catch errors that verification cannot catch. It is High Level Exercise. 7. Target is requirements specification, application and software architecture, high level, complete design, and database design etc. 7. Target is actual product-a unit, a module, a bent of integrated modules, and effective final product. 8. Verification is done developers. 8. Validation is carried out by the testers. 9. It generally comes first-done before validation. 9. It generally follows after verification.  ERROR:  It refers to the difference between the computed result and the expected result.  It is an undesirable deviation from the requirement.  The mistake made by the programmer is an error that is, it represents the mistakes made by people.  IEEE defines error as “human mistake that causes a fault”  Error means to change the functionality of the program.  Error is the terminology of the developer.
 FAULT:
 The actual mistake in the code is a fault.
 It is a condition that causes the software to fail to perform its required function.
 IEEE defines a fault as a "discrepancy in the code that causes failure".
 The fault is basically the original cause of an error.
 It may stay undetected for a long time, until some event activates it.
 BUG:
 It is evidence of a fault in the program.
 A bug is a fault in the program which causes the program to behave in an unintended manner.
 It does not stop the execution of the application, but it produces wrong output.
 "Bug" is the terminology of the tester.
 FAILURE:
 A failure occurs when the fault is executed.
 The inability of the system or component to perform its required functions within the specified performance requirements is a failure.
 If the product fails to fulfill the requirements, it is called a failure.
 IEEE defines a failure as "external behavior is incorrect".
 DEFECT:
 A defect is an error in coding or logic that causes a program to malfunction or to produce incorrect or unexpected results.
 A defect is said to be detected when a failure is observed.
 In software testing, a defect is considered to be anything that can hamper the functioning or execution of the software application.
 Eg: an application hangs or stops responding all of a sudden.
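The terms above can be tied together with a tiny invented example. The programmer's mistake is the error; the wrong operator it leaves in the code is the fault (colloquially, the bug); and the wrong output observed when that code runs is the failure. The add function below is hypothetical and exists only to illustrate these relationships.

```python
# The programmer intended "a + b" but typed "a - b":
#   * the human mistake is the ERROR,
#   * the wrong operator left in the code is the FAULT (the BUG),
#   * the wrong output observed at run time is the FAILURE.
def add(a, b):
    return a - b   # fault: should be a + b

print(add(5, 0))   # prints 5: the fault is executed but masked, no failure is observed
print(add(2, 3))   # prints -1 instead of 5: the fault manifests as a failure
```

Note how the first call illustrates the point that a fault may stay undetected until some input activates it: with b = 0, subtraction and addition happen to agree.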
 TEST CASE:
 A test case is a document which has a set of test data, preconditions, expected results and postconditions, developed for a particular test scenario in order to verify compliance against a specific requirement.
 A test case is a simple pair of <input, expected outcome>.
 A test case is meaningful only if it is possible to decide on the acceptability of the result produced by the program under test.
 The expected outcome is computed before the program is executed with the selected test inputs.
Diagram: Relationship between error, fault, bug, failure and defect
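The <input, expected outcome> view of a test case above is easy to express directly. The sketch below uses Python's standard unittest module; the function under test (absolute_value) and the chosen cases are invented for illustration.

```python
import unittest

def absolute_value(x):
    """Invented function under test."""
    return -x if x < 0 else x

# Each tuple is a test case in the sense above: <input, expected outcome>.
# The expected outcome is computed BEFORE the program is executed.
TEST_CASES = [(-3, 3), (0, 0), (7, 7)]

class AbsoluteValueTests(unittest.TestCase):
    def test_pairs(self):
        for test_input, expected in TEST_CASES:
            with self.subTest(input=test_input):
                self.assertEqual(absolute_value(test_input), expected)

if __name__ == "__main__":
    unittest.main()
```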
 TESTING LEVELS:
Diagram: Testing levels
 Unit Testing:
 It is a level of the software testing process where individual units/components of a software/system are tested.
 The purpose is to validate that each unit of the software performs as designed.
 The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
 Integration Testing:
 It is a level of the software testing process where individual units are combined and tested as a group.
 There are two types of integration testing:
 Top-down integration: In this testing, the highest-level modules are tested first, and progressively lower-level modules are tested after that.
 Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
(A code sketch illustrating the unit and integration levels appears after the acceptance-testing notes below.)
 The purpose of this level of testing is to expose faults in the interaction between integrated units.
 Below are a few types of integration testing:
o Big bang integration testing
o Top-down
o Bottom-up
o Functional incremental
 System Testing:
 It is a level of the software testing process where a complete, integrated system/software is tested.
 The purpose of this test is to evaluate the system's compliance with the specified requirements.
 System testing is important for the following reasons:
o System testing is the first step in the software development life cycle where the application is tested as a whole.
o The application is tested thoroughly to verify that it meets the functional and technical specifications.
o The application is tested in an environment that is very close to the production environment where it will be deployed.
o System testing enables us to test, verify and validate both the business requirements and the application's architecture.
 Acceptance Testing:
 It is a level of the software testing process where a system is tested for acceptability.
 The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.
 This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements.
 Acceptance testing may occur at more than just a single level, for example:
o A Commercial Off-The-Shelf (COTS) software product may be acceptance tested when it is installed or integrated.
o Acceptance testing of the usability of a component may be done during component testing.
o Acceptance testing of a new functional enhancement may come before system testing.
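The two lower levels can be sketched in code. In the hypothetical example below, the unit test isolates parse_amount on its own, while the integration test exercises parse_amount and apply_tax together, which is where a fault in their interaction would surface. Both functions and the 8% tax rate are assumptions made up for this illustration.

```python
import unittest

def parse_amount(text):
    """Lower-level unit (invented): turn '$10.50' into a float."""
    return float(text.strip().lstrip("$"))

def apply_tax(text, rate=0.08):
    """Higher-level unit (invented) that calls parse_amount."""
    return round(parse_amount(text) * (1 + rate), 2)

class UnitLevel(unittest.TestCase):
    # Unit testing: each part is isolated and shown correct on its own.
    def test_parse_amount(self):
        self.assertEqual(parse_amount(" $10.50 "), 10.50)

class IntegrationLevel(unittest.TestCase):
    # Integration testing: units are combined; faults in their interaction
    # (e.g. mishandling of the "$" prefix) are exposed at this level.
    def test_taxed_total(self):
        self.assertEqual(apply_tax("$100"), 108.0)

if __name__ == "__main__":
    unittest.main()
```

In top-down integration, a stub would stand in for parse_amount until the real module is ready; in bottom-up integration, a driver would call parse_amount before apply_tax exists.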
 WHITE BOX TESTING:
 White box testing is the detailed investigation of the internal logic and structure of the code.
 White box testing is also called glass box testing or open box testing.
 In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code.
 The tester needs to look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
 Advantages:
 It helps in optimizing the code.
 Extra lines of code that could bring in hidden defects can be identified and removed.
 Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.
 As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
 Disadvantages:
 Because a skilled tester is needed to perform white box testing, the costs are increased.
 Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
 White box testing is difficult to maintain, as specialized tools like code analyzers and debugging tools are required.
 BLACK BOX TESTING:
 The technique of testing without having any knowledge of the interior workings of the application is black box testing.
 The tester is oblivious to the system architecture and does not have access to the source code.
 Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are worked upon.
 Advantages:
 Well suited and efficient for large code segments.
 Code access is not required.
 Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
 Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language or operating systems.
 Disadvantages:
 Limited coverage, since only a selected number of test scenarios are actually performed.
 Inefficient testing, because the tester has only limited knowledge about the application.
 The test cases are difficult to design.
 Blind coverage, since the tester cannot target specific code segments or error-prone areas.
 DIFFERENCE BETWEEN BLACK BOX AND WHITE BOX TESTING:
(A code sketch contrasting the two approaches follows the comparison tables below.)
 Definition: Black box testing is a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester. White box testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
 Levels applicable to: Black box testing is mainly applicable to higher levels of testing (acceptance testing, system testing). White box testing is mainly applicable to lower levels of testing (unit testing, integration testing).
 Responsibility: Black box testing is generally done by independent software testers; white box testing is generally done by software developers.
 Programming knowledge: Not required for black box testing; required for white box testing.
 Implementation knowledge: Not required for black box testing; required for white box testing.
 Basis for test cases: Requirement specifications for black box testing; detailed design for white box testing.
1. Black box testing has low granularity; white box testing has high granularity.
2. In black box testing the internals are NOT known; in white box testing the internals are fully known.
3. In black box testing the internals are not required to be known; in white box testing the internal code of the application and database is known.
4. Black box testing is also known as opaque box testing, closed box testing, input-output testing, data-driven testing, behavioral testing and functional testing. White box testing is also known as glass box testing, clear box testing, design-based testing, logic-based testing, structural testing and code-based testing.
5. Black box testing is done by end users (user acceptance testing) and also by testers and developers; white box testing is normally done by testers and developers.
6. Black box testing is a method where the system is viewed as a black box, the internal behavior of the program is ignored, and testing is based upon external specifications. In white box testing the internals are fully known.
7. Black box testing is likely to be the least exhaustive of the three; white box testing is potentially the most exhaustive of the three.
8. Black box testing is requirements based: test cases are derived from the functional specifications, as the internals are not known. White box testing has the ability to exercise the code with a relevant variety of data.
9. Being specification based, black box testing does not suffer from the deficiency described for white box testing: since white box test cases are written based on the code, specifications missed out in coding would not be revealed.
10. Black box testing is suited for functional/business domain testing; white box testing is suited for all.
11. Black box testing is not suited to algorithm testing; white box testing is appropriate for algorithm testing.
12. Black box testing is concerned with validating outputs for given inputs, the application being treated as a black box. White box testing facilitates structural testing: it enables logic coverage, coverage of statements, decisions, conditions, paths and control flow within the code.
13. Black box testing can test data domains, internal boundaries and overflow only by trial and error; white box testing can determine, and therefore test better, data domains, internal boundaries and overflow.
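To close out the comparison, here is a small hypothetical sketch of the two approaches applied to the same code. The black-box tests are written purely from a specification ("years divisible by 4 are leap years, except century years not divisible by 400"), while the white-box tests are chosen after reading the implementation so that every branch is exercised. The function is an invented example, not part of any library.

```python
def is_leap_year(year):
    """Invented function under test."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# Black-box tests: derived from the specification alone;
# the tester never looks at the code above.
assert is_leap_year(2024) is True
assert is_leap_year(2023) is False

# White-box tests: chosen by reading the code so that each of the three
# branches (divisible by 400, by 100 only, by 4 only) is executed.
assert is_leap_year(2000) is True    # first branch
assert is_leap_year(1900) is False   # second branch
assert is_leap_year(1996) is True    # third branch
```

The black-box cases would remain valid even if the implementation were rewritten, while the white-box cases would have to be re-derived from the new code, which reflects rows 8 and 9 of the table above.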