IT8076 – Software Testing
Introduction
UNIT I INTRODUCTION
UNIT II TEST CASE DESIGN STRATEGIES
UNIT III LEVELS OF TESTING
UNIT IV TEST MANAGEMENT
UNIT V TEST AUTOMATION
UNIT I INTRODUCTION
Testing as an Engineering Activity – Testing as a Process – Testing
Maturity Model- Testing axioms – Basic definitions – Software
Testing Principles – The Tester’s Role in a Software Development
Organization – Origins of Defects – Cost of defects – Defect Classes –
The Defect Repository and Test Design –Defect Examples-
Developer/Tester Support of Developing a Defect Repository.
UNIT II TEST CASE DESIGN STRATEGIES
Test case Design Strategies – Using Black Box Approach to Test Case
Design – Boundary Value Analysis – Equivalence Class Partitioning
– State based testing – Cause-effect graphing – Compatibility testing
– user documentation testing – domain testing - Random Testing –
Requirements based testing – Using White Box Approach to Test
design – Test Adequacy Criteria – static testing vs. structural testing
– code functional testing – Coverage and Control Flow Graphs –
Covering Code Logic – Paths – code complexity testing – Additional
White box testing approaches- Evaluating Test Adequacy Criteria.
UNIT III LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning –
Designing the Unit Tests – The Test Harness – Running the Unit tests
and Recording results – Integration tests – Designing Integration
Tests – Integration Test Planning – Scenario testing – Defect bash
elimination System Testing – Acceptance testing – Performance
testing – Regression Testing – Internationalization testing – Ad-hoc
testing – Alpha, Beta Tests – Testing OO systems – Usability and
Accessibility testing – Configuration testing –Compatibility testing –
Testing the documentation – Website testing.
UNIT IV TEST MANAGEMENT
People and organizational issues in testing – Organization structures
for testing teams – testing services – Test Planning – Test Plan
Components – Test Plan Attachments – Locating Test Items – test
management – test process – Reporting Test Results – Introducing
the test specialist – Skills needed by a test specialist – Building a
Testing Group- The Structure of Testing Group- .The Technical
Training Program.
UNIT V TEST AUTOMATION
Software test automation – skills needed for automation – scope of
automation – design and architecture for automation – requirements
for a test tool – challenges in automation – Test metrics and
measurements – project, progress and productivity metrics.
Software Testing
UNIT I INTRODUCTION
INTRODUCTION TO TESTING AS
AN ENGINEERING ACTIVITY
UNIT-I
 Software plays an important role in our lives, both
economically and socially.
 There is pressure for software professionals to focus on
quality issues.
 Poor-quality software that can cause loss of life or
property is no longer acceptable to society.
 High-quality software requires well-educated
software professionals.
 Highly qualified staff ensure that software products are
built on time and within budget.
 The highest quality meets attributes such as reliability,
correctness, usability, and the ability to meet all user
requirements.
The Evolving Profession of
Software Testing
Software Development Practices
 The development of high-quality software
includes
 project planning,
 Requirements management,
 development of formal specifications,
 structured design
 inspections and reviews, product and process measures.
 education and training of software professionals.
 development and application of CASE tools.
 use of effective testing techniques.
Process -Definition
 Process, in the software engineering domain, is
the set of methods, practices, standards,
documents, activities, policies, and procedures
that software engineers use to develop and
maintain a software system.
Software Development Process
Validation is the process of evaluating a software
system or component during, or at the end of, the
development cycle in order to determine whether
it satisfies specified requirements.
Verification is the process of evaluating a software
system or component to determine whether the
products of a given development phase satisfy the
conditions imposed at the start of that phase.
1.3 Testing as a Process
Verification: Are we building the product right?
Validation: Are we building the right product?
Testing:
Process that reveals the defects in the software
Ensures the degree of quality.
Testing Activity:
Technical Reviews
Test Plan
Test Tracking
Test Case Design
Unit Test
Integration Test
System Test
Acceptance Test
Usability Test
Debugging, or fault localization, is the process of
 locating the fault or defect,
 repairing the code, and
 retesting the code.
Testing: Basic Definitions
 Errors: An error is a mistake, misconception, or
misunderstanding on the part of a software
developer.
 Faults (Defects)
 A fault (defect) is introduced into the software as
the result of an error. It is an anomaly in the
software that may cause it to behave incorrectly,
and not according to its specification.
TESTING AXIOMS
 Failures
◦ A failure is the inability of a software system or
component to perform its required functions within
specified performance requirements.
◦ Observed by
 Developer
 Tester
 User
TESTING AXIOMS
 A test case in a practical sense is a test-related item which
contains the following information:
1. A set of test inputs. These are data items received from
an external source by the code under test. The external
source can be hardware, software, or human.
2. Execution conditions. These are conditions required for
running the test, for example, a certain state of a database,
or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be
produced by the code under test.
Test Case
Test
A test is a group of related test cases, or a group of
related test cases and test Procedures.
Test Suite
A group of related tests that run together is referred as
test suite.
Test Oracle
A test oracle is a document, or piece of software that
allows testers to determine whether a test has been
passed or failed.
Test Bed
A test bed is an environment that contains all the
hardware and software needed to test a software
component or a software system.
 1. Quality relates to the degree to which a system,
system component, or process meets specified
requirements.
 2. The Quality attributes are measured using
quality metrics.
Metric:
 A metric is a quantitative measure of the degree to
which a system, system component, or process
possesses a given attribute.
Software Quality
 Metrics can be
◦ Product Metrics – Software size
◦ Process Metric – Cost and time required for a given task.
Quality Attributes:
 Correctness:
◦ Degree to which the system performs the intended
function.
 Reliability:
◦ Degree to which the software is expected to perform the
required function under stated condition.
Software Quality
 Usability:
◦ The degree of effort needed to learn, operate, prepare
input and interpret output of the software
 Integrity:
◦ The system’s ability to withstand both intentional and
accidental attacks.
 Portability:
◦ The ability of the software to be transferred from one
environment to another.
Software Quality
 Maintainability:
◦ The effort needed to make changes in the software.
 Interoperability:
◦ The effort needed to link or couple one system to
another.
 Testability:
◦ The amount of effort needed to test the software to
ensure it performs according to the specific requirement.
Software Quality
 The software quality assurance (SQA) group is a
team of people with the necessary training and
skills to ensure that all necessary actions are taken
during the development process so that the
resulting software conforms to established
technical requirements.
 REVIEW:
A review is a group meeting whose purpose is to
evaluate a software artifact or a set of software
artifacts.
Software Quality Assurance
Group
 A principle is defined as
◦ A general or fundamental law, doctrine, or assumption
◦ A rule or code of conduct
◦ The laws or facts of nature underlying the working of an
artificial device.
Software Testing Principles
 Principle 1. Testing is the process of exercising a
software component using a selected set of test
cases, with the intent of (i) revealing defects, and
(ii) evaluating quality.
 Principle 2. When the test objective is to detect
defects, then a good test case is one that has a high
probability of revealing a yet undetected defect(s).
Software Testing Principles
Principle 3. Test results should be inspected meticulously.
Principle 4. A test case must contain the expected output or result.
Principle 5. Test cases should be developed for both valid and invalid input
conditions.
Principle 6. The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in that
component.
Principle 7. Testing should be carried out by a group that is independent of the
development group.
Principle 8. Tests must be repeatable and reusable.
Principle 9. Testing should be planned.
Principle 10. Testing activities should be integrated into the software life cycle.
Principle 11. Testing is a creative and challenging task.
Reveal defects
Find weak points
Find inconsistent behaviour
Find circumstances where the software does
not work as expected.
TESTER’S ROLE
The term defect and its relationship to the
terms error and failure in the context of the
software development domain.
Sources of defects are Education,
Communication, Oversight, Transcription,
Process.
Impact on Software – Failure
Impact from User’s View – Poor quality, user
dissatisfaction .
DEFECTS, HYPOTHESES, AND TESTS
Defects are revealed by forming hypotheses about them.
The hypotheses are used to:
◦ Design test case
◦ Design test procedure
◦ Assemble test sets
◦ Select the testing levels
◦ Evaluate the results of the test.
DEFECTS, HYPOTHESES, AND TESTS
Defect Repository:
◦ Storage and retrieval of defect data from all
projects in a centrally accessible location.
Defect Classification:
◦ To form defect repository, defect should be
classified
Defect Classes:
◦ Four major classes
DEFECTS, HYPOTHESES, AND TESTS
3.2 Defect Classes, the Defect
Repository, and Test Design
Requirement and Specification Defects:
◦ Written in natural language
◦ Ambiguous, contradictory, unclear, Redundant and
Imprecise
Possible Defects:
1. Functional Description Defects
Overall description and behavior is incorrect
2. Feature Defects
Features may be described as distinguishing
characteristics of a software component or
system.
3. Feature Interaction Defects
4. Interface Description Defects
 Design defects occur when system components,
interactions between system components,
interactions between the components and outside
software/hardware, or users are incorrectly
designed.
Design Defects
Algorithmic and Processing Defects
Control, Logic, and Sequence Defects
Data Defects
Module Interface Description Defects
Functional Description Defects
External Interface Description Defects.
Design Defects
 Coding defects are derived from errors in implementing
the code. Coding defects classes are closely related to
design defect classes especially if pseudo code has been
used for detailed design.
 Algorithmic and Processing Defects
 Control, Logic and Sequence Defects
 Typographical Defects
◦ Syntax error
 Initialization Defects
 Data-Flow Defects
 Data Defects
 Module Interface Defects
 Code Documentation Defects
 External Hardware, Software Interfaces Defects
Coding Defects
Defects in test plan, test cases, test harnesses
and test procedures. Two types
Test Harness Defects:
◦ Auxiliary code developed to test a software.
◦ Defect in test harness
Test Case Design and Test Procedure Defects
3.6 Testing Defects
Defect Example
 Functional Description:
 Input and output value – +ve or -ve
 Precondition : no_of_coins >=0;
 Postcondition : no_of_dollars, no_of_cents
>=0
 Interface Description:
 How to interact with user
Requirement Defects
Design Defect:
 Control,logic and Sequencing Defect:
 While loop
 Algorithmic and processing Defects:
 Input valid /invalid
 Data Defect:
 1,5,10,25,25,100
 External interface description defect:
 Message box
Design Defect
Coding Defect
 Control, logic and sequence:
 Loop condition
 Algorithmic and Processing:
 Input valid/invalid
 Data flow:
 total_coin_value not initialized
 Data:
 Error in array initialization
 External interface:
 Scanf statement is wrong
 Code documentation:
 Comment information missing
Coding Defect
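The defect classes above can be made concrete with a small sketch of the coin-conversion program the examples allude to. The coin list and all names here are assumptions for illustration; the comments point back at the defect classes they would fall under.

```python
# Hedged sketch of the coin-conversion example behind the defect discussion.
# The listed data defect was writing 25 twice instead of 50 in this array:
COIN_VALUES = [1, 5, 10, 25, 50, 100]  # coin denominations in cents

def coins_to_dollars_cents(counts):
    """counts[i] is the number of coins of COIN_VALUES[i]; returns (dollars, cents)."""
    total_coin_value = 0  # the data-flow defect above left this uninitialized
    for count, value in zip(counts, COIN_VALUES):
        total_coin_value += count * value
    # postcondition from the requirement example: dollars and cents >= 0
    return divmod(total_coin_value, 100)
```

For example, one 100-cent coin yields (1, 0), and five pennies plus one each of 5, 10, 25, and 50 yields (0, 95).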
Repository development steps
1. Defect classification
2. Collecting defect data from projects
3. Designing forms and templates to collect data
4. Recording each defect after testing
5. Recording the frequency of occurrence of each defect
6. Monitoring the defects in each on-going project
Developer/Tester Support for
developing a Defect Repository
 1. Understanding customer requirement:
 Natural language representation
 Confirmed with customer
2. Peer review:
 Team work
 Review stages:
 1.Requirement Specification Review
 2. Design Review
 3.Code Review
 4.Source code documentation
 5. Use a coding Checklist
 6. Frequent Release
 7.Test and testability
Defect Prevention Strategies
Unit II
Test Case Design
Test Case Design Strategies
The Smart Tester:
Develops effective test cases to maximize use of
time and resources
Impact of effective test cases:
• Greater probability of detecting defects
• Effective use of resources
• Close adherence to testing schedule and budget
• Delivery of high quality s/w product
Test Case Design Strategies
Two basic test case design strategies:
• Black box test strategies
• White box strategies
Test Case Design Strategies
Difference
Black Box Testing
•Opaque box testing
•No knowledge about inner
structure required
•Tests simple module,
member functions,
subsystems or a complete
s/w system.
•Reveals requirement and
specification defects
White Box Testing
•Transparent box testing
•Knowledge about inner
structure required
•Tests statements, true/false
branches..
•Reveals design, control,
logic and sequence defects,
initialization defects and
data flow defects
1. Random Testing
2. Requirements Based Testing
3. Boundary Value Analysis
4. Equivalence Class Partitioning
5. State Based Testing
6. Cause Effect Graphing
7. Compatibility Testing
8. User Documentation Testing
9. Domain Testing
Black Box Approaches
• Inputs selected randomly from input domain
• Saves Time and effort
• Little chances of producing effective test cases
• Example
•Input Domain: all positive numbers
between 1 and 100
•Random inputs : 55, 24, 3
Random testing
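The random-input idea above can be sketched in a few lines; the function name is illustrative, and seeding is added so the "random" run stays repeatable, as a practical test suite would want.

```python
# Minimal sketch of random test-input selection for the example domain
# above: positive numbers between 1 and 100.
import random

def random_test_inputs(low=1, high=100, count=3, seed=None):
    """Select `count` inputs at random from the input domain [low, high]."""
    rng = random.Random(seed)  # a fixed seed makes the random test repeatable
    return [rng.randint(low, high) for _ in range(count)]
```

A run with any seed yields values such as 55, 24, 3 from the example: inputs drawn with no regard to boundaries or partitions, which is why the technique rarely produces effective test cases.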
• Validates that the requirements given in the SRS
(Software Requirement Specification) are
covered by the documented TRS (Test
Requirement Specification).
• Done using RTM (Requirement
Traceability Matrix)
Requirement Based Testing
Requirement Traceability Matrix

Req. ID  Description                          Priority (H/M/L)  Test Condition  Test Case ID  Phase of Testing
BR-01    Inserting the number 123 and         H                 Use key 123     Lock_001      Unit Testing
         turning it clockwise to lock
Test Result

S.No  Req ID  Priority (H/M/L)  Test Case ID  Total Test Cases  Passed  Failed  % Pass  No. of Defects
1     BR-01   H                 LOCK-01       1                 1       0       100     1
 Two major sources of defects.
– Conditions
– Boundaries
 Example – Billing system that offers volume
discount
Boundary Value Analysis
8/29/2023
Defects occur in boundaries (When buying
9,10,11,20,21,29,30,31)
 Reasons for defects.
– Confusion with <= and <
– Confusion with looping
Boundary Value Analysis
Test Case Design

Value to be tested  Why this value should be tested     Expected output
1                   The beginning of the first slab     $5.00
5                   A value in the first slab           $25.00
9                   Just at the end of the first slab   $45.00
10                  End of the first slab               $50.00
11                  Just into the second slab           $54.75
16                  A value in the second slab          $78.50
19                  Just at the end of the second slab  $92.75
20                  End of the second slab              $97.50
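The pricing rule implied by the table (a hedged reconstruction: $5.00 per unit in the first slab of 1-10, $4.75 per unit in the second slab of 11-20; the function name is an assumption) shows why the boundary values matter:

```python
# Reconstructed volume-discount billing rule from the table above.
def bill(quantity):
    if quantity <= 10:
        return 5.00 * quantity                 # first slab: $5.00 per unit
    return 50.00 + 4.75 * (quantity - 10)      # second slab: $4.75 per unit

# The boundary values 9, 10, 11 straddle the slab edge, where a
# <= vs < confusion in the condition above would go undetected
# by values picked from the middle of a slab.
```

Checking bill(10) == 50.00 and bill(11) == 54.75 exercises both sides of the boundary where such defects hide.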
 Technique identifies a different set of inputs that produce
many different output conditions
 Set of input values that generate one single expected
output is called a partition
 When the behavior of the s/w is same for a set of values –
Equivalence class or partition.
 This testing involves
– Identifying all partitions
– Picking up one representative from each partition for
testing
Equivalence Partitioning
Reduces the number of test cases.
A defect in one value in a partition is
extrapolated to all values in that partition.
Redundancy of testing can be reduced.
Advantages
Base premium $0.50.
Additional premium to be paid is based on
the age
Example LIC
Age Group
Additional
Premium
Under 35 $1.65
35-59 $2.87
60+ $6.00
Below 35 Years of age (valid input)
Between 35 and 59 years of age (valid input)
Above 60 years of age (valid input)
Negative age (Invalid input)
Age as 0 (invalid input)
Age as any three digit numbers (valid input)
Equivalence partitions
Test case Design
S.No  Equivalence partition  Type of input  Test data  Expected result
1     Age below 35           Valid          26, 12     Monthly premium 0.50+1.65
2     Age 35-59              Valid          37         0.50+2.87
3     Age above 60           Valid          65, 90     0.50+6.00
4     Negative age           Invalid        -23        Warning message
5     Age as 0               Invalid        0          Warning message
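The premium rule behind the partitions above can be sketched as follows (a minimal sketch: base premium $0.50 plus the age-band additional premium from the table; the function name and the exception message are assumptions). One representative value per partition is then enough to exercise every branch.

```python
# Sketch of the LIC premium example: base $0.50 + age-band additional premium.
def monthly_premium(age):
    if age <= 0:
        raise ValueError("Warning: invalid age")  # invalid partitions: negative, zero
    if age < 35:
        return 0.50 + 1.65   # under 35
    if age <= 59:
        return 0.50 + 2.87   # 35-59
    return 0.50 + 6.00       # 60 and above
```

Testing 26, 37, 65 and -23 (one per partition) covers the same branches as testing every age, which is the redundancy reduction the technique promises.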
Used When Product under test is
– compiler
– Work flow model
– Data flow model
State Based or Graph Based
Testing
Rules for validating a number
1. A number can start with an optional sign.
2. The optional sign can be followed by any number of
digits.
3. The digits can be optionally followed by a
decimal point.
4. If there is a decimal point, there should be
two digits after it.
5. Any number should be terminated by a blank.
Example – Validating a number
State transition diagram
[Diagram: states 1-6. From state 1, a sign (+/-) or a digit leads to state 2; digits loop in state 2; a decimal point leads to state 3; the next two digits lead through states 4 and 5; a blank from state 2 or state 5 leads to the final state 6.]
State transition table

Current State  Input          Next State
1              Digit          2
1              +              2
1              -              2
2              Digit          2
2              Decimal point  3
2              Blank          6
3              Digit          4
4              Digit          5
5              Blank          6
Test Case design
• Start from the start state
• Choose a path that leads to next state
• If invalid input encountered generate
an error condition
• Repeat the process till the final state is
reached
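The test-case design procedure above can be sketched directly from the state transition table; the dictionary keys mirror the table rows, and the helper names are assumptions.

```python
# Number validator driven by the state transition table above
# (states 1-6, state 6 is the accepting, blank-terminated state).
TRANSITIONS = {
    (1, "sign"): 2, (1, "digit"): 2,
    (2, "digit"): 2, (2, "."): 3, (2, " "): 6,
    (3, "digit"): 4,
    (4, "digit"): 5,
    (5, " "): 6,
}

def _classify(ch):
    if ch in "+-":
        return "sign"
    return "digit" if ch.isdigit() else ch

def is_valid_number(text):
    state = 1
    for ch in text:
        state = TRANSITIONS.get((state, _classify(ch)))
        if state is None:      # invalid input: generate an error condition
            return False
    return state == 6          # final state reached only via terminating blank
```

A path-per-test-case walk gives "+12.34 " (valid), "7 " (valid), and "1.2 " (invalid: only one digit after the decimal point).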
Cause and Effect Graphing
• Test cases are derived by combining
conditions.
• Used to reveal inconsistency in the
specifications
• Methodology adapted
• Specifications represented as digital
logic circuit. (graph)
• Graph converted to Decision table
• Test cases developed using the decision
table
Steps Involved
• Specifications decomposed into lower units
• For each unit, cause and effect identified
• A Boolean cause and effect graph created
• Cause in the left hand side and effect in
the right hand side.
• In cause the relationship is represented
using logical operators.
• Graph is annotated using constraints
• Graph converted to decision table
• Column in decision table are transformed
into test cases
Example
[Graph: causes 1 and 2 enter an AND (˄) node whose output is effect 3.]
Effect 3 occurs when 1 and 2 are both present
Example
• Problem: Searching for a character in an
existing string
• Specification states:
•1. Input the string length and the character
to be searched.
•2. If the length of the string is out of range,
an error message occurs.
•3. If the character appears in the string,
its position is reported.
•4. If the character is not in the string, a
"not found" message occurs.
Example
• Input Condition:
• C1 :Length of the string,
•integer
•ranges from 1 to 80
•C2 :Character to be searched,
• single
•character
• Output Condition:
•E1 : Integer out of range
•E2 : Position of the character
•E3 : Character not found
Example
• Rules and Relationship:
•If C1 and C2, then E2
•If C1 and not C2, then E3
•If not C1, then E1
• Output Condition:
•E1 : Integer out of range
•E2 : Position of the character
•E3 : Character not found
Cause and Effect Graph
[Graph: C1 AND C2 → E2; C1 AND NOT C2 → E3; NOT C1 → E1.]
Decision Table
• Existing string: abcde
• Test cases: T1, T2, T3

Inputs  Length  Character to be searched  Output
T1      5       C                         3
T2      5       W                         Not found
T3      90      -                         Integer out of range
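Each decision-table column maps to one test case; a minimal sketch of the specified search (the function name is an assumption, the valid length range 1-80 and the effects E1-E3 come from the specification above):

```python
# Character search per the cause-effect specification:
# E1 = length out of range (valid: 1-80), E2 = position, E3 = not found.
def find_char(text, ch):
    if not (1 <= len(text) <= 80):
        return "Integer out of range"            # effect E1
    pos = text.find(ch)
    return pos + 1 if pos >= 0 else "Not found"  # E2 (1-based position) / E3
```

Running the three columns against "abcde" reproduces the table: position 3 for 'C' (case aside), "Not found" for 'W', and the range error for length 90.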
Advantages:
• Allows thorough inspection of specification
• Any omission, inaccuracies or
inconsistencies can be easily detected
Compatibility Testing:
• Testing the infrastructure parameters.
• Infrastructure Parameters
•Processor
•Architecture
•Resource
•Equipment
•OS
•Middle ware
•Backend
• Permutation and combination of the
parameters should be tested
• A compatibility matrix can be designed
Server              Application Server   Web Server  Client        Browser     MS Office   Mail Server
Windows 2000        Windows 2000         IIS 5.0     Win 2K        IE 6.0 and  Office 2K   Exchange 5.5
Advanced Server     Advanced Server                  Professional  IE 5.5 SP2  and Office  and 2K
with SP4,           with SP4 and .NET                and Win 2K                XP
Microsoft SQL       Framework 1.1                    Terminal
Server 2000                                          Server
with SP3a

Using this table, combinations are created to perform compatibility testing
Common techniques:
• Horizontal Combination:
•Parameters are grouped together in a row
•Product features are tested using each row
• Intelligent Sampling:
•Dependencies between parameters identified
•Using the information an intelligent sample is
made and tested
Further Classification:
• Backward Compatibility Testing:
•Ensuring that the current version remains
compatible with the older version
• Forward compatibility Testing:
•Ensuring the provision for later version
User Documentation Testing
• Objective:
•Check the content stated in the document
•Checking whether the product is explained correctly in
the document
• User Documents
•Manuals
•User Guide
•Installation guide
•Set Up Guide
•Read me file
•Software release notes
•Online help
Benefits
• Highlights the problem overlooked in the
review
• Minimize the possible defects reported by
the customers
• Ensuring that the product is working as per
the documents
Domain Testing
• Need Business domain Knowledge
• Business vertical testing
•Testing the product in the business flow
Example: Cash Withdrawal in
ATM
• User Action
•Step 1: Go to the ATM
•Step 2: Put ATM Card inside
•Step 3: Enter correct pin number
•Step4 : Choose cash withdrawal
•Step 5: Enter the amount
•Step 6: Take the cash
•Step 7: Exit and retrieve the card
• Domain tester should test whether user
actions are performing well
White Box Testing
• Testing the program code
• Clear box, open box or glass box testing
• Testing include
•Program code
•Code structure
•Internal design flow
 Testers need a framework
 for deciding which structural elements to select
as the focus of testing,
 for choosing the appropriate test data, and
 for deciding when the testing efforts are
adequate enough to terminate the process with
confidence that the software is working
properly.
 Such a framework exists in the form of test
adequacy criteria.
Test Adequacy Criteria
The application scope of adequacy criteria also
includes:
(i) helping testers to select properties of a
program to focus on during test;
(ii) helping testers to select a test data set for a
program based on the selected properties;
(iii) supporting testers with the development of
quantitative objectives for testing;
(iv) indicating to testers whether or not testing can
be stopped for that program.
Program Based Adequacy criteria.
 Focus is on structural properties
 Logic and control structure, data flow,
program text
Specification based adequacy criteria.
 Focus is on specification
Classification
 Static Testing:
 Desk Checking
 Code Walk Through
 Code Inspection
 Structural Testing
 Unit/Code functional testing
 Code Coverage
Statement coverage
Path coverage
Condition Coverage
Function Coverage
 Code Complexity
Cyclomatic Complexity
Types of White Box Testing
 Static Testing:
 Test the source code
 Done by humans to ensure
•Code works according to the functional requirements
•Code is written in accordance with the design
•No functionality in the code has been missed out
•Code handles errors properly
 Static testing by humans
 Reading the program code to detect errors
 Advantages
Sometimes humans can find errors that a computer cannot
Multiple reviewers give multiple perspectives, so more problems are identified
Code can be compared with the design
Computer resources can be saved.
Static Testing
 Static testing by human:
 Desk Checking
 Code walk through
 Code review
 Code inspection
Static Testing
 Verifying a portion of the code for correctness
 Done by comparing the code with design and specifications
 Done by programmer before compilation and execution
 Advantages
 Enables the programmer to well understand his/her own code
 Only few scheduling and logistic overhead (done by individual)
• Disadvantages
 Programmer may be tunnel visioned
 Person dependent method
Desk Checking
 Set of people look at the program code and raise questions
 Author gives justification
Code Walk through
 Fagan Inspection
 Detects all faults, violations and other side effects
 Takes place after desk checking and walk through
 Inspection meeting arranged.
 Members of the meeting
• Author of the code
• Moderator who formally run the inspection
• Inspectors who provide review comments
• A Scribe who takes notes
 Defect classified in two dimensions
• Minor/major
• Systematic/misexecution
Formal Inspection
 Run by the computer
 Code, code structure, internal design are tested
 Classification
• Unit/code functional testing
• Code coverage testing
• Code complexity testing
Structural testing
 Quick Check done by the developer
 Methods
• Obvious testing (Known I/p and o/p)
• Debug version ( intermediate statements)
Unit/code functional testing
 Test Cases are designed and executed
 Percentage of code covered is obtained
 Tool
• Instrumentation of code
•Rebuilds the product by linking with set of libraries provided by the tool
vendor.
•Monitors and keep audit of what portion of code are covered
 Types of coverage
• Statement coverage
• Path coverage
• Function coverage
Code coverage Testing
 Program Construct
• Sequential Control flow
• Two way decision statement (if then else)
• Multi way decision statements (switch)
• Loops (while, do while, repeat until and for)
Statement Coverage
 Sequential statements
• test cases designed should run from top to bottom
• If exceptions are encountered, the test case does not cover the entire code
 Two way decision statement (if then else)
• Separate test cases for if, then and else should be written
 Multi way decision statements (switch)
• Multiple test cases for all the possible cases should be written
 Loops (while, do while, repeat until and for)
• Test cases for boundary values written
• Test cases for skipping loop
• Test cases for normal loop operation
Statement Coverage
 Formula
Statement coverage = ((Total statements exercised) / (Total no of executable
statements in program)) *100
Statement Coverage
 Program is split into a number of distinct paths
 Example: Date Validation Routine
Flow Chart in notes
Path coverage
 Testing Boolean Expression
 Test Cases written for both true path and false path
Condition Coverage = ((Total decisions exercised/ Total no of decisions in
program)) *100
Condition coverage
 Used to identify the number of functions covered by test cases
 Test Cases written to exercise different functions
Function Coverage = ((Total functions exercised/ Total no of functions in
program)) *100
Function coverage
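The statement, condition, and function coverage formulas above share one ratio; a tiny helper makes that explicit (the function name is illustrative).

```python
# Generic coverage ratio used by statement, condition, and function coverage:
# (items exercised / total items) * 100.
def coverage_percent(exercised, total):
    if total == 0:
        raise ValueError("nothing to cover")
    return exercised * 100.0 / total
```

For example, 45 of 50 executable statements gives 90% statement coverage, and 6 of 8 decisions gives 75% condition coverage.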
 Complexity of a program – determines the test bound
 Measured by Cyclomatic complexity
 Measured by converting Flow chart into flow graph
 Steps involved
– Identify all predicate (decision) paths
– Complex decisions are broken into simple ones
– All sequential statements are combined into a single node
– Where sequential statements are followed by a predicate check, a predicate node is created;
edges emanate from the nodes
– Ensure all edges terminate at some node
Code Complexity Testing
Code Complexity Testing
 Cyclomatic complexity = E – N +2
 N – Nodes, E -Edges
 Cyclomatic complexity =4-4 +2 = 2
 1-10 Well Written code
 10 -20 Moderate, medium testability
 20-40 Testability is very complex
 > 40 Not Testable
Code Complexity Testing
 Testers are often faced with the decision of which criterion to apply to a given item
under test, given the nature of the item and the constraints of the test environment
(time, cost, resources).
 Testers can use the axioms to
◦ Recognize both strong and weak adequacy criteria; a tester may decide to use a
weak criterion, but should be aware of its weakness with respect to the properties
described by the axioms.
Evaluating Test Adequacy Criteria
 Applicability Property
 Non-exhaustive applicability Property
 Monotonicity Property
 Inadequate Empty Set
 Antiextensionality Property
 General Multiple Change Property
 Antidecomposition Property
 Anticomposition Property
 Complexity Property
 Statement Coverage Property
Properties
Unit III
Levels of Testing
Levels of testing include the different
methodologies that can be used while
conducting Software Testing.
They are:
◦ Unit Test
◦ Integration Test
◦ System Test
◦ Acceptance Test
Levels of Testing
Unit – Small testable software component.
Example: Function, procedure, class, object
Goal: individual s/w unit is functioning
according to its specification
Task Involved
 Unit Test Planning
Design Test Cases
Define the relationship between the tests
Prepare auxiliary code for testing
Unit Testing
Phase I: Describe Unit Test Approach and Risk
Identify test risk
Describe the techniques to be used for designing
test cases
Describe the techniques to be used for data
validation and recording test results
Describe the requirement for test harnesses
Identify the termination condition
Estimates the resources needed for testing
Unit Testing Planning Phases
Phase II: Identifying unit feature to be tested
Identify the input output characteristics
associated with each unit
Phase III: Add levels of details to the plan
Refines the plan produced in the phase 1
and 2
Add new details to the plan
Unit Testing Planning Phases
Use black box and white box test design
strategies to design test cases
Designing the unit test
Supporting code to carry out tests
Contains drivers that call the target code and
stubs that represent the modules it calls
Test Harness
[Diagram: the driver calls the unit under test, passing parameters and receiving the result; the unit under test calls Stub1 and Stub2, each of which returns an acknowledgement.]
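A minimal harness sketch, assuming a hypothetical unit whose two collaborators are not yet available: stubs return canned answers in their place, and the driver calls the unit and checks the result. All names here are illustrative.

```python
# Hypothetical unit under test: depends on two collaborators supplied
# by the harness because the real modules are not yet integrated.
def unit_under_test(x, fetch_rate, log):
    rate = fetch_rate()        # collaborator 1 (stubbed below)
    log(f"rate={rate}")        # collaborator 2 (stubbed below)
    return x * rate

# Stubs stand in for the missing modules.
def stub_fetch_rate():
    return 2                   # canned answer instead of a real lookup

messages = []
def stub_log(msg):
    messages.append(msg)       # records the call instead of real logging

# Driver: calls the target code, passes parameters, checks the result.
result = unit_under_test(21, stub_fetch_rate, stub_log)
```

Here the driver plays the role of the calling module, and each stub both satisfies the unit's calls and lets the tester inspect how it was called.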
Use worksheet to record the result
Running the unit test and
recording the result
Unit Test Worksheet
Unit Name:
Unit Identifier:
Tester:
Date:

Test Case ID | Status (run/not run) | Summary of result (pass/fail)
Goals: Detect defects on the interface
Ensure the system is complete and ready for
system test
Integration Strategies:
Top down
Bottom Up
Bi directional
Integration Test
Start from top most component
Navigate from top to bottom till all
components are covered
Top Down Integration Test
[Component hierarchy: C1 at the top; C2, C3, C4 in the middle; C5, C6, C7, C8 at the bottom.]
Top Down Integration Test
Step  Interface Tested
1     1-2
2     1-3
3     1-4
4     1-2-5
5     1-3-6
6     1-3-6 (3-7)
7     (1-2-5)-(1-3-6-(3-7))
8     1-4-8
9     (1-2-5)-(1-3-6-(3-7))-(1-4-8)
Start from bottom most component
Navigate from bottom to top till all
components are covered
Bottom Up Integration Test
[Component hierarchy: C8 at the top; C5, C6, C7 in the middle; C1, C2, C3, C4 at the bottom.]
Bottom Up Integration Test
Step  Interface Tested
1     1-5
2     2-6, 3-6
3     2-6-(3-6)
4     4-7
5     1-5-8
6     2-6-(3-6)-8
7     4-7-8
8     (1-5-8)-(2-6-(3-6)-8)-(4-7-8)
Combination of both top-down and bottom-up
Components 1, 2, 3, 4, and 5 are tested separately
Interfaces tested: 6-2, 7-3-4
Bi Directional Integration Test
[Component hierarchy: C1 at the top; C6, C7, C8 in the middle; C2, C3, C4, C5 at the bottom.]
Using realistic user activities to test the
product
Two methods
System Scenario
Use-Case Scenario / Role based Scenario
Scenario Testing
System Scenario: Done with the help of system
component
Approaches
Storyline
Life Cycle/State Transition
Deployment/Implementation stories from
customer
Business Verticals
Battle Ground
System Scenario Testing
 Story Line:
Developing stories about various activities of the
product that may be executed by the user.
 Life Cycle/ State transition:
Various transitions / modifications that happen to an
object
 Deployment/ Implementation stories from customer:
 Business verticals: Visualizing the business situations
 Battle ground: try to break the system and verify that the
product does not break
System Scenario
 Procedure that describes how a customer uses the
system
 Users are Actors, product activity is system
behaviour
 Example: Withdrawing cash from bank
Use Case Scenario
Actor: Customer
Agent: Bank official
System response: Computer
Flow: cheque query → response
 People performing different roles in an organization test
the product together at the same time.
 Advantages:
Cross boundaries and test beyond assigned areas
Testing isn't for testers alone
Eat your own dog food
Fresh eyes have less bias
Users of software are not the same
Testing need not wait for all documentation to be ready
Testing isn't to conclude whether the system works or
does not work
Defect Bash Elimination
 Step 1: Choosing the frequency and duration of defect
bash
 Step 2: Selecting the right product build
 Step 3: Communicating the objective of each defect bash to
everyone.
 Step 4: Setting up and monitoring the lab for defect bash
 Step 5: Taking actions and fixing issues
 Step 6: Optimizing the effort involved in defect bash
Defect Bash – Steps Involved
 Testing the complete integrated system
 Evaluate system compliance with its specified
requirements
 Two methodologies
Functional Testing
Non Functional testing
System Testing
Testing Aspect       Functional Testing           Non-Functional Testing
Involves             Product features and         Quality factors
                     functionality
Tests                Product behaviour            Behaviour and experience
Result conclusion    Simple steps written to      Huge data collected
                     check expected results       and tested
Knowledge required   Product and domain           Design, architecture, domain
Testing phase        Unit, component,             System
                     integration, system
 Testing product's functionality and features.
 Common techniques are
Design/architecture verification
Business vertical testing
Deployment testing
Beta testing
Certification, standard and testing for compliance
Functional System Testing
 Design/architecture verification:
 Business vertical testing:
Simulation
Replications
• Deployment testing:
Off-site deployment
On-site deployment
• Beta testing: receiving feedback from customer
• Certification, standards and testing for compliance:
Testing whether the product is certified for popular h/w, s/w,
OS, DB.
Certification agencies provide certification test suites
Functional System Testing
Standards:
•Confirming whether the standards are properly
implemented
 Compliance
•Testing for contractual, legal and statutory compliance
•Compliance with FDA: Food and Drug Administration
•508 accessibility guidelines: physically challenged
Functional System Testing
 Non functional testing achieved by
• Setting up configuration
•Simulated environment
•Real time environment
• Coming up with entry/exit criteria
•Setting limit for parameters
Non Functional System Testing
Type of test   Parameter       Sample entry criteria      Sample exit criteria
Scalability    Maximum limit   Scale up to one million    Scale up to 10 million
                               records                    records
• Balancing key resources
•Product's reaction to the four key resources:
CPU, disk, memory and network
 Verifying quality factors: reliability, scalability, etc.,
• Scalability Testing:
•Maximum capacity of the product
•E.g., the number of client machines that can log into
the server simultaneously
• Reliability Testing:
•Product's ability to perform its required
functions.
•E.g., querying the database continuously for 48 hours
• Stress Testing:
•Overloading the product and finding out its
behaviour.
• Interoperability Testing
•Ensuring that two or more products can exchange
information, use information and work properly
together
 Done by the customer
 Test cases designed by the customer
 Criteria classified as
• Acceptance criteria for product acceptance:
•Requirement comparison
• Acceptance criteria for procedure acceptance
•Procedure followed for delivery
• Acceptance criteria for service level agreement
•Contract signed between the customer and product
organisation
Acceptance Testing
 Test cases Used
•End-to-end Verification
•Domain Tests
•User scenario test
•Basic sanity test
•New functionality
 Executing Acceptance test:
•Done with people having 90 % business process
knowledge and 10 % technical knowledge
•Appropriate training on the product and process
provided.
•Test cases and test data can be collected from the
organization
•Test results are properly recorded
•If major defects are identified, the release is postponed
 Factors governing performance
– Throughput :Capacity of system to handle multiple
transactions
– Response Time: Delay between the point of request and
the first response
– Latency: Delay caused by the application, OS, h/w, etc.
Network latency = N1 + N2 + N3 + N4
Product latency = A1 + A2 + A3
Actual response time = network latency + product
latency
Performance Testing
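A quick worked example of the latency formulas above — the millisecond values for the network hops and application stages are assumed purely for illustration:

```python
# Illustrative numbers (assumed, not from any measurement): network hops
# N1..N4 and application stages A1..A3, all in milliseconds.
network_hops = [20, 15, 25, 10]    # N1..N4
product_stages = [40, 30, 50]      # A1..A3

network_latency = sum(network_hops)     # N1+N2+N3+N4 = 70 ms
product_latency = sum(product_stages)   # A1+A2+A3 = 120 ms
actual_response_time = network_latency + product_latency

print(actual_response_time)  # 190 ms
```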
 Factors governing performance
– Tuning:Process of improving performance by setting
parameter values.
– Benchmarking: Comparing the performance of the
product with standard product
– Capacity Planning: Finding out the resource and
configuration need
Performance Testing
 Steps involved
– Collecting requirement
– Writing test cases
– Automating test cases
– Executing test cases
– Analyzing the result
– Tuning
– Benchmarking
– Capacity Planning (Recommending)
Performance Testing -
Methodology
 Collecting requirements: Performance testing requirements
can be collected by
– Comparing the performance with previously released
products
– Comparing the performance with competitive products
– Comparing the absolute numbers with actual needs
– Deriving the performance numbers from architecture and
design
 Requirement classification
– Generic requirement: Common across all products
– Specific requirement: Specific to the product
Performance Testing -
Methodology
Performance Testing -
Methodology
Transaction           Expected        Loading pattern /       Machine
                      response time   throughput              configuration
ATM cash withdrawal   2 sec           Up to 10,000            Pentium IV, 512 MB RAM,
                                      simultaneous accesses   broadband network
ATM cash withdrawal   4 sec           Up to 10,000            Pentium IV, 512 MB RAM,
                                      simultaneous accesses   dial-up network
 Writing Test cases: A test case should have the following
details
– List of operations to be tested
– Steps to execute those operations
– List of parameters that impact performance and their
values
– Loading pattern
– Resource and their configuration
– Competitive product to be compared
Performance Testing -
Methodology
 Automating performance test cases: Characteristics that
lead to automation
– When the testing is repetitive
– Need for effectiveness
– Result of test should be accurate
– More factors to be considered
– Resource and their configuration
– Account information need to be collected periodically
Performance Testing -
Methodology
 Executing Performance test cases: Details to be recorded
during execution
– Start and end time of test case
– Log and audit file
– Utilization of resources
– Configuration details
– Response time, throughput and latency at regular
intervals
Performance Testing -
Methodology
 Analyzing the result: steps used in analysis
– Calculating mean : Performance test result mean is taken
– Calculating standard deviations
– Removing noise and re-plotting: Values out of range in a
graph are ignored and a smooth graph is produced.
– Differentiating whether the data is from cache or server:
Response time is lower when data comes from the cache
– Differentiating whether the data is using the available resources
completely: Garbage collection/ defragmentation decreases the
performance of the foreground process
• Analysis report will narrow down the parameters
Performance Testing -
Methodology
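The mean / standard-deviation / noise-removal steps above can be sketched as follows. The sample values are invented, and "noise" is taken here to mean points outside mean ± 2 standard deviations:

```python
import statistics

# Response-time samples in ms (made up for illustration); 900 is an
# out-of-range "noise" point that should be dropped before re-plotting.
samples = [210, 215, 212, 208, 214, 900, 211, 209]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)

# Keep only samples within mean +/- 2 standard deviations.
cleaned = [s for s in samples if abs(s - mean) <= 2 * stdev]

print(round(statistics.mean(cleaned), 1))  # mean after removing noise
```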
 Performance Tuning:
– The narrowed-down parameters are tested repeatedly for
different values
– Two steps
•Tuning the product parameter
•Tuning the operating system and parameters
Performance Testing -
Methodology
 Performance Benchmarking: Steps involved
– Identifying the scenarios and configurations
– Comparing it with the competitive product
– Tuning the product performance
– Publishing the result of performance benchmark
Performance Testing -
Methodology
 Capacity Planning:
– Using the performance results, the right configuration
requirements are identified
– Classified as
•Minimum requirement: with anything less than this
configuration, the product will not work
•Typical requirement: the product works fine in this
configuration
•Special configuration: future requirements
Performance Testing -
Methodology
 Two types of Tools:
– Functional performance tools
•WinRunner from Mercury
•QA Partner from Compuware
•SilkTest from Segue
– Load testing tools
•LoadRunner from Mercury
•QA Load from Compuware
•Silk Performer from Segue
Performance Testing - Tools
 Ensures all changes made work as designed
 Ensures the changes do not break something that is
already working
 Selective retesting:
– Reuse of existing test cases to test the impacted areas.
• Build:
– Source code of the complete product and the fixed
defects along with existing features get consolidated into
a build.
• Types
– Regular regression testing: Done between test cycles
– Final regression testing: Done at the end to validate the
build.
Regression Testing
 Whenever changes are made regression test should be
done
 Steps
– Performing initial smoke or sanity test
– Understanding the criteria for selecting test cases
– Classifying the test cases into different priority
– A methodology for selecting the test cases
– Resetting the test cases for test execution
– Concluding the result
Regression Testing
 Performing an initial “smoke” or sanity test:
Smoke test:
Identifying the basic functionality of the product
Design smoke test suite. (Test cases to ensure the
working of the functionality of the product)
Ensure the suite runs every time when changes
made
Failure reported to developer
Regression Testing
 Criteria for selecting test cases:
Test cases that produce maximum defects
Test cases for those function where changes are
made
Test cases that test end to end behaviour
Test cases that test the positive test conditions
 Classifying test cases:
 Three categories
Priority 0: sanity test cases – checks functionality
Priority 1: Test cases which use basic and normal set
up
Priority 2: Executed as a part of the testing cycle
Regression Testing
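A minimal sketch of priority-based selection — the test-case names and the cycle-to-priority mapping below are illustrative assumptions, not a fixed rule:

```python
# Priority 0 (sanity) always runs; Priority 1 is added for a regular
# regression cycle; Priority 2 only for a full/final regression.
test_cases = [
    {"id": "TC-01", "priority": 0, "name": "login sanity"},
    {"id": "TC-02", "priority": 1, "name": "transfer funds"},
    {"id": "TC-03", "priority": 2, "name": "report export"},
]

def select(cases, cycle):
    # Map the regression cycle to the highest priority number it includes.
    max_priority = {"smoke": 0, "regular": 1, "final": 2}[cycle]
    return [c["id"] for c in cases if c["priority"] <= max_priority]

print(select(test_cases, "regular"))  # ['TC-01', 'TC-02']
```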
 Resetting the test cases
Reset Procedure: A method to indicate that some of
the test cases are selected for regression.
A flag in the TCDB indicates whether a test case is
to be executed or not.
 Concluding the Result:
Comparison with previous results
Regression Testing
Regression      Previous   Conclusion   Remarks
test result     results
Fail            Pass       Fail         Need to improve the test process
Pass            Fail       Pass         Defect fixes work properly
Fail            Fail       Fail         Defect fixes not working
Pass            Pass       Pass         Defect fixes done properly with no side effects
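The conclusion logic in the comparison above fits naturally in a lookup keyed by (current result, previous result) — a sketch:

```python
# Regression conclusion table as a lookup:
# (current run result, previous run result) -> remark.
conclusions = {
    ("fail", "pass"): "Need to improve the test process",
    ("pass", "fail"): "Defect fixes work properly",
    ("fail", "fail"): "Defect fixes not working",
    ("pass", "pass"): "Defect fixes done properly with no side effect",
}

print(conclusions[("pass", "fail")])  # prints "Defect fixes work properly"
```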
 Language:
A language is a tool used for communication. In
internationalization, the focus is mostly on human
languages, not computer languages.
ASCII:
American Standard Code for Information
Interchange. It is a byte representation for characters
that is used in computers.
Double-Byte Character Set:
English characters are represented in a single byte,
whereas Chinese and Japanese characters are
represented in two bytes, called DBCS.
Internationalization Testing
 Unicode:
Provides a unique number for every character,
irrespective of platform
Locale:
Subset of a language that defines its behaviour in
different countries. (e.g.) English is common to the US and
India, but currency and punctuation differ.
Internationalization (I18n):
Defines all activities to make the s/w available for the
international market.
There are 18 characters between the 'I' and the 'n'.
The testing done for internationalization is I18n
Testing.
Internationalization Testing
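The abbreviation and the single-byte/multi-byte distinction above can be checked directly in Python:

```python
# Why "I18n": 18 letters between the leading "I" and the trailing "n".
word = "Internationalization"
print(len(word) - 2)  # 18

# Single-byte vs multi-byte: ASCII English characters take one byte
# each in UTF-8, while Japanese characters need more than one.
print(len("test".encode("utf-8")))    # 4 bytes for 4 characters
print(len("テスト".encode("utf-8")))   # 9 bytes for 3 characters
```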
 Localization(L10n):
Converting into a target language.
Globalization(G11n):
Means Internationalization
Internationalization Testing
 Done by test engineers, developers and the localization team
Test Activities done by test engineers:
Enabling Testing
Locale Testing
I18n Testing and validation
Fake language testing
Language testing
Localization testing
Test activities done by developers:
Enable the code
Internationalization Testing
Phases
 Testing activities done by localization team
Message Consolidation
Message Translation
Include message into the product
Internationalization Testing
Phases
Internationalization Testing
Phases
Enable the Code
1. Enable testing
2. Locale Testing
3. I18n testing and validation
4. Fake language testing.
5. Language Testing
Release English
version
Message consolidation
Message Translation
Include message into the product
6. Localization Testing
Release International
version
Internationalization Testing
Phases
Enabling Testing Unit Testing
I18n Validation
Locale Testing
Component Testing
Fake Language Testing Integration Testing
Localization Testing System Testing
Acceptance Testing
 Ensures that the source code allows internationalization
Check list for enabling testing
Check the code for API/function calls
Check the code for hard-coded date, currency formats,
ASCII codes
Check the code to see that no computation is done
using date variables
Ensure that dialog boxes and screens have 0.5
times more space for expansion
Ensure region/culture-based messages and slang are
not in the code
Enabling Testing
 Using system setting or environment variables for
different locale and testing the functionality, number,
date, time and currency format is called locale testing.
Checklist for locale testing
All features applicable to I18n are tested with
different locale
Hard keys, function keys and help screens are tested
with different locale
Ensure date, time, currency, number format are in
line with different locale
Locale Testing
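A small sketch of a locale-format check like the checklist describes. The per-locale date formats are hard-coded here for illustration rather than read from the OS locale database:

```python
from datetime import date

# Illustrative date-format conventions per locale (assumed, not pulled
# from the system locale settings).
conventions = {
    "en_US": "%m/%d/%Y",   # 08/29/2023
    "en_GB": "%d/%m/%Y",   # 29/08/2023
    "de_DE": "%d.%m.%Y",   # 29.08.2023
}

d = date(2023, 8, 29)
for loc, fmt in conventions.items():
    # The same date must render per each locale's convention.
    print(loc, d.strftime(fmt))
```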
 Done with the objective
S/w tested for ASCII, DBCS and European
characters
S/w handles the string operations, sorting done as per
the character selected
Checklist
The functionality is same in all locale
Sorting, sequencing are done as per the convention
of the language
Non-ASCII characters appear exactly as they were
entered
Cut, copy and paste of non-ASCII characters
retain their style.
Internationalization Validation
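A round-trip check like the checklist describes might look as follows — the store/fetch function is a stand-in for the real persistence path (e.g. writing to and reading from a file or database):

```python
# Sketch of an I18n validation check: non-ASCII input should survive a
# store-and-retrieve round trip unchanged, and string operations
# should not corrupt it.
def store_and_fetch(text):
    # Stand-in for a save/load path: encode to bytes and decode back.
    data = text.encode("utf-8")
    return data.decode("utf-8")

for sample in ["café", "Straße", "日本語"]:
    assert store_and_fetch(sample) == sample               # round trip intact
    assert store_and_fetch(sample.upper()) == sample.upper()

print("non-ASCII round trip: pass")
```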
 A fake-language translator is used to catch translation and
localization issues early.
S/w tested for ASCII, DBCS and European
characters
Checklist
Ensure s/w is tested for at least one European single-
byte fake language (Pig Latin)
Ensure s/w is tested for at least one double-byte fake
language (Wide Roman)
Ensuring size of screen, dialog box, pop up menus
are adequate for fake language
Fake Language Testing
 Ensuring software created in English can work with
platform and environment that are non English.
Done for one European language, one double byte
representation language and one default language
(English)
Language Compatibility Testing
(Diagram: an English server and a German server, each tested against E, G, and J language clients.)
 The messages are consolidated from the release-ready s/w
and sent to a multilingual expert for translation
 After translation the s/w functionality is tested to ensure
that it works as expected in the localized environment
Checklist
 Messages, documents, pictures, menus reflect the
convention of the country, locale and language
 Sorting and case conversions are right as per language
conventions
•China, Cyberia in English
•Cyberia, China in Spanish
Localization Testing
Checklist
Font sizes and hot keys are working correctly in the
translated messages, documents and screens
Filtering and searching capacities work as per the
language
Address, Ph no, numbers and postal codes are as per
the convention of the target language
Localization Testing
8/29/2023
UNIT IV
People and Organizational
issues in Testing
Classified into
Product Organization
Service Organization
Single site
Multi site
Organization
Organization Structures
Single Product Company
CTO
CEO
Product
Business Unit
Support
Product
Management
Product
Delivery
Testing Team Structures
Single Product Company
(Diagram: a project manager leading several teams of engineers, each team both developing and testing.)
Exploits the rear loading nature of testing activities
Enables engineers to gain experience in all aspects of
life cycle
Some defects may be detected early
Advantages
Accountability for testing and quality reduces
Effectiveness of testing reduces
Schedule pressures generally compromise testing
Disadvantages
Organization Structures
Multi Product Company
(Diagram: a project manager overseeing component managers; each component team both develops and tests its component.)
How tightly the product is coupled with the
technology
Dependencies among the products
Similarity among the customers of the products
Based on these factors the test team can be organized
as
 A central team for the whole organization
 One test team for each product
 Different test team for each product
 Different test team for different test
 A hybrid of all the above
Factors for organizing Test Team
 Test team directly reports to the CTO before design
and development starts
Advantages
 The product design and architecture is decided
keeping test requirement in mind
 Testing team can provide input in selecting the
technology
 Testing team can better understand the design and
architecture
Test Team as a part of CTO’s
office
 The product is tested by a single team
Since the product is developed by different teams,
the testing is reported at different levels
Single Test Team for all products
 Responsibility is given to business unit
Business unit organizes the developer and test team
Test team organized by product
 Separate testing team is assigned for each phase
Each test team reports to different groups
Separate test team for different
phases of testing
Type of test           Reports to
White box              Development team
Black box              Testing team
Integration            Organization-wide team
System                 Product management
Performance            Benchmark group
Acceptance             Product management
Internationalization   Internationalization and localization team
Regression             All test teams
 Testing activity is outsourced to external companies
Advantages
 Testing is a very specialized function
 Variety of tools available (need specialization)
 Developers cannot be experts in testing
Testing Service Organization
 Testing: product organization vs Service
organization
•Service organizations are at arm's length from
product organizations
•Service organizations do not provide the full breadth
of coverage
•Service organizations tend to be more homogeneous
Difference
 Roles and Responsibilities
•Made up of a number of accounts
•Accounts are managed by account managers
•Each account is responsible for a customer
Functions of account manager
•Act as single point of contact between the
customer and test organization
•Develops rapport with the customer
•Participates in all communication between
customer and organization
•Acts as a proxy for the customer
Test Service Organization
Test Service Organization
CEO
Account Manager
Customer contact
Near Site Team Remote team
Test Lead
Test Engineer
Challenges and Issues
•Testing team is an outsider.
•Cannot develop domain expertise for all the
customers
•Privacy and physical isolation not possible
Success factors
•Establishing communication with the developer
•Bringing customer expectation in testing
•Using appropriate tool and environment
•Periodic skill upgrade
Test Service Organization
A plan is a document that provides a framework for
achieving a goal.
The test plan is provided with
•Overall test objective
•What to test
•Who will test
•How to test
•When to test
•When to stop
Test Planning
Test Plan Hierarchy
SQA
Master Plan Review Plan
Unit Test Plan Integration Plan System Test Plan Acceptance Plan
Test Plan Components
Test Plan Components
1. Test Plan Identifier
2. Introduction
3. Items to be tested
4. Features to be tested
5. Approach
6. Pass/Fail Criteria
7. Suspension and resumption criteria
8. Test Deliverable
9. Testing Tasks
10. Testing Environment
11. Responsibilities
12. Staffing and training records
13. Schedule
14. Risks and contingencies
15. Testing cost
16. Approvals
Test Design Specification
Test Case Specification
Test Procedure Specification
Test Plan Attachment
Test Design Specification Identifier
Features to be tested
Approach refinement
Test Case Identifier
Pass/Fail criteria
Test Design Specification
Test case Specification Identifier
Test items
Input Specification
Output Specification
Special environment needs
Special procedural requirement
Inter case Dependencies
Test Case Specification
Test procedure Specification Identifier
Purpose
Specific requirement
Procedure steps
Test Procedure Specification
Test item transmittal report – to locate items
Contains information
–Version / revision number of the items
–Location of the items
–Persons responsible for the items
–Status of the items
–approvals
Locating Test Items
Test plan management planning aspects
–Choice of standards
–Test Infrastructure Management
–Test People Management
–Integration with product release
Test Management
External Standards and Internal standards
Internal Standards
–Naming and storage conventions for test artifacts
–Document standards
»Appropriate header comments
»Sufficient in line comments
»Up to date change history information
–Test coding standards (e.g., initialization,
reusability)
–Test reporting standards
Choice of Standards
Three elements
–A test case database (TCDB)
–Defect Repository
–SCM (Software Configuration Management)
Test Infrastructure Management
TCDB
Test Infrastructure Management
Entity                 Purpose                               Attributes
Test case              Records all the static                Test case id, test case name,
                       information about the tests           test case owner, associated files
Test case run history  Gives the history of when a           Test case id, run date,
                       test ran                              time taken, run status
Defect repository
Test Infrastructure Management
Entity               Purpose                            Attributes
Defect details       Records all the static             Defect id, defect priority,
                     information about the defects      defect description, affected
                                                        products, any relevant information
Defect test details  Provides details of test cases     Defect id, test case id
                     for a given defect
SCM
–Keeps track of change control and version control of
all the files/entities that make up a software product
Test Infrastructure Management
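The two TCDB entities above can be sketched as simple records. Field names follow the attribute lists above; the concrete types and example values are assumptions for illustration:

```python
from dataclasses import dataclass, field

# Minimal sketch of the TCDB entities: one record type per entity.
@dataclass
class TestCase:
    # Static information about a test.
    test_case_id: str
    name: str
    owner: str
    associated_files: list = field(default_factory=list)

@dataclass
class TestCaseRun:
    # One entry in the run history of a test case.
    test_case_id: str
    run_date: str
    time_taken_s: float
    run_status: str   # "pass" / "fail"

tc = TestCase("TC-01", "login sanity", "tester1")
run = TestCaseRun("TC-01", "2023-08-29", 12.5, "pass")
print(run.run_status)  # pass
```

In a real TCDB these records would live in database tables joined on the test case id.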
Done by project manager
–Hiring
–Motivating
–Retaining
Test People Management
Test schedule and product release should be integrated
Development and testing team should involve in release
Integrating with product release
Process of arriving at a template after combining all the
information in a test plan
Includes
–Test case specification
–Update of traceability matrix
–Identifying proper candidate for automation
–Developing and base lining test cases
–Executing test cases
–Collecting and analyzing metrics
–Preparing test summary reports
–Recommending product release criteria
Test Process
Documents to be prepared during and after execution of
the tests
–Test log
–Test incident report
–Test summary report
Reporting Test Result
Diary of the events that take place during testing
–Test log identifier
–Description
–Activity and event entries
»Execution description
»Procedure results
»Environmental information
»Anomalous events
»Incident report identifier
Test Log
Problem report
–Test incident report identifier
–Summary
–Incident description
–Impact
Test Incident report
Complete report
–Test summary report identifier
–Variances
–Comprehensive assessment
–Summary of result
–Evaluation
–Summary of activities
–Approvals
Test Summary report
 This type of testing is performed by the developers
before the setup is handed over to the testing team to
formally execute the test cases.
 Unit testing is performed by the respective developers
on the individual units of source code in their assigned
areas. The developers use test data that is separate from the
test data of the quality assurance team.
 The goal of unit testing is to isolate each part of the
program and show that individual parts are correct in
terms of requirements and functionality.
6.1 Unit Test
Testing cannot catch each and every bug in an
application. It is impossible to evaluate every
execution path in every software application. The
same is the case with unit testing.
There is a limit to the number of scenarios and test
data that the developer can use to verify the source
code. After all options are exhausted, there is no
choice but to stop unit testing and merge the code
segment with other units.
6.2 Limitation of unit testing
 The testing of combined parts of an application to
determine if they function correctly together is
Integration testing. There are two methods of doing
Integration Testing: bottom-up integration testing and
top-down integration testing.
 The process concludes with multiple tests of the
complete application, preferably in scenarios designed
to mimic those it will encounter in customers'
computers, systems and network.
6.3 Integration Test
SNO. Integration Testing Method
1 Bottom-up integration
This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called
modules or builds.
2 Top-Down integration
This testing, the highest-level modules are tested first and
progressively lower-level modules are tested after that.
 This is the next level in the testing and tests the system
as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it
meets Quality Standards. This type of testing is
performed by a specialized testing team.
 System testing is so important because of the following
reasons:
 System Testing is the first level in the Software
Development Life Cycle at which the application is
tested as a whole.
 The application is tested thoroughly to verify that it
meets the functional and technical specifications.
6.4 System Testing
 Functional testing
 Performance testing
 Stress testing
 Configuration testing
 Security testing
 Recovery testing
6.4.1 Types of System Testing
 This is arguably the most important type of testing, as it is conducted by
the Quality Assurance Team, who will gauge whether the application meets
the intended specifications and satisfies the client's requirements. The QA
team will have a set of pre-written scenarios and Test Cases that will be
used to test the application.
 More ideas will be shared about the application and more tests can be
performed on it to gauge its accuracy and the reasons why the project was
initiated. Acceptance tests are not only intended to point out simple spelling
mistakes, cosmetic errors or Interface gaps, but also to point out any bugs
in the application that will result in system crashes or major errors in the
application.
 By performing acceptance tests on an application the testing team will
deduce how the application will perform in production. There are also legal
and contractual requirements for acceptance of the system.
6.5 Acceptance Testing
 This test is the first stage of testing and will be
performed amongst the teams (developer and QA
teams). Unit testing, integration testing and system
testing when combined are known as alpha testing.
During this phase, the following will be tested in the
application:
 Spelling Mistakes
 Broken Links
 Cloudy Directions
 The Application will be tested on machines with the
lowest specification to test loading times and any
latency problems.
6.6 Alpha testing
Beta test is performed after Alpha testing has been
successfully performed. In beta testing a sample of the
intended audience tests the application. Beta testing is
also known as pre-release testing. In this phase the
audience will be testing the following:
Users will install, run the application and send their
feedback to the project team.
Typographical errors, confusing application flow, and
even crashes.
Using this feedback, the project team can fix the
problems before releasing the software to the actual
users.
6.7 Beta Testing
The development, documentation, and
institutionalization of goals and related policies is
important to an organization.
The goals/policies may be business-related,
technical, or political in nature.
They are the basis for decision making; therefore
setting goals and policies requires the
participation and support of upper management.
CHAPTER-7
Testing Goals, Policies, Plans and
Documentation
 1.Business goal: to increase market share 10% in the
next 2 years in the area of financial software.
 2. Technical goal: to reduce defects by 2% per year
over the next 3 years.
 3. Business/technical goal: to reduce hotline calls by
5% over the next 2 years.
 4. Political goal: to increase the number of women and
minorities in high management positions by 15% in the
next 3 years.
7.1 Testing Goals
 A plan is a document that provides a framework or
approach for achieving a set of goals.
 Test plans for software projects are very complex and
detailed documents.
 The planner usually includes the following essential
high-level items.
 Overall test objectives.
 What to test (scope of the tests).
 Who will test
 How to test
 When to test
 When to stop testing
7.2 Test Planning
7.3 Test Plan Components
Test Plan Attachments
 Test Item Transmittal Report is not a component of the
test plan, but is necessary to locate and track the items
that are submitted for test. Each Test Item Transmittal
Report has a unique identifier.
 It should contain the following information for each
item that is tracked.
 Version/revision number of the item;
 Location of the item;
 Persons responsible for the item (e.g., the developer);
 References to item documentation and the test plan it is
related to;
 Status of the item;
7.4 Locating Test Items
 The test plan and its attachments are test-related
documents that are prepared prior to test execution.
There are additional documents related to testing that
are prepared during and after execution of the tests.
 The test log should be prepared by the person
executing the tests. It is a diary of the events that take
place during the test.
 Test Log Identifier
Each test log should have a unique identifier.
7.5 Reporting Test Results
7.6 The Role of the Three Critical Groups in Testing
Planning and Test Policy Development
 Working with management to develop testing and debugging policy
and goals.
 Participating in the teams that oversee policy compliance and
change management.
 Familiarizing themselves with the approved set of testing/debugging
goals and policies, keeping up-to-date with revisions, and making
suggestions for changes when appropriate.
 When developing test plans, setting testing goals for each project at
each level of test that reflect organizational testing goals and
policies.
 Carrying out testing activities that are in compliance with
organizational policies.
7.7 Tasks of Developer
THANK YOU
IT8076 – Software Testing Intro
  • 1. IT8076 – Software Testing Introduction
  • 2.
  • 3.
  • 4. UNIT I INTRODUCTION UNIT II TEST CASE DESIGN STRATEGIES UNIT III LEVELS OF TESTING UNIT IV TEST MANAGEMENT UNIT V TEST AUTOMATION
  • 5. UNIT I INTRODUCTION Testing as an Engineering Activity – Testing as a Process – Testing Maturity Model- Testing axioms – Basic definitions – Software Testing Principles – The Tester’s Role in a Software Development Organization – Origins of Defects – Cost of defects – Defect Classes – The Defect Repository and Test Design –Defect Examples- Developer/Tester Support of Developing a Defect Repository.
  • 6. UNIT II TEST CASE DESIGN STRATEGIES Test case Design Strategies – Using Black Box Approach to Test Case Design – Boundary Value Analysis – Equivalence Class Partitioning – State based testing – Cause-effect graphing – Compatibility testing – user documentation testing – domain testing - Random Testing – Requirements based testing – Using White Box Approach to Test design – Test Adequacy Criteria – static testing vs. structural testing – code functional testing – Coverage and Control Flow Graphs – Covering Code Logic – Paths – code complexity testing – Additional White box testing approaches- Evaluating Test Adequacy Criteria.
  • 7. UNIT III LEVELS OF TESTING The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit Tests – The Test Harness – Running the Unit tests and Recording results – Integration tests – Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash elimination System Testing – Acceptance testing – Performance testing – Regression Testing – Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO systems – Usability and Accessibility testing – Configuration testing –Compatibility testing – Testing the documentation – Website testing.
  • 8. UNIT IV TEST MANAGEMENT People and organizational issues in testing – Organization structures for testing teams – testing services – Test Planning – Test Plan Components – Test Plan Attachments – Locating Test Items – test management – test process – Reporting Test Results – Introducing the test specialist – Skills needed by a test specialist – Building a Testing Group- The Structure of Testing Group- .The Technical Training Program.
  • 9. UNIT V TEST AUTOMATION Software test automation – skills needed for automation – scope of automation – design and architecture for automation – requirements for a test tool – challenges in automation – Test metrics and measurements – project, progress and productivity metrics.
  • 11. UNIT I INTRODUCTION Testing as an Engineering Activity – Testing as a Process – Testing Maturity Model- Testing axioms – Basic definitions – Software Testing Principles – The Tester’s Role in a Software Development Organization – Origins of Defects – Cost of defects – Defect Classes – The Defect Repository and Test Design –Defect Examples- Developer/Tester Support of Developing a Defect Repository.
  • 12. INTRODUCTION TO TESTING AS AN ENGINEERING ACTIVITY UNIT-I
  • 13.  Software plays an important role in our lives, both economically and socially.  There is pressure on software professionals to focus on quality issues.  Poor-quality software that can cause loss of life or property is no longer acceptable to society.  High-quality software calls for well-educated software professionals.  Highly qualified staff ensure that software products are built on time and within budget.  The highest-quality software meets attributes such as reliability, correctness, usability, and the ability to meet all user requirements. The Evolving Profession of Software Testing
  • 14. Software Development Practices  The development of high-quality software includes  project planning,  Requirements management,  development of formal specifications,  structured design  inspections and reviews, product and process measures.  education and training of software professionals.  development and application of CASE tools.  use of effective testing techniques.
  • 15. Process -Definition  Process, in the software engineering domain, is the set of methods, practices, standards, documents, activities, policies, and procedures that software engineers use to develop and maintain a software system.
  • 17. Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements. Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. 1.3 Testing as a Process Verification: Are we building the product right? Validation: Are we building the right product?
  • 18. Testing: Process that reveals the defects in the software Ensures the degree of quality. Testing Activity: Technical Reviews Test Plan Test Tracking Test Case Design Unit Test Integration Test System Test Acceptance Test Usability Test
  • 19. Debugging, or fault localization is the process of  locating the fault or defect, repairing the code, retesting the code Testing:
  • 20.  Basic Definition  Errors: An error is a mistake, misconception, or misunderstanding on the part of a software developer.  Faults (Defects)  A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification. TESTING AXIOMS
  • 21.  Failures ◦ A failure is the inability of a software system or component to perform its required functions within specified performance requirements. ◦ Observed by  Developer  Tester  User TESTING AXIOMS
  • 22.  A test case in a practical sense is a test-related item which contains the following information: 1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human. 2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device. 3. Expected outputs. These are the specified results to be produced by the code under test. Test Case
  • 23. Test A test is a group of related test cases, or a group of related test cases and test procedures. Test Suite A group of related tests that run together is referred to as a test suite. Test Oracle A test oracle is a document or piece of software that allows testers to determine whether a test has passed or failed. Test Bed A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.
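The three parts of a test case defined on slide 22 (test inputs, execution conditions, expected outputs) can be captured in a simple record; a minimal Python sketch in which the field names and the coin-machine sample values are illustrative, not from the slides:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    inputs: dict                    # data items received from an external source
    execution_conditions: dict = field(default_factory=dict)  # e.g. required database state
    expected_output: object = None  # specified results from the code under test

# Hypothetical example for a coin-to-dollars conversion routine.
tc = TestCase(inputs={"no_of_coins": 4, "coins": [25, 25, 10, 5]},
              execution_conditions={"machine_state": "ready"},
              expected_output={"dollars": 0, "cents": 65})
```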
  • 24.  1. Quality relates to the degree to which a system, system component, or process meets specified requirements.  2. The Quality attributes are measured using quality metrics. Metric:  A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. Software Quality
  • 25.  Metrics can be ◦ Product Metrics – Software size ◦ Process Metric – Cost and time required for a given task. Quality Attributes:  Correctness: ◦ Degree to which the system performs the intended function.  Reliability: ◦ Degree to which the software is expected to perform the required function under stated condition. Software Quality
  • 26.  Usability: ◦ The degree of effort needed to learn, operate, prepare input for, and interpret output of the software  Integrity: ◦ The system’s ability to withstand both intentional and accidental attacks.  Portability: ◦ The ability of the software to be transferred from one environment to another. Software Quality
  • 27.  Maintainability: ◦ The effort needed to make changes in the software.  Interoperability: ◦ The effort needed to link or couple one system to another.  Testability: ◦ The amount of effort needed to test the software to ensure it performs according to the specific requirement. Software Quality
  • 28.  The software quality assurance (SQA) group is a team of people with the necessary training and skills to ensure that all necessary actions are taken during the development process so that the resulting software conforms to established technical requirements.  REVIEW: A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts. Software Quality Assurance Group
  • 29.  A principle is defined as ◦ A general or fundamental law, doctrine, or assumption. ◦ A rule or code of conduct. ◦ The laws or facts of nature underlying the working of an artificial device. Software Testing Principles
  • 30.  Principle 1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.  Principle 2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet undetected defect(s). Software Testing Principles
  • 31. Principle 3. Test results should be inspected meticulously. Principle 4. A test case must contain the expected output or result. Principle 5. Test cases should be developed for both valid and invalid input conditions. Principle 6. The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component. Principle 7. Testing should be carried out by a group that is independent of the development group. Principle 8. Tests must be repeatable and reusable. Principle 9. Testing should be planned. Principle 10. Testing activities should be integrated into the software life cycle. Principle 11. Testing is a creative and challenging task.
  • 32. Reveal Defects Find weak Points Find inconsistent behaviour Find Circumstance where the software does not work as expected. TESTER’S ROLE
  • 33. The term defect and its relationship to the terms error and failure in the context of the software development domain. Sources of defects are Education, Communication, Oversight, Transcription, Process. Impact on Software – Failure Impact from User’s View – Poor quality, user dissatisfaction . DEFECTS, HYPOTHESES, AND TESTS
  • 34. Defects are revealed by using hypotheses. Hypotheses are used to: ◦ Design test cases ◦ Design test procedures ◦ Assemble test sets ◦ Select the testing levels ◦ Evaluate the results of the test. DEFECTS, HYPOTHESES, AND TESTS
  • 35. Defect Repository: ◦ Storage and retrieval of defect data from all projects in a centrally accessible location. Defect Classification: ◦ To form defect repository, defect should be classified Defect Classes: ◦ Four major classes DEFECTS, HYPOTHESES, AND TESTS
  • 36. 3.2 Defect Classes, the Defect Repository, and Test Design
  • 37. Requirement and Specification Defects: ◦ Written in natural language ◦ Ambiguous, contradictory, unclear, Redundant and Imprecise Possible Defects: 1 . Functional Description Defects. Overall description and behavior is incorrect 2 . Feature Defects Features may be described as distinguishing characteristics of a software component or system. 3 . Feature Interaction Defects 4 . Interface Description Defects
  • 38.  Design defects occur when system components, interactions between system components, interactions between the components and outside software/hardware, or users are incorrectly designed. Design Defects
  • 39. Algorithmic and Processing Defects Control, Logic, and Sequence Defects Data Defects Module Interface Description Defects Functional Description Defects External Interface Description Defects. Design Defects
  • 40.  Coding defects are derived from errors in implementing the code. Coding defects classes are closely related to design defect classes especially if pseudo code has been used for detailed design.  Algorithmic and Processing Defects  Control, Logic and Sequence Defects  Typographical Defects ◦ Syntax error  Initialization Defects  Data-Flow Defects  Data Defects  Module Interface Defects  Code Documentation Defects  External Hardware, Software Interfaces Defects Coding Defects
  • 41. Defects in test plan, test cases, test harnesses and test procedures. Two types Test Harness Defects: ◦ Auxiliary code developed to test a software. ◦ Defect in test harness Test Case Design and Test Procedure Defects 3.6 Testing Defects
  • 43.  Functional Description:  Input and output value – +ve or -ve  Precondition : no_of_coins >=0;  Postcondition : no_of_dollars, no_of_cents >=0  Interface Description:  How to interact with user Requirement Defects
  • 45.  Control,logic and Sequencing Defect:  While loop  Algorithmic and processing Defects:  Input valid /invalid  Data Defect:  1,5,10,25,25,100  External interface description defect:  Message box Design Defect
  • 47.  Control, logic and sequence:  Loop condition  Algorithmic and processing:  Input valid/invalid  Data flow:  total_coin_value not initialized  Data:  Error in array initialization  External interface:  Scanf statement is wrong  Code documentation:  Comment information missing Coding Defect
  • 48. Repository development steps 1. Defect classification 2. Collecting defect data from projects 3. Designing forms and templates to collect data 4. Recording each defect after testing 5. Recording the frequency of occurrence of each defect 6. Monitoring defects in each ongoing project Developer/Tester Support for developing a Defect Repository
  • 49.  1. Understanding customer requirements:  Natural language representation  Confirmed with the customer 2. Peer review:  Team work  Review stages:  1. Requirement specification review  2. Design review  3. Code review  4. Source code documentation  5. Use a coding checklist  6. Frequent releases  7. Test and testability Defect Prevention Strategies
  • 51. Test Case Design Strategies The Smart Tester: Develops effective test cases to maximize use of time and resources Impact of effective test cases: • Greater probability of detecting defects • Effective use of resources • Close adherence to testing schedule and budget • Delivery of high quality s/w product
  • 52. Test Case Design Strategies Two basic test case design strategies: • Black box test strategies • White box strategies
  • 53. Test Case Design Strategies
  • 54. Difference Black Box Testing •Opaque box testing •No knowledge about inner structure required •Tests simple module, member functions, subsystems or a complete s/w system. •Reveals requirement and specification defects White Box Testing •Transparent box testing •Knowledge about inner structure required •Tests statements, true/false branches.. •Reveals design, control, logic and sequence defects, initialization defects and data flow defects
  • 55. 1. Random Testing 2. Requirements Based Testing 3. Boundary Value Analysis 4. Equivalence Class Partitioning 5. State Based Testing 6. Cause Effect Graphing 7. Compatibility Testing 8. User Documentation Testing 9. Domain Testing Black Box Approaches
  • 56. • Inputs selected randomly from the input domain • Saves time and effort • Little chance of producing effective test cases • Example •Input Domain: all positive numbers between 1 and 100 •Random inputs: 55, 24, 3 Random testing
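The random-testing example above (inputs drawn from the domain 1 to 100) can be sketched as follows; the fixed seed is an assumption added so the run is repeatable (Principle 8: tests must be repeatable and reusable):

```python
import random

def random_test_inputs(low=1, high=100, n=3, seed=7):
    """Draw n test inputs at random from the input domain [low, high]."""
    rng = random.Random(seed)  # fixed seed keeps the selection repeatable
    return [rng.randint(low, high) for _ in range(n)]

inputs = random_test_inputs()
# Every randomly chosen input must still lie inside the input domain.
assert all(1 <= x <= 100 for x in inputs)
```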
  • 57. • Validating the requirements given in SRS (Software Requirement Specification) with TRS (Test Requirement Specification) documented. • Done using RTM (Requirement Traceability Matrix) Requirement Based Testing
  • 58. Requirement Traceability Matrix Req.ID Description Priority H,M,L Test Condition Test Case Id Phase of Testing BR-01 Inserting the numbers 123 and turning it clockwise to lock H Use key 123 Lock_001 Unit Testing
  • 59. Test Result S.No Req-Id Priority H,M,L Test Case Id Total Test Cases Total Test Cases Passed Total Test Cases Failed % Pass No of Defects 1 BR-01 H Lock-01 1 1 0 100 1
  • 60.  Two major sources of defects. – Conditions – Boundaries  Example – Billing system that offers volume discount Boundary Value Analysis
  • 61. 8/29/2023 Defects occur in boundaries (When buying 9,10,11,20,21,29,30,31)
  • 62.  Reasons for defects. – Confusion with <= and < – Confusion with looping Boundary Value Analysis
  • 63. Test Case Design Value to be tested Why this value should be tested Expected value of the output 1 The beginning of the first slab $5.00 5 A value in the first slab $25.00 9 Just at the end of the first slab $45.00 10 End of the first slab $50.00 11 Just into the second slab $54.75 16 A value in the second slab $78.50 19 Just at the end of the second slab $92.75 20 End of the second slab $97.50
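A billing function consistent with the expected outputs in the table above can be sketched as follows; the per-unit rates ($5.00 per unit for the first slab of 10 units, $4.75 per unit thereafter) are inferred from the table values, not stated in the slides:

```python
def bill(quantity):
    """Volume-discount billing inferred from the slide's test-case table:
    $5.00 per unit up to 10 units, $4.75 per unit beyond 10 (assumed rates)."""
    if quantity <= 10:
        return 5.00 * quantity
    return 5.00 * 10 + 4.75 * (quantity - 10)

# Boundary value analysis: test just below, on, and just above the slab
# boundary at 10, exactly as the table prescribes.
expected = {1: 5.00, 5: 25.00, 9: 45.00, 10: 50.00,
            11: 54.75, 16: 78.50, 19: 92.75, 20: 97.50}
for quantity, amount in expected.items():
    assert abs(bill(quantity) - amount) < 1e-9
```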
  • 64.  Technique identifies a different set of inputs that produce many different output conditions  Set of input values that generate one single expected output is called a partition  When the behavior of the s/w is same for a set of values – Equivalence class or partition.  This testing involves – Identifying all partitions – Picking up one representative from each partition for testing Equivalence Partitioning
  • 65. Reduces the number of test cases. A defect in one value in a partition is extrapolated to all values in that partition. Redundancy of testing can be reduced. Advantages
  • 66. Base premium $0.50. Additional premium to be paid is based on the age Example LIC Age Group Additional Premium Under 35 $1.65 35-59 $2.87 60+ $6.00
  • 67. Below 35 Years of age (valid input) Between 35 and 59 years of age (valid input) Above 60 years of age (valid input) Negative age (Invalid input) Age as 0 (invalid input) Age as any three digit numbers (valid input) Equivalence partitions
  • 68. Test case Design S.no Equivalence partitions Type of input Test data Expected Result 1 Age below 35 Valid 26,12 Monthly premium 0.50+1.65 2 Age 35-59 Valid 37 0.50+2.87 3 Age above 60 Valid 65,90 0.50+6.0 4 Negative age Invalid -23 Warning message 5 Age as 0 Invalid 0 Warning message
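The partitions above can be exercised with one representative value per class, since a defect found for one value is extrapolated to the whole partition; a minimal sketch (treating any positive age, including three-digit ages, as valid, per the partition list on slide 67):

```python
def monthly_premium(age):
    """Base premium $0.50 plus the age-based additional premium
    from the slide's LIC table; returns None for invalid input."""
    if age <= 0:
        return None          # invalid partitions: negative age, age 0
    if age < 35:
        return 0.50 + 1.65   # under 35
    if age <= 59:
        return 0.50 + 2.87   # 35-59
    return 0.50 + 6.00       # 60+ (also covers valid three-digit ages)

# One representative per equivalence partition stands in for the class.
assert monthly_premium(26) == 0.50 + 1.65    # below 35 (valid)
assert monthly_premium(37) == 0.50 + 2.87    # 35-59 (valid)
assert monthly_premium(65) == 0.50 + 6.00    # above 60 (valid)
assert monthly_premium(100) == 0.50 + 6.00   # three-digit age (valid)
assert monthly_premium(-23) is None          # negative age (invalid)
assert monthly_premium(0) is None            # age 0 (invalid)
```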
  • 69. Used When Product under test is – compiler – Work flow model – Data flow model State Based or Graph Based Testing
  • 70. Rules for validating a number 1. A number can start with an optional sign. 2. The optional sign can be followed by any number of digits. 3. The digits can be optionally followed by a decimal point. 4. If there is a decimal point, then there should be 2 digits after the decimal point. 5. Any number should be terminated by a blank. Example – Validating a number
  • 71. State transition diagram 1 2 3 4 5 6 +/ - Digit Decimal point Digit Digit Blank Blank
  • 72. State transition table Current State Input Next State 1 Digit 2 1 + 2 1 - 2 2 Digit 2 2 Decimal point 3 2 Blank 6 3 Digit 4 4 Digit 5 5 Blank 6
  • 73. Test Case design • Start from the start state • Choose a path that leads to the next state • If an invalid input is encountered, generate an error condition • Repeat the process till the final state is reached
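The state transition table on slide 72 can be executed directly as a table-driven validator; a sketch, with state 6 taken as the accepting state per the diagram:

```python
# Transition table copied from the slide: (current state, input) -> next state.
TRANSITIONS = {
    (1, "digit"): 2, (1, "+"): 2, (1, "-"): 2,
    (2, "digit"): 2, (2, "."): 3, (2, " "): 6,
    (3, "digit"): 4,
    (4, "digit"): 5,
    (5, " "): 6,
}

def classify(ch):
    return "digit" if ch.isdigit() else ch

def is_valid_number(text):
    """Walk the state machine; any input with no transition is an error."""
    state = 1
    for ch in text:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:
            return False        # invalid input encountered
    return state == 6           # must end in the final (blank-terminated) state

assert is_valid_number("123 ")
assert is_valid_number("+12.34 ")
assert not is_valid_number("12.3 ")   # only one digit after the decimal point
assert not is_valid_number("+.5 ")    # decimal point must follow a digit
```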
  • 74. Cause and Effect Graphing • Test cases are derived by combining conditions. • Used to reveal inconsistencies in the specifications • Methodology adopted: • Specifications represented as a digital logic circuit (graph) • Graph converted to a decision table • Test cases developed using the decision table
  • 75. Steps Involved • Specifications decomposed into lower-level units • For each unit, causes and effects identified • A Boolean cause-and-effect graph created • Causes on the left-hand side and effects on the right-hand side • Relationships between causes represented using logical operators • Graph annotated with constraints • Graph converted to a decision table • Columns in the decision table transformed into test cases
  • 76. Example 1 2 3 AND ˄ Effect 3 occurs when 1 and 2 are present
  • 77. Example • Problem: Searching for a character in an existing string • Specification states: •1. Input the string length and the character to be searched. •2. If the length of the string is out of range, an error message occurs. •If the character appears in the string, its position is reported. •If the character is not in the string, a not-found message will occur.
  • 78. Example • Input Condition: • C1 :Length of the string, •integer •ranges from 1 to 80 •C2 :Character to be searched, • single •character • Output Condition: •E1 : Integer out of range •E2 : Position of the character •E3 : Character not found •.
  • 79. Example • Rules and Relationship: •If C1 and C2, then E2 •If C1 and not C2, then E3 •If not C1, then E1 • Output Condition: •E1 : Integer out of range •E2 : Position of the character •E3 : Character not found
  • 80. Cause and Effect Graph C1 C2 E2 ˄ E1 E3 ˄
  • 81. Decision Table • Existing String: abcde • Test Cases: T1, T2, T3 Inputs Length Character to be searched Output T1 5 C 3 T2 5 W Not Found T3 90 Integer out of range
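The decision-table rules (if C1 and C2 then E2; if C1 and not C2 then E3; if not C1 then E1) and test cases T1 to T3 can be sketched as a small program plus assertions; the 1-based position and the exact message strings are assumptions:

```python
def search_char(length, string, ch):
    """Decision-table rules from the slides:
    not C1 (length out of 1..80)   -> E1 "Integer out of range"
    C1 and C2 (character present)  -> E2 position of the character
    C1 and not C2                  -> E3 "Not Found" """
    if not (1 <= length <= 80):
        return "Integer out of range"
    pos = string.find(ch)
    return pos + 1 if pos != -1 else "Not Found"

# Test cases T1-T3 from the decision table (existing string "abcde").
assert search_char(5, "abcde", "c") == 3                       # T1 -> E2
assert search_char(5, "abcde", "w") == "Not Found"             # T2 -> E3
assert search_char(90, "abcde", "x") == "Integer out of range" # T3 -> E1
```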
  • 82. Advantages: • Allows thorough inspection of the specification • Any omissions, inaccuracies or inconsistencies can be easily detected
  • 83. Compatibility Testing: • Testing the infrastructure parameters. • Infrastructure Parameters •Processor •Architecture •Resource •Equipment •OS •Middleware •Backend • Permutations and combinations of the parameters should be tested • A compatibility matrix can be designed
  • 84. Server Application Server Web Server Client Browser Msoffice Mail Server Windows 2000 Advanced Server with SP4 Microsoft SQL Server 2000 with SP3a Windows 2000 Advanced Server with SP4 and .Net framework 1.1 IIS 5.0 Win 2k professional and Win 2k terminal server IE 6.0 And IE 5.5 SP2 Office 2K and Office XP Exchange 5.5 and 2K Using this table combinations are created to perform compatibility testing
  • 85. Common techniques: • Horizontal Combination: •Parameters are grouped together in a row •Product features are tested using each row • Intelligent Sampling: •Dependencies between parameters identified •Using the information an intelligent sample is made and tested
  • 86. Further Classification: • Backward Compatibility Testing: •Ensuring the current version remains compatible with older versions • Forward Compatibility Testing: •Ensuring provision for compatibility with later versions
  • 87. User Documentation Testing • Objective: •Check the content stated in the document •Check whether the product is explained correctly in the document • User Documents •Manuals •User Guide •Installation guide •Set Up Guide •Read me file •Software release notes •Online help
  • 88. Benefits • Highlights problems overlooked in reviews • Minimizes the possible defects reported by customers • Ensures that the product works as per the documents
  • 89. Domain Testing • Needs business domain knowledge • Business vertical testing •Testing the product in the business flow
  • 90. Example: Cash Withdrawal in ATM • User Action •Step 1: Go to the ATM •Step 2: Put the ATM card inside •Step 3: Enter the correct PIN number •Step 4: Choose cash withdrawal •Step 5: Enter the amount •Step 6: Take the cash •Step 7: Exit and retrieve the card • The domain tester should test whether these user actions perform well
  • 91. White Box Testing • Testing the program code • Also called clear box, open box or glass box testing • Testing includes •Program code •Code structure •Internal design flow
  • 92.  Testers need a framework  for deciding which structural elements to select as the focus of testing,  for choosing the appropriate test data, and  for deciding when the testing efforts are adequate enough to terminate the process with confidence that the software is working properly.  Such a framework exists in the form of test adequacy criteria. Test Adequacy Criteria
  • 93. The application scope of adequacy criteria also includes: (i) helping testers to select properties of a program to focus on during test; (ii) helping testers to select a test data set for a program based on the selected properties; (iii) supporting testers with the development of quantitative objectives for testing; (iv) indicating testers whether or not testing can be stopped for that program.
  • 94. Program Based Adequacy criteria.  Focus is on structural properties  Logic and control structure, data flow, program text Specification based adequacy criteria.  Focus is on specification Classification
  • 95.  Static Testing:  Desk Checking  Code Walk Through  Code Inspection  Structural Testing  Unit/Code functional testing  Code Coverage Statement coverage Path coverage Condition Coverage Function Coverage  Code Complexity Cyclomatic Complexity Types of White Box Testing
  • 96.  Static Testing:  Tests the source code  Done by humans to ensure •Code works according to the functional requirement •Code is written in accordance with the design •No functionality in the code has been missed out •Code handles errors properly  Static testing by Humans  Reading the program code to detect errors  Advantages Sometimes humans can find errors that a computer cannot Multiple humans give multiple perspectives, so more problems are identified Code can be compared with the design Computer resources can be saved Static Testing
  • 97.  Static testing by human:  Desk Checking  Code walk through  Code review  Code inspection Static Testing
  • 98.  Verifying a portion of code for correctness  Done by comparing the code with the design and specifications  Done by the programmer before compilation and execution  Advantages  Enables the programmer to understand his/her own code well  Little scheduling and logistics overhead (done by an individual) • Disadvantages  Programmer may be tunnel visioned  Person-dependent method Desk Checking
  • 99.  Set of people look at the program code and raise questions  Author gives justification Code Walk through
  • 100.  Fagan Inspection  Detects faults, violations and other side effects  Takes place after desk checking and walk through  An inspection meeting is arranged.  Members of the meeting • Author of the code • Moderator who formally runs the inspection • Inspectors who provide review comments • A scribe who takes notes  Defects are classified in two dimensions • Minor/major • Systematic/mis-execution Formal Inspection
  • 101.  Run by the computer  Code, code structure, internal design are tested  Classification • Unit/code functional testing • Code coverage testing • Code complexity testing Structural testing
  • 102.  Quick Check done by the developer  Methods • Obvious testing (Known I/p and o/p) • Debug version ( intermediate statements) Unit/code functional testing
  • 103.  Test cases are designed and executed  The percentage of code covered is obtained  Tool • Instrumentation of code • Rebuilds the product by linking it with a set of libraries provided by the tool vendor • Monitors and keeps an audit of which portions of code are covered  Types of coverage • Statement coverage • Path coverage • Function coverage Code coverage Testing
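The instrumentation idea described above can be sketched with Python's standard `sys.settrace` hook; the `classify` function and the line-recording tracer are illustrative, not the API of any real coverage tool.

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record each source line the interpreter executes ("instrumentation"),
    # so we can later audit which portions of the code were covered.
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)            # only the "non-negative" branch runs
sys.settrace(None)

print(len(executed) > 0)  # True: some lines were recorded as covered
```

A real coverage tool does the same bookkeeping at build time by linking in the vendor's libraries, rather than tracing at run time.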
  • 104.  Program Construct • Sequential Control flow • Two way decision statement (if then else) • Multi way decision statements (switch) • Loops (while, do while, repeat until and for) Statement Coverage
  • 105.  Sequential statements • Test cases designed should run from top to bottom • If exceptions are encountered, the test case does not cover the entire code  Two way decision statements (if then else) • Separate test cases for the if, then and else parts should be written  Multi way decision statements (switch) • Multiple test cases for all the possible cases should be written  Loops (while, do while, repeat until and for) • Test cases for boundary values • Test cases for skipping the loop • Test cases for normal loop operation Statement Coverage
  • 106.  Formula Statement coverage = ((Total statements exercised) / (Total no of executable statements in program)) *100 Statement Coverage
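The formula above can be expressed as a small helper; the function name and the counts are illustrative, not from any particular tool.

```python
def statement_coverage(executed: int, total: int) -> float:
    """Statement coverage = (statements exercised / total executable statements) * 100."""
    if total == 0:
        raise ValueError("program has no executable statements")
    return (executed / total) * 100

# A hypothetical run that exercised 45 of 60 executable statements:
print(statement_coverage(45, 60))  # 75.0
```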
  • 107.  The program is split into a number of distinct paths  Example: Date Validation Routine Flow Chart in notes Path coverage
  • 108.  Testing Boolean Expression  Test Cases written for both true path and false path Condition Coverage = ((Total decisions exercised/ Total no of decisions in program)) *100 Condition coverage
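As an illustration of testing a Boolean expression for both outcomes, consider a hypothetical leap-year routine (the routine itself is an example, not from the slides); test cases are chosen so each condition is exercised both ways.

```python
def is_leap_year(y):
    # Three conditions combined into one decision.
    return (y % 4 == 0 and y % 100 != 0) or (y % 400 == 0)

# Test cases covering the true path and the false path of each condition:
cases = {
    2024: True,   # divisible by 4, not by 100
    1900: False,  # divisible by 100, not by 400
    2000: True,   # divisible by 400
    2023: False,  # not divisible by 4
}
for year, expected in cases.items():
    assert is_leap_year(year) is expected
print("condition coverage cases passed")
```

With all four cases, every decision in the routine has been exercised, so by the slide's formula condition coverage is 100%.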
  • 109.  Used to identify the no of functions covered by test cases  Test Cases written to exercise different functions Function Coverage = ((Total functions exercised/ Total no of functions in program)) *100 Function coverage
  • 110.  Complexity of a program – determines the test bound  Measured by cyclomatic complexity  Measured by converting the flow chart into a flow graph  Steps involved – Identify all predicates (decisions) and paths – Complex decision paths are broken into simple ones – All sequential statements are combined into a single node – When sequential statements are followed by a predicate check, a predicate node is created; edges emanate from the nodes – Ensure all edges terminate at a node Code Complexity Testing
  • 112.  Cyclomatic complexity = E – N + 2 (N – nodes, E – edges)  Example: cyclomatic complexity = 4 – 4 + 2 = 2  Interpretation: 1–10 well written code; 10–20 moderate, medium testability; 20–40 very complex, low testability; > 40 not testable Code Complexity Testing
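The formula on this slide can be computed directly from a flow graph's edge and node lists; the four-node graph below is a hypothetical one matching the slide's example numbers.

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph."""
    return len(edges) - len(nodes) + 2

# A flow graph with 4 nodes and 4 edges, as in the slide's example:
nodes = ["n1", "n2", "n3", "n4"]
edges = [("n1", "n2"), ("n1", "n3"), ("n2", "n4"), ("n3", "n4")]
print(cyclomatic_complexity(edges, nodes))  # 2 -> well-written code (1-10 band)
```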
  • 113.  Testers are often faced with the decision of which criterion to apply to a given item under test given the nature of the item and the constraints of the test environment (time, costs, resources).  Testers can use the axioms to ◦ Recognize both strong and weak adequacy criteria; a tester may decide to use a weak criterion, but should be aware of its weakness with respect to the properties described by the axioms; . Evaluating Test Adequacy Criteria
  • 114.  Applicability Property  Non-exhaustive applicability Property  Monotonicity Property  Inadequate Empty Set  Antiextensionality Property  General Multiple Change Property  Antidecomposition Property  Anticomposition Property  Complexity Property  Statement Coverage Property Properties
  • 116. Levels of testing include the different methodologies that can be used while conducting Software Testing. They are: ◦ Unit Test ◦ Integration Test ◦ System Test ◦ Acceptance Test Levels of Testing
  • 117. Unit – Small testable software component. Example: function, procedure, class, object Goal: ensure each individual s/w unit functions according to its specification Tasks involved  Unit Test Planning Design Test Cases Define the relationship between the tests Prepare auxiliary code for testing Unit Testing
  • 118. Phase I: Describe Unit Test Approach and Risks Identify test risks Describe the techniques to be used for designing test cases Describe the techniques to be used for data validation and recording of test results Describe the requirements for test harnesses Identify the termination conditions Estimate the resources needed for testing Unit Testing Planning Phases
  • 119. Phase II: Identify the unit features to be tested Identify the input/output characteristics associated with each unit Phase III: Add levels of detail to the plan Refine the plan produced in Phases I and II Add new details to the plan Unit Testing Planning Phases
  • 120. Use black box and white box test design strategies to design test cases Designing the unit test
  • 121. Supporting code needed to carry out unit tests Contains drivers that call the target code and stubs that represent the modules it calls Test Harness Drivers Unit under test Stub1 Stub2 Call and pass parameter Result Call Ack Call Ack
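A minimal sketch of the driver/stub idea, assuming a hypothetical unit that depends on a not-yet-available tax service; the names (`compute_bill`, `tax_for`) are illustrative, and the stub is built with the standard `unittest.mock.Mock`.

```python
from unittest.mock import Mock

# Unit under test (illustrative): computes a bill via a tax service
# that does not exist yet, so the harness replaces it with a stub.
def compute_bill(amount, tax_service):
    return amount + tax_service.tax_for(amount)

def run_unit_test():
    """Driver: calls the unit under test and checks the result."""
    stub = Mock()                      # stub stands in for the called module
    stub.tax_for.return_value = 5.0    # canned answer, as a real stub would give
    result = compute_bill(100.0, stub)
    stub.tax_for.assert_called_once_with(100.0)  # the unit called the stub correctly
    return "pass" if result == 105.0 else "fail"

print(run_unit_test())  # pass
```

The driver plays the "call and pass parameter / result" role from the diagram, and the stub plays the "call / ack" role.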
  • 122. Use worksheet to record the result Running the unit test and recording the result Unit Test Worksheet Unit Name: Unit Identifier: Tester: Date Test Case Id Status(run/not run) Summary of result pass/fail
  • 123. Goals: Detect defects on the interface Ensure the system is complete and ready for system test Integration Strategies: Top down Bottom Up Bi directional Integration Test
  • 124. Start from top most component Navigate from top to bottom till all components are covered Top Down Integration Test C1 C2 C3 C4 C5 C6 C7 C8
  • 125. Top Down Integration Test Step Interface Tested 1 1-2 2 1-3 3 1-4 4 1-2-5 5 1-3-6 6 1-3-6 (3-7) 7 (1-2-5)-(1-3-6-(3-7)) 8 1-4-8 9 (1-2-5)-(1-3-6-(3-7))-(1-4-8)
  • 126. Start from bottom most component Navigate from bottom to top till all components are covered Bottom Up Integration Test C8 C5 C6 C7 C1 C2 C3 C4
  • 127. Bottom Up Integration Test Step Interface Tested 1 1-5 2 2-6, 3-6 3 2-6-(3-6) 4 4-7 5 1-5-8 6 2-6-(3-6)-8 7 4-7-8 8 (1-5-8)-(2-6-(3-6))-(4-7-8)
  • 128. Combination of both 1,2,3,4 and 5 are tested separately 6-2, 7-3-4 Bi Directional Integration Test C1 C6 C7 C8 C2 C3 C4 C5
  • 129. Using realistic user activities to test the product Two methods System Scenario Use-Case Scenario / Role based Scenario Scenario Testing
  • 130. System Scenario: Done with the help of system component Approaches Storyline Life Cycle/State Transition Deployment/Implementation stories from customer Business Verticals Battle Ground System Scenario Testing
  • 131.  Story line: Developing stories about various activities of the product that may be executed by the user.  Life cycle/state transition: Various transitions/modifications that happen to an object  Deployment/implementation stories from the customer  Business verticals: Visualizing the business situations  Battle ground: Try to break the system and verify that the product does not break System Scenario
  • 132.  Procedure that describes how a customer uses the system  Users are actors; product activity is system behaviour  Example: Withdrawing cash from a bank Use Case Scenario Actor Agent System Response Customer Bank Official Computer Cheque Query Response
  • 133.  People performing different roles in an organization test the product together at the same time.  Advantages: Cross boundaries and test beyond assigned areas Testing isn't for testers alone Eat your own dog food Fresh eyes have less bias Users of software are not all the same Testing doesn't wait till all documentation is done Testing isn't to conclude that the system works or does not work Defect Bash Elimination
  • 134.  Step 1: Choosing the frequency and duration of defect bash  Step 2: Selecting the right product build  Step 3: Communicating the objective of each defect bash to everyone.  Step 4: Setting up and monitoring the lab for defect bash  Step 5: Taking actions and fixing issues  Step 6: Optimizing the effort involved in defect bash Defect Bash – Steps Involved
  • 135.  Testing the complete integrated system  Evaluates system compliance with its specified requirements  Two methodologies Functional Testing Non Functional Testing System Testing
  • 136. Functional vs Non-Functional Testing
    Aspect | Functional Testing | Non-Functional Testing
    Involves | Product features and functionality | Quality factors
    Tests | Product behaviour | Behaviour and experience
    Result conclusion | Simple steps written to check expected result | Huge data collected and tested
    Knowledge required | Product and domain | Design, architecture, domain
    Testing phase | Unit, component, integration, system | System
  • 137.  Testing product's functionality and features.  Common techniques are Design/architecture verification Business vertical testing Deployment testing Beta testing Certification, standard and testing for compliance Functional System Testing
  • 138.  Design/architecture verification:  Business vertical testing: Simulation Replications • Deployment testing: Off-site deployment On-site deployment • Beta testing: receiving feedback from customer • Certification, standard and testing for compliance: Testing whether the product is certified by popular h/w, s/w OS, DB. Certificate Agencies provide certificate test suite Functional System Testing
  • 139. Standard: •Conforming whether the standards are properly implemented  Compliance •Testing for contractual, legal and statutory •Compliance to FDA : Food and Drug •508 accessibility guidelines: Physically challenged Functional System Testing
  • 140.  Non functional testing is achieved by • Setting up a configuration • Simulated environment • Real-time environment • Coming up with entry/exit criteria • Setting limits for parameters Non Functional System Testing Type of test: Scalability | Parameter: Maximum limit | Sample entry criteria: Scales up to one million records | Sample exit criteria: Scales up to 10 million records
  • 141. • Balancing key resources • Product's reaction to the four key resources: CPU, disk, memory and network  Verifying quality factors: reliability, scalability, etc. • Scalability Testing: • Maximum capacity of the product • E.g.: number of client machines that can log into the server simultaneously • Reliability Testing: • Product's ability to continuously perform its required function • E.g.: querying the database continuously for 48 hrs
  • 142. • Stress Testing: •Overloading the product and finding out its behaviour. • Interoperability Testing •Ensuring that two or more products can exchange information, use information and work properly together
  • 143.  Done by the customer  Test cases designed by the customer  Criteria classified as • Acceptance criteria for product acceptance: •Requirement comparison • Acceptance criteria for procedure acceptance •Procedure followed for delivery • Acceptance criteria for service level agreement •Contract signed between the customer and product organisation Acceptance Testing
  • 144.  Test cases used • End-to-end verification • Domain tests • User scenario tests • Basic sanity tests • New functionality  Executing acceptance tests: • Done by people having 90% business process knowledge and 10% technical knowledge • Appropriate training on the product and process is provided • Test cases and test data can be collected from the organization • Test results are properly recorded • If major defects are identified, the release is postponed
  • 145.  Factors governing performance – Throughput: Capacity of the system to handle multiple transactions – Response Time: Delay between the point of request and the first response – Latency: Delay caused by the application, OS, h/w, etc. Network latency = N1+N2+N3+N4 Product latency = A1+A2+A3 Actual response time = network latency + product latency Performance Testing
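The response-time formula above is just a sum of the component delays; the numbers below are illustrative, not measurements.

```python
# Hypothetical component latencies, in seconds.
network_latency = sum([0.12, 0.08, 0.10, 0.05])   # N1 + N2 + N3 + N4
product_latency = sum([0.30, 0.20, 0.15])          # A1 + A2 + A3

# Actual response time = network latency + product latency
actual_response_time = network_latency + product_latency
print(round(actual_response_time, 2))  # 1.0
```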
  • 146.  Factors governing performance – Tuning:Process of improving performance by setting parameter values. – Benchmarking: Comparing the performance of the product with standard product – Capacity Planning: Finding out the resource and configuration need Performance Testing
  • 147.  Steps involved – Collecting requirement – Writing test cases – Automating test cases – Executing test cases – Analysis the result – Tuning – Benchmarking – Capacity Planning (Recommending) Performance Testing - Methodology
  • 148.  Collecting requirements: Performance testing requirements can be collected by – Comparing the performance with previously released products – Comparing the performance with competitive products – Comparing the absolute numbers with the actual need – Deriving the performance numbers from architecture and design  Requirement classification – Generic requirements: Common across all products – Specific requirements: Specific to the product Performance Testing - Methodology
  • 149. Performance Testing - Methodology
    Transaction | Expected Response Time | Loading Pattern / Throughput | Machine Configuration
    ATM cash withdrawal | 2 sec | Up to 10,000 simultaneous accesses | Pentium IV, 512 MB RAM, broadband n/w
    ATM cash withdrawal | 4 sec | Up to 10,000 simultaneous accesses | Pentium IV, 512 MB RAM, dial-up n/w
  • 150.  Writing Test cases: A test case should have the following details – List of operations to be tested – Steps to execute those operations – List of parameter that impact performance and their values – Loading pattern – Resource and their configuration – Competitive product to be compared Performance Testing - Methodology
  • 151.  Automating performance test cases: Characteristics that lead to automation – The testing is repetitive – Need for effectiveness – Results of tests should be accurate – More factors to be considered – Resources and their configuration – Account information needs to be collected periodically Performance Testing - Methodology
  • 152.  Executing Performance test cases: Details to be recorded during execution – Start and end time of test case – Log and audit file – Utilization of resources – Configuration details – Response time, throughput and latency at regular intervals Performance Testing - Methodology
  • 153.  Analyzing the results: steps used in analysis – Calculating the mean: the mean of the performance test results is taken – Calculating standard deviations – Removing noise and re-plotting: out-of-range values in the graph are ignored and a smooth graph is produced – Differentiating whether the data is from cache or server: response time is lower when data is served from cache – Differentiating whether the data is using the available resources completely: garbage collection/defragmentation decreases the performance of the foreground process • The analysis report will narrow down the parameters Performance Testing - Methodology
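The mean, standard deviation, and noise-removal steps can be sketched with the standard `statistics` module; the response-time samples below are invented for illustration.

```python
import statistics

# Response-time samples (seconds) from a hypothetical performance run;
# the 9.8 reading is noise (e.g. a background GC pause).
samples = [1.1, 1.2, 1.0, 1.3, 1.1, 9.8, 1.2]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# Remove noise: drop samples more than 2 standard deviations from the
# mean, then recompute (re-plot) on the cleaned data.
cleaned = [s for s in samples if abs(s - mean) <= 2 * sd]
print(round(statistics.mean(cleaned), 2))  # 1.15 -- the outlier is gone
```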
  • 154.  Performance Tuning: – The narrow down parameters are tested repeatedly for different values – Two steps •Tuning the product parameter •Tuning the operating system and parameters Performance Testing - Methodology
  • 155.  Performance Benchmarking: Steps involved – Identifying the scenarios and configurations – Comparing it with the competitive product – Tuning the product performance – Publishing the result of performance benchmark Performance Testing - Methodology
  • 156.  Capacity Planning: – Using the performance result right configurations requirement are identified – Classified as •Minimum requirement: anything less than this configuration the product will not work •Typical requirement: The product works fine in this configuration •Special configuration: future requirement Performance Testing - Methodology
  • 157.  Two types of Tools: – Functional performance tools •Winrunner from mercury •QA partner from compuware •Silktest from segue – Load testing tools •Load runner from mercury •QA load compuware •Silk performer from segue Performance Testing - Tools
  • 158.  Ensures all changes made work as designed  Ensures the changes do not break something that is already working  Selective retesting: – Reuse of existing test cases to test the impacted area. • Build: – The source code of the complete product and the fixed defects, along with existing features, get consolidated into a build. • Types – Regular regression testing: Done between test cycles – Final regression testing: Done at the end to validate the build. Regression Testing
  • 159.  Whenever changes are made, regression tests should be done  Steps – Performing an initial smoke or sanity test – Understanding the criteria for selecting test cases – Classifying the test cases into different priorities – A methodology for selecting the test cases – Resetting the test cases for test execution – Concluding the results Regression Testing
  • 160.  Performing an initial “smoke” or sanity test: Smoke test: Verifying the basic functionality of the product Design a smoke test suite (test cases that ensure the basic functionality of the product works) Ensure the suite runs every time changes are made Failures are reported to the developers Regression Testing
  • 161.  Criteria for selecting test cases: Test cases that have produced the maximum defects Test cases for those functions where changes are made Test cases that test end-to-end behaviour Test cases that test positive test conditions  Classifying test cases:  Three categories Priority 0: sanity test cases – check basic functionality Priority 1: test cases which use basic and normal setup Priority 2: test cases executed as a part of the testing cycle Regression Testing
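The selection and classification steps above can be sketched as a small filter over test-case records; the record fields and the selection rule are illustrative, not a prescribed methodology.

```python
# Hypothetical test-case records; "priority" follows the 0/1/2 classification.
test_cases = [
    {"id": "TC1", "priority": 0, "area": "login"},
    {"id": "TC2", "priority": 1, "area": "billing"},
    {"id": "TC3", "priority": 2, "area": "reports"},
    {"id": "TC4", "priority": 0, "area": "billing"},
]

def select_for_regression(cases, changed_areas, max_priority=1):
    """Pick all sanity tests (priority 0) plus tests touching changed areas."""
    return [c["id"] for c in cases
            if c["priority"] == 0
            or (c["priority"] <= max_priority and c["area"] in changed_areas)]

# Only the billing module changed in this build:
print(select_for_regression(test_cases, {"billing"}))  # ['TC1', 'TC2', 'TC4']
```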
  • 162.  Resetting the test cases Reset procedure: A method to indicate that some of the test cases are selected for regression. A flag in the TCDB marks a test case as “not run”, so that it is executed afresh for the regression cycle.  Concluding the results: Comparison with previous results Regression Testing
  • 163. Regression Test Result | Previous Result | Conclusion | Remarks
    Fail | Pass | Fail | Need to improve the test process
    Pass | Fail | Pass | Defect fix works properly
    Fail | Fail | Fail | Defect fix not working
    Pass | Pass | Pass | Defect fix done properly with no side effects
  • 164.  Language: A language is a tool used for communication; in internationalization it refers mostly to human languages, not computer languages. ASCII: American Standard Code for Information Interchange. It is a one-byte representation for characters used in computers. Double-Byte Character Set: English is represented in a single byte, whereas Chinese and Japanese characters are represented in two bytes, called DBCS Internationalization Testing
  • 165.  Unicode: Provides a unique number for every character, irrespective of platform Locale: A subset of a language that defines its behaviour in different countries. (e.g.) English is common to the US and India, but currency and punctuation differ. Internationalization (I18n): Covers all activities needed to make the s/w available for the international market. There are 18 characters between the I and the n. The testing done for internationalization is I18n testing. Internationalization Testing
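The Unicode and DBCS definitions above can be demonstrated directly in Python (Shift-JIS is used here as one concrete double-byte character set for Japanese):

```python
# Unicode assigns every character a unique number, independent of platform:
print(ord("A"))    # 65
print(ord("あ"))   # 12354 (Japanese hiragana "a", U+3042)

# In a double-byte character set such as Shift-JIS, an ASCII character
# takes one byte while a Japanese character takes two:
print(len("A".encode("shift_jis")))   # 1
print(len("日".encode("shift_jis")))  # 2
```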
  • 166.  Localization(L10n): Converting into a target language. Globalization(G11n): Means Internationalization Internationalization Testing
  • 167.  Done by Test engineers, developers and local team Test Activities done by test engineers: Enabling Testing Locale Testing I18n Testing and validation Fake language testing Language testing Localization testing Test activities done by developers: Enable the code Internationalization Testing Phases
  • 168.  Testing activities done by localization team Message Consolidation Message Translation Include message into the product Internationalization Testing Phases
  • 169. Internationalization Testing Phases Enable the Code 1. Enable testing 2. Locale Testing 3. I18n testing and validation 4. Fake language testing. 5. Language Testing Release English version Message consolidation Message Translation Include message into the product 6. Localization Testing Release International version
  • 170. Internationalization Testing Phases Enabling Testing Unit Testing I18n Validation Locale Testing Component Testing Fake Language Testing Integration Testing Localization Testing System Testing Acceptance Testing
  • 171.  Ensures that the source code allows internationalization Checklist for enabling testing Check the code for API/function calls Check the code for hard-coded date, currency formats and ASCII codes Check the code to see that no computation is done using date variables Ensure that dialog boxes and screens have 0.5 times more space for expansion Ensure region/culture-based messages and slang are not in the code Enabling Testing
  • 172.  Using system settings or environment variables for different locales and testing the functionality, number, date, time and currency formats is called locale testing. Checklist for locale testing All features applicable to I18n are tested with different locales Hard keys, function keys and help screens are tested with different locales Ensure date, time, currency and number formats are in line with the different locales Locale Testing
  • 173.  Done with the objectives S/w is tested for ASCII, DBCS and European characters S/w handles string operations and sorting as per the character set selected Checklist The functionality is the same in all locales Sorting and sequencing are done as per the conventions of the language Non-ASCII characters are retained as they were entered Cut, copy and paste of non-ASCII characters retain their style Internationalization Validation
  • 174.  A fake-language translator is used to catch translation and localization issues early. S/w is tested for ASCII, DBCS and European characters Checklist Ensure s/w is tested for at least one European single-byte fake language (Pig Latin) Ensure s/w is tested for at least one double-byte fake language (Wide Roman) Ensure the sizes of screens, dialog boxes and pop-up menus are adequate for the fake language Fake Language Testing
  • 175.  Ensuring software created in English can work with platform and environment that are non English. Done for one European language, one double byte representation language and one default language (English) Language Compatibility Testing Server English E G J German E G J
  • 176.  The messages are consolidated from the release-ready s/w and sent to multilingual experts for translation  After translation, the s/w functionality is tested to ensure that it works as expected in the localized environment Checklist  Messages, documents, pictures and menus reflect the conventions of the country, locale and language  Sorting and case conversions are right as per the language conventions • China, Cyberia in English • Cyberia, China in Spanish Localization Testing
  • 177. Checklist Font sizes and hot keys work correctly in the translated messages, documents and screens Filtering and searching capabilities work as per the language Addresses, phone numbers, numbers and postal codes are as per the conventions of the target language Localization Testing
  • 178. UNIT IV People and Organizational issues in Testing
  • 179. Classified into Product Organization Service Organization Single site Multi site Organization
  • 180. Organization Structures Single Product Company CTO CEO Product Business Unit Support Product Management Product Delivery
  • 181. Testing Team Structures Single Product Company Teams/Engineers Project Manager Teams/Engineers Teams/Engineers Develop Test Test Develop
  • 182. Exploits the rear loading nature of testing activities Enables engineers to gain experience in all aspects of life cycle Some defects may be detected early Advantages
  • 183. Accountability for testing and quality reduces Effectiveness of testing reduces Schedule pressures generally compromise testing Disadvantages
  • 184. Organisation Structures Multi Product Company Component Manager Project Manager Component Manager Component Manager Develop Test Test Develop
  • 185. How tightly the product is coupled with the technology Dependencies among the products Similarity among the customers of the products Based on these factors the test team can be organized as  A central team for the whole organization  A single test team for all products  Different test teams for each product  Different test teams for different phases of testing  A hybrid of all the above Factors for organizing Test Team
  • 186.  Test team directly reports to the CTO before design and development start Advantages  The product design and architecture are decided keeping test requirements in mind  The testing team can provide input in selecting the technology  The testing team can better understand the design and architecture Test Team as a part of CTO’s office
  • 187.  The product is tested by a single team Since the products are developed by different teams, the testing is reported at a different level Single Test Team for all products  Responsibility is given to the business unit The business unit organizes the developer and test teams Test team organized by product
  • 188.  A separate testing team is assigned for each phase Each test team reports to a different group Separate test teams for different phases of testing
    Type of Test | Reports to
    White box | Development team
    Black box | Testing team
    Integration | Organization-wide team
    System | Product management
    Performance | Benchmark group
    Acceptance | Product management
    Internationalization | Internationalization and localization team
    Regression | All test teams
  • 189.  Testing activity is outsourced to external companies Advantages  Testing is a very specialized function  Variety of tools available (need specialization)  Developer can not be an expert in testing Testing Service Organization
  • 190.  Testing: product organization vs service organization • Service organizations are at arm's length from the product organization • Service organizations may not provide the full breadth of coverage • Service organizations tend to be more homogeneous Difference
  • 191.  Roles and Responsibilities • Made up of a number of accounts • Accounts are managed by account managers • Each account is responsible for a customer Functions of the account manager • Acts as the single point of contact between the customer and the test organization • Develops rapport with the customer • Participates in all communication between the customer and the organization • Acts as a proxy for the customer Test Service Organization
  • 192. Test Service Organization CEO Account Manager Customer contact Near Site Team Remote team Test Lead Test Engineer
  • 193. Challenges and Issues • The testing team is an outsider • Cannot develop domain expertise for all the customers • Privacy and physical isolation are not always possible Success factors • Establishing communication with the developers • Bringing customer expectations into testing • Using appropriate tools and environments • Periodic skill upgrades Test Service Organization
  • 194. A plan is a document that provides a framework for achieving a goal. The test plan is provided with •Overall test objective •What to test •Who will test •How to test •When to test •When to stop Test Planning
  • 195. Test Plan Hierarchy SQA Master Plan Review Plan Unit Test Plan Integration Plan System Test Plan Acceptance Plan
  • 196. Test Plan Components Test Plan Components 1. Test Plan Identifier 2. Introduction 3. Items to be tested 4. Features to be tested 5. Approach 6. Pass/Fail Criteria 7. Suspension and resumption criteria 8. Test Deliverable 9. Testing Tasks 10. Testing Environment 11. Responsibilities 12. Staffing and training records 13. Schedule 14. Risks and contingencies 15. Testing cost 16. Approvals
  • 197. Test Design Specification Test Case Specification Test Procedure Specification Test Plan Attachment
  • 198. Test Design Specification Identifier Features to be tested Approach refinement Test Case Identifier Pass/Fail criteria Test Design Specification
  • 199. Test case Specification Identifier Test items Input Specification Output Specification Special environment needs Special procedural requirement Inter case Dependencies Test Case Specification
  • 200. Test procedure Specification Identifier Purpose Specific requirement Procedure steps Test Procedure Specification
  • 201. Test item transmittal report – to locate items Contains information –Version / revision number of the items –Location of the items –Persons responsible for the items –Status of the items –approvals Locating Test Items
  • 202. Test plan management planning aspects –Choice of standards –Test Infrastructure Management –Test People Management –Integration with product release Test Management
  • 203. External standards and internal standards Internal standards – Naming and storage conventions for test artifacts – Documentation standards » Appropriate header comments » Sufficient in-line comments » Up-to-date change history information – Test coding standards (e.g., initialization, reusability) – Test reporting standards Choice of Standards
  • 204. Three elements –A test case database (TCDB) –Defect Repository –SCM (Software Configuration Management) Test Infrastructure Management
  • 205. TCDB Test Infrastructure Management
    Entity | Purpose | Attributes
    Test case | Records all the static information about the tests | Test case id, test case name, test case owner, associated files
    Test case run history | Gives the history of when a test ran | Test case id, run date, time taken, run status
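A minimal sketch of the two TCDB entities described above; the field names follow the slide, while the class names and sample values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Static information about a test (TCDB 'Test case' entity)."""
    test_case_id: str
    name: str
    owner: str
    associated_files: list = field(default_factory=list)

@dataclass
class TestCaseRunHistory:
    """History of when a test ran (TCDB 'Test case run history' entity)."""
    test_case_id: str
    run_date: str
    time_taken_s: float
    run_status: str   # e.g. "pass" / "fail"

tc = TestCase("TC-001", "Login with valid user", "alice")
run = TestCaseRunHistory("TC-001", "2023-08-29", 2.4, "pass")
print(run.run_status)  # pass
```

In practice the TCDB is a real database and the run-history rows are keyed to their test case by `test_case_id`, as here.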
  • 206. Defect repository Test Infrastructure Management
    Entity | Purpose | Attributes
    Defect details | Records all the static information about the defects | Defect id, defect priority, defect description, affected products, any relevant information
    Defect test details | Provides details of test cases for a given defect | Defect id, test case id
  • 207. SCM –Keeps track of change control and version control of all the files/entities that make up a software product Test Infrastructure Management
  • 208. Done by project manager –Hiring –Motivating –Retaining Test People Management Test schedule and product release should be integrated Development and testing team should involve in release Integrating with product release
  • 209. Process of arriving a template after combining all the information in a test plan Includes –Test case specification –Update of traceability matrix –Identifying proper candidate for automation –Developing and base lining test cases –Executing test cases –Collecting and analyzing metrics –Preparing test summary reports –Recommending product release criteria Test Process
  • 210. Documents to be prepared during and after execution of the tests –Test log –Test incident report –Test summary report Reporting Test Result
  • 211. Diary of the events that take place during testing – Test log identifier – Description – Activity and event entries » Execution description » Procedure results » Environmental information » Anomalous events » Incident report identifier Test Log
  • 212. Problem report –Test incident report identifier –Summary –Incident description –Impact Test Incident report
  • 213. Complete report –Test summary report identifier –Variances –Comprehensive assessment –Summary of result –Evaluation –Summary of activities –Approvals Test Summary report
  • 215.  This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases.  Unit testing is performed by the respective developers on the individual units of source code assigned to them. The developers use test data that is separate from the test data of the quality assurance team.  The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality. 6.1 Unit Test
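As an illustration of isolating one part of the program, a developer might unit-test a single function with Python's `unittest`; the `discount` function under test and its test data are hypothetical:

```python
import unittest

def discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    """Unit tests exercising discount() in isolation, with the developer's own test data."""

    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# Run the tests programmatically (equivalent to: python -m unittest)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner().run(suite)
```

Each test method checks one behaviour of the unit against its expected requirement, including the error path, without involving any other module.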
  • 216. Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing. There is a limit to the number of scenarios and the amount of test data that a developer can use to verify the source code. After exhausting all options, there is no choice but to stop unit testing and merge the code segment with other units. 6.2 Limitation of unit testing
  • 217.  The testing of combined parts of an application to determine whether they function correctly together is Integration testing. There are two methods of doing Integration Testing: Bottom-up Integration testing and Top-Down Integration testing.  The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter in customers' computers, systems and networks. 6.3 Integration Test
  • 218. Integration Testing Methods
1. Bottom-up integration – Testing begins with unit testing, followed by tests of progressively higher-level combinations of units, called modules or builds.
2. Top-down integration – The highest-level modules are tested first, and progressively lower-level modules are tested after that.
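In top-down integration, a high-level module is tested first while its not-yet-integrated lower-level modules are replaced by stubs. A minimal sketch using `unittest.mock` as the stub (the `place_order` module and its collaborator are hypothetical):

```python
from unittest import mock

# High-level module under test: depends on a lower-level price-lookup module
def place_order(item, qty, price_lookup):
    """Computes an order total by calling the lower-level price_lookup module."""
    unit_price = price_lookup(item)
    return unit_price * qty

# Top-down: the real price-lookup module is not integrated yet,
# so a stub with canned behaviour stands in for it.
price_stub = mock.Mock(return_value=5.0)

total = place_order("widget", 3, price_stub)

print(total)                                 # 15.0
price_stub.assert_called_once_with("widget") # verify the interaction, not just the result
```

When the real lower-level module is ready, it replaces the stub and the same test scenario is re-run, which is how the top-down method works its way down the module hierarchy.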
  • 219.  This is the next level in the testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This type of testing is performed by a specialized testing team.  System testing is so important because of the following reasons:  System Testing is the first step in the Software Development Life Cycle, where the application is tested as a whole.  The application is tested thoroughly to verify that it meets the functional and technical specifications. 6.4 System Testing
  • 220.  Functional testing  Performance testing  Stress testing  Configuration testing  Security testing  Recovery testing 6.4.1 Types of System Testing
  • 221.  This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.  More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.  By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system. 6.5 Acceptance Testing
  • 222.  This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing, when combined, are known as alpha testing. During this phase, the following will be tested in the application:  Spelling mistakes  Broken links  Unclear directions  The application will be tested on machines with the lowest specification to test loading times and any latency problems. 6.6 Alpha testing
  • 223. Beta testing is performed after alpha testing has been successfully completed. In beta testing, a sample of the intended audience tests the application; beta testing is also known as pre-release testing. In this phase, users will install and run the application and send their feedback to the project team, reporting typographical errors, confusing application flow, and even crashes. After getting the feedback, the project team can fix the problems before releasing the software to the actual users. 6.7 Beta Testing
  • 224. The development, documentation, and institutionalization of goals and related policies is important to an organization. The goals/policies may be business-related, technical, or political in nature. They are the basis for decision making; therefore setting goals and policies requires the participation and support of upper management. CHAPTER-7 Testing Goals, Policies, Plans and Documentation
  • 225.  1.Business goal: to increase market share 10% in the next 2 years in the area of financial software.  2. Technical goal: to reduce defects by 2% per year over the next 3 years.  3. Business/technical goal: to reduce hotline calls by 5% over the next 2 years.  4. Political goal: to increase the number of women and minorities in high management positions by 15% in the next 3 years. 7.1 Testing Goals
  • 226.  A plan is a document that provides a framework or approach for achieving a set of goals.  Test plans for software projects are very complex and detailed documents.  The planner usually includes the following essential high-level items.  Overall test objectives.  What to test (scope of the tests).  Who will test  How to test  When to test  When to stop testing 7.2 Test Planning
  • 227. 7.3 Test Plan Components
  • 229.  Test Item Transmittal Report is not a component of the test plan, but is necessary to locate and track the items that are submitted for test. Each Test Item Transmittal Report has a unique identifier.  It should contain the following information for each item that is tracked.  Version/revision number of the item;  Location of the item;  Persons responsible for the item (e.g., the developer);  References to item documentation and the test plan it is related to;  Status of the item; 7.4 Locating Test Items
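A Test Item Transmittal Report record with the fields above might be sketched as follows (the class and sample values are illustrative; field comments follow the list on the slide):

```python
from dataclasses import dataclass

@dataclass
class TestItemTransmittalReport:
    """Tracks one item submitted for test."""
    report_id: str      # unique identifier of this transmittal report
    version: str        # version/revision number of the item
    location: str       # where the item is stored
    responsible: str    # person responsible for the item (e.g., the developer)
    references: str     # item documentation and the related test plan
    status: str         # current status of the item

r = TestItemTransmittalReport(
    report_id="TIR-7",
    version="v1.3",
    location="builds/login-module",
    responsible="developer A",
    references="Test plan TP-2, section 4",
    status="ready for test",
)
```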
  • 230.  The test plan and its attachments are test-related documents that are prepared prior to test execution. There are additional documents related to testing that are prepared during and after execution of the tests.  The test log should be prepared by the person executing the tests. It is a diary of the events that take place during the test.  Test Log Identifier Each test log should have a unique identifier. 7.5 Reporting Test Results
  • 231. 7.6 The Role of the Three Critical Groups in Testing Planning and Test Policy Development
  • 232.  Working with management to develop testing and debugging policies and goals.  Participating in the teams that oversee policy compliance and change management.  Familiarizing themselves with the approved set of testing/debugging goals and policies, keeping up to date with revisions, and making suggestions for changes when appropriate.  When developing test plans, setting testing goals for each project at each level of test that reflect organizational testing goals and policies.  Carrying out testing activities that are in compliance with organizational policies. 7.7 Tasks of Developers/Testers