R 1111-RIA-GUI-Test Plan 1.0
1. Test Plan Identifier:
(Program)-(Release No)-Test Plan-(Level of Test)
RIA-R1-Test Plan-GUI
1.A Revision History:

| Version | Date     | Change Description         | Author |
|---------|----------|----------------------------|--------|
| 1.0     | March 05 | Initial Creation           | Q      |
| 1.1     | Mar 06   | Changes required for Risks |        |
2.0 Introduction:
2.1 A high-level description of what this level of test is covering must be provided.
QA will validate the field type, field caption, business rules, and field properties associated with the Renters Insurance project.
2.2 The description must identify the system under test (product, system, or software being tested).
QA will validate the Renters Insurance project's Agent module, from the Customer functionality through the Submit functionality. This project involves three sub-modules: 1. Customer, 2. Quotation, 3. Submit.
2.3 The description must identify the high-level objectives, goals, and purpose of the testing effort.
The objective of testing the Renters Insurance Agent module is to find the defects associated with the field level; that is, before testing system functionality, in the GUI we will validate all the business rules associated with the Agent module.
The purpose is to perform functional, regression, smoke, and ad hoc testing so that error (bug)-free code is sent to the next level.
2.4 If a high-level document exists, a link to the document must be provided.
Link to the Test Strategy:
3. Testing scope:
3.1 In scope (Features to be tested)
3.1.1 A description of the items to be tested must be provided. Depending on the level of test, this may include specific components, modules, interfaces, functional requirements, and non-functional requirements.
3.2 Out of scope (Features not to be tested)
3.2.1 A description of anything provided by the project but specifically excluded from the testing effort must be provided.
1. Navigation from the Agent login to the Customer screen.
2. Validations associated with third-party databases (dependencies), such as:
   1. Customer database
   2. Loss history database
   3. Billing database
3. The other-products (Marketing) screen that is displayed after successful submission of the policy.
4. Test Approach:
4.1 For each type of test, create the type of testing and the objectives of the testing.
4.1.1 (Type of Test): For each type of test, a description of the test, the subject of the test, and the reason for the test must be provided.
4.1.2 For each type of test, the objectives for the test must be identified.
4.1.3 For each objective identified, the approach that will be used to meet it must be described.
Functional Testing:
1. We will validate all functional requirements.
2. The reason for the test is to find the defects associated with the functional requirements.
3. The objective of the test is to find the defects associated with field names, field types, and the associated business rules.
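The field-level checks listed above (field name, field type, business rules) lend themselves to a data-driven sketch. The field names, types, and rules below are illustrative assumptions, not the actual Renters Insurance specification:

```python
# Data-driven field validation sketch. The expected field
# definitions (names, types, rules) are illustrative
# assumptions, not the real Renters Insurance spec.

EXPECTED_FIELDS = {
    # field name: (type, caption, business rule)
    "zip_code": (str, "ZIP Code", lambda v: len(v) == 5 and v.isdigit()),
    "coverage": (int, "Coverage", lambda v: 10_000 <= v <= 100_000),
}

def validate_field(name, value):
    """Return a list of defects found for one field value."""
    if name not in EXPECTED_FIELDS:
        return [f"{name}: unknown field"]
    ftype, _caption, rule = EXPECTED_FIELDS[name]
    if not isinstance(value, ftype):
        return [f"{name}: expected {ftype.__name__}, got {type(value).__name__}"]
    if not rule(value):
        return [f"{name}: business rule violated for value {value!r}"]
    return []

print(validate_field("zip_code", "12345"))  # []
print(validate_field("coverage", 5_000))    # business rule violation
```

Keeping the expected definitions in one table makes it easy to raise one defect per mismatched field property, as the objective above describes.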
Smoke Testing:
The purpose of smoke testing is to find the defects associated with environment stability and build stability. In this project we will perform smoke testing every morning and evening to check stability.

Ad hoc Testing:
To find defects left uncovered during normal testing, we will do ad hoc testing.

Regression Testing:
To find new/injected defects associated with build changes or requirement changes, we will do regression testing.
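The twice-daily smoke run described above only has to answer one question: is the environment and build stable enough to test? A minimal sketch, where `app_status()` is a stand-in for a real health probe (not an actual project API):

```python
# Minimal smoke-check sketch: verifies environment/build
# stability only, not functionality. The checks and the
# app_status() stub are illustrative placeholders.

def app_status():
    """Stub for a real health probe (e.g., an HTTP health-check call)."""
    return {"build": "R1.0.42", "db": "up", "login_page": "up"}

def smoke_check(status):
    """Return (passed, failures) for the basic stability checks."""
    failures = [k for k in ("db", "login_page") if status.get(k) != "up"]
    if not status.get("build"):
        failures.append("build")
    return (not failures, failures)

ok, failures = smoke_check(app_status())
print("SMOKE PASS" if ok else f"SMOKE FAIL: {failures}")
```

A run like this can gate the rest of the day's execution: if it fails, the build is rejected before any functional, ad hoc, or regression testing starts.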
5. Test plan administration:
5.1 Defect Management Plan:
Responsibilities:
1. All defects should be raised in the defect tracking tool, QC.
2. All defects that come into pending validation should be retested immediately.
3. All defect reports should be sent to management.
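Responsibility 2 above (retest everything that reaches pending validation) can be expressed as a simple query over exported defect records. The record layout here is an assumption for illustration, not the actual QC schema:

```python
# Pick out defects awaiting retest from an exported defect list.
# The record layout is an assumption, not the real QC schema.

defects = [
    {"id": 101, "severity": "High",   "status": "Open"},
    {"id": 102, "severity": "Medium", "status": "Pending Validation"},
    {"id": 103, "severity": "Low",    "status": "Pending Validation"},
]

def retest_queue(defects):
    """Defects in pending validation, highest severity first."""
    order = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
    pending = [d for d in defects if d["status"] == "Pending Validation"]
    return sorted(pending, key=lambda d: order[d["severity"]])

print([d["id"] for d in retest_queue(defects)])  # [102, 103]
```

Sorting by severity keeps the "retest immediately" rule honest when several defects land in pending validation at once.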
5.2 Test Management Plan
5.3 Environment Management
5.4 Estimations
5.5 Status Report Management
Entry criteria for Test case design (when to start Test case design):
1. All requirements and design documents are analyzed by QA and all queries are resolved.
2. Required supporting documents and test data should be available.
3. The test case design template or tool (e.g., Excel) should be available.
Exit criteria for Test case design (how many test cases are enough):
1. Requirement–test case traceability.
2. Requirement breakdown.
3. Known trouble spots.
4. Risk-based analysis: determining the highest-risk portions of the product (impact of failure) can guide where more test coverage needs to be emphasized.
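Risk-based analysis of the kind described in item 4 is often reduced to an impact × likelihood score per area. The sub-module names come from this project, but the ratings below are invented for illustration:

```python
# Risk-based coverage ordering: score = impact * likelihood.
# Module names are from the project; the ratings are
# illustrative, not real project data.

areas = {
    "Submit":    {"impact": 5, "likelihood": 4},
    "Quotation": {"impact": 4, "likelihood": 3},
    "Customer":  {"impact": 3, "likelihood": 2},
}

def by_risk(areas):
    """Area names ordered by descending risk score."""
    return sorted(
        areas,
        key=lambda a: areas[a]["impact"] * areas[a]["likelihood"],
        reverse=True,
    )

print(by_risk(areas))  # ['Submit', 'Quotation', 'Customer']
```

The highest-scoring areas then receive proportionally more test cases during design.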
When to stop Test case design:
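One mechanical stopping signal is requirement–test case traceability (exit criterion 1 above): design is not done while any requirement is left uncovered. The IDs below are illustrative, not real project artifacts:

```python
# Requirement-to-test-case traceability check. The IDs are
# illustrative, not real project artifacts.

requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_cases = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-3"},
}

def uncovered(requirements, test_cases):
    """Requirements with no covering test case."""
    covered = set().union(*test_cases.values()) if test_cases else set()
    return sorted(requirements - covered)

print(uncovered(requirements, test_cases))  # ['REQ-2']
```

An empty result here, together with the risk-based coverage targets, is a defensible point at which to stop writing test cases.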
Entry criteria for Test case execution (when to start Test case execution):
1. The latest code base should be available in the QA environment.
2. Smoke testing should be successfully completed.
3. Reviewed test cases should be available.
4. Test executors should be available.
5. The test execution tool should be ready.
6. Automation scripts should be available for regression.
Exit criteria for Test case execution:
1. No high or critical defects.
2. Medium and low defects should be reviewed by the TL.
3. Regression testing should be successfully completed.
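The execution exit criteria above amount to a go/no-go gate. A minimal sketch, with assumed defect records rather than real QC data:

```python
# Exit-criteria gate for test execution: block exit while any
# open High/Critical defects remain or regression has not
# passed. Records are illustrative.

open_defects = [
    {"id": 201, "severity": "Medium", "status": "Open"},
    {"id": 202, "severity": "Low",    "status": "Open"},
]

def can_exit(open_defects, regression_passed=True):
    """True when no open High/Critical defects and regression passed."""
    blockers = [
        d for d in open_defects
        if d["status"] == "Open" and d["severity"] in ("High", "Critical")
    ]
    return regression_passed and not blockers

print(can_exit(open_defects))  # True
```

Note that criterion 2 (TL review of medium and low defects) remains a manual sign-off; the gate only automates the unambiguous parts.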
When to stop Test case execution:
7. Test Resource Needs:
1. Environment needs
2. Training needs
3. Tool needs
8. Testing Milestones:
1. Test plan
2. Test approach
3. Requirement analysis
4. Test case design
5. Test case review
6. Test case execution
7. Defects
8. Defect reports
9. Project closure documents