Full-day Tutorial
4/29/13 8:30AM

Fundamentals of Risk-based Testing
Presented by:
Dale Perry
Software Quality Engineering

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Dale Perry
Dale Perry has more than thirty-six years of experience in information technology as a
programmer/analyst, database administrator, project manager, development manager, tester, and test
manager. Dale’s project experience includes large-system development and conversions, distributed
systems, and both web-based and client/server applications. A professional instructor for more than
twenty years, he has presented at numerous industry conferences on development and testing. With
Software Quality Engineering for fifteen years, Dale has specialized in training and consulting on testing,
inspections and reviews, and other testing and quality-related topics.
©2013 SQE Training - STAR East 2013

1
Notice of Rights
Entire contents © 1986-2013 by SQE Training, unless otherwise noted on specific
items. All rights reserved. No material in this publication may be reproduced in any
form without the express written permission of SQE Training.
Home Office
SQE Training
330 Corporate Way, Suite 300
Orange Park, FL 32073 U.S.A.
(904) 278-0524
(904) 278-4380 fax
www.sqetraining.com
Notice of Liability
The information provided in this book is distributed on an “as is” basis, without
warranty. Neither the author nor SQE Training shall have any liability to any person or
entity with respect to any loss or damage caused or alleged to have been caused
directly or indirectly by the content provided in this course.

Formal definitions of testing:
IEEE Standard 829-2008
(A) An activity in which a system or component is executed under
specified conditions, the results are observed or recorded, and an
evaluation is made of some aspect of the system or component. (B)
To conduct an activity as in (A).
IEEE Standard 610.12-1990
Testing: The process of operating a system or component under
specified conditions, observing or recording the results, and making
an evaluation of some aspect of the system or component.

Testing every possible data value, every possible navigation path through
the code, and every possible combination of input values is an effectively
infinite task that can never be completed. Even if it were possible, it
would not be a good idea: many of the test cases would be redundant,
consume resources to create, delay time to market, and add nothing of
value.

Risk Factor
1. Ambiguous Improvement Targets
2. Artificial Maturity Levels
3. Canceled Projects
4. Corporate Politics
5. Cost Overruns
6. Creeping User Requirements
7. Crowded Office Conditions
8. Error-Prone Modules
9. Excessive Paperwork
10. Excessive Schedule Pressure
11. Excessive Time to Market
12. False Productivity Claims
13. Friction Between Clients and Software Contractors
14. Friction Between Software Management and Senior Executives
15. High Maintenance Costs
16. Inaccurate Cost Estimating
17. Inaccurate Metrics
18. Inaccurate Quality Estimating
19. Inaccurate Sizing of Deliverables
20. Inadequate Assessments
21. Inadequate Compensation Plans
Risk Factor (continued)
22. Inadequate Configuration Control and Project Repositories
23. Inadequate Curricula (Software Engineering)
24. Inadequate Curricula (Software Management)
25. Inadequate Measurement
26. Inadequate Package Acquisition Methods
27. Inadequate Research and Reference Facilities
28. Inadequate Software Policies and Standards
29. Inadequate Project Risk Analysis
30. Inadequate Project Value Analysis
31. Inadequate Tools and Methods (Project Management)
32. Inadequate Tools and Methods (Quality Assurance)
33. Inadequate Tools and Methods (Software Engineering)
34. Inadequate Tools and Methods (Technical Documentation)
35. Lack of Reusable Architecture
36. Lack of Reusable Code
37. Lack of Reusable Data
38. Lack of Reusable Designs (Blueprints)
39. Lack of Reusable Documentation
40. Lack of Reusable Estimates (Templates)
Risk Factor (continued)
41. Lack of Reusable Human Interfaces
42. Lack of Reusable Project Plans
43. Lack of Reusable Requirements
44. Lack of Reusable Test Plans, Test Cases, and Test Data
45. Lack of Specialization
46. Long Service Life of Obsolete Systems
47. Low Productivity
48. Low Quality
49. Low Status of Software Personnel and Management
50. Low User Satisfaction
51. Malpractice (Project Management)
52. Malpractice (Technical Staff)
53. Missed Schedules
54. Partial Life-Cycle Definitions
55. Poor Organization Structures
56. Poor Technology Investments
57. Severe Layoffs and Cutbacks of Staff
58. Short-Range Improvement Planning
59. Silver Bullet Syndrome
60. Slow Technology Transfer
The purpose of discussing product risk is to determine what the primary
focus of testing should be. Generally speaking, most organizations find that
their resources are inadequate to test everything in a given release.
Outlining product risks helps the testers prioritize what to test and allows
them to concentrate on those areas that are likely to fail or have a critical
impact on the customer if they do fail.
Risks are used to decide where to start testing and where to test more.
Testing is used to reduce the risk of an adverse effect occurring, or to reduce
the impact of an adverse effect.

Organizations that work on safety-critical software usually can use the
information from their safety and hazard analysis.
However, in many other companies, no attempt is made to verbalize product
risks in any fashion. If your company does not currently do any type of risk
analysis, try a brainstorming session among a small group of users,
developers, and testers to identify concerns.

The concept of risk-driven testing applies to all software development models
and processes. It is critical to developing quality software that meets
user and customer expectations, and it is the focus of both the STEP™
methodology and many of the newer agile development processes.
If you analyze the newer "agile" development methods, this is one of their key
concepts. Interestingly, it is not really a new concept at all; it has been
around for a couple of decades.

There are many different software lifecycle approaches: waterfall, spiral,
incremental delivery, prototyping (evolutionary and throwaway), RAD,
extreme programming (XP), Scrum, DSDM, etc. The key is to know which
process the project is following and to integrate into that process as soon as
reasonably possible. The later you get involved, the less chance you have
to prevent problems.

Testing is, in fact, dependent on the development processes and activities.
As testers we test—evaluate, verify, validate—what the development groups
create.
When we refer to the development process, we are talking about the
creation of an automated solution to a problem using a computer as the
basis for the solution.
The term development in this course relates to the construction of a software
solution to solve a problem. The problem can be of a business nature or
other type of system such as an embedded controller in a car or medical
device.

A master-level plan for the entire project provides a "global" view of the
testing efforts and goals.
One or more detailed, individual-level test plans focus the efforts of a
specific group.
The number of levels will be determined by the time, cost, and risk factors
affecting the project.

Throughout the project you will be revisiting these processes as changes are
made to the project or as new information is uncovered. These processes
will help in various aspects of the planning process, such as estimating,
budget, skills required, etc.

One of the most important deliverables in testing is an assessment of the
failure risk associated with the software—not just numbers of test cases,
defects found, and other statistics.
As testers we provide critical information to management about the potential
failure risk of the software based on the thoroughness and
comprehensiveness (effectiveness) of our test efforts.
We plan, analyze, and design tests, and then execute those tests. The
results of the test execution are analyzed and compared to the planned
(designed) tests; then we evaluate the testing and development processes,
the test results, and the potential for failure based on this assessment.
• The decision to stop testing should be based on this type of analysis
of the testing and development processes.

More information on the planning process is available from several
courses offered by SQE as well as the STAR tutorial noted on the earlier
slide. Two courses offered through SQE Training address planning issues in
more depth.
Systematic Software Testing – This course covers the process in more
detail, including an extended section on planning.
Test Process Management – A manager's course that focuses mainly on
planning and strategic issues. It is similar to SST, but its focus is on
managers and it does not go into detail on test design.

Reference materials include any information available that can assist in
determining the testing objects/conditions.
Some lifecycles do not have formal sources of documentation. No formal
requirements are written. However, there is usually some information about
what type of system is being created, the platform on which it will run, the
goals of the client, etc.
Any information you can gather will help you better understand the test
requirements for this project.

Risk identification and assessment needs to include the viewpoints noted earlier.
• The key is to include testers, to focus the team on the testing issues and to
help determine the priority of the features to be developed.
• Testers need to know the risks and issues in order to properly analyze and
design reasonable tests.
Different groups have different ideas about software. The more of these
disparate groups you can combine, the more accurate the picture you will
have of the risks, priorities, and goals for development, and the more
accurate the testing goals and objects/conditions for this project become.

The test objects process is iterative. You begin the process at requirements
and continue it at each stage of development. How far you take the process
is determined by the scope and risks associated with the software being
tested. In addition to being iterative, the process is also cumulative.
The information from requirements is used to improve the requirements
(static testing/reviews), to focus the design, and possibly to improve the
design.
At the design stage of a project, the information from the requirements
inventory process is used to evaluate the design, to ensure problems are
corrected, and to gather additional items from the design. This process
can be continued as far as the risks to the project warrant.

There are some common aspects of applications that can be drawn from the
design specifications.

Once the inventory has been built, the next step is to determine the impact
and likelihood of something going wrong with each of the elements identified
in the inventory.
• Determine the impact (loss or damage) and likelihood (frequency or
probability) of the feature or attribute failing.
While some organizations like to use percentages, number of days/years
between occurrences, or even probability "half-lives," a set of simple
categories such as the ones listed in the slide above typically provides
sufficient accuracy.
If the likelihood or impact of something going wrong is none or zero, then
the item could be removed from the analysis. However, the removal should
be documented.
• This is not recommended. Just leave it in the inventory; it will
naturally drop to the bottom. Keep it on the list because the
assessment may change later as more information becomes
available.

Risk is in the eye of the beholder, as noted earlier.
• Any two people may look at the same event and see an entirely
different set of issues. What is critical to one may be trivial to the
other.

Likelihood = The probability or chance of an event occurring (e.g., the
likelihood that a user will make a mistake and, if a mistake is made, the
likelihood that it will go undetected by the software)
Impact = The damage that results from a failure (e.g., the system crashing or
corrupting data might be considered high impact)
H = High, which has a value of 3
M = Medium, which has a value of 2
L = Low, which has a value of 1
Under likelihood and impact there may be differences of opinion as to the
risk: something can be high business risk but low technical risk, and so on.
You may have to compromise on an acceptable level of risk.
The numbers are calculated by multiplying the values from our original
matrix (page 78).
H = High, which has a value of 3
M = Medium, which has a value of 2
L = Low, which has a value of 1
Make adjustments and sort by the agreed priority. We now have a risk-based
assessment of what needs to be tested.

Of course, successful prioritization of risks does not help unless test cases
are defined for each risk, with the highest-priority risks being assigned the
most comprehensive tests and priority scheduling. The objective of each test
case is to mitigate at least one of these risks.
If time or resources are an issue, then the priority associated with each
feature or attribute can be used to determine which test cases should be
created and/or run. If testing must be cut, then the risk priority can be used
to determine how and what to drop.
• Cut low risk completely (indicated by the horizontal line).
If you plan to ship the low-risk features, you may want to consider an
across-the-board approach. At least that way, the features do not ship
untested (risk unknown). This entails some additional risk, as higher-risk
features get less testing.
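The two cut strategies can be expressed as one-line filters. The feature names, scores, and the cut line of 4 below are invented for illustration; the scores are assumed to come from the 3x3 likelihood-times-impact matrix.

```python
# Hypothetical risk scores (1..9) from the likelihood x impact matrix.
scored = [
    ("Sales calculations", 9),
    ("XML interface",      6),
    ("Reports menu",       2),
    ("Archive restore",    1),
]

CUT = 4  # assumed cut line agreed with the project team

# Option 1: cut low risk completely -- only items at or above the line
# get test cases created and run.
full_tests = [name for name, score in scored if score >= CUT]

# Option 2: across the board -- items below the line still get a minimal
# smoke test, so no shipped feature has an entirely unknown risk.
smoke_tests = [name for name, score in scored if score < CUT]

print("Full testing:", full_tests)
print("Smoke only:  ", smoke_tests)
```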

The process of “inventorying” test objects can help establish and define the
scope of the testing as well as identify possible points of misunderstanding
and mismatched assumptions. Even a very high level list of test objects can
help in clarifying the goals of the testing and thereby reduce problems later
in the project.
How much effort you put into this process, and how often you have to revisit
the "inventory," will depend on many factors, including:
• The type of development life-cycle used in the project
• The quality of the specifications (detail, scope, etc.)
• The goals of the testing process, how much is enough
• The time, resources, skills, etc. available to the project
One of the key goals of this process is to help understand what really should
be tested and what to do if you cannot test it all, which is usually the case.

Although exploratory testing primarily relies on the skills and knowledge of
the tester and tends to be more dynamic than traditional technique-driven
design, it too can be more formalized. Using the inventory process as part
of an exploratory test process can add structure to the definition of the areas
to be investigated rather than relying only on the skills of the individual
tester.

The level and complexity of documentation represents a serious risk to the
testing process.
• Too much overly detailed, complex documentation takes significant
time to design and create. When things change, the maintenance
costs can be extreme.
• Excessive detail is not necessarily a good characteristic of
documentation.
Too little documentation—with insufficient information to allow for the
analysis, understanding, and maintenance of the tests—is equally bad.
• The time spent reacquiring lost knowledge can be very expensive.
The key is to strike a balance between the level of detail in test
documentation and the time and cost to define, create, and maintain that
same documentation.

The goal is to avoid gaps in the testing as well as to avoid overlapping
testing too much.
Depending on how you define your inventories (based on generic groupings
or application-specific groupings), the idea is to decide who will test which
object at what stage/level.
Some objects cannot be tested until later stages of the process (i.e.,
scenarios and usage-based objects). Conversely, some elements, such as
field edits, valid ranges, and error messages, are best tested in the earlier
stages. These code-logic elements, created by the programmers, are best
tested at that stage of the process. Finding such errors late in the process
can be very costly.

Software configuration management is critical to the testing effort. It is so
important that, if the software configuration management is done poorly, the
testing effort and the entire project may fail.
While this session only covers software, it should be noted that configuration
management typically encompasses more than just the software being
developed for a particular application.

There are several key issues that must also be resolved with the test
environment.
• Is the equipment available when needed?
• Has new equipment acquisition been planned and budgeted for?
• Is there a test lab?
• Is it shared or dedicated?
• Can development share the test lab?
• Can they change it without the test group's approval?
Having the correct environment is critical to good testing. It is hard to
validate or verify something if you do not have the necessary systems,
applications, etc. The more complex and risk-sensitive the application, the
more important the environment becomes.
How much time may be lost just getting the necessary test environment set
up if you start too late in the project? If you are spending three to five days
setting up the test lab, how much testing are you doing?

This information will allow us to assess and report on risks related
to testing activities.

Thoroughness (Depth) – The degree to which inventoried items are tested
relative to risk. How completely did the tests cover the specified risks?

Comprehensiveness (Breadth) – The fraction of inventoried items that
are tested. Not everything gets tested as we noted earlier. The key
here is: How much of what was agreed to be tested was actually tested?
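Both measures reduce to simple ratios, which makes them easy to report alongside the inventory. The item names and counts below are illustrative only, not taken from any real project.

```python
def breadth(tested_items, inventory_items):
    """Comprehensiveness: fraction of inventoried items with at least
    one test actually run against them."""
    return len(set(tested_items) & set(inventory_items)) / len(inventory_items)

def depth(tests_run, tests_planned):
    """Thoroughness: how completely one item's planned, risk-based
    tests were executed."""
    return tests_run / tests_planned

# Illustrative data: 4 inventoried items, 3 touched by testing.
inventory = ["XML interface", "screens", "reports", "archive"]
tested    = ["XML interface", "screens", "reports"]

print(breadth(tested, inventory))             # breadth across the inventory
print(depth(tests_run=18, tests_planned=20))  # depth for a single item
```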

A Sample Approach
1. Develop requirements- and design-based tests (as the software is specified).
2. Run the requirements- and design-based tests using a code coverage
analyzer (as soon as the software item is coded and becomes available).
3. Analyze unexecuted code to determine supplemental objects and tests (if
needed).
4. Study the implementation and identify any additional objects and tests
required.

Metrics indicate where there are elements to be investigated; they don't
necessarily tell us why something is occurring, only that it is occurring and
requires analysis.
Assessing test execution on an ongoing basis helps us avoid worse
problems later in the project and gives us information that we can use to
improve the overall development and testing processes.

This appears better overall, but as before, it does not tell us why something
is occurring.

The sequence is
• Planned
• Specified
• Implemented
• Executed
• Passed/failed
Again, a trace matrix can help. If you don’t know how many tests were
planned, how do you assess progress?
Passed/failed only has meaning if you know what you were intending to do.
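A trace matrix reduced to its simplest form is one status per test, counted at each step of the sequence. The test IDs and statuses below are hypothetical; the point is that "passed/failed" only means something relative to the planned total.

```python
from collections import Counter

# Hypothetical trace data: each test carries its furthest status so far.
tests = {
    "TC-01": "passed",
    "TC-02": "failed",
    "TC-03": "executed",
    "TC-04": "implemented",
    "TC-05": "specified",
    "TC-06": "planned",
}

counts = Counter(tests.values())
planned_total = len(tests)  # every test starts life as "planned"

# A test that passed or failed was necessarily executed.
executed = counts["executed"] + counts["passed"] + counts["failed"]

print(f"{executed}/{planned_total} executed; "
      f"{counts['passed']} passed, {counts['failed']} failed")
```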

Using the PICT data, I have created a matrix of "valid" test cases.
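The PICT output itself is not reproduced here, but the underlying idea can be sketched: a greedy all-pairs reduction picks cases from the full cartesian product until every pair of parameter values appears in at least one case. The parameter names and values below are invented for illustration, and this simple greedy pass is only an approximation of what PICT does.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs selection: repeatedly pick the full-product case
    that covers the most not-yet-covered value pairs."""
    names = list(params)
    # Every (param_index, value, param_index, value) pair to be covered.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    all_cases = list(product(*params.values()))
    suite = []
    while uncovered:
        gain = lambda case: sum(case[i] == va and case[j] == vb
                                for (i, va, j, vb) in uncovered)
        best = max(all_cases, key=gain)
        if gain(best) == 0:
            break  # defensive; every pair is reachable from some case
        suite.append(best)
        uncovered = {p for p in uncovered
                     if not (best[p[0]] == p[1] and best[p[2]] == p[3])}
    return suite

# Invented parameters for illustration.
params = {
    "Browser": ["Chrome", "Firefox", "IE"],
    "OS":      ["Windows", "Mac"],
    "Account": ["Direct", "Reseller"],
}
cases = pairwise_suite(params)
print(f"{len(cases)} pairwise cases instead of {3 * 2 * 2} exhaustive ones")
```

Even on this tiny model the suite is smaller than the 12-case cartesian product while still exercising every pair of values, which is the whole appeal of the technique.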

The first document in this series is the overall project
overview and the project requirements specification.

Reassigned Sales – Project Overview
This project is expected to take approximately 8 months to implement.
The staffing for this project will come from the following internal Widgits departments:
• Marketing and Sales
• Internal MIS
• Applications development
• Quality Assurance
• Infrastructure
• Database administration
• Network administration
Each department will be responsible to the overall project manager for providing identified
resources. The specific roles and responsibilities will be determined by the project
manager working within corporate guidelines.
Additionally, our vendor Zippy Corp. will be providing assistance in the following areas:
Applications development
• XML programming and support
• AS/400 programming assistance
Testing and verification
• Overall testing strategy
• Test design and specification
• Verification of the conversion of XML data into Widgits database formats
• Working with the Marketing and Sales department to create and verify an acceptance
test plan

This application involves a manufacturer (Widgits Inc.) that sells a product through a
sales staff but also allows its products to be sold by third-party distributors and retail
outlets (resellers). Because the company allows this invasion into its salespeople's
territories, it has a method for calculating the degree of encroachment and the
compensation for the affected sales representative.
1. The reassigned sales system will receive reassigned sales data through an XML
interface using standard data definitions.
1A. A separate transaction type will be used to identify sales data from client address
data and both types of data will be passed to the new reassigned sales process.
• It is recognized that each third party may have separate internal account numbers
for their customers. The Widgits database will have to provide a mechanism
whereby a Widgits account number can be associated with multiple third-party
account numbers, all sharing one address record in the Widgits database.
• Account information will use a separate XML data format transaction type and
will be separated into its own database files.
• The account information will be validated against the existing customer master
file to ensure no duplicates are stored.
• A separate cross reference table will be created to identify individual accounts
that purchase both directly from us and a distributor and those that purchase from
multiple distributors.
• There is a minimum amount of account identification that must be received to
create an account. The process must identify any incomplete accounts and the
sales administration must have a method for correcting these records.
2. An on-line process will be created for the sales administration staff providing the
following functions.
2A. Provide data roll-up displays allowing sales staff to review sales data by individual
week or by the month.
2B. Allow sales data received in a week to be posted back to the previous sales week,
eliminating the need for manual adjustments.
• A cutoff period will be defined as to how long the system will wait for a reseller
to report sales data. Once that cutoff has been reached all reported sales will be
moved to the next sales period for calculating a sales person’s compensation.
2C. The application will allow the sales staff to eliminate groups and sets of transactions
(received through the XML interface) that are in error, or appear to be duplicated.

2D. Provide on-line review capabilities to allow the sales staff to review whether a
reseller has or has not sent in their sales data.
• A form of notification process will be provided to do the following:
• Notify the sales commission staff of a late distributor.
• Provide a mechanism (XML based) to contact the distributor.
• Provide a process for the vendor to request a delay. If delayed, the data
would automatically move to the next reporting period.
• A notification to the affected sales person(s) of this delay in commission
credit.
2E. Allow the sales staff to activate the commissions posting process after all data has
been reviewed.
• The posting process “must be” initiated by a staff person.
2F. Provide a reports menu with reports by sales region, sales person, month, week,
product line.
3. The system must provide an archive system to remove posted transactions from the
active files on a monthly basis.
• This process must execute after the monthly close has been processed.
• Transactions that are not in a valid state and that have not been posted will be
removed from the active system during archiving but will be retained on the
archive for later analysis.
• Transactions in a valid state that have not been posted will be retained on the
active files and will be rolled into the next period's (first week) sales data,
regardless of sales date.
• These transactions will not be archived until they have been posted.
4. Prototype screen layouts and report layouts will be provided for user review during
the early stages of system development.
• Once the prototypes have been approved, all additional changes will be addressed
on an as needed, priority basis within the constraints of the project time line.
5. An initial accounts address generation run can be made by a distributor to send in all
available account information prior to starting the XML process. This will enable us to
have an initial set of account records in place.
• Vendors must be provided the opportunity to add, change, and delete customer
address records related to their sales activities.
• However, once a record is added to the Widgits master address file, it will not be
deleted unless all of the following are true:
• There are no other vendors related to the same end client.
• There are no direct sales to the client by Widgits salespeople.
• The record has not had a recorded purchase in the last two years.

The following are the initial test objects for the
Reassigned Sales project.
An initial high level risk assessment will be done on
these items in conjunction with the systems design
process.

TEST OBJECT INVENTORY - REQUIREMENTS BASED

1. REQUIREMENTS
A. XML
B. Order Entry shared interface
C. New/modified screens
D. New/modified reports
E. Sales account information
F. Sales calculations
G. Legacy systems interfaces
H. Archive

2. FEATURES AND FUNCTIONS
A. Order Entry interface
B. Manual interface
C. Reports interface
D. Sales account information

3. TRANSACTIONS
A. XML from distributors
B. XML to distributors
C. Mailbox management
D. External applications (legacy, back office)

4. DATA
A. New format AS/400 database files
B. Messages
C. Conversion
D. Archive
E. Recovery and backups

5. INTERFACES
A. Order Entry
B. Manual User interface
C. Reports interface
D. Sales account information (existing)
E. XML
©2013 SQE Training - STAR East 2013

151
F. Account data files
G. External applications
6. PERFORMANCE
A.
B.
C.
D.
E.

Data downloads from mailbox
Manual user screens (roll-ups etc.) response time
Archive (delays in processing could result for OE)
Sales commissions reports (timeliness do to delayed postings)
Database responsiveness (volume)

7. CONSTRAINTS
A. Security access to screens
B. Access to mailbox service
8. BACK-UP AND RECOVERY
A. Archive

The following is the risk assessment based on the
object inventory developed from the requirements
specification.

Some potential risk factors to consider include:
Impact risk factors
• Endangerment of human life or highly valued resources increases the
impact risk
• For non-critical features, the more immediate the failure detection the lower
the impact risk
• The increased availability of a practical work-around lowers the impact risk
• Updating critical data structures is riskier than just accessing them
• Interfaces with critical functions are riskier
Likelihood risk factors
• New components are riskier than reliable, existing ones
• Components with a history of unreliability are riskier
• Frequently changed components tend to get disorganized over time
• Components developed by personnel with a record of poor product
reliability are riskier
• Components developed by personnel with a poor understanding of either
the requirements or the design are riskier
• This can be compounded by projects that use outsourced services
• Components with frequently changing requirements are riskier
• Poorly designed components are riskier
• Programs solving complex problems are riskier than those solving simpler
ones
• Programs doing multiple functions are riskier than those doing the
corresponding single functions
• The more dynamic and complex the data structure, the riskier

The following inventory and risk assessment will be used in the test planning process.
Close coordination with the developers will be required to ensure critical features are
developed in a priority sequence where possible. The development of any non-critical
features early in the schedule must be approved by all management groups (project,
development, and testing).
Elements indicated as High risk must be considered for development first, as other
features are, to a great degree, dependent on the completion of those features. All
risk assessments included user, technical, and testing considerations in assigning the risk
and priorities.
The priorities are in descending order from 10 (highest) to 1 (lowest). Categories break
down as follows:

High: 10, 9, 8
Medium: 7, 6, 5
Low: 4, 3
Very low: 2
No risk: 1
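The category bands above can be expressed in code. The 10-to-1 scale and the High/Medium/Low bands come from this plan; the `priority_from` function, which derives a priority from separate impact and likelihood ratings, is a hypothetical illustration and not the plan's method.

```python
def category(priority: int) -> str:
    """Translate a 1-10 priority into the plan's risk category."""
    if priority >= 8:
        return "High"        # 10, 9, 8
    if priority >= 5:
        return "Medium"      # 7, 6, 5
    if priority >= 3:
        return "Low"         # 4, 3
    if priority == 2:
        return "Very low"
    return "No risk"         # 1

def priority_from(impact: int, likelihood: int) -> int:
    """Illustrative only: rescale an impact x likelihood score
    (each rated 1-5, product 1-25) onto the plan's 1-10 scale."""
    return max(1, min(10, round(impact * likelihood * 10 / 25)))
```

For example, an item rated at maximum impact and likelihood maps to priority 10 and category High, matching items such as the XML interface in the assessment below.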
OBJECT INVENTORY REQUIREMENTS BASED - INITIAL RISK ASSESSMENT (COMBINED)

                                                                RISK      PRIORITY
1. REQUIREMENTS
   A. XML                                                       High      10
   B. Order Entry shared interface                              High      10
   C. New/modified screens                                      Medium    7
   D. New/modified reports                                      Low       4
   E. Sales account information                                 High      9
   F. Sales calculations                                        High      10
   G. Legacy systems interfaces                                 Medium    7
   H. Archive                                                   Low       3

2. FEATURES AND FUNCTIONS
   A. Order Entry interface                                     High      10
   B. Manual interface                                          Medium    7
   C. Reports interface                                         Low       4
   D. Sales account information                                 High      9

3. TRANSACTIONS
   A. XML from distributors                                     High      10
   B. XML to distributors                                       High      10
   C. Mailbox management                                        Medium    7
   D. External applications (legacy, back office)               Medium    7

4. DATA
   A. New format AS/400 database files                          Low       4
   B. Messages                                                  Low       4
   C. Conversion                                                High      10
   D. Archive                                                   Low       4
   E. Recovery and backups                                      Medium    5

5. INTERFACES
   A. Order Entry                                               High      10
   B. Manual User interface                                     Medium    7
   C. Reports interface                                         Low       4
   D. Sales account information (existing)                      High      9
   E. XML                                                       High      10
   F. Account data files                                        High      9
   G. External applications                                     Medium    7

6. PERFORMANCE
   A. Data downloads from mailbox                               High      9
   B. Manual user screens (roll-ups etc.) response time         Low       4
   C. Archive (delays in processing could result for OE)        Low       3
   D. Sales commissions reports (timeliness due to
      delayed postings)                                         Medium    5
   E. Database responsiveness (volume)                          Low       3

7. CONSTRAINTS
   A. Security access to screens                                Medium    6
   B. Access to mailbox service                                 Low       4

8. BACKUP AND RECOVERY
   A. Archive                                                   Low       3
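To drive the "High risk first" sequencing described earlier, the assessed inventory can simply be ordered by descending priority. The sample entries below are taken from the assessment above; the tuple layout is one possible representation, not a prescribed format.

```python
# (section, item, risk category, priority) -- layout is illustrative.
inventory = [
    ("Requirements", "XML", "High", 10),
    ("Requirements", "New/modified reports", "Low", 4),
    ("Data", "Conversion", "High", 10),
    ("Performance", "Database responsiveness (volume)", "Low", 3),
]

# Highest-priority items first, so they are developed and tested first.
ordered = sorted(inventory, key=lambda item: item[3], reverse=True)
for section, name, risk, priority in ordered:
    print(f"{priority:>2}  {risk:<6}  {section}: {name}")
```

Because the sort is stable, items sharing a priority keep their inventory order, which preserves the section sequence within each risk band.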

The next document is the system level test plan for
the project.

1. TEST PLAN IDENTIFIER

RS-STP01.3

2. REFERENCES
1. Reassigned Sales System Rewrite Requirements - SST_RQMT04.1
2. Reassigned Sales Master Test Plan - RS-MTP01.3
3. Reassigned Sales General Design Specification - RS-SDS01.3
3. INTRODUCTION
This is the System/Integration Test Plan for the Reassigned Sales project. This plan will
address only those items and elements that are related to the Reassigned Sales process; both
directly and indirectly affected elements will be addressed. The primary focus of this plan is to
ensure that the new Reassigned Sales application functions within the prescribed limits and
meets the minimum functions specified in both the requirements specification and the general
design specification.
The system/integration testing will begin as soon as the first complete increment of the
application is available. It is anticipated that the application will be available in several
increments, as identified in the test items section. Systems/integration testing will be conducted
by the development team with the assistance of one full-time test person. The sales
administration team will be involved in the screen/report verification process only. Final user
approval and acceptance will occur during acceptance testing.
4. TEST ITEMS
The following is a list of the functional areas to focus on during systems/integration testing.
Each area will be a separate test cycle with the complete process being tested as the final phase.
Test 1. - XML Interface
Test 2. - Translator Interface (both sales data and account information)
Test 3. - Manual Intervention interface (including all review/update screens)
Test 4. - Reassigned sales posting process
Test 5. - Reassigned sales reports
Test 6. - Archiving
Test 7. - Backup and recovery

5. SOFTWARE RISK ISSUES
There are several interface issues that require additional focus during systems test, in addition
to those issues identified in the Master test plan RS-MTP01.3.
A. The XML interface's capability to support the added reassigned sales transaction volume
in addition to the current Order Entry transaction volume.
B. The different timing of the two interfaces pulling from the shared mailbox on the
Advantis network. The reassigned sales transactions must append to the existing files until
the job that processes the data is executed.
C. The reformatting of the XML data transaction formats into the appropriate reassigned
sales control files by the translation process is critical to application success.
D. The maintenance of the accounts cross reference file to prevent multiple accounts from
being created must be closely monitored. The manual user intervention required to correct
accounts in error will require close scrutiny to prevent overload of the user process.
E. Proper identification of existing accounts to prevent duplicate accounts is critical to the
accounts process. Single accounts shared by multiple distributors must be properly controlled
through the cross reference file.
F. Availability of the XML interface at the initial distributor. It is critical to beginning
systems/integration testing and will also impact some unit testing.
G. Access to, and updating of, the existing customer master file shared with Order Entry. As
Order Entry is an on-line, interactive process, file contention and record locking will be a
major concern, as our process may generate a large volume of account updates to the file.
The distributors have agreed to send initial client files identifying all their current
accounts with their internal account number as well as the information required to identify
the account locally. These will be verified and put through the system in bulk after hours to
avoid problems with Order Entry. However, it is critical that the process be complete and
verified prior to the next day's business.
H. Posting the reassigned sales data to the existing summary and history files must be closely
monitored to ensure that the correct to/from accounts are identified and that all transactions
totals balance by distributor. Errors in postings can cause errors in the Order Entry system's
credits and balances processes.

6. FEATURES TO BE TESTED
The following is a list of the areas to be focused on during testing of the application. Key
areas by test cycle are noted.
Test Cycle 1. - XML Interface
A. Receipt of transactions.
1. Transaction reformatting
2. Single Distributor
3. Multiple Distributors
4. Single daily pull
5. Multiple pulls in a single day
6. With Order Entry data
7. Sales data alone
8. Overlapping requests
B. Error recovery
C. Backups
D. Access to XML process and menus
Test Cycle 2. - Translator Interface (both reassigned sales and account information)
A. Sales transaction
1. Valid transaction
2. Error transactions
3. Error report
B. Account transactions
1. New accounts
2. Account updates
3. Duplicate accounts
4. Account errors
5. Cross reference file maintenance
6. Error and status reports
7. Weekly control files updates
8. Control table processing
Test Cycle 3. - Manual Intervention interface (including all review/update screens)
A. Access controls (security)
B. Account review screen(s)
1. Account Errors
2. Valid account review
3. Customer master file updates/adds
4. Cross reference file updates/adds
5. Account change process
C. Sales transaction review
1. Monthly screen(s)
2. Weekly screen(s)

D. Accounts generation
1. Holding file processing
2. Cross reference file
3. Customer master file
4. Submission of update job
E. Reports
1. Monthly transmission report
2. Weekly transmission report
3. New accounts report
4. Territory report
5. Account match report
Test Cycle 4. - Reassigned sales posting process
A. Sales transaction postings
B. Weekly control file updates
C. Sales History file updates
D. Decision support system file updates
Test Cycle 5. - Archiving
A. Manual archive request
B. Automated monthly archiving process
C. File cleanup and compression
Test Cycle 6. - Independent reports (converted from prior system)
A. Distributor reports
B. Retail reports
C. High volume purchase reports
D. Variance reports
E. Decision support system reports
Test Cycle 7. - Backup and recovery
A. Recovery of interrupted XML transmission
B. Restart of translation process at each step in the process
1. Verification of control areas for restart
C. Restart of Account update process after interrupt
D. Restart of Posting process after interrupt
7. FEATURES NOT TO BE TESTED
Other than those areas identified in the master test plan, no additional areas have been
identified for exclusion.

8. APPROACH
System/Integration testing (combined) will commence as soon as the functional parts of the
application are available, as identified in the individual test cycles in the Features to Be Tested
section. A requirement has been noted for an additional full-time independent test person for
system/integration testing. However, with the budget constraints and timeline established, most
testing will be done by the test manager with the development team's participation.
Entry into system testing will be controlled by the test manager. Proof of unit testing must be
provided by the programmer to the team leader before the unit testing will be accepted and the
module passed on to the test person. All unit test information will also be provided to the test
person.
Program versions submitted for systems/integration testing will be copied from the
development libraries into the test team libraries and will be deleted from the development
library unless the module is segmented and will be used in several overlapping test cycles. If a
module is to be delivered in functional segments, then additional regression testing of previous
test cycles will be performed to ensure all functions still work properly.
9. ITEM PASS/FAIL CRITERIA
Each test cycle will be evaluated on an individual basis. If a cycle has no critical defects and
only one (1) major defect, provided it has a functional, reasonable work-around, the cycle will
be considered complete in terms of starting the next, independent cycle. The major defect will
have to be corrected prior to going to acceptance testing. Minor defects will be addressed on an
as-needed basis depending on resource availability and project schedule. However, if there are
more than fifteen minor defects in a single aspect of the application, the systems test cycle will
be considered incomplete.
Acceptance testing can begin even if there are two major defects in the entire application.
This is acceptable if there are reasonable work-arounds. All major defects must be repaired prior
to pilot testing and final acceptance testing.
In some instances (low-impact majors and minors), the application can be corrected and
bypass systems/integration testing. This decision will be made by the Test Manager and Project
Manager on an ongoing basis.
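Stated as executable logic, the cycle exit criteria above might look like the sketch below. The defect record fields and the helper name are assumptions for illustration, not part of the plan.

```python
from collections import Counter

def cycle_complete(defects):
    """Apply the cycle exit criteria: no critical defects, at most one
    major defect (which must have a work-around), and no more than
    fifteen minor defects in any single aspect of the application.

    defects: list of dicts with 'severity', 'aspect', 'has_workaround'.
    """
    criticals = [d for d in defects if d["severity"] == "critical"]
    majors = [d for d in defects if d["severity"] == "major"]
    minors = [d for d in defects if d["severity"] == "minor"]

    if criticals or len(majors) > 1:
        return False
    if majors and not majors[0]["has_workaround"]:
        return False
    # No more than fifteen minors in any single aspect of the application.
    minors_per_aspect = Counter(d["aspect"] for d in minors)
    return all(count <= 15 for count in minors_per_aspect.values())
```

Note that this only gates the start of the next cycle; as the plan states, the remaining major defect must still be corrected before acceptance testing.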

10. SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS
1. No distributors are ready for testing when system testing is scheduled to begin.
Some testing can be done in areas such as general application flow and module
integration, but actual data validation and verification cannot be done until data is received
from the distributors. Systems testing will be delayed for a time period to be determined
based on the delay in receiving data from the distributor(s).
11. TEST DELIVERABLES
A. High-level System/Integration test design
B. Defect reports
C. Weekly testing status reports
D. Sample reports from process execution

12. REMAINING TEST TASKS
TASK                                           Assigned To                  Status
Create Requirements based Inventories          Client, PM, TM, Dev, Test
Create/Update Design inventories               Dev, TM, PM, Test
Create System/Integration Test Design          TM, PM, Test
Define System/Integration Test rules
  and Procedures                               TM, PM, Test
Setup Controlled Test environment              TM, Test

13. ENVIRONMENTAL NEEDS
The following elements are required to support the Systems/Integration testing.
A. Access to both the development and production AS/400 systems for development, data
acquisition, and testing.
B. Creation and control of test team libraries for systems/integration testing. A separate set
of source control libraries, data files, and control tables will be required to ensure the quality
of systems testing.
C. A time segment on the data transmission interface to receive test XML transmissions
from the distributors. This segment should also include time that overlaps with the
Order Entry process.
14. STAFFING AND TRAINING NEEDS
Time will have to be allocated to the test team for source file movement and for data
acquisition and control. Time will also have to be allocated for preparing the weekly defect and
status reports for the team, and a meeting will have to be scheduled to report on
system/integration test results. Time must also be allocated for development team members to
attend systems/integration status meetings when required.
15. RESPONSIBILITIES
                                               TM    PM    Dev Team    Test Team    Client
Create Requirements based Inventories           X     X        X           X           X
Create/Update Design inventories                X     X        X           X
Create System/Integration Test Design           X     X                    X
Define System/Integration Test rules
  and Procedures                                X     X                    X
Setup Controlled Test environment               X                          X

The entire project team will participate in the review of the system and detail designs, as well as
review of any change requests that are generated by the user or as a result of defects discovered
during development and testing. The full team will participate in the initial development of the
high-level test design.

16. SCHEDULE
All scheduling is documented in the project plan timeline, and the project manager is
responsible for maintaining it. The Test Manager will provide task estimates as required.
17. PLANNING RISKS AND CONTINGENCIES
1. Limited testing staff.
The test team currently comprises only the developers and the Test Manager.
Additional resources have been identified to assist in testing, but if those resources are not
available, the development team will have to provide additional assistance to the Test Manager
during systems/integration testing.
If development must assist in systems/integration testing, there is the possibility that
both development and testing will be delayed due to the overlap of resource requirements.
18. APPROVALS
Project Sponsor - Steve Putnam
Development Management - Ron Meade
EDI Project Manager - Peggy Bloodworth
RS Test Manager - Dale Perry
RS Development Team Manager - Dale Perry
Reassigned Sales - Cathy Capelli
Order Entry EDI Team Manager - Julie Cross

Albert S. QA Mgr. -Resume
 
Test manager resume
Test manager resumeTest manager resume
Test manager resume
 
sutapa_resume
sutapa_resumesutapa_resume
sutapa_resume
 
Resume
ResumeResume
Resume
 
Measurement and Metrics for Test Managers
Measurement and Metrics for Test ManagersMeasurement and Metrics for Test Managers
Measurement and Metrics for Test Managers
 
Effective Test Estimation
Effective Test EstimationEffective Test Estimation
Effective Test Estimation
 
LGM_CV_2016
LGM_CV_2016LGM_CV_2016
LGM_CV_2016
 

More from TechWell

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and RecoveringTechWell
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization TechWell
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTechWell
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartTechWell
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyTechWell
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTechWell
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowTechWell
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityTechWell
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyTechWell
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipTechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsTechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationTechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessTechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateTechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessTechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
 

More from TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 

Recently uploaded

Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfjimielynbastida
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 

Recently uploaded (20)

Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 

Fundamentals of Risk-based Testing

  • 1. MC Full-day Tutorial 4/29/13 8:30AM Fundamentals of Risk-based Testing Presented by: Dale Perry Software Quality Engineering Brought to you by: 340 Corporate Way, Suite 300, Orange Park, FL 32073 888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
  • 2. Dale Perry Dale Perry has more than thirty-six years of experience in information technology as a programmer/analyst, database administrator, project manager, development manager, tester, and test manager. Dale’s project experience includes large-system development and conversions, distributed systems, and both web-based and client/server applications. A professional instructor for more than twenty years, he has presented at numerous industry conferences on development and testing. With Software Quality Engineering for fifteen years, Dale has specialized in training and consulting on testing, inspections and reviews, and other testing and quality-related topics.
  • 3. ©2013 SQE Training - STAR East 2013 1
  • 4. This page left blank ©2013 SQE Training - STAR East 2013 2
  • 5. Notice of Rights Entire contents © 1986-2013 by SQE Training, unless otherwise noted on specific items. All rights reserved. No material in this publication may be reproduced in any form without the express written permission of SQE Training. Home Office SQE Training 330 Corporate Way, Suite 300 Orange Park, FL 32073 U.S.A. (904) 278-0524 (904) 278-4380 fax www.sqetraining.com Notice of Liability The information provided in this book is distributed on an “as is” basis, without warranty. Neither the author nor SQE Training shall have any liability to any person or entity with respect to any loss or damage caused or alleged to have been caused directly or indirectly by the content provided in this course. ©2013 SQE Training - STAR East 2013 3
  • 6. ©2013 SQE Training - STAR East 2013 4
  • 7. ©2013 SQE Training - STAR East 2013 5
  • 8. Formal definitions of testing: IEEE Standard 829-2008 (A) An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. (B) To conduct an activity as in (A). IEEE Standard 610.12-1990 Testing: The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. ©2013 SQE Training - STAR East 2013 6
  • 9. ©2013 SQE Training - STAR East 2013 7
  • 10. ©2013 SQE Training - STAR East 2013 8
  • 11. ©2013 SQE Training - STAR East 2013 9
  • 12. Testing every possible data value, every possible navigation path through the code, and every possible combination of input values is almost always an infinite task that can never be completed. Even if it were possible, it would not be a good idea: many of the test cases would be redundant, consume resources to create, delay time to market, and add nothing of value. ©2013 SQE Training - STAR East 2013 10
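The combinatorial argument above can be made concrete with a small back-of-the-envelope calculation. The numbers below (a form with 10 independent fields, 20 valid values each, 1,000 automated executions per second) are hypothetical, chosen only to show the scale of the problem:

```python
# Hypothetical example: why exhaustive input testing is infeasible.
# A form with 10 independent fields, each accepting 20 distinct values,
# has 20**10 possible input combinations.
fields = 10
values_per_field = 20
combinations = values_per_field ** fields
print(combinations)  # 10240000000000 (about 10 trillion)

# Even at 1,000 automated test executions per second, running every
# combination once would take centuries.
seconds = combinations / 1_000
years = seconds / (60 * 60 * 24 * 365)
print(round(years))  # roughly 325 years
```

This is why test design focuses on selecting a small, high-value subset of cases (for example, via risk analysis) rather than attempting exhaustive coverage.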
  • 13. ©2013 SQE Training - STAR East 2013 11
  • 14. ©2013 SQE Training - STAR East 2013 12
  • 15. ©2013 SQE Training - STAR East 2013 13
  • 16. ©2013 SQE Training - STAR East 2013 14
  • 17. Risk Factor 1 Ambiguous Improvement Targets 2 Artificial Maturity Levels 3 Canceled Projects 4 Corporate Politics 5 Cost Overruns 6 Creeping User Requirements 7 Crowded Office Conditions 8 Error-Prone Modules 9 Excessive Paperwork 10 Excessive Schedule Pressure 11 Excessive Time to Market 12 False Productivity Claims 13 Friction Between Clients and Software Contractors 14 Friction Between Software Management and Senior Executives 15 High Maintenance Costs 16 Inaccurate Cost Estimating 17 Inaccurate Metrics 18 Inaccurate Quality Estimating 19 Inaccurate Sizing of Deliverables 20 Inadequate Assessments 21 Inadequate Compensation Plans ©2013 SQE Training - STAR East 2013 15
  • 18. Risk Factor 22 Inadequate Configuration Control and Project Repositories 23 Inadequate Curricula (Software Engineering) 24 Inadequate Curricula (Software Management) 25 Inadequate Measurement 26 Inadequate Package Acquisition Methods 27 Inadequate Research and Reference Facilities 28 Inadequate Software Policies and Standards 29 Inadequate Project Risk Analysis 30 Inadequate Project Value Analysis 31 Inadequate Tools and Methods (Project Management) 32 Inadequate Tools and Methods (Quality Assurance) 33 Inadequate Tools and Methods (Software Engineering) 34 Inadequate Tools and Methods (Technical Documentation) 35 Lack of Reusable Architecture 36 Lack of Reusable Code 37 Lack of Reusable Data 38 Lack of Reusable Designs (Blueprints) 39 Lack of Reusable Documentation 40 Lack of Reusable Estimates (Templates) ©2013 SQE Training - STAR East 2013 16
  • 19. Risk Factor 41 Lack of Reusable Human Interfaces 42 Lack of Reusable Project Plans 43 Lack of Reusable Requirements 44 Lack of Reusable Test Plans, Test Cases, and Test Data 45 Lack of Specialization 46 Long Service Life of Obsolete Systems 47 Low Productivity 48 Low Quality 49 Low Status of Software Personnel and Management 50 Low User Satisfaction 51 Malpractice (Project Management) 52 Malpractice (Technical Staff) 53 Missed Schedules 54 Partial Life-Cycle Definitions 55 Poor Organization Structures 56 Poor Technology Investments 57 Severe Layoffs and Cutbacks of Staff 58 Short-Range Improvement Planning 59 Silver Bullet Syndrome 60 Slow Technology Transfer ©2013 SQE Training - STAR East 2013 17
  • 20. ©2013 SQE Training - STAR East 2013 18
  • 21. ©2013 SQE Training - STAR East 2013 19
  • 22. ©2013 SQE Training - STAR East 2013 20
  • 23. ©2013 SQE Training - STAR East 2013 21
  • 24. ©2013 SQE Training - STAR East 2013 22
  • 25. The purpose of discussing product risk is to determine what the primary focus of testing should be. Generally speaking, most organizations find that their resources are inadequate to test everything in a given release. Outlining product risks helps the testers prioritize what to test and allows them to concentrate on those areas that are likely to fail or have a critical impact on the customer if they do fail. Risks are used to decide where to start testing and where to test more. Testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect. ©2013 SQE Training - STAR East 2013 23
  • 26. Organizations that work on safety-critical software usually can use the information from their safety and hazard analysis. However, in many other companies, no attempt is made to verbalize product risks in any fashion. If your company does not currently do any type of risk analysis, try a brainstorming session among a small group of users, developers, and testers to identify concerns. ©2013 SQE Training - STAR East 2013 24
  • 27. ©2013 SQE Training - STAR East 2013 25
  • 28. ©2013 SQE Training - STAR East 2013 26
  • 29. ©2013 SQE Training - STAR East 2013 27
  • 30. ©2013 SQE Training - STAR East 2013 28
  • 31. ©2013 SQE Training - STAR East 2013 29
  • 32. ©2013 SQE Training - STAR East 2013 30
  • 33. ©2013 SQE Training - STAR East 2013 31
  • 34. The concept of risk-driven testing applies to all software development models and processes. It is critical to developing quality software that meets user/customer expectations and is the focus of both the STEP™ methodology and many of the new agile development processes. If you analyze the newer “agile” development methods, this is one of the key concepts. Interestingly, this is not a new concept at all; it has been around for a couple of decades. ©2013 SQE Training - STAR East 2013 32
  • 35. There are many different software lifecycle approaches: waterfall, spiral, incremental delivery, prototyping (evolutionary and throwaway), RAD, extreme programming (XP), Scrum, DSDM, etc. The key is to know which process the project is following and to integrate into that process as soon as is reasonable. The later you get involved, the less chance you have to prevent problems. ©2013 SQE Training - STAR East 2013 33
  • 36. Testing is, in fact, dependent on the development processes and activities. As testers we test—evaluate, verify, validate—what the development groups create. When we refer to the development process, we are talking about the creation of an automated solution to a problem using a computer as the basis for the solution. The term development in this course relates to the construction of a software solution to solve a problem. The problem can be of a business nature or other type of system such as an embedded controller in a car or medical device. ©2013 SQE Training - STAR East 2013 34
  • 37. ©2013 SQE Training - STAR East 2013 35
  • 38. ©2013 SQE Training - STAR East 2013 36
  • 39. ©2013 SQE Training - STAR East 2013 37
  • 40. A master level plan for the entire project provides a “global” view of the testing efforts and goals. One (or more) detailed, individual level test plan focuses the efforts of a specific group. The number of levels will be determined by the time, cost, and risk factors affecting the project. ©2013 SQE Training - STAR East 2013 38
  • 41. Throughout the project you will be revisiting these processes as changes are made to the project or as new information is uncovered. These processes will help in various aspects of the planning process, such as estimating, budget, skills required, etc. ©2013 SQE Training - STAR East 2013 39
  • 42. One of the most important deliverables in testing is an assessment of the failure risk associated with the software—not just numbers of test cases, defects found, and other statistics. As testers we provide critical information to management about the potential failure risk of the software based on the thoroughness and comprehensiveness (effectiveness) of our test efforts. We plan, analyze, and design tests, and then execute those tests. The results of the test execution are analyzed and compared to the planned (designed) tests; then we evaluate the testing and development processes, the test results, and the potential for failure based on this assessment. • The decision to stop testing should be based on this type of analysis of the testing and development processes. ©2013 SQE Training - STAR East 2013 40
  • 43. ©2013 SQE Training - STAR East 2013 41
  • 44. ©2013 SQE Training - STAR East 2013 42
  • 45. ©2013 SQE Training - STAR East 2013 43
  • 46. More information on the planning process can be obtained from several courses offered by SQE as well as the STAR tutorial noted on the earlier slide. Two courses offered through SQE Training address planning issues in more depth. Systematic Software Testing – This course covers the process in more detail, including an extended section on planning. Test Process Management – A manager's course that focuses mainly on planning and strategic issues. It is similar to SST, but its focus is on managers and it does not go into detail on test design, etc. ©2013 SQE Training - STAR East 2013 44
  • 47. ©2013 SQE Training - STAR East 2013 45
  • 48. ©2013 SQE Training - STAR East 2013 46
  • 49. ©2013 SQE Training - STAR East 2013 47
  • 50. ©2013 SQE Training - STAR East 2013 48
  • 51. ©2013 SQE Training - STAR East 2013 49
  • 52. Reference materials include any information available that can assist in determining the testing objects/conditions. Some lifecycles do not have formal sources of documentation. No formal requirements are written. However, there is usually some information about what type of system is being created, the platform on which it will run, the goals of the client, etc. Any information you can gather will help you better understand the test requirements for this project. ©2013 SQE Training - STAR East 2013 50
  • 53. Risk identification and assessment needs to include the viewpoints noted earlier. The key is to include testers to focus the team on the testing issues and to help determine the priority of the features to be developed. Testers need to know the risks and issues in order to properly analyze and design reasonable tests. Different groups have different ideas about software. The more of these disparate groups you can combine, the more accurate the picture you will have of the risks, priorities, and goals for development, and the more accurate the testing goals and objects/conditions for this project become. ©2013 SQE Training - STAR East 2013 51
• 56. The objects process is iterative: you begin at requirements and continue at each stage of the development process. How far you take the process is determined by the scope and risks associated with the software being tested. The process is also cumulative. Information from the requirements is used to improve the requirements (static testing/reviews), to focus the design, and possibly to improve the design. At the design stage of a project, the information from the requirements inventory is used to evaluate the design, to ensure problems are corrected, and to gather additional items from the design. The process can be continued as far as the risks to the project warrant.
• 66. There are some common aspects of applications that can be drawn from the design specifications.
• 73. Once the inventory has been built, the next step is to determine the impact and likelihood of something going wrong with each element identified in the inventory. • Determine the impact (loss or damage) and likelihood (frequency or probability) of the feature or attribute failing. While some organizations like to use percentages, the number of days/years between occurrences, or even probability "half-lives," a set of simple categories such as those listed on the slide above typically provides sufficient accuracy. If the likelihood or impact of something going wrong is zero, the item could be removed from the analysis, with the removal documented. • However, removal is not recommended. Just leave the item in the inventory; it will naturally drop to the bottom. Keep it on the list because the assessment may change later as more information becomes available.
• 79. Risk is in the eye of the beholder, as noted earlier. • Any two people may look at the same event and see entirely different sets of issues. What is critical to one may be trivial to the other.
• 81. Likelihood = the probability or chance of an event occurring (e.g., the likelihood that a user will make a mistake and, if a mistake is made, the likelihood that it will go undetected by the software). Impact = the damage that results from a failure (e.g., the system crashing or corrupting data might be considered high impact). H = High, which has a value of 3; M = Medium, which has a value of 2; L = Low, which has a value of 1.
• 83. Under likelihood and impact there may be differences of opinion as to the risk; an item can be high business risk but low technical risk, and so on, so you may have to compromise on an acceptable level of risk. The numbers are calculated by multiplying the values from our original matrix (page 78): H = High, which has a value of 3; M = Medium, which has a value of 2; L = Low, which has a value of 1.
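The multiplication described above can be sketched in a few lines. The H/M/L values (3/2/1) and the "leave low-risk items on the list; they sort to the bottom" advice come from the slides; the item names are illustrative assumptions.

```python
# H/M/L values from the slides: High=3, Medium=2, Low=1
LEVELS = {"High": 3, "Medium": 2, "Low": 1}

def exposure(likelihood: str, impact: str) -> int:
    """Risk exposure = likelihood value x impact value (range 1..9)."""
    return LEVELS[likelihood] * LEVELS[impact]

# Hypothetical inventory items with negotiated likelihood/impact ratings
inventory = [
    ("XML interface",     "High", "High"),
    ("Reports interface", "Low",  "Medium"),
    ("Archive",           "Low",  "Low"),
]

# Even low-risk items stay on the list; they simply sort to the bottom.
ranked = sorted(inventory, key=lambda i: exposure(i[1], i[2]), reverse=True)
for name, lik, imp in ranked:
    print(name, exposure(lik, imp))
```

Sorting by the product gives the risk-based ordering the slides describe; ties can then be broken by negotiation among the stakeholder groups.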
• 84. Make adjustments and sort by the agreed priority. We now have a risk-based assessment of what needs to be tested.
• 85. Of course, successful prioritization of risks does not help unless test cases are defined for each risk, with the highest-priority risks being assigned the most comprehensive tests and priority scheduling. The objective of each test case is to mitigate at least one of these risks. If time or resources are an issue, the priority associated with each feature or attribute can be used to determine which test cases should be created and/or run. If testing must be cut, the risk priority can be used to determine what to drop and how. • Cut low risk completely (indicated by the horizontal line). If you plan to ship the low-risk features, you may want to consider an across-the-board reduction instead; at least that way, the features do not ship untested (risk unknown). This entails some additional risk, as higher-risk features get less testing.
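Applying the cut line can be sketched as a simple filter over the prioritized inventory. The feature names and the threshold below are illustrative assumptions; only the 10-to-1 priority scale comes from the materials.

```python
# Hypothetical (feature, risk_priority) pairs; priority 10 = highest
features = [("Sales calculations", 10), ("Manual interface", 7),
            ("Reports interface", 4), ("Archive", 3)]

CUT_LINE = 5  # agreed threshold: tests below this priority are dropped

kept    = [f for f in features if f[1] >= CUT_LINE]
dropped = [f for f in features if f[1] < CUT_LINE]
# Items in 'dropped' ship with risk unknown, unless an across-the-board
# reduction is used instead so every feature still gets some testing.
print("kept:", kept)
print("dropped:", dropped)
```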
• 86. The process of "inventorying" test objects can help establish and define the scope of the testing, as well as identify possible points of misunderstanding and mismatched assumptions. Even a very high-level list of test objects can help clarify the goals of the testing and thereby reduce problems later in the project. How much effort you put into this process, and how often you have to revisit the "inventory," will depend on many factors: • The type of development lifecycle used in the project • The quality of the specifications (detail, scope, etc.) • The goals of the testing process, i.e., how much is enough • The time, resources, skills, etc. available to the project. One of the key goals of this process is to understand what really should be tested and what to do if you cannot test it all, which is usually the case.
• 90. Although exploratory testing relies primarily on the skills and knowledge of the tester and tends to be more dynamic than traditional technique-driven design, it too can be formalized. Using the inventory process as part of an exploratory test process can add structure to the definition of the areas to be investigated, rather than relying only on the skills of the individual tester.
• 95. The level and complexity of documentation represents a serious risk to the testing process. • Too much overly detailed, complex documentation takes significant time to design and create, and when things change, the maintenance costs can be extreme. Excessive detail is not necessarily a good characteristic of documentation. • Too little documentation, with insufficient information to allow for the analysis, understanding, and maintenance of the tests, is equally bad. The time spent reacquiring lost knowledge can be very expensive. The key is to strike a balance between the level of detail in test documentation and the time and cost to define, create, and maintain that documentation.
• 96. The goal is to avoid gaps in the testing, as well as to avoid overlapping tests too much. Depending on how you define your inventories, whether by generic groupings or application-specific groupings, the idea is to decide who will test which object at what stage/level. Some objects cannot be tested until later stages of the process (e.g., scenarios and usage-based objects). Conversely, some elements, such as field edits, valid ranges, and error messages, are best tested in the earlier stages. These code-logic elements, created by the programmers, are best tested at that stage of the process; finding such errors late in the process can be very costly.
• 105. Software configuration management is critical to the testing effort. It is so important that, if it is done poorly, the testing effort and the entire project may fail. While this session covers only software, note that configuration management typically encompasses more than just the software being developed for a particular application.
• 106. There are several key issues that must be resolved with the test environment: • Is the equipment available when needed? • Has new equipment acquisition been planned and budgeted for? • Is there a test lab? • Is it shared or dedicated? • Can development share the test lab? • Can they change it without the test group's approval? Having the correct environment is critical to good testing; it is hard to validate or verify something if you do not have the necessary systems, applications, etc. The more complex and risk-sensitive the application, the more important the environment becomes. How much time may be lost just setting up the necessary test environment if you start too late in the project? If you are spending three to five days setting up the test lab, how much testing are you doing?
• 109. This information will allow us to assess and report on risks related to testing activities.
• 116. Thoroughness (depth): the degree to which inventoried items are tested relative to risk. How completely did the tests cover the specified risks? Comprehensiveness (breadth): the fraction of inventoried items that are tested. As noted earlier, not everything gets tested. The key question is: how much of what was agreed to be tested was actually tested?
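Both measures can be computed directly from a traceability inventory. A minimal sketch with hypothetical figures (the function names, item names, and numbers are illustrative, not from the course materials):

```python
def breadth(tested_items, inventory_items):
    """Comprehensiveness: fraction of inventoried items that received any testing."""
    return len(set(tested_items) & set(inventory_items)) / len(inventory_items)

def depth(executed, planned):
    """Thoroughness: executed vs. planned tests for each inventoried item."""
    return {item: executed.get(item, 0) / n for item, n in planned.items()}

# Hypothetical figures: 4 of 5 inventoried items were tested at all,
# and the high-risk XML item had 9 of its 10 planned tests executed.
inventory = ["XML", "Order Entry", "Screens", "Reports", "Archive"]
tested = ["XML", "Order Entry", "Screens", "Reports"]
planned = {"XML": 10, "Archive": 4}
executed = {"XML": 9}

print(f"breadth = {breadth(tested, inventory):.0%}")         # 80%
print(f"XML depth = {depth(executed, planned)['XML']:.0%}")  # 90%
```

High-risk items should show high depth; breadth tells you how much of the agreed scope was touched at all.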
• 117. A sample approach: • Develop requirements- and design-based tests (as the software is specified) • Run the requirements- and design-based tests using a code coverage analyzer (as soon as the software item is coded and becomes available) • Analyze unexecuted code to determine supplemental objects and tests (if needed) • Study the implementation and identify any additional objects and tests required.
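The "analyze unexecuted code" step can be illustrated with a toy stand-in for a coverage analyzer. Real analyzers instrument the code automatically; the function and branch names here are invented for illustration.

```python
# Toy stand-in for a coverage analyzer: record which branches execute.
executed_branches = set()

def discount(total, is_member):
    # Hypothetical function under test (not from the course materials)
    if is_member:
        executed_branches.add("member")
        return total * 0.9
    executed_branches.add("non-member")
    return total

# Requirements-based test: members get 10% off
assert discount(100, True) == 90.0

# Coverage analysis step: which branches never ran?
uncovered = {"member", "non-member"} - executed_branches
# 'uncovered' shows a supplemental test for the non-member path is needed
print(uncovered)
```

The point of the sample approach is exactly this loop: the requirements-based tests come first, and coverage analysis then reveals implementation paths those tests never reached.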
• 119. Metrics indicate where there are elements to be investigated; they do not necessarily tell us why something is occurring, only that it is occurring and requires analysis. Assessing test execution on an ongoing basis helps us avoid worse problems later in the project and gives us information we can use to improve the overall development and testing processes.
• 120. This appears better overall but, as before, it does not tell us why something is occurring.
• 121. The sequence is: • Planned • Specified • Implemented • Executed • Passed/failed. Again, a trace matrix can help. If you do not know how many tests were planned, how do you assess progress? Passed/failed only has meaning if you know what you were intending to do.
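The progress tallies implied by that sequence can be sketched from a trace matrix. The status values come from the slides; the sample data is hypothetical.

```python
from collections import Counter

# Status sequence from the slides; each test carries its furthest status.
SEQUENCE = ["planned", "specified", "implemented", "executed", "passed", "failed"]

# Hypothetical trace-matrix snapshot: one status per planned test
statuses = ["passed", "failed", "executed", "specified", "planned", "passed"]

tally = Counter(statuses)
planned_total = len(statuses)  # everything counted was at least planned
run = tally["executed"] + tally["passed"] + tally["failed"]
print(f"{run}/{planned_total} tests run, {tally['passed']} passed")
```

Without the planned total as a denominator, pass/fail counts alone cannot show progress, which is the slide's point.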
• 141. Using the PICT data, I have created a matrix of "valid" test cases.
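PICT itself generates the pairwise matrix. As an illustration of the underlying idea, here is a naive greedy pairwise sketch over a hypothetical parameter model (the parameters and values are invented, not the course's actual PICT data, and PICT's own heuristic is more refined):

```python
from itertools import combinations, product

# Hypothetical PICT-style model: each parameter and its values
params = {
    "Browser": ["IE", "Firefox", "Chrome"],
    "OS":      ["XP", "Win7"],
    "Locale":  ["en", "de"],
}
names = list(params)

# Every parameter-value pair a 2-way (pairwise) suite must cover
needed = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        needed.add(frozenset([(a, va), (b, vb)]))

def pairs_of(row):
    """All parameter-value pairs exercised by one test case."""
    return {frozenset(p) for p in combinations(zip(names, row), 2)}

# Greedy selection: repeatedly take the candidate row covering the most
# still-uncovered pairs, until every pair is covered.
suite, uncovered = [], set(needed)
while uncovered:
    best = max(product(*params.values()),
               key=lambda row: len(pairs_of(row) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(list(product(*params.values()))), "exhaustive cases")  # 12
print(len(suite), "pairwise cases")
```

The pairwise suite covers every two-way combination with far fewer rows than the exhaustive cross product, which is why a "valid" test case matrix built from PICT output stays manageable.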
• 148. The first document in this series is the overall project overview and the project requirements specification.
• 149. Reassigned Sales - Project Overview
This project is expected to take approximately 8 months to implement. The staffing for this project will come from the following internal Widgits departments:
• Marketing and Sales
• Internal MIS
• Applications development
• Quality Assurance
• Infrastructure
• Database administration
• Network administration
Each department will be responsible to the overall project manager for providing the identified resources. The specific roles and responsibilities will be determined by the project manager, working within corporate guidelines. Additionally, our vendor Zippy Corp. will be providing assistance in the following areas:
Applications development
• XML programming and support
• AS/400 programming assistance
Testing and verification
• Overall testing strategy
• Test design and specification
• Verification of the conversion of XML data into Widgits database formats
• Working with the Marketing and Sales department to create and verify an acceptance test plan
• 150. This application involves a manufacturer (Widgits Inc.) that sells a product through a sales staff but also allows its products to be sold by third-party distributors and retail outlets (resellers). Because the company allows this invasion of its sales people's territories, it has a method for calculating the degree of encroachment and the compensation for the affected sales representative.
1. The reassigned sales system will receive reassigned sales data through an XML interface using standard data definitions.
1A. A separate transaction type will be used to distinguish sales data from client address data, and both types of data will be passed to the new reassigned sales process.
• It is recognized that each third party may have separate internal account numbers for its customers. The Widgits database will have to provide a mechanism whereby a Widgits account number can be associated with multiple third-party account numbers, all sharing one address record in the Widgits database.
• Account information will use a separate XML data format transaction type and will be separated into its own database files.
• The account information will be validated against the existing customer master file to ensure no duplicates are stored.
• A separate cross-reference table will be created to identify individual accounts that purchase both directly from us and from a distributor, and those that purchase from multiple distributors.
• There is a minimum amount of account identification that must be received to create an account. The process must identify any incomplete accounts, and sales administration must have a method for correcting these records.
2. An on-line process will be created for the sales administration staff, providing the following functions.
2A. Provide data roll-up displays allowing sales staff to review sales data by individual week or by month.
2B. Allow sales data received in a week to be posted back to the previous sales week, eliminating the need for manual adjustments.
• A cutoff period will be defined as to how long the system will wait for a reseller to report sales data. Once that cutoff has been reached, all reported sales will be moved to the next sales period for calculating a sales person's compensation.
2C. The application will allow the sales staff to eliminate groups and sets of transactions (received through the XML interface) that are in error or appear to be duplicated.
• 151. 2D. Provide on-line review capabilities to allow the sales staff to review whether a reseller has or has not sent in its sales data.
• A notification process will be provided to do the following:
• Notify the sales commission staff of a late distributor.
• Provide a mechanism (XML based) to contact the distributor.
• Provide a process for the vendor to request a delay. If delayed, the data would automatically move to the next reporting period.
• Notify the affected sales person(s) of this delay in commission credit.
2E. Allow the sales staff to activate the commissions posting process after all data has been reviewed.
• The posting process "must be" initiated by a staff person.
2F. Provide a reports menu with reports by sales region, sales person, month, week, and product line.
3. The system must provide an archive process to remove posted transactions from the active files on a monthly basis.
• This process must execute after the monthly close has been processed.
• Transactions that are not in a valid state and that have not been posted will be removed from the active system during archiving but will be retained in the archive for later analysis.
• Transactions in a valid state that have not been posted will be retained in the active files and will be rolled into the next period's (first week) sales data, regardless of sales date.
• These transactions will not be archived until they have been posted.
4. Prototype screen layouts and report layouts will be provided for user review during the early stages of system development.
• Once the prototypes have been approved, all additional changes will be addressed on an as-needed, priority basis within the constraints of the project timeline.
5. An initial accounts address generation run can be made by a distributor to send in all available account information prior to starting the XML process. This will enable us to have an initial set of account records in place.
• Vendors must be provided the opportunity to add, change, and delete customer address records related to their sales activities.
• However, once a record is added to the Widgits master address file, it will not be deleted unless all of the following are true:
• There are no other vendors related to the same end client.
• There are no direct sales to the client by Widgits sales persons.
• The record has not had a recorded purchase in the last two years.
• 152. The following are the initial test objects for the Reassigned Sales project. An initial high-level risk assessment will be done on these items in conjunction with the systems design process.
• 153. TEST OBJECT INVENTORY - REQUIREMENTS BASED
1. REQUIREMENTS
A. XML
B. Order Entry shared interface
C. New/modified screens
D. New/modified reports
E. Sales account information
F. Sales calculations
G. Legacy systems interfaces
H. Archive
2. FEATURES AND FUNCTIONS
A. Order Entry interface
B. Manual interface
C. Reports interface
D. Sales account information
3. TRANSACTIONS
A. XML from distributors
B. XML to distributors
C. Mailbox management
D. External applications (legacy, back office)
4. DATA
A. New format AS/400 database files
B. Messages
C. Conversion
D. Archive
E. Recovery and backups
5. INTERFACES
A. Order Entry
B. Manual user interface
C. Reports interface
D. Sales account information (existing)
E. XML
F. Account data files
G. External applications
6. PERFORMANCE
A. Data downloads from mailbox
B. Manual user screens (roll-ups, etc.) response time
C. Archive (delays in processing could result for OE)
D. Sales commissions reports (timeliness due to delayed postings)
E. Database responsiveness (volume)
7. CONSTRAINTS
A. Security access to screens
B. Access to mailbox service
8. BACK-UP AND RECOVERY
A. Archive
• 155. The following is the risk assessment based on the object inventory developed from the requirements specification.
• 156. Some potential risk factors to consider include:
Impact risk factors
• Endangerment of human life or highly valued resources increases the impact risk
• For non-critical features, the more immediate the failure detection, the lower the impact risk
• The increased availability of a practical work-around lowers the impact risk
• Updating critical data structures is riskier than just accessing them
• Interfaces with critical functions are riskier
Likelihood risk factors
• New components are riskier than reliable, existing ones
• Components with a history of unreliability are riskier
• Frequently changed components tend to become disorganized over time
• Components developed by personnel with a record of poor product reliability are riskier
• Components developed by personnel with a poor understanding of either the requirements or the design are riskier; this can be compounded on projects that use outsourced services
• Components with frequently changing requirements are riskier
• Poorly designed components are riskier
• Programs solving complex problems are riskier than those solving simpler ones
• Programs performing multiple functions are riskier than those performing the corresponding single functions
• The more dynamic and complex the data structure, the riskier
• 157. The following inventory and risk assessment will be used in the test planning process. Close coordination with the developers will be required to ensure critical features are developed in priority sequence where possible. The development of any non-critical features early in the schedule must be approved by all management groups (project, development, and testing). Elements marked High risk must be considered for development first, as other features are, to a great degree, dependent on their completion. All risk assessments included user, technical, and testing considerations in assigning the risk and priorities. The priorities are in descending order from 10 (highest) to 1 (lowest). Categories break down as follows: High = 10, 9, 8; Medium = 7, 6, 5; Low = 4, 3; Very low = 2; No risk = 1.
OBJECT INVENTORY - REQUIREMENTS BASED - INITIAL RISK ASSESSMENT (COMBINED), shown as item: risk, priority
1. REQUIREMENTS
A. XML: High, 10
B. Order Entry shared interface: High, 10
C. New/modified screens: Medium, 7
D. New/modified reports: Low, 4
E. Sales account information: High, 9
F. Sales calculations: High, 10
G. Legacy systems interfaces: Medium, 7
H. Archive: Low, 3
2. FEATURES AND FUNCTIONS
A. Order Entry interface: High, 10
B. Manual interface: Medium, 7
C. Reports interface: Low, 4
D. Sales account information: High, 9
3. TRANSACTIONS
A. XML from distributors: High, 10
B. XML to distributors: High, 10
C. Mailbox management: Medium, 7
D. External applications (legacy, back office): Medium, 7
4. DATA
A. New format AS/400 database files: Low, 4
B. Messages: Low, 4
C. Conversion: High, 10
D. Archive: Low, 4
E. Recovery and backups: Medium, 5
5. INTERFACES
A. Order Entry: High, 10
B. Manual user interface: Medium, 7
C. Reports interface: Low, 4
D. Sales account information (existing): High, 9
E. XML: High, 10
F. Account data files: High, 9
G. External applications: Medium, 7
6. PERFORMANCE
A. Data downloads from mailbox: High, 9
B. Manual user screens (roll-ups, etc.) response time: Low, 4
C. Archive (delays in processing could result for OE): Low, 3
D. Sales commissions reports (timeliness due to delayed postings): Medium, 5
E. Database responsiveness (volume): Low, 3
7. CONSTRAINTS
A. Security access to screens: Medium, 6
B. Access to mailbox service: Low, 4
8. BACKUP AND RECOVERY
A. Archive: Low, 3
• 160. The next document is the system-level test plan for the project.
• 161. 1. TEST PLAN IDENTIFIER
RS-STP01.3
2. REFERENCES
1. Reassigned Sales System Rewrite Requirements - SST_RQMT04.1
2. Reassigned Sales Master Test Plan - RS-MTP01.3
3. Reassigned Sales General Design Specification - RS-SDS01.3
3. INTRODUCTION
This is the System/Integration Test Plan for the Reassigned Sales project. This plan addresses only those items and elements that are related to the Reassigned Sales process; both directly and indirectly affected elements will be addressed. The primary focus of this plan is to ensure that the new Reassigned Sales application functions within the prescribed limits and meets the minimum functions specified in both the requirements specification and the general design specification. System/integration testing will begin as soon as the first complete increment of the application is available. It is anticipated that the application will be available in several increments, as identified in the test items section. System/integration testing will be conducted by the development team with the assistance of one full-time test person. The sales administration team will be involved in the screen/report verification process only. Final user approval and acceptance will occur during acceptance testing.
4. TEST ITEMS
The following is a list of the functional areas to focus on during system/integration testing. Each area will be a separate test cycle, with the complete process being tested as the final phase.
Test 1 - XML interface
Test 2 - Translator interface (both sales data and account information)
Test 3 - Manual intervention interface (including all review/update screens)
Test 4 - Reassigned sales posting process
Test 5 - Reassigned sales reports
Test 6 - Archiving
Test 7 - Backup and recovery
• 162. 5. SOFTWARE RISK ISSUES
There are several interface issues that require additional focus during system testing, in addition to the issues identified in the master test plan RS-MTP01.3.
A. The XML interface's capability to support the added reassigned sales transaction volume in addition to the current Order Entry transaction volume.
B. The different timing of the two interfaces pulling from the shared mailbox on the Advantis network. The reassigned sales transactions must append to the existing files until the process is executed to process the data.
C. The reformatting of the XML data transaction formats into the appropriate reassigned sales control files by the translation process is critical to application success.
D. The maintenance of the accounts cross-reference file to prevent multiple accounts from being created must be closely monitored. The manual user intervention required to correct accounts in error will require close scrutiny to prevent overload of the user process.
E. Proper identification of existing accounts, to prevent duplicate accounts, is critical to the accounts process. Single accounts shared by multiple distributors must be properly controlled through the cross-reference file.
F. Availability of the XML interface at the initial distributor. It is critical to beginning system/integration testing and will also impact some unit testing.
G. Access to, and updating of, the existing customer master file shared with Order Entry. As Order Entry is an on-line, interactive process, file contention and record locking will be a major concern, since our process may generate a large volume of account updates to the file. The distributors have agreed to send initial client files identifying all their current accounts, with their internal account numbers as well as the information required to identify each account locally. These will be verified and put through the system in bulk, after hours, to avoid problems with Order Entry. However, it is critical that the process be complete and verified prior to the next day's business.
H. Posting the reassigned sales data to the existing summary and history files must be closely monitored to ensure that the correct to/from accounts are identified and that all transaction totals balance by distributor. Errors in postings can cause errors in the Order Entry system's credits and balances processes.
• 163. 6. FEATURES TO BE TESTED
The following is a list of the areas to be focused on during testing of the application. Key areas by test cycle are noted.
Test Cycle 1 - XML interface
A. Receipt of transactions
1. Transaction reformatting
2. Single distributor
3. Multiple distributors
4. Single daily pull
5. Multiple pulls in a single day
6. With Order Entry data
7. Sales data alone
8. Overlapping requests
B. Error recovery
C. Backups
D. Access to XML process and menus
Test Cycle 2 - Translator interface (both reassigned sales and account information)
A. Sales transactions
1. Valid transactions
2. Error transactions
3. Error report
B. Account transactions
1. New accounts
2. Account updates
3. Duplicate accounts
4. Account errors
5. Cross-reference file maintenance
6. Error and status reports
7. Weekly control file updates
8. Control table processing
Test Cycle 3 - Manual intervention interface (including all review/update screens)
A. Access controls (security)
B. Account review screen(s)
1. Account errors
2. Valid account review
3. Customer master file updates/adds
4. Cross-reference file updates/adds
5. Account change process
C. Sales transaction review
1. Monthly screen(s)
2. Weekly screen(s)
D. Accounts generation
1. Holding file processing
2. Cross-reference file
3. Customer master file
4. Submission of update job
E. Reports
1. Monthly transmission report
2. Weekly transmission report
3. New accounts report
4. Territory report
5. Account match report
Test Cycle 4 - Reassigned sales posting process
A. Sales transaction postings
B. Weekly control file updates
C. Sales history file updates
D. Decision support system file updates
Test Cycle 5 - Archiving
A. Manual archive request
B. Automated monthly archiving process
C. File cleanup and compression
Test Cycle 6 - Independent reports (converted from prior system)
A. Distributor reports
B. Retail reports
C. High volume purchase reports
D. Variance reports
E. Decision support system reports
Test Cycle 7 - Backup and recovery
A. Recovery of interrupted XML transmission
B. Restart of translation process at each step in the process
1. Verification of control areas for restart
C. Restart of account update process after interrupt
D. Restart of posting process after interrupt
7. FEATURES NOT TO BE TESTED
Other than those areas identified in the master test plan, no additional areas have been identified for exclusion.
  • 165. 8. APPROACH System/Integration (combined) will commence as soon as the function parts of the application are available as identified in the individual test cycles in section, features to be tested. A requirement has been noted for an additional full time independent test person for system/integration testing. However, with the budget constraints and time line established; most testing will be done by the test manager with the development teams participation. Entry into system testing will be controlled by the test manager. Proof of unit testing must be provided by the programmer to the team leader before unit testing will be accepted and passed on to the test person. All unit test information will also be provided to the test person. Program versions submitted for systems/integration testing will be copied from the development libraries into the test team libraries and will be deleted from the development library unless the module is segmented and will be used in several overlapping test cycles. If the module is to be delivered in functional segments then additional regression testing of previous test cycles will be performed to ensure all functions still work properly. 9. ITEM PASS/FAIL CRITERIA Each test cycle will be evaluated on an individual basis. If a cycle has no critical defects and only one (1) major defect, providing it has a functional, reasonable, work-around, the cycle will be considered complete in terms of starting the next, independent cycle. The major defect will have to be corrected prior to going to acceptance testing. Minor defects will be addressed on an as needed basis depending on resource availability and project schedule. However, if there are more than fifteen minor defects in a single aspect of the application the systems test cycle will be considered incomplete. Acceptance testing can begin even if there are two major defects in the entire application. This is acceptable if there are reasonable workarounds. 
All major defects must be repaired prior to pilot testing and final acceptance testing. In some instances (low-impact majors and minors), the application can be corrected and bypass system/integration testing. This decision will be made by the Test Manager and Project Manager on an ongoing basis.
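The pass/fail gates above reduce to a small decision rule. The following sketch is purely illustrative; the function and parameter names are hypothetical and not part of the plan, though the thresholds mirror the stated criteria:

```python
def cycle_complete(criticals: int,
                   majors_with_workaround: int,
                   majors_without_workaround: int,
                   minors_per_aspect: dict[str, int]) -> bool:
    """A cycle is complete (the next cycle may start) when it has no
    critical defects, at most one major defect with a reasonable
    workaround, and no single aspect with more than 15 minor defects."""
    if criticals > 0:
        return False
    if majors_without_workaround > 0:
        return False
    if majors_with_workaround > 1:
        return False
    if any(count > 15 for count in minors_per_aspect.values()):
        return False
    return True


def acceptance_may_begin(majors_with_workaround: int,
                         majors_without_workaround: int) -> bool:
    """Acceptance testing may begin with up to two major defects across
    the whole application, provided each has a reasonable workaround."""
    return (majors_without_workaround == 0
            and majors_with_workaround <= 2)
```

Under these rules, a cycle with one worked-around major and a handful of minors clears the gate, while a sixteenth minor defect in any single aspect of the application blocks it.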
10. SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS

1. No distributors are ready for testing when system testing is scheduled to begin. Some testing can be done in areas such as general application flow and module integration, but actual data validation and verification cannot be done until data is received from the distributors. System testing will be delayed for a time period to be determined based on the delay in receiving data from the distributor(s).

11. TEST DELIVERABLES

A. High level system/integration test design
B. Defect reports
C. Weekly testing status reports
D. Sample reports from process execution

12. REMAINING TEST TASKS

TASK                                                  Assigned To                Status
Create Requirements based Inventories                 Client, PM, TM, Dev, Test
Create/Update Design inventories                      Dev, TM, PM, Test
Create System/Integration Test Design                 TM, PM, Test
Define System/Integration Test rules and Procedures   TM, PM, Test
Setup Controlled Test environment                     TM, Test
13. ENVIRONMENTAL NEEDS

The following elements are required to support system/integration testing:

A. Access to both the development and production AS/400 systems for development, data acquisition, and testing.
B. Creation and control of test team libraries for system/integration testing. A separate set of source control libraries, data files, and control tables will be required to ensure the quality of system testing.
C. A time segment on the data transmission interface to receive test XML transmissions from the distributors. This segment should also include time that overlaps with the Order Entry process.

14. STAFFING AND TRAINING NEEDS

Time will have to be allocated to the test team to allow for source file movement and for data acquisition and control. Time will also have to be allocated to prepare weekly defect and status reports for the team, and a meeting will have to be scheduled to report on system/integration test results. Time must be allocated for the development team members to attend system/integration status meetings when required.

15. RESPONSIBILITIES

                                                      TM   PM   Dev Team   Test Team   Client
Create Requirements based Inventories                 X    X    X          X           X
Create/Update Design inventories                      X    X    X          X
Create System/Integration Test Design                 X    X               X
Define System/Integration Test rules and Procedures   X    X               X
Setup Controlled Test environment                     X                    X

The entire project team will participate in the review of the system and detail designs, as well as the review of any change requests generated by the user or as a result of defects discovered during development and testing. The full team will participate in the initial development of the high level system/integration test design.
16. SCHEDULE

All scheduling is documented in the project plan time line, which is the responsibility of the project manager to maintain. The Test Manager will provide task estimates as required.

17. PLANNING RISKS AND CONTINGENCIES

1. Limited testing staff. The test team currently comprises only the developers and the Test Manager. Additional resources have been identified to assist in testing, but if those resources are not available, the development team will have to provide additional assistance to the Test Manager during system/integration testing. If development must assist in system/integration testing, there is the possibility that both development and testing will be delayed due to the overlap of resource requirements.

18. APPROVALS

Project Sponsor - Steve Putnam
Development Management - Ron Meade
EDI Project Manager - Peggy Bloodworth
RS Test Manager - Dale Perry
RS Development Team Manager - Dale Perry
Reassigned Sales - Cathy Capelli
Order Entry EDI Team Manager - Julie Cross