MC
Full Day Tutorial
10/13/2014 8:30:00 AM
"Getting Started with Risk-Based
Testing"
Presented by:
Dale Perry
Software Quality Engineering
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Dale Perry
Software Quality Engineering
Dale Perry has more than thirty-eight years of experience in information technology as a
programmer/analyst, database administrator, project manager, development manager, tester,
and test manager. Dale’s project experience includes large-system development and
conversions, distributed systems, and both client/server and web-based online applications. A
professional instructor for more than twenty-four years, he has presented at numerous industry
conferences on development and testing. With Software Quality Engineering for eighteen years,
Dale has specialized in training and consulting on testing, inspections and reviews, and other
testing and quality-related topics.
Speaker Presentations
1© SQE Training - STAR West 2014
Notice of Rights
Entire contents © 2010-2014 by SQE Training, unless otherwise noted on specific
items. All rights reserved. No material in this publication may be reproduced in any
form without the express written permission of SQE Training.
Home Office
SQE Training
340 Corporate Way, Suite 300
Orange Park FL 32073 U.S.A.
904.278.0524
904.278.4380 fax
www.sqetraining.com
Notice of Liability
The information provided in this book is distributed on an “as is” basis, without
warranty. Neither the author nor SQE Training shall have any liability to any person or
entity with respect to any loss or damage caused or alleged to have been caused
directly or indirectly by the content provided in this course.
Many of the development models in use today originated with the concepts
developed by Dr. Shewhart in 1938 at Bell Laboratories and extended to
manufacturing and quality control by Dr. Deming in the 1950s.
This concept was called the life cycle of life cycles.
7© SQE Training - STAR West 2014
Static testing: Testing of a component or system at specification or
implementation level without execution of that software, e.g., reviews or
static code analysis.
Dynamic testing: Testing that involves the execution of the software of a
component or system.
11© SQE Training - STAR West 2014
Testing every possible data value, every possible navigation path through
the code, and every possible combination of input values is almost always
an effectively infinite task that can never be completed.
Even if it were possible, it is not necessarily a good idea because many of
the test cases would be redundant, consume resources to create, delay time
to market, and not add anything of value.
For example, a single screen (GUI) or data stream, with thirteen variables,
each with three values
• To test every possible combination requires 3^13 = 1,594,323 tests
• Plus testing of interfaces, etc.
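The arithmetic above is easy to check directly. The sketch below uses the hypothetical screen from the example (thirteen variables, three values each) and is purely illustrative:

```python
from itertools import product

values_per_variable = 3
variables = 13

# Exhaustive testing means one test per combination of input values.
exhaustive = values_per_variable ** variables
print(exhaustive)  # 1594323

# Enumerating the combinations confirms the count and hints at the cost
# of actually designing, running, and checking that many tests.
count = sum(1 for _ in product(range(values_per_variable), repeat=variables))
assert count == exhaustive
```

Combinatorial techniques such as pairwise testing exist precisely to cut this number down while still covering every interaction between any two variables.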
12© SQE Training - STAR West 2014
The purpose of discussing software risk is to determine the primary focus of
testing. Generally speaking, most organizations find that their resources are
inadequate to test everything in a given release.
Outlining software risks helps the testers prioritize what to test and allows
them to concentrate on those areas that are likely to fail or those areas that
will critically impact the customer if they do fail.
Risks are used to decide where to start testing and where to test more.
Testing is used to reduce the risk of an adverse effect occurring or to reduce
the impact of an adverse effect.
22© SQE Training - STAR West 2014
Organizations that work on safety-critical software usually can use the
information from their safety and hazard analysis to identify areas of risk.
However, many companies make no attempt to verbalize software risks in
any fashion. If your company does not currently do any type of risk analysis,
try a brainstorming session among a small group of users, developers, and
testers to identify concerns.
23© SQE Training - STAR West 2014
Risk Factor
1 Ambiguous Improvement Targets
2 Artificial Maturity Levels
3 Canceled Projects
4 Corporate Politics
5 Cost Overruns
6 Creeping User Requirements
7 Crowded Office Conditions
8 Error Prone Modules
9 Excessive Paperwork
10 Excessive Schedule Pressure
11 Excessive Time to Market
12 False Productivity Claims
13 Friction Between Clients and Software
Contractors
14 Friction Between Software Management and
Senior Executives
15 High Maintenance Costs
16 Inaccurate Cost Estimating
17 Inaccurate Metrics
18 Inaccurate Quality Estimating
19 Inaccurate Sizing of Deliverables
20 Inadequate Assessments
21 Inadequate Compensation Plans
25© SQE Training - STAR West 2014
Risk Factor
22 Inadequate Configuration Control and
Project Repositories
23 Inadequate Curricula (Software Engineering)
24 Inadequate Curricula (Software
Management)
25 Inadequate Measurement
26 Inadequate Package Acquisition Methods
27 Inadequate Research and Reference
Facilities
28 Inadequate Software Policies and Standards
29 Inadequate Project Risk Analysis
30 Inadequate Project Value Analysis
31 Inadequate Tools and Methods (Project
Management)
32 Inadequate Tools and Methods (Quality
Assurance)
33 Inadequate Tools and Methods (Software
Engineering)
34 Inadequate Tools and Methods (Technical
Documentation)
35 Lack of Reusable Architecture
36 Lack of Reusable Code
37 Lack of Reusable Data
38 Lack of Reusable Designs (Blueprints)
39 Lack of Reusable Documentation
40 Lack of Reusable Estimates (Templates)
26© SQE Training - STAR West 2014
Risk Factor
41 Lack of Reusable Human Interfaces
42 Lack of Reusable Project Plans
43 Lack of Reusable Requirements
44 Lack of Reusable Test Plans, Test Cases,
and Test Data
45 Lack of Specialization
46 Long Service Life of Obsolete Systems
47 Low Productivity
48 Low Quality
49 Low Status of Software Personnel and
Management
50 Low User Satisfaction
51 Malpractice (Project Management)
52 Malpractice (Technical Staff)
53 Missed Schedules
54 Partial Life-Cycle Definitions
55 Poor Organization Structures
56 Poor Technology Investments
57 Severe Layoffs and Cutbacks of Staff
58 Short-Range Improvement Planning
59 Silver Bullet Syndrome
60 Slow Technology Transfer
27© SQE Training - STAR West 2014
Managers tend to focus on two key elements
• Controlling and managing costs
• Return on investment (ROI)
Marketing and sales people tend to be driven by a singular goal, “competitive
advantage”
• The concerns of marketing and sales are not necessarily related to
functionality; they need an edge
Engineers (developers, analysts, etc.) tend to be focused on the technology
itself
• Driven by the use of “new” technology and techniques
• Not interested in functionality except as it relates to the use of
technology
• Some technically valid decisions may cause functional
problems
29© SQE Training - STAR West 2014
Customers are seeking the answer to a very simple question, “can I do my
job?”
They view a product as a tool to do their job.
If they can't use your product to that end, it's a bad product.
30© SQE Training - STAR West 2014
Quality assurance is part of an organization's overall set of processes.
Testing primarily focuses on product risk. Testing is an evaluation activity:
it uses the processes defined within an organization to assess the quality of
a product as defined within the scope of a project. Testing also assesses the
processes used within the organization to ensure they are usable, effective,
and appropriate. A more accurate label for testing is Q/C (Quality Control).
Simply stated: We can think of testing (Q/C) as the application of process to
a product within the context of a project, focusing on risks.
32© SQE Training - STAR West 2014
The concept of risk-driven testing applies to all software development models
and processes. It is critical to developing quality software that meets
user/customer expectations, and it is the focus of both the STEP™ methodology
and many of the newer agile development processes.
If you analyze the newer "agile" development methods, this is one of the key
concepts. It's interesting that this is not really a new concept at all; it has been
around for a couple of decades.
34© SQE Training - STAR West 2014
There are many different software lifecycle approaches: waterfall, spiral,
incremental delivery, prototyping (evolutionary and throwaway), RAD,
extreme programming (XP), Scrum, DSDM, etc. The key is to know which
process the project is following and to integrate into that process as soon as
is reasonable. The later you get involved, the less chance you have
to prevent problems.
35© SQE Training - STAR West 2014
Planning is essential for all projects, regardless of type of development
model. There are certain essential decisions that need to be made early in
the project to ensure the project will be a success and that the testing can be
realistically accomplished within the constraints of the project.
38© SQE Training - STAR West 2014
In testing it is critical to identify those elements within the scope of the
project that require testing. Once identified those elements need to be
assessed as to what is absolutely critical to the operation of the software and
the user’s needs and expectations and what can possibly be deferred.
Not all elements within the scope of a project are of equal importance.
44© SQE Training - STAR West 2014
This model is based on the risk analysis model within the STEP™ testing
methodology developed by Software Quality Engineering. The general
process can be applied to any type of product and within any type of
development model.
45© SQE Training - STAR West 2014
All projects start with something. The only reason to develop or change
software is that one of the stakeholders in an organization has a
problem with the existing processes or systems.
A problem does not necessarily mean that the software does not function.
Although there may be problems with the software, there can be other
reasons for someone to want to change the system. The business model
may have shifted, or a government entity may have changed some rule or
process. It may be that an internal marketing team is looking to add some
advantages to the system to improve its competitive position.
Regardless of the reason, software is developed or changed because
someone has a problem. Understanding the nature of the problem is
essential both to develop a solution and to ensure it works (test the solution).
46© SQE Training - STAR West 2014
Reference materials include any information available that can assist in
determining the testing objects/conditions.
Some lifecycles do not have formal sources of documentation. No formal
requirements are written. However, there is usually some information about
what type of system is being created, the platform on which it will run, the
goals of the client, etc.
Any information you can gather will help you better understand the test
requirements for this project.
47© SQE Training - STAR West 2014
Different groups have different ideas about software. The more of these
disparate groups you can combine, the more accurate the picture you will
have of the risks, priorities, and goals for development, and the more
accurate the testing goals and objects/conditions for this project become.
In addition to the stakeholders noted earlier you may also want to consider
others who have knowledge of current and past issues associated with the
system under discussion. Groups such as the help desk and user support
tend to have in-depth knowledge of current as well as past issues.
The key is to include testers in the discussions to help focus the team on the
testing issues and to help determine the priority of the features to be
developed. Testers need to know the risks and issues in order to properly
analyze and design reasonable tests to mitigate/control those risks or to at
least know what level of risk the stakeholders are willing to accept.
48© SQE Training - STAR West 2014
As soon as the project is established and the general focus of the software is
known, either new development or maintenance, the process of risk
assessment begins.
49© SQE Training - STAR West 2014
When designing an inventory of elements/objects to test, we need to assess
three different aspects of the software/system:
Functional – Identification of "what" the user/customer expects the
system/application to provide. These are often referred to as "black box" or
specification-based elements, as they are typically derived from some form
of specification (document) or model.
Non-functional – Identification of aspects of the system that need to be
tested but are not necessarily part of the functions required by the
user/customer, e.g., performance, usability, response time. These are
characteristics or attributes of the system and reflect "how" the system
works, not "what" it does or how it's built.
Structural – Identification of the physical elements of the infrastructure,
architecture, design, or code that also require testing. There are often
elements within the system that are not part of the user/customer's
requirements, but they still need testing. These represent "how" the system
is constructed.
50© SQE Training - STAR West 2014
The inventory can be a separate artifact/document, or it can be incorporated
into an existing artifact/document such as a risk-prioritized requirements
specification, use case model, or story backlog. The inventory can represent
an entire system, part of a system, or a single requirement, use case, or story.
The key to an inventory is to identify what needs to be tested from as many
viewpoints as possible.
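As a sketch of what such an inventory might look like in a lightweight, machine-readable form. The entries and field names below are purely illustrative, not part of the STEP™ methodology:

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    element: str   # what needs to be tested
    aspect: str    # "functional", "non-functional", or "structural"
    source: str    # where it came from: requirement, story, design, etc.

# A tiny illustrative inventory for a hypothetical login feature.
inventory = [
    InventoryItem("Valid credentials grant access", "functional", "requirement"),
    InventoryItem("Login completes within 2 seconds", "non-functional", "requirement"),
    InventoryItem("Login calls the authentication interface", "structural", "design"),
]

# The same list can be sliced by viewpoint when planning test coverage.
functional = [item for item in inventory if item.aspect == "functional"]
print(len(functional))  # 1
```

Keeping the inventory as structured data, rather than prose, makes it easy to filter by aspect, trace each item back to its source, and later attach risk ratings.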
51© SQE Training - STAR West 2014
The inventory process is iterative. You begin at requirements and continue
at each stage of the development process. How far you take the process is
determined by the scope and risks associated with the software being
tested. In addition to being iterative, the process is also cumulative.
The information from requirements is used to improve the requirements
(static testing/reviews), to focus the design, and possibly to improve the
design.
At the design stage of a project the information from the requirements
inventory process is used to evaluate the design and to ensure problems are
corrected and additional items are gathered from the design. This process
can be continued as far as the risks to the project warrant.
54© SQE Training - STAR West 2014
These can be specific elements to be tested or they can be aspects
(characteristics, attributes) of elements identified in other work products such
as an expansion of a user story or use case from a testing and risk
perspective.
A single story, use case, or requirement may have multiple aspects that
require testing. For example, a function may have inputs and/or outputs that
are retrieved from and sent to a data store via an interface or transaction,
and those elements may have specific relationships (rules or constraints), all
of which need to be tested.
The identified element (requirement, story, or use case) may also participate
in a larger sequence of events (scenario) that must eventually be tested to
ensure the identified element works correctly at all levels within an
application or system (end-to-end).
Some aspects of the identified element may also be sensitive to the
environment and so will require additional testing in multiple environments.
55© SQE Training - STAR West 2014
While creating the inventory we may also identify aspects of the identified
elements that are not completely defined or may be vague, incomplete,
ambiguous, etc. These issues and questions are logged to ensure that they
are not lost.
Some of the identified issues may not be resolvable at the requirements stage
of a project and may be further defined later in the development process.
This is perfectly acceptable, provided the information is logged and tracked
somehow. The information can be part of an existing issues/defect tracking
system, part of a backlog, or a separate artifact/document. How it is
managed will depend to a large degree on the development process
employed. The key is not to lose track of the identified issues and questions.
For incomplete information, we come back to our list of identified
issues and questions at the appropriate time within the development process
and ensure that they have, in fact, been addressed to a reasonable degree
and are now testable.
56© SQE Training - STAR West 2014
Once the requirements have been assessed, the initial inventory created and
any issues and questions logged, the team may have to wait for the system
design (architecture and detail aspects) to be defined.
When the design is ready we can continue the inventory process by
assessing the design in the same manner as we did the requirements. The
existing inventory and the list of known issues and questions are the input into
the next step in the process.
57© SQE Training - STAR West 2014
During the assessment of the design, issues and questions that were
deferred until the design was defined can now be revisited. At this point we
should have additional information that will help us clarify open issues from
the requirements assessment.
At this stage three things are occurring:
1. Identification of additional elements to add to our inventory
2. Resolution of issues and questions from the requirements process
3. Identification of new issues and questions related to the design
How far we take the inventory process depends on the development model,
the organizational processes in place, and the level of potential risk. There is
no specific rule as to how far the inventory process is taken within a project.
58© SQE Training - STAR West 2014
Reviewing designs may require additional technical skills that not all testers
have. At least one person on the test team needs to have some technical
skills that relate to the system/product being tested in order to ask critical
questions.
Early tests, especially scenario-based tests, developed during the
requirements part of the process can be used as walk-through models for
the design.
Walking through a design with a set of functional and non-functional tests
can often reveal problems that would be missed if the design were examined
only from an engineering perspective. After all, development (software engineering) is the
process of solving a problem with a computer. People solving problems have
a different perspective than those trying to ensure the solution works. This
process may even reveal places where changes can be made to aid in
testing, such as exposing the information in an API or memory table.
59© SQE Training - STAR West 2014
There are some common aspects of applications that can be drawn from the
design specifications.
Updating the inventory involves clarifying earlier issues and questions raised
during the requirements inventory review as well as adding new
objects/elements to the inventory. Many of the elements added at this point
will be technical in nature and will later be tested primarily at the component
or integration stages of development. It is critical to know where these
technical elements exist so that they get proper testing.
Many failures discovered in system testing are symptoms of incomplete or
ineffective testing of technical elements within a product/system.
60© SQE Training - STAR West 2014
Analyzing an individual requirement, use case, story, interface etc. allows the
tester to more accurately understand the scope of testing related to a
particular element/object. This can also help in estimating the scope of effort
required to properly test a particular element/object.
61© SQE Training - STAR West 2014
A single requirement, use case, or story may have multiple aspects that need
to be tested. Just as we can build a general list of overall elements/objects to
test, we can also construct an inventory specific to a single element.
62© SQE Training - STAR West 2014
Just as a functional aspect can be analyzed for elements to test, so can a
structural element. Here the single element is the interface, but there are
many aspects of the interface that need to be tested to ensure it works correctly.
63© SQE Training - STAR West 2014
Impact is sometimes referred to as severity.
Once the inventory has been built, the next step is to determine the impact
and likelihood of something going wrong with each of the elements identified
in the inventory.
• Determine the impact (loss or damage) and likelihood (frequency or
probability) of the feature or attribute failing.
While some organizations like to use percentages, number of days/years
between occurrences, or even probability "half lives," using a set of simple
categories such as the ones listed in the slide above typically provides
sufficient accuracy.
If the likelihood or impact of something going wrong is none or zero, then
this item could be removed from the analysis. However, the removal should
be documented.
• This is not recommended. Just leave it in the inventory, it will
naturally drop to the bottom.
71© SQE Training - STAR West 2014
Many of the factors that affect likelihood are technical in nature. In
development, much of the code is about handling the errors and
exceptions that occur within the system. There is typically more code
covering errors and exceptions than there is code implementing the
general rules and processes requested by the stakeholders.
72© SQE Training - STAR West 2014
Impact is typically viewed as the impact on the user of the product/system.
Impact is normally assessed from a business perspective: what is the
damage to the business should an identified risk occur?
75© SQE Training - STAR West 2014
Most organizations use a combination of both methods.
• Qualitative analysis helps express risk in terms a person can understand.
• Quantitative values are then used in conjunction with the qualitative
categories to create a matrix in which each risk is given a weighted priority.
Each qualitative category needs to be defined in the organization's overall
process directives. Ideally, there should be several examples of each risk
assessment category to aid people in determining which category is
appropriate to the object they are assessing.
Risk is in the eye of the beholder as noted earlier.
• Any two people may look at the same event and see an entirely
different set of issues. What is critical to one may be trivial to the
other.
77© SQE Training - STAR West 2014
Likelihood = The probability or chance of an event occurring (e.g., the
likelihood that a user will make a mistake and, if a mistake is made, the
likelihood that it will go undetected by the software).
Impact = The damage that results from a failure (e.g., the system crashing or
corrupting data might be considered high impact).
Something could have a low likelihood (technical) risk and yet have a very
high impact (business) risk and vice-versa. It is the combination of the two
factors that helps to determine the overall risk level for a particular element.
78© SQE Training - STAR West 2014
Within an organization it may be necessary to set up different risk models for
different products/systems. This is not a problem as not all products/systems
should necessarily follow the exact same model.
The “quantitative” factors can be changed and adjusted as needed, however,
the “qualitative” elements should remain consistent across different risk
models. This will ensure that risks are assessed the same across the
organization and still allow for adjustments for differences in relative risk
levels.
By adjusting the “quantitative” factors we can allow for applications that have
more business than technical risk and vice-versa but still have a consistent
model that can be used across the organization.
79© SQE Training - STAR West 2014
Under likelihood and impact, there may be differences of opinion as to the
risk. It can be high business risk but low technical risk, etc. So you may have
to compromise on an acceptable level of risk.
The numbers are calculated using the values from our original matrix (page
70) and multiplying them:
H = High, which has a value of 3
M = Medium, which has a value of 2
L = Low, which has a value of 1
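Using those values, the combined priority can be computed and the inventory sorted. A minimal sketch; the item names and ratings below are hypothetical:

```python
# Numeric values for the qualitative ratings, as above.
VALUES = {"H": 3, "M": 2, "L": 1}

def risk_priority(likelihood, impact):
    """Combined risk = likelihood value x impact value (range 1..9)."""
    return VALUES[likelihood] * VALUES[impact]

# Hypothetical inventory items with agreed (likelihood, impact) ratings.
items = [
    ("Fund transfer", "M", "H"),   # 2 x 3 = 6
    ("Report layout", "L", "L"),   # 1 x 1 = 1
    ("Login", "H", "H"),           # 3 x 3 = 9
]

ranked = sorted(items, key=lambda i: risk_priority(i[1], i[2]), reverse=True)
print([name for name, _, _ in ranked])  # ['Login', 'Fund transfer', 'Report layout']
```

The sorted list is the risk-based test order: start at the top and work down as far as time and resources allow.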
80© SQE Training - STAR West 2014
Make adjustments and sort by the agreed priority. We now have a risk-based
assessment of what needs to be tested.
81© SQE Training - STAR West 2014
Of course, successful prioritization of risks does not help unless test cases
are defined for each risk—with the highest priority risks being assigned the
most comprehensive tests and priority scheduling. The object of each test
case is to mitigate at least one of these risks.
If time or resources are an issue, then the priority associated with each
feature or attribute can be used to determine which test cases should be
created and/or run. If testing must be cut, then the risk priority can be used
to determine how and what to drop.
• Cut low risk completely (indicated by the horizontal line).
If you plan to ship the low-risk features, you may want to consider an
across-the-board approach, where the high-risk features are fully tested and
the lower-risk features are tested a little less, based on the level of risk
associated with each feature. Medium-risk elements then get less testing
than high-risk but more than low-risk elements. At least that way, the
features do not ship untested (risk unknown). This will entail some additional
risk, as higher-risk features get less testing.
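One way to sketch the across-the-board idea is to split the available test-execution budget in proportion to each element's risk score, so everything gets some testing but high-risk items get the most. The scores and budget below are made up for illustration:

```python
def allocate_tests(items, budget):
    """Split a test-execution budget across items in proportion to risk score.

    items: list of (name, risk_score) pairs; budget: total tests we can run.
    Illustrative only; a real allocation would also weigh test cost and coverage.
    """
    total = sum(score for _, score in items)
    return {name: round(budget * score / total) for name, score in items}

plan = allocate_tests([("Login", 9), ("Transfer", 6), ("Report", 1)], budget=80)
print(plan)  # {'Login': 45, 'Transfer': 30, 'Report': 5}
```

Even the lowest-risk item gets a few tests here, so nothing ships with its risk completely unknown.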
82© SQE Training - STAR West 2014
The appendix has an expanded version of the FMEA (Failure Modes and
Effects Analysis) model as well as the ISO 9126 quality model.
83© SQE Training - STAR West 2014
Risk mitigation/Risk control: The process through which decisions are
reached and protective measures are implemented for reducing risks to, or
maintaining risks within, specified levels.
Risk type: A specific category of risk related to the type of testing that can
mitigate (control) that category. For example, the risk of user interactions
being misunderstood can be mitigated by usability testing.
86© SQE Training - STAR West 2014
Murphy’s Law: “If anything can go wrong, it will, and will do so at the worst
possible time”.
87© SQE Training - STAR West 2014
Although exploratory testing primarily relies on the skills and knowledge of
the tester and tends to be more dynamic than traditional technique-driven
design, it too can be more formalized. Using the inventory process as part
of an exploratory test process can add structure to the definition of the areas
to be investigated rather than relying only on the skills of the individual
tester.
106© SQE Training - STAR West 2014
The goal is to avoid gaps in the testing as well as to avoid overlapping
testing too much.
Depending on how you define your inventories, based on generic groupings
or application-specific groupings, the idea is to decide who will test which
objects at what stage/level.
Some objects cannot be tested until later stages of the process (e.g.,
scenarios and usage-based objects). Conversely, some elements, such as
field edits, valid ranges, and error messages, are best tested in the earlier
stages. These code-logic elements, created by the programmers, are best
tested at that stage of the process. Finding such errors late in the process
can be very costly.
108© SQE Training - STAR West 2014
These three aspects of test execution have to be evaluated together.
Looking at any single aspect provides no real useful information.
113© SQE Training - STAR West 2014
Requirements Coverage:
• Which requirements have been tested?
• What are the key risk indicators?
• Degree of coverage for each requirement
• Relative to defined tests
• Failure rates
• Defect information (density, patterns, trends, root cause, etc.)
Design Coverage
• Have we addressed key design issues?
• Do we require coverage measures?
• Integration complexity
• Interface (API) coverage
• What are the key risk indicators?
• Failure rates within the design
• Defect information (density, patterns, trends, root cause, etc.)
115© SQE Training - STAR West 2014
Risk Coverage
• Are we covering the high risk areas first?
• Are the executed tests balanced across the risk areas or narrowly
focused on specific risk areas?
• What risk areas are still unaddressed?
• What is the level of risk in untested elements?
Code Coverage
• What percentage of the code has been tested?
• Which coverage measures are important?
• Statement coverage
• Decision/branch coverage
• Condition coverage
• Path coverage
• Key measures such as cyclomatic complexity
• Other code coverage measures
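Statement coverage, the simplest of these measures, is just the fraction of executable lines exercised by at least one test. A minimal sketch, assuming a coverage tool has already reported which lines ran (the line numbers below are made up):

```python
def statement_coverage(executable_lines, executed_lines):
    """Fraction of executable lines hit by at least one test (0.0 to 1.0)."""
    covered = executable_lines & executed_lines
    return len(covered) / len(executable_lines)

# Hypothetical line sets, as a coverage tool might report them.
executable = {1, 2, 3, 5, 8, 9, 10}
executed = {1, 2, 3, 5, 8}

print(round(statement_coverage(executable, executed), 2))  # 0.71
```

Branch, condition, and path coverage refine this idea by counting decision outcomes rather than lines, which is why they are stronger (and harder to achieve) measures.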
116© SQE Training - STAR West 2014
Web site references
General testing sites for research etc.
www.Stickyminds.com
www.techwell.com
www.sqaforums.com
www.softwareqatest.com
www.testingeducation.org
Sites for open source elements (tools etc.)
www.sourceforge.net
www.opensourcetesting.org
www.pairwise.org
www.pairwise.org/tools.asp
Sites related to performance, performance services and statistics
Performance-testing.org
www.sandvine.com
TPC.org
www.internetworldstats.com
www.keynote.com
www.webtrends.com
www.shunra.com
www.alexa.com
www.mentora.com
www.plunkettresearch.com
Sites related to usability etc.
www.useit.com
www.nngroup.com
www.webpagesthatsuck.com (yes this is a serious site)
sumi.ucc.ie
www.wammi.com
General security related sites
www.sans.org
www.nist.gov
csrc.nist.gov
Related terms:
FMECA (Failure Mode, Effects, and Criticality Analysis)
SFMEA (Software Failure Mode and Effects Analysis)
SFMECA (Software Failure Mode, Effects, and Criticality Analysis)
Every product has modes of failure; the effects represent the impact of those
failures.
— adapted from the Quality Training Portal
FMEA can be applied at any level and should be done iteratively.
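A common way to prioritize the failure modes an FMEA identifies is the Risk Priority Number, RPN = severity × occurrence × detection, with each factor typically rated 1–10. The failure modes and ratings in this sketch are hypothetical illustrations:

```python
# Sketch: ranking failure modes by Risk Priority Number (RPN),
# the classic FMEA metric. Modes and 1-10 ratings are hypothetical.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("lost transaction on timeout", 9, 3, 7),
    ("mislabeled report column",    3, 6, 2),
    ("stale cache after update",    6, 5, 5),
]

def rpn(severity, occurrence, detection):
    """RPN = severity * occurrence * detection."""
    return severity * occurrence * detection

ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for mode, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):>3}  {mode}")
```

Applying this iteratively, as the slide suggests, means re-rating occurrence and detection after each mitigation and re-ranking the list.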
ISO 9126 extract
6.1 Functionality
The capability of the software product to provide functions which meet stated and implied
needs when the software is used under specified conditions. This characteristic is concerned
with what the software does to fulfill needs, whereas the other characteristics are mainly
concerned with when and how it fulfils needs. For a system which is operated by a user, the
combination of functionality, reliability, usability and efficiency can be measured externally by
quality in use.
Subcategories
6.1.1 Suitability - The capability of the software product to provide an appropriate set of
functions for specified tasks and user objectives.
6.1.2 Accuracy - The capability of the software product to provide the right, or agreed
results, or effects with the needed degree of precision.
6.1.3 Interoperability - The capability of the software product to interact with one or
more specified systems.
6.1.4 Security - The capability of the software product to protect information and data
so that unauthorized persons or systems cannot read or modify them and authorized
persons or systems are not denied access to them.
6.1.5 Functionality compliance - The capability of the software product to adhere to
standards, conventions or regulations in laws and similar prescriptions relating to
functionality.
6.2 Reliability
The capability of the software product to maintain a specified level of performance when used
under specified conditions. Wear or ageing does not occur in software. Limitations in reliability
are due to faults in requirements, design, and implementation. Failures due to these faults
depend on the way the software product is used and the program options selected rather than
on elapsed time. The definition of reliability in ISO/IEC 2382-14:1997 is "The ability of a
functional unit to perform a required function...". In this document, functionality is only one of the
characteristics of software quality. Therefore, the definition of reliability has been broadened to
"maintain a specified level of performance..." instead of "...perform a required function".
Subcategories
6.2.1 Maturity - The capability of the software product to avoid failure as a result of
faults in the software.
6.2.2 Fault tolerance - The capability of the software product to maintain a specified
level of performance in cases of software faults or of infringement of its specified
interface.
NOTE The specified level of performance may include fail safe capability.
6.2.3 Recoverability - The capability of the software product to re-establish a specified
level of performance and recover the data directly affected in the case of a failure.
NOTE 1 Following a failure, a software product will sometimes be down for a certain
period of time, the length of which is assessed by its recoverability.
NOTE 2 Availability is the capability of the software product to be in a state to perform
a required function at a given point in time, under stated conditions of use. Externally,
availability can be assessed by the proportion of total time during which the software
product is in an up state. Availability is therefore a combination of maturity (which
governs the frequency of failure), fault tolerance and recoverability (which governs the
length of down time following each failure). For this reason it has not been included as a
separate subcharacteristic.
6.2.4 Reliability compliance - The capability of the software product to adhere to
standards, conventions or regulations relating to reliability.
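The availability described in the note under 6.2.3 (the proportion of total time during which the product is in an up state) can be assessed externally from outage records. A minimal sketch, with a hypothetical observation window and outage intervals:

```python
# Sketch: availability as the proportion of total time the product is
# in an up state (per the NOTE under 6.2.3 Recoverability). The window
# length and outage intervals (in hours) are hypothetical.

window_hours = 720.0  # a 30-day observation period

# (outage start, outage end) in hours from the start of the window
outages = [(100.0, 102.0), (400.0, 401.5), (650.0, 650.5)]

downtime = sum(end - start for start, end in outages)
availability = (window_hours - downtime) / window_hours
print(f"Availability: {availability:.4%}")  # 99.4444%
```

As the note observes, improving this figure means attacking all three ingredients: maturity (fewer outages), fault tolerance (fewer failures become outages), and recoverability (shorter outages).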
6.3 Usability
The capability of the software product to be understood, learned, used and attractive to the
user, when used under specified conditions. Some aspects of functionality, reliability and
efficiency will also affect usability, but for the purposes of ISO/IEC 9126 they are not classified
as usability. Users may include operators, end users and indirect users who are under the
influence of or dependent on the use of the software. Usability should address all of the
different user environments that the software may affect, which may include preparation for
usage and evaluation of results.
Subcategories
6.3.1 Understandability - The capability of the software product to enable the user to
understand whether the software is suitable, and how it can be used for particular tasks
and conditions of use. This will depend on the documentation and initial impressions
given by the software.
6.3.2 Learnability - The capability of the software product to enable the user to learn its
application. The internal attributes correspond to suitability for learning as defined in
ISO 9241-10.
6.3.3 Operability - The capability of the software product to enable the user to operate and
control it.
NOTE 1 Aspects of suitability, changeability, adaptability and installability may affect
operability.
NOTE 2 Operability corresponds to controllability, error tolerance and conformity with
user expectations as defined in ISO 9241-10.
NOTE 3 For a system which is operated by a user, the combination of functionality,
reliability, usability and efficiency can be measured externally by quality in use.
6.3.4 Attractiveness - The capability of the software product to be attractive to the user. This
refers to attributes of the software intended to make the software more attractive to
the user, such as the use of color and the nature of the graphical design.
6.3.5 Usability compliance - The capability of the software product to adhere to standards,
conventions, style guides or regulations relating to usability.
6.4 Efficiency - The capability of the software product to provide appropriate performance,
relative to the amount of resources used, under stated conditions. Resources may include other
software products, the software and hardware configuration of the system, and materials (e.g.
print paper, diskettes). For a system which is operated by a user, the combination of
functionality, reliability, usability and efficiency can be measured externally by quality in use.
Subcategories
6.4.1 Time behavior - The capability of the software product to provide appropriate
response and processing times and throughput rates when performing its function,
under stated conditions.
6.4.2 Resource utilization - The capability of the software product to use appropriate
amounts and types of resources when the software performs its function under stated
conditions. Human resources are included as part of productivity.
6.4.3 Efficiency compliance - The capability of the software product to adhere to
standards or conventions relating to efficiency.
6.5 Maintainability
The capability of the software product to be modified. Modifications may include corrections,
improvements or adaptation of the software to changes in environment, and in requirements
and functional specifications.
Subcategories
6.5.1 Analysability - The capability of the software product to be diagnosed for
deficiencies or causes of failures in the software, or for the parts to be modified to be
identified.
6.5.2 Changeability - The capability of the software product to enable a specified
modification to be implemented. Implementation includes coding, designing and
documenting changes. If the software is to be modified by the end user, changeability
may affect operability.
6.5.3 Stability - The capability of the software product to avoid unexpected effects from
modifications of the software.
6.5.4 Testability - The capability of the software product to enable modified software to
be validated.
6.5.5 Maintainability compliance - The capability of the software product to adhere to
standards or conventions relating to maintainability.
6.6 Portability - The capability of the software product to be transferred from one environment
to another. The environment may include organizational, hardware or software environment.
Subcategories
6.6.1 Adaptability - The capability of the software product to be adapted for different
specified environments without applying actions or means other than those provided
for this purpose for the software considered. Adaptability includes the scalability of
internal capacity (e.g. screen fields, tables, transaction volumes, report formats, etc.). If
the software is to be adapted by the end user, adaptability corresponds to suitability for
individualization as defined in ISO 9241-10, and may affect operability.
6.6.2 Installability - The capability of the software product to be installed in a specified
environment.
NOTE If the software is to be installed by an end user, installability can affect the
resulting suitability and operability.
6.6.3 Co-existence - The capability of the software product to co-exist with other
independent software in a common environment sharing common resources.
6.6.4 Replaceability - The capability of the software product to be used in place of
another specified software product for the same purpose in the same environment. For
example, the replaceability of a new version of a software product is important to the
user when upgrading. Replaceability is used in place of compatibility in order to avoid
possible ambiguity with interoperability. Replaceability may include attributes of both
installability and adaptability. The concept has been introduced as a sub characteristic of
its own because of its importance.
6.6.5 Portability compliance - The capability of the software product to adhere to
standards or conventions relating to portability.
TechWell
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
TechWell
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
TechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
TechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
TechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
TechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
TechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
TechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
TechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
TechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
TechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
TechWell
 

More from TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 

Recently uploaded

Essentials of Automations: Exploring Attributes & Automation Parameters
Essentials of Automations: Exploring Attributes & Automation ParametersEssentials of Automations: Exploring Attributes & Automation Parameters
Essentials of Automations: Exploring Attributes & Automation Parameters
Safe Software
 
Principle of conventional tomography-Bibash Shahi ppt..pptx
Principle of conventional tomography-Bibash Shahi ppt..pptxPrinciple of conventional tomography-Bibash Shahi ppt..pptx
Principle of conventional tomography-Bibash Shahi ppt..pptx
BibashShahi
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
Zilliz
 
What is an RPA CoE? Session 1 – CoE Vision
What is an RPA CoE?  Session 1 – CoE VisionWhat is an RPA CoE?  Session 1 – CoE Vision
What is an RPA CoE? Session 1 – CoE Vision
DianaGray10
 
Northern Engraving | Nameplate Manufacturing Process - 2024
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving | Nameplate Manufacturing Process - 2024
Northern Engraving | Nameplate Manufacturing Process - 2024
Northern Engraving
 
GNSS spoofing via SDR (Criptored Talks 2024)
GNSS spoofing via SDR (Criptored Talks 2024)GNSS spoofing via SDR (Criptored Talks 2024)
GNSS spoofing via SDR (Criptored Talks 2024)
Javier Junquera
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Safe Software
 
Mutation Testing for Task-Oriented Chatbots
Mutation Testing for Task-Oriented ChatbotsMutation Testing for Task-Oriented Chatbots
Mutation Testing for Task-Oriented Chatbots
Pablo Gómez Abajo
 
AppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSFAppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSF
Ajin Abraham
 
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptxNordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptx
MichaelKnudsen27
 
Dandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity serverDandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity server
Antonios Katsarakis
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
panagenda
 
Skybuffer SAM4U tool for SAP license adoption
Skybuffer SAM4U tool for SAP license adoptionSkybuffer SAM4U tool for SAP license adoption
Skybuffer SAM4U tool for SAP license adoption
Tatiana Kojar
 
Apps Break Data
Apps Break DataApps Break Data
Apps Break Data
Ivo Velitchkov
 
The Microsoft 365 Migration Tutorial For Beginner.pptx
The Microsoft 365 Migration Tutorial For Beginner.pptxThe Microsoft 365 Migration Tutorial For Beginner.pptx
The Microsoft 365 Migration Tutorial For Beginner.pptx
operationspcvita
 
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge GraphGraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
Neo4j
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
Brandon Minnick, MBA
 
Y-Combinator seed pitch deck template PP
Y-Combinator seed pitch deck template PPY-Combinator seed pitch deck template PP
Y-Combinator seed pitch deck template PP
c5vrf27qcz
 
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
Edge AI and Vision Alliance
 

Recently uploaded (20)

Essentials of Automations: Exploring Attributes & Automation Parameters
Essentials of Automations: Exploring Attributes & Automation ParametersEssentials of Automations: Exploring Attributes & Automation Parameters
Essentials of Automations: Exploring Attributes & Automation Parameters
 
Principle of conventional tomography-Bibash Shahi ppt..pptx
Principle of conventional tomography-Bibash Shahi ppt..pptxPrinciple of conventional tomography-Bibash Shahi ppt..pptx
Principle of conventional tomography-Bibash Shahi ppt..pptx
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
 
What is an RPA CoE? Session 1 – CoE Vision
What is an RPA CoE?  Session 1 – CoE VisionWhat is an RPA CoE?  Session 1 – CoE Vision
What is an RPA CoE? Session 1 – CoE Vision
 
Northern Engraving | Nameplate Manufacturing Process - 2024
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving | Nameplate Manufacturing Process - 2024
Northern Engraving | Nameplate Manufacturing Process - 2024
 
GNSS spoofing via SDR (Criptored Talks 2024)
GNSS spoofing via SDR (Criptored Talks 2024)GNSS spoofing via SDR (Criptored Talks 2024)
GNSS spoofing via SDR (Criptored Talks 2024)
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
 
Mutation Testing for Task-Oriented Chatbots
Mutation Testing for Task-Oriented ChatbotsMutation Testing for Task-Oriented Chatbots
Mutation Testing for Task-Oriented Chatbots
 
AppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSFAppSec PNW: Android and iOS Application Security with MobSF
AppSec PNW: Android and iOS Application Security with MobSF
 
Nordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptxNordic Marketo Engage User Group_June 13_ 2024.pptx
Nordic Marketo Engage User Group_June 13_ 2024.pptx
 
Artificial Intelligence and Electronic Warfare
Artificial Intelligence and Electronic WarfareArtificial Intelligence and Electronic Warfare
Artificial Intelligence and Electronic Warfare
 
Dandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity serverDandelion Hashtable: beyond billion requests per second on a commodity server
Dandelion Hashtable: beyond billion requests per second on a commodity server
 
HCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAUHCL Notes and Domino License Cost Reduction in the World of DLAU
HCL Notes and Domino License Cost Reduction in the World of DLAU
 
Skybuffer SAM4U tool for SAP license adoption
Skybuffer SAM4U tool for SAP license adoptionSkybuffer SAM4U tool for SAP license adoption
Skybuffer SAM4U tool for SAP license adoption
 
Apps Break Data
Apps Break DataApps Break Data
Apps Break Data
 
The Microsoft 365 Migration Tutorial For Beginner.pptx
The Microsoft 365 Migration Tutorial For Beginner.pptxThe Microsoft 365 Migration Tutorial For Beginner.pptx
The Microsoft 365 Migration Tutorial For Beginner.pptx
 
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge GraphGraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
 
Y-Combinator seed pitch deck template PP
Y-Combinator seed pitch deck template PPY-Combinator seed pitch deck template PP
Y-Combinator seed pitch deck template PP
 
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
“How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-eff...
 

Getting Started with Risk-Based Testing

  • 9. Many of the development models in use today originated with the concepts developed by Dr. Shewhart in 1938 at Bell Laboratories and extended to manufacturing and quality control by Dr. Deming in the 1950s. This concept was called the life cycle of life cycles.
  • 13. Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g., reviews or static code analysis. Dynamic testing: Testing that involves the execution of the software of a component or system.
  • 14. Testing every possible data value, every possible navigation path through the code, and every possible combination of input values is almost always an infinite task that can never be completed. Even if it were possible, it would not necessarily be a good idea, because many of the test cases would be redundant, consume resources to create, delay time to market, and add nothing of value. For example, consider a single screen (GUI) or data stream with thirteen variables, each with three values • Testing every possible combination means 3^13 = 1,594,323 tests • Plus testing of interfaces, etc.
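The combinatorial arithmetic above is easy to verify; a minimal Python sketch, using the slide's hypothetical screen of thirteen variables with three values each:

```python
# Count the exhaustive test combinations for a screen with 13 variables,
# each taking 3 possible values (the slide's hypothetical example).
from itertools import product

variables = 13
values = 3

exhaustive = values ** variables  # 3^13
print(exhaustive)  # 1594323

# Enumerating the combinations directly gives the same count
assert sum(1 for _ in product(range(values), repeat=variables)) == exhaustive
```

Even this small screen yields over 1.5 million exhaustive tests, which is why risk-based selection matters.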
  • 24. The purpose of discussing software risk is to determine the primary focus of testing. Generally speaking, most organizations find that their resources are inadequate to test everything in a given release. Outlining software risks helps the testers prioritize what to test and allows them to concentrate on those areas that are likely to fail or that will critically impact the customer if they do fail. Risks are used to decide where to start testing and where to test more. Testing is used to reduce the likelihood of an adverse effect occurring or to reduce its impact.
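One common way to operationalize "where to test more" is a simple risk-exposure score (likelihood of failure times impact of failure). The sketch below is illustrative; the areas and scores are placeholders, not examples from the tutorial:

```python
# Risk exposure = likelihood of failure x impact of failure (1-5 scales).
# The areas and their scores below are illustrative placeholders.
areas = {
    "payment processing": (4, 5),
    "user login": (3, 5),
    "report formatting": (2, 2),
    "help screens": (1, 1),
}

exposure = {name: likelihood * impact
            for name, (likelihood, impact) in areas.items()}

# Test the highest-exposure areas first, and test them more deeply
priority = sorted(exposure, key=exposure.get, reverse=True)
print(priority[0])  # payment processing
```

The exact scoring scheme matters less than agreeing on one with the stakeholders so the ranking reflects their view of risk.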
  • 25. Organizations that work on safety-critical software can usually use the information from their safety and hazard analysis to identify areas of risk. However, many companies make no attempt to articulate software risks in any fashion. If your company does not currently do any type of risk analysis, try a brainstorming session among a small group of users, developers, and testers to identify concerns.
  • 27. Risk factors (1–21):
    1. Ambiguous Improvement Targets
    2. Artificial Maturity Levels
    3. Canceled Projects
    4. Corporate Politics
    5. Cost Overruns
    6. Creeping User Requirements
    7. Crowded Office Conditions
    8. Error-Prone Modules
    9. Excessive Paperwork
    10. Excessive Schedule Pressure
    11. Excessive Time to Market
    12. False Productivity Claims
    13. Friction Between Clients and Software Contractors
    14. Friction Between Software Management and Senior Executives
    15. High Maintenance Costs
    16. Inaccurate Cost Estimating
    17. Inaccurate Metrics
    18. Inaccurate Quality Estimating
    19. Inaccurate Sizing of Deliverables
    20. Inadequate Assessments
    21. Inadequate Compensation Plans
  • 28. Risk factors (22–40):
    22. Inadequate Configuration Control and Project Repositories
    23. Inadequate Curricula (Software Engineering)
    24. Inadequate Curricula (Software Management)
    25. Inadequate Measurement
    26. Inadequate Package Acquisition Methods
    27. Inadequate Research and Reference Facilities
    28. Inadequate Software Policies and Standards
    29. Inadequate Project Risk Analysis
    30. Inadequate Project Value Analysis
    31. Inadequate Tools and Methods (Project Management)
    32. Inadequate Tools and Methods (Quality Assurance)
    33. Inadequate Tools and Methods (Software Engineering)
    34. Inadequate Tools and Methods (Technical Documentation)
    35. Lack of Reusable Architecture
    36. Lack of Reusable Code
    37. Lack of Reusable Data
    38. Lack of Reusable Designs (Blueprints)
    39. Lack of Reusable Documentation
    40. Lack of Reusable Estimates (Templates)
  • 29. Risk factors (41–60):
    41. Lack of Reusable Human Interfaces
    42. Lack of Reusable Project Plans
    43. Lack of Reusable Requirements
    44. Lack of Reusable Test Plans, Test Cases, and Test Data
    45. Lack of Specialization
    46. Long Service Life of Obsolete Systems
    47. Low Productivity
    48. Low Quality
    49. Low Status of Software Personnel and Management
    50. Low User Satisfaction
    51. Malpractice (Project Management)
    52. Malpractice (Technical Staff)
    53. Missed Schedules
    54. Partial Life-Cycle Definitions
    55. Poor Organization Structures
    56. Poor Technology Investments
    57. Severe Layoffs and Cutbacks of Staff
    58. Short-Range Improvement Planning
    59. Silver Bullet Syndrome
    60. Slow Technology Transfer
  • 31. Managers tend to focus on two key elements • Controlling and managing costs • Return on investment (ROI). Marketing and sales people tend to be driven by a singular goal, "competitive advantage" • Their concerns are not necessarily related to functionality; they need an edge. Engineers (developers, analysts, etc.) tend to be focused on the technology itself • Driven by the use of "new" technology and techniques • Not interested in functionality except as it relates to the use of technology • Some technically valid decisions may cause functional problems
  • 32. Customers are seeking the answer to a very simple question: "Can I do my job?" They view a product as a tool to do their job. If they can't use your product to that end, it's a bad product.
  • 34. Quality assurance is part of an organization's overall set of processes. Testing primarily focuses on product risk. Testing is an evaluation activity; it uses the processes defined within an organization to assess the quality of a product as defined within the scope of a project. Testing also assesses the processes used within the organization to ensure they are usable, effective, and appropriate. A more accurate term for testing is Q/C (quality control). Simply stated: we can think of testing (Q/C) as the application of process to a product within the context of a project, focusing on risks.
  • 36. The concept of risk-driven testing applies to all software development models and processes. It is critical to developing quality software that meets user/customer expectations, and it is the focus of both the STEP™ methodology and many of the newer agile development processes. If you analyze the newer "agile" development methods, this is one of the key concepts. Interestingly, this is not really a new concept at all; it has been around for a couple of decades.
  • 37. There are many different software lifecycle approaches: waterfall, spiral, incremental delivery, prototyping (evolutionary and throwaway), RAD, Extreme Programming (XP), Scrum, DSDM, etc. The key is to know which process the project is following and to integrate into that process as soon as is reasonable. The later you get involved, the less chance you have to prevent problems.
  • 40. Planning is essential for all projects, regardless of the type of development model. Certain essential decisions need to be made early in the project to ensure that it will be a success and that the testing can realistically be accomplished within the constraints of the project.
  • 46. In testing it is critical to identify those elements within the scope of the project that require testing. Once identified, those elements need to be assessed to determine what is absolutely critical to the operation of the software and to the user's needs and expectations, and what can possibly be deferred. Not all elements within the scope of a project are of equal importance.
  • 47. This model is based on the risk analysis model within the STEP™ testing methodology developed by Software Quality Engineering. The general process can be applied to any type of product and within any type of development model.
  • 48. All projects start with something. The only reason to develop or change software is that one of the stakeholders in an organization has a problem with the existing processes or systems. A problem does not necessarily mean that the software does not function; although there may be problems with the software, there can be other reasons for someone to want to change the system. The business model may have shifted, or a government entity may have changed some rule or process. It may be that an internal marketing team is looking to add advantages to the system to improve its competitive position. Regardless of the reason, software is developed or changed because someone has a problem; understanding the nature of that problem is essential both to develop a solution and to ensure it works (test the solution).
  • 49. Reference materials include any information available that can assist in determining the testing objects/conditions. Some lifecycles do not have formal sources of documentation. No formal requirements are written. However, there is usually some information about what type of system is being created, the platform on which it will run, the goals of the client, etc. Any information you can gather will help you better understand the test requirements for this project.
  • 50. Different groups have different ideas about software. The more of these disparate groups you can combine, the more accurate the picture you will have of the risks, priorities, and goals for development, and the more accurate the testing goals and objects/conditions for this project become. In addition to the stakeholders noted earlier, you may also want to consider others who have knowledge of current and past issues associated with the system under discussion. Groups such as the help desk and user support tend to have in-depth knowledge of current as well as past issues. The key is to include testers in the discussions to help focus the team on the testing issues and to help determine the priority of the features to be developed. Testers need to know the risks and issues in order to properly analyze and design reasonable tests to mitigate/control those risks, or at least to know what level of risk the stakeholders are willing to accept.
  • 51. As soon as the project is established and the general focus of the software is known, either new development or maintenance, the process of risk assessment begins.
  • 52. When designing an inventory of elements/objects to test, we need to assess three different aspects of the software/system. Functional – identification of "what" the user/customer expects the system/application to provide. These are often referred to as "black box" or specification-based elements, as they are typically derived from some form of specification (document) or model. Non-functional – identification of aspects of the system that need to be tested but are not necessarily part of the functions required by the user/customer, e.g., performance, usability, response time. These are characteristics or attributes of the system and reflect "how" the system works, not "what" it does or how it is built. Structural – identification of the physical elements of the infrastructure, architecture, design, or code that also require testing. There are often elements within the system that are not part of the user/customer's requirements, but they still need testing. These represent "how" the system is constructed.
  • 53. The inventory can be a separate artifact/document, or it can be incorporated into an existing artifact/document such as a risk-prioritized requirements specification, use case model, or story backlog. The inventory can represent an entire system, part of a system, or a single requirement, use case, or story. The key to an inventory is to identify what needs to be tested from as many viewpoints as possible.
  • 56. The inventory process is iterative. You begin at requirements and repeat the process at each stage of development; how far you take it is determined by the scope and risks associated with the software being tested. The process is also cumulative. The information from requirements is used to improve the requirements (static testing/reviews), to focus the design, and possibly to improve the design. At the design stage, the information from the requirements inventory is used to evaluate the design, to ensure problems are corrected, and to gather additional items from the design. This process can be continued as far as the risks to the project warrant.
  • 57. These can be specific elements to be tested, or aspects (characteristics, attributes) of elements identified in other work products, such as an expansion of a user story or use case from a testing and risk perspective. A single story, use case, or requirement may have multiple aspects that require testing. For example, a function may have inputs and/or outputs that are retrieved from and sent to a data store via an interface or transaction, and those elements may have specific relationships (rules or constraints), all of which need to be tested. The identified element (requirement, story, or use case) may also participate in a larger sequence of events (scenario) that must eventually be tested to ensure the element works correctly at all levels within an application or system (end-to-end). Some aspects of the identified element may also be sensitive to the environment and so will require additional testing in multiple environments.
  • 58. While creating the inventory we may also identify aspects of the identified elements that are not completely defined or may be vague, incomplete, ambiguous, etc. These issues and questions are logged to ensure that they are not lost. Some of the identified issues may not be resolvable at the requirements stage of a project and may be further defined later in the development process. This is perfectly acceptable, provided the information is logged and tracked. The information can be part of an existing issues/defect tracking system, part of a backlog, or a separate artifact/document; how it is managed will depend to a large degree on the development process employed. The key is not to lose track of the identified issues and questions. For incomplete information, we return to the list of identified issues and questions at the appropriate time in the development process and ensure that they have, in fact, been addressed to a reasonable degree and are now testable.
  • 59. Once the requirements have been assessed, the initial inventory created, and any issues and questions logged, the team may have to wait for the system design (architecture and detail aspects) to be defined. When the design is ready, we continue the inventory process by assessing the design in the same manner as the requirements. The existing inventory and the list of known issues and questions are the inputs to the next step in the process.
  • 60. During the assessment of the design, issues and questions that were deferred until the design was defined can now be revisited. At this point we should have additional information that helps clarify open issues from the requirements assessment. Three things are occurring at this stage:
1. Identification of additional elements to add to our inventory
2. Resolution of issues and questions from the requirements process
3. Identification of new issues and questions related to the design
How far we take the inventory process depends on the development model, the organizational processes in place, and the level of potential risk. There is no specific rule as to how far the inventory process is taken within a project.
  • 61. Reviewing designs may require additional technical skills that not all testers have. At least one person on the test team needs technical skills relevant to the system/product being tested in order to ask critical questions. Early tests, especially scenario-based tests developed during the requirements part of the process, can be used as walkthrough models for the design. Walking through a design with a set of functional and non-functional tests can often reveal problems that would be missed if the design were viewed only from an engineering perspective. After all, development (software engineering) is the process of solving a problem with a computer, and people solving problems have a different perspective from those trying to ensure the solution works. This process may even reveal places where changes can be made to aid testing, such as exposing information in an API or memory table.
  • 62. There are some common aspects of applications that can be drawn from the design specifications. Updating the inventory at this point is about clarifying earlier issues and questions raised during the requirements inventory review, as well as adding new objects/elements. Many of the elements added here are technical in nature and will later be tested primarily at the component or integration stages of development. It is critical to know where these technical elements exist so that they get proper testing. Many failures discovered in system testing are symptoms of incomplete or ineffective testing of technical elements within a product/system.
  • 63. Analyzing an individual requirement, use case, story, interface, etc. allows the tester to more accurately understand the scope of testing related to a particular element/object. This can also help in estimating the effort required to properly test that element/object.
  • 64. A single requirement, use case, or story may have multiple aspects that need to be tested. Just as we can build a general list of overall elements/objects to test, we can also construct an inventory specific to a single element.
  • 65. Just as a functional aspect can be analyzed for elements to test, so can a structural element. Here the single element is the interface, but there are many aspects of the interface that need to be tested to ensure it works correctly.
  • 73. Impact is sometimes referred to as severity. Once the inventory has been built, the next step is to determine the impact and likelihood of something going wrong with each of the elements identified in the inventory:
• Determine the impact (loss or damage) and likelihood (frequency or probability) of the feature or attribute failing. While some organizations like to use percentages, number of days/years between occurrences, or even probability "half-lives," a set of simple categories such as those listed in the slide above typically provides sufficient accuracy.
• If the likelihood or impact of something going wrong is none or zero, the item could be removed from the analysis, with the removal documented. This is not recommended, however; just leave it in the inventory and it will naturally drop to the bottom.
  • 74. Many of the factors that affect likelihood are technical in nature. Much of the code developed is about handling the errors and exceptions that occur within the system; there is typically more code covering errors and exceptions than code implementing the general rules and processes requested by the stakeholders.
  • 77. Impact is typically viewed as the impact on the user of the product/system. It is normally assessed from a business perspective: what is the damage to the business should an identified risk occur?
  • 79. Most organizations use a combination of both methods:
• Qualitative categories help express risk in terms a person can understand.
• Quantitative values are then used in conjunction with the qualitative categories to create a matrix in which each risk is given a weighted priority.
Each qualitative category needs to be defined in the organization's overall process directives. Ideally, there should be several examples of each risk assessment category to aid people in determining which category is appropriate for the object they are assessing. Risk is in the eye of the beholder, as noted earlier: any two people may look at the same event and see an entirely different set of issues. What is critical to one may be trivial to the other.
  • 80. Likelihood = the probability or chance of an event occurring (e.g., the likelihood that a user will make a mistake and, if a mistake is made, the likelihood that it will go undetected by the software). Impact = the damage that results from a failure (e.g., the system crashing or corrupting data might be considered high impact). Something could have a low likelihood (technical) risk and yet have a very high impact (business) risk, and vice versa. It is the combination of the two factors that determines the overall risk level for a particular element.
  • 81. Within an organization it may be necessary to set up different risk models for different products/systems. This is not a problem, as not all products/systems should necessarily follow the exact same model. The quantitative factors can be changed and adjusted as needed; the qualitative elements, however, should remain consistent across different risk models. This ensures that risks are assessed the same way across the organization while still allowing adjustments for differences in relative risk levels. By adjusting the quantitative factors we can allow for applications that have more business than technical risk, and vice versa, yet still have a consistent model that can be used across the organization.
  • 82. Under likelihood and impact there may be differences of opinion as to the risk; an element can be high business risk but low technical risk, and so on, so you may have to compromise on an acceptable level of risk. The numbers are calculated by taking the values from our original matrix (page 70) and multiplying them:
H = High, which has a value of 3
M = Medium, which has a value of 2
L = Low, which has a value of 1
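The weighted-priority calculation described above can be sketched in a few lines. The inventory entries below are hypothetical examples; only the H/M/L values (3/2/1) and the multiply-and-sort scheme come from the slides.

```python
# Sketch of the weighted-priority calculation: qualitative ratings
# H/M/L map to 3/2/1, and the overall score is likelihood x impact.
# The inventory entries are hypothetical.
RATING = {"H": 3, "M": 2, "L": 1}

def risk_score(likelihood, impact):
    """Combine the two qualitative ratings into a single priority value."""
    return RATING[likelihood] * RATING[impact]

# (element, likelihood, impact)
inventory = [
    ("Password reset", "M", "H"),
    ("Report export", "L", "L"),
    ("Funds transfer", "H", "H"),
]

# Sort by the agreed priority, highest risk first.
prioritized = sorted(inventory, key=lambda e: risk_score(e[1], e[2]), reverse=True)
for name, lik, imp in prioritized:
    print(name, risk_score(lik, imp))
```

A high-likelihood, high-impact element scores 9 and rises to the top; a low/low element scores 1 and naturally drops to the bottom, which is why zero-risk items need not be removed from the inventory.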
  • 83. Make adjustments and sort by the agreed priority. We now have a risk-based assessment of what needs to be tested.
  • 84. Of course, successful prioritization of risks does not help unless test cases are defined for each risk, with the highest-priority risks being assigned the most comprehensive tests and priority scheduling. The object of each test case is to mitigate at least one of these risks. If time or resources are an issue, the priority associated with each feature or attribute can be used to determine which test cases should be created and/or run. If testing must be cut, the risk priority can be used to determine how and what to drop:
• Cut low risk completely (indicated by the horizontal line).
• If you plan to ship the low-risk features, you may want to consider an across-the-board approach, where the high-risk features are fully tested and the lower-risk features are tested progressively less based on their level of risk, so medium-risk elements get less testing than high-risk ones but more than low-risk ones. At least that way, no features ship untested (risk unknown). This will entail some additional risk, as even higher-risk features may get less testing than planned.
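The two cutback strategies above can be sketched against the risk scores from the assessment. The scores, band boundaries, and test-depth numbers below are illustrative assumptions.

```python
# Sketch of the two cutback strategies: drop everything below a score
# threshold (the "horizontal line"), or scale test depth across the
# board by risk band. Scores and depths are hypothetical.
def cut_below(items, threshold):
    """Horizontal-line approach: test only items at or above the threshold."""
    return [(name, score) for name, score in items if score >= threshold]

def across_the_board(items, depth_by_band):
    """Assign a test depth (e.g., number of planned cases) per risk band."""
    def band(score):
        return "high" if score >= 6 else "medium" if score >= 3 else "low"
    return {name: depth_by_band[band(score)] for name, score in items}

items = [("Funds transfer", 9), ("Password reset", 6), ("Report export", 1)]
print(cut_below(items, 6))
print(across_the_board(items, {"high": 20, "medium": 10, "low": 3}))
```

With the across-the-board approach even the lowest-risk feature gets a few tests, so nothing ships with its risk completely unknown; the trade-off is that the high-risk features give up some of their planned depth.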
  • 85. The appendix has an expanded version of the FMEA (Failure Modes and Effects Analysis) model as well as the ISO 9126 quality model.
  • 88. Risk mitigation/risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.
Risk type: A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.
  • 89. Murphy's Law: "If anything can go wrong, it will, and it will do so at the worst possible time."
  • 108. Although exploratory testing relies primarily on the skills and knowledge of the tester and tends to be more dynamic than traditional technique-driven design, it too can be formalized. Using the inventory process as part of an exploratory test process can add structure to the definition of the areas to be investigated, rather than relying only on the skills of the individual tester.
  • 110. The goal is to avoid gaps in the testing as well as to avoid overlapping testing too much. Depending on how you define your inventories, based on generic groupings or application-specific groupings, the idea is to decide who will test which objects at what stage/level. Some objects cannot be tested until later stages of the process (e.g., scenarios and usage-based objects). Conversely, some elements, such as field edits, valid ranges, and error messages, are best tested in the earlier stages. These code logic elements, created by the programmers, are best tested at that stage of the process; finding such errors late in the process can be very costly.
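One way to make gaps and overlaps visible is to record which stage has claimed each inventory object type. The object types and stage names below are illustrative assumptions, not a prescribed scheme.

```python
# Sketch of a stage assignment for inventory object types, to surface
# gaps (claimed by nobody) and overlaps (claimed at several stages).
# Object types and stage names are hypothetical examples.
assignments = {
    "field edits": ["component"],
    "valid ranges": ["component"],
    "error messages": ["component"],
    "interfaces": ["integration"],
    "scenarios": ["system", "acceptance"],
    "usage profiles": [],  # not yet claimed by any stage
}

def gaps(assignments):
    """Object types nobody has claimed at any stage."""
    return [obj for obj, stages in assignments.items() if not stages]

def overlaps(assignments):
    """Object types claimed at more than one stage; review whether intended."""
    return [obj for obj, stages in assignments.items() if len(stages) > 1]

print(gaps(assignments), overlaps(assignments))
```

An overlap is not necessarily wrong (scenarios may legitimately be exercised at both system and acceptance levels), but each one should be a deliberate decision rather than an accident.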
  • 115. These three aspects of test execution have to be evaluated together. Looking at any single aspect provides no real useful information.
  • 117. Requirements coverage
• Which requirements have been tested?
• What are the key risk indicators?
  • Degree of coverage for each requirement, relative to defined tests
  • Failure rates
  • Defect information (density, patterns, trends, root cause, etc.)
Design coverage
• Have we addressed key design issues? Do we require coverage measures?
  • Integration complexity
  • Interface (API) coverage
• What are the key risk indicators?
  • Failure rates within the design
  • Defect information (density, patterns, trends, root cause, etc.)
  • 118. Risk coverage
• Are we covering the high-risk areas first?
• Are the executed tests balanced across the risk areas or narrowly focused on specific risk areas?
• Which risk areas are still unaddressed?
• What is the level of risk in untested elements?
Code coverage
• What percentage of the code has been tested?
• Which coverage measures are important?
  • Statement coverage
  • Decision/branch coverage
  • Condition coverage
  • Path coverage
  • Key measures such as cyclomatic complexity
  • Other code coverage measures
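A simple risk-coverage indicator answers the first question above: what share of the items in each risk band has at least one executed test? The data below is hypothetical.

```python
# Sketch of a risk-coverage indicator: the share of inventory items in
# a given risk band that have at least one executed test. Hypothetical data.
executed = {"Funds transfer": 12, "Password reset": 0, "Report export": 2}
bands = {"Funds transfer": "high", "Password reset": "high", "Report export": "low"}

def risk_coverage(executed, bands, band):
    """Fraction of items in the band with one or more executed tests."""
    items = [name for name, b in bands.items() if b == band]
    covered = [name for name in items if executed[name] > 0]
    return len(covered) / len(items)

print(risk_coverage(executed, bands, "high"))  # 0.5: one of two high-risk items tested
```

Here the low-risk band is fully covered while half the high-risk band is untested, exactly the imbalance these coverage questions are meant to expose.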
  • 145. Web site references
General testing sites for research:
www.Stickyminds.com
www.techwell.com
www.sqaforums.com
www.softwareqatest.com
www.testingeduction.org
Sites for open source elements (tools, etc.):
www.sourceforge.net
www.opensourcetesting.org
www.pairwise.org
www.pairwise.org/tools.asp
Sites related to performance, performance services, and statistics:
Performance-testing.org
www.sandvine.com
TPC.org
www.internetworldstats.com
www.keynote.com
www.webtrends.com
www.shunra.com
www.alexa.com
www.mentora.com
www.plunkettresearch.com
Sites related to usability:
www.useit.com
www.nngroup.com
www.webpagesthatsuck.com (yes, this is a serious site)
sumi.ucc.ie
www.wammi.com
General security-related sites:
www.sans.org
www.nist.gov
csrc.nist.gov
  • 149. Related terms: FMECA, SFMEA, SFMECA. Every product has modes of failure; the effects represent the impact of failures. — adapted from the Quality Training Portal
  • 150. — adapted from the Quality Training Portal
  • 152. FMEA can be applied at any level and should be done iteratively.
  • 154. — adapted from the Quality Training Portal
  • 159. ISO 9126 extract
6.1 Functionality - The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. This characteristic is concerned with what the software does to fulfil needs, whereas the other characteristics are mainly concerned with when and how it fulfils needs. For a system which is operated by a user, the combination of functionality, reliability, usability, and efficiency can be measured externally by quality in use.
Subcategories
6.1.1 Suitability - The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.
6.1.2 Accuracy - The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
6.1.3 Interoperability - The capability of the software product to interact with one or more specified systems.
6.1.4 Security - The capability of the software product to protect information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them.
6.1.5 Functionality compliance - The capability of the software product to adhere to standards, conventions, or regulations in laws and similar prescriptions relating to functionality.
6.2 Reliability - The capability of the software product to maintain a specified level of performance when used under specified conditions. Wear or ageing does not occur in software; limitations in reliability are due to faults in requirements, design, and implementation. Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time. The definition of reliability in ISO/IEC 2382-14:1997 is "the ability of a functional unit to perform a required function...". In this document, functionality is only one of the characteristics of software quality. Therefore, the definition of reliability has been broadened to "maintain a specified level of performance..." instead of "...perform a required function".
Subcategories
6.2.1 Maturity - The capability of the software product to avoid failure as a result of faults in the software.
  • 160. 6.2.2 Fault tolerance - The capability of the software product to maintain a specified level of performance in cases of software faults or of infringement of its specified interface. NOTE: The specified level of performance may include fail-safe capability.
6.2.3 Recoverability - The capability of the software product to re-establish a specified level of performance and recover the data directly affected in the case of a failure. NOTE 1: Following a failure, a software product will sometimes be down for a certain period of time, the length of which is assessed by its recoverability. NOTE 2: Availability is the capability of the software product to be in a state to perform a required function at a given point in time, under stated conditions of use. Externally, availability can be assessed by the proportion of total time during which the software product is in an up state. Availability is therefore a combination of maturity (which governs the frequency of failure), fault tolerance, and recoverability (which governs the length of down time following each failure). For this reason it has not been included as a separate subcharacteristic.
6.2.4 Reliability compliance - The capability of the software product to adhere to standards, conventions, or regulations relating to reliability.
6.3 Usability - The capability of the software product to be understood, learned, used, and attractive to the user, when used under specified conditions. Some aspects of functionality, reliability, and efficiency will also affect usability, but for the purposes of ISO/IEC 9126 they are not classified as usability. Users may include operators, end users, and indirect users who are under the influence of or dependent on the use of the software. Usability should address all of the different user environments that the software may affect, which may include preparation for usage and evaluation of results.
Subcategories
Understandability - The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. This will depend on the documentation and initial impressions given by the software.
Learnability - The capability of the software product to enable the user to learn its application. The internal attributes correspond to suitability for learning as defined in ISO 9241-10.
  • 161. Operability - The capability of the software product to enable the user to operate and control it. NOTE 1: Aspects of suitability, changeability, adaptability, and installability may affect operability. NOTE 2: Operability corresponds to controllability, error tolerance, and conformity with user expectations as defined in ISO 9241-10. NOTE 3: For a system which is operated by a user, the combination of functionality, reliability, usability, and efficiency can be measured externally by quality in use.
Attractiveness - The capability of the software product to be attractive to the user. This refers to attributes of the software intended to make the software more attractive to the user, such as the use of color and the nature of the graphical design.
Usability compliance - The capability of the software product to adhere to standards, conventions, style guides, or regulations relating to usability.
6.4 Efficiency - The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. Resources may include other software products, the software and hardware configuration of the system, and materials (e.g., print paper, diskettes). For a system which is operated by a user, the combination of functionality, reliability, usability, and efficiency can be measured externally by quality in use.
Subcategories
6.4.1 Time behavior - The capability of the software product to provide appropriate response and processing times and throughput rates when performing its function, under stated conditions.
6.4.2 Resource utilization - The capability of the software product to use appropriate amounts and types of resources when the software performs its function under stated conditions. Human resources are included as part of productivity.
6.4.3 Efficiency compliance - The capability of the software product to adhere to standards or conventions relating to efficiency.
6.5 Maintainability - The capability of the software product to be modified. Modifications may include corrections, improvements, or adaptation of the software to changes in environment, and in requirements and functional specifications.
  • 162. Subcategories
6.5.1 Analysability - The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified.
6.5.2 Changeability - The capability of the software product to enable a specified modification to be implemented. Implementation includes coding, designing, and documenting changes. If the software is to be modified by the end user, changeability may affect operability.
6.5.3 Stability - The capability of the software product to avoid unexpected effects from modifications of the software.
6.5.4 Testability - The capability of the software product to enable modified software to be validated.
6.5.5 Maintainability compliance - The capability of the software product to adhere to standards or conventions relating to maintainability.
6.6 Portability - The capability of the software product to be transferred from one environment to another. The environment may include organizational, hardware, or software environment.
Subcategories
6.6.1 Adaptability - The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. Adaptability includes the scalability of internal capacity (e.g., screen fields, tables, transaction volumes, report formats, etc.). If the software is to be adapted by the end user, adaptability corresponds to suitability for individualization as defined in ISO 9241-10, and may affect operability.
6.6.2 Installability - The capability of the software product to be installed in a specified environment. NOTE: If the software is to be installed by an end user, installability can affect the resulting suitability and operability.
6.6.3 Co-existence - The capability of the software product to co-exist with other independent software in a common environment sharing common resources.
  • 163. 6.6.4 Replaceability - The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. For example, the replaceability of a new version of a software product is important to the user when upgrading. Replaceability is used in place of compatibility in order to avoid possible ambiguity with interoperability. Replaceability may include attributes of both installability and adaptability; the concept has been introduced as a subcharacteristic of its own because of its importance.
6.6.5 Portability compliance - The capability of the software product to adhere to standards or conventions relating to portability.