1. “Risk of failure increases where there is
an undefined problem area”
Jeff Findlay
2. Defining Quality
• ISO 9126
• An International software
product evaluation standard
• First published in 1991
• Aimed at reducing rework by
aligning requirements and
desired quality characteristics
• Functions that satisfy stated or
implied needs
• Capability to maintain performance
under stated conditions
• Effort needed for use and individual
assessment of such use
• Relationships between
performance and resources used
• Effort required to make specific
modifications
• Ability to transfer software from
one environment to another
3. • Manager’s View
• Overall (balanced) quality
rather than specific quality
• Schedules and costs will
lead to “optimising” quality
• User’s View
• The effect of quality on the
performance and function
• Quality needs generally set
in isolation
• Developer’s / Tester’s View
• Different quality metrics
impact requirements at
different dev./test phases
Quality Requirements
4. Weighted Risk Factors
• Manager’s View
• Stated and implied risks in
terms of the project goals
• Weighted risks according to
delivery and cost
• User’s View
• Prioritised risks that can
result in difficulties or failure
• Developer’s / Tester’s View
• Identified complexity that
impacts capability
• Lack of clarity that requires
interpretation
• ...
5. Risk Based Requirements
• Linking quality attributes to
risk factors
• Focuses and prioritises
project effort
• Enables quality-based,
measurable gap analysis
• Linking requirements to risk
factors
• Prioritises stated and
implied needs against
potential risk of failure
• Risks stated in terms of
impact on the business
• Clarifies development and
testing priorities
6. Risk Based Requirements
• Business goal (requirement)
• A stable, secure and reliable
Shopping Cart for efficient
customer use
Yes... it's understated;
it's only an example
• Boundaries (risk of failure)
• Products must be available
• Data must be secure
• Response time must be
“measurably fast”
• Technical considerations
• Log files...
• Transaction recovery...
7. Requirements Driven Risk Based Testing
• Risk of failure increases
where there is an undefined
problem area
• Prioritised tests, based on a
risk of failure, pinpoint
potential problem areas
• Relating quality and risk
underpins and justifies the
test strategy
• Testing success is measured
by risk mitigation and
delivered quality
• Requirements that respect
risk mitigation drive quality
outcomes
8. Requirements Driven Risk Based Testing
• This is not new...
• 1951 - Juran’s Quality Control
Handbook (Joseph Juran)
• 1991 - ISO 9126
• 1994 - MoSCoW principle
introduced (Dai Clegg)
• Risk of failure increases where
there is an undefined problem
area
• Requirements that are focused
on risk mitigation drive project
focused quality outcomes
Editor's Notes
“Risk of failure increases where there is an undefined problem area”
Let's explore the associations between:
Quality Attributes - as defined by ISO 9126
Risk Factors - as defined by MoSCoW (or any other quantifiable method)
Requirements - an explicitly defined part of a goal, idea, need and/or boundary
Test - a means to validate
This presentation will not focus on the testing processes, rather a method of determining:
What is important
Based on a desired quality outcome
Filtered by a weighted Risk of Failure
What is ISO 9126?
An international software product evaluation standard
It's not new... first published in 1991 and amended ~2005 (ISO 25020-25024)
Why was it introduced?
To reduce rework caused by poorly aligned requirements by introducing and linking requirements to desired Quality Characteristics
~19 years ago there was a clear understanding that the quality of requirements was poor
Guess what...
We are still talking about the same issues in 2010
Let's explore what is meant by Quality Attributes and the categories defined in the standard:
Functionality
Functions that satisfy stated or implied needs
Suitability
Accurateness (accuracy)
Interoperability
Security
Compliance (In each category)
Reliability
Capability to maintain performance under stated conditions
Usability
Effort needed for use and individual assessment of such use
Efficiency
Relationships between performance and resources used
Maintainability
Effort required to make specific modifications
Portability
Ability to transfer software from one environment to another
These should be seen as guidelines and can be modified to suit particular organizational needs
Different roles will see things differently
It's important to recognise that different roles in an organization will have different views on how relevant a Quality Characteristic is to a specific requirement
It should also be acknowledged that these views will differ depending on one's organizational perspective
Manager’s View
Looking to find an overall view of "balanced" quality rather than specific quality
They understand what level of quality is acceptable – based on time and cost – therefore they are prepared to “optimize” quality
User’s View
Quality Characteristics will be assessed in isolation according to their specific role and experience
They tend to be focused on performance and functional quality Characteristics
Developer / Tester’s View
These will change depending on the stage/phase or point-in-time of the project
i.e., 2 weeks to go before release... which of these tests do we think are important...
The problem with this approach is:
Understanding which of the Quality Characteristics carries more weight
These simply become attributes to the requirement and can readily be dismissed/overlooked or categorized as “information only”
This is the major reason why ISO-9126 has not been widely used and/or why it has been heavily modified where used.
Making ISO-9126 successful
The most successful implementation of ISO-9126 has happened with the introduction of Weighted Risk Factors
There are a number of adaptations used by commercial organizations (particularly in Europe) ... Logica’s Q-talk for example
Each of these has introduced a "Risk Layer" between the Quality Attributes and the requirements
Business Priority (Risk)
Manager’s View
With respect to the overall project goals, Managers are able to introduce stated and implied risk according to quality outcomes
They can also weight these risk factors according to delivery time frames and cost restraints
User’s View
Generally Users prioritise the Risk of Failure according to perceived problem areas, known difficulties and unknown factors
It is important to categorise the risks: 1, 2, 3, 4 is ambiguous; better to use nomenclature such as MoSCoW or Critical-High-Medium-Low
Technical Risk
Developer / Tester’s View
Risk of Failure based on complexity, capability and/or technical compliance
Lack of clarity due to the need to interpret
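The weighted-risk idea in the notes can be sketched in code. This is a hypothetical illustration, not part of ISO 9126 or MoSCoW: the weight values and the likelihood scale are invented for the example.

```python
# Hypothetical sketch: weighting risks with MoSCoW categories instead of
# ambiguous numeric ranks (1, 2, 3, 4), as the notes suggest. The weights
# below are illustrative assumptions, not part of any standard.
MOSCOW_WEIGHTS = {
    "Must": 4,    # failure is unacceptable
    "Should": 3,  # important, but a workaround may exist
    "Could": 2,   # desirable if time and cost allow
    "Wont": 1,    # explicitly out of scope for this release
}

def risk_score(moscow: str, likelihood: float) -> float:
    """Combine a MoSCoW category with an estimated likelihood of failure (0..1)."""
    return MOSCOW_WEIGHTS[moscow] * likelihood

# A "Must" risk that is fairly likely outranks a certain "Could" risk.
print(risk_score("Must", 0.6))   # 2.4
print(risk_score("Could", 1.0))  # 2.0
```

The point of the named categories is exactly what the notes argue: "Must" carries an agreed business meaning, where a bare "1" does not.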
Let's simplify the view by only referring to the "Shopping Cart" requirement
Yes... This diagram looks complex... That's because there are many things to consider.
However this is a staged approach:
First the left-hand side - Quality Attributes to Risk Factors
Then the right-hand side - Requirements to Risk Factors
Linking Quality Attributes to Risk Factors
Focuses and prioritises those Quality Attributes that affect the project – The Manager's View
Aligns specific functional needs against Risk factors – The User’s View
This enables quality-based, measurable gap analysis
Linking Quality Requirements to Risk Factors
Prioritises stated and implied needs against potential risk of failure – The Manager's View
Risks are stated in terms of the impact on business – The User’s View
Developer / Tester’s View
Enables clarification of priorities for both development and testing
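The quality-based gap analysis mentioned above can be sketched as follows: each quality attribute carries a target level, an assessed current level, and a risk category, and the risk-weighted gap orders the work. The attribute names come from ISO 9126; the numbers and weights are invented for illustration.

```python
# Hypothetical gap-analysis sketch: risk-weighted gaps between target and
# assessed quality levels prioritise where effort should go first.
# Attribute names are ISO 9126 categories; all figures are assumptions.
RISK_WEIGHT = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}

attributes = [
    # (quality attribute, target 0-10, assessed 0-10, risk category)
    ("Security",    9, 5, "Critical"),
    ("Reliability", 8, 7, "High"),
    ("Efficiency",  7, 6, "Medium"),
    ("Portability", 5, 5, "Low"),
]

def weighted_gap(target: int, assessed: int, risk: str) -> int:
    """Shortfall against the target, scaled by the attached risk weight."""
    return max(target - assessed, 0) * RISK_WEIGHT[risk]

ranked = sorted(attributes,
                key=lambda a: weighted_gap(a[1], a[2], a[3]),
                reverse=True)
for name, target, assessed, risk in ranked:
    print(f"{name}: gap={target - assessed}, weighted={weighted_gap(target, assessed, risk)}")
```

With these example figures, Security's small team-assessed shortfall still ranks first because of its Critical risk weighting, which is the manager's "balanced quality" view expressed as a number the business can discuss.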
So... How does this impact the "Shopping Cart" requirement?
Yes... This is an example, so the slide shows an understated set of relationships
From the business goal (requirement) perspective... The shopping cart must be:
Functional
Stable - Must
Secure - Must
Reliable
Mature - Must
Reliable - Must
- Could
Efficient
Timely Behaviour - Should
From a Risk of Failure perspective... The shopping cart must:
Be available
Have secure data
Respond in a timely manner
Technical considerations being:
Log files being clear and concise
In case of failure... The transactions are recoverable
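The shopping cart ratings above can be captured in a small data structure so that tests fall out in priority order. The MoSCoW ratings mirror the example on the slide; the representation itself is just one possible sketch.

```python
# Sketch of the "Shopping Cart" requirement from the example, with each
# quality sub-characteristic tagged by its MoSCoW rating. The ordering
# logic is illustrative; the ratings follow the slide.
MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Wont": 3}

shopping_cart = {
    "Functionality": [("Stable", "Must"), ("Secure", "Must")],
    "Reliability":   [("Mature", "Must"), ("Reliable", "Must")],
    "Efficiency":    [("Timely behaviour", "Should")],
}

# Flatten into a prioritised test list: "Must" items are tested first.
test_order = sorted(
    ((attr, sub, rating)
     for attr, subs in shopping_cart.items()
     for sub, rating in subs),
    key=lambda item: MOSCOW_ORDER[item[2]],
)
for attr, sub, rating in test_order:
    print(f"[{rating}] {attr}: {sub}")
```

Ordering tests this way is what the next slide argues for: test effort lands on the "Must" risks first, and the sequence is justifiable in business terms rather than tester intuition.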
From a testing perspective...
The Risk of failure increases where there is an undefined problem area...
Something which has not been identified IS a potential issue
Prioritised tests, based on a potential risk of failure:
Focus on – "pinpoint" – these potential problem areas from a business perspective
Have you ever had to justify why testing costs so much... or takes so long...
Relating Quality Characteristics – Risk – Requirements underpins and helps justify the test strategy
These are in terms that the business understands
“Testing success is measured by risk mitigation and delivered quality”
“Requirements that respect risk mitigation, drive quality outcomes”
None of this is new...
In fact, I have specifically chosen older methods and practices to reinforce the fact that we, the IT world are not learning from our mistakes.
1951 - Juran’s Quality Control Handbook (Joseph Juran)
1991 - ISO 9126
1994 - MoSCoW principle introduced (Dai Clegg)
How many more presentations will we sit through and see Standish or Forrester statistics saying that "poor requirements" are the major cause of project failure?
Or
The cost of defect correction is 1000x greater after defects are discovered in deployment
It's about time we understood and managed the relationship between Quality and Risk