3. What is a Defect?
• A problem which, if not removed, will
cause software to stop running or produce
incorrect results
• Software defects correlate with quality
characteristics such as customer
satisfaction, correctness, reliability, etc.
4. Error – Defect – Failure
A person makes
an error…
…that creates a
defect in the
software…
…that can cause
a failure
in operation
5. What is a Defect? – cont.
• Error (mistake): a human action producing an
incorrect result
• Defect (bug, fault): a flaw in the system
• Failure: a deviation of the system from its
expected delivery
6. What Is a Defect? – cont.
• Distinction: a defect is a state and a fault is an event.
• If a calculator program doesn't check the divisor for 0
before dividing by it, that's a defect in the program.
• Each time we run the program and it crashes because it
divides by zero, that's a fault in the calculations.
• If there is no way to recover and get the calculations
right anyway, it's also a failure in the system.
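The calculator example can be sketched in code. This is a minimal illustration (the `divide_*` function names are made up for this sketch, not from the lecture):

```python
def divide_unsafe(a, b):
    # Defect (a state in the code): the divisor is never checked
    # for zero before dividing.
    return a / b

def divide_safe(a, b):
    # Defect removed: a zero divisor is rejected explicitly, so the
    # fault (the divide-by-zero event) can never occur at run time.
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

# divide_unsafe(1, 0) triggers the fault (an event at execution
# time): a ZeroDivisionError. If nothing recovers from it, the
# resulting crash is the failure the user observes.
```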
7. What Is a Defect? – cont.
Defect (coding) → Fault (execution) → Failure (result)
8. Defects and Quality
• Two key aspects of quality
– Defect density of software applications
• Measures the number of defects found in
artifacts divided by the size of the component or
system
• Artifacts include requirements, design, source code,
user manuals, and bad fixes (defects introduced by
repairs)
– Defect removal efficiency of the operations
• Ratio of defects found before release to all defects
found, including those reported in operation during
a defined measurement period
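Both metrics are simple ratios. A minimal sketch with hypothetical numbers (the function names and figures are illustrative, not from the lecture):

```python
def defect_density(defects_found, size_kloc):
    # Defects per thousand lines of code (any size unit works,
    # e.g. function points).
    return defects_found / size_kloc

def defect_removal_efficiency(found_before_release, found_in_operation):
    # Share of all known defects that were removed before release.
    total = found_before_release + found_in_operation
    return found_before_release / total

# Hypothetical numbers: 120 defects found during development,
# 30 reported from operation during the measurement period.
density = defect_density(120 + 30, 50)    # 3.0 defects/KLOC
dre = defect_removal_efficiency(120, 30)  # 0.8, i.e. 80%
```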
9. Factors Affecting Software Quality
• Different definitions for software quality
– Low level of defects
– High reliability
– User satisfaction
– Quick answers
– Rapid repairs for defects
– Compliance to requirements
• Different origins
– Users
– Stakeholders
– Human factor
– Hardware limitations
– Requirements
– Design
– Source code
– User manual
– Bad fixes or repairs
– Flawed test cases
10. Factors Affecting Software Quality
• Root causes
– Inadequate training
– Inadequate cost estimation
to fix defects
– Excessive schedule
pressure
– Insufficient defect removal
methods
• Defect elimination
strategies
– Effective defect prevention
– High defect removal
efficiency
– Accurate defect prediction
– Accurate defect tracking
• See, for example, the Standish Group data
presented in previous lectures
13. Defect Detection Methods
• Static test techniques (No execution of the system)
– Reviews/Inspections/Walkthroughs (static techniques)
• Requirements
• Design
• Code
– Pair work/programming
– Audits
• Dynamic test techniques (Execute the system)
– Unit
– Integration
– Function
– Feature
– System
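A dynamic technique actually executes the system under test. A minimal unit-test sketch (the `discount` function is a hypothetical example, not from the lecture):

```python
import unittest

def discount(price, percent):
    # Hypothetical function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    # Dynamic testing: the code is executed with chosen inputs
    # and the observed results are compared to expected ones.
    def test_normal_case(self):
        self.assertEqual(discount(200, 25), 150)

    def test_boundaries(self):
        self.assertEqual(discount(200, 0), 200)
        self.assertEqual(discount(200, 100), 0)

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            discount(200, 150)
```

Run with `python -m unittest <file>`. A review or inspection of the same code would be the corresponding static technique: reading it without executing it.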
15. Defect prevention
• Easy to understand, but a challenge to
implement
• Software systems grow ever more complex
• Software systems are used more intensively
• Software is (still) designed by humans, and
humans make mistakes
• If we do not act, the number of defects will
increase
• Defect prevention is more of a mindset
16. Why defect prevention?
• Do you remember?
– 57% of contracts experienced cost overruns
– Cost overruns totaled £9.0 billion
– The average percentage cost overrun is
30.5%
– 33% of contracts suffered major delays
– 30% of contracts were terminated
17. Why defect prevention?
• Fixing errors is a waste of time?
• Some reports indicate that up to 50% of the development bill is
spent on bug-fixing
• Do you really believe that customers want to pay for bug-fixing?
• Defects can be life-threatening (space programs, pacemakers,
nuclear plants, etc.)
• Historically, a single misplaced character in a Fortran
program caused severe failures in a space program
• In Ariane 5 (1996), a 64-bit floating-point number for the
horizontal velocity of the rocket with respect to the platform was
converted to a 16-bit signed integer, causing an overflow. The rocket
cost $7 billion to develop and its cargo was worth $500 million.
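The Ariane 5 conversion can be mimicked in a few lines. This is an illustrative sketch only (the real code was Ada running on flight hardware; the function names here are made up):

```python
def to_int16_unchecked(x):
    # Mimics the faulty conversion: keep only the low 16 bits and
    # reinterpret them as a signed value, so out-of-range numbers
    # wrap around silently instead of raising an error.
    return ((int(x) + 32768) % 65536) - 32768

def to_int16_checked(x):
    # What a guarded conversion should do: fail loudly on overflow
    # so the defect is caught instead of corrupting the data.
    if not -32768 <= x <= 32767:
        raise OverflowError(f"{x} does not fit in a 16-bit signed integer")
    return int(x)

# A value well within 64-bit float range but outside int16:
print(to_int16_unchecked(40000.0))  # -25536: silently corrupted
```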
18. More famous software errors
• NASA’s Mars Climate Orbiter (launched 1998) – failed to
convert from English to metric units; the loss was estimated
at $125 million
• Heathrow Terminal 5 opening, 2008 – during the first 10
days, 140,000 bags failed to travel with their owners and
500 flights were cancelled. Cause: the system did not handle
bags being removed manually.
• Denver airport luggage problem 1995 – 16 months late,
$560 million over budget. Cause: Scope creep.
Corrective action: A new manual system was built
instead.
19. Defect prevention – principles (Humphrey)
• Programmers must evaluate their own errors
• Feedback of performance is essential
• There is no single cure-all that will solve all
problems
• Defect prevention includes process
improvement
• Defect prevention takes time, i.e. costs money
20. Steps to Defect Prevention (Humphrey)
1. Defect reporting
2. Root cause analysis
3. Action plan development
4. Action implementation
5. Performance tracking
6. Starting over…
• So, we can use PDCA and/or GQM+Strategies
to drive the steps of defect prevention
21. ISO9001:2015, clause 10.1
10. Improvement
10.1 General
The organization shall determine and select opportunities for improvement and
implement any necessary actions to meet customer requirements and enhance
customer satisfaction.
These shall include:
a) improving products and services to meet requirements as well as to
address future needs and expectations;
b) correcting, preventing or reducing undesired effects;
c) improving the performance and effectiveness of the quality
management system.
23. Agile Manifesto
• Individuals and interaction over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
• That is, while there is value in the items to the right, we value the
items to the left more.
24. Agile programming
methodology
• Short iterations
• Frequent deliveries
• Open communication
• Close collaboration
• Tight teaming
• Simplicity
• Refactoring
• Continuous testing
• Proactive management
25. Some agile ideas
• Involve the customer – most important part first
• Short delivery cycles – quick response
• The idea behind agile development is that the
whole team should be empowered to make
decisions – the term “project manager” is
sometimes not used.
• EXPECT THE SYSTEM AND THE PROJECT
TO CHANGE
26. Agile planning
• “The customer has the right to see the overall plan” (Kent
Beck).
• The planning game
– Release planning
– Sprint planning (2-4 weeks)
• Small teams
• The focus is on a working system – delivered every day, every
week, every second week, every month, or even every second month.
• User story focus. Feature focus. Functional focus.
• Bi-weekly, weekly and/or daily (morning) planning meetings
27. Agile Sprints
• Sprint planning
– Velocity planning,
– Identification and estimation of activities
– Story point estimation, relative estimations
– Planning poker (Wideband Delphi)
– No detailed scheduling
• Sprint follow-up
– Task board
– Daily scrum
– Burndown chart (focus on the work remaining to finish the sprint)
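A burndown chart plots remaining work against sprint days. A minimal sketch with hypothetical sprint data (function name and numbers are illustrative):

```python
def burndown(total_points, completed_per_day):
    # Remaining work (story points) at the end of each sprint day;
    # index 0 is the start of the sprint.
    remaining, curve = total_points, [total_points]
    for done in completed_per_day:
        remaining -= done
        curve.append(remaining)
    return curve

# Hypothetical 40-point sprint over five days:
print(burndown(40, [8, 5, 0, 12, 10]))  # [40, 32, 27, 27, 15, 5]
```

A flat segment (day 2 to 3 above) is exactly what the daily scrum should surface: no work was completed that day.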
29. Kanban techniques
• Developed by Toyota
• Use lean and just-in-time concepts
• The problem/challenge is to get things done!
• Kanban simply described:
– Limit the number of on-going tasks!
– Use visual communication (flags, signs, post-it-notes)
– Eliminate bottlenecks – focus on solving these problems
– Continuously improve your working process – feedback.
• Communicate visually:
1. Backlog items
2. On-going (in-progress) items
3. Finished items
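The core Kanban rule – limit the number of on-going tasks – can be sketched as a three-column board (a hypothetical minimal model, not a real Kanban tool):

```python
class KanbanBoard:
    # Three columns (backlog, ongoing, done) with a WIP limit
    # enforced on the ongoing column.
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.backlog, self.ongoing, self.done = [], [], []

    def add(self, item):
        self.backlog.append(item)

    def start(self, item):
        # Limit the number of on-going tasks!
        if len(self.ongoing) >= self.wip_limit:
            raise RuntimeError("WIP limit reached - finish something first")
        self.backlog.remove(item)
        self.ongoing.append(item)

    def finish(self, item):
        self.ongoing.remove(item)
        self.done.append(item)
```

Refusing to start new work when the limit is hit is what exposes bottlenecks: the team must finish or unblock something before pulling the next item.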
30. Agile Retrospectives
• "At regular intervals, the team reflects on how to become
more effective, then tunes and adjusts its behaviour
accordingly.”
• A meeting at the end of the sprint
• What happened during the sprint?
• Each member of the team should answer
– What worked well for us?
– What did not work well for us?
– What actions can we take to improve our process going forward?
31. Agile Retrospectives
• Organized by the Scrum Master
• Time-boxed
• “Lessons learned”
• What changes shall we make in the next iteration?
• Retrospectives are team-driven – the team decides how
the meetings shall be run
• Built on honesty and trust
• Stresses continuous improvement
• Identify actions for improvement
32. Agile Retrospectives
• “Everyone did the best job they could”
– given what they knew at the time
– their skills and abilities
– the resources available
– the situation at hand.
• Considered by some to be the most
important Agile practice
• Sometimes you search for the root cause,
sometimes you don’t
33. Agile Retrospectives
• Negative comments are not bad
• Negative comments are necessary
• Try not to make negative comments
personal (but this is hard)
• Encourage everyone to participate in the retrospectives
• You need data in order to discuss what you did
well
• The team should vote on which items to focus
on in the next sprint
34. Retrospectives – Do NOT
• Focus on individual performance feedback –
de-personalize
• Use them to address inadequate technical
skills or collaboration skills
• Turn the retrospectives into “complaint sessions”
• List too many improvements
• Keep the same retrospective process if people
are “bored” – change and adapt the process
35. Uncertainty Management (PMBOK)
• “The Uncertainty Performance Domain addresses
activities and functions associated with risk and
uncertainty”.
• Uncertainty: A lack of understanding and awareness of
issues, events, paths to follow, or solutions to pursue.
• Ambiguity: A state of being unclear, having multiple
options from which to choose
• Complexity: A characteristic of a project that is difficult to
manage.
• Volatility: The possibility for rapid and unpredictable
change.
36. Options for Responding to Uncertainty
• Gather information – conduct research, engage
experts
• Prepare for multiple outcomes – prepare for the
different outcomes
• Set-based design – multiple designs, explore
options
• Build on resilience – adapt and respond quickly
to unexpected changes
37. Options for Responding to Uncertainty, cont.
• Progressive elaboration
• Experiment
• Prototyping
• Simulation
• Diverse perspectives
• Build in redundancy
• Alternatives analysis
• Cost reserve
38. Uncertainty management
• “Project management is uncertainty management”
• Identify potential problems before they occur
• Put preventive actions in place before
unrecoverable harm occurs
• Risk – possible event that may affect the project
negatively (Old definition)
• Risk: An uncertain event or condition that, if it occurs, has
a positive or negative effect on one or more project
objectives. (New definition, PMBOK)
39. Risk management
• Problems – events that have occurred and/or will affect the
project negatively
• Stages in risk management:
– Risk identification; project, product and business risks
– Risk analysis; probability and consequences, triggers
– Risk planning; how to address risks – eliminate, reduce, ignore
– Risk monitoring; controlling and updating the risks
• In risk management you deal with both threats and
opportunities!
• Risk factor – commonly computed as likelihood × consequence,
e.g., with each on a 1-5 scale
40. Risk management, cont.
Categories
• Technology
• People
• Organizational
• Tools
• Requirements
• Estimations
Top ten risks according to Boehm (1991)
• Personnel Shortfalls
• Unrealistic schedules/budgets
• Developing the wrong software
functions
• Developing the wrong user interface
• Gold plating
• Continuing stream of requirements
changes
• Shortfalls in externally furnished
components
• Shortfalls in externally performed
tasks
• Real-time performance shortfalls
• Straining computer science
capabilities
41. Top ten risks according to codebots.com (2020)
1. Inaccurate estimations
2. Scope variations
3. End-user engagement
4. Stakeholder expectations
5. Poor quality code
6. Poor productivity
7. Inadequate risk management
8. Low stakeholder engagement
9. Inadequate human resources
10. Lack of ownership
42. Risks - Top seven list
(Literature survey by Arnuphaptrairong 2011)
1. Misunderstanding of requirements
2. Lack of management commitment and support
3. Lack of adequate user involvement
4. Failure to gain user commitment
5. Failure to manage end user expectation
6. Changes to requirements
7. Lack of an effective project management
methodology
43. Risk identification
• Identify major threats
• Should involve team members, stakeholders, etc
• Risk types:
– Technology
– People
– Organizational
– Tools
– Requirements
– Estimations
• Reduce the list to a manageable size
44. Risk analysis
• Analyze and discuss the risks identified
• Group risks
• Merge risks
• Make the risks specific
• Analyze probability and consequence, discuss in workshops
e.g.
• Different people have different opinions – consolidation
needed
• Rank the risks, voting?
• Focus on a limited number of risks
• Risk factor – commonly computed as likelihood × consequence,
e.g., with each on a 1-5 scale
45. Risk analysis example

Risk | P – Probability (1-5) | C – Consequence (1-5) | Risk factor P×C | Consequence description | Mitigation
System test may be scheduled for a too short period | 4 | 5 | 20 | More faults in delivered product | Reduce scope and/or allocate more resources
Poor estimations for x module | 3 | 3 | 9 | All stories will not be implemented | Continuously refine estimations
Project will have low priority | 1 | 5 | 5 | Resources will be moved to other projects | No mitigation – just monitor
Poor competence in React framework | 4 | 3 | 12 | Poor quality and delays | Schedule education and training
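The risk factors in the example can be computed and ranked mechanically (numbers taken from the table above; the code itself is an illustrative sketch):

```python
# (risk, probability 1-5, consequence 1-5), numbers from the slide
risks = [
    ("System test period too short", 4, 5),
    ("Poor estimations for x module", 3, 3),
    ("Project will have low priority", 1, 5),
    ("Poor competence in React framework", 4, 3),
]

# Risk factor = probability * consequence; rank highest first.
ranked = sorted(((name, p * c) for name, p, c in risks),
                key=lambda rc: rc[1], reverse=True)
for name, factor in ranked:
    print(f"{factor:>2}  {name}")
```

The ranking (20, 12, 9, 5) is what lets the team focus on a limited number of top risks.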
46. Risk planning
• Focus on the top risks
• Define strategy for each top risk.
• For threats choose between:
– Accept (but monitor)
– Mitigate (reduce)
– Avoid (remove)
– Escalate (threat is outside the scope of the project)
– Transfer (shifting ownership)
47. Risk planning
• Define strategies for handling opportunities.
• For opportunities choose between:
– Exploit: ensure that it occurs
– Escalate: it is outside the scope of the project
– Share: shift ownership
– Enhance: increase the probability of occurrence
– Accept: accept but no action is planned
48. Risk monitoring
• Check the status of the risks
• Update and take care of risk changes
• Probability changed?
• Consequence changed?
• New risks? Top risks?
• Monitor continuously, not only within the team but also
together with the stakeholders
49. Risk management – common mistakes
• Risk analysis is often done early in the project,
and then never considered again
• Risk analysis is done by the project manager,
solely
• Risks from the development are not tracked
• Top risks are not selected
• The risk is already a problem
• Corrective actions are not executed
• The organisation’s risk process is not followed
51. Problem management
• A risk can happen – and become a problem
• If it is a problem – do not ignore it!
• There is a high risk that the problem will
generate new, worse problems
• An existing problem can propagate into
something worse
• Initiate actions immediately to eliminate the
problem. Prioritize these actions.
52. Tips on Uncertainty and Problem Management
• Report openly and honestly
• Document so that you can go back in time
• Prepare proposals for action
• Get stakeholders and upper management acceptance
for actions
• Request help if needed
• Continuously update and report status of uncertainties
and problems!!!
53. Root Cause Analysis (RCA) process -
example
• What is the root cause of a problem, fault,
customer dissatisfaction, etc.?
• A workshop is preferable
– Introduction
– Describe what happened
– WHAT-analysis – consequences
– WHY-analysis – causes
– Action definition – avoiding
– Summary – responsibility and follow-up
54. RCA Process guideline
• Introduction
– Present the RCA workshop and the planning
– Agree on (“handshake”) the problem description
– Give an account of the course of events
– Be sure that everyone understands the purpose and
goal of this session
• Describe what happened.
– Collect and organize the facts surrounding the event
to understand what happened.
See: Guidance_for_RCA
(note the error in the headline for Step 2, page 2)
55. RCA Process guideline
• WHAT are the consequences?
– Brainstorming (preferably round-table discussion).
Both negative and positive consequences should be
analyzed.
– Prioritization (simple)
– Hints concerning affected areas for WHAT: cost
(internal/external), internal efficiency, customer
satisfaction, employee motivation, product quality
56. RCA Process guideline, cont.
• WHY did the fault occur?
– Brainstorming (quiet – write on stickers). Limit the number
to 7 notes.
– Walk-through: the participants shall, one at a time, read
their notes. Perform the grouping by letting each
participant suggest the potential group for each
issue
– Ask the question WHY for every cause in order to identify
the “real” root cause
– If necessary: grouping (e.g., divide into teams who perform
the grouping)
57. RCA Process guideline, cont.
• WHY did the fault occur?
– Estimate the effect and the difficulty of realization if this
cause is solved (use the ”four-fielder” – in Swedish:
fyrfältare)
– Prioritization: everyone has, e.g., 7 ”dots” to vote with
– Examples of potential groups: Poor documentation,
processes, competence, management, rules,
communication, handovers, Milestone/TG passages, etc.
58. RCA Process guideline, cont.
• Options for brainstorming:
– WHERE was the fault detected (or where should it have been)?
– WHEN was the fault detected (or when should it have been)?
• Define Actions
– The goal is to minimize the chance of having the same problem
again
– Choose a suitable number of “Why” (use the prioritized list)
– Divide into groups and let each group select one or two “Why”
– Each group shall propose actions; let each group
present its proposal, adjust if necessary

Action | Time plan | Responsible
59. RCA Process guideline, cont.
• Who will document and distribute the result from the
workshop?
• Who is responsible for tracking the actions?
• Budget/payment?
60. RCA workshop agenda
1. Introduction
2. Failure/problem definition consensus
3. Course of events
4. Consequence analysis
5. Cause analysis
6. Cause prioritization/grading
7. Identification of response actions
8. Conclusion/summary
61. RCA brainstorming guidelines
• Be specific, try to avoid being too general,
example: “insufficient testing” is (perhaps)
too general.
• One sticker==one issue
• Initials in order to identify who wrote it
• Write legible, short sentences if possible
62. Audit process
• To evaluate:
– Software elements
– The processes for producing them
– Projects
– Entire quality programs
• Purposes:
– To check compliance with ISO9001, TickITplus, CMMI
– To find elements for improvement
– To give an independent view of the status
– Are you following the defined processes?
– Can you find requested information?
63. When to Audit
• A project milestone, calendar date, or other
criterion has been (or is to be) met
• The audit is initiated by earlier plans
• External parties, stakeholders, require an audit
• A local organizational element has requested
the audit
64. Audit plans
• Which project processes to examine?
• Which software to examine. Sampling?
Selection criteria? Sample size?
• Reporting requirements. Necessary
improvements? Recommendations? Findings?
Defects?
• Required follow-up activities
• Activities, elements, and procedures necessary
to meet the scope of the audit
65. Audit plan, cont.
• Criteria that provide the basis for determining
compliance (provided as input)
• Audit procedures and checklists
• Audit personnel requirements (e.g., number,
skills, experience, and responsibilities)
• Organizations involved in the audit
• Date, time, place, agenda, and intended
auditees
66. Audits – overview meeting
• Overview of existing agreements (e.g., audit scope, plan, and
related contracts)
• Overview of production and processes being audited
• Overview of the audit process, its objectives, and outputs
• Expected contributions of the audited organization to the
audit process (the number of people to be interviewed,
meeting facilities, etc.)
• Specific audit schedule
67. Audit preparation
• Understand the organization
• Understand the products and processes
• Understand the objective audit criteria
• Prepare for the audit report
• Detail the audit plan
68. Audit preparation, cont.
• Team orientation and training
• Facilities for audit interviews
• Materials, documents, and tools required by the
audit procedures
• The software elements to be audited (e.g.,
documents, computer files, personnel to be
interviewed)
• Scheduling interviews
69. Audits – criteria for completion,
examples
• Each element within the scope has been examined
• Responses to draft findings have been received
• Findings have been presented
• A formal report has been written and distributed
• A recommendation plan has been presented
• Follow-up actions have been identified, planned,
executed, followed-up and completed.
• Verification of effectiveness of taken actions
70. The Audit shall not be perceived as …
• An eager attempt to find a lot of deficiencies
• An occasion for the auditor to show off
• Forced by upper management (or quality department)
• An activity which takes up unnecessary time for the
participants
• A necessary evil that must be done
• Something you can forget about after it has been done
• An exercise whose findings are too burdensome to address
71. Audits – Common problems
• Issues are not taken care of/solved
• Issues are not linked to business goals → the focus is NOT on
“how well”, but on “out of control”
• Goals and objectives are neither defined nor followed-up
• Only project managers are audited. ”Be where the action is”
• The follow-up is too weak
• Upper management has poor focus/commitment
• Project Management, PM, has poor focus
• The priority on solving issues is low
• Solving the issues seems to be a burden for the organization
• Work pressure results in low priority for preventive actions
72. Audits – Item ratings
RED – The issue is not under control. Several deficiencies
have been identified.
YELLOW – The issue is partly under control. A few deficiencies
have been identified.
GREEN – The issue is under control. No major/severe deficiencies
could be identified.
BLACK – Not applicable, or the issue was not handled/discussed
during the audit.
73. Experiences from Industrial Audits
• Mostly under control:
– Progress reporting
– CM handling
– Document handling
– Risk analysis
– Test environment
– Change request handling
– Sponsoring
– Product handling
– Service/Support handling
– Quite often – continuous improvement, the heart of ISO9001
74. Experiences from Industrial Audits
• Needs improvement:
• Using defined and tailored processes
• Reviews – planning and execution
• Communication
• Follow-up of risks – risk management
• Statement of compliance – “we know we can not fulfil the
requirements but have not negotiated it”
• Backwards compatibility
• Hardware requirements and availability of hardware
• Traceability
• Goals and Measurements
• Management commitment
75. More Defect Prevention Methods?
Or just follow existing processes?
• Training of Personnel
• Pair Programming
• Tools
• Test Driven Development
(TDD)
• Short sprints
• Structured Design (Object
Oriented)
• Requirements tracking
• Demos
• Pilots
• 7 management tools
• WBS
• Estimating methods
• Frequent feedback
meetings
• Etc.