2. David Herron
DCG
One of the original founders of the David Consulting Group, David
Herron is now a business development manager and VP of knowledge
solution services with DCG. With more than thirty-five years of experience
in functional measurement and software process improvement, David has
provided consulting and coaching services to a variety of IT organizations
throughout the US and Canada. He is an acknowledged authority in
performance measurement, process improvement, and organizational
change management; an advisor on functional measures, software process
improvement, project estimating, and agile; and a lecturer and coauthor of
several books on IT performance measurement. Contact him
at dherron@davidconsultinggroup.com.
3. Better Software Conference 2013
How to (Effectively) Measure Quality
Across Software Deliverables
Presenter:
David Herron
dherron@davidconsultinggroup.com
6. Tracking Software Quality
• Mr. I. M. A. Pib is upset. He is the VP of the Store Systems Division.
• He has just seen the first-quarter dashboard of results, and his #1 priority project, Store Inventory, has the greatest number of defects.
• Here is what was reported to Mr. Pib:

| Project         | Delivery | Cost (000's) | Defects |
| Vendor Mods     | On Time  | $500         | 12      |
| Pricing Adj.    | Late     | $760         | 18      |
| PO Special      | Early    | $80          | 5       |
| Store Inventory | On Time  | $990         | 22      |

• You are the development manager. How might you respond to Mr. Pib? Do we have all the information we need to properly evaluate these outcomes?
©2012 David Consulting Group
7. Tracking Software Quality
• Size (value) can serve as a normalizing metric.
• A cost per unit of work (rate) can now be calculated.
• Defect density* for Mr. I. M. A. Pib's project is in fact the lowest of all his projects.

| Project         | Delivery | Cost (000's) | Defects | Size (Value) | Rate      | Density |
| Vendor Mods     | On Time  | $500         | 12      | 250          | $2,000.00 | 0.048   |
| Pricing Adj.    | Late     | $760         | 18      | 765          | $993.46   | 0.024   |
| PO Special      | Early    | $80          | 5       | 100          | $800.00   | 0.050   |
| Store Inventory | On Time  | $990         | 22      | 1,498        | $660.88   | 0.015   |

* Defect Density is calculated as defects / size.
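The normalization above is simple arithmetic. A minimal sketch in Python, using the figures from the example table (the project-to-row mapping follows the table as reconstructed here):

```python
# Reproducing the slide-7 normalization: cost per unit of work (rate)
# and defect density (defects / size). Costs are in dollars.
projects = [
    # (name, cost $, defects, size)
    ("Vendor Mods",     500_000, 12,  250),
    ("Pricing Adj.",    760_000, 18,  765),
    ("PO Special",       80_000,  5,  100),
    ("Store Inventory", 990_000, 22, 1498),
]

for name, cost, defects, size in projects:
    rate = cost / size        # $ per unit of functional size
    density = defects / size  # defect density
    print(f"{name:15}  rate=${rate:8,.2f}  density={density:.3f}")
```

Run as-is, this shows Store Inventory with the highest raw defect count but the lowest density (0.015), which is the point of the slide.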
8. Size Does Matter
Finding – Nine out of ten projects that fail have not been properly sized.
Consider – When you build a house, you specify all the functions and features you want; these are your requirements. The builder then generates an estimate based on the size (square footage) of those requirements.
• Size is the key to effectively managing software projects.
9. Characteristics of an Effective Sizing Metric
• Meaningful to both developer and business user
• Defined (industry recognized)
• Consistent (methodology)
• Easy to learn and apply
• Accurate, statistically based
• Available when needed (early)
10. Function Points – An Effective Sizing Metric
Function Point Analysis is a standardized method for measuring the functionality delivered to an end user.
Benefits:
• Quantitative (objective) measure
• Industry data as basis for comparison
• Expectations (perceived customer value) managed
• Software process improvement requirements satisfied
11. The Function Point Methodology
Five key components are identified based on the logical user view:
• External Inputs
• External Outputs
• External Inquiries
• Internal Logical Files
• External Interface Files
(Diagram: the five component types shown relative to the application boundary – inputs, outputs, and inquiries crossing it; internal logical files inside it; external interface files referenced from outside it.)
12. What Do We Count?
• Input files and input transactions
• Application screens (adds, changes, deletes, queries)
• Output files and output transactions (batch interfaces)
• Other outputs: reports, files, XML, views, fiche, tape, diskettes, letters, notices, alarms
• Internal logical files (tables, data files, control files)
• Control information
• External tables & files referenced from other applications (not maintained)
13. How Do We Count?
• Identify and classify the base functional components
  – Measure the data functions
    • Internal groupings of data, called Internal Logical Files (ILF)
    • External groupings of data, or External Interface Files (EIF)
  – Measure the transactional functions
    • External Inputs (EI)
    • External Outputs (EO)
    • External Inquiries (EQ)
  – Each function is assigned a functional complexity (Low–Average–High) and a weight (FPs)
• Calculate the functional size
• Document the function point count
• Report the result of the function point count
14. Component Complexity & Weights
Complexity calculations are a function of the number of data elements, the files referenced, and data complexity.

Complexity weights by component:

| Component                     | Low    | Average | High    | Total |
| Internal Logical File (ILF)   | __ x 7 | __ x 10 | __ x 15 | ___   |
| External Interface File (EIF) | __ x 5 | __ x 7  | __ x 10 | ___   |
| External Input (EI)           | __ x 3 | __ x 4  | __ x 6  | ___   |
| External Output (EO)          | __ x 4 | __ x 5  | __ x 7  | ___   |
| External Inquiry (EQ)         | __ x 3 | __ x 4  | __ x 6  | ___   |
Total Function Points: ___

Complexity rating, by Record Element Types / File Types Referenced (rows) and Data Elements, i.e. # of unique data fields (columns):

| RETs / FTRs | 1-4     | 5-15    | 16+     |
| 0-1         | Low     | Low     | Average |
| 2           | Low     | Average | High    |
| 3+          | Average | High    | High    |
15. The Counting Process
The Process:
1) Identify Components
2) Assess Complexity
3) Apply Weightings
4) Compute Function Points

| Component                     | Low    | Avg.    | High    | Total |
| Internal Logical File (ILF)   | __ x 7 | __ x 10 | __ x 15 | ___   |
| External Interface File (EIF) | __ x 5 | __ x 7  | __ x 10 | ___   |
| External Input (EI)           | __ x 3 | __ x 4  | __ x 6  | ___   |
| External Output (EO)          | __ x 4 | __ x 5  | __ x 7  | ___   |
| External Inquiry (EQ)         | __ x 3 | __ x 4  | __ x 6  | ___   |
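The four-step process above reduces to a weighted sum. A minimal sketch using the standard IFPUG weights from the matrix (the component counts in `example` are hypothetical, for illustration only):

```python
# Step 4 of the counting process: compute unadjusted function points
# as a weighted sum of component counts and complexity weights.
WEIGHTS = {          # (Low, Average, High) weights per component type
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
}

def function_points(counts):
    """counts: component type -> (low, avg, high) occurrence counts."""
    total = 0
    for comp, (n_low, n_avg, n_high) in counts.items():
        w_low, w_avg, w_high = WEIGHTS[comp]
        total += n_low * w_low + n_avg * w_avg + n_high * w_high
    return total

# Hypothetical component counts:
example = {"ILF": (2, 1, 0), "EIF": (1, 0, 0), "EI": (3, 2, 1),
           "EO": (2, 1, 0), "EQ": (2, 0, 0)}
print(function_points(example))  # 71 unadjusted function points
```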
17. Sizing Example
(Context diagram: users interact with a Purchase Order System, which exchanges payments and invoices with a vendor's Accounts Payable application.)
The Process:
1) Identify Components
2) Assess Complexity
3) Apply Weightings
4) Compute Function Points

| Component                     | Low    | Avg.    | High    |
| Internal Logical File (ILF)   | __ x 7 | __ x 10 | __ x 15 |
| External Interface File (EIF) | __ x 5 | __ x 7  | __ x 10 |
| External Input (EI)           | __ x 3 | __ x 4  | __ x 6  |
| External Output (EO)          | __ x 4 | __ x 5  | __ x 7  |
| External Inquiry (EQ)         | __ x 3 | __ x 4  | __ x 6  |
Function Point Size (Total): ___
18. Function Point Quality Measures
• Defect Density – measures the number of defects identified across one or more phases of the development project lifecycle and compares that value to the total size of the application.
  Defect Density = Number of defects (by phase or in total) / Total number of function points
• Test Case Coverage – measures the number of test cases necessary to adequately support thorough testing of a development project.
  Test Case Coverage = Number of test cases / Number of function points
19. Function Point Quality Measures
• Reliability – a measure of the number of failures an application experiences relative to its functional size.
  Reliability = Number of production failures / Total application function points
• Rate of Growth – growth of an application's functionality over a specified period of time.
  Rate of Growth = Current number of function points / Original number of function points
• Stability – used to monitor how effectively an application or enhancement has met the expectations of the user.
  Stability = Number of changes / Number of application function points
20. Measures of Quality
• Defect Removal Efficiency – used to evaluate the effectiveness of development quality activities
• Defect Density – used to evaluate the overall quality of the developed software
• Delivered Defects – used to evaluate the quality of the delivered software
• Test Cases Passed First Time – used to determine the quality of software being tested
• Inspection Rate by Document – used to determine if inspections positively impact quality
• Volatility – used to monitor trends in the number of changes per month
21. Non-FP Quality Measures
Defect Removal Efficiency – tracks the number of defects removed by lifecycle phase.

| Phase              | Reqs. | Design | Code  | Unit Test | Sys. Test | UAT   | Prod | Total |
| Insertion Rate     | 21    | 30     | 35    | 17        | 11        | 3     | –    | 117   |
| Defects Found      | 5     | 16     | 27    | 31        | 24        | 12    | 2    | 117   |
| Removal Efficiency | 4.3%  | 13.7%  | 23.1% | 26.5%     | 20.5%     | 10.3% | 1.7% |       |

Review Effectiveness (peer reviews: Reqs. through Code): 41.0%
Test Effectiveness (Unit Test through UAT): 57.3%

Customer Satisfaction – gather information relating to delivery performance, communication, management, solutions, etc., and the level of importance of each.
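The per-phase percentages and the two effectiveness figures can be reproduced from the defects-found counts alone:

```python
# Reproducing the slide-21 defect removal figures: each phase's removal
# efficiency is defects found in that phase over total defects (117).
found = {"Reqs.": 5, "Design": 16, "Code": 27,
         "Unit Test": 31, "Sys. Test": 24, "UAT": 12, "Prod": 2}
total = sum(found.values())  # 117

for phase, n in found.items():
    print(f"{phase:10} {n / total:6.1%}")

review_eff = sum(found[p] for p in ("Reqs.", "Design", "Code")) / total
test_eff = sum(found[p] for p in ("Unit Test", "Sys. Test", "UAT")) / total
print(f"Review effectiveness: {review_eff:.1%}")  # 41.0%
print(f"Test effectiveness:   {test_eff:.1%}")    # 57.3%
```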
22. A Measurement Baseline Model
• Quantitative (measures how you are doing): Size, Effort, Duration, Cost, Quality – yields Measured Performance.
• Qualitative (identifies what you are doing): Management, Requirements, Build, Test, Environment – yields Capability Maturity.
Together these establish a Baseline of Performance – a standard of performance.
23. Baseline Results: Example
• Small size projects are the norm.
• Performance levels vary across all
projects.
• The extent of variation is greater than
desired.
• Variation potentially driven by mixing
support and development tasks.
• Duration on small projects reflects
industry norms.
• Relatively high degree of consistency
seen in duration data suggests a basis
for an estimation model.
• Size to duration relationship suggests
that current methods are scalable.
(Charts: "Delivery Rate" plots project size against productivity in hours per function point; "Time to Market" plots project size against duration in months, for projects A through L.)
24. Quantitative Performance Evaluation Example
Quantitative Assessment (Size, Effort, Duration, Cost, Quality – measured performance):
• Perform functional sizing on all selected projects.
• Collect data on project level of effort, cost, duration, and quality.
• Calculate productivity rates for each project, including functional size delivered per staff month, cost per functional size, time to market, and defects delivered.

Baseline Results (baseline productivity):
| Average Project Size            | 133    |
| Average FP/SM                   | 10.7   |
| Average Time-To-Market (Months) | 6.9    |
| Average Cost/FP                 | $939   |
| Delivered Defects/FP            | 0.0301 |
25. Qualitative Performance Evaluation
Qualitative Assessment:
• Conduct interviews with members of each project team.
• Collect project profile information.
• Develop performance profiles to display strengths and weaknesses among the selected projects.

Profile areas and the factors assessed in each:
• Management: team dynamics, morale, project tracking, iteration planning, release planning, automation, leadership skills
• Definition: evolutionary requirements process, product owner involvement, experience levels, business impact
• Design: process, reviews, design reuse, customer involvement, experience, automation
• Build: code reviews, configuration management, code reuse, data administration, experienced staff, automation
• Test: formal testing methods, test plans, testing experience, effective test tools, customer involvement
• Environment: new technology, automated process, adequate training, organizational dynamics, certification
26. Modeled Improvements

Baseline profile scores:

| Project Name         | Profile Score | Management | Definition | Design | Build | Test  | Environment |
| Accounts Payable     | 55.3          | 47.73      | 82.05      | 50.00  | 46.15 | 43.75 | 50.00       |
| Priority One         | 27.6          | 50.00      | 48.72      | 11.36  | 38.46 | 0.00  | 42.31       |
| HR Enhancements      | 32.3          | 29.55      | 48.72      | 0.00   | 42.31 | 37.50 | 42.31       |
| Client Accounts      | 29.5          | 31.82      | 43.59      | 0.00   | 30.77 | 37.50 | 42.31       |
| ABC Release          | 44.1          | 31.82      | 53.85      | 34.09  | 38.46 | 53.13 | 42.31       |
| Screen Redesign      | 17.0          | 22.73      | 43.59      | 0.00   | 15.38 | 0.00  | 30.77       |
| Customer Web         | 40.2          | 45.45      | 23.08      | 38.64  | 53.85 | 50.00 | 34.62       |
| Whole Life           | 29.2          | 56.82      | 28.21      | 22.73  | 26.92 | 18.75 | 53.85       |
| Regional - East      | 22.7          | 36.36      | 43.59      | 0.00   | 30.77 | 9.38  | 30.77       |
| Regional - West      | 17.6          | 43.18      | 23.08      | 0.00   | 26.92 | 9.38  | 26.92       |
| Cashflow             | 40.6          | 56.82      | 71.79      | 0.00   | 38.46 | 43.75 | 38.46       |
| Credit Automation    | 23.5          | 29.55      | 48.72      | 0.00   | 38.46 | 6.25  | 26.92       |
| NISE                 | 49.0          | 38.64      | 56.41      | 52.27  | 30.77 | 53.13 | 53.85       |
| Help Desk Automation | 49.3          | 54.55      | 74.36      | 20.45  | 53.85 | 50.00 | 38.46       |
| Formula One Upgrade  | 22.8          | 31.82      | 38.46      | 0.00   | 11.54 | 25.00 | 46.15       |

Baseline productivity:
| Average Project Size            | 133    |
| Average FP/SM                   | 10.7   |
| Average Time-To-Market (Months) | 6.9    |
| Average Cost/FP                 | $939   |
| Delivered Defects/FP            | 0.0301 |

Process Improvements:
• Peer Reviews
• Requirements Management
• Configuration Management

Modeled profile scores after improvement:

| Project Name         | Profile Score | Management | Definition | Design | Build | Test  | Environment |
| Accounts Payable     | 75.3          | 61.73      | 82.05      | 60.00  | 60.15 | 53.75 | 50.00       |
| Priority One         | 57.6          | 57.00      | 55.72      | 18.36  | 45.46 | 22.00 | 49.31       |
| HR Enhancements      | 52.3          | 32.55      | 51.72      | 23.00  | 42.31 | 57.50 | 49.31       |
| Client Accounts      | 69.5          | 53.82      | 65.59      | 12.00  | 50.77 | 67.50 | 49.31       |
| ABC Release          | 74.1          | 55.82      | 69.85      | 49.09  | 52.46 | 63.13 | 49.31       |
| Screen Redesign      | 67.0          | 43.73      | 63.59      | 21.00  | 36.38 | 20.00 | 51.77       |
| Customer Web         | 59.2          | 49.45      | 27.08      | 58.64  | 53.85 | 54.00 | 49.62       |
| Whole Life           | 50.2          | 49.82      | 32.21      | 27.73  | 31.92 | 24.75 | 53.85       |
| Regional - East      | 57.7          | 59.36      | 49.59      | 0.00   | 30.77 | 9.38  | 50.77       |
| Regional - West      | 52.6          | 55.18      | 30.08      | 0.00   | 33.92 | 19.38 | 26.92       |
| Cashflow             | 67.6          | 66.82      | 71.79      | 0.00   | 49.46 | 53.75 | 49.46       |
| Credit Automation    | 60.5          | 41.55      | 78.72      | 0.00   | 50.46 | 26.25 | 46.92       |
| NISE                 | 79.0          | 68.64      | 76.41      | 62.27  | 65.77 | 53.13 | 53.85       |
| Help Desk Automation | 79.3          | 64.55      | 74.36      | 47.45  | 63.85 | 54.00 | 58.46       |
| Formula One Upgrade  | 52.8          | 49.82      | 52.46      | 0.00   | 31.54 | 25.00 | 56.15       |

Modeled productivity improvement:
| Average Project Size            | 133    |
| Average FP/SM                   | 24.8   |
| Average Time-To-Market (Months) | 3.5    |
| Average Cost/FP                 | $467   |
| Delivered Defects/FP            | 0.0075 |

Performance Improvements:
• Productivity ~ +131%
• Time to Market ~ -49%
• Defect Ratio ~ -75%
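The headline improvement percentages follow directly from the baseline and post-improvement metrics; note the productivity figure rounds to +132% here versus the ~+131% quoted on the slide:

```python
# Checking the reported improvements against the baseline and
# post-improvement metrics above.
def pct_change(before, after):
    return (after - before) / before * 100

print(f"Productivity (FP/SM): {pct_change(10.7, 24.8):+.0f}%")     # +132%
print(f"Time to market:       {pct_change(6.9, 3.5):+.0f}%")       # -49%
print(f"Delivered defects/FP: {pct_change(0.0301, 0.0075):+.0f}%") # -75%
```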
27. Overall Information Framework
(Dashboard graphic, condensed: project-level measures roll up to enterprise-level information.)
• Executive Management Dashboard – milestone tracking from Checkpoint A (Charter & Kickoff) through Vendor Selection Complete, PMP/Schedule Complete, Checkpoint B (Planning & Reqs), Requirements Complete, Design Complete, Checkpoint C (Midpoint), Development Complete, Testing Complete, Training Complete, Checkpoint D (Deploy & Close), Go Live, and Lessons Learned/Customer Satisfaction Survey Complete, each with baseline, plan, and actual dates and percent variance.
• Project status charts – resource and effort status (cumulative planned effort allocated, cumulative actual effort spent, "earned value," baseline total hours), defect status (total defects discovered vs. total closed defects), and requirements growth and stability (added, changed, and deleted requirements vs. total baselined requirements).
• Process management cycle – Define, Measure, Execute, Control, Improve – feeding a measurement repository and process database.
• PAL – per-release qualitative profile scores (Management, Requirements, Design, Build, Test, Environment) for releases such as BI Product Releases (Q2 2007), EDW Phase IV: Applicant Tracking System, CRM Product Maintenance Releases (Q3 2007), Road to 90: In Bound, SAR PM 2.0, and the Web v2.x enhancement/maintenance releases.
• Outputs – enterprise performance measures, historical measures, end-user project estimates, and business decisions.
28. Dashboard / Service Levels

| Measure Name | Calculation | Notes | Example | Median | Industry (Median)* | Goal by 2012 |
| Estimating Accuracy - Effort | (actual labor hours - estimated) / estimated | positive values represent overruns; negative, underruns | (1000-500)/500 = +100% overrun | +22% | 0% | 18% |
| Estimating Accuracy - Schedule | (actual calendar months - estimated) / estimated | positive values represent overruns; negative, underruns | (4-3)/3 = +33% overrun | +21% | 0% | 18% |
| Productivity | function points / labor months | varies with project size | 100 FPs / 4 staff months = 25 | 17 | 26 | 20 |
| Unit Cost | dollars / function points | dollars are estimated from labor hours @ $110 per hour x 145 hrs per staff month | $200,000/100 = $2,000 | $938 | $613 | $800 |
| System Delivery Rate | function points / calendar months | QSM value is a mean; median not available | 100 FPs / 2 calendar months = 50 | 32 | 49 | 40 |
| Requirements Volatility | added, changed, deleted / total baselined rqts | for all but one project, data not available; project manager gave an estimate | 10 changed / 100 baselined = 10% | 20% | 10% | 15% |
| Client Satisfaction | ratings by project manager (5 = very satisfied, 1 = very unsatisfied) | for all but three projects, ratings by clients unavailable | | 4 | Not available | 4 |
| System Test Effectiveness | defects found in system test / total defects | total defects = defects found in system test + defects found in production (first 30 days) | 40 / 50 = 80% | 83% | 90% | 90% |
| Delivered Defect Density (defects per 100 function points) | (defects found in production / function points) x 100 | production = first 30 days | (5 defects / 200 FPs) x 100 = 2.5 | 2.3 | 1.3 | 1.8 |

* Industry data drawn primarily from Level 3 organizations.
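The estimating-accuracy measures are simple relative errors. A minimal sketch using the dashboard's own worked examples:

```python
# Estimating accuracy as defined on the dashboard: (actual - estimated)
# / estimated; positive values are overruns, negative are underruns.
def estimating_accuracy(actual, estimated):
    return (actual - estimated) / estimated

# The dashboard's worked examples:
print(f"{estimating_accuracy(1000, 500):+.0%}")  # +100% effort overrun
print(f"{estimating_accuracy(4, 3):+.0%}")       # +33% schedule overrun
```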
29. Alternative Sizing Options

| Sizing Technique   | Standard | Lifecycle    | Comparative Data |
| Lines of Code      | No       | Build        | No               |
| Modules/Components | No       | Design       | No               |
| Use Cases          | No       | Requirements | No               |
| Story Points       | No       | Requirements | No               |
| Function Points    | Yes      | Requirements | Yes              |
| COSMIC             | Yes      | Requirements | Partial          |
| NESMA              | Yes      | Requirements | Partial          |
| Mark II            | Yes      | Requirements | Limited          |
30. Alternative Sizing Options
Two dimensions distinguish the sizing techniques:
• Organizational-specific vs. industry-defined: hours/days, story points, use case points, modules, test cases, and lines of code rely on internal definitions; COSMIC, NESMA FP, IFPUG Function Points, and Mark II are industry defined.
• Ease of use vs. power (consistency/accuracy): methods with fewer rules (hours/days, story points, lines of code, use case points) are easier to learn but less accurate; methods with more rules (COSMIC, NESMA FP, IFPUG Function Points, Mark II) are harder to learn but more accurate – power increases along the same axis.
31. Summary
• Quality is defined as a measure of value for the customer.
• Size is a critical normalizing metric.
• FPA serves as an effective sizing method.
• Historical baseline data can provide potential predictive capability.