1. The place of non-functional elements in measurement
CAST: The Leader in Software Analytics & Risk Prevention
Nesma Autumn Conference, November 9, 2017
Philippe-E. DOUZIECH, Principal Research Scientist
e: p.douziech@castsoftware.com
2. Agenda
Context presentation
• Drivers for FSM-related measurement
• Black box / white box measurement
• Integrated measurement
Measurement solutions – OMG standards
• Increase visibility on software size
• Increase visibility on software-related activity
• Increase visibility on software quality
• List of available metrics
Effective sizing metrics
• Objectives and definitions
• Samples
Conclusions
3. Drivers for FSM-related measurement
§ Benchmark selected normalized metrics across a group of applications
§ Trend selected normalized metrics over time, aggregated per application
§ Trend outcome/effort of a specific team (application team, country, sourcing…)
§ Trend sizing, quality, and complexity metrics in correlation with other metrics (cost, effort, time)
§ Scorecard with breakdown per line of business, application, or custom grouping (in-house or outsourced)
§ Trend of the defined metrics
§ Transparency, quality and risk management, and SLAs
§ Baseline analysis and ongoing comparisons of agility, responsiveness, and productivity change
§ Sizing, complexity, and quality information processed during a release, displayed per transaction
§ Standard deviation of effort (estimated vs. actual vs. measured)
Use cases: Normalization & Benchmarking; Productivity Measurement & Improvement; Measure Effectiveness of Transformation Initiative; ADM Supplier Measurement; Optimize ADM Estimation
4. Black box measurement
Client-provided inputs (cost, workload):
• Baseline and ongoing effort data (e.g., cost, hours, headcount)
• Baseline and ongoing staffing per release
• Incident/ticket reports
• Closed pre-production defects within time commitment / total closed pre-production defects
• Release schedule
These inputs feed the drivers and use cases listed on slide 3.
5. Black box / white box measurement
Adding the application source code enables white box measurement of quality, complexity, and size alongside the client-provided cost and workload data:
• Measure applications within an IT portfolio to establish a baseline
• Measure ongoing activity and trends of application size, complexity, and quality characteristics
• Normalize size, complexity, and quality metrics
• Benchmark the normalized metrics
White box metrics include:
• Automated Function Points (AFP) and Automated Enhancement Points (AEP)
• Effort, algorithmic, SQL, and object complexity
• Critical violations and violations with high weight
• Health Factor trends
6. Integrated measurement
Integrated measurement adds a practitioner on top of the source code analysis and the client-provided data. The practitioner's role:
§ Define a set of density ratios as a scorecard
§ Trend the set of density ratios
§ Benchmark normalized metrics
§ Establish a distribution channel for new metrics (CAST & internal)
§ Develop a rollout communication & training plan
7. Agenda
Context presentation
• Drivers for FSM-related measurement
• Black box / white box measurement
• Integrated measurement
Measurement solutions – OMG standards
• Increase visibility on software size
• Increase visibility on software-related activity
• Increase visibility on software quality
• List of available metrics
Effective sizing metrics
• Objectives and definitions
• Samples
Conclusions
8. What are the Available Standards?
• AFP: Automated Function Point
• AEP: Automated Enhancement Point
  - AEFP: Automated Enhancement Function Point
  - AETP: Automated Enhancement Technical Point
• ASCRM: Automated Source Code Reliability Measure
• ASCSM: Automated Source Code Security Measure
• ASCPEM: Automated Source Code Performance Efficiency Measure
• ASCMM: Automated Source Code Maintainability Measure
9. When to Use those Standards?
• Evaluation before: AFP (Automated Function Point) plus the quality measures (ASCRM, ASCSM, ASCPEM, ASCMM) on the baseline version
• Workload monitoring: AEP (Automated Enhancement Point), split into AEFP (Automated Enhancement Function Point) and AETP (Automated Enhancement Technical Point), covering both added and removed elements, plus the quality measures on the changes
• Evaluation after: AFP plus the quality measures on the new version
10. Size – Automated Function Points (AFP)
1. Measure the number of transactions managed by the application in order to measure the amount of functionality.
2. Automated Function Points is a technology-agnostic metric.
12. AFP Calculation – focus on a Transaction
(Abstract representation of software implementation)
13. Activity – Automated Enhancement Points (AEP)
From Version A (1,915 AFP) to Version B (1,940 AFP), the net functional size change is only +25 AFP, yet it hides the actual enhancement work:
• 1 to 2: Added new functionality, increasing the function point count (X EFP)
• 2 to 3: Removed localization features, reducing the function point count (Y EFP)
• 3 to 4: Modified existing functionality, with no net change in function point count (Z EFP)
• 4 to 5: Added new functionality, increasing the function point count (X' EFP)
AFP (Automated Function Points):
• Measures the number of transactions managed by the application in order to measure the amount of functionality
• Technology agnostic, and independent of the complexity and quality of an application
• Best used for overall functional sizing of an application (used on Run the Business)
AEFP (Automated Enhancement Function Points):
• A functional sizing unit that measures application enhancements and maintenance activities
• Measures the functional size of modifications (added, updated, deleted) between two releases of an application
• Best used to show the functional size of changes (add/delete/update) in releases (used on Change the Business)
A release mixing functional framework development, optimization (a cache mechanism), administration tasks, and technical debt reduction might yield, for example:
1,212 AEFP + 144 AETP = 1,356 AEP
(AEFP: Automated Enhancement Functional Points, for functional work; AETP: Automated Enhancement Technical Points, for technical work)
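To make the arithmetic above concrete, here is a minimal sketch, assuming only that AEP is the sum of AEFP and AETP as stated on this slide; the helper names are illustrative, not part of the OMG specification.

```python
# Minimal sketch of the AEP arithmetic on this slide. Function and variable
# names are illustrative, not part of the OMG specification.

def automated_enhancement_points(aefp: float, aetp: float) -> float:
    """AEP is the sum of functional (AEFP) and technical (AETP) enhancement points."""
    return aefp + aetp

# Example from the slide: 1,212 AEFP + 144 AETP = 1,356 AEP
print(automated_enhancement_points(1212, 144))  # 1356

# Net functional size change between two releases (Version A -> Version B):
afp_version_a, afp_version_b = 1915, 1940
print(afp_version_b - afp_version_a)  # +25 AFP, despite much more churn underneath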
14. AEP Calculation
The client delivers the application source code for the new version; the pipeline runs Packaging & Delivery, then Analysis & Calibration, and produces the AFP result.
15. AEP Calculation – “non-AFP” code
The application (100% of the code) splits into:
• Functional artifacts: code elements supporting the AFP (Automated Function Point) implementation
• Technical artifacts: every code element within the software boundaries that does not support the AFP implementation but supports the software functioning; these are maintained and evolved as well
AEP Calculation – Implementation Points
Each Artifact is assigned an Effort Complexity (EC), leading to Implementation Points (IP) when evolved
Algorithm complexity
Thresholds • simple
• medium
• complex
• very complex
Cyclomatic complexity (count program and
control decision statements)
SQL complexity
Thresholds • simple
• medium
• complex
• very complex
Raw SQL Complexity (based on # of tables, #
of Subqueries, # of FROM Clauses and other
GROUP BY per query)
Coupling (Fan in, Fan out)
Thresholds • simple
• medium
• complex
• very complex
Number of Links per components from or to the
component measured
Ratio of documentation
Thresholds
• simple
• medium
• lack of comments
• Not documented
( # of lines of comments - # of bad
comments) / # of line of code
Size of components
Thresholds • small
• medium
• large
• very Large
# of lines of code
Complexity
measurement
Checksum
Checksum of the element, used to check if the
component has been modified
Used Object by a transaction
Complexity measurement in Vn-1
Complexity processed
Complexity in Vn – Complexity Vn-1
Effort Complexity variation
‘Belong to’ information, used to check which transaction
will be view s modified if the checksum changed
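As an illustration of the bucketing idea described above, here is a hedged sketch; the threshold values and the aggregation into a single EC score are invented placeholders, since the actual calibration is technology-specific and not given on the slide.

```python
# Hedged sketch of assigning an Effort Complexity (EC) level per artifact from
# the dimensions above. Threshold values are invented placeholders; real
# deployments calibrate them per technology.

def bucket(value: float, limits: tuple) -> int:
    """Map a raw measure to a level 0..3 (simple .. very complex)."""
    low, mid, high = limits
    if value <= low:
        return 0
    if value <= mid:
        return 1
    if value <= high:
        return 2
    return 3

def effort_complexity(cyclomatic: int, sql_complexity: int, coupling: int,
                      comment_ratio: float, loc: int) -> float:
    """Aggregate the five bucketed dimensions into one EC score (assumed mean)."""
    levels = [
        bucket(cyclomatic, (5, 10, 20)),              # algorithmic complexity
        bucket(sql_complexity, (2, 5, 10)),           # raw SQL complexity
        bucket(coupling, (3, 8, 15)),                 # fan-in + fan-out links
        bucket(1 - comment_ratio, (0.5, 0.75, 0.9)),  # documentation, inverted
        bucket(loc, (50, 200, 500)),                  # component size
    ]
    return sum(levels) / len(levels)

print(effort_complexity(cyclomatic=12, sql_complexity=3, coupling=9,
                        comment_ratio=0.1, loc=340))  # 1.8 on this example
```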
17. AEP Calculation – using IP to compute AEFP
AEFP: Automated Enhancement Functional Points. For the functional artifacts (here 91% of the application), the Implementation Points of evolved artifacts and evolved transactions are converted into AEFP (598 AEFP in this example) using:
• the Complexity Ratio*, combined with the functional complexity of the evolved transactions
• the Reuse Ratio
* The Complexity Ratio includes complexity injected and complexity processed.
18. AEP Calculation – using IP to compute AETP
AETP: Automated Enhancement Technical Points. For the technical artifacts (here 9% of the application), the Implementation Points of evolved technical artifacts are converted into AETP (61 AETP in this example) using the AFP Equivalent Ratio (ER), which estimates how many AFP could have been added with the same implementation effort.
19. AEP Calculation – AEFP vs. AETP
• AEFP (Automated Enhancement Functional Points): enhancement on components that are part of a functional transaction; new code weighted by a complexity factor and a reuse factor. Dominant in functional releases.
• AETP (Automated Enhancement Technical Points): enhancement on components that are not part of a functional transaction. Dominant in technical and migration releases.
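A minimal sketch of how evolved Implementation Points might roll up into AEFP and AETP along the lines of slides 17-19; the ratio values and function names are assumptions, not the standard's calibrated figures.

```python
# Hedged sketch: rolling evolved Implementation Points (IP) up into AEFP and
# AETP. The ratio values below are illustrative assumptions, not the
# OMG-calibrated figures.

def aefp_from_ip(functional_ip: float, complexity_ratio: float,
                 reuse_ratio: float) -> float:
    """IP on components inside functional transactions -> AEFP."""
    return functional_ip * complexity_ratio * reuse_ratio

def aetp_from_ip(technical_ip: float, equivalent_ratio: float) -> float:
    """IP on components outside functional transactions -> AETP.
    The equivalent ratio estimates how many AFP the same effort could have added."""
    return technical_ip * equivalent_ratio

aefp = aefp_from_ip(functional_ip=800, complexity_ratio=0.9, reuse_ratio=0.83)
aetp = aetp_from_ip(technical_ip=120, equivalent_ratio=0.51)
print(round(aefp), round(aetp), round(aefp + aetp))  # e.g. 598 61 659
```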
20. Quality – Automated Source Code * Measures (ASCRM, ASCSM, ASCPEM, ASCMM)
1. Measure the number of occurrences of severe quality issues.
2. Particular focus on system-level patterns.
"System-level coding violations lead to 90% of production outages." (Ovum Research, 2014)
"Tracking programming practices at the Unit Level alone may not translate into the anticipated business impact, […] most devastating defects can only be detected at the System Level."
25. Automated Function Points – available metrics
For the whole software:
• Total AFP
• Transactional AFP
• Data AFP
For each Data AFP:
• # of DET/RET, complexity level, EIF/ILF
For each Transactional AFP:
• # of DET/FTR, complexity level, EI/EO (an EQ is counted as an EO)
26. Automated Enhancement Points – available metrics
For the whole software:
• Total Automated Enhancement Points
• Automated Enhancement Function Points (added/deleted/updated)
• Automated Enhancement Technical Points (added/deleted/updated)
• Implementation Points of AEFP (added/deleted/updated) (shared/not shared)
• Implementation Points of AETP (added/deleted/updated)
• Equivalence Ratio
For each Data AFP:
• Complexity Factor, DET (evolved)
For each Transactional AFP:
• Complexity Factor, Effort Complexity (variation/evolved/shared)
For each artifact (code element):
• Effort Complexity
27. ASC*M – available metrics
• Number of occurrences of the ASCRM-xxx-yyy reliability patterns
• Number of occurrences of the ASCSM-xxx-yyy security patterns
• Number of occurrences of the ASCPEM-xxx-yyy performance efficiency patterns
• Number of occurrences of the ASCMM-xxx-yyy maintainability patterns
28. Agenda
Context presentation
• Drivers for FSM-related measurement
• Black box / white box measurement
• Integrated measurement
Measurement solutions – OMG standards
• Increase visibility on software size
• Increase visibility on software-related activity
• Increase visibility on software quality
• List of available metrics
Effective sizing metrics
• Objectives and definitions
• Samples
Conclusions
29. Characteristics of Effective Sizing Metrics
Sizing metrics combine with trending metrics and third-party metrics into indicators such as: density of critical violations, added critical violation density, deleted critical violation density trending, productivity (effort), productivity (cost), defect density (in testing or production), and maintenance cost per function point.
An effective sizing metric is:
• Meaningful to developer and user/customer
• Defined (industry recognized)
• Consistent (methodology)
• Easy to learn and apply
• Accurate, statistically based
• Available when needed (early)
• Addressing project-level information needs
30. Release Assessment Overview
Normalized sizing information, normalized quality information, and normalized complexity information combine into the release assessment.
31. Model Performance
• Level 1: Baselining. Compare with a baseline, by reference or by technology.
• Level 2: Internal Benchmarks. Compare by dev methodology, business unit, vendor, or region.
• Level 3: Competitive Benchmarks. Compare by industry.
(Unit-price analogy: canned pears, 28 ounces, costs $1.35; canned pears, 16 ounces, costs $1.00.)
Benefits:
• Develop parametric models that utilize historical data to analyze the impact of selected process improvements
• Provide a knowledge base for improved decision making
• Identify areas of high impact (e.g., productivity and quality)
• Create an atmosphere of measuring performance
• Opportunity for comparison to industry best practices
33. Define: Density Measure
AIP-derived data only:
• Critical Violation Density = critical violations / AFP. Monitor outcomes of work performed based on acceptable risk (benchmark and trending).
• Added Critical Violation Density = added critical violations / AEP. Feedback to new teams to encourage learning and behavioral change, and to track subsequent progress (trending).
• Deleted Critical Violation Density = deleted critical violations / AEP. Track reduction of technical debt; identify high-performing teams; input to trade-off decision making (trending).
External data combined with AIP-derived data:
• Defect density (in testing or prod) = severity 1 & 2 defects (version to version) / AEP. Monitor outcomes against expected risk levels; correlate with the Critical Violation Density metric.
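The density measures above reduce to simple ratios; a short sketch, with illustrative names and inputs:

```python
# Sketch of the density ratios above; names and inputs are illustrative.

def critical_violation_density(critical_violations: int, afp: float) -> float:
    """Critical violations normalized by application size (AFP)."""
    return critical_violations / afp

def added_critical_violation_density(added_cv: int, aep: float) -> float:
    """New critical violations normalized by enhancement size (AEP)."""
    return added_cv / aep

def defect_density(sev1_sev2_defects: int, aep: float) -> float:
    """External severity 1 & 2 defect counts normalized by AEP."""
    return sev1_sev2_defects / aep

print(critical_violation_density(120, afp=1940))  # violations per function point
```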
34. Define: Unit Price or Unit Effort Measure
External data combined with AIP-derived data:
• Productivity (effort) = AEP / (dev + unit test effort, between t1 and t2). What is the effort required to make changes? Compare with the metric for staffing purposes.
• Productivity (cost) = AEP / (dev cost + unit test cost, version to version). What is the cost required to make changes? Compare with the metric for budgeting purposes.
• Maintenance cost per function point = maintenance cost (version to version) / AFP. What is the maintenance cost required to support changes? Target cost reduction.
Unit-price analogy: canned pears, 28 ounces, costs $1.35; canned pears, 16 ounces, costs $1.00. At about $0.05 per ounce versus about $0.06 per ounce, the 28-ounce can is the better buy; it costs less per ounce.
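The same unit-price logic applies to software; a sketch reproducing the canned-pears comparison and the ratios above (figures from the slide; function names illustrative):

```python
# Sketch of the unit-cost measures above, starting from the canned-pears
# unit-price comparison (figures from the slide; function names illustrative).

def unit_price(cost: float, size: float) -> float:
    return cost / size

print(unit_price(1.35, 28))  # ~0.048 $/oz for the 28 oz can
print(unit_price(1.00, 16))  # ~0.063 $/oz for the 16 oz can -> 28 oz is the better buy

def productivity_effort(aep: float, dev_and_unit_test_hours: float) -> float:
    """AEP delivered per hour of development + unit-test effort."""
    return aep / dev_and_unit_test_hours

def maintenance_cost_per_fp(maintenance_cost: float, afp: float) -> float:
    """Version-to-version maintenance cost normalized by AFP."""
    return maintenance_cost / afp
```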
35. Choose the Right Sizing Unit to Calculate Density Information
Story Points reflect team expertise, team experience, complexity processed, and complexity injected; Function Points reflect complexity injected only. Story Points therefore suit estimation and internal team correlation, while Function Points suit external team correlation and benchmarking.
§ A sizing measure should be as close as possible to the activity of the development team, to represent their best guess on the complexity of a story.
§ A sizing measure should enable predictability of the development team.
§ A sizing measure should enable ongoing velocity measurement.
§ A sizing measure should be independent of team characteristics (expertise, experience).
§ A sizing measure should be independent of application characteristics (technology, complexity).
§ A sizing measure should enable benchmarking across teams, technologies, and methodologies.
A good sizing unit for benchmarking is standard, scalable, and technology and team agnostic (e.g., 300 AEP rather than Story Points).
Example density formulas (see the sketch after this slide):
• Defect Density for Run The Business (RTB) = total number of defects in the application (external data) / total number of Automated Function Points in the application
• Defect Density for Change The Business (CTB) = number of defects introduced in the latest release / number of Automated Enhancement Points in the latest release
Other numerators correlated this way include critical violations, maintenance effort, and maintenance cost. Data collection feeds metric collection, which feeds correlation: metrics are volume-based, are viewed in the context of other business metrics, and all metrics are mapped to business outcomes.
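A sketch of the RTB/CTB defect-density formulas above; the defect counts are made-up inputs, and the 300 AEP figure echoes the slide:

```python
# Sketch of the RTB/CTB defect-density formulas above. Defect counts are
# made-up inputs; the 300 AEP figure echoes the slide.

def defect_density_rtb(total_defects: int, total_afp: float) -> float:
    """Run The Business: all known defects over total application size."""
    return total_defects / total_afp

def defect_density_ctb(release_defects: int, release_aep: float) -> float:
    """Change The Business: defects introduced by the latest release over its AEP."""
    return release_defects / release_aep

print(defect_density_rtb(total_defects=97, total_afp=1940))
print(defect_density_ctb(release_defects=6, release_aep=300))
```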
36. Client 1 – 300 applications
At a glance:
• 309 applications; minimum 1 scan per month for all 300 in production; 80 applications scheduled to be scanned weekly; 12 application "module" scans on demand (usually 2 a week for high development)
• Full quality and FP configuration, early AEP stages
• Average scan time is 30-45 min
• Mix of technologies from mainframe (Cobol, RPG, PL1) to 3GL (C, C++) and current stacks (.NET, JEE)
• Fully automated scanning and reporting (Jenkins), moving to JIRA as well
• Total support staff for back office: 3
• FP calibration done on 10% of the apps per year
• Expanding the data reporting and maturity
Measurement scorecard (source / level: measure; each measure is flagged on the original slide as a primary or secondary indicator against cost to operate, business delivery, and productivity):
• CAST / Module: Tech Debt Density
• CAST / Module: Dead Code
• CAST / Product: Defect removal efficiency %. DRE = total defects remediated / total defects found before release.
• CAST / Module: In-process risk (new CV / EFP)
• Client / Application: Average ticket fix effort (time per ticket). External data required: tickets, time.
• Client / Application: Ticket volume, including all customer complaints in the warranty period (typically 30 days from release). External data required: tickets, time.
• Client / Product: Release throughput (agility): business functionality delivered by release; specs produced vs. delivered (functional).
• Mixed / Group: Effort per EFP (or AEFP): calculated effort/AEFP or actual effort/AEFP. External data required: time.
• Mixed / Group: EFP delivered per 100 worked hours. External data required: time.
• Mixed / Product: Defects per 100 resource hours. External data required: time.
• Mixed / Application: Development impact: calculated effort / tracked total effort; may be adjusted for change in quality; typically function points estimated vs. function points produced. External data required: FP estimated.
• Mixed / Program: CV per 100 resource hours. External data required: time.
• Mixed / Application: FTE per 1k AFP maintained (FTE: running average of team size). External data required: hours.
• Mixed / Application: Cost per 1k AFP maintained. External data required: cost.
Future plans:
• Q1 2018: integration of model-based estimates (statistical models of like projects); productivity, cost, and velocity at the team level
• Q2 2018: expansion by 100 applications as M&A completes; scans of M&A targets to estimate workload; full DevOps integration with weekly code drops
"CAST enables speed. As the portfolio changes, the information about that portfolio becomes the essential decision tool." (client executive)
37. Client 2 – 200 Vendor-Managed Applications
At a glance:
• 200 applications; minimum 1 scan every 4 months (production); 100 vendor applications scheduled to be scanned monthly, with SLA penalties and incentives; 2 application "module" scans on demand (usually 2 a week for high development)
• Full quality and FP configuration, early AEP stages
• Average scan time is 1-1.3 hrs
• Mix of technologies from mainframe (Cobol, PL1) to 3GL (C, C++) and current stacks (.NET, JEE); early Python adopter
• Fully automated (Jenkins), moving to JIRA as well
• Total support staff for back office: 2 FTE, plus 10 staff who also handle results consulting
• FP calibration done on 10% of the apps per year; manual FP counts on 40 applications, spending 400 effort hours (***) per application
Vendor management:
• Vendors on the hook to produce increasing FP/hr while not compromising quality
• Results reviewed monthly with true-ups
• All vendors are on the automated process ("standard of legitimacy" rule)
Future plans:
• Q4 2017: unit cost measure rollout (productivity) and model-based estimation; AEP pilot in progress; expansion of vendor involvement
• Q1 2018: module scan process for faster turnaround; JIRA flow to deliver results to developers faster; full DevOps integration with weekly code drops
38. Client 3 – 10 Applications, target 100
At a glance:
• 10 current applications; minimum 1 scan every month (production); rapid-response scans as needed based on production issues
• Full quality and FP configuration, fully automated, calibration toolkit
• Average scan time is 30 min
• Mix of technologies from mainframe (Cobol, PL1) to 3GL (C, C++), but mostly focused on Cobol
• Fully automated (Jenkins), moving to JIRA as well, with a pull from source code control
• Total support staff for back office: 2 FTE
• FP calibration done by toolkit
Goals:
• Push velocity without compromising customer experience or security
• Increase FP/hr while not compromising quality
Future plans:
• Q4 2017: more applications, faster; improve predictable scan times; tighten the integration with Eclipse
• Q1 2018: push more information to the executive dashboard, including FP counts; investigate productivity using FP or AEP, depending on study results
39. Agenda
Context presentation
• Drivers for FSM-related measurement
• Black box / white box measurement
• Integrated measurement
Measurement solutions – OMG standards
• Increase visibility on software size
• Increase visibility on software-related activity
• Increase visibility on software quality
• List of available metrics
Effective sizing metrics
• Objectives and definitions
• Samples
Conclusions
40. Conclusions
CAST delivers analytics
• Based on OMG standards (AFP/AEP/ASCRM/ASCSM/ASCPEM/ASCMM)
• Assessing various aspects of software (size/activity/quality)
• Encompassing both functional and non-functional requirements
to increase visibility and to support effective indicators.
41. Thank you for attending
APPLICATION INTELLIGENCE PLATFORM
• Approx. 2,300 apps and 3 billion LoC
• Query by industry, technology & geo
• CRASH Annual Report
• CAST Research Labs
• Custom benchmarks
• SaaS, cloud based
• Source code analyzed where it resides
• Rapid portfolio analysis
• Portfolio continuous monitoring
• Software flaw detection
• Architectural analysis and blueprinting
• Critical violation drill-down
• Propagation risk
• Standards-based software metrics
• Automated function points
• Trend analysis
• Transaction risk