Measuring Performance:
See the Science of DevOps
Measurement in Action
February 27, 2018
Nicole Forsgren, PhD
2
Housekeeping
▪ This webinar is being recorded
▪ Links to the slides and the
recording will be made available
after the presentation
▪ You can post questions via the
GoToWebinar Control Panel
3
Your Hosts
Dr. Nicole
Forsgren
@nicolefv
Tim Buntel
@tbuntel
4
The Highlights
▪ Measuring Performance
− Common Mistakes
▪ Our Approach: Software Delivery Performance
− Where are you?
▪ Maturity Models
▪ Does Software Delivery Performance Matter?
▪ How to Improve Performance
▪ You can help
Common Mistakes
6
Common Mistakes
▪ Outputs vs. Outcomes
▪ Individual/local vs. Team/global
▪ Some common examples:
− Lines of code
− Velocity
− Utilization
7
Common Mistakes: Lines of Code
▪ More is better?
− Bloated software
− Higher maintenance costs
− Higher cost of change
▪ Less is better?
− Cryptic code that no one can read
▪ Ideal: solve business problems with most efficient code
8
Common Mistakes: Velocity
▪ Agile: problems are broken down into stories, which are
assigned “points” of estimated effort to complete
▪ At the end of a sprint, the total points signed off by the
customer are recorded as the velocity
▪ Velocity is a capacity planning tool. NOT a productivity tool.
▪ Why doesn’t this work for productivity?
− Velocity is a relative measure, not absolute, so it's a poor basis for
comparing teams
− Gaming by inflating estimates
− Focus on team completion at the expense of collaboration (a global
goal)
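Since velocity is a capacity-planning tool, a minimal sketch of how it is typically computed and used for planning (the point values below are illustrative, not data from the talk):

```python
# Minimal sketch: velocity as a capacity-planning signal, not a productivity score.
# Story-point values are illustrative, not from the talk.
completed_points = {
    "sprint_1": [3, 5, 2],  # accepted story points per story
    "sprint_2": [8, 3],
    "sprint_3": [5, 5, 3],
}

def velocity(stories):
    """Velocity = total points signed off in one sprint."""
    return sum(stories)

velocities = [velocity(s) for s in completed_points.values()]

# A rolling average informs how much work to plan for the NEXT sprint;
# comparing velocities ACROSS teams is meaningless, since points are relative.
average_velocity = sum(velocities) / len(velocities)
```

Because each team calibrates its own points, a velocity of 11 on one team says nothing about a velocity of 30 on another; the number only helps that team plan its own capacity.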
9
Common Mistakes: Utilization
▪ Utilization is only good up to a point
▪ Higher utilization is better?
− High utilization doesn’t allow slack for unplanned work
− Queueing theory: as utilization approaches 100%, lead times approach
infinity
− At higher and higher levels of utilization (a poor proxy for
productivity), teams take longer and longer to get work done
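The queueing-theory claim above can be made concrete with the standard M/M/1 result, where the average time a work item spends in the system grows as 1/(1 − utilization):

```python
# Sketch: M/M/1 queueing model showing lead time blowing up near 100% utilization.
# Average time in system W = service_time / (1 - utilization).
def lead_time(utilization, service_time=1.0):
    """Average time in system for an M/M/1 queue, in units of service_time."""
    assert 0 <= utilization < 1, "at 100% utilization lead time is unbounded"
    return service_time / (1 - utilization)

for u in (0.5, 0.8, 0.9, 0.99):
    print(f"{u:.0%} utilization -> lead time {lead_time(u):.0f}x service time")
# 50% -> 2x, 80% -> 5x, 90% -> 10x, 99% -> 100x
```

Doubling utilization from 50% to 99% does not double lead time; it multiplies it fifty-fold, which is why leaving slack for unplanned work pays off.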
Our Approach:
Software Delivery
Performance
11
Measuring Software Delivery Performance
▪ Focus on both Outcomes and Global measures:
- Deploy frequency (when business demands)
- Lead Time for Changes
- Mean Time to Recover (MTTR)
- Change Fail Rate
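As a sketch of how these four measures might be computed, assuming a hypothetical deployment-record format (the field names and data here are invented for illustration, not part of the DORA methodology):

```python
# Sketch: computing the four software delivery metrics from deployment records.
# Record format and values are hypothetical, for illustration only.
from datetime import timedelta

deployments = [
    {"lead_time": timedelta(hours=2),  "failed": False, "time_to_recover": None},
    {"lead_time": timedelta(hours=26), "failed": True,  "time_to_recover": timedelta(minutes=45)},
    {"lead_time": timedelta(hours=4),  "failed": False, "time_to_recover": None},
]
days_observed = 7

deploy_frequency = len(deployments) / days_observed            # deploys per day
median_lead_time = sorted(d["lead_time"] for d in deployments)[len(deployments) // 2]
change_fail_rate = sum(d["failed"] for d in deployments) / len(deployments)
recoveries = [d["time_to_recover"] for d in deployments if d["failed"]]
mttr = sum(recoveries, timedelta()) / len(recoveries)          # mean time to recover
```

The first two are throughput measures, the last two stability measures; tracking all four together is what guards against optimizing speed at the expense of reliability, or vice versa.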
12
We see
More throughput
More stability
In tandem. Without the tradeoffs that some suggest
are necessary.
13
High Performing DevOps Teams
More agile
More frequent
Code deployments
46x
That’s the difference between multiple
times per day and once a week or less.
Faster lead time from commit to
deploy
440x
That’s the difference between less than
an hour and more than a week.
14
High Performing DevOps Teams
More reliable
Faster mean time to
recover from downtime
96x
That means high performers recover in
less than an hour instead of several days
As likely that changes will
fail
1/5x
That means high performers' changes fail 0-15% of
the time, compared to 31-45% of the time.
15
Maturity Models
Maturity Models
Don’t Work
18
Maturity models are for CHUMPS
20
Industry changes:
Year over year
What is good enough in
one year is out of date
the next
(Chart: speed and stability benchmarks, year over year)
Does Software Delivery
Performance Matter?
22
“IT doesn’t matter.”
-- Nicholas Carr, 2003
23
DevOps is Good for Technology
▪ Measuring DevOps and Software delivery performance
- Deploy frequency (when business demands)
- Lead Time for Changes
- Mean Time to Recover (MTTR)
- Change Fail Rate
24
Software Delivery Performance comprises
throughput and stability,
and both are possible without tradeoffs
25
DevOps is
good for organizations
28
High Performing technology organizations are
twice as likely to achieve or exceed
Commercial Goals
• Productivity
• Profitability
• Market Share
• # of customers
Non-commercial Goals
• Quantity of products or services
• Operating efficiency
• Customer satisfaction
• Quality of products or services
• Achieving organizational or mission goals
50%
Higher market cap
growth over 3 years*
2x
How to Improve:
You can accelerate
your journey
30
Using an outcomes-based, capability-focused
approach, you can direct your efforts to the
capabilities that will most improve
software delivery performance.
31
We know there are key capabilities that drive
Software Delivery Performance
▪ They fall into four categories:
− Technology and automation
− Process
− Measurement/monitoring
− Culture
32
Technology and Automation
▪ Version control
▪ Deployment
automation
▪ Continuous
integration
▪ Trunk-based
development
▪ Test automation
▪ Test data
management
▪ Shift left on security
▪ Continuous delivery
▪ Loosely-coupled
architecture
▪ Architect for
empowered teams
33
Process
▪ Gather and implement customer feedback
▪ Work in small batches
▪ Lightweight change approval process
▪ Team experimentation
34
Measurement and Monitoring
▪ Visual management
▪ Monitoring for business decisions
▪ Check system health proactively
▪ WIP limits
▪ Visualizations
35
Culture
▪ Westrum organizational culture
▪ Climate for learning
▪ Collaboration among teams
▪ Make work meaningful
▪ Transformational leadership
36
Where should I start?
▪ “It depends.” Everyone is different
▪ Patterns I see often:
− Architecture is the highest contributor to Continuous Delivery (SODR 2017)
and shows up for many teams (DORA: as the need for loosely-coupled
architecture or trunk-based development)
− A lightweight change approval process is a constraint for most teams
(DORA)
− Continuous integration (DORA, and its full complement)
37
So What to Do?
1. Identify your constraints. Pick “a few.”
2. Work to eliminate those constraints.
3. Re-evaluate your environment and system.
4. Rinse and repeat.
38
Additional Resources
You Can Help!
40
Your Role in this
▪ Be the transformational leaders. Own this.
▪ Start by measuring a few things
− Focus on outcomes: Software delivery performance (speed & stability)
− Drive performance improvements through capability improvements,
both technical and non-technical
▪ Share your stories! Leverage community
41
For More Information:
For our ROI whitepaper, case
studies, the State of DevOps
Reports & peer-reviewed
research, visit
devops-research.com
Questions?
Thank You


Editor's Notes

  • #8 Historically: most measures of performance have focused on productivity
  • #32 Aka the DevOps