
Are We There Yet? Signposts On Your Journey to Awesome


If you listen to grandiose tales of DevOps journeys, everything is awesome. But how can those of us not living in The Lego Movie transform our technology in smart and systematic ways? What is “awesome”? How do we point our organizations in that direction, and how will we know progress when we see it?

The best-performing IT organizations have the highest quality, throughput, and reliability while also showing value on the bottom line. When embarking on a journey of transformation, you want to measure your current status and subsequent progress while keeping tabs on factors that drive improvement in technology performance. Nicole Forsgren explains the importance of knowing how (and what) to measure—ensuring you catch successes and failures when they first show up, not just when they’re epic. Measuring progress lets you focus on what’s important and helps you communicate this progress to peers, leaders, and executives who decide budget. Business outcomes don’t realize themselves, after all, and “doing DevOps” doesn’t define stakeholder value any more than “being awesome” does.


  1. Are we there yet? Signposts on your journey to AWESOME. Nicole Forsgren, PhD, CEO and Chief Scientist, DevOps Research and Assessment (DORA). @nicolefv
  2. Who (What?) is DORA?
  3. Signposts • Direction, not destination • Your DevOps direction • Moving along your journey • Checking progress • Measures
  4. Maturity models are for CHUMPS
  6. Direction, NOT destination. Follow a Continuous Improvement paradigm.
  7. DevOps Direction? DevOps: the use of technology, process, and culture to deliver value to an organization. Is there "one metric that matters"?
  8. Our direction in DevOps. IT Performance: developing and delivering software with both speed and stability. • Deploy frequency • Lead Time for Changes • Mean Time to Recover (MTTR) • Change fail rate
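These four measures are concrete enough to compute directly from delivery data you likely already have. Below is a minimal sketch, assuming hypothetical deployment records (commit and deploy timestamps plus a remediation flag) and incident records; the field names and the `summarize` helper are illustrative, not part of any DORA tooling.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical records; in practice these would come from your CI/CD and incident systems.
@dataclass
class Deploy:
    committed_at: datetime
    deployed_at: datetime
    failed: bool  # change needed remediation (rollback, hotfix, patch)

@dataclass
class Incident:
    started_at: datetime
    restored_at: datetime

def summarize(deploys: list[Deploy], incidents: list[Incident], days: int) -> dict:
    """Compute the four IT performance measures over a reporting window of `days` days."""
    # Assumes a non-empty window; add guards for periods with no deploys or incidents.
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    restore_times = [i.restored_at - i.started_at for i in incidents]
    return {
        "deploy_frequency_per_day": len(deploys) / days,
        "median_lead_time_hours": median(lt.total_seconds() for lt in lead_times) / 3600,
        "mttr_hours": sum(rt.total_seconds() for rt in restore_times) / len(restore_times) / 3600,
        "change_fail_rate": sum(d.failed for d in deploys) / len(deploys),
    }
```

Even a rough version of this, run over the same window each month, gives you a baseline to compare against.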
  9. 46x more frequent code deployments. That's the difference between multiple times per day and once a week or less. 440x faster lead time from commit to deploy. That's the difference between less than an hour and more than a week.
  10. 96x faster mean time to recover from downtime. That means high performers recover in less than an hour instead of several days. 1/5x as likely that changes will fail. That means high performers' changes fail 7.5% of the time instead of 38.5%.
  12. Is this the DevOps Journey? (Chart: IT Performance from Low to Med to High, moving from low speed, stability, innovation, and retention to high speed, stability, innovation, and retention.)
  13. But wait. Tech transformations are HARD.
  14. The J curve. (Chart: "The WORK" sits between low speed, stability, innovation, and retention and high speed, stability, innovation, and retention.)
  15. Some evidence of the J curve • 2016 State of DevOps Report: unplanned rework • 2017 State of DevOps Report: manual work (Chart: High 21%, Medium 32%, Low 27%.)
  16. Can both be true? (Chart: "The WORK" dip overlaid on the Low to Med to High IT Performance journey.)
  24. Why do we care? Why even start this journey? Why hit that dip in the J-curve?
  25. High Performing organizations are twice as likely to achieve or exceed goals. Commercial Goals • Productivity • Profitability • Market Share • # of customers
  26. High Performing organizations are twice as likely to achieve or exceed goals. Commercial Goals • Productivity • Profitability • Market Share • # of customers. Non-Commercial Goals • Quantity of products or services • Operating efficiency • Customer satisfaction • Quality of products or services provided • Achieving organizational and mission goals
  27. But wait, there's more! Once you're a High Performer, there's evidence that you can overcome the mythical man-month. Etsy code deployment: what once required 6-14 hours and an "Army" now takes 15 minutes and 1 person. 30+ deploys per day in 2013 (Mike Brittain, Continuous Deployment: The Dirty Details); 50 deploys per day in March 2014 (Daniel Schauenberg, QCon London); 80-90 deploys per day in April 2014 (ChefConf; tweet by @philkates).
  28. (Chart: deploys per day vs. number of developers for Low, Med, and High performers; High is linear.) Source: Puppet Labs 2015 State of DevOps Report: https://puppetlabs.com/2015-devops-report
  29. #worth
  30. Your DevOps Signposts • IT Performance: Direction • Deploy frequency • Lead time • MTTR • Change fail rate • Look for improvements in all four • We know that all four in tandem are possible • Watch for: sustained tradeoffs! • Not possible to see or notice if you aren't measuring!
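One way to act on "watch for sustained tradeoffs" is to compare the four measures period over period and flag anything moving the wrong way while the rest improve. A minimal sketch, assuming each period has been summarized into a metrics dict like the one sketched earlier; the tolerance and example numbers are illustrative.

```python
# For deploy frequency, higher is better; for the other three, lower is better.
HIGHER_IS_BETTER = {"deploy_frequency_per_day"}

def tradeoffs(previous: dict, current: dict, tolerance: float = 0.05) -> list[str]:
    """Return metric names that degraded by more than `tolerance` (fractional change)."""
    degraded = []
    for name, prev_value in previous.items():
        if prev_value == 0:
            continue  # avoid divide-by-zero on an empty baseline
        change = (current[name] - prev_value) / prev_value
        got_worse = change < -tolerance if name in HIGHER_IS_BETTER else change > tolerance
        if got_worse:
            degraded.append(name)
    return degraded

# Example: deploys got more frequent, but change failures crept up.
prev = {"deploy_frequency_per_day": 1.0, "median_lead_time_hours": 24.0,
        "mttr_hours": 4.0, "change_fail_rate": 0.15}
curr = {"deploy_frequency_per_day": 3.0, "median_lead_time_hours": 20.0,
        "mttr_hours": 4.2, "change_fail_rate": 0.25}
print(tradeoffs(prev, curr))  # ['change_fail_rate'] -- speed is up, stability is slipping
```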
  31. Your DevOps Signposts • Waste Work: Detours (J-curve) • Unplanned rework • Manual work • Quality proxies (specific to your context) • Defect incidents, security remediation. These help you judge the depth and breadth of your J-curve.
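To judge the depth of your own J-curve, it helps to track what share of the team's time goes to waste work. A minimal sketch, assuming a hypothetical work log tagged by category; the categories, hours, and data source are illustrative.

```python
from collections import Counter

# Hypothetical work log of (category, hours). In practice this might come from
# your ticketing system or a lightweight weekly check-in.
work_log = [
    ("feature", 22.0), ("unplanned_rework", 6.5), ("manual_deploy", 3.0),
    ("feature", 18.0), ("unplanned_rework", 4.0), ("manual_deploy", 2.5),
]

WASTE = {"unplanned_rework", "manual_deploy"}

hours = Counter()
for category, h in work_log:
    hours[category] += h

total = sum(hours.values())
waste_share = sum(hours[c] for c in WASTE) / total
print(f"Waste work: {waste_share:.0%} of tracked hours")  # 29% for this sample log
```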
  32. One final (important) check • Is this sustainable? • Deploy pain • Burnout
  33. Microsoft Engineering: DevOps Lessons (Thiago Almeida, DevOps Days London, 2016). Work/Life Scores: Before DevOps: 38%; After DevOps: 75%.
  34. We know our signposts for our journey. But how do we move from Low to Medium to High Performance?
  36. We know there are key capabilities that drive IT Performance. They fall into four categories: • Tech and automation • Process • Measurement/monitoring • Culture
  37. Tech and automation • Version control • Deployment automation • Continuous integration • Trunk-based development • Test automation • Test data management • Shift left on security • Continuous delivery • Loosely-coupled architecture • Architect for empowered teams
  38. Process • Gather and implement customer feedback • Work in small batches • Lightweight change approval process • Team experimentation
  39. Trunk-based development & change approval process. By focusing on trunk-based development and streamlining their change approval processes, Capital One saw stunning improvements in just two months.
  40. Measurement and Monitoring • Visual management • Monitoring for business decisions • Check system health proactively • WIP limits • Visualizations
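As a small example of making one of these visible, a WIP limit is easy to check automatically. A minimal sketch, assuming a hypothetical board snapshot of in-progress items per owner; the limit, names, and ticket IDs are illustrative.

```python
# Hypothetical snapshot of in-progress work items, keyed by owner.
in_progress = {
    "alice": ["PAY-101", "PAY-117", "PAY-120"],
    "ben":   ["WEB-88"],
    "chen":  ["API-12", "API-19"],
}

WIP_LIMIT = 2  # illustrative per-person limit

# Flag anyone carrying more concurrent work than the team agreed to.
for owner, items in sorted(in_progress.items()):
    if len(items) > WIP_LIMIT:
        print(f"{owner} is over the WIP limit ({len(items)} > {WIP_LIMIT}): {', '.join(items)}")
```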
  42. Culture • Westrum organizational culture • Climate for learning • Collaboration among teams • Make work meaningful • Transformational leadership
  43. Westrum Organizational Culture:
      • Pathological (power-oriented): low cooperation; messengers shot; responsibilities shirked; bridging discouraged; failure leads to scapegoating; novelty crushed.
      • Bureaucratic (rule-oriented): modest cooperation; messengers neglected; narrow responsibilities; bridging tolerated; failure leads to justice; novelty leads to problems.
      • Generative (performance-oriented): high cooperation; messengers trained; risks are shared; bridging encouraged; failure leads to inquiry; novelty implemented.
  45. The hard part • Prioritizing work • The quick-wins part of the J-curve makes prioritizing easy; the growing-complexity part makes it difficult.
  46. Where should I start? • "It depends." Everyone is different. • Patterns I see often: • Architecture is the highest contributor to Continuous Delivery (SODR 2017) and shows up for many teams (DORA: as the need for loosely-coupled architecture or trunk-based development) • Lightweight change approval process is a constraint for most teams (DORA) • Continuous integration (DORA, and its full complement)
  48. So what to do? 1. Identify your constraints. Pick 1-3. 2. Work to eliminate those constraints. 3. Re-evaluate your environment and system. 4. Rinse and repeat.
  49. Maturity models are for CHUMPS (redux)
  50. Can both be true? (Chart: "The WORK" dip overlaid on the Low to Med to High IT Performance journey.)
  54. How to measure? • It's important to have good metrics, wherever you get them from. • If you know they're sh*t, toss them and start over. • Value a real baseline, even if it's not encouraging.
  55. Getting a full picture of your system is important. • Start now. • Full system instrumentation takes time. • Use system and survey measures to give you coverage.
  57. Types of measures to collect • Data about systems and process from systems • Data about systems and process from people • Data about people from people
  58. Data about systems and process from systems • This data is good for: • Precision • Continuous data • Specific data • Volume/scale • This data is not good for: • A holistic view of your system • Capturing drifts in data • Capturing behavior outside of the system • Cultural or perceptual measures
  59. Data about systems and process from people • This data is good for: • Accuracy (if collected with validated measures) • A holistic view of your system • Triangulation with system data • Capturing behavior outside of the system • Perceptual measures related to the system (e.g., deploy pain) • This data is not good for: • Precision (e.g., milliseconds) • Continuous data (survey fatigue is a thing) • Measures in strained environments (where it is not safe to be honest) • Volume (once you pass ~20 minutes, generally bad news)
  60. An example… Data from people and systems helps us paint a more complete picture.
  61. One or two system metrics might hint at this:
  62. More complete system instrumentation might tell us this:
  63. Our people might tell us this:
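A minimal sketch of that triangulation, assuming hypothetical per-team system measurements (median lead time from pipeline data) and survey results (self-reported deploy pain on a 1-5 scale); the team names, thresholds, and field names are illustrative.

```python
# System data: median lead time in hours, pulled from pipeline instrumentation.
system_lead_time = {"payments": 2.0, "search": 30.0, "checkout": 3.0}

# Survey data: average self-reported deploy pain, 1 (painless) to 5 (very painful).
survey_deploy_pain = {"payments": 4.3, "search": 4.1, "checkout": 1.8}

for team in system_lead_time:
    fast = system_lead_time[team] < 24          # looks healthy from the system's view
    painful = survey_deploy_pain[team] >= 3.5   # the people tell a different story
    if fast and painful:
        print(f"{team}: metrics look good but deploys hurt -- dig into what the system "
              f"data is missing (manual steps, after-hours work, fear of failure)")
```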
  64. A real-world example: DOES SF 2016, Topo Pal, Capital One
  65. Data about people from people • This data is good for: • Understanding your culture • Capturing perceptual measures • Leading indicator • This data is not good for: • Continuous data (survey fatigue is a thing) • Measures in strained environments • HR system data is only a lagging indicator of what has already happened
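For perceptual measures like culture, the usual approach is a small set of validated Likert-scale items averaged into a construct score. A minimal sketch with hypothetical responses; the item keys loosely paraphrase Westrum-style statements, and the wording and scoring here are illustrative rather than the validated instrument.

```python
from statistics import mean

# Each respondent rates agreement 1 (strongly disagree) to 7 (strongly agree)
# with statements like "On my team, information is actively sought."
responses = [
    {"info_sought": 6, "messengers_safe": 5, "failures_inquiry": 6, "new_ideas_welcomed": 7},
    {"info_sought": 4, "messengers_safe": 3, "failures_inquiry": 4, "new_ideas_welcomed": 5},
    {"info_sought": 7, "messengers_safe": 6, "failures_inquiry": 6, "new_ideas_welcomed": 6},
]

# Score each respondent by averaging their items, then average across respondents.
per_person = [mean(r.values()) for r in responses]
culture_score = mean(per_person)
print(f"Westrum-style culture score: {culture_score:.2f} / 7")
```

Tracked over time, a score like this is a leading indicator; HR attrition data only tells you what has already happened.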
  66. Employees in high-performing organizations are 2.2 times more likely to recommend their organization as a great place to work.
  68. Leadership: Necessary but not Sufficient • Teams with the least transformational leaders (the bottom third) were one-half as likely to be high IT performers • Leaders cannot do it alone! Teams with the top 10% of transformational leaders performed no better than the median
  69. What can you do? • Read and share Data Driven by @dpatil and @hmason • Ask others about metrics • Where is metrics collection happening? • Is it considered key to improvement? • Think about your own metrics & signposts • Start with outcomes: IT Performance • Then think about what influences IT Performance: technology, process, monitoring, culture • Also measure your waste work to track your J-curve
  70. What we've talked about • Direction, not destination • Your DevOps direction • Moving along your journey • Checking progress • Measures
  71. For more metrics and data: • ROI whitepaper • Case studies • Peer-reviewed research • 2014-2017 State of DevOps Reports • Learn about assessment: devops-research.com
  72. Thank you! @nicolefv • nicolefv.com • devops-research.com
