How to use data to improve software development teams and processes. Presented at the Prairie Dev Con Deliver conference October 2016. http://www.prdcdeliver.com
Data driven coaching - Agile 2016 (Troy Magennis) - Troy Magennis
Team data and dashboards can be misused and cause more pain than results. Having the team run blind to its historical data, though, is often worse, with opinions and gut-feel alone driving process change. Helping your teams see and understand a holistic balance of their data will give your coaching advice context and encourage constant team improvement through experiments and reflection.
Coaching dashboards are about balancing trade-offs: trading something your team is great at for something they want (or need) to improve. Only by having the team complete the feedback loop and confirm that an experiment had the intended impact will process improvement be continuous and sustainable.
This presentation shows how to expose data to teams so they can retrospect productively, determine whether a process experiment is panning out as expected, and vigorously explore process change opportunities. Recent research shows strong relationships between certain metrics and process and practices, and this session demonstrates how these metrics have been, and can be, tied to timely coaching advice.
The real-world dashboards demonstrated in this session show the most common problems and how to avoid them, with before-and-after shots and quotes from the teams impacted by them.
In this session you will –
- Learn how you can not only gather data, but use it to improve the process, with examples!
- Learn how you can tie data insights to coaching advice (data driven coaching)
- Learn how you can detect, predict and avoid data gaming and dashboard misuse
- Learn from my mistakes, and mistakes I’ve seen others make, with real examples of Agile coaching dashboards (good and bad)
Risk Management and Reliable Forecasting using Unreliable Data (Magennis) - Troy Magennis
To meet expectations and optimize flow, managing risk is an important part of Kanban. Anticipating and adapting to things that "go wrong", and the uncertainty they cause, is the topic of this session. We look at techniques for quantifying which risks should be considered important enough to deal with.
Although discouraged, forecasting size, effort, staff, and cost is sometimes necessary. Of course we should do as little of this as possible, but when we do, we have to do it well with the data we have available. Forecasting is made difficult by unreliable inputs to our process: the amount of work is uncertain, and the historical data we base our forecasts on is biased and tainted. The situation seems hopeless, but it isn't. Good decisions can be made on imperfect data, and this session discusses how. It shows immediately usable, simple techniques to capture, analyze, cleanse, and assess data, and then use that data for reliable forecasting.
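As a hedged illustration of the "cleanse and assess" step described above, a simple interquartile-range fence can flag suspect samples in tainted history before they feed a forecast. The data, function name, and threshold below are illustrative assumptions, not material from the talk:

```python
def flag_outliers(samples):
    """Flag suspect cycle-time samples using a simple 1.5 * IQR fence.

    Flagged values are candidates for review (e.g. items that sat blocked),
    not automatic deletions -- assess them before they feed a forecast.
    """
    ordered = sorted(samples)
    n = len(ordered)
    q1, q3 = ordered[n // 4], ordered[(3 * n) // 4]   # rough quartile positions
    fence = q3 + 1.5 * (q3 - q1)
    return [s for s in samples if s > fence]

cycle_times_days = [3, 4, 5, 4, 6, 5, 41, 4, 3, 5]   # 41: item blocked for weeks
print(flag_outliers(cycle_times_days))   # -> [41]
```

Whether a flagged value is noise or signal is a judgment call; the point is to make the review explicit rather than letting one blocked item silently skew every forecast built on the history.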
Second, and hopefully final, draft of the LKCE 2014 talk.
Prioritization – 10 different techniques for optimizing what to start next - Troy Magennis
10 different prioritization techniques to help understand what to START next. Shows the evolution from choosing at random up to full economic analysis. First presented at Agile 2017 in Florida.
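At the economic end of that spectrum, one well-known technique is Cost of Delay Divided by Duration (CD3). The sketch below is a generic implementation with invented numbers, not material from the talk itself:

```python
def cd3_order(items):
    """Order work by CD3 (Cost of Delay Divided by Duration), highest first.

    Starting the highest-CD3 item next minimizes the total cost of delay
    across the backlog when items must be done one at a time.
    """
    return sorted(items, key=lambda it: it[1] / it[2], reverse=True)

# (name, cost of delay per week, duration in weeks) -- all numbers invented
backlog = [
    ("feature-a", 10_000, 4),   # CD3 = 2500
    ("feature-b",  3_000, 1),   # CD3 = 3000
    ("feature-c", 12_000, 6),   # CD3 = 2000
]
print([name for name, *_ in cd3_order(backlog)])   # -> ['feature-b', 'feature-a', 'feature-c']
```

Note that the cheapest-looking item (feature-b) wins here not because it is big, but because it delivers its value fastest per week of delay avoided.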
I love the smell of data in the morning (getting started with data science) - Troy Magennis
Data Science 101 for software development. I know it misses the purist view of Data Science, but this is intended to get you started! First presented at Agile 2017 in Florida.
The document discusses using metrics to improve decision making for software projects. It explains that metrics should focus on outcomes that teams can influence and that predict future performance, and it provides examples of different types of metrics and modeling techniques that can help teams forecast delivery and make better decisions.
LKNA 2014 Risk and Impediment Analysis and Analytics - Troy Magennis
Software risk impact is more predictable than you might think. This session discusses similarities of uncertainty in various industries and relates this back to how we can measure and analyze impediments and risk for agile software teams.
What is the story with agile data - keynote Agile 2018 (Magennis) - Troy Magennis
This document discusses using data to improve agile practices and outcomes. It argues that agile has lost the "data war" by not capturing and utilizing data from teams effectively. It suggests that data needs to be handled safely to avoid embarrassing people and destroying the utility of historical data. Better ways are needed to measure outcomes rather than just output, and to balance predictability with creativity. The document also discusses visualizing and managing dependencies, comparing performance across teams, and using the right metrics depending on a team's characteristics and challenges. The overarching message is that data needs to be used carefully and conversationally to drive the right actions and improve agile practices.
Forecasting using data workshop slides for the Deliver conference in Winnipeg October 2016. This session introduces practical exercises for probabilistic forecasting. http://www.prdcdeliver.com
CYCLE TIME ANALYTICS: RELIABLE #NOESTIMATES FORECASTING USING DATA, TROY MAGENNIS - Lean Kanban Central Europe
If you are struggling to forecast project delivery dates and cost, or you want to eliminate the story estimation process because you feel it is waste, or you need to build the business case for hiring more staff, then this session is relevant to you. All estimates have uncertainty, and understanding how multiple uncertain factors compound is the first step to improving project and team predictability.
A major benefit of Lean is the lightweight capture of cycle time metrics. This session looks at how to use historical cycle time data to answer questions of forecasting and staff skill balancing, and compares the benefits of using cycle time for analysis over current planning techniques such as velocity, burn-down charts, and cumulative flow diagrams. It takes you on a journey of what to do after capturing cycle time data, and what to do if you have no history to rely upon.
Reducing reliance on developer estimation (popularized by the Twitter hashtag of the #NoEstimates movement) is good general advice, but having the tools to plan and manage teams and projects is still important to maintain support at the executive level. This session details approaches to getting the numbers you need while minimizing unnecessary overhead, estimating ONLY the factors that matter most.
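One lightweight use of captured cycle times in this style of analysis is quoting a percentile rather than an average. A minimal nearest-rank sketch, with hypothetical data:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the value that `pct` percent of items
    completed at or under. Quoting e.g. the 85th percentile gives an
    honest 'done within N days' answer without per-story estimation."""
    ordered = sorted(values)
    k = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[max(0, k)]

cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13]   # hypothetical days per item
print(percentile(cycle_times, 50))   # -> 5  (the "typical" item)
print(percentile(cycle_times, 85))   # -> 9  (a defensible commitment level)
```

The gap between the 50th and 85th percentiles is itself useful: the wider it is, the less predictable the system, regardless of what the average says.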
The document discusses some of the risks and challenges of data visualization and analytics programs in organizations. It argues that while complex data visualizations can work, they are difficult to implement successfully from scratch. Additionally, stakeholders may claim the benefits from outside ideas while only superficially complying with analytics recommendations. The document provides steps for organizations to truly realize change through data-driven insights, such as having leadership buy-in and starting with small, test-based implementations.
Data-Driven off a Cliff: Anti-Patterns in Evidence-Based Decision Making - indeedeng
The document discusses common anti-patterns in evidence-based decision making, including being impatient, taking shortcuts in sampling and analysis, focusing on a single metric, and believing too strongly in one's own conclusions. It provides examples of companies making misguided decisions due to these anti-patterns, such as ending A/B tests early, ignoring parts of a sample, overemphasizing short-term metrics, and overrelying on persuasive but incorrect stories. The document advocates being patient, rigorous in sampling and analysis, considering multiple relevant metrics, and acknowledging the potential for fallibility.
Indeed Engineering and The Lead Developer Present: Tech Leadership and Management - indeedeng
On March 1, 2018, Indeed hosted a series of talks about leadership and management in the tech industry. Lightning talks included Data Scientist Robyn Rap's "Fish a Manager to Teach," Product Manager Michael Magan's "What Your Product Manager Wants from a Tech Lead," and Engineering Manager Paresh Suthar's "New Engineering Manager at Indeed? First: Write Some Code."
Ketan Gangatirkar, head of Job Seeker Engineering, provided the keynote "Quantum Leap: From Managing a Team to Leading an Org."
The document describes a Kanban policy game that is used to model evolutionary change. The goal of the game is to practice evolutionary change in the context of a knowledge discovery process. The game models information arrival as binary values (1s and 0s) representing favorable or unfavorable information. It also models different types of blockers or impediments. The game is played over multiple periods to model evolutionary change by making small, incremental policy changes and measuring their effects. In period 1, current policies are made explicit as a baseline. In period 2, the WIP policy is changed to limit work in process. In period 3, the collaboration policy is changed to focus on team rather than individual performance.
Leveraging Analytics In Gaming - Tiny Mogul Games - InMobi
'Analytics in Gaming' and how you can use it to improve a game's acquisition, retention, and engagement, by Rajdeep Gumaste, Product Manager - Tiny Mogul Games.
You are the ultimate data wrangler. The polyglot master of Python and R. You know all about the differences between linear and logistic regression. You know when to use a dimensionality reduction algorithm and when to use a neural net. You have petabytes of data taking structured form at your command, and you have the R-squared score to prove it!
But all of your data wrangling and number crunching won't matter if the decision makers ignore your data.
The tools to communicate the message in your data are simple, yet they can be hard to learn. So, let’s talk about the five critical communication tools you need to master "The Art of Speaking Data."
[CXL Live 16] Beyond Test-by-Test Results: CRO Metrics for Performance & Insight - CXL
Individual tests drive insights & ROI, but the most sophisticated optimizers look beyond what an individual test is telling them and use data to optimize their overall testing performance.
In this talk, Claire will dive into the specifics of how to track, improve, and drive insight from performance metrics for your conversion program, so you can not only run better tests, but get more out of your investment in CRO.
Mastering Analytics for Optimization Success - Michele Kiss
Analytics and optimization can each generate great results for businesses. However, it’s at the intersection of analytics and optimization that real value can be extracted. In this session, Analytics Demystified Senior Partner Michele Kiss will share how to better integrate your testing and analytics practices, and real-life examples of success.
Pairing Analytics With Qualitative Methods to Understand the WHY - Michele Kiss
Rudimentary analytics can be valuable to understand WHAT your customers and prospects do. However, the true value from analytics comes from marrying that with the WHY - and more importantly, overcoming the WHY NOT. In this session, Analytics Demystified Senior Partner Michele Kiss will discuss quantitative and qualitative techniques analysts can leverage to get more insight into customer behavior. (Psychologist’s armchair not included.)
Mastering Analytics for Optimisation Success - Michele Kiss
[This version was presented at Conversion Hotel in Texel, NL in November 2017]
Analytics and optimization can each generate great results for businesses. However, it’s at the intersection of analytics and optimization that real value can be extracted. In this session, Analytics Demystified Senior Partner Michele Kiss will share how to better integrate your testing and analytics practices, and real-life examples of success.
You want it when? Probabilistic forecasting and decision making - Larry Maccherone
Before the space shuttle Challenger explosion, a group of engineers identified a potentially catastrophic risk. They brought the issue to NASA management's attention but failed to influence the final decision enough to stop the launch. As a leader in your organization, your failure to influence may not cost lives, but it could be “catastrophic” for your business.
Learn how to get action and behavior change from your analysis. Steer the emotional elephant of your organization and appeal to the risk tolerance level of your stakeholders. Avoid your own cognitive biases and those of your executives.
The best analysis in the world is ineffective without successful communication of the results. This session delivers practical data visualisation tips that any analyst can use (regardless of tool!)
[Presentation from the Observe Point Virtual Summit.]
[CXL Live 16] How to Utilize Your Test Capacity? by Ton Wesseling - CXL
Ton Wesseling gave a presentation at ConversionXL Live in Austin on March 31, 2016 about utilizing test capacity. He discussed optimizing conversions through the ROAR model of risk, optimization, automation, and re-thinking. Wesseling emphasized fully using a company's test capacity for impactful A/B tests and separating that capacity for IT releases, campaigns, and behavioral learning. He advised celebrating failures to encourage risk-taking and continuous learning.
My Estimates Are Better Than Your Estimates - Pieter Rijken
This document discusses using historical data and Monte Carlo simulation to forecast project completion instead of traditional estimation methods. It recommends that teams use the number of completed stories from past sprints to simulate potential completion dates. The simulation results can then set service level agreements for completion, such as finishing within 38 sprints 95% of the time. It acknowledges assumptions such as stable data, but argues for mitigating risks like "black swan" events through policies rather than by changing practices.
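The simulation approach the document describes can be sketched as follows; the throughput history and backlog size here are invented for illustration:

```python
import random

def sprints_to_finish(stories_remaining, history, pct=0.95, trials=10_000, seed=42):
    """Monte Carlo forecast: repeatedly replay the future by sampling
    completed-stories-per-sprint from history until the backlog is empty,
    then report the sprint count met in `pct` of the simulated futures.
    Assumes the history is stable and every sampled sprint completes > 0 stories."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        left, sprints = stories_remaining, 0
        while left > 0:
            left -= rng.choice(history)   # resample a past sprint at random
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    return outcomes[int(pct * trials) - 1]

past_sprints = [4, 7, 2, 6, 5, 3, 6]   # hypothetical completed stories per sprint
print(sprints_to_finish(120, past_sprints))   # "95% of simulated futures finish by sprint N"
```

The returned sprint count is exactly the kind of number the document turns into a service level agreement: a probabilistic commitment rather than a single-point estimate.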
The Minimum Daily Adult Ca - Cmgdahirf
This document discusses selecting the right metrics and avoiding misleading metrics when analyzing system performance. It cautions against averages that obscure variability, averages of averages, percentages without baseline context, and correlation being confused for causation. The key is to select metrics that provide useful insights, understand the data and what is being measured, and avoid cherry-picking or misusing statistics to mislead. Consistency, standard deviation, medians, and displaying trends over time are emphasized as better approaches than simple averages or percentages without context.
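A two-line example of why the document prefers medians over simple averages; the latency numbers below are invented:

```python
from statistics import mean, median

response_ms = [90, 95, 100, 105, 110, 2000]   # one stalled request in the sample
print(mean(response_ms))     # dragged far above what almost every user saw
print(median(response_ms))   # -> 102.5, a more honest "typical" value
```

Reporting the mean alone here would suggest most users waited over 400 ms, when five of the six requests finished in about 100 ms; the average obscures the variability it is built from.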
No More Excuses: Create a testing plan with no traffic, time, or budget - NTEN
Porter Mason, Steve Daignaeult, and Kira Marchenese gave a presentation on creating a testing plan with no constraints of time, budget, or resources. They discussed overcoming excuses for not testing, prioritizing tests and metrics, making sense of results, and provided next steps for attendees to begin implementing a testing process. The presentation provided tools and advice for starting simple tests immediately and developing a testing calendar and documentation to continuously learn and improve campaigns.
Agile Estimating & Planning by Amaad Qureshi - Amaad Qureshi
An introduction to Agile Estimating and how it can be used to measure the size and length of work.
Agile estimating & planning is a way of measuring the size and time it takes to complete a task. This technique is used by Agile teams in the enterprise and can be utilised in the same way by start-ups, not just for software but for all areas of the business. In this talk I will show you how estimating & planning works by:
- Writing effective user stories
- Writing tests to validate stories (acceptance criteria)
- Using story points to work out the size of a task
- Estimating using Planning Poker
- Using Story Points to calculate a team’s velocity (speed of work)
- Using a team’s velocity to calculate project length
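The last two bullets amount to a simple calculation; a minimal sketch (the velocities are made up, and real forecasts should also account for velocity variability):

```python
import math

def forecast_sprints(remaining_points, velocities):
    """Project length as remaining story points over average velocity,
    rounded up to whole sprints. A point estimate only -- it ignores
    how much velocity varies sprint to sprint, so treat it as a rough
    planning number, not a commitment."""
    avg = sum(velocities) / len(velocities)   # average points per sprint
    return math.ceil(remaining_points / avg)

print(forecast_sprints(200, [18, 22, 20, 24, 16]))   # -> 10
```

A 200-point backlog against an average velocity of 20 points per sprint forecasts 10 sprints; quoting a range built from the best and worst observed velocities is a common refinement.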
Maximising Capital Investments - is guesswork eroding your bottom line? - Michael McKeon
Globally, organisations waste US$122 million for every US$1 billion invested due to poor project performance. Daniel Galorath, the world’s leading expert in project estimation, explains why - and how to create better outcomes.
Agile is all about focusing on creating value for the customer in a sustainable way. Actions that lead to business results and happier customers are a consequence of the behaviour of people. Agile coaching supports this by providing insights to people and the organization so they can choose what behaviour to change and how. This new behaviour will lead to improved business results and satisfied customers, or to a more sustainable way, for the organisation, to achieve those business results.
How effective is the coaching, and does it ultimately lead to improved business results? In this session Pieter demonstrates one way of linking team actions to observed changes in results as seen by the customer. This is demonstrated using data and methods taken from data science.
Artem Bykovets: Optimizing efficiency of Value Delivery vs keeping people busy - Lviv Startup Club
Artem Bykovets: Optimizing efficiency of Value Delivery vs keeping people busy: how it is connected? (UA)
Ukraine Online PMDay 2023 Winter
Website - www.pmday.org/online
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/pmdayconference
CYCLE TIME ANALYTICS: RELIABLE #NOESTIMATES FORECASTING USING DATA, TROY MAGE...Lean Kanban Central Europe
If you are struggling to forecast project delivery dates and cost, or you want to eliminate the story estimation process because you feel it is waste, or you need to build the business case for hiring more staff, then this session is relevant to you. All estimates have uncertainty, and understanding how multiple uncertain factors compound is the first step to improving project and team predictability. A major benefit of Lean is the low weight capture of cycle time metrics. This session looks at how to use historical cycle time data to answer questions of forecasting and staff skill balancing. This session compares the benefits of using cycle time for analysis over current planning techniques such as velocity, burn-down charts, and cumulative flow diagrams. This session takes you on a journey of what to do after capturing cycle time data or what to do if you have no history to rely upon. Reducing reliance on developer estimation (popularized by the twitter hashtag of #NoEstimates movement) is good general advice, having the tools to plan and manage teams and projects is still important to maintain support at the executive level. This session details the approaches to getting the numbers you need to have whilst minimizing un-necesary overhead and estimating ONLY this factors that matter most.
The document discusses some of the risks and challenges of data visualization and analytics programs in organizations. It argues that while complex data visualizations can work, they are difficult to implement successfully from scratch. Additionally, stakeholders may claim the benefits from outside ideas while only superficially complying with analytics recommendations. The document provides steps for organizations to truly realize change through data-driven insights, such as having leadership buy-in and starting with small, test-based implementations.
Data-Driven off a Cliff: Anti-Patterns in Evidence-Based Decision Makingindeedeng
The document discusses common anti-patterns in evidence-based decision making, including being impatient, taking shortcuts in sampling and analysis, focusing on a single metric, and believing too strongly in one's own conclusions. It provides examples of companies making misguided decisions due to these anti-patterns, such as ending A/B tests early, ignoring parts of a sample, overemphasizing short-term metrics, and overrelying on persuasive but incorrect stories. The document advocates being patient, rigorous in sampling and analysis, considering multiple relevant metrics, and acknowledging the potential for fallibility.
Indeed Engineering and The Lead Developer Present: Tech Leadership and Manage...indeedeng
On March 1 2018, Indeed hosted a series of talks about leadership and management in the tech industry. Lighting talks included Data Scientist Robyn Rap with "Fish a Manager to Teach," Product Manager Michael Magan's "What Your Product Manager Wants from a Tech Lead," and Engineering Manager Paresh Suthar discussed "New Engineering Manager at Indeed? First: Write Some Code."
Ketan Gangatirkar, head of Job Seeker Engineering, provided the keynote "Quantum Leap: From Managing a Team to Leading an Org."
The document describes a Kanban policy game that is used to model evolutionary change. The goal of the game is to practice evolutionary change in the context of a knowledge discovery process. The game models information arrival as binary values (1s and 0s) representing favorable or unfavorable information. It also models different types of blockers or impediments. The game is played over multiple periods to model evolutionary change by making small, incremental policy changes and measuring their effects. In period 1, current policies are made explicit as a baseline. In period 2, the WIP policy is changed to limit work in process. In period 3, the collaboration policy is changed to focus on team rather than individual performance.
Leveraging Analytics In Gaming - Tiny Mogul GamesInMobi
'Analytics In Gaming' and how you can use it to improve the game's acquisition, retention and engagement' by Rajdeep Gumaste, Product Manager - Tiny Mogul Games.
You are the ultimate data wrangler. The polyglot master of python and R. You know all about the differences of linear versus logistic regression. You know when to use a dimensionality reduction algorithm and when to use a neural net. You have petabytes of data taking structural-form at your command, and you have the R-squared score to prove it!
But all of your data wrangling and number crunching won't matter if the decision makers ignore your data.
The tools to communicate the message in your data are simple, yet they can be a hard to learn. So, let’s talk about the five critical communication tools you need to master "The Art of Speaking Data."
[CXL Live 16] Beyond Test-by-Test Results: CRO Metrics for Performance & Insi...CXL
Individual tests drive insights & ROI, but the most sophisticated optimizers look beyond what an individual test is telling them and use data to optimize their overall testing performance.
In this talk, Claire will dive into the specifics of how to track, improve, and drive insight from performance metrics for your conversion program, so you can not only run better tests, but get more out of your investment in CRO.
Mastering Analytics for Optimization SuccessMichele Kiss
Analytics and optimization can each generate great results for businesses. However, it’s at the intersection of analytics and optimization that real value can be extracted. In this session, Analytics Demystified Senior Partner Michele Kiss will share how to better integrate your testing and analytics practices, and real-life examples of success.
Pairing Analytics With Qualitative Methods to Understand the WHYMichele Kiss
Rudimentary analytics can be valuable to understand WHAT your customers and prospects do. However, the true value from analytics comes from marrying that with the WHY - and more importantly, overcoming the WHY NOT. In this session, Analytics Demystified Senior Partner Michele Kiss will discuss quantitative and qualitative techniques analysts can leverage to get more insight into customer behavior. (Psychologist’s armchair not included.)
Mastering Analytics for Optimisation SuccessMichele Kiss
[This version was presented at Conversion Hotel in Texel, NL in November 2017]
Analytics and optimization can each generate great results for businesses. However, it’s at the intersection of analytics and optimization that real value can be extracted. In this session, Analytics Demystified Senior Partner Michele Kiss will share how to better integrate your testing and analytics practices, and real-life examples of success.
You want it when? Probabilistic forecasting and decision makingLarry Maccherone
Before the space shuttle Challenger explosion, a group of engineers identified a potentially catastrophic risk. They brought the issue to NASA management attention but failed to influence the final decision enough to stop the launch. As a leader in your organization, your failure to influence may not cost lives but it could be “catastrophic” for your business.
Learn how to get action and behavior change from your analysis. Steer the emotional elephant of your organization and appeal to the risk tolerance level of your stakeholders. Avoid your own cognitive biases and those of your executives.
The best analysis in the world is ineffective without successful communication of the results. This session delivers practical data visualisation tips, that any analyst can use (regardless of tool!)
[Presentation from the Observe Point Virtual Summit.]
[CXL Live 16] How to Utilize Your Test Capacity? by Ton WesselingCXL
Ton Wesseling gave a presentation at ConversionXL Live in Austin on March 31st 2016 about utilizing test capacity. He discussed optimizing conversions through the ROAR model of risk, optimization, automation and re-thinking. Wesseling emphasized fully using a company's test capacity for impactful A/B tests and separating that capacity for IT releases, campaigns and behavioral learning. He advised celebrating failures to encourage risk-taking and continuous learning.
My Estimates Are Better Than Your EstimatesPieter Rijken
This document discusses using historical data and Monte Carlo simulation to forecast project completion instead of traditional estimation methods. It recommends that teams use the number of completed stories from past sprints to simulate potential completion dates. The simulation results can then set service level agreements for completion, such as finishing within 38 sprints 95% of the time. It acknowledges assumptions like stable data but argues mitigating risks like "black swan" events through policies instead of changing practices.
The%20 Minimum%20 Daily%20 Adult%20 %20 Ca Cmgdahirf
This document discusses selecting the right metrics and avoiding misleading metrics when analyzing system performance. It cautions against averages that obscure variability, averages of averages, percentages without baseline context, and correlation being confused for causation. The key is to select metrics that provide useful insights, understand the data and what is being measured, and avoid cherry-picking or misusing statistics to mislead. Consistency, standard deviation, medians, and displaying trends over time are emphasized as better approaches than simple averages or percentages without context.
No More Excuses: Create a testing plan with no traffic, time, or budgetNTEN
Porter Mason, Steve Daignaeult, and Kira Marchenese gave a presentation on creating a testing plan with no constraints of time, budget, or resources. They discussed overcoming excuses for not testing, prioritizing tests and metrics, making sense of results, and provided next steps for attendees to begin implementing a testing process. The presentation provided tools and advice for starting simple tests immediately and developing a testing calendar and documentation to continuously learn and improve campaigns.
Agile Estimating & Planning by Amaad QureshiAmaad Qureshi
An introduction to Agile Estimating and how it can be used to measure the size and length of work.
Agile estimating & planning is a way of measuring the size and time it takes to complete a task. This technique is used by Agile teams in enterprises and can be utilised in the same way by start-ups, not just for software but for all areas of the business. In this talk I will show you how estimating & planning works by:
- Writing effective user stories
- Writing tests to validate stories (acceptance criteria)
- Using story points to work out the size of a task
- Estimating using Planning Poker
- Using Story Points to calculate a team’s velocity (speed of work)
- Using a team’s velocity to calculate project length
Maximising Capital Investments - is guesswork eroding your bottomline?Michael McKeon
Globally, organisations waste US$122 million for every US$1 billion invested due to poor project performance. Daniel Galorath, the world’s leading expert in project estimation, explains why - and how to create better outcomes.
Agile is all about focus on creating value for the customer in a sustainable way. Actions that lead to business results and happier customers are a consequence of the behaviour of people. Agile coaching supports this by providing insights to people and the organisation so they can choose what behaviour to change and how. This new behaviour will lead to improved business results and satisfied customers, or it leads the organisation to a more sustainable way of achieving those business results.
How effective is the coaching, and does it ultimately lead to improved business results? In this session Pieter demonstrates one way of linking team actions to observed changes in results as seen by the customer. This is demonstrated using data and methods taken from data science.
Artem Bykovets: Optimizing efficiency of Value Delivery vs keeping people bus...Lviv Startup Club
Artem Bykovets: Optimizing efficiency of Value Delivery vs keeping people busy: how it is connected? (UA)
Ukraine Online PMDay 2023 Winter
Website - www.pmday.org/online
Youtube - https://www.youtube.com/startuplviv
FB - https://www.facebook.com/pmdayconference
This document summarizes an event at the University College Dublin (UCD) on February 20, 2020 called "Work Smarter Together". The event included an opening by Professor Mark Rogers and the recognition of several UCD staff who received Green Belts for process improvement projects. Several presenters then discussed how an agile approach can help improve processes in different roles at UCD. Elaine Hickey described using Lean Six Sigma tools to improve the room allocation process. Colm Walsh discussed using DMAIC methodology to enhance the management of student recruitment agencies.
This document discusses metrics for measuring software development at the individual, team, unit, and company levels. It emphasizes that metrics must be based on automatic collection of data to analyze long-term statistics and trends, and that individuals should be motivated to provide high-quality data through positive feedback rather than fear of negative consequences. Good metrics are easy to collect, unambiguous, help all parties understand what is being measured and how their work contributes, and drive continuous process improvement.
This document discusses key performance indicators (KPIs) for measuring agile projects. It begins by defining metrics and KPIs, noting that KPIs should be tied to strategic objectives and have defined targets. It then discusses characteristics of good KPIs and provides examples of both traditional and agile KPIs related to time, effort, scope, and quality. The document cautions that too many KPIs can be useless and advocates keeping metrics simple. It also discusses challenges like cheating on metrics and provides tips for using tools and dashboards to effectively measure agile performance.
The Business Case for DevOps - Justifying the JourneyXebiaLabs
Ting Cosper, IT Director at Freedom Mortgage, gives his presentation on building the case for DevOps within your organization at the DevOps Leadership Summit in Boston, MA.
The document discusses metrics for agile teams. It begins by introducing agile principles like Scrum and Lean that emphasize eliminating waste, delivering value early, and responding quickly to change. It then explains that metrics are important for agile teams for several reasons: to provide transparency to stakeholders, enable data-driven decision making, get feedback to improve estimates, and change day-to-day behaviors. However, metrics must be simple, change behaviors in a positive way, and not demotivate teams. The document provides examples of common agile metrics like velocity, story completion rates, and defect rates and advocates for visual dashboards that are transparent and aim to continuously improve.
Using data to guide product developmentMat Clayton
This document discusses using data to guide product development. It recommends analyzing user data and key performance indicators to identify features that correlate with retention. The document advocates continually running A/B tests to evaluate potential product features, noting tests should be fast to implement with minimal impact and visible results. Failure of tests is acceptable as getting results now is better than waiting.
This document discusses how to interpret data from Kanban retrospectives to identify opportunities for optimizing workflow. It provides examples of metrics like lead time, cycle time, and work in progress that can be analyzed to address issues like bottlenecks, piles of work in specific states, outliers in work completion times, frequent blockers, the impact of unplanned work, and ensuring team well-being and sustainability. The document advocates using a structured process of planning improvements, implementing changes, measuring their impact, and adjusting as needed.
The future for performance management, quality and true continuous improvement for local council planning services. Uses much of the data that councils already send to government, supplements it with some new approaches to customer and quality feedback, and brings it all together in one tidy, holistic report.
Agile metrics can be used to the advantage or the detriment of teams and an organisation’s Agile success. This session looks at several of the core Agile metrics used to measure success to help you understand what success looks like, why the metric is desirable and what the metrics can tell us.
Understanding why we want these metrics is critical to capturing something of value, rather than just doing 'because'. What will leaders and decision makers do with these metrics? What value do they add?
Steve will also dive into the negative impacts of some of the Agile metrics we are sometimes forced to capture, such as how chasing velocity leads to gaming the system. He'll look at bad metrics, including the seven deadly sins of Agile measurement, and how to avoid them in your enterprise.
The document discusses metrics for agile teams. It explains that metrics are important for several reasons, such as providing transparency, enabling business decisions, and changing day-to-day behavior. Good metrics should be vital, measure results not just outputs, track trends, be easy to collect, amplify learning, reinforce desired behavior, and optimize the whole. The document provides examples of common agile metrics like velocity, story completion, bugs, and technical debt. It emphasizes that metrics should be simple, visible to the team, and not threaten people.
Seven Key Metrics to Improve Agile PerformanceTechWell
It’s been said: If you can’t measure it, you can’t improve it. For most agile teams burndown charts and some type of velocity measurement are all they are doing. However, with just a few more metrics, you can gain substantial insight into how teams are performing and identify improvement opportunities. Andrew Graves explores seven key metrics―Effort by Class of Service, Accuracy of Estimation, Cost per Point, and four others―to measure how your team is doing and make adjustments in real time. Andrew illustrates how to use these metrics to communicate progress to stakeholders. Discover how to use these metrics to identify and analyze trends that lead to performance improvement ideas and strategies. Learn how to use these seven metrics to monitor the impact of changes made to verify they are bringing the hoped-for difference.
More and more teams are turning to DevOps as a way of working together to improve the efficiency and quality of software delivery and start adding more value to the business. But without having someone on the team with experience of putting it into practice, it's sometimes difficult to know how to get started.
Redgate Software invited Steve Thair, CTO at the DevOpsGuys, to deliver a one-hour training session on 'How to get started with DevOps'. Steve gave practical tips on how you can start implementing DevOps in your own organization.
The recording can be found here - https://youtu.be/ZioF58drwcA
For more information about services from the DevOpsGuys visit www.devopsguys.com
To find out about extending DevOps practices to the database visit www.red-gate.com/solutions
DevOpsGuys - How to get started with DevOps - Redgate Webinar April 2017DevOpsGroup
DevOpsGuys - How to get started with DevOps - Redgate Webinar April 2017. 9 steps to DevOps Transformation
#SystemsThinking
#MakeWorkVisible
#MeasureWhatsImportant
#ActOnFeedback
#IdentifyTheGoal
#BeAgile
#DeliverContinuously
#BuildTrust
#AlignToValue
#OptimiseForFlow
The document outlines Contactually's new product management framework. Previously, product development was disorganized with work thrown into a large backlog and no clear priorities. The new framework includes: a product vision document, iteration planning spreadsheets, and using Trello and Pivotal Tracker for feature development and engineering tasks. Features are developed in Trello first before being implemented in Pivotal Tracker. The framework provides transparency and ensures stakeholders understand upcoming work and engineers have clear goals. Monthly and sprint meetings keep the team aligned on priorities. The framework has improved productivity by providing structure and accountability.
This document discusses measuring team effectiveness through metrics focused on outcomes rather than outputs. It provides examples of metrics that measure building the right thing, building the thing right, and building in a sustainable way. Specific metrics discussed include activation rate, cycle time, flow, waste elimination, release confidence, team health, quality and incident response times. The document emphasizes using metrics for improvement and having responsible conversations about the data.
Measure what matters for your agile projectMunish Malik
While working with Agile projects, we simply can't get away from tracking and showcasing the progress of the project. A typical Agile project would be working with estimates, story points, velocities, burn-up or burn-down charts.
I have witnessed numerous sprint reviews and showcases where the business is only waiting to see those few slides of the presentation where the "actual" red worm runs against the "planned" green worm, trying to catch up. If the red worm is ahead, I have seen a smile on the faces of the stakeholders. If it matches the green one, there is a sigh of relief. And as a development team you should just pray that the poor red guy is not falling behind the green one, lest it lead to a lot of questions starting with why, how, what, and so on.
There have also been times when unfortunate heated discussions last forever about why the team ended up not claiming a few points they had committed to. What gets lost is what the team accomplished in the sprint that adds good value to the product. There have also been times when the estimates are questioned by the product owner or account managers. If you are working in a distributed setup where the product owner is based in a different country, the problem is even bigger.
Let us think about a scenario where the project gets completed on time, budget and scope. Majority (or all) of estimates were correct. However, when the product went live to the market it failed big time. What is the use of building such a product?
Are we focusing too much on numbers and points and overlooking the other important aspects of Agile software development such as producing software that delights the customers and looking for ways on how we can measure that? Are we measuring if we are creating a solid, robust and a scalable platform that is ready for future developments and enhancements? Are we measuring the outcomes of the time we are spending in the shoes of the people who will actually use the software?
The objective of this presentation is to promote the thinking of measuring what matters for your project: measuring the goals your software development wants to achieve. I don't plan to showcase an exhaustive list of measurements that can solve all your problems; instead I want to highlight some samples that I have used in my projects with the help of my team, which helped us measure things that add value to the business and development, versus simply creating burn-down charts.
Above all, I want to encourage thinking outside the box to identify which measurements will really matter for your projects, perhaps from the perspective of the users and the business, and to see which things, if measured, will add far more value than estimates alone and help create a product that truly delights both the business and the users.
19. Time and pace related questions
1. Is it taking us longer to do the same type of work?
2. What is a good commitment cycle time to others? (SLA)
3. What is our completed work rate, and how stable is it?
4. Where should we focus improvement efforts?
• Compared to what?
• Compared to the same type of work versus all work
• Compared to the same time period last week/month/year
• My work compared to others' (only seen by me, so I can improve)
20. "If anyone adjusts a stable process, the output that follows will be worse than if (s)he had left the process alone." Attributed to William J. Latzko. Source: Out of the Crisis, Deming.
Q. Is the process stable? First, do no harm.
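The stability question can be made concrete. Below is a minimal sketch of one common stability test, an XmR (individuals and moving-range) control chart; the weekly throughput figures are hypothetical, and this is not necessarily the exact check used in the talk.

```python
# Sketch: XmR (individuals / moving-range) stability check on weekly
# throughput. Sample data is hypothetical.

def stability_limits(values):
    """Return (lower, upper) natural process limits from an XmR chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR constant converting the average moving
    # range into approximately 3-sigma limits for individual values.
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def is_stable(values):
    lo, hi = stability_limits(values)
    return all(lo <= v <= hi for v in values)

weekly_throughput = [8, 10, 9, 11, 7, 10, 9, 12, 8, 10]
print(is_stable(weekly_throughput))  # True -> no point outside the limits
```

A point outside the limits signals a special cause worth investigating before adjusting the process.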
23. (Chart) Is demand on this team decreasing? Annotations: cycle-time stable; bulk close, stable; "long term" distribution.
26. LeanKit'ers who were instrumental:
Eddie Detvongsa
Katie St. Francis
Keo Ros
Bob Saulsbury
Libby Padgett
Chris Gundersen
Scott Walters
Chris Mobley
Daniel Lesnansky
Danny McClain
Carl Nightingale
Alex Glabman
Florent de Gantes
Jon Terry
31. 17 charts so far…
Throughput (planned & un-planned)
Throughput Histogram(s)
Cycle Time (planned & un-planned)
Cycle Time Histogram(s)
Work In Process
Cumulative Flow
Arrival vs Departure Rate
Un-planned work Percentage
Cycle Time Distribution Fitting
36. 1930 to 2012, National League MVP: 23% = 19 out of 82 (last time 1988)
1930 to 2013, American League MVP: 23% = 19 out of 82 (last time 1984)
1955-56 to 2015-16, NBA MVP: 37% = 23 out of 62 (last time 2014)
1927 to 2016, Hart Memorial Trophy AND Stanley Cup: ~17% (16.8% = 15 out of 89; last time 2004, Martin St. Louis, Tampa Bay Lightning)
Source: NBA Most Valuable Player Award. (2016, June 24). In Wikipedia, The Free Encyclopedia. Retrieved 18:28, July 3, 2016, from https://en.wikipedia.org/w/index.php?title=NBA_Most_Valuable_Player_Award&oldid=726766319
Source: ESPN Playbook - SportsData (infographic at end of this deck)
Source: Wikipedia (excluded the 2005 season): https://en.wikipedia.org/wiki/Hart_Memorial_Trophy and https://en.wikipedia.org/wiki/List_of_Stanley_Cup_champions
42. 1. Quality (how well)
• Escaped defect counts
• Forecast to complete defects
• Measure of release "readiness"
• Test count (passing)
2. Productivity (how much, delivery pace)
• Throughput ( / team size?)
• Velocity ( / team size?)
• Releases per day
3. Responsiveness (how fast)
• Lead time
• Cycle time
• Defect resolution time
4. Predictability (how repeatable)
• Coefficient of variation (SD/Mean)
• Standard deviation of the SD
• "Stability" of team & process
44. It’s about the TEAM
Divide by team size
Divide by average
45. Quality
• Goal is to keep the TEAMS within 10 days of releasable
• Forecast has to be personal for the team
• Days = (Open Bugs × Avg(recent cycle-time samples)) / Number of devs on team
"If OUR entire TEAM did nothing else but fix bugs this sprint, at OUR historical rate, we would have x days of work"
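The bug-debt formula above can be sketched directly; the sample counts below are hypothetical.

```python
# Sketch of the slide's bug-debt forecast:
#   Days = Open Bugs x Avg(recent bug cycle-time samples) / number of devs
# The sample numbers below are hypothetical.

def days_of_bug_work(open_bugs, recent_cycle_times_days, devs_on_team):
    avg_cycle_time = sum(recent_cycle_times_days) / len(recent_cycle_times_days)
    return open_bugs * avg_cycle_time / devs_on_team

# 24 open bugs, recent fixes took 2-4 days each, 6 developers:
print(days_of_bug_work(24, [2, 3, 4, 3], 6))  # 12.0 -> above the 10-day goal
```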
51. Work item cycle time or lead time distribution through the ages (cycle time in days):
• 1970s-1990s, Waterfall: Rayleigh distribution, Weibull shape parameter = 2
• Approx. 2000: Weibull shape parameter = 1.5
• Approx. 2008, Lean: Weibull shape parameter = 1.25
• Approx. 2010: Exponential distribution, Weibull shape parameter = 1
52. (Chart) Work item cycle time or lead time vs. batch size / iteration length. The shape parameter (1, 1.5, 2) reflects process and external factors; the scale parameter tracks batch size: scale = 30 ≈ 1 month, scale = 15 ≈ 2-week sprint, scale = 5 < 1 week.
53. Lean, few dependencies
• Higher work item count
• More granular work items
• Lower WIP
• Team self-sufficient
• Internal impediments
• Do: automation
• Do: task efficiency
Sprint, many dependencies
• Lower work item count
• Chunkier work items
• Higher WIP
• External dependencies
• External impediments
• Do: collapse teams
• Do: impediment analysis
Paper: http://bit.ly/14eYFM2
55. Cycle time analysis
How to interpret cycle time distributions in coaching
@t_magennis | Bit.Ly/SimResources
56. Q. Can historical cycle-time be used for coaching advice?
http://conferences.computer.org/hicss/2015/papers/7367f055.pdf
57. (Chart) Probability density function of cycle time: histogram with fitted Gamma (3P), Lognormal, Rayleigh, and Weibull distributions.
References:
• 1997: Industrial Strength Software, by Lawrence H. Putnam and Ware Myers, IEEE
• 2002: Metrics and Models in Software Quality Engineering (2nd Edition), Stephen H. Kan
Paper: http://bit.ly/14eYFM2
58. Work item cycle time or lead time distribution through the ages (cycle time in days):
• 1970s-1990s, Waterfall: Rayleigh distribution, Weibull shape parameter = 2
• Approx. 2000: Weibull shape parameter = 1.5
• Approx. 2008, Lean: Weibull shape parameter = 1.25
• Approx. 2010: Exponential distribution, Weibull shape parameter = 1
Paper: http://bit.ly/14eYFM2
59. (Chart) Work item cycle time or lead time vs. batch size / iteration length. The shape parameter (1, 1.5, 2) reflects process and external factors; the scale parameter tracks batch size: scale = 30 ≈ 1 month, scale = 15 ≈ 2-week sprint, scale = 5 < 1 week.
60. Lean, few dependencies
• Higher work item count
• More granular work items
• Lower WIP
• Team self-sufficient
• Internal impediments
• Do: automation
• Do: task efficiency
Sprint, many dependencies
• Lower work item count
• Chunkier work items
• Higher WIP
• External dependencies
• External impediments
• Do: collapse teams
• Do: impediment analysis
Paper: http://bit.ly/14eYFM2
61. (2×2 matrix: Weibull shape parameter, rows 1.3 to 2 (Weibull range) and 1 to 1.3 (exponential range), vs. Weibull scale parameter, columns 0 to 10 and 10 to 30)
Traits: small or repetitive work items; low WIP; few external dependencies; good predictability. Process advice: automation of tasks, focus on task efficiency. Lean/Kanban optimal.
Traits: larger unique work items; high WIP; low predictability; many external dependencies. Process advice: focus on identification and removal of impediments and delays, and quality. Scrum optimal.
Traits: small unique work items; medium WIP; few external impediments; fair predictability.
Traits: larger work items; large WIP; many external dependencies; poor predictability.
@t_magennis | Bit.Ly/SimResources
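Placing a team in this shape/scale grid can be sketched by fitting a Weibull to its completed-item cycle times. The sketch below uses a moment-based approximation (shape k ≈ CV⁻¹·⁰⁸⁶, then scale from the mean), which is my substitution, not necessarily the fitting method used in the talk; the sample data is hypothetical.

```python
# Sketch: rough Weibull fit from sample moments, then quadrant lookup.
# shape k ~= CV ** -1.086 is a standard moment approximation; the scale
# follows from mean = scale * gamma(1 + 1/k). Sample data is hypothetical.
import math

def weibull_moment_fit(cycle_times):
    n = len(cycle_times)
    m = sum(cycle_times) / n
    var = sum((x - m) ** 2 for x in cycle_times) / (n - 1)
    cv = math.sqrt(var) / m                 # coefficient of variation
    shape = cv ** -1.086                    # moment approximation for k
    scale = m / math.gamma(1 + 1 / shape)   # mean = scale * gamma(1 + 1/k)
    return shape, scale

def quadrant(cycle_times):
    shape, scale = weibull_moment_fit(cycle_times)
    shape_band = ("1 to 1.3 (exponential range)" if shape < 1.3
                  else "1.3 to 2 (Weibull range)")
    scale_band = "0 to 10" if scale <= 10 else "10 to 30"
    return shape_band, scale_band

# Long-tailed hypothetical cycle times (days):
print(quadrant([1, 2, 2, 3, 4, 5, 7, 9, 12, 20]))
```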
69. Tools
• Excel or Google Sheets Spreadsheets (all free)
• General metrics spreadsheet (17 charts) –
• Team Capability Matrix -
• Forecasting –
• 10+ other spreadsheet tools, all free -
• Visualization Tools
• Tableau ($995-$1995) – Tableau.com
• PowerBI (free) –
• Plotly (free) –
• Online Lean/Kanban Tool
• Leankit.com
70. Cool Visualization Resources and Websites
• My blog – FocusedObject.com/blog
• WindyTy.com – weather
• NY Times
• Tableau Public
• Books
• Tufte
• Few
73. Coaching professional teams
• Is about team performance, not individual
• If they don't know it by now, they self-improve it
• http://www.landofbasketball.com/awards/nba_season_mvps_year.htm
• 23 championships + MVP / 60 = ~1/3
• http://www.nba.com/2011/news/features/04/08/race-to-the-mvp-final-rankings/index.html
• http://national.suntimes.com/nba/7/72/1237030/lebron-james-stephen-curry-nba-finals-mvp
74. SDPI Dimensions
• Productivity = throughput average / team size
• Predictability = variability of throughput / size
• Responsiveness = average time in process
• Quality = released defect density / throughput
The Software Development Performance Index (SDPI) framework includes a balanced set of outcome measures. These fall along the dimensions of Responsiveness, Quality, Productivity, Predictability, …
Example, team over time. Source: Rally Dev.
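The four SDPI-style dimensions can be sketched as simple computations. Interpreting "variability of throughput" as the coefficient of variation is my assumption, as are the record shapes and sample numbers; this is not Rally's actual implementation.

```python
# Sketch of the four SDPI-style dimensions as defined on this slide.
# The CoV interpretation of "variability" and all sample data are
# assumptions for illustration.
from statistics import mean, stdev

def sdpi(weekly_throughput, cycle_times_days, released_defects, team_size):
    return {
        "productivity": mean(weekly_throughput) / team_size,
        "predictability": stdev(weekly_throughput) / mean(weekly_throughput),
        "responsiveness": mean(cycle_times_days),  # average time in process
        "quality": released_defects / sum(weekly_throughput),  # defect density
    }

print(sdpi([8, 10, 9, 12], [3, 5, 4, 8, 2], released_defects=3, team_size=5))
```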
75. Responsiveness
• Average or median of the number of days between two dates for items closed within a period
• Cycle time or lead time of ???
• If there is a reliable first-touch date, use that
• If there is just a created date, then use P1 and P2 bugs
"If something urgent comes along, how fast can we turn that around?"
@t_magennis | Bit.Ly/SimResources
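The calculation described above can be sketched as follows; the item records and dates are hypothetical.

```python
# Sketch: responsiveness as the median days between start and done dates
# for items closed within a reporting period. All dates are hypothetical.
from datetime import date
from statistics import median

def responsiveness_days(items, period_start, period_end):
    durations = [(done - start).days
                 for start, done in items
                 if period_start <= done <= period_end]
    return median(durations)

items = [(date(2016, 9, 1), date(2016, 9, 6)),    # 5 days
         (date(2016, 9, 2), date(2016, 9, 12)),   # 10 days
         (date(2016, 8, 20), date(2016, 9, 3)),   # 14 days
         (date(2016, 9, 20), date(2016, 10, 9))]  # closed outside the period
print(responsiveness_days(items, date(2016, 9, 1), date(2016, 9, 30)))  # 10
```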
76. Completion Rate
• Team goal is to maximize the number of COMPLETED items, not started items
• Count of items completed each period
• Don't celebrate bug throughput (as much)
"What is holding us back from completing more? Let's discuss dependencies and blockers in the retrospective."
@t_magennis | Bit.Ly/SimResources
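Counting only completed items per period can be sketched as below; the completion dates are hypothetical.

```python
# Sketch: completion rate as a count of COMPLETED items per ISO week;
# started-but-unfinished work is deliberately not counted.
# The completion dates are hypothetical.
from collections import Counter
from datetime import date

def completions_per_week(done_dates):
    # Key each completion by its ISO (year, week) pair.
    return Counter(d.isocalendar()[:2] for d in done_dates)

done = [date(2016, 10, 3), date(2016, 10, 4), date(2016, 10, 12)]
print(completions_per_week(done))  # two items in week 40, one in week 41
```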
77. Predictability
• How much variation there is each week in throughput, normalized by "team size" in a rough way
• Coefficient of Variation = SD / Mean
"How consistently do we deliver value?"
@t_magennis | Bit.Ly/SimResources
Editor's Notes
Interpreting the data we capture for process improvement and coaching teams is hard. Sometimes you think you understand what you see, only to have that taken away from you. Sometimes, if it walks like a duck and quacks like a duck, it could still be a rabbit. Perspective and context matter.
Data isn’t evil, people are evil. Sure, sometimes data can highlight something we would rather not be true, but this doesn’t make it wrong to capture and analyze data in proper ways.
No matter how strong you are, being embarrassed is painful. When data is perceived as evil, it is most commonly because it has been used as a blunt tool for coercing some behavior. Never do this, and never make data the tool to embarrass people. Not just because it is wrong to that person, but because it drives the data needed to make better decisions underground. If you embarrass someone using data without context, you will only see a partial picture in the future, and your decisions will be sub-par.
If that's not enough, Deming puts the people's share of process impact in the 10-15% range, versus 85-90% for the system. A metric that manages the system's response is far more likely to return benefits.
A good way to assess how data is being used is to ask a simple question: is the data being used to make a difference, or is it just being used to make a point, to say "I'm better than you"?
To make a difference, data should tell a story. The readers should be able to follow the argument and train of thought, and see something they weren't expecting. A story doesn't just jump to the last page to see whether Little Red Riding Hood lives happily ever after; there is a buildup of the characters. Your data stories should do the same. They should take the reader on a journey and leave them with the moral of the story.
Let's see a couple of examples of taking boring data and turning it into a story. Obligatory cat photo to make sure this deck gets tweeted.
Boring tabular data. The domain of Excel, and also the format in which NOAA makes weather data available to the public. Pretty boring stuff, even for weirdo data geeks like me. But the clever Ivo, an avid kite surfer, turned this data into this [click]
Beautiful. But it gets better: the wind speed, direction, and temperature are animated [click]
You can feel the weather pattern in my hometown of Seattle. Color helps me quickly interpret areas of higher wind speed. I can change altitude and see how the jetstream will impede or help my flight to Atlanta. I can see what the wind and weather are like here in Atlanta. I can see clouds and rainfall, temperature or barometric pressure. I can see how this trended in the past and is expected to trend in the future. I can see the story of the atmosphere and weather better than reading it from an Excel text table.
Impressive, but hard for someone of my skill to do. Let's see one that we all could have built using Excel. The Wall Street Journal did a feature piece on vaccination. They took boring health data from the University of Pittsburgh and turned it into this [click]
Even without the annotation, was the polio vaccine effective? Introduced in 1955, you can see the story of how every state in the US saw a decline in infections per 10,000 people. The picture tells the story of how, within 5 years, the disease infected fewer people as the number immunized increased, pushing a debilitating disease into the rearview mirror.
Source:
Battling Infectious Diseases in the 20th Century: The Impact of Vaccines
By Tynan DeBold and Dov Friedman
Published Feb. 11, 2015 at 3:45 p.m. ET
http://graphics.wsj.com/infectious-diseases-and-vaccines/
http://www.tycho.pitt.edu
Edward Tufte – author and researcher on visual communications. Every number we show or hear should be put into the context “Compared to What?” A single number is meaningless. Single numbers make us susceptible to drawing incorrect conclusions. We need to know if the value we just got is better, worse or comparable to something else to see a story.
In our prior example, the authors managed to pack four "Compared To's" into one graphic: state versus state, year versus year, before and after an event, and occurrence rates versus others in color.
Compared to event, compared to prior time, compared to other states
And to see it wasn't just a fluke, here is the same chart for measles. Gone in under 5 years after the vaccine was introduced.
Our last example shows census data. The age of the US population in a recent census. Steve Wexler took this boring data and put a clever spin on it to help create a personal story for the viewer. [click]
He let you put YOUR age in and see where you fell. Are you older than more than 50% of the population? Are you over the hill? I am. I really wish he had said YOUNGER than 37.3% of the male population. His point is that, as long as it's quiet and nobody else is around, you want to see how you compare to others like you. If for no other reason than to be "less over the hill" than your spouse or sibling. I'm both!
So that's the theory. Let's look at how LeanKit took an existing report built into the application and applied these principles. Steve Wexler and I worked with the talented internal visualization team to prototype and build the following visualization to replace an existing report as part of a training course.
Here is the current report. A pretty standard scatter plot of cycle times at the bottom, and a running average of cycle times for each type of work at the top. Design-wise, these reports have served customers well. Being 46, the dots are a little small to hover over with my arthritic hands, and it's pretty bland. It also doesn't answer a lot of the questions I have around the time-based aspects of software development.
So, I crowdsourced a set of questions. I used Twitter to get feedback on which questions most resonated with my followers. Crude, but it quickly gave some weight to the possible questions we needed to answer. What we found was that more comparison of similar work was key, along with the ability to zoom in and out on date groupings: day, week, and month, for example. LeanKit didn't do either of these very well in its current form.
So, taking input from the market, the following four questions were key. And we had three major "Compared to What" vectors: Card Type, Time Periods, and My Cards. Now that we had an understanding of the story we wanted to tell, it was time to do some research.
W. Edwards Deming published many books about his work in managing manufacturing processes. In his book "Out of the Crisis" he goes to great lengths to discuss that when looking at a process, your first job is to do no harm. If the process is "in control," leave it alone! Touching it will likely make things worse. A graphic caught our eye: a run chart showing spring manufacturing data. Each dot represented a tested spring, and a downward trend can be seen even though the variation shown by the marginal histogram looks perfectly symmetrical. In Deming's words, you can't assess a process unless you can see both the change over time and the distribution of variation. The scatter plot alone isn't enough. The histogram of result frequency isn't enough. You need both.
So we turned our attention to a paper prototype and wondered: what if we put marginal histograms, like Deming's, on the right side and the top of a scatter plot? Would we tell a fuller story about the process and how "stable" it is? And that's what the team did. [click]
Although this looks more aesthetically pleasing by not yelling in ALL CAPS with harsh colors, this visualization packs a lot of information. Let's look at the layout.
Across the top are the time period buckets. Users can choose day, week, or month; we default to week. And they can choose to show all types of work or just selected ones; we default to all types [click]
Under that is a bar chart showing how many items were completed in each period. This is throughput. We can see by glancing at the bar heights how the completion rate is changing over time. [click]
Then our scatter plot. To avoid all of the dots overlapping and becoming inaccessible by cursor, the dots are randomly jittered in horizontal location; hovering over them gives popup information about that specific piece of work. This is technically a jitter plot. [click] The key story, though, is how the average reference lines in each period trend over time. We try to help the user follow the path from left to right, giving them the average cycle time of items completed in each time period. [click]
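For anyone building a similar view, the horizontal jitter is simple to implement. A minimal sketch in Python; the width and seed values are illustrative choices, not LeanKit's actual implementation:

```python
import random

def jitter_x(x_positions, width=0.3, seed=42):
    """Offset each dot's horizontal position by a small random amount
    so overlapping points in the same period stay individually hoverable."""
    rng = random.Random(seed)  # fixed seed keeps the layout stable on redraw
    return [x + rng.uniform(-width, width) for x in x_positions]

# Three items completed in the same weekly bucket (x = 3) get spread out
jittered = jitter_x([3, 3, 3])
print(jittered)
```

A fixed seed is worth considering so the chart doesn't reshuffle on every refresh.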
On the right-hand side is the marginal histogram showing cycle-time distribution. This is an area of my research, how the cycle time histogram relates to Agile process factors, but most people will totally ignore it! [click]
And here is how I interpret the story of this team for this period. Demand is decreasing, and this lower demand is helping stabilize the cycle time of items to a little less than 2 days on average. There are a few outliers to understand, in case we can solve their root cause. The cycle time distribution is following an expected shape. Of course, I'm the only one who knows that a dev-ops Kanban team should expect an Exponential distribution, which this would have been if not for that clump of late January items all closing around 20 days.
And this is another direction I think we might be heading: helping the user feel the flow of work. This is a jump plot, invented and mocked up to show software process flow. The green jumps across the top are work flowing forward; underneath, work flowing back. The height of each arc represents average cycle time, and thickness represents count. In one plot, you get to see the story of cycle time, throughput, and status steps. An amazing piece of storytelling for understanding a process and its flow.
Source: https://public.tableau.com/views/JumpLineExamples/SDLCDetailedJP?%3Aembed=y&%3Adisplay_count=yes&%3AshowTabs=y&%3AshowVizHome=no
Amazing work by Tom VanBuskirk and Chris DeMartini. See JumpPlot.com
If you think it's too hard or you don't have enough data, you're wrong.
I set out to prove this by setting myself a challenge to see what I could do with just completed and start dates. You can get the result for yourself by downloading the spreadsheet from bit.ly/Throughput with a capital T. Again, all these links will be tweeted at @t_magennis
Here is the input page. Completed date is mandatory. Start date is optional but well advised. The type of work, planned or unplanned, is needed to see how these different types stack up, but it is optional. Most people don't use it; I'll leave that up to you.
From just these three inputs, the spreadsheet creates 17 different charts so far. [click] Throughput rates and histograms. Cycle time values and histograms. Work-in-process rates, arrival and departure rates, cumulative flow diagrams. And some rather complex mathematical analysis of the cycle time histogram that fits the data to the Weibull probability distribution, thanks to John Cook, someone everyone in this industry should have heard of. It does all of this without macros. Everything is straight formula based. And all from 2 dates and one optional type column.
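To give a feel for the distribution-fitting step, here is a rough equivalent in Python using SciPy. This is an illustration only, not the spreadsheet's formula-based approach, and the cycle time values are made up:

```python
import numpy as np
from scipy import stats

# Hypothetical cycle times in days (completed date minus start date)
cycle_times = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21])

# Fit a Weibull distribution; floc=0 pins the location parameter at zero,
# since cycle times cannot be negative
shape, loc, scale = stats.weibull_min.fit(cycle_times, floc=0)

print(f"Weibull shape k={shape:.2f}, scale={scale:.2f}")
# A shape near 1.0 approaches the Exponential distribution mentioned later
# for dev-ops Kanban teams; higher shapes indicate more central clustering.
```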
Here are some examples. Here is the throughput chart. Intentionally kept lightweight in design; intentionally designed to draw your eye along the journey, not focus on the values. This is the throughput trend, week over week.
Here is a chart I use a lot to understand coaching opportunities: how much unplanned work is a team encountering? It helps to understand what external pressure is fighting for a team's attention, and to come up with a balance for managing planned versus unplanned work types. I can see this team is slowly driving down its unplanned percentage. Hopefully by using better triage practices and helping external parties get their work into the next sprint rather than interrupting this one.
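The calculation behind that chart is simple enough to sketch. A minimal version in Python, assuming (illustratively) each completed item carries a date and a planned/unplanned type, as on the spreadsheet's input page:

```python
from collections import defaultdict
from datetime import date

# Hypothetical completed items: (completed_date, work_type)
items = [
    (date(2016, 1, 4), "planned"), (date(2016, 1, 5), "unplanned"),
    (date(2016, 1, 12), "planned"), (date(2016, 1, 13), "planned"),
    (date(2016, 1, 14), "unplanned"), (date(2016, 1, 19), "planned"),
]

# Bucket completions by ISO week, then compute the unplanned share per week
weekly = defaultdict(lambda: {"planned": 0, "unplanned": 0})
for completed, work_type in items:
    weekly[completed.isocalendar()[1]][work_type] += 1

for week, counts in sorted(weekly.items()):
    total = counts["planned"] + counts["unplanned"]
    pct = 100.0 * counts["unplanned"] / total
    print(f"week {week}: {pct:.0f}% unplanned")
```

Plotting that percentage week over week is the trend the coaching conversation hangs off.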
The obligatory cycle time chart. Although, now that I've shown you what LeanKit has planned, this is a letdown. Still, not bad from a couple of dates. The percentile marker helps the team communicate an expected service level agreement. You don't like 95%? Change it to 85%. It's just a value.
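A percentile marker like that is easy to reproduce. Here is one common nearest-rank approach in Python (the spreadsheet may use a different percentile convention; the data is illustrative):

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the cycle time at or below which
    pct percent of completed items fall."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

cycle_times = [1, 2, 2, 3, 4, 5, 6, 8, 13, 21]  # days, made up

# "85% of items finish within this many days" - a candidate SLA statement
print(percentile(cycle_times, 85))
```

Changing the SLA from 95% to 85% is literally just passing a different `pct` value.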
And this is a new experiment. It tries to visualize the story of supply versus demand. Above zero, the team is completing more than it's starting; below the line, it's starting more than completing and growing WIP. This is intended to tell the flow story and help the team strive to balance starting and completing by staying close to zero in the center. I'm hoping this becomes more of a go-to chart than the Cumulative Flow Diagram for teams who want to focus on consistent flow of value.
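The net-flow number plotted per period can be sketched as completions minus starts in each week. A minimal version in Python, with made-up items (open items have no completed date yet):

```python
from collections import Counter
from datetime import date

# Hypothetical items: (start_date, completed_date or None if still open)
items = [
    (date(2016, 3, 7), date(2016, 3, 9)),
    (date(2016, 3, 7), date(2016, 3, 15)),
    (date(2016, 3, 8), None),
    (date(2016, 3, 14), date(2016, 3, 16)),
    (date(2016, 3, 15), None),
]

started = Counter(s.isocalendar()[1] for s, _ in items)
completed = Counter(c.isocalendar()[1] for _, c in items if c)

# Net flow per week: above zero the team finishes more than it starts;
# below zero, WIP is growing
for week in sorted(set(started) | set(completed)):
    net = completed.get(week, 0) - started.get(week, 0)
    print(f"week {week}: net flow {net:+d}")
```

A team hovering near zero is balancing supply and demand; a persistent negative run is the WIP-growth warning the chart is designed to surface.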
Moving on to metrics and teams. We often think it's about having the best and brightest superstars. To counter that thinking: what percentage of the time do you think the league MVPs for baseball and basketball belonged to the championship-winning team? If it were just about having the BEST player (whatever that means), it should be pretty high, above 50% at least.
Well, in baseball, it's 23% over the 1930 to 2012/13 timeframe. And not recently: 1984 and 1988, depending on the league.
Basketball does better: 37%. Smaller team sizes? Who knows the cause, but it's still under 50%, roughly 1 in 3 or 4.
It takes a team to win championships, so let's focus some effort on metrics that help form that team.
I have a simple tool to help visualize skill capabilities. It's, you guessed it, a spreadsheet. You enter a list of required skills, and it produces a simple paper survey sheet. Team members assess their ability to teach a skill to others, to perform that skill, or their willingness to learn that skill. The tool aggregates this data into a heatmap. [click]
Along the continuum from green, meaning "safe," to red, meaning "at risk," each skillset is rated based on how many teachers, doers, and novices you have available. Being red isn't bad. It just means you had better hope demand doesn't rise for that skillset. If it does, train up a novice or two.
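The aggregation logic is simple to sketch. The spreadsheet's actual thresholds aren't stated here, so the rules below are illustrative assumptions only:

```python
# Hypothetical survey results: per skill, counts of (teachers, doers, learners)
survey = {
    "Python": (2, 3, 1),
    "CI/CD": (0, 1, 4),
    "SQL": (0, 0, 3),
}

def risk_rating(teachers, doers, learners):
    """Illustrative thresholds, not the spreadsheet's actual rules:
    green = can absorb rising demand, red = single point of failure or worse."""
    if teachers >= 1 and doers >= 2:
        return "green"
    if teachers + doers >= 2:
        return "yellow"
    return "red"

for skill, counts in survey.items():
    print(skill, risk_rating(*counts))
```

Whatever the exact thresholds, the point is the same: the rating is a function of teacher and doer counts per skill, which makes the "train up a novice" remedy obvious from the heatmap.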
[click]
The spreadsheet gives coaching advice based on what it sees, helping you as a coach or manager triage risky skill gaps and single points of failure. By proactively telling the story of the skill capability of your teams, you position them to be resilient to changing demand.
One goal of mine when coaching teams using metrics is helping them make smart trades between competing forces. Maximizing performance in all facets of a process is stupid, and likely impossible without gaming. I'm far more impressed by teams who trade something they are super good at for an incremental improvement in an area where they are struggling. As coaches, we can help teams assess and make these trades.
Larry Maccherone spearheaded some research while working at Rally, in collaboration with CMU/SEI: the Software Development Performance Index. The SDPI framework includes a balanced set of outcome measures, falling along the dimensions of Responsiveness, Quality, Productivity, and Predictability.
Each of these is an opposing force. It's unlikely any team will excel at all of them. Increasing productivity beyond team capability will likely cause a decrease in quality. Likewise, responding faster could mean corners are cut on quality, with fewer tests or less testing. Tracking data trends in each area helps the team keep creeping net positive.
Here is one dashboard produced using the SDPI principles. I'll talk about the quality dimension in a second, but the others are:
Productivity, in this case throughput: dark green for story work, pale green for defect work. You can see the team traded some story work to rapidly burn down some defect debt nearing the end of a major release. I'd have been happier as a coach if they had made that trade earlier.
Responsiveness is defect resolution cycle time. I want the team to have a clear picture of how long it may take to burn down defect debt. In this case, the team managed to move from 10 days down to 3 days once they moved through the most difficult ones. A great indicator, as a coach, that the remaining defects are small in nature and effort.
Predictability is a slightly more complex measure. If you just used the standard deviation of the productivity number, bigger teams would be disadvantaged. The quickest way to control for team size is to divide the standard deviation by the mean, a measure called the coefficient of variation. Think of it as a standard deviation normalized for scale.
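The calculation fits in a few lines of Python using the standard library; the weekly throughput numbers are made up:

```python
import statistics

# Hypothetical weekly throughput (items completed per week)
throughput = [8, 11, 7, 12, 9, 10]

mean = statistics.mean(throughput)
sd = statistics.stdev(throughput)   # sample standard deviation
cov = sd / mean                      # coefficient of variation

# Lower CoV means more consistent, predictable delivery,
# and it is comparable across teams of different sizes
print(f"CoV = {cov:.2f}")
```

Because the deviation is divided by the team's own mean, a team finishing 100 items a week and a team finishing 10 can be compared on the same predictability axis.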
Credit for this Visualization goes to Isaac Obezo.
Quality is always difficult. I like to consider some measure of ongoing delivery ability, though often it comes down to escaped defect counts. To make this axis useful to the team in a coaching context, I forecast how long it would take to get to zero defects given THIS team's average cycle time. During planning the team can quickly see: "if we did nothing but defects, we would need x days as a team." A great discussion to help focus on technical debt removal before taking on more story work.
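One plausible model for that forecast (an assumption on my part, not necessarily the dashboard's exact formula): if every team member worked defects one at a time, the calendar days needed is roughly defects times average cycle time divided by team size.

```python
# Assumed model, not necessarily the dashboard's exact formula:
# each team member works one defect at a time, so total calendar days
# is defect count * this team's average defect cycle time / team size.
open_defects = 24                  # illustrative values
avg_defect_cycle_time_days = 3.0   # this team's own average
team_size = 6

days_to_zero = open_defects * avg_defect_cycle_time_days / team_size
print(f"~{days_to_zero:.0f} days if we did nothing but defects")
```

Whatever the exact formula, the key is that every input is the team's own data, which is what makes the number land in a planning conversation.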
Key here: Make it personal about the team. THEIR cycle time, THEIR defect count. THEIR team size.
Team-to-team comparison is a dangerous pursuit. But let's try to do it safely anyway! This dashboard helps a team compare its trend with similar teams in a company. By looking for patterns in the trends across the 4 SDPI categories, contextual coaching advice can be displayed. But the data is noisy, so we quickly found that removing the line cleared up the mess. [click]
Now it's clear to see how "MY" team's trend in orange compares to the rest of the company shown in grey. No axis values, just trend. Every value is normalized as best as possible to fairly compare apples to apples. Teams should focus on the steepest adversely trending category. They should trade something from the steepest favorably trending category. Smart trades, based on comparison against teams in the same context.
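The exact normalization used isn't specified here; min-max scaling is one simple option that strips away magnitude and leaves only the shape of the trend, so teams of very different sizes can be overlaid:

```python
def normalize(series):
    """Min-max scale a metric trend to 0..1 so teams of different
    sizes can be compared by trend shape, not by raw magnitude."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.5] * len(series)  # a flat trend sits mid-band
    return [(v - lo) / (hi - lo) for v in series]

my_team = [40, 44, 52, 60]   # e.g. weekly throughput, illustrative
other_team = [5, 6, 5, 7]

print(normalize(my_team))
print(normalize(other_team))
```

Both series end up on the same 0-to-1 axis, which is exactly why the chart can drop axis values entirely and still support the "steepest trend" comparison.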
Sure, it could be misused, but given the way the categories work against each other, it's not possible for any one team to have a stable favorable trend in all categories. Or if a team can, I haven't seen it.
My name is Troy Magennis. I've been in software for 25 years now, from QA through to VP of Architecture and Development for companies like Travelocity and Lastminute.com. Most recently I formed my own company, building tools and running training on software development forecasting and risk management solutions. Feel free to take notes, but the slides and examples are available to you online. And as a special benefit for joining us today, you can download the software used throughout this session for free. Bit.ly/agilesim will take you to the right site. I wrote a book about these topics, "Forecasting and Simulating Software Development Projects," and I'd like to make sure you all get a free PDF copy of this book as well. Just download it from the same location.
1930 to 2012 = 82 years; 19 National League, 19 American League. Last time: 1988, 1984.