Many teams track their progress against a release plan by updating a burndown or burnup chart. The burndown chart tracks against a fixed volume of work, working its way down to zero; the burnup chart has an affordance for changes in scope volume. A team's velocity is the 3-sprint trailing average of completed points. The burnup chart has become more popular because it better displays scope increases throughout the project. Burn charts are used to predict project completion time and to track the team's productivity improvements.
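As a minimal sketch of the mechanics described above, the following computes a 3-sprint trailing velocity and the naive burndown projection derived from it. The function names and sprint figures are hypothetical, not from the deck:

```python
from math import ceil

def trailing_velocity(completed_points, window=3):
    """Velocity as the trailing average of the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def sprints_to_done(remaining_points, velocity):
    """Naive burndown projection: whole sprints left at current velocity."""
    return ceil(remaining_points / velocity)

# Hypothetical points completed in the last five sprints.
completed = [8, 13, 10, 12, 14]
v = trailing_velocity(completed)   # (10 + 12 + 14) / 3
print(v, sprints_to_done(60, v))
```

Note how blunt this projection is: it silently assumes both a stable velocity and a correct, fixed scope, which is exactly the assumption the following notes challenge.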
Assuming the accuracy of unreleased scope definitions is VERY WATERFALL… The main problem with burn charts is that they measure completion against the "assumed scope that will deliver the desired results." Measuring points completed is a proxy metric for success: it assumes both that the planned scope will produce the desired results and that the estimates are somewhat accurate.
Anti-patterns that can emerge:
- Teams are managed to velocity: team management, the product owner, and other stakeholders start putting too much emphasis on velocity (and overvalue the size estimates).
- By assuming the scope is correct, teams are scaled up too early, which inevitably means less exploration to find the right scope.
- Falling behind the ideal burn often results in even more instruction and a separation of analysis from execution... not unlike Gantt-chart management.
- All this energy is best spent elsewhere, because most projects aren't meant to reach max velocity (they lack a real value proposition).
Anti-pattern results:
- Product management and the team drift further apart.
- The team starts inflating estimates.
- Team members are not encouraged to explore better solutions, hence less agile.
- It's demotivating: there is a lot to do, the lines don't look right, death march... "We're not being told the truth... we can all see this isn't happening."
- The closer you get to the deadline or goal line, the more inaccurate the charts become: tech debt, refactoring, and performance optimizations aren't visible.
Site response time is a key leading indicator of the ability to handle actual success. As we grow along the curve, user tolerance lowers; fast response is essential for growth. Health of the software: simplified and refactored. Cycle time is a measurement of the team's/company's response time to the market.
Track your growth:
- Understand what causes growth or decline.
- Track against benchmarks (i.e. previous projects).
- Track market growth (i.e. comScore).
This number will be very low for the first months. You can have negative velocity as users drop or when you fall behind the market (market velocity). "Grow-up charts."
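The idea of negative velocity relative to the market can be made concrete with a small sketch. The function names and the monthly figures below are hypothetical illustrations, not from the deck:

```python
def growth_rate(series):
    """Period-over-period growth rate of the latest period."""
    prev, curr = series[-2], series[-1]
    return (curr - prev) / prev

def relative_velocity(our_users, market_size):
    """Our growth minus market growth: a negative value means we are
    falling behind the market even if our user count still rises."""
    return growth_rate(our_users) - growth_rate(market_size)

# Hypothetical monthly figures: we grew 5%, the market grew 10%.
ours = [100_000, 105_000]
market = [10_000_000, 11_000_000]
print(round(relative_velocity(ours, market), 3))
```

This is the "grow-up chart" view: absolute user counts can rise while market-relative velocity is negative.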
Technical performance and response time as an indicator of whether you are ready for success/projected growth. As we grow along the curve, user tolerance lowers; fast response is essential for growth. Health of the software: simplified and refactored. For example:
- Page load times
- Top 5 application transactions
- Speed of high-volume database operations
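One hedged way to operationalize these response-time indicators is a percentile check against a tolerance budget. The nearest-rank percentile helper, the sample latencies, and the 1000 ms budget below are all hypothetical:

```python
def percentile(samples, pct):
    """Nearest-rank percentile, e.g. pct=95 for the p95 latency."""
    ordered = sorted(samples)
    rank = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# Hypothetical page-load samples in milliseconds vs. a 1000 ms budget.
loads_ms = [420, 380, 510, 950, 620, 1800, 480, 530, 610, 700]
p95 = percentile(loads_ms, 95)
print(p95, p95 <= 1000)  # the slow outlier dominates the p95
```

Averages hide the outliers that users actually feel, which is why a high percentile against a tolerance line (as on the response-time slide) is the more honest readiness signal.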
Measure how long it takes to respond to the market. A very responsive organization has the ability to go from idea to user prototype test within one to two weeks, have it in production within 4 weeks, and have conclusive information in 5 to 6 weeks. For other organizations it takes 2 to 4 weeks just to get a new idea on the roadmap and at least 8 more weeks to get it launched. The most responsive organizations will churn through the most ideas and will learn the most about the market. Cycle time is a key leading indicator of productivity (minimal waste, the Toyota Way & Reinertsen).
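Measuring cycle time can be as simple as recording two dates per item and averaging the difference. A minimal sketch, with hypothetical story dates:

```python
from datetime import date

def cycle_time_days(started, finished):
    """Calendar days from starting work on an idea to learning from it."""
    return (finished - started).days

# Hypothetical stories: (work started, validated in production).
stories = [
    (date(2014, 3, 3), date(2014, 3, 13)),
    (date(2014, 3, 5), date(2014, 3, 31)),
    (date(2014, 3, 10), date(2014, 4, 14)),
]
times = [cycle_time_days(s, f) for s, f in stories]
print(times, sum(times) / len(times))
```

The choice of endpoints matters: clocking from "idea" to "conclusive information in production" (rather than dev start to dev done) is what makes this a market-responsiveness metric instead of another output metric.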
Another visualization exploration.
New feature usage against the Standish benchmark. This is about details, development stories: how much polish is enough? This level should be fairly detailed (more like dev stories, such as filter options, module tabs, number of items in the home page carousel). Across platforms: is our RWD (responsive web design) really RWD? Or does it just look good on other platforms while many features are barely used? Clicks on module options (i.e. carousel swipes, list filters/sorts, module tabs). The closer you get to the Standish numbers, the more complete your product.
- Are we growing faster or slower than the market by adding features?
- Are we adding the right features?
- Should we stop adding features... and invest elsewhere?
- Gauge time spent on things that are used and not used.
In search of better velocity metrics
VELOCITY METRICS
In search of velocity metrics that spur better conversations.
BURN CHARTS
[Charts: Burndown and Burnup, points vs. time; ideal burn vs. actual burn, with alpha, beta, and R1 milestones]
"How are we tracking against the scope?"
Velocity metrics for better conversations
Unless the software is released, all scope is assumed scope.
BURN CHART ANTI-PATTERNS
[Diagram: what Management Does vs. how the Team Feels]
INSTEAD OF POINTS
- Number of users
- Response time
- Cycle time
- Success rate
- Completeness
- ... other?
NUMBER OF USERS
[Charts: our growth (actual vs. benchmark, users over time, milestones T1, T2, T3) and market growth (fast growth vs. stable growth)]
"Is it a good idea? Is it the right time?"
APPLICATION RESPONSE TIME
[Charts: monthly trends (milliseconds, Jan through May) and incident details, each plotted against a tolerance line and a Goog/AMZN benchmark]
"Can we handle the projected user growth in 3, 6, and 9 months from now?"
CYCLE TIME
[Diagram: loop through ideas, Build, Product, Verify/Test, Data, Learn]
"How fast can we respond to market changes and new insights?"
AVERAGE CYCLE TIME
Current iteration: 10 days | Last project: 25 days | Company average: 60 days
CYCLE/STORY VELOCITY
[Charts: cycle time in days over time, and total stories over time]
SUCCESS RATE
Project epics: verified/total

                 Account  Search  Content  Social  Purchase  Tracking
Last 2 sprints   .333     .500    .900     .200    .666      1.000
                 (1/3)    (1/2)   (9/10)   (1/5)   (2/3)     (2/2)
Project          .666     .500    .900     .300    .500      .750
                 (10/15)  (4/8)   (18/20)  (3/10)  (6/12)    (6/8)

"Are we heading in the right direction? How good is our backlog?"
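The success-rate figures on this slide are simple verified/total ratios, which makes them easy to compute per epic and for the project overall. A sketch, using the project-to-date figures from the slide:

```python
def success_rate(verified, total):
    """Fraction of epic-level hypotheses verified by real usage."""
    return verified / total if total else 0.0

# Project-to-date (verified, total) per epic, as read from the slide.
project = {
    "Account": (10, 15), "Search": (4, 8), "Content": (18, 20),
    "Social": (3, 10), "Purchase": (6, 12), "Tracking": (6, 8),
}
rates = {epic: round(success_rate(v, t), 3) for epic, (v, t) in project.items()}
overall = success_rate(sum(v for v, _ in project.values()),
                       sum(t for _, t in project.values()))
print(rates, round(overall, 3))
```

A low rate on one epic (Social at .300 here) is a conversation starter about that part of the backlog, not a performance score for the team.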
FEATURE COMPLETENESS
[Pie charts of feature usage]
Standish 2002: Always 7%, Often 13%, Sometimes 16%, Rarely 19%, Never 45%
Project: Always 20%, Often 35%, Sometimes 35%, Rarely 10%, Never 0%
"Are new stories used?"
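One simple way to compare such usage distributions is to look at the share of features used at least "Often". The Standish 2002 split is as shown on the slide; the project split is read from the slide's second chart, and the helper is a hypothetical illustration:

```python
# Feature-usage distributions: fraction of features per usage bucket.
standish_2002 = {"Always": .07, "Often": .13, "Sometimes": .16,
                 "Rarely": .19, "Never": .45}
project = {"Always": .20, "Often": .35, "Sometimes": .35,
           "Rarely": .10, "Never": .00}

def actively_used(dist):
    """Share of features used at least 'Often'."""
    return dist["Always"] + dist["Often"]

print(actively_used(standish_2002), actively_used(project))
```

As the project's distribution drifts toward the Standish benchmark, that is a signal to discuss stopping feature work and investing elsewhere.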
Different projects, different metrics. Different project stages, shift focus.
SUMMARY

Number of users
  Discuss: Right value proposition? Right time?
  Data points: user growth %, market share, referrals, revenue, usage frequency, ...

Response time
  Discuss: Are we ready for the desired growth?
  Data points: browser load, app transactions, database operations, ...

Cycle time
  Discuss: How responsive/agile are we? When will we have more answers? How much WIP / how much waste?
  Data points: by major initiative, by feature, by story, ...

Success rate
  Discuss: Are we heading in the right direction? How good are our backlog, analysis, and execution? Are we in touch with the market?
  Data points: feature hypotheses verified, ...

Completeness
  Discuss: Should we keep investing in more scope, or start investing in other projects?
  Data points: Are new features used? Are new feature options used? ...