6. single loop learning. Actions lead to results, which shape how we act in the future. Focus: efficiency, doing things right; incremental change.
7. double loop learning (Chris Argyris). Values and assumptions guide actions, which lead to results; questioning those assumptions (why, not just how) leads to new/improved values and assumptions. Focus: effectiveness, doing the right things, and radical change, as opposed to single loop efficiency (doing things right) and incremental change.
8. learn, change, move on. Define a metric* and set an expiration date; act until the goal is OK or the expiration date has passed; then move on. *shared, simple, controllable, transparent, time-bound
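The "learn, change, move on" loop can be sketched in Python. This is a minimal illustration, not anything from the deck: the `Metric` class, its fields, and the sample values are all invented; the point is that a metric carries a goal and an expiration date and is retired as soon as either condition holds.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Callable

@dataclass
class Metric:
    name: str                   # shared & transparent: visible to everyone
    read: Callable[[], float]   # simple: collected without extra effort
    goal: float                 # controllable by the people being measured
    expires: date               # time-bound: every metric has an end date

    def retired(self, today: date) -> bool:
        """Move on when the goal is reached or the expiration date has passed."""
        return self.read() >= self.goal or today >= self.expires

# Usage sketch (illustrative numbers):
coverage = Metric("test coverage %", read=lambda: 82.0, goal=80.0,
                  expires=date.today() + timedelta(weeks=6))
print(coverage.retired(date.today()))  # goal reached, so retire and move on
```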
10. inward & outward looking metrics. Inward-looking metrics feed back into R&D; outward-looking metrics give feedback to Business & Other Stakeholders and act as boundary objects.
11. boundary objects. A metric can be a boundary object between business and R&D. Boundary object [sociology]: something that helps different communities exchange ideas and information; it may mean different things to different people but still allows coordination and alignment.
16. metrics quadrants. Business: Lead Time, Cycle Time, Quality of Service (SLA), Throughput, Business Value, Revenues, ROI, Customer Satisfaction, Bugs? Product / Process: WIP, Cadence, CI Failures, Rework, Impediments, Retrospectives, Morale, Code Quality, Technical Debt, Test Coverage, Team Maturity.
17. metrics quadrants (build-up of slide 16), annotated: what!? no velocity?
18. metrics quadrants: the business-facing metrics (Lead Time, Cycle Time, Quality of Service, Throughput, Business Value, Revenues, ROI, Customer Satisfaction) act as boundary objects.
19. metrics quadrants: a "fragile" flag appears on the Test Coverage / Team Maturity side.
20. metrics quadrants: an "agile" flag appears on the boundary-object metrics, opposite the "fragile" one.
24. agility. Being agile is not the goal, it's a means. If you are really interested, there are plenty of agility tests on the Internet: Nokia Test, Scrum Open Assessment (ScrumAlliance), Agile Maturity Model, Agile Evaluation Framework, Comparative Agility Assessment, etc.
25. impediments, retrospectives, reviews: # of questions answered / # of questions asked; # of action items addressed / # of action items assigned (at previous meetings); # of WTFs (WTF!?).
36. flow-related metrics. Active WIP vs. buffered WIP: tasks that are really in progress vs. tasks waiting to be handed off (count, %, % of time spent). Process efficiency: active time / cycle time. BIP: Bugs In Process. Technical-debt WIP / standard WIP. # of projects a person works on in parallel.
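Two of these flow metrics are easy to compute once you have per-task timestamps. The sketch below uses invented sample data (the task log and the board counts are not from the deck) to show process efficiency as active time over cycle time, and the active/buffered WIP split:

```python
def process_efficiency(active_hours: float, cycle_hours: float) -> float:
    """Fraction of the cycle time in which the task was actively worked on."""
    return active_hours / cycle_hours

# Illustrative task log: (hours actively worked, total hours from start to done)
tasks = [(8, 40), (5, 50), (12, 24)]
per_task = [process_efficiency(a, c) for a, c in tasks]
print(f"mean process efficiency: {sum(per_task) / len(per_task):.0%}")

# WIP split: tasks really in progress vs. waiting for hand-off (sample board)
active_wip, buffered_wip = 3, 5
waiting_share = buffered_wip / (active_wip + buffered_wip)
print(f"active WIP {active_wip}, buffered WIP {buffered_wip} "
      f"({waiting_share:.0%} of WIP is just waiting)")
```

Low process efficiency usually points at queues and hand-offs rather than slow work, which is why the slide pairs it with the buffered-WIP count.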
45. how long since? You last talked to a customer: 6 days. Last useful retrospective: 3 days. You last learned something at work: 52 days. Your boss last freaked out: 2 weeks. Last critical bug: 1 week.
46. and don’t forget the bus factor: the number of key developers that would need to be hit by a bus to kill the project.
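One crude way to estimate a bus factor is from code-ownership data. The sketch below is an assumption-laden illustration (the ownership map is invented, and the "more than half the files orphaned" threshold is a common heuristic, not a standard definition):

```python
from collections import Counter

def bus_factor(ownership: dict[str, str], threshold: float = 0.5) -> int:
    """Smallest number of top owners whose loss orphans > threshold of files."""
    files_per_author = Counter(ownership.values())
    orphaned, factor = 0, 0
    for _author, owned in files_per_author.most_common():
        factor += 1
        orphaned += owned
        if orphaned / len(ownership) > threshold:
            return factor
    return factor

# Invented sample: one developer owns most of the critical files.
ownership = {"core.py": "ann", "api.py": "ann", "ui.py": "bob",
             "db.py": "ann", "ci.yml": "cyd"}
print(bus_factor(ownership))  # ann alone owns 3/5 of the files -> 1
```

A bus factor of 1 is the alarming case the slide warns about: losing a single person kills the project.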
47. “per una vera mille sono finte” (F. De André): “for every true one, a thousand are fake”
metrics as learning & change agents. Metrics help us understand and pick one direction rather than another. If you're not making mistakes & changing your mind, you're not learning. If you're not learning, why are you iterating & collecting feedback? Shift from “build, measure, learn” to “learn, measure, build” (Lean Startup).
Single loop learning corrects an action to solve or avoid a mistake; double loop learning also corrects the underlying causes behind the problematic action. Single loop reaches a local maximum, not the global best; it is short term (PDCA, inspect). Single loop learning asks, “How can we do what we are doing better?”
Kaikaku vs. kaizen: switch off the autopilot! Argyris & Schön's Theory of Action. Double loop learning asks, “Why do we think this is the right thing to do?” Fix the causes, not the symptoms. Is “inspect and adapt” only single loop?
Focus on the positives: identifying the negatives and trying to fix them builds the wrong culture. Metrics are needed to improve, not to punish. BTW, failing is OK. Self-defined, simple, controllable, transparent, time-bound: created without extra effort and meaningful to all stakeholders; focused on things that are actually controllable by the people being measured. Leading: prefer an imperfect forecast of the future to a perfect report on the past. Visible and accessible without extra effort. Too many metrics -> overload -> waste.
Boundary objects help to carry meaning, promote communication, and assist with understanding. They serve as both containers and carriers. BOs are a highly abstract and generalized form of knowledge organization with considerable reification. Classification systems, ontologies, and paper forms serve as BOs where they are used by diverse groups. Prime examples of BOs are an ERP order-entry process, a shared information space, a product tracking number, an RFID tag.
Scorecard, sort of… Vanity metrics are things like registered users, downloads, and raw pageviews. They are easily manipulated, and do not necessarily correlate to the numbers that really matter: active users, engagement, the cost of getting new customers, and ultimately revenues and profits. The latter are more actionable metrics. “The only metrics that entrepreneurs should invest energy in collecting are those that help them make decisions.” (Eric Ries)
Outcome trumps output and activity. No measures related to individuals: “The basic building block of work is a team, not an individual” (Esther Derby).
Velocity gives no clue about effectiveness/value; it induces extra effort/waste to estimate “correctly” (!?), so don't use it to predict; it is easy to game, so don't use it as a target; measure throughput instead (# of stories per iteration).
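The suggested replacement, throughput, needs no estimation at all: just count stories finished per iteration. A tiny sketch with invented sprint names and counts:

```python
from statistics import mean, pstdev

# Invented sample: stories completed in each of the last four iterations.
done_per_iteration = {"sprint 12": 7, "sprint 13": 5,
                      "sprint 14": 8, "sprint 15": 6}

counts = list(done_per_iteration.values())
print(f"throughput: {mean(counts):.1f} +/- {pstdev(counts):.1f} "
      f"stories per iteration")
```

Counting finished stories takes no extra effort and is harder to game than story points, which fits the slide's "simple, controllable" criteria for a metric.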
Technical debt = 1 / design quality. Refactoring code that has no value is a waste of time.
If nobody likes or wants to use your product, code quality does not really matter. Technical debt = 1 / design quality. Technical debt != bad code. Ward Cunningham: “A little debt speeds development so long as it is paid back promptly with a rewrite. Every minute spent on not-quite-right code counts as interest on that debt. [Many] have explained the debt metaphor and confused it with the idea that you could write code poorly with the intention of doing a good job later. The ability to pay back debt [...] depends upon you writing code that is clean enough to be able to refactor as you come to understand your problem.”
Code quality is a leading indicator… there are successful products with poor quality, but…
It takes longer to reach the front of a large queue; this increases risk (customer & market) and variability (we move to a higher level of utilization, where variability is amplified); more queues = more tasks & projects to track & report on; delayed feedback means bad assumptions live longer in the code; and there is no need to hurry if downstream activities will happen weeks later anyway.
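The claim that higher utilization amplifies delay and variability can be illustrated with the classic M/M/1 queueing formula. This is a modeling assumption brought in for illustration, not something from the slides: mean wait in queue = service time × u / (1 − u), which blows up as utilization u approaches 100%.

```python
def mean_wait(service_time: float, utilization: float) -> float:
    """Mean time spent waiting in an M/M/1 queue, in the same units
    as service_time. Valid only for utilization strictly below 1."""
    assert 0 <= utilization < 1, "queues explode at 100% utilization"
    return service_time * utilization / (1 - utilization)

for u in (0.5, 0.8, 0.9, 0.95):
    print(f"utilization {u:.0%}: wait = {mean_wait(1.0, u):.1f}x service time")
```

Going from 50% to 95% utilization multiplies the wait from 1x to 19x the service time, which is why "keep everyone busy" is a poor flow strategy.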
It isn’t easy to ignore a blocked task and work on something else.
Check also the pizza index (overtime = pizzas ordered in; it should drop to zero).
Simple, controllable. Some of these metrics were mentioned by Arlo Belshee.