Agile Metrics V6
  • Slide 11: The best metrics (the only good ones) can be related directly to corporate goals. Process Engineering teaches you to start with goals analysis and work down the process tree to figure out which metrics are important and which are not. Recording lots of metrics does not mean that you're recording good metrics.
  • Very interesting approach. I think it could be integrated with backlog prioritization techniques to help decide the best subset of features to include in different releases on the roadmap.
  • Version 4
  • People are always happy to see this. Project Managers don’t want to spend all their time mucking about with spreadsheets and status reports. Team members don’t want to be distracted from their work to perform overhead activities. The minimalist approach to measurement is always met with smiles. And then...
  • The key attributes of useful metrics:
    (1) The measurement is used by some stakeholder to make decisions at some level. Measurements that are just filed away and never used are merely waste.
    (2) Level of detail. Each stakeholder can consume and use information at a particular level of detail. An executive will not be able to consume static code analysis statistics about cyclic dependencies; the executive will be able to consume information about code quality at a higher level of abstraction than that.
    (3) Scope. A team member will care about information pertaining to the team and project; a program manager will care about information pertaining to all the projects in his/her program; an executive will care about information pertaining to the enterprise as a whole.
    (4) Time frame. The customer or Product Owner needs up-to-the-minute information throughout the project; the program manager needs information pertaining to a release; an executive needs information pertaining to the timeframe of a strategic plan or budget period. The executive can’t consume or use a daily report of iteration progress; the time frame is too small to be meaningful in his/her job.
  • ...we get down to the nitty-gritty about what is “necessary.” The definition of “necessary” includes all stakeholders. Everyone involved in a project must understand and accept that a certain amount of time will be spent taking measurements they, personally, aren’t interested in. We don’t want to waste time measuring and tracking information that nobody uses. We do have to ensure all stakeholders receive the information they need.
  • Many organizations have experimented with agile methods and a significant minority have moved beyond the proof of concept or pilot project stage in applying agile methods. However, as this is written, no large organizations are using agile as their primary, mainstream software development approach. Agile remains a secondary or alternative approach to software development projects. Your organization probably falls somewhere along a spectrum between the “fully traditional” and “fully agile” extremes. Some of the organizational differences that have implications for project metrics are listed on the slide. For stakeholders whose interest is at the level of a single team or single project, if the organization is “fully agile” then the metrics presented until now will be sufficient (with the addition of financial metrics). If the organization is not “fully agile,” then you may need to provide additional project metrics to ensure the project’s true status is properly understood by all stakeholders, and to help immature agile teams improve their effectiveness. The specifics will vary by circumstances. Different organizations have different problems and are at different levels of maturity with agile and lean thinking and application. We will present a few examples in this presentation, but you may well have to think of metrics that are meaningful in the context of your own situation.
  • Can we derive a basic set of metrics for agile teams based on the principles of the Agile Manifesto and what we know of project stakeholders’ needs? If working software is the primary measure of progress, then let’s measure the amount of working software the team delivers. The Product Backlog contains a list of features the customer wants to see in the software product. (These are the functional requirements.) As the team delivers each feature, we can count the number of features that have been completed and that are running in the development environment with all tests passing. This number should climb as the team builds up more and more of the software.
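As a rough sketch of that counting rule (the `Feature` record and the sample backlog here are hypothetical, not taken from any particular tracking tool):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    deployed: bool       # running in the development environment
    tests_passing: bool  # all of its tests are passing

def running_tested_features(backlog):
    """Count features that are done: deployed with all tests passing."""
    return sum(1 for f in backlog if f.deployed and f.tests_passing)

backlog = [
    Feature("login",   deployed=True,  tests_passing=True),
    Feature("search",  deployed=True,  tests_passing=False),  # not counted yet
    Feature("reports", deployed=False, tests_passing=False),
]
print(running_tested_features(backlog))  # -> 1
```

Plotted per day or per iteration, this count should climb steadily as the team builds up the software; a flat or falling line is the signal to investigate.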
  • This is a metric to help track the “valuable” part of “valuable software.” Earned Business Value (EBV) may be measured in terms of hard financial value based on the anticipated return on investment prorated to each feature or User Story. Alternatively, EBV may be expressed as the relative value of features or User Stories. In either case, the development team must ask the customer or customer proxy to assign a value to each feature or User Story so that there will be a basis for this measurement. If the customer will not or cannot assign a value to each feature or User Story, then the next best thing is to assume the highest priority features are also the highest value features, and track “value” on the basis of the team’s delivery of high priority stories.
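One way to sketch the relative-value variant just described, assuming the customer or proxy has assigned a value to each story (all numbers here are illustrative):

```python
def earned_business_value(stories):
    """EBV as the fraction of total customer-assigned value delivered so far."""
    total = sum(value for value, delivered in stories)
    earned = sum(value for value, delivered in stories if delivered)
    return earned / total if total else 0.0

# (customer-assigned value, delivered?) for each story in the backlog
stories = [(50, True), (30, True), (15, False), (5, False)]
print(earned_business_value(stories))  # -> 0.8
```

With hard financial values prorated from the anticipated ROI, the same calculation applies; only the meaning of the value numbers changes.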
  • Sample velocity chart.
  • This principle suggests three things: (1) Customer satisfaction, (2) early and continuous delivery, and (3) valuable software. Velocity is a measure of the amount of work the team completes per iteration. Features are usually divided into User Stories, and User Stories are sized by the development team in terms of story points. When the customer accepts a story as complete, the team is credited with the number of story points associated with that story. The team receives no “partial credit” for incomplete stories. Therefore, by tracking velocity we are tracking customer satisfaction. Since velocity is calculated in each iteration, and software is demonstrated to the customer in each iteration, by tracking velocity we are tracking “early and continuous delivery.”
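The no-partial-credit rule for velocity is easy to state in code; the story data below is hypothetical:

```python
def iteration_velocity(stories):
    """Velocity: story points for stories the customer accepted as complete.
    Incomplete stories earn no partial credit."""
    return sum(points for points, accepted in stories if accepted)

# (story points, accepted by the customer?) for one iteration
iteration = [(3, True), (5, True), (8, False), (2, True)]
print(iteration_velocity(iteration))  # -> 10
```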
  • Snapshot of static code analysis output for a real project. It looks like the team has gotten carried away with the graphical capabilities of their reporting tool. Too small to read on a slide. Important parts: Test coverage 73%, tests passing 99.1%, most complex packages, least tested methods, some of the statistics in the blue section.
  • Closeup of statistics from static code analysis shown on the previous slide.
  • Graphic taken from here: http://hackystat.ics.hawaii.edu/hackystat/docbook/ch10s04.html
  • EV is the sum of the planned value (PV) of all the work items completed to date. It is based on budgeted cost (BCWP), not on actual cost (ACWP). Therefore, it is useful for seeing schedule variance and budget variance, but it does not give us information about cost overruns based on actual costs. Many people include a trend line in their EV charts to track ACWP as well as EV so that the real costs are visible.
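In code form, with invented work items, the distinction looks like this: EV accumulates the budgeted cost of completed items, while ACWP has to be tracked separately before real cost overruns become visible.

```python
# (planned value, completed?, actual cost incurred) per work item -- all hypothetical
items = [(10, True, 12), (20, True, 18), (15, False, 5)]

ev   = sum(pv for pv, done, _ in items if done)  # EV / BCWP: budgeted cost of completed work
pv   = sum(pv for pv, _, _ in items)             # planned value scheduled to date
acwp = sum(cost for _, _, cost in items)         # actual cost of work performed

sv = ev - pv    # schedule variance (negative = behind plan)
cv = ev - acwp  # cost variance; only visible because ACWP is tracked alongside EV
print(ev, sv, cv)  # -> 30 -15 -5
```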
  • Differences between predictive and adaptive planning for purposes of using EVM. The main difference is that the scope is defined at a finer level of granularity with predictive planning than with adaptive planning. With predictive planning, the level of detail gives the illusion of accuracy through false precision. In fact, details cannot be known in advance with a high degree of accuracy. With adaptive planning, the coarse level of detail means low precision. Provided people understand the plan is only approximate, the result may be higher accuracy, but only within a relatively wide margin of error.
  • Agile development projects follow either an iterative process or a non-iterative process. With an iterative process, the work is divided into equal-length time periods called “iterations” or “sprints.” The team commits to deliver a fixed amount of work in each iteration, chosen by the primary stakeholder according to business priorities. With a non-iterative process, the team works from a prioritized queue of work items and completes the items one at a time. This process is based on the lean manufacturing concepts of “customer pull” and “single-piece flow.” In either case we can usually apply EVM by breaking the costs down into fixed-length time intervals.
  • The EVM calculations depend on our being able to define a discrete level of effort for each work item. In situations when that is not feasible, EVM may yield inaccurate and misleading results.
  • The Value Delivery quadrant of the scorecard might look something like this. The example has snapshots of three charts from a spreadsheet program that show metrics relevant to value delivery and release status: Earned Business Value, Running Tested Features, and Release Burndown. It also contains a simple indication of the general status of delivery risks. The example shows a yellow light, which means not every issue has been resolved, but there are no critical issues.

    On the flipchart or whiteboard, write “Delivery Effectiveness” as the title of the upper right-hand quadrant on the scorecard. Ask participants which agile metrics pertain to this category. Possible answers:

    Burn chart. The bar chart version of the burndown chart is based on the same data as the line version we displayed in the Value Delivery quadrant, but it makes the team’s effectiveness visible by correcting for scope changes. The tops of the bars descend smoothly when the team’s velocity is stable. Additional scope is shown at the bottom of each bar in a contrasting color, dropping below the zero line.

    Velocity chart. This shows the quantity of work the team has completed (through customer acceptance) in each iteration. Note that velocity cannot be compared directly across teams or across projects. Story sizes depend on the particular team, the problem they are solving, and the technical environment of the solution. Different teams may settle on different scales for story sizes and will reach different consensual agreements about how many points a story deserves. What is of interest in the velocity chart are variations, trends, and patterns over time.

    Those two metrics should be sufficient for a mature agile team operating in a supportive organizational culture. However, if the team is not applying agile methods very well, or if the surrounding organization does not understand where the waste lives in its pre-agile methods, some additional metrics may be appropriate here. Your specific needs will vary, and you should come up with solutions tailored to your situation. A couple of the items we’ve discussed might be appropriate: Al Goerner’s Release Progress Report Card will show the gap between value-added work and overhead work. The simple metric, Story Cycle Time, will expose problems with “hangover”: stories started in one iteration and completed in a subsequent iteration.
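Story Cycle Time, in the sense used here, can be computed from the iteration in which each story was started and finished (the sample data is invented):

```python
def cycle_times(stories):
    """Cycle time in iterations: finish iteration minus start iteration, plus one."""
    return [finish - start + 1 for start, finish in stories]

# (iteration started, iteration completed) for each story
stories = [(1, 1), (1, 2), (2, 2), (2, 4), (3, 3)]
times = cycle_times(stories)
hangover = sum(1 for t in times if t > 1)  # stories carried into a later iteration
print(times, hangover)  # -> [1, 2, 1, 3, 1] 2
```

Any cycle time greater than one iteration is a “hangover” story and worth a look in the retrospective.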
  • Here is an example of the Delivery Effectiveness quadrant of a scorecard. This example includes the release burndown in bar chart form. Although the burndown is displayed in the Value Delivery quadrant, the bar chart format highlights the team’s effectiveness more obviously than the line chart version does. The velocity chart shows how consistently the team delivers the quantity of work that has been established as “normal” for this particular team on this particular project under this particular set of circumstances. Excessive variation in velocity indicates a problem with delivery effectiveness. In the example, the team attained a very low velocity in one of the past iterations. Whatever the problem was, it appears the team has dealt with it successfully, since the pattern has not recurred and there is no negative trend in velocity.

    This example also includes the Release Progress Report Card, which may be useful in cases where a team is spending an inordinate amount of time on overhead work as opposed to value-add work. It also includes the Story Cycle Time metric, in this case showing that the team often takes two iterations to bring a story to completion. That is a “smell” that calls for further investigation.

    On the flipchart or whiteboard, write “Software Quality” as the title of the lower left-hand quadrant. Ask participants to name some factors or metrics that pertain to software quality. Most people will probably mention bugs or defects without much quantification of the metrics they are thinking about. Some people might mention customer satisfaction as a quality attribute. Some of the -ilities may come up as well. These are all good answers. There are also some static code analysis metrics that pertain to code quality.
  • This example lists several items that might be appropriate under the heading of Software Quality. Customer Satisfaction is not really a “metric.” It may be any sort of feedback, formal or informal, that indicates the customer’s level of satisfaction with the code delivered to date, or with his/her interaction with the team.

    Non-functional requirements are characteristics of a software system that may be seen as quality attributes. For example, “availability” is a quality attribute if there are specific requirements for system availability. Availability is measurable, and you can report metrics to show the current level of quality of the system with respect to that requirement. Other quality attributes can only be determined subjectively; for example, “usability.”

    Most contemporary software development environments include static code analysis features. Some structural attributes of a code base speak to quality in one way or another. Be wary of overdoing it, as many static code analysis tools offer a huge variety of statistics and a wide array of compelling graphical representations of the data. Only include metrics the team can act upon to improve quality.

    Defect density is a somewhat crude but widely used indicator of quality. A “defect” may be defined in whatever way makes sense in your environment. Typically, defects include the bugs that have been reported against code the team has already delivered plus the number of failing tests in the latest build. Express the sum of these values as a ratio over KLOC. Industry norms reported by IBM for applications written in languages like Java, C++, and C# are about 0.362. This gives your team a target to aim for, although one would hope that disciplined use of agile methods would keep the level very close to zero.

    Mention that for external-facing status reports (for instance, reports upward in the management hierarchy), the fourth quadrant might be devoted to financial metrics. For inward-facing status reports (information the team will use for its own purposes), the fourth quadrant can be devoted to continuous improvement.

    Let’s briefly consider financial metrics, bearing in mind that it is a topic of some complexity that cannot be covered well in a couple of minutes. The chain of command from project manager to program manager to CIO will be interested in how each level of management below his/her own is handling the budget. Corporate IT departments are treated as cost centers. As such, they are allocated a fixed budget, usually on an annual basis. Managers are considered good financial managers if they burn their budget allocation smoothly over the course of the fiscal year and end up at zero. Obviously, this is not a very businesslike view of financial management. Yet it may be a requirement that you report up the chain on how your portion of the IT budget is burning. It is easy enough to include this information on the scorecard.

    A more interesting financial metric might be the project’s ROI. We use the term ROI loosely in this context. It is not possible to base the ROI calculation on real numbers, since the business sponsor of an IT project is gambling on the value of the project. He/she has calculated an expected return somehow, but however it was calculated, it remains a somewhat subjective and hopeful number. We cannot know whether the new system yielded any value until after the fact, when we can collect data on it in production and in the context of the business operations it supports. Even then, we cannot know what proportion of the ROI is attributable to the software as such and what proportion is a result of general business process improvement or marketing activities. Despite these limitations, a report of ROI may be a useful communication tool in a mixed environment where the agile teams must prove their worth to a skeptical organization.

    The next couple of slides provide some general background information on applying Throughput Accounting principles to agile software development projects.
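The defect density calculation described under Software Quality reduces to a few lines; the bug counts and code size below are made up for illustration:

```python
def defect_density(reported_bugs, failing_tests, lines_of_code):
    """Defects per KLOC: reported bugs plus failing tests in the latest build."""
    return (reported_bugs + failing_tests) / (lines_of_code / 1000.0)

# Hypothetical project: 8 open bug reports, 1 failing test, 25,000 lines of code
density = defect_density(8, 1, 25_000)
print(round(density, 3))  # -> 0.36
```

A team near the industry norm cited above would see a value around 0.36; a disciplined agile team should be well below that.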
  • A scorecard for the team to use might include a section on continuous improvement opportunities. Any team can find ways to improve its effectiveness; and let’s face it, most teams that claim to be using agile methods today are not really very disciplined about pushing the envelope. Any areas the team decides it wants to work on can be included. By displaying these on the scorecard, the team has a visible reminder of its commitment to continuous improvement and of the specific areas the team members have chosen to focus on just now. The results of any improvements in the team’s working style will eventually be reflected in the other quadrants of the scorecard, as well as in the quality of the code the team delivers.

    Of the examples shown, “build frequency” can be taken directly from the continuous integration server; “escaped defects” (those that get past the development team to be discovered later, possibly by users or by a QA group) can be determined from production support tickets or bug reports; “use of TDD” can be directly measured on Java projects by a new Eclipse plug-in under development at the University of Hawaii called “Zorro.” Use of TDD may also be inferred indirectly from some of the other metrics, such as cyclomatic complexity, structural complexity, and defect density. “Big-bang refactorings” may be reported by team members or noticed in tell-tale trends in velocity metrics.

    Ask participants what other areas of agile practice their teams might want to consider as opportunities for improvement.
  • The next few slides provide some examples of scorecards that other companies have come up with. They are provided as examples only, and we won’t spend much time on them.
  • Nothing much to note here except that this is a good example of a “fancy” agile tracking tool. Beware of the allure of pretty graphics. It’s easy to be led astray and start including a lot of unnecessary data on information radiators and status reports. This can cause the useful information to get lost in the noise.
  • This is a screenshot from Serena’s agile project tracking tool. Like other scorecards, it divides the display into sections that focus on particular aspects of the project. The product presents different views of the same data depending on what you’re interested in. The example shows a Release Status View. The scorecard omits information that isn’t especially pertinent to release status. That helps keep the scorecard small and reduces visual clutter so people can see what they need to see.

Agile Metrics V6: Presentation Transcript

  • Agile Metrics Dave Nicolette
  • An approach to software development based on the values and principles expressed in the Agile Manifesto. http://www.agilemanifesto.org Definition: Agile Software Development Copyright © 2007-2009 Dave Nicolette
  • A metric is a standard for measuring or evaluating something. A measure is a quantity, a proportion, or a qualitative comparison of some kind.
    Quantity: "There are 25 open defect reports on the application as of today."
    Proportion: "This week there are 10 percent fewer open defect reports than last week."
    Qualitative comparison: "The new version of the software is easier to use than the old version."
    Definition: Metrics Copyright © 2007-2009 Dave Nicolette
    • Informational – tells us what’s going on
    • Diagnostic – identifies areas for improvement
    • Motivational – influences behavior
    • One metric may function in multiple categories.
    • Example: Delivering high value to customers (informational) can increase team morale (motivational).
    • Beware of unintended side-effects.
    • Example: Rewarding people for fixing bugs may result in an increase in bugs, as people create opportunities to earn the rewards.
    Three Kinds of Metrics Copyright © 2007-2009 Dave Nicolette
  • Metrics as Indicators
    Leading Indicator: suggests future trends or events
    Lagging Indicator: provides information about outcomes
    Copyright © 2007-2009 Dave Nicolette
  • Einstein’s Wisdom Copyright © 2007-2009 Dave Nicolette
  • Agile Rule of Thumb About Metrics Measure outcomes, not activity. Copyright © 2007-2009 Dave Nicolette
  • A Minimalist Philosophy of Metrics Measure everything necessary and nothing more. Copyright © 2007-2009 Dave Nicolette
  • For each stakeholder...
    All the information they need to make decisions, and no more.
    Information at the level of detail they can use.
    Information at the scope they care about (team, project, program, line of business, enterprise, industry).
    Information pertaining to the time frame they care about (day, iteration, release, project, strategic milestone).
    Copyright © 2007-2009 Dave Nicolette
  • Stakeholders: Team member, Product Owner, ScrumMaster, Project Manager, User, Executive, Auditor, Process Improvement Researcher, Production Support. Copyright © 2007-2009 Dave Nicolette
    • General style of the agile process
    • Type of work being done
    • How the work is decomposed and planned
    • The team's self-organization choices
    • Characteristics of the organization
    • The team's continuous improvement goals
    Factors Influencing Your Choice of Metrics Copyright © 2007-2009 Dave Nicolette
  • Styles of Agile Software Development
    Iterative: based on time-boxed iterations of fixed duration
    Continuous Flow: based on principles derived from Lean Manufacturing
    Copyright © 2007-2009 Dave Nicolette
  • Broad Categories of Business SW Development
    New Dev or Major Enhancement: project with a defined “end” to build a planned scope of work
    Ongoing Maintenance & Support: variable rate of incoming work requests, no defined “end” or planned scope
    Copyright © 2007-2009 Dave Nicolette
    • Variation 1:
    • Short iterations (e.g., 1 week)
    • Low-overhead process
    • High maturity in agile thinking
    • Unit of work for execution is the User Story
    • User Stories are not decomposed into Tasks
    • User Stories are small and of consistent size
    • No sizing or estimation of fine-grained work items
    • Team commits to completing selected User Stories
    • No daily burn tracking
    How Agile Teams Plan Their Work (Short Term) Copyright © 2007-2009 Dave Nicolette
    • Variation 2:
    • Short to medium length iterations (e.g., 1-2 weeks)
    • Reasonably low-overhead process
    • Moderately high maturity in agile thinking
    • Unit of work for execution is the User Story
    • User Stories are not decomposed into Tasks
    • User Stories are small and of consistent size
    • Team sizes User Stories relative to each other
    • Team commits to completing a given number of Story Points
    • No daily burn tracking
    How Agile Teams Plan Their Work (Short Term) Copyright © 2007-2009 Dave Nicolette
    • Variation 3:
    • Short to medium length iterations (e.g., 1-2 weeks)
    • Moderately low-overhead process
    • Average maturity in agile thinking
    • Unit of work for execution is the Task
    • User Stories are decomposed into Tasks
    • Variations in story size may affect planning
    • Team sizes User Stories relative to each other
    • Team estimates Tasks in terms of ideal time
    • Team agrees to try to complete a given number of ideal hours or days of work
    • Daily burn tracking and re-estimation of Tasks
    How Agile Teams Plan Their Work (Short Term) Copyright © 2007-2009 Dave Nicolette
    • Variation 4:
    • Iteration length up to 6 weeks
    • Iterative process with some elements of agile work
    • Low maturity in agile thinking
    • Unit of work for execution is the Task
    • User Stories are decomposed into Tasks
    • Story points (if used) are pegged to ideal time
    • Team estimates Tasks in terms of ideal time
    • Team uses a “load factor” to guess at the amount of non-ideal time
    • Team agrees to work a given number of hours or days
    • Daily burn tracking and re-estimation of Tasks
    How Agile Teams Plan Their Work (Short Term) Copyright © 2007-2009 Dave Nicolette
  • The Team's Self-Organization Choices Copyright © 2007-2009 Dave Nicolette
    • Generalizing specialists – peer model
    • Chief Programmer / technical lead model
    • Specialists with internal hand-offs
  • Characteristics of the Organization Copyright © 2007-2009 Dave Nicolette
    • Fully supportive lean or “agile” organization
    • Organization embraces “agile,” but some areas operate in a traditional way
    • “Agile” is an experiment or skunkworks operation, with a low level of organizational buy-in
  • Organizational Differences (Traditional organization vs. Agile/Lean organization)
    Culture: risk aversion, blame-shifting, competition, zero-sum thinking, fear of failure vs. risk management, trust, transparency, collaboration, failure as learning opportunity
    Structure: administrative separation between application developers and their customers vs. application developers work for the lines of business they serve; central IT is for central functions
    Management philosophy: command-and-control, Theory X, crack the whip vs. self-organizing teams, Theory Y, enable and support people
    Teams: temporary assignment, multiple assignment, functional silos vs. stable teams, dedicated teams, cross-functional teams
    Financial management: cost-center mentality, Cost Accounting vs. profit-center mentality, Throughput Accounting
    Copyright © 2007-2009 Dave Nicolette
  • Team's Self-Improvement Goals
    Choose metrics that track the team's progress toward self-improvement goals.
    Discontinue these metrics when the goals have been achieved.
    Copyright © 2007-2009 Dave Nicolette
  • "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." and "Working software is the primary measure of progress." Two Agile Principles That Guide the Choice of Metrics Copyright © 2007-2009 Dave Nicolette
  • Running Tested Features Graphic from Ron Jeffries
  • Running Tested Features Copyright © 2007-2009 Dave Nicolette
  • Running Tested Features
    • Process style: time-boxed iterations or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: as each feature is delivered
    Copyright © 2007-2009 Dave Nicolette
  • Running Tested Features
    • Principle: “Working software is the primary measure of progress.”
    • Informational: direct measure of delivered results.
    • Diagnostic: if flat or declining over time, a problem is indicated.
    • Motivational: team members naturally want to see RTF increase.
    Copyright © 2007-2009 Dave Nicolette
  • Forms of “Business” Value
    • Revenue
    • Cost savings
    • Market share
    • Customer relations
    • Reputation
    Copyright © 2007-2009 Dave Nicolette
  • Tracking Hard Financial Value
    • Profit = Income - Costs
    • Incremental delivery to production
    • Baseline: calculate the profitability of the system/process being replaced or enhanced
    • Per release: calculate the change in profitability
    Copyright © 2007-2009 Dave Nicolette
  • Tracking Hard Financial Value Copyright © 2007-2009 Dave Nicolette
  • Hard Financial Value
    • Process style: time-boxed iterations or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: as process performance is observed in production operation
    Copyright © 2007-2009 Dave Nicolette
  • Hard Financial Value
    • Principle: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
    • Informational: direct measure of financial value delivered.
    • Diagnostic: a downward trend or projection can inform business decisions about continuing or modifying the project.
    • Motivational: team members like to deliver value because it makes them feel they are contributing to the success of the organization; stakeholders are motivated to pay attention to the business value of incremental releases.
    Copyright © 2007-2009 Dave Nicolette
  • Tracking Projected Value Copyright © 2007-2009 Dave Nicolette When incremental delivery is not to production...
  • Earned Business Value from Dan Rawsthorne, Calculating Earned Business Value for an Agile Project , 2006
  • Earned Business Value
    • BV(bucket) = BV(parent) x wt(bucket) / (wt(bucket) + Σ wt(siblings))
    • This yields a percentage value for each item delivered to the customer, representing the relative business value of the item as defined by the customer.
    Copyright © 2007-2009 Dave Nicolette
  • Earned Business Value Example
    • Applying the formula to “Update Cust Info” in the feature decomposition, we have
    • 100% x (3 / 4) x (1 / 1) x (10 / 40) x (10 / 20) = 9.4% (approx)
    • This means that when the team delivers the item named “Update Cust Info,” they will have delivered 9.4% of the business value of the project, according to the customer’s own definition of the relative value of each item.
    Copyright © 2007-2009 Dave Nicolette
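The decomposition walk above can be sketched in code. This Python sketch is illustrative and not part of the original deck; the `ebv_share` helper is a hypothetical name, and the weight pairs come from the “Update Cust Info” example.

```python
# Hypothetical sketch of the Earned Business Value calculation.
# Each tree level contributes wt(bucket) / (wt(bucket) + sum of sibling
# weights); multiplying the fractions from root to leaf gives the item's
# share of the project's total business value.

def ebv_share(path):
    """path: (weight, weight_of_bucket_plus_siblings) pairs, root to leaf."""
    share = 1.0
    for weight, total in path:
        share *= weight / total
    return share

# "Update Cust Info": (3/4) x (1/1) x (10/40) x (10/20)
share = ebv_share([(3, 4), (1, 1), (10, 40), (10, 20)])
print(f"{share:.1%}")  # prints 9.4%
```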
  • Earned Business Value: When To Use It
    • Yes: the scope of the project is well known up front and it is possible to develop a fairly comprehensive decomposition of features before development begins.
    • No: there is a high level of uncertainty about scope, and the expectation is that scope will emerge as the team makes progress and stakeholders learn more about the problem and the solution.
    • EBV breaks down in the latter case because as new scope is added, the percentage of business value already delivered decreases. This makes it appear as if the project is taking business value away from the customer.
    Copyright © 2007-2009 Dave Nicolette
  • Earned Business Value by Points (early in project): Feature Group A = 600, Feature Group B = 300, Feature Group C = 100
  • Earned Business Value by Points (part-way through): Feature Group A decomposed into Feature A-1 = 400 and Feature A-2 = 200, which are further decomposed into stories (Story 1 = 50, Story 2 = 35, etc.) Copyright © 2007-2009 Dave Nicolette
  • Earned Business Value Charts See the Agile Metrics spreadsheet, “EBV Charts” sheet, for examples Based on percentages Based on points Copyright © 2007-2009 Dave Nicolette
  • Earned Business Value
    • Process style: time-boxed iterations or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: as each feature is delivered
    Copyright © 2007-2009 Dave Nicolette
  • Earned Business Value
    • Principle: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
    • Informational: direct measure of customer-defined value delivered.
    • Diagnostic: the trend should be an S curve; otherwise, problems in prioritization or valuation are indicated.
    • Motivational: team members like to deliver value because it makes them feel they are contributing to the success of the organization; stakeholders are motivated to pay attention to the business value of incremental releases.
    Copyright © 2007-2009 Dave Nicolette
  • Velocity
    • Velocity is...
    • An empirical observation of the team’s capacity to complete work per iteration.
    • ...and not...
    • an estimate
    • a target to aim for
    Copyright © 2007-2009 Dave Nicolette
  • Velocity
    • Velocity is...
    • Based on the team’s own sizing of work items
    • ...and not...
    • based on estimated or actual time
    • dictated or imposed by anyone other than team members
    Copyright © 2007-2009 Dave Nicolette
  • Velocity
    • Velocity is...
    • Comparable across iterations for a given team on a given project
    • ...and not...
    • comparable across teams
    • comparable across projects
    Copyright © 2007-2009 Dave Nicolette
  • Unit of Measure for Velocity
    • Commitment to stories: measure in stories
    • Relative sizing (points): measure in points
    • Estimation (ideal hours): measure in ideal hours
    Copyright © 2007-2009 Dave Nicolette
  • What Counts Toward Velocity? Only completed work counts toward velocity Copyright © 2007-2009 Dave Nicolette
  • Velocity Copyright © 2007-2009 Dave Nicolette
  • Velocity
    • Process style: time-boxed iterations or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: at the end of each iteration
    Copyright © 2007-2009 Dave Nicolette
  • Velocity
    • Principle: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
    • Informational: empirical observation of the team’s capacity for work; useful for projecting the likely completion date of a given amount of scope; useful for estimating the amount of scope that can be delivered by a given date.
    • Diagnostic: patterns in velocity trends indicate various problems; provides a baseline for continuous improvement efforts.
    • Motivational: team members take pride in achieving a high velocity and keeping it stable.
    Copyright © 2007-2009 Dave Nicolette
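The projection use of velocity can be sketched in a few lines. This is an illustrative Python sketch with invented numbers, not part of the original deck: given recent observed velocities and the points remaining in scope, estimate how many iterations are left.

```python
import math
from statistics import mean

# Hypothetical data: points completed per recent iteration, and the
# points remaining in the release backlog.
velocities = [21, 18, 23, 20]
remaining_points = 120

avg_velocity = mean(velocities)  # empirical capacity per iteration (20.5)
iterations_left = math.ceil(remaining_points / avg_velocity)
print(iterations_left)  # prints 6
```

Rounding up with `ceil` reflects that a partially used iteration is still a whole iteration on the calendar.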
  • Putting Velocity to Work: Burn Charts
    • Burn-down chart: how much work remains to be completed?
    • Burn-up chart: how much work has been completed?
    • Combined burn chart: how much work has been completed and how much remains?
    Copyright © 2007-2009 Dave Nicolette
  • Burndown Chart – Line Style Copyright © 2007-2009 Dave Nicolette
  • Burndown Chart – Bar Style Copyright © 2007-2009 Dave Nicolette
  • Burnup and Burndown Chart Copyright © 2007-2009 Dave Nicolette
  • Burn Chart
    • Process style: time-boxed iterations or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: at the end of each iteration (time-boxed), or at fixed time intervals, e.g. monthly (continuous flow)
    Copyright © 2007-2009 Dave Nicolette
  • Burn Charts
    • Principle: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
    • Informational: direct measure of work remaining; projected completion dates; impact of scope changes on schedule.
    • Diagnostic: indicates whether scope changes or team performance is the cause of schedule variance.
    • Motivational: team members are motivated by seeing clearly when they are likely to finish the project and by seeing the amount of work remaining steadily reduced.
    Copyright © 2007-2009 Dave Nicolette
  • Little’s Law
    • LT = WIP (units) / ACR (units per time period)
    • WIP = LT x ACR
    • ACR = WIP / LT
    • Lead Time (LT) is the time required to deliver a given amount of work.
    • WIP is work in process – items started but not completed.
    • ACR is the average completion rate.
    Copyright © 2007-2009 Dave Nicolette
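The three forms of Little’s Law are just rearrangements of one relationship, as this illustrative Python sketch (numbers invented) shows:

```python
# Little's Law: LT = WIP / ACR. Any one quantity can be derived from
# the other two. The numbers are illustrative.
wip = 12     # items started but not completed
acr = 3.0    # average completion rate, items per week

lead_time = wip / acr              # LT = WIP / ACR
assert wip == lead_time * acr      # WIP = LT x ACR
assert acr == wip / lead_time      # ACR = WIP / LT
print(lead_time)  # prints 4.0
```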
  • Cumulative Flow Diagram Copyright © 2007-2009 Dave Nicolette
  • Cumulative Flow Diagram Copyright © 2007-2009 Dave Nicolette Lead time WIP inventory
  • Cumulative Flow Diagram
    • Process style: time-boxed iterations* or continuous flow
    • Nature of the work: ongoing support or delivery of defined scope
    • Frequency: at the end of each iteration (time-boxed), or at fixed time intervals, e.g. monthly (continuous flow)
    • * if release cadence is decoupled from development cadence
    Copyright © 2007-2009 Dave Nicolette
  • Cumulative Flow Diagram
    • Principle: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
    • Informational: visualization of flow; empirical observation of lead time and WIP queue depth.
    • Diagnostic: exposes capacity constraints and not-immediately-available constraints.
    • Motivational: team members take pride in seeing the workflow in a visual form.
    Copyright © 2007-2009 Dave Nicolette
  • More Agile Principles That Guide the Choice of Metrics
    • “Continuous attention to technical excellence and good design enhances agility.”
    • “Simplicity – the art of maximizing the amount of work not done – is essential.”
    • “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”
    Copyright © 2007-2009 Dave Nicolette
  • Static Code Analysis Example Copyright © 2007-2009 Dave Nicolette
  • Static Code Analysis Example cyclomatic complexity not covered by tests warns of large methods Copyright © 2007-2009 Dave Nicolette
  • Automated Inference of TDD Practices Copyright © 2007-2009 Dave Nicolette
  • Earned Value Management (EVM) Myth: “EVM doesn’t apply to agile projects because it requires a detailed WBS at the outset.” Copyright © 2007-2009 Dave Nicolette
  • Earned Value Formula
    • EV = Σ PV(completed work items), summed from project Start to the Current period
    Copyright © 2007-2009 Dave Nicolette
  • Adapting Earned Value Management (EVM) to Agile Projects
    • Predictive planning (traditional)
    • Detailed work breakdown structure at the outset
    • Method of quantifying “done” for each item in the WBS
    • Definition of the value of each item in the WBS
    • Track planned (BCWS) and actual costs (ACWP)
    • EV is the budgeted cost of work performed (BCWP)
    • Adaptive planning (agile)
    • Scope defined at a high level at the outset (features)
    • Definition of “done” for each feature in scope
    • Definition of the value of each feature in scope
    • Track planned (budget) and actual costs (spend)
    • EV is the budgeted cost of features delivered
    Copyright © 2007-2009 Dave Nicolette
  • Budgeted Cost of Work Scheduled (BCWS) on Agile Projects
    • Iterative process (or non-iterative process with equal-length releases):
    • Sum of one-time costs / number of iterations (or releases) = one-time cost allocation per iteration
    • Total on-going costs per iteration x number of iterations = total on-going costs
    • BCWS = sum of one-time costs + total on-going costs
    • Cost per iteration = one-time cost allocation per iteration + on-going costs per iteration
    • Non-iterative process with variable release schedule:
    • Sum of one-time costs / a chosen time interval (e.g., week) = one-time cost allocation per time interval
    • Total on-going costs per time interval x number of time intervals = total on-going costs
    • BCWS = sum of one-time costs + total on-going costs
    • Cost per time interval = one-time cost allocation per time interval + on-going costs per time interval
    Copyright © 2007-2009 Dave Nicolette
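The BCWS arithmetic for the iterative case, plus the agile EV definition (budgeted cost of features delivered), can be sketched as follows. All figures and feature names here are invented for illustration:

```python
# Hypothetical figures for an iterative project.
one_time_costs = 60_000              # e.g., hardware and licenses
ongoing_cost_per_iteration = 25_000  # e.g., staffing per iteration
iterations = 12

# One-time costs are spread evenly across iterations.
cost_per_iteration = one_time_costs / iterations + ongoing_cost_per_iteration
bcws = one_time_costs + ongoing_cost_per_iteration * iterations

# EV = budgeted cost of features delivered so far.
planned_value = {"Feature A": 40_000, "Feature B": 30_000, "Feature C": 50_000}
delivered = ["Feature A", "Feature C"]
ev = sum(planned_value[f] for f in delivered)

print(cost_per_iteration, bcws, ev)  # prints 30000.0 360000 90000
```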
  • EV Examples See the Agile Metrics spreadsheet, “EV Iterative” and “EV Non-iterative” sheets for examples. Iterative process Non-iterative process Copyright © 2007-2009 Dave Nicolette
  • When EVM is Applicable
    • Yes: the level of effort per task is well understood. Example: a corporate intranet CRUD webapp based on existing standards.
    • No: the project involves a high degree of uncertainty and will involve prototyping, spiking, research, and/or experimentation. Example: the company’s first business application using an unfamiliar programming language.
    • No: work items are added to the work queue in an unpredictable fashion. Example: a production support group that addresses bug reports as they are received.
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting
    • Throughput (T): the rate at which a system produces goal units (money). T = S - TVC, where S = net sales and TVC = totally variable cost.
    • Investment (I): the money tied up in the system.
    • Operating Expense (OE): the cost of generating goal units.
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting
    • Net Profit (NP) is throughput less operating expense: NP = T - OE
    • Return on Investment (ROI) is net profit / investment: ROI = NP / I
    • TA Productivity is throughput / operating expense: TAP = T / OE
    • Investment Turns (IT) are throughput / investment: IT = T / I
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting and Iterative Agile Methods
    • There are no “sales” and therefore no “sales price” in internal IT projects.
    • Use the project budget as the sales price.
    • Investment is the total cost of preparing the Master Story List or Product Backlog – the list of all the features to be developed. May include:
    • All up-front analysis costs
    • All up-front requirements elaboration costs
    • All project planning, release planning, and iteration planning costs
    • Operating Expense includes all costs for the iteration except investment.
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting: Investment for a Release
    • I = I(release) + Σ I(n), summed over iterations n = 0 to the number of iterations
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting: Operating Expense for a Release
    • OE(release) = OE(iteration) x number of iterations
    Copyright © 2007-2009 Dave Nicolette
  • Throughput Accounting Example
    • See the Agile Metrics spreadsheet, “TA Iterative” sheet, for an example.
    • NP (net profit) isn’t really profit; it tells you whether you’re doing better than your budget.
    • I (inventory) is the cost of requirements, analysis, writing acceptance tests, and writing user stories.
    • OE (operating expense) is the cost of building the solution. If you can drive these down, then T (throughput) and NP (net profit) will go up.
    Copyright © 2007-2009 Dave Nicolette
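The Throughput Accounting measures above can be worked through numerically. This illustrative Python sketch uses the internal-IT adaptation described on the slides (the project budget stands in for the sales price); all figures are invented:

```python
# Hypothetical internal-IT project figures.
budget = 500_000             # stands in for net sales (S)
tvc = 50_000                 # totally variable cost (TVC)
investment = 80_000          # cost of preparing the backlog (I)
operating_expense = 300_000  # cost of building the solution (OE)

throughput = budget - tvc                     # T = S - TVC
net_profit = throughput - operating_expense   # NP = T - OE
roi = net_profit / investment                 # ROI = NP / I
tap = throughput / operating_expense          # TAP = T / OE
investment_turns = throughput / investment    # IT = T / I

print(net_profit, roi)  # prints 150000 1.875
```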
  • Reliability of Promises Reliable promise: I deliver as promised, or I tell you I can't deliver as soon as I know it. Copyright © 2007-2009 Dave Nicolette
  • Niko-Niko Calendar Symbols: positive, neutral, and negative Copyright © 2007-2009 Dave Nicolette
  • Niko-Niko Calendar Patterns Copyright © 2007-2009 Dave Nicolette
  • Niko-Niko Calendar Patterns Copyright © 2007-2009 Dave Nicolette
  • Niko-Niko Calendar Example Copyright © 2007-2009 Dave Nicolette
  • Story Cycle Time (Iterative) The number of iterations it takes to complete a story. Copyright © 2007-2009 Dave Nicolette
  • Cycle Time (Lean) The average time between delivery of completed work items. Copyright © 2007-2009 Dave Nicolette
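The lean cycle-time definition above is the mean gap between consecutive deliveries. This illustrative Python sketch (dates invented) computes it:

```python
from datetime import date

# Hypothetical delivery dates of completed work items.
deliveries = [date(2009, 3, 2), date(2009, 3, 5),
              date(2009, 3, 11), date(2009, 3, 13)]

# Average number of days between consecutive deliveries.
gaps = [(b - a).days for a, b in zip(deliveries, deliveries[1:])]
avg_cycle_time = sum(gaps) / len(gaps)
print(round(avg_cycle_time, 1))  # prints 3.7
```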
  • Problematic Measures
    • Not relevant to agile methods:
    • Gantt chart
    • Percent complete
    • Time per team member per task
    • Actual time vs. estimated time
    Copyright © 2007-2009 Dave Nicolette
  • Using Trends to Spot Problems
  • Sample Scorecard: Value Delivery Risks Copyright © 2007-2009 Dave Nicolette
  • Sample Scorecard: Delivery Effectiveness (e.g., Story Cycle Time: 2) Copyright © 2007-2009 Dave Nicolette
  • Sample Scorecard: Software Quality
    • Customer satisfaction
    • Non-functional requirements
    • Testing metrics: coverage, tests passing, least-tested components
    • Static code analysis metrics: cyclomatic complexity, structural complexity, cyclic dependencies
    • Observational/calculated: defect density
    Copyright © 2007-2009 Dave Nicolette
  • Sample Scorecard: Continuous Improvement
    • Build frequency
    • Escaped defects
    • Use of TDD
    • Big-bang refactorings
    • Pairing time vs. solo time
    • Overtime
    • Issues from retrospectives
    Copyright © 2007-2009 Dave Nicolette
  • Agile Balanced Metrics (Forrester)
    • Operational Excellence: project management, productivity, organizational effectiveness, quality
    • User Orientation: user satisfaction, responsiveness to needs, service level performance, IT partnership
    • Business Value: business value of projects, alignment with strategy, synergies across business units
    • Future Orientation: development capability improvement, use of emerging processes and methodologies, skills for future needs
    Copyright © 2007-2009 Dave Nicolette
  • Agile Project Scorecard (Ross Pettit) Copyright © 2007-2009 Dave Nicolette
  • Sample Agile Dashboard (VersionOne)
  • Sample Agile Dashboard (Serena)
  • Thanks for your time! Contact information: Dave Nicolette Email [email_address] Blogs http://www.davenicolette.net/agile http://www.davenicolette.net/taosoft Workshops http://davenicolette.wikispaces.com Copyright © 2007-2009 Dave Nicolette