Predictability: No Magic Required - LeanKit Webinar (June 2017)

Knowledge work tends to be variable in nature and involves cross-functional teams collaborating on each step of the process. This makes project delivery hard to predict as work may be held up due to unforeseen blockers, hand-off delays, or approval cycles taking longer than expected.

In this webinar, I’ll provide guidance around choices you can make that impact your ability to meet your commitments with confidence.

You'll learn how to predict the cycle time of work before it's finished. I will also explain the basics of queuing theory, and the relationship between queue size, capacity utilization, and cycle times. Armed with this insight, you'll be able to:

* Monitor your workflow for leading predictability indicators.
* Make informed choices to maximize your value throughput.
* Forecast delivery with better accuracy using LeanKit data and analytics.

There's no magic required in achieving predictable delivery. The secret is using past performance data to predict future behavior.

Predictability: No Magic Required - LeanKit Webinar (June 2017)

  1. Julia Wester Executive Consultant & Manager, Customer Education EverydayKanban.com @everydaykanban learn@leankit.com Predictability No Magic Required
  2. Adjective Expected, especially on the basis of previous or known behavior Predictable [pri-dik-tuh-buhl] @everydaykanban USUALLY ________!
  3. You can be predictably bad… @everydaykanban USUALLY HORRIBLE!
  4. @everydaykanban USUALLY GREAT! Or, you can be predictably good…
  5. Predictability is driven by the range of outcomes @everydaykanban We deliver between 1 and 126 days
  6. Smaller ranges mean more predictability @everydaykanban 10 to 50 days is more predictable
  7. We usually only care about the upper limit @everydaykanban We deliver in <= 50 days
  8. “In our zeal to improve the reliability of software development, we have institutionalized practices that decrease, rather than increase, the predictability of outcomes.” @everydaykanban Mary Poppendieck Lean Development & the Predictability Paradox (2003)
  9. Ex: Over-Focus on Capacity Utilization @everydaykanban http://www.peterkretzman.com/ http://www.we-care.com
  10. Predict the # of circuits and operators needed to avoid blocked calls given: random arrivals, random durations @everydaykanban This problem has existed for a while…
  11. Queueing Theory was devised to help: the mathematical study of waiting lines, or queues. Can quantify relationships between queue size, capacity utilization and cycle times @everydaykanban (Graph: queue size vs. capacity utilization, for a simple M/M/1/∞ queue)
  12. Buildup starts well before 99.9% utilization – even for a simple M/M/1/∞ queue (a simple system that gets one request at a time) with random arrival and service times and a single ‘server’ @everydaykanban https://less.works/less/principles/queueing_theory.html
  13. What about queues that aren’t so simple? Big Batches with random arrivals and service times: M[x]/M/1/∞ queue @everydaykanban https://less.works/less/principles/queueing_theory.html
      Request size           Utilization   Cycle Time
      Single item requests   50%           2x time in service
      Single item requests   90%           10x time in service
      Big Batch requests     50%           5x time in service
      Big Batch requests     90%           22x time in service
  14. @everydaykanban How this affects planning accuracy https://less.works/less/principles/queueing_theory.html What we assume What we know What is likely, given probabilities 80 weeks 100 weeks 5
  15. @everydaykanban Queue size is a leading indicator Which lanes are going faster?
  16. “100% of developers [that I surveyed] measured cycle time. 2% measured queues.” @everydaykanban Donald Reinertsen The Principles of Product Development Flow (2009)
  17. If you only do one thing… make queues visible @everydaykanban
  18. @everydaykanban Manage queues by focusing on improving the flow of the work
  19. Flow is about leveling out periods of inactivity and creating a smooth, consistent delivery of value @everydaykanban
  20. A focus on keeping the worker busy Image: Todd A. Clarke - http://visualonepagers.com/
  21. What it looks like to focus on the worker @everydaykanban https://www.targetprocess.com https://www.industriallogic.com/
  22. A focus on the flow of the actual work Image: Todd A. Clarke - http://visualonepagers.com/
  23. What it looks like to focus on the work @everydaykanban
  24. “Business units that embraced [process/queue management] reduced their average [product] development times by 30% to 50%.” @everydaykanban OnPoint - HBR.org Getting the most out of your product development process (2003)
  25. CHOICES YOU MAKE about managing the queue size and flow can make or break your ability to be predictable. @everydaykanban
  26. Choice #1: How you assign work @everydaykanban – Pre-assign work: more variation in queue times. Workers pull work: slower, but consistent; less variation in queue times; more predictable. (Diagram legend: normal vs. stopped)
  27. Choice #2: The order that you process work @everydaykanban – FIFO: less variation in queue times; more predictable (feasible?). Non-FIFO: more variation in queue times.
  28. Choice #3: The amount of work you batch @everydaykanban – Once a week: less variation. Once a month: more variation.
  29. @everydaykanban Choice #4: No. of dependencies you create – What are the odds you’ll finish on time with ‘n’ dependencies? 1 in 2^n: 1 in 2^2 = 1 in 4; 1 in 2^4 = 1 in 16; 1 in 2^6 = 1 in 64 (Troy Magennis)
  30. MONITORING YOUR PREDICTABILITY INDICATORS @everydaykanban Identify and monitor predictability indicators using LeanKit data and analytics
  31. @everydaykanban Work-In-Process (hidden queues?) Queued work
  32. @everydaykanban
  33. Use LeanKit data to forecast delivery dates @everydaykanban with tools from Troy Magennis & FocusedObjective.com Plug in LeanKit data http://focusedobjective.com/forecast_agile_project_spreadsheet/
  34. @everydaykanban Great references to check out

Editor's Notes

  • Hi, I’m Julia Wester and my topic today is “Predictability: No Magic Required”

    Generally, we don’t think we, individually, have a lot of control over predictability. We say there are too many factors outside of our control. However, each of us, regardless of our title, makes decisions that impact our ability to deliver predictably. So, it's important for us to fully understand the impact of the choices we make and to realize that we have control, without performing a single magic trick.

    Let's start by defining predictable. It means "Expected, especially on the basis of previous or known behavior." This means we've trained people to have certain expectations of us. We have been consistently one way or another.

  • You can be predictably bad... There's usually at least one restaurant or other place we don't want to go to anymore because their food or service is usually really bad. This is what we want to avoid for our businesses!
  • On the flip side, you can be predictably good. We want to be working for the business that everyone wants to do business with because we deliver what we say we will, when we say we will, and we treat our customers well while we do it. We have to do this more often than not in order to be considered predictable in that way.
  • If you aren't consistently delivering in a certain range of expectations, you're unpredictable.
    Therefore, to become more predictable, we need to focus on decreasing the range of our outcomes.

    Let's take speed as an example

    If I were concerned about my predictability in delivery times and I looked at this chart (which is LeanKit's speed report), I would see that my team sometimes delivers work the same day and sometimes takes 126 days to deliver work. Now, if these points all represent a similar class of work, perhaps the same card type on a LeanKit board, then it would be fair to call my team unpredictable.
  • What we would then want to do is bring in that range. If we were able to take somewhere between 10 and 50 days to deliver the same type of work, we'd have become more predictable.

    Notice that I brought that bottom number up! That's because I want to show that predictability, by definition, isn't the same as being fastest. Instead, it's about consistency. So, it's worth being clear about what you really want when you say you want to be predictable.
  • Generally, when we say we want to be more predictable with our speed, we really are concerning ourselves with the upper limit of how long it takes us to deliver work. We'd be happier going from a 1-126 range to a 1-50 range instead of settling for 10-50.

    So, when we strive for predictability, in my experience, we're striving for being consistently faster or consistently better rather than just being consistent in general. This is important to keep in mind.

  • Unfortunately, despite our desire for greatness, we do things that have unintended consequences. Mary Poppendieck said it well in her paper "Lean Development & the Predictability Paradox" when she said "In our zeal to improve the reliability of software development, we have institutionalized practices that decrease, rather than increase, the predictability of outcomes."

    Why do we do this? Well, usually it's not on purpose! We know a few standard tools and we apply them. When they don't work, we double down on them, thinking we just needed to be better at applying them.
  • One big example of this is the way we often approach capacity utilization & planning.

    I want to make sure I'm clear, before I lose the attention of half the audience: capacity planning is not evil. Under-utilization of people’s time can be dreadful. Bored employees who aren't being appropriately utilized are unhappy employees. Not to mention, it's fiscally irresponsible.

    But, when we do capacity planning, we generally aren't trying to avoid under-utilizing people. That's a really rare scenario from where I sit. Usually, we are trying to figure out how to cram in as much work as possible - for lots of reasons... pressure on management, the thought that you have to be busy to be productive. Is this you? Ask yourself if you've ever looked at a Gantt chart and wondered if there is any small chance you could move a few pieces around and fit in that one more thing. Then you'll have your answer.

    This practice has serious consequences. When it starts to feel like you're playing a tetris game with human pieces, you're doing more harm than good. When there's no unallocated space in the capacity planning chart, you haven't finished your quest... you've put the team on the path to unpredictability. Essentially we harm the organization in our quest to help it, just like Mary wrote. We cram our people and systems so full of work that everything grinds to a halt, which is extremely fiscally irresponsible. Unfortunately, unlike underutilization, this scenario isn't rare at all.
  • Fortunately, we can learn from the experience of others who have searched for paths to predictability.

    Agner Krarup Erlang, while working for the Copenhagen Telephone Company, was presented with the classic problem of determining how many circuits were needed to provide an acceptable telephone service. His thinking went further by finding how many telephone operators were needed to handle a given volume of calls.

    It wasn’t a simple question. They had calls coming in at random arrival rates and the calls lasted for random durations. They had to make sense out of chaos. Our work isn’t all that different in those ways. We have work coming in at random intervals, also in random numbers. We can get a request every five minutes, or five at once. Our work requests aren’t all the same so their durations aren’t the same either. As we saw, some work we’ll finish in 1 day, some in 50 days. Yet, we need to be predictable as well.

    What did they do?
  • Erlang ended up figuring out a way to accurately estimate the probability that a call would be blocked at different levels of capacity utilization.

    Their solution was called queueing theory. It is the mathematical study of waiting lines, or queues. For us, queues represent the amount of waiting work in our system. Erlang’s queueing theory quantifies the relationship between three key things:
    Queue size
    Capacity utilization, and
    Cycle times

    The relationship, as you can see in the graph, isn’t linear. As you increase capacity utilization, queue size (and thus cycle time) increases exponentially. This holds even when you have a pretty simple queue where one thing comes in at a time (which we all know we wish we had). When our queue sizes go up, we have work sitting idle, which can be costly. This graph shows the problem with trying to maximize capacity utilization. You now have proof, through queueing theory, that utilizing too much capacity causes work to pile up in queues, which then, in turn, causes a significant increase in cycle times.

  • Delay and overload do not start at 99.9% utilization—it is not the case that everything goes fast and smooth on the highway until you squeeze in that last car that jams it all up. Rather, things build up and cause a slowing trend well before capacity is reached – starting around 50% capacity.

    I like this chart because it gives us a bit more explanation of what we’re seeing. The term time in service in this chart refers to the active working time it takes to get something done. It is a subset of cycle time, which is the service time plus all the wait time in queues.

    As capacity utilization grows, the time each item spends in a queue will grow, while the active working time (or service time) stays the same. At 50% utilization, the cycle time averages 2 times the time in service (or active work time).
    At 90%, the cycle time jumps to 10 times the time in service.

    So, to go back to Mary Poppendieck’s statement, we utilize concepts like capacity management thinking it will improve our ability to deliver. But our lack of understanding of the impacts of our choices results in causing the very thing we’re trying to avoid: gridlock and extremely long delivery times. And, with that, we have no space to address emergent business risks without delaying things even further.

    And, so far, this is just for what can be considered a simple queue – one with single items arriving at random intervals, with random durations processed by a single ‘server’ which is a queueing term for person or thing processing the requests.
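
    For the curious, those 2x and 10x figures fall out of the textbook result for this simple M/M/1 queue: average cycle time is roughly the hands-on service time divided by (1 - utilization). A quick sketch of that relationship (my own Python illustration, not anything from LeanKit):

      # Average cycle time vs. capacity utilization for a simple M/M/1 queue:
      # cycle_time ~ service_time / (1 - utilization), so the multiplier over
      # hands-on time is 1 / (1 - utilization).
      def cycle_time_multiplier(utilization):
          return 1 / (1 - utilization)

      for u in (0.5, 0.7, 0.9, 0.95, 0.99):
          print(f"{u:.0%} utilization -> cycle time ~ {cycle_time_multiplier(u):.1f}x the hands-on time")
      # 50% -> 2.0x, 90% -> 10.0x, 99% -> 100.0x: the curve blows up as utilization nears 100%.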

  • When you allow for more complicated situations, like the ones we work with, in which we often receive batches of work at the same time (think projects, large initiatives, etc.) combined with the simpler day to day items, the downsides of utilizing more capacity dramatically increase.

    When you drop big batches of work into a queue at once, queue sizes grow sooner and the cycle times for varying levels of capacity utilization are much higher because it can’t process the load as fast. This is similar to what happens when you get caught up in the backlog of a traffic jam caused by an accident. Once the queue of cars starts moving again, all the cars in the gridlock don’t move at once. It can take quite a while for the cars at the back of the queue to be able to move. The same happens with our work. Even at a low utilization, a bulk arrival has this effect. But, when we are at a high capacity utilization rate, the effect is exponential.

    If we get a big batch at 50% capacity utilization, we see a cycle time of approx 5x time in service as compared to a single item arrival.
    When we look at 90% capacity utilization, that number jumps from 10x time in service to a whopping 22x that amount.

    If you’re thinking, “That sounds good, but what does this all mean to me in my day-to-day life?”, let’s look at what it means for the delivery date of a project…
  • So, you get a single request and you estimate that it will take two weeks of hands-on service time to complete it given 50% utilization. We know that cycle time for those conditions is about 2x service time or 4 weeks and we give that estimate for completion to our stakeholders.

    Next, we get a big batch of 10 similar things. What we want to do is just multiply the 2 weeks of service time by the number of things (10) which would give us 20 weeks of service time. And, we apply that 2x service time formula and give an estimate of 40 weeks to complete all 10 things. You feel good, your stakeholders are somewhat happy. But, we don’t realize the huge problem lurking in the shadows.

    The accident-related gridlock teaches us that a queue doesn’t clear in a linear fashion. The more that queues up, the exponentially longer the queue takes to clear. Our ratio of cycle time to service time jumps. It's no longer 2x. Now, it's closer to 5x. So, while the service time for 10 items is still 20 weeks, we multiply that by 5 and get a delivery date of 100 weeks in the future. No one wants to believe that though… it seems crazy. Where is the difference? It's in the queue, or wait, times.
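
    Spelled out as a tiny calculation (the 2x and 5x multipliers come from the less.works table shown earlier; the two-week figure is just this example's):

      service_time_per_item = 2    # weeks of hands-on work per item (the example's estimate)
      items = 10
      total_service = service_time_per_item * items             # 20 weeks of actual work

      single_item_multiplier = 2   # single-item arrivals at ~50% utilization
      big_batch_multiplier = 5     # big-batch arrivals at ~50% utilization

      print("what we promise:", total_service * single_item_multiplier, "weeks")       # 40 weeks
      print("what queueing predicts:", total_service * big_batch_multiplier, "weeks")  # 100 weeks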
  • So, what do we do now? We focus on monitoring and managing our queues. That is where the big advantage is. We need to reduce that queue time and we reduce queue time by reducing queue size.

    The good thing is that it is much easier to measure queue size than trying to estimate capacity utilization. By knowing our queue size we can calculate, instead of estimate, capacity utilization. This allows us to focus our attention on critical things like conversations about the interdependencies of our smaller and faster concurrent loads of work. Of course, we’ll still keep on tracking cycle times just to make sure our queue management has a check that can help us guide future decisions. But, queue sizes become our much-sought-after leading indicator.
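
    One more piece of supporting math, not spelled out in the talk: queue size and cycle time are linked by the standard flow relationship known as Little's Law, average cycle time = average work-in-process / average throughput. A rough sketch with made-up numbers:

      # Little's Law: avg cycle time = avg WIP / avg throughput.
      # The numbers below are illustrative only.
      wip_items = 30              # cards in progress or waiting on the board
      throughput_per_week = 5     # cards finished per week

      print(wip_items / throughput_per_week, "weeks of expected average cycle time")   # 6.0
      # Growing WIP and queues today show up as longer cycle times later,
      # which is why queue size works as a leading indicator.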

  • In Don Reinertsen’s book he talks about how he speaks at many conferences. I’ve been lucky enough to hear some of these talks and to take one of his classes. He says that he surveys the audience and asks how many developers measure cycle time. Usually everyone raises their hand. But, when he asks how many measure queues, almost no one raises their hand. Given what we know about the impact of queues, this is a significant problem in how we run our businesses.

    So, what should we do?
  • The simplest step we can take to start managing our queues is to make them visible using a Kanban board. In case you don’t know it, LeanKit is a web-based tool that lets you build Kanban boards. A Kanban board gives us a way to visually represent work that was previously invisible. This is very important, because without this, we can’t see how much work is queued up in our system. When your Kanban board is properly designed, it also shows the specific state work is in. This allows us to clearly distinguish between items that are actively being worked on and items that are sitting in a queue.

    When we can see our queues we can manage them.
  • We manage work by considering and improving something called flow.

    Before we dive into Flow and what that means, let me share Don’s interpretation of Aesop’s fable ”The Tortoise and the Hare”.

    Don says “The hare excelled at adding value quickly, but still lost the race due to periods of inactivity. Eliminating or reducing the periods of inactivity, such as time spent in queues, can be far more important than speeding up activities.”
  • Flow is about leveling out periods of inactivity. We want to be the steady tortoise who actually delivers things rather than the inconsistent hare who goes fast for a bit and then takes long breaks.

    This is Lean is a really short but good book that explains how to make a system that optimizes for flow and how that differs from a system that optimizes for maximum capacity utilization.
  • An example they use in the book This is Lean is how we experience healthcare systems. They take two women and describe their experience trying to get to diagnosis for the same type of problem. The first woman goes through the traditional American healthcare system in which medical professionals are fully booked and sometimes overbooked to ensure no time elapses in which they are idle. They function at 100% capacity. This is good for billing, but bad for the person trying to get value out of the system. In a sense, the patient is like one of our projects. She goes to station 1, the primary care physician. It takes some time to get an appointment. When she gets to it, she sees the doctor and they send her to the lab for tests. But the lab is full, so there’s a wait to get there. This process continues a few more times and it takes her over 40 days to get a diagnosis. This is not a system focused on the flow of value (aka the patients). This is one focused on the workers and keeping them busy.

    The problem is that we create these systems in our organizations, but we don’t get to bill after each step. We don’t get paid until the entire thing is complete and the value has been delivered to the end customer.
  • We know we’re focusing on the worker when we are making visual systems that tell us more about who’s doing what than what is being done and its progress.
  • In This is Lean, that system is contrasted with the experience of a different woman going through a healthcare system that is a bit like a cross-functional team working together to optimize for quick patient outcomes. It takes her a bit longer to get an appointment at the clinic – maybe 2 weeks, instead of 2 days. But she gets to diagnosis without moving through functional silos and uncoordinated dependencies.

    The staff at this clinic aren’t concerned about being busy 100% of the time. In fact, they leave time unbooked to ensure that if someone comes into the clinic, every practitioner they might need is able to see them. This slack time ensures a smoother flow for the patient and a quicker time to diagnosis (or value delivery). They don’t worry about what’s going on in that slack time; usually it's being used in a very productive, but lower-priority, way. But even if it weren't, their goal would still be accomplished – getting work all the way through the system to delivery of value. That is their highest mission.
  • If we want to start to focus on flow, we have to create visualizations that focus on the work, not the worker. That might look something like this. Cards on the board represent work. The board itself represents the states the work moves through as it's being processed. We even visualize wait time in between steps so we can really know how long our work spends waiting in queues.
  • HBR has an article that states that business units that embrace this type of process or queue management often reduce their average product development times by 30% to 50%. I think that’s worth investigating.
  • So, we’ve already talked about some major points here:

    The impact of high capacity utilization
    How queue sizes are the ideal leading indicator
    That we should start with visualizing our queues
    And follow up by managing flow.

    But, there are other choices you might make at work that can also make or break your ability to be more predictable and I want to share some of those with you.
  • The first choice is how we approach who is going to do the work. In queueing-theory speak, the approach we choose is called our queueing discipline. The first option is one queue per server, and that’s what we have when we pre-assign a lot of work to people before they can start it. If I go to your backlog and I see the work already assigned to people, you are implementing a one-queue-per-server approach. A commonplace example of this is your standard checkout lane at the supermarket.

    The second option is a single queue, multiple server approach. This is like the check-in line at the airport ticket counter or the self-checkout kiosks at the supermarket. There’s one queue and when someone finishes their work and has an opening, they pull the next item (or person) from the front of the queue.

    Let’s see what happens to the queued work in each approach when a server is blocked. In the pre-assign work approach, everything in line for that person just stops while all of the other work assigned to others continues moving at the pre-existing pace. This creates a wider range of experiences. If we remember back to the definition of predictability, we can see that we are moving away from it rather than towards it.

    In the approach where you pull work when you have an opening, if a server is blocked, the processing for the entire queue slows down. It is a consistent impact to everyone in the queue. Less variation means it’s a more predictable approach.
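
    A tiny deterministic sketch (my own example, not from the webinar) of why pulling from a single shared queue narrows the spread of outcomes when one person gets blocked:

      # 3 workers, 9 items, 1 day of work per item. Worker 0 is blocked for
      # the first 4 days. Compare pre-assigned queues vs. one shared queue.
      def pre_assigned(items=9, workers=3, blocked_days=4):
          per_worker = items // workers
          finish = []
          for w in range(workers):
              start = blocked_days if w == 0 else 0          # only worker 0 waits out the block
              finish += [start + day + 1 for day in range(per_worker)]
          return sorted(finish)

      def shared_queue(items=9, workers=3, blocked_days=4):
          free_at = [blocked_days] + [0] * (workers - 1)     # worker 0 joins late
          finish = []
          for _ in range(items):
              w = free_at.index(min(free_at))                # whoever is free first pulls next
              free_at[w] += 1                                # one day of work
              finish.append(free_at[w])
          return sorted(finish)

      print("pre-assigned:", pre_assigned())   # [1, 1, 2, 2, 3, 3, 5, 6, 7] -- wide spread
      print("shared queue:", shared_queue())   # [1, 1, 2, 2, 3, 3, 4, 4, 5] -- narrower spread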
  • The second choice is the order in which we decide to process the work that arrives in our queues.

    One option is First In, First Out (or FIFO). This means that we start work in the order it comes in and we finish it in that order too.
    The other option is really anything else. There are many variations but all other approaches pull certain items out and give them preferential treatment. The cycle times for the other items are being artificially inflated. This difference in treatment means we are introducing more variation in queue times and thus a FIFO queue is more predictable.

    However, this choice is a good example of one that has larger considerations. We have to answer the question, “What is our key goal?” Predictability may not trump the fact that some work is more economically important than other work. We don’t want to blindly chase predictability at all costs. Everything is a balance. We just need to understand the impact of the choices we make.
  • The third choice is about batch size. We have already discussed at length the impact of big batches of requests arriving. But we also make choices about how to batch up items when we deliver them.

    When we deliver less frequently, say once a month versus once a week, we artificially inflate the cycle time of certain work. It sits there aging while it waits to be delivered. We are increasing the range of cycle times for our work. Work completed close to the monthly delivery date waits much less than work completed right after a delivery date.

    When we deliver more frequently, we introduce less artificial inflation, meaning less variation in cycle times; thus, more frequent delivery allows us to be more predictable.

    Again, this is a choice that has to be balanced against the cost of making the actual delivery.
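
    As a rough sketch of that trade-off (my own numbers, not from the talk): if work finishes at a steady trickle but only ships on a fixed cadence, each item picks up, on average, about half the release interval in extra waiting:

      # Extra queue time added purely by the delivery cadence, assuming work
      # finishes evenly between releases.
      for label, interval_days in (("weekly", 7), ("monthly", 30)):
          average_extra = interval_days / 2    # a typical item waits about half an interval
          worst_case = interval_days           # an item finished right after a release waits the longest
          print(f"{label} delivery: ~{average_extra:.1f} extra days on average, up to {worst_case} days")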
  • The last choice I want to talk about today is the number of dependencies we create, or allow. In a highly matrixed organization, work has to move through many separate functional business units before it can be complete. Each handoff is a dependency, and each dependency we introduce halves the chance of on-time delivery.

    If the formula for calculating the impact of dependencies is that there’s a 1 in 2^n chance of something working out the way you want, we can easily calculate what that means given our specific number of dependencies.

    If we have 2 dependencies, that means there’s a 1 in 4 chance, because 2 x 2 = 4.
    If we have 4 dependencies, it jumps to a 1 in 16 chance, because 2 x 2 x 2 x 2 = 16.
    If we have 6, it jumps to 1 in 64, and you get the picture.

    The chart here on the left shows a visual representation of the 4-dependency situation. You can see that each row describes a possible outcome. Green means that person delivered on time and red means they didn’t. Only 1 row out of 16 total possible combinations works out as we want it to. So, we can see how this formula really plays out.

    So, if we take that into account, the fewer dependencies we have, the more predictable we will be.
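
    The 1-in-2^n arithmetic, spelled out (this uses the slide's simplifying assumption that each dependency is an independent 50/50 coin flip):

      # Chance that every one of n independent, 50/50 dependencies lands on time.
      for n in (2, 4, 6):
          print(f"{n} dependencies -> 1 in {2 ** n} chance of on-time delivery")
      # 2 -> 1 in 4, 4 -> 1 in 16, 6 -> 1 in 64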
  • Now that we know a lot of the choices we can make to foster better predictability, let’s move on to monitoring some of the things we’ve been talking about using LeanKit Data and Analytics (or your version of that if you’re in another tool or working with physical boards).
  • First, we monitor queue sizes. (replace with CFD). You can do that in our Flow report or in the one I have pictured here, which is our efficiency report. This tells us how much work is in an active lane on your board versus how much is in a ready (or queue) lane in your board. We can monitor this and see growing queues and address the issue before they can cause a major problem.
  • Next we move to our speed report, which tells us our throughput and our cycle times for work that starts and finishes within a certain time period. This shows us our spread of cycle times. We can look at this alongside our queue size report and see the impact of managing queues on our cycle times.
  • Now, if you want to move into forecasting dates on specific work, we do have some upcoming additions to our LeanKit product. But there are some tools freely available from Troy Magennis and FocusedObjective.com that you can plug a very small amount of LeanKit data into and get startlingly accurate probabilities for when a body of work will be complete. All you need is a start date for that body of work (the accuracy of this piece of information is the most critical one in the tool), combined with the number of stories you need to complete and estimates or historical data for story throughput per week, and it runs a Monte Carlo simulation for you to generate these probabilities.

    The great part about this tool is that it has an area to plug in risks, the likelihood of them occurring, and the estimated increase in work they will cause, and you can see how that will affect your forecasted delivery dates. With tools like this, it's never been easier to use your LeanKit data to give more accurate forecasts to your stakeholders.
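
    To give a feel for what such a forecast does under the hood, here is a minimal Monte Carlo sketch in the same spirit (my own illustration, not Troy Magennis's spreadsheet or LeanKit code; the throughput samples and story count are made up):

      import random

      weekly_throughput = [3, 5, 2, 4, 6, 3, 4]   # stories finished in recent weeks (e.g. from the speed report)
      stories_remaining = 40
      trials = 10_000

      weeks_needed = []
      for _ in range(trials):
          remaining, weeks = stories_remaining, 0
          while remaining > 0:
              remaining -= random.choice(weekly_throughput)   # replay a randomly chosen past week
              weeks += 1
          weeks_needed.append(weeks)

      weeks_needed.sort()
      for pct in (50, 85, 95):
          print(f"{pct}% of simulated futures finish within {weeks_needed[trials * pct // 100 - 1]} weeks")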
  • I have used a few amazing references in this talk. They are all very good books for learning more about how to be predictable. They are:

    (read pictures).

    I highly recommend them all.

    So, this is where I end my planned presentation, and thank you for listening. Before we leave, I’m going to hand the reins back over to Karen to see if we have any related questions I might be able to answer on the spot.
  • You have a lot of control over predictability.
    Start with measuring your current queue size and cycle time range.
    Make choices that keep queues small and cycle time ranges narrow.
    Monitor queue sizes and cycle times continually to anticipate and correct negative patterns.
