
How to Run a Post-Mortem (With Humans, Not Robots), Velocity 2013

Slides (with annotations) from a talk on post-mortems at Velocity CA, 2013.

This is an expanded version of my earlier slides, from the Lean Startup Conf.


  1. How To Run a Post-Mortem With Humans (Not Robots) Dan Milstein Hut 8 Labs @danmil
  2. Act I: What The Hell Is a Post-Mortem Anyways?
  3. Ahhhhhh! Something Very Bad Just Happened
  4. What Is a Post-Mortem Anyways? • Something you do when your company has badly screwed up • E.g. your CEO demos your cloud storage system to an early prospective customer, and, when he runs a search, it shows other customers’ data (I have done this, it was not awesome) • You get a bunch of people into a room and say: “How on earth did that happen? And how can we make sure it never, ever happens again?” • That’s a Post-Mortem • But, there’s a problem...
  5. Shameful Mistakes: Humans vs Robots
  6. Human Beings Will Eff It Up • Humans (unlike robots) feel this intense emotion called shame • Shame will suggest (strongly): “Slow Down, Stop Making So Many Mistakes” • Aka “Destroy your company by way of opportunity costs, immediately!” • Has the potential to be incredibly damaging to your startup • And I have some bad news...
  7. You Will Totally Experience Shame (I Still Do) F.A.E.
  8. This Emotional Experience Cannot Be Avoided • I’ve run c. 50 post-mortems, have studied failure... and I still have this emotional reaction • You will, too. And so will your team. • Much more strongly than you realize right now • This is the “Fundamental Attribution Error” (FAE), from psychology • FAE = humans vastly underestimate the power of the situation on our behavior
  9. Big Idea: Adopt an Economic, Not a Moral, Mindset $, FTW
  10. What Does That Mean? • Let me tell you a story...
  11. Parable: A Tale of Two Factories
  12. Two Factories • Both make widgets • Both are missing their monthly Widget Production goals by 10% • But for different reasons...
  13. Factory 1... Broken Machine
  14. When The Machine Breaks... • Belt slips off every once in a while • Ruins a bunch of widgets • Gotta replace it, drift a little behind plan • So... what questions do humans ask in this situation?
  15. Economic Mindset = Broken Machine • “How much is it costing us?” • “How much does it cost to repair?” • “Can we kludge a partial fix?” • “What are the risks if we delay a fix?”
  16. Note the Key Words • “Cost”, “Partial”, “Risk” • These are things you hear a lot in an economic discussion • Okay, meanwhile in Factory 2, also missing by 10%, for a different reason...
  17. Factory 2... One Employee Is an Axe Murderer
  18. After Every Axe Murdering... • Have to, like, hire a new guy, train him on the machine, takes forever • The questions we asked before are now somehow deeply wrong: • “What if we just cut down on the rate, so there’s less axe murdering?” • “Hey, we can train a pool of temps on all the machines; when someone gets killed, we’ll just swap a new guy in, bang, problem solved!” • “How much is it really costing us, anyways?” • These ideas seem obscene, not merely bad
  19. Moral Mindset = Axe Murderer “Search for villains, elevation of accusers, and mobilization of authority to mete out punishment” (Pinker, The Blank Slate)
  20. Moral Mindset, Key Words • “Villains”, “Accusers”, “Authority”, “Punishment” • I believe that most companies, in investigating outages, act much more like they’re looking for an axe murderer than like they’re trying to fix a broken machine
  21. Act II: What To Do in the Post-Mortem Room
  22. Challenge #1, As the Person Running Post-Mortems: Get the team out of the moral mindset. Note: this is not, in fact, easy.
  23. Why It’s Hard • Mindsets control how we interpret the world... • ...including what people say to us • So, when a team sitting there, fearing moral censure, hears you say “We’re not looking to blame anyone”, they just think you’re lying. How could you mean that, when the thing that happened was so terrible and wrong? • The deep trick (and this is the point of this whole presentation, frankly) is that you have to take advantage of the thing that separates humans from robots...
  24. Fundamental Tool: Make ‘Em Laugh
  25. Humor == Breaking Frames • That’s what humor actually is -- something that stretches or breaks the mental frame that people are using to interpret a situation • So, you use humor to break the frame, release people from the blame/fear/punishment of the moral mindset, and then refocus them on the economic challenges you’re facing • The humor is, IMHO, not a nice-to-have. It’s absolutely central. I’ve seen smart, caring leaders get this one wrong, and finish their post-mortems with a room full of tense, closed-up team members (and no good ideas on the table) • The talk has specific examples of this, but this is a central point
  26. Tip 1: Share Your Personal “Bad Things”
  27. Place The Bad Thing on a Continuum • The moral mindset is very absolutist: this bad thing is The Worst Thing Ever • I like to say “Okay, well, it’s pretty bad, but let’s compare it to some things” • Did we irretrievably lose customer data? (I’ve done that, not awesome) • Did we almost get our customer fired by her boss? (also not awesome) • Did we send hundreds of emails to everyone on our customer’s mailing list... but the emails were all question marks? For a customer who was in the proofreading business? (done that, very much not awesome) • People laugh, and then say “Okay, how bad was this, really?” Win.
  28. More Stories of Actual Failures (Just For Fun) • Did we break our allergies-to-medicines module, and risk having a doctor prescribe the wrong medication to someone? • Did our internet-connected home thermostat system have a server crash, causing all the thermostats to set the temp to the default... of 85 degrees? • Did our high-frequency trading program have flaws that led to our company losing 450 million dollars? (that is a tough one to beat, IMHO) • Collect your own! It’s fun!
  29. Tip 2: Mock Hindsight Bias To Its Face “Let’s plan for a future where we’re all as stupid as we are today.”
  30. How Hindsight Bias Shows Up in Post-Mortems • Someone says “Oh, yeah, I screwed that one up, I knew I had to run the deploy in that one order, and I just forgot. I’m really sorry, I won’t make that mistake again, totally my bad.” • You have to utterly reject this. It’s pure hindsight bias (easy to see errors after the fact, very difficult in the moment). • I say “It’s like we’re saying ‘I was stupid, this one time, and we’ll fix that problem by never being stupid again.’” • Hence: “planning for a future where we’re as stupid as we are today” • aka “We must create a system which is resilient to occasional bouts of really intense stupidity”
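A system “resilient to stupidity” usually means moving the thing someone had to remember out of their head and into a tool. As a minimal sketch of the forgotten-deploy-order fix above (the step commands and their ordering are invented for illustration):

```python
import subprocess

# Hypothetical deploy steps -- the ordering someone once had to "just
# remember" now lives in code, where forgetting it is impossible.
DEPLOY_STEPS = [
    ["./migrate_db.sh"],        # schema changes first...
    ["./push_app_servers.sh"],  # ...then the app code that needs them
    ["./flush_caches.sh"],      # ...then invalidate stale caches
]

def deploy(steps):
    """Run each step in order, stopping at the first failure.
    Returns the list of steps that completed successfully."""
    completed = []
    for step in steps:
        if subprocess.run(step).returncode != 0:
            break  # a half-finished deploy is worse than a halted one
        completed.append(step)
    return completed
```

Nothing here is clever, and that is the point: the order is now enforced by a machine, which is exactly the kind of small, cheap fix Act III argues for.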
  31. Tip 3: Relish the Absurdities of Your System
  32. You Will Find That Your Code Is a Mess • E.g. you’ve refactored, and rewritten in Python (or Node or something), and moved to the cloud, but this 5 Whys is making clear that your most important report is still run by a VisualCron job on a Windows server that never quite made it out of the office... and someone just tripped on the power cord • The team will feel ashamed; you have to give them license to relish the absurdity • I often point out: “There are two kinds of startups: the ones that achieve some modest traction on top of a pile of code of which they are vaguely ashamed... and the ones that go out of business. That’s it. No third kind.” • Also, sometimes it helps to just laugh: “It’s kind of amazing this works at all”
  33. Interlude: A Worked Example
  34. Three Axioms For Leading Post-Mortems • Everyone involved acted in good faith • Everyone involved is competent • We’re doing this to find improvements
  35. Axioms == Ground Truth From Which You Start • If you don’t start with these as givens... • ...you’ll find yourself seeing every incident as human error • Whereas, if you can convince/trick yourself into such beliefs... • ...you’ll find a thousand valuable improvements to make • Or, to put it another way:
  36. Human Error Is the Question, Not the Answer
  37. Restate the Problem To Include TTR We broke the db access code.
  38. Restate the Problem To Include TTR We pushed a deploy... which broke the db access code.
  39. Restate the Problem To Include TTR We pushed a deploy... which broke the db access code... and didn’t find out until customers complained.
  40. Restate the Problem To Include TTR We pushed a deploy... which broke the db access code... didn’t find out until customers complained... and couldn’t fix it for three hours.
  41. Redefining the Problem Is Very Valuable • People tend to focus on a single mistake • Broaden that to include the full cycle back to restored service • At what point was the triggering decision made? • How long did it take to find out something was wrong? • How long did it take to restore service?
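The restated problem above has measurable phases: trigger to detection, and detection to restoration. A small sketch of computing them from an incident timeline (the timestamps are invented; only the three-hour repair matches the slide’s story):

```python
from datetime import datetime

def incident_phases(timeline):
    """Split the full cycle into detect and repair phases, in minutes,
    given a dict mapping event name -> datetime."""
    def minutes(start, end):
        return (timeline[end] - timeline[start]).total_seconds() / 60
    return {
        "minutes_to_detect": minutes("deploy_pushed", "customers_complained"),
        "minutes_to_restore": minutes("customers_complained", "service_restored"),
    }

# Invented timeline: a deploy breaks db access, customers notice 90
# minutes later, and the fix lands three hours after that.
timeline = {
    "deploy_pushed":        datetime(2013, 6, 18, 14, 0),
    "customers_complained": datetime(2013, 6, 18, 15, 30),
    "service_restored":     datetime(2013, 6, 18, 18, 30),
}
```

Tracking these two numbers across incidents is what lets you ask the broadened questions on the slide above with data rather than memory.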
  42. “Broadest Fix” vs “Root Cause”
  43. Handling a Fork in the Road • Which is the Root Cause: the DB access bug or the monitoring failure? • Answer: don’t care about “root causes”. They don’t exist (multiple things conspire for failures to happen). Also, kind of moral/blame-ish. • Ask instead: if we made an incremental improvement in area A or area B, which would prevent the broadest class of problems going ahead? • Much better conversation. The answer here is clear: monitoring.
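Since the broadest fix in this worked example is monitoring, here is one minimal sketch of what an “incremental improvement” there could look like: a tiny check runner that would have reported the broken db access before customers did. All of the probe names are hypothetical.

```python
def run_checks(checks):
    """Run a dict of name -> zero-arg probe (truthy return on success;
    raising counts as failure). Returns sorted names of failing checks."""
    failing = []
    for name, probe in checks.items():
        try:
            ok = probe()
        except Exception:
            ok = False  # a crashing probe is a failing check
        if not ok:
            failing.append(name)
    return sorted(failing)

# Example probes (invented): each answers "is this still working?"
checks = {
    "db_access": lambda: True,  # e.g. run SELECT 1 against the db
    "search":    lambda: True,  # e.g. search for a known record
}
```

Run every minute with an alert on any non-empty result, this turns “customers told us” into “a robot told us”, shrinking the detect phase of every future incident, not just this one.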
  44. Act III: Corrective Actions / Remediations / Fixes
  45. Incrementalism Or You’re Fired
  46. Require Small Steps From Your Team • The team will tell you they have no option but to do Some Huge Thing • You have to totally reject this, and push for a small step • e.g. “What’s the simplest, dumbest thing that will make it slightly better?” • After some hemming and hawing, great, cheap ideas emerge • Might be: small steps towards the Huge Thing • Or: installing data collection to prove the Huge Thing is necessary
  47. “Automation” vs “Tools”
  48. “Automation” => Humans Cause Your Problems • Strong • Silent • Clumsy • Difficult to Direct (David Woods, “Decomposing Automation: Apparent Simplicity, Real Complexity”)
  49. Automation Is Written By People Who Don’t Do the Job
  50. “Tooling” => Humans Solve Your Problems • How do the humans currently do their jobs? • What tools do they use? • When you give them a new tool, do they actually use it? • How badly did you just screw up their jobs? • YOU MUST ITERATE
  51. Dan Mongers Some Fear
  52. Here’s What’s Happening, Right Now • Your systems are experiencing constant, small-scale failures... invisibly • Your team is struggling to keep your systems running... but they’re so habituated to it, they don’t even realize that’s true • Your smart people are spending their smart cycles just trying to work around the complexity in your system • The business side is making plans which aren’t supported by your infrastructure • Customers are getting ready to surprise you, and it won’t be fun
  53. Do This • Elect a Post-Mortem Boss (Man|Lady) • Look for a Goldilocks incident • Expect awkwardness • THERE MUST BE FIXES • Incrementally improve the incremental improvements
  54. Read This • How Complex Systems Fail, Richard Cook (SOOOOO GOOOD) • How the Mind Works, Steven Pinker (moral instinct, much other awesome) • Thinking Fast and Slow, Daniel Kahneman • The Field Guide to Understanding Human Error, Sidney Dekker • Complications and Better, Atul Gawande (marvelous narratives) • Kitchen Soap, blog by John Allspaw
  55. Photo Credits, I • “Wonderworks Upside Down Building”, by Andy Leonard, http://www.flickr.com/photos/rover75/3901166997/ • “Robot de Martillo”, by Luis Perez, http://www.flickr.com/photos/pe5pe/2454661748/ • “Helios-Factory floor”, http://commons.wikimedia.org/wiki/File:Helioshall2.jpg • “old machine”, by Jun Aoyama, http://www.flickr.com/photos/jam343/1730140/ • “Axe Marks The Spot”, by Alan Levine, http://www.flickr.com/photos/cogdog/4461665810/ • “Failboat Has Arrived”, http://www.rotskyinstitute.com/rotsky/wp-content/uploads/2008/02/failboat2.jpg
  56. Photo Credits, II • “14 plugs but only 6 sockets”, by Jason Rogers, http://www.flickr.com/photos/restlessglobetrotter/2661016046/ • “shame in scranton”, by Shira Golding Evergreen, http://www.flickr.com/photos/boojee/3613772785/ • “tiny dollhouse steps”, by Yi-Tao “Timo” Lee, http://www.flickr.com/photos/timojazz/6235519218/ • “Computers can be stupid”, by Brent Moore, http://www.flickr.com/photos/brent_nashville/2634912345/ • “Robot Uprising”, http://gordonandthewhale.com/wp-content/uploads/2010/10/How-To-Survive-a-Robot-Uprising.jpeg • “Shark”, by Steve Garner, http://www.flickr.com/photos/22032337@N02/8314569214/
  57. Thanks... Dan Milstein Hut 8 Labs @danmil
