Building A Successful
Organization By
Mastering Failure
John Goulah (@johngoulah)
Etsy
Marketplace
• $1.93B Annual GMS 2014
• 1.4M active sellers
• 20M+ active buyers
• 30% international GMS
• 57%+ mobile visits
Infrastructure
• over 5500 MySQL databases
• 750K graphite metrics/min
• 1.3GB logs written/min
• 50M - 75M gearman jobs / day
• 30-50 deploys / day
Company
• Headquartered in Brooklyn
• Over 700 employees
• 7 offices around the world
• 80+ dogs / 80+ cats
Values
Learning Org
a company that facilitates the learning of its members and
continuously transforms itself
Five Disciplines
Systems Thinking
process of understanding how people, structure, and
processes influence one another within a larger system
Personal Mastery
an individual holds great importance in a learning organization
Mental Models
the assumptions held by individuals and organizations
Shared Vision
creates a common identity that provides focus and energy for
learning
Team Learning
the problem solving capacity of the organization is improved
through better access to knowledge and expertise
Learning About Failure
• architecture reviews
• operability reviews
• blameless post mortems
failure and success
come from the same
source
context
can study the system
at any time
inflection points
• architecture reviews
• early feedback and discussion
• operability reviews
• held before launching
• blameless post mortems
• held after a failure
Architecture Reviews
Architecture Reviews
understand the costs and benefits of a proposed solution, and
discuss alternatives
Etsy Tech Axioms
• we use a small number of well known tools
• all technology decisions come with trade offs
• with new technology, many of those trade offs are
unknown
• we’re growing. things change
with new technology
many of those tradeoffs are unknown
Departures
a departure is when new technologies or patterns are
introduced that deviate from the current known methods of
operating the system and maintaining the software
How do I know I need an
architecture review?
when there is a perceived departure from current technology
choices or patterns
How early do you hold them?
early enough to be able to bail out or make major course
corrections
Who should come?
• the people presenting the change
• key stakeholders (sr. engineers, or arch review working
group)
• everyone else that wants to learn about the proposed
changes to the system
Architecture Review
Meeting Format
Preparation
• a proposal is written in a shared document and circulated
• comments are added, discussed, and potentially resolved in
advance
• initial questions for the meeting are collected in a tool such
as Google Moderator
Some General Questions
• Do we understand the costs of this departure?
• Have we asked hard questions about trade-offs?
• What will this prohibit us from doing in the future?
Some General Questions (cont)
• Are we impacting visibility, measurability, debuggability and
other operability concerns?
• Are we impacting testability, security, translatability,
performance and other product quality concerns?
• Does it make sense?
The Arch Review
• proposal is presented to the group
• discuss questions and concerns
• decide if we are moving forward or need further discussion
you're saying my
project might not
move forward?
Why might this end a project?
• we learned through this discussion that an alternative is
better
• we find goals overlap with other projects that are in
progress
• we discover that it isn't worth the costs now that we have a
better idea what they are
At the end we should have
• detailed notes from the conversation
• agreement on tricky components, with those documented
• a compilation of learnings and questions
• a decision of whether to keep going with the project, stop
and rethink, or gather more information
Operability
Reviews
Operability Reviews
understand how the system could break, how we will know,
and how we will react
When do we do operability
reviews?
• after architecture reviews in the product lifecycle, generally
right before launch
• when we need to gain increased confidence for launch due
to the technology, product, or communication choices
being risky
• if there's a chance you'd surprise teams that operate the
software
Who comes to the operability
review?
representatives from:
• Product
• Development
• Operations
• Community/Support
• QA
Some Questions
• Has the feature been tested enough to deploy to
production?
• Does everyone know when it will go live, and who will push
the feature?
• Is there communication about the feature ready to go out
with the feature?
• Is it possible to turn up this feature on a percentage basis,
dark launch, or gameday it?
Some Questions (cont)
• Does the launch involve any new production infrastructure?
• If so, are those pieces in monitoring or metrics collection?
• If so, is there a deployment pipeline in place?
• If so, is there a development environment set up to make
it work in dev?
• If so, are there tests that can be and are run on CI?
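One of the questions above, turning a feature up on a percentage basis, can be sketched as a deterministic bucketing check. This is a hypothetical illustration of the idea, not Etsy's actual feature-flag code; the function name and parameters are mine:

```python
import hashlib

def feature_enabled(feature: str, user_id: int, percent: float) -> bool:
    """Deterministically place a user into a bucket in [0, 100) per feature.

    The same user always lands in the same bucket for a given feature,
    so ramping `percent` up only ever adds users, never flips anyone off.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent
```

Because bucketing is keyed on the feature name as well as the user id, different features ramp independently, and a launch can go from 1% to 100% without anyone seeing the feature flap on and off.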
Contingency
Checklist
Contingency Checklist
a list of things that could possibly go "wrong" with a new
feature, and what we could do about each
Issue
What could possibly go wrong with the feature launched in
production?
Likelihood
What is the likelihood of each item going wrong?
Comments
Any comments about the item?
Impact
How impactful will this be if it actually turns out to be a
concern?
Engineering
What can we do to mitigate the issue (e.g. can we gracefully
degrade?)
Onsite Messaging
What is the messaging to the user in the forums, blog, and
social media if this needs graceful degradation?
PR
Is PR needed for the contingency (e.g. a larger-scale failure)?
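The checklist fields above can be sketched as a simple record type. The field names here are mine, chosen to mirror the slides; this is an illustration, not an Etsy tool:

```python
from dataclasses import dataclass

@dataclass
class ContingencyItem:
    issue: str                  # what could go wrong in production
    likelihood: str             # e.g. "low" / "medium" / "high"
    impact: str                 # how bad it would be if it happens
    engineering: str            # mitigation, e.g. a graceful-degradation path
    onsite_messaging: str = ""  # forums/blog/social copy if we degrade
    pr_needed: bool = False     # does a larger-scale failure need PR?
    comments: str = ""          # anything else worth noting

# One row of the checklist:
item = ContingencyItem(
    issue="search cluster can't handle launch traffic",
    likelihood="medium",
    impact="degraded search results for all users",
    engineering="serve cached results and turn the feature down",
    onsite_messaging="forum post explaining temporarily reduced search quality",
)
```

Filling one of these out per plausible failure, before launch, is the whole exercise: the value is in the discussion each row forces, not in the data structure.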
Blameless
Post Mortems
What is a post mortem?
a postmortem is a facilitated meeting during which the people
involved in, interested in, or close to an accident or incident
debrief together on how we think the event came about
What does it cover?
• walking through a timeline of events
• learning how things are expected to work "normally",
adding the context of everyone’s perspective
• exploring what we might do to improve things for the future
Local Rationality
we want to know how it made sense for someone to do what
they did at the time
searching for second stories
instead of human error
• asking why leads to who is responsible
• asking how leads to what happened
Avoiding Human Error
Human error points directly to individuals in a complex
system. But, in complex systems, system behaviour is driven
fundamentally by the goals of the system and the system
structure. People just provide the flexibility to make it work.
Avoiding Human Error (cont)
Human error implies a deviation from "normal" or "ideal", but in
complex situations and tasks there is often no normal or ideal
that can be precisely and exactly described; many interconnected
variables influence the decisions that are made
Recognizing Human Error
• be aware of other terms for it: slip, lapse, distraction,
mistake, deviation, carelessness, malpractice, recklessness,
violation, misjudgement, etc
• don’t point to individuals when you really want to
understand the system itself and the work
• how do you feel when something goes wrong?
• is your instinct to find who did it / who screwed up, or to
find how it happened?
Other Things to Avoid
Root Cause
• it leads to a simplistic and linear explanation of how events
transpired
• linear mental models of causality don’t capture what is
needed to improve the safety of a system
• ignores the complexity of an event, which is what should be
explored if we are going to learn
• leads directly to blaming things on human error
Nietzschean anxiety
when situations appear both threatening and ambiguous we
seem to demand a clear causal agency; because if we cannot
establish this agency then the "problem" is potentially
irresolvable
Hindsight Bias
inclination, after an event has occurred, to see the event as
having been predictable, despite there having been little or no
objective basis for predicting it
Counterfactuals
the human tendency to create possible alternatives to life
events that have already occurred; something that is contrary
to what actually happened
Morgue
https://github.com/etsy/morgue
Post Mortem
Meeting Format
Meeting Format
• Timeline
• Discussion
• Remediation Items
Timeline
• a rough timeline scaffolding is required
• talk about facts that were known at the time, even if
hindsight reveals misunderstandings in what we knew
• look out for knowledge that some people were aware of,
that others were not, and dig into that
• no judgement about actions or knowledge (counterfactuals)
• tell people to hold that thought if they jump to remediation
items at this point
Timeline (cont)
• continually ask "What are we missing?" until those involved
feel it's complete
• continually ask "Does everyone agree this is the order in
which events took place?"
• make sure to include important times for events that
happened (alerts, discoveries)
• reach a consensus on the timeline and move on to the
discussion
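The timeline-building steps above amount to collecting timestamped facts from everyone involved and then agreeing on their order. A minimal sketch of that, with illustrative names (this is not code from the morgue tool):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(order=True)
class TimelineEvent:
    at: datetime      # when it happened (alert fired, deploy, discovery)
    what: str = ""    # the fact as it was known at the time
    source: str = ""  # who observed it, or which system reported it

# Collect events from everyone's perspective, then sort into one
# ordering the group can review and reach consensus on.
events = [
    TimelineEvent(datetime(2015, 3, 1, 4, 12), "pager alert: error rate spike"),
    TimelineEvent(datetime(2015, 3, 1, 4, 3), "config change deployed"),
]
events.sort()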
Discussion
• When an action or decision was taken in the timeline, ask
the person: "Think back to what you knew at the time, why
did that action make sense to you at the time?"
• Did we clean up anything after we were stable, and how
long did it take?
• Was there any troubleshooting fatigue?
Discussion (cont)
• Did we do a good job with communication (site status,
support, forums, etc)?
• Were all tools on hand and working, ready to use when we
needed them during the issue? Were there tools we would
have liked to have?
• Did we have enough metrics visibility to diagnose the issue?
• Was there collaborative and thoughtful communication
during the issue?
Remediation
• Remediation items should have tickets associated with them
to follow up on
• There can be further post meeting discussion on these but
tasks should not linger
Remediation questions
• What things could we do to prevent this exact thing from
happening in the future?
• What things could we do to make troubleshooting similar
incidents in the future easier?
In Summary
We Can Learn Before
and After Failure
Before
• Architecture reviews for new technology
• Operability reviews to gain launch confidence
After
• Postmortems are done soon after a failure
• avoid human error, counterfactuals, hindsight bias, and
root cause
Questions?
John Goulah (@johngoulah)
Etsy
