Continuous Deployment: Startup Lessons Learned


  • Hi, I'm Ash with WiredReach, and I'm going to present a case study on how I transitioned from a traditional development process to continuous deployment.







    Just to get a sense of who is in the room,


    How many people here know what Continuous Deployment is?


    And how many people practice Continuous Deployment today?


  • If I were to ask you “How do you maximize progress in a lean startup?”,


    most of you would probably say:




  • By maximizing learning about customers.


    We are in a conference about Lean Startups after all.


  • So let's take a look at where in the development process we learn about customers.


    Some of this learning happens during the requirements stage in the form of customer discovery, when we're out talking to customers and figuring out what to build.


    But most of this learning happens only after we get the product into customers’ hands in the form of customer validation.


    There is very little learning during development and QA.


    Sure we learn about other things, just not about customers.









  • Even though building a product is the purpose of a startup, you could say that


    Product Development actually gets in the way of learning about customers.


  • While we obviously can’t eliminate development and QA, we can shorten the cycle time from requirements to release so we can get to the learning parts faster.







    That is exactly what Continuous Deployment does.

    Continuous Deployment shortens the cycle time it takes to build, test, and deploy features.



    Shorter cycle times mean we get feedback faster and learn faster.

  • Here’s an overview of what I’ll be talking about today.



    I’ll describe what my development process looked like before and after continuous deployment.
    I’ll talk about how I got started and how I build features now.
  • Here's some background on WiredReach and the type of products we build. We've been in business for 7 years and have launched two products: BoxCloud and CloudFire.



    The first product, BoxCloud, was built using a release-early, release-often methodology, while CloudFire was built using Lean Startup techniques.



    Both products are hybrid web/desktop applications. What I mean by that is they have both a web component and a downloaded client component that runs on Mac and Windows.

  • So, before Continuous Deployment, we used to release on a two-week cycle, which is fairly fast by most standards.


    We used to spend about a week in development, and then QA tested for another two to three days before we deployed to customers.







    Now we release multiple times a day.


  • Before Continuous Deployment, we had a common staging area that mirrored production. This is what we used for development and QA. It worked fine in the early days but as we grew, coordinating around a single staging area became a problem.







    Now we build complete standalone sandboxes for development and QA. Each developer has the complete system on their workstation which gives them a lot more freedom to experiment. Each QA machine also runs on a standalone sandbox which makes it easier for us to do isolated testing as well as scale the testing infrastructure horizontally.


  • Before Continuous Deployment, releases were all-day events. We would do a code freeze, typically on a Thursday, and spend most of Friday building, testing, and packaging a release.







    Now a release is triggered automatically every time we commit code. The system is run against a battery of tests and only deployed into production if it passes all the tests. We constantly monitor the production environment and can tell good changes from bad changes quickly and revert a release if we need to.







    Our release process currently takes about 20 minutes.
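A minimal sketch of this kind of commit-triggered gate, in Python. The test commands and the deploy/revert/health hooks are illustrative placeholders, not WiredReach's actual tooling:

```python
import subprocess

def run_tests(test_commands):
    """Run each test suite command; True only if every one exits 0."""
    for cmd in test_commands:
        if subprocess.call(cmd, shell=True) != 0:
            return False  # a single failure blocks the release
    return True

def release(test_commands, deploy, revert, healthy):
    """Deploy only if all tests pass; revert if production looks unhealthy."""
    if not run_tests(test_commands):
        return "blocked: fix the failing tests first"
    deploy()
    if not healthy():
        revert()
        return "reverted: bad change detected in production"
    return "deployed"
```

The whole pipeline is just this loop run on every commit; the 20-minute budget is dominated by the test step, not the deploy itself.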


  • Before Continuous Deployment, the average size of a release was measured in hundreds of lines of code.







    Now a typical release is under 25 lines of code.


  • When you are committing 25 lines of code versus hundreds per release, the impact on the system is much more localized. This has led to fewer production emergencies and less firefighting for us.


  • But most important of all, if you are a technical founder like me, you constantly have to trade-off outside the building activities like customer development against inside the building activities like product development.







    Before Continuous Deployment, I had to schedule my week for coding days and customer days. Now I can do both in the same day. I schedule my coding in 2 hour blocks usually early in the day. That leaves the rest of the day open for everything else.

  • So, Continuous Deployment sounds great in theory.
  • But taking the plunge was still scary.



    The biggest reason for us was the feeling of having no safety net. With a traditional development process, there is a QA cycle before deployment, which provides a safety net of time, and there is some comfort in sharing testing responsibility with someone else.
  • But taking more time in QA was not always optimal for us:
    - despite our best testing efforts, bugs still crept into production
    - bugs got more expensive and harder to fix the longer we waited
    - but most important of all, we felt we weren’t learning anything about customers during that time



    The answer isn't spending more time in QA but less, through test automation and by getting better at detecting and fixing issues in production.
  • So let's take a look at how we got started.
  • The first and most important practice we adopted was fitting releases into small batch sizes. Coding in small batches is the key concept in continuous deployment and directly drives shorter cycle times, faster feedback, and a better workflow.



    For me, a small batch is the output of a 2-hour work block. We can't always build a full feature in 2 hours, but we have gotten good at deploying features incrementally. We start with non-user-facing changes first, like API updates and database changes, before building user-facing changes. Even deploying these changes early greatly helps to lower the integration risk of a feature.


  • The next thing we did was to not try to achieve 100% automation from the start. We kept deploying these small batch releases manually for a while and audited everything we did. This helped us build confidence in the process and overcome some of the initial fear of losing control.



    We already had a continuous integration server that ran a large collection of unit tests, but once we took out the formal QA step, we found ourselves preferring functional tests over unit tests, since they were much better at reflecting what users actually did with the system.
  • We now have a practice of writing a new functional test with every user-facing change, but we didn't start that way. We got started by writing a single test for the user activation flow first. This is the path users take when they first sign up, download, and interact with our product.



    If something goes wrong here, nothing else matters after that.
  • One downside of relying on more functional tests is that they take much longer to run and can drive up the release cycle time. Our goal was to keep the release cycle time under 30 minutes and we were only able to achieve that by distributing the tests across multiple machines.



    This is where the standalone sandboxes really come in handy. As we add more tests, it is fairly easy for us to add more test boxes and keep the cycle time in check.
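Splitting the suite across boxes can be as simple as greedy load balancing on estimated test durations. This Python sketch assumes you track a rough per-test running time, which is an assumption, not something the talk specifies:

```python
def distribute_tests(durations, boxes):
    """Greedily assign tests (name -> est. seconds) to the currently
    least-loaded box, so the slowest box bounds the cycle time."""
    loads = [0.0] * boxes
    plan = [[] for _ in range(boxes)]
    # longest tests first keeps the assignment balanced
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))
        plan[i].append(name)
        loads[i] += secs
    return plan, max(loads)
```

Adding a box simply lowers `max(loads)`, which is exactly the "add test boxes to keep cycle time in check" move described above.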
  • Another problem with tests is that, over time, they get out of date and start failing. I've worked in places where developers start ignoring these, which is a slippery slope, as the problem only keeps getting worse.



    In Continuous Deployment, these tests are your only line of defense before deploying code, so you have to take failing tests very seriously. We only deploy a release if it passes all the tests. Otherwise, we stop and fix the tests first.
  • You’ll eventually want to build one of these cluster immune systems (just hopefully not that one) that can automatically tell good changes from bad ones and do something about it. But here too, it’s important to build it out incrementally.



    We got started by simply monitoring the health of production servers using off-the-shelf tools like Ganglia and Nagios. Over time we built other application- and business-level monitoring into it.



    We did this mostly reactively, in response to production issues, applying the Five Whys to each one.
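A first cut at an immune-system check can be a simple before/after comparison of an error metric. The 2x threshold and hook names here are illustrative choices, not a recommendation:

```python
def immune_check(baseline_errors, current_errors, threshold=2.0):
    """Flag a deploy as bad if the post-deploy error count is more than
    `threshold` times the pre-deploy baseline."""
    if baseline_errors == 0:
        return current_errors == 0
    return current_errors <= threshold * baseline_errors

def after_deploy(baseline, current, revert):
    """Run right after a release goes live; revert automatically on a bad signal."""
    if not immune_check(baseline, current):
        revert()
        return "reverted"
    return "ok"
```

Richer versions compare business metrics (signups, revenue per minute) the same way; the structure stays identical.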
  • One of the challenges we faced was adopting continuous deployment for the downloaded client component. Interrupting customers multiple times a day for updates was not an option so we had to build a software update process that updated itself transparently in the background without any user intervention. We also wanted to be able to both push and pull for updates so that we could control how and when we delivered updates.



    This took some time and experimentation to get right.
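One way to sketch the pull side of such an updater: the client periodically compares its installed version with the server's latest and applies the package silently. The `fetch_latest` and `apply_update` hooks are hypothetical stand-ins for the real transport and installer:

```python
def check_for_update(installed, fetch_latest, apply_update):
    """Pull-based update check: compare the installed version tuple against
    the server's latest and apply silently in the background if newer."""
    latest, package = fetch_latest()
    if latest > installed:
        apply_update(package)   # no user interaction; restart deferred
        return latest
    return installed
```

The push side just tells clients to run this check now instead of waiting for the next timer tick, which gives control over how and when updates are delivered.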
  • Lastly, I want to spend some time talking about how I build features.



    Continuous Deployment shortens the cycle time it takes to deploy features, but how do you make sure you're actually building what customers want and not simply cranking out new features faster?



    Here are some rules we follow.
  • Features must be pulled not pushed.



    If you've followed a customer discovery process, identified your top 3 problems, and defined an MVP, you do not need to push more features until you have validated your MVP.



    This doesn’t mean you stop development, but most of your time should be spent measuring and improving existing features versus chasing after new problems to solve.



    From experience, I know this can be a hard rule to enforce and the next rule helps with that.
  • A good practice for ensuring the 80/20 rule is constraining the features pipeline. This is a common practice from Agile and Kanban, but with the addition of a validated learning state.



    Here’s how it works: Ideally, a new feature must be pulled or tested with more than one customer for it to show up in the backlog. The number of features in-progress is constrained by the number of developers and so is the number of features waiting for validation. This ensures that you cannot deploy a new feature until a previously deployed feature has been validated.
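The pipeline described above can be sketched as a small Kanban-style board with WIP limits. The two-customer pull rule and the column names come from the text; the class shape and method names are illustrative assumptions:

```python
class FeaturePipeline:
    """Kanban-style pipeline with a validated-learning column.
    In-progress is capped by developer count, and new work is blocked
    while the validation column is full."""
    def __init__(self, developers, validation_limit):
        self.developers = developers
        self.validation_limit = validation_limit
        self.backlog, self.in_progress = [], []
        self.validating, self.validated = [], []

    def add_to_backlog(self, feature, customer_requests):
        if customer_requests >= 2:   # pulled by more than one customer
            self.backlog.append(feature)
            return True
        return False

    def start_next(self):
        if (self.backlog and len(self.in_progress) < self.developers
                and len(self.validating) < self.validation_limit):
            self.in_progress.append(self.backlog.pop(0))
            return True
        return False    # blocked until earlier work is validated

    def deploy(self, feature):
        self.in_progress.remove(feature)
        self.validating.append(feature)

    def validate(self, feature):
        self.validating.remove(feature)
        self.validated.append(feature)
```

The key constraint lives in `start_next`: a full validation column stops new feature work, which is what forces you to close the learning loop before building more.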
  • So how do you validate a feature? Unless you have a lot of traffic, quantitative metrics can take some time to collect. For this reason, I prefer to validate a feature qualitatively first. Once a feature goes live, I directly contact customers who expressed interest in that feature and ask them for feedback. It's important to test not just the coolness factor of a feature but that it actually solves a customer problem and, more importantly, makes or keeps the sale. If I don't get a strong initial signal, I try to figure out why and either improve or kill the feature.



    We use Google Website Optimizer, KISSmetrics, Mixpanel, and homegrown scripts to collect quantitative data. Here too, it's important to focus on the macro and track key metrics over time, like revenue and retention, versus just clicks.
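As one example of tracking a macro metric over time rather than clicks, here is a sketch of cohort-based weekly retention. The input shapes are assumptions for illustration, not the output format of any of the tools named above:

```python
from collections import defaultdict

def weekly_retention(signup_week, active_weeks):
    """Fraction of the signup cohort still active N weeks after signing up.
    signup_week: user -> week number of signup.
    active_weeks: user -> set of week numbers with any activity."""
    cohort = list(signup_week)
    retained = defaultdict(int)
    for user in cohort:
        for week in active_weeks.get(user, set()):
            offset = week - signup_week[user]
            if offset >= 1:          # week 0 is the signup week itself
                retained[offset] += 1
    return {off: n / len(cohort) for off, n in sorted(retained.items())}
```

A flat or rising curve here says far more about product/market fit than any single day's click counts.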
  • So when is the best time to adopt Continuous Deployment?
    I believe the ideal time is as early in the product development cycle as possible - when you are small and have a few or even no customers.



    Continuous Deployment is an incremental process that you have to practice to get really good at. But even adopting simple practices like “coding in small batch sizes” paid off for us very quickly. The biggest benefit we have derived from Continuous Deployment is the ability to integrate Customer Development with Product Development.



    The fundamental call to action from Continuous Deployment is to “Ship More Frequently”. Once you take that first step, what you need to do next becomes clearer.



    Thank you.


Transcript

  • 1. Continuous Deployment Case-study: WiredReach COMMIT DEV MONITOR TEST RELEASE QA DEPLOY Ash Maurya @ashmaurya http://www.ashmaurya.com
  • 2. How do you maximize progress in a Lean Startup?
  • 3. By maximizing validated learning about customers.
  • 4. Requirements Development QA Release
  • 5. Some learning Requirements Development QA Release
  • 6. Most learning happens Some learning here Requirements Development QA Release
  • 7. Most learning happens Some learning here Requirements Development QA Release Very little learning
  • 8. Product Development gets in the way of learning about customers.
  • 9. Most learning happens Some learning here Requirements Development QA Release Very little learning
  • 10. Most learning happens Some learning here Requirements Release
  • 11. Most learning happens Some learning here Requirements Release
  • 12. Most learning happens Some learning here Requirements Release
  • 13. Most learning happens Some learning here Continuous Requirements Release Deployment Shortens cycle time
  • 14. 1. Before and After 2. How we got started 3. How we build features
  • 15. About WiredReach Dead-Simple Sharing Software BoxCloud The simple way to share files with clients and coworkers. CloudFire Photo and Video Sharing for Busy Parents and Photographers.
  • 16. Before After COMMIT DEV MONITOR TEST RELEASE QA DEPLOY 2 week release cycles Multiple releases a day
  • 17. Before After PRODUCTION PRODUCTION CERTIFICATION SANDBOXES Staging area Standalone sandboxes
  • 18. Before After COMMIT DEV MONITOR TEST RELEASE QA DEPLOY Releases were all day events Releases are non-events
  • 19. Before After COMMIT DEV MONITOR TEST RELEASE QA DEPLOY Release size: hundreds of lines of code / Release size: < 25 lines of code
  • 20. Before After COMMIT DEV MONITOR TEST RELEASE QA DEPLOY More emergency releases Less firefighting
  • 21. Before After COMMIT DEV MONITOR TEST RELEASE QA DEPLOY Coding days versus Customer days / Coding days AND Customer days
  • 22. Continuous Deployment sounds great, but...
  • 23. Taking the plunge is scary as hell
  • 24. Requirements Development QA Release
  • 25. Requirements Development QA Release
  • 26. $ Requirements Development QA Release
  • 27. $ Requirements Development QA Release Very little learning
  • 28. $ Automated Requirements Development QA QA Release Very little learning
  • 29. $ Automated Requirements Development QA QA Release Production Very little learning Monitoring
  • 30. 1. Before and After 2. How we got started 3. How we build features
  • 31. Code in small batch sizes
  • 32. Deploy manually at first, then automate.
  • 33. Always test the User Activation flow
  • 34. Watch the release cycle time COMMIT MONITOR TEST DEPLOY Less than 30 minutes
  • 35. Be ready to stop the production line
  • 36. Build a cluster immune system. Incrementally.
  • 37. Challenges: Downloadable software
  • 38. 1. Before and After 2. How we got started 3. How we build features
  • 39. Don’t be a feature pusher NEW FEATURES 20% CONTINUOUS RELEASE 80% EXISTING FEATURES
  • 40. Constrain the features pipeline Validated Backlog In-Progress Done Learning Was this feature any good?
  • 41. Closing the loop with validated learning Qualitative Quantitative Start here Verify with data
  • 42. When is the right time to start ? There is no better time than the present.
  • 43. Thanks! Ash Maurya twitter: ashmaurya blog: http://www.ashmaurya.com Getting Lean - the book How to iterate your web application to product/market fit http://www.wiredreach.com/gettinglean.html