
Dev Ops @ Envato

My presentation from the Melbourne Dev Ops meetup on 22nd March 2011 on how we structure our team, deployments, and infrastructure.



  1. Dev Ops @ Envato ... or how we found out there’s a name and a support group for all the stuff we already did to let a team of 8 deploy heaps of times a day to a Ruby on Rails app that has scaled up to around 20 million requests a week without an ops team.
  2. John Barton @johnbarton john@envato.com
  3. Envato? http://envato.com
  4. Stock Marketplaces & Tutorial Network
  5. Marketplaces
  6. Like iStockPhoto but for other creative niches or eBay for digital goods
  7. Tuts+ Network
  8. Big Blog Network for education in creative fields... and linkbait (ducks)
  9. Big traffic and interesting in its own right, but that’s for @ryanfaceryan to talk about
  10. Why?
  11. 2 things recently made us think we might be doing something a bit different
  12. Neal Ford @ YOW! “Rails can Scale”
  13. 200k+ daily requests vs 2m+ daily requests
  14. Dev Ops Melbourne “Continuous Deployment”
  15. 1 Deployment per week or fortnight vs 5 Deployments per Day
  16. The Marketplace
  17. August 2006
  18. One Marketplace
  19. FlashDen
  20. Rails 0.13b, No Users, No Traffic
  21. March 2011
  22. 9 Marketplaces
  23. ActiveDen (née FlashDen), AudioJungle, ThemeForest, VideoHive, GraphicRiver, CodeCanyon, 3D Ocean, Tuts+ Marketplace, PhotoDune
  24. Rails 2.3.11, 687,830 Users, 20 Million Requests Weekly
  25. One Codebase and One Production Environment
  26. ... and well gosh darn it we just deploy to that site whenever we darned well feel like it
  27. As much a product of time and circumstances as good decisions and hard work
  28. The “Golden Age”
  29. 3 Developers, 30 minute feedback cycle: deploy, discuss on forums, deploy again
  30. Decision 1: Preserve everything good about the startup days for as long as we can
  31. Commodity Hosting
  32. Notice I’m not saying“Cloud Computing”
  33. You can’t trust “the cloud”, but you can trust “the cloud” to be “the cloud”.
  34. Decision 2: Conservative platform choice so we don’t have to sweat the details
  35. Corporate AntiPatterns
  36. We’ve all been doing dev long enough to see this stuff screwed up over and over again
  37. Decision 3:Don’t do all that stuff
  38. So what do we actually do?
  39. “Culture of respect & trust, good attitude toward failure...” Ted Dziuba
  40. “How about ‘culture of stop fucking up’?” Ted Dziuba http://teddziuba.com/2011/03/devops-scam.html
  41. Ultimate Responsibility
  42. “The fault, dear Brutus, is not in our QA or Ops, but in ourselves.”
  43. Test Driven Development vs. QA Team
  44. Test Driven Infrastructure vs. Ops Team
  45. Both as a team and as individuals we own our work from when we are asked to do it...
  46. ... until it is demonstrably error-free and performant in production
  47. Everyone is in the (paid) on call roster
  48. Everyone takes a turn at Level 2 Customer Support
  49. Want those jobs to be easier? Stop fucking up.
  50. Process
  51. LEAN / TPS Principles...without the process
  52. You cannot write code any faster than you can deploy it to production
  53. Long running projects?
  54. A. B. C. Always Be Cmerging (the C is silent)
  55. Dark Launch, Feature Flags, Private Beta
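The three techniques on this slide share one mechanism: a runtime flag that gates a code path, with a private-beta list letting selected users through before a full launch. A minimal sketch in plain Ruby — the class name and API here are illustrative assumptions, since the talk names the pattern rather than Envato’s actual implementation:

```ruby
# Hypothetical feature-flag store: dark-launched code checks enabled?
# before taking the new path, so unfinished work can ship merged but off.
class FeatureFlags
  def initialize
    @launched = {}                               # flag name => globally on?
    @beta_users = Hash.new { |h, k| h[k] = [] }  # flag name => beta user ids
  end

  def launch(flag)
    @launched[flag] = true
  end

  def add_beta_user(flag, user_id)
    @beta_users[flag] << user_id
  end

  # On for everyone once launched, or for the private-beta list before that.
  def enabled?(flag, user_id: nil)
    @launched[flag] == true || @beta_users[flag].include?(user_id)
  end
end
```

App code then wraps the new path in `if flags.enabled?(:new_search, user_id: current_user.id)`, which is what lets half-built features merge to master every day without being visible.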
  56. Community
  57. We do trip up running this fast
  58. But through years of openness with our users via our forums and owning up to our mistakes
  59. ... we’ve ended up with a (relatively) sympathetic community
  60. Time Zones both help and hurt
  61. Traffic peaks during US day means that if things go wrong we’re usually asleep
  62. But it makes it very easy to deploy during our business hours
  63. Dev vs. Ops
  64. Our Solution:Don’t have Ops
  65. Outsource commodity platform bits: virtualisation/cloud; have Rackspace take care of db/mail server
  66. Ensure the dev team has the skills to take care of the rest
  67. Keep the stack as Vanilla as possible
  68. Virtualised servers in our own sandbox.Cloud Flexibility - Cloud Shit-ness = WIN
  69. Automate Configuration Management
  70. Don’t modify live boxes; tear down and build again from scratch
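One way to read “tear down and build again”: a box’s configuration lives in code and is replayed onto a fresh machine, so a rebuilt box is identical to the old one and re-running the recipe always converges to the same state. A toy sketch in plain Ruby — not any particular tool (the talk doesn’t name one), and the paths and contents are made up:

```ruby
require "fileutils"

# Hypothetical "recipe": the desired files on a box, as data.
RECIPE = [
  { path: "app/config/unicorn.rb",   content: "worker_processes 4\n" },
  { path: "app/config/database.yml", content: "adapter: mysql\n" },
]

# Applying the recipe to a fresh root yields an identical box every time;
# running it twice is safe, which is what makes rebuild-from-scratch cheap.
def provision(root)
  RECIPE.each do |step|
    target = File.join(root, step[:path])
    FileUtils.mkdir_p(File.dirname(target))
    File.write(target, step[:content])
  end
end
```

The design point is that the recipe, not the live box, is the source of truth — drift on a running server is fixed by rebuilding, not by hand-editing.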
  71. Take advantage of an individual’s talents, but don’t rely upon them, i.e. don’t accidentally create an ops guy
  72. Performance & Scaling
  73. Not as big a deal as everyone thinks
  74. Shared-nothing load-balanced app servers + out-of-request queue workers: not rocket surgery
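The out-of-request half of that slide fits in a few lines: the request handler only enqueues a job and returns, and a separate worker drains the queue outside the request cycle, so app servers stay shared-nothing and response times stay flat. The names below are hypothetical:

```ruby
require "thread"

# Shared job queue; in production this would be a real queue service,
# not an in-process structure.
JOBS = Queue.new

# What a controller does inside the request: enqueue and return at once.
def enqueue_thumbnail(upload_id)
  JOBS << { type: :thumbnail, upload_id: upload_id }
end

# What a worker process does: pop jobs and run them outside any request.
def run_worker(results)
  until JOBS.empty?
    job = JOBS.pop
    results << "thumbnailed upload #{job[:upload_id]}"
  end
end
```

Because no request ever waits on the slow work, scaling is mostly a matter of adding identical app servers behind the load balancer and more workers behind the queue.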
  75. Measure, deploy, measure again, and then tweak or roll back. New Relic FTW
  76. There is no code faster than no code.
  77. Caveats
  78. We rely on our growth following a steady exponent for this approach to be viable
  79. We made a conscious trade-off between delivery time and production snafus
  80. We’ve traded off platform innovation for product innovation (no MongoDB, etc)
  81. Auditors aren’t big fans of one team with access to everything, but you can mitigate these problems
  82. Questions?
