
AWS Intro for Knight News Fellows



I gave a brief intro about using AWS to our Mozilla Knight News Fellows.



1. AWS Fun
2. A short bit of history
     ● Not that long ago, a "server" was:
       ○ One piece of hardware
       ○ One operating system
       ○ Physically racked, powered, and networked in a managed datacenter
     ● People started playing with "virtualization":
       ○ One piece of hardware
       ○ Multiple operating systems running independently
       ○ Physically racked, powered, and networked in a managed datacenter
3. As virtualization was taking off...
     ● Mid-2000s: Amazon buys a TON of hardware.
     ● The mantra for the folks building infrastructure: provide
       service-style endpoint access to infrastructure management for
       internal use. EVERYTHING IS AN API
4. At the same time... Marketing departments everywhere go to town,
     as marketing does... VIRTUALIZE IT ALL!!!!
5. Amazon realizes... If we run the virtual server hosts... and we
     just open up our internal infrastructure APIs to end users...
6. $$$$$$$$$$$$$$$$$$$$$$$$$$
7. Marketing took over, now everything is Cloud.
8. By Cloud, I mean...
     ● Must be distributed.
     ● Must be programmatically accessible.
     ● Is multi-tenanted (you are not the only user of the hardware).
9. In general, what is AWS?
     ● A collection of commonly used pieces of software, made easily
       accessible in:
       ○ A distributed environment: multiple Availability Zones per
         region, multiple regions
       ○ Programmatically accessible infrastructure
     For example: MySQL, MS SQL, Memcached, Linux, Windows, CDN, DNS
     management, user/admin management, firewalls, load balancers...
10. Common components of infrastructure in your old datacenter
11. Common components of infrastructure in AWS
12. Some of what this buys us
      ● We can spin up replica environments
      ● Easier functional STAGING
      ● Load test against prod without touching prod
      ● Build in automated deployments and testing, making pushing to
        prod a breeze for all devs
      ● This makes the feedback loop tighter and faster, and keeps
        changes and their inevitable bugs more in context
      ● This all wraps up to make you, the devs, more confident to try
        new things
13. Controlling all of that infrastructure
14. Lots of configuration management options...
      ● Chef (Opscode)
      ● Puppet (what I use)
      ● AMIs (server images)
      ● CloudFormation (AWS service)
15. But wait... isn't the cloud dangerous?
      ● Yes! Just as dangerous as your datacenter
      ● Secrets stored in S3, managed by Puppet
      ● Each app has its own key and security groups
      ● Manage security via security groups and SSH keys
16. General scaling on AWS
      ● Use autoscale groups (even if you never have to autoscale)
      ● You can trigger autoscaling on any metric
      ● Use both EBS and instance store autoscale groups
        ○ ~30 seconds to "traffic ready" for a prebuilt EBS instance
          vs. 2-10 min for an instance store instance
        ○ Keep a baseline # of instance store nodes for when EBS has
          issues
        ○ You can have multiple autoscale groups load into one ELB (so,
          app-ebs-fastscale-group and app-instancestore-noscale-group)
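The two-group pattern above can be sketched in a few lines. This is a minimal illustration, not the deck's actual setup: the group names come from the slide, but the ELB name, launch configuration naming, and availability zones are hypothetical, and the real AWS call (boto's create_auto_scaling_group) is only referenced in a comment.

```python
# Two auto scaling groups feeding one ELB: a fast-scaling EBS-backed
# group plus a fixed instance store baseline for when EBS has issues.
def asg_params(name, min_size, max_size, elb_name="app-elb"):
    """Build the parameters for one auto scaling group behind a shared ELB."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": name + "-lc",  # assumes a matching launch config
        "MinSize": min_size,
        "MaxSize": max_size,
        "LoadBalancerNames": [elb_name],  # both groups register with the same ELB
        "AvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1c"],
    }

# EBS group scales up and down; instance store group stays at a baseline of 2.
fast = asg_params("app-ebs-fastscale-group", 0, 20)
baseline = asg_params("app-instancestore-noscale-group", 2, 2)

# With boto3, each dict would be passed to
# autoscaling.create_auto_scaling_group(**params).
```

Because both parameter sets list the same load balancer, the ELB mixes traffic across whichever group currently has healthy instances.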
17. General scaling on AWS
      ● For high-IO data (RDS or self-managed EC2), use provisioned
        IOPS.
      ● On EC2, EBS volumes can be RAID10'd... need a 50k IOPS volume?
        :D Great way to vertically scale.
18. General scaling on AWS
      ● Adhere to rules so you can horizontally scale
        ○ CNAME all resources, such as MySQL servers. If you can easily
          move a resource, you can easily vertically scale it elsewhere
          and move to it.
        ○ Store dependent content (e.g. media, user uploads) away from
          web tier nodes. If a web node dies and you lost anything, you
          did it wrong.
        ○ Keep all pieces of the app modular, independently scalable,
          and revvable without retooling.
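The CNAME rule above is the point of leverage: the app only ever knows a hostname it controls, so moving MySQL to a bigger box is a DNS change, not a redeploy. A minimal 12-factor-style sketch (the hostname and environment variable name are hypothetical):

```python
import os

# The app connects to a CNAME it owns (e.g. db.myapp.internal), never to
# the raw AWS endpoint. Repointing that record moves the resource without
# touching application code.
def db_host():
    """Read the database hostname from the environment, with a CNAME default."""
    return os.environ.get("DATABASE_HOST", "db.myapp.internal")
```

The same pattern applies to any resource worth moving: caches, queues, search endpoints.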
19. General High Availability on AWS
      ● Multi-region (each region has multiple AZs)
      ● Multi-Availability Zone for:
        ○ RDS (built in; takes ~3 min to fail over)
        ○ Load balancing
        ○ Autoscaling groups (3 AZs recommended)
      ● Dynamic DNS
      ● Health checks on apps
20. General High Availability on AWS
      ● Mix an instance store baseline with EBS-backed fast scaling,
        for when EBS has issues.
      ● Health checks on apps
      ● Status updates to an S3 file; the app reads them and points
        itself at failover resources... No db? Write to an SQS queue!
      ● Oh yeah, use a lot of SQS!
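The "no db? write to a queue" idea above can be sketched as a simple fallback: if the primary store rejects a write, park the payload on a queue and replay it later. The `store` and `queue` arguments here are stand-ins I introduce for illustration; in a real setup the queue side would be an SQS send (e.g. boto's send_message).

```python
import json

def record_status(store, queue, status):
    """Write status to the primary store, falling back to a queue on failure."""
    try:
        store(status)
        return "stored"
    except Exception:
        # Durable parking spot: replay these messages once the db is back.
        queue.append(json.dumps(status))
        return "queued"
```

The caller never blocks on a dead database, and nothing is lost as long as the queue accepts the write.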
21. CNAME for all the resources (12-factor friendly)
22. Easier to move, failover, rebuild
23. RDS tricks
      ● Multi-AZ takes ~3 min to fail over
      ● EBS volumes with greater storage get better performance; always
        use 300gb for prod, even for small instances.
      ● Read slaves have a lot of challenges with schema changes. It is
        usually faster to just rebuild slaves.
      ● For monitoring, grant REPLICATION CLIENT to the monitoring user
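The monitoring grant above is MySQL's REPLICATION CLIENT privilege, which lets a low-privilege user run SHOW SLAVE STATUS / SHOW MASTER STATUS (e.g. to watch replica lag) without broader access. The user name and host pattern below are hypothetical; the statement would be run once by an admin account.

```python
# MySQL grant for a read-only monitoring user; 'monitor'@'%' is a
# placeholder for your actual monitoring user and host pattern.
GRANT_SQL = "GRANT REPLICATION CLIENT ON *.* TO 'monitor'@'%';"

# With a MySQL driver this would be executed as, e.g.,
# cursor.execute(GRANT_SQL) by an administrative connection.
```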
24. Some other tricks
      ● ELBs are EBS-backed EC2 instances... when EBS alerts go out, be
        careful!
      ● Set up IFTTT alerts for AWS RSS status updates
      ● Use New Relic. Please!
      ● IAM roles allow for interaction with AWS infrastructure...
        think: a monitoring server that tells an autoscale group to
        respond to a problem by launching new nodes
      ● Route53 is awesome. Alias A records, super reliable, and you
        can keep low TTLs
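The IAM-role idea above can be sketched as follows: a monitoring box running under a role needs no stored credentials to call the autoscaling API. The decision logic below is plain Python; the actual AWS call (boto's set_desired_capacity) appears only as a comment, and the thresholds, group name, and per-node capacity are all hypothetical.

```python
def desired_capacity(current_nodes, queue_depth, per_node_capacity=100, max_nodes=20):
    """Pick a node count for the autoscale group based on queue backlog."""
    needed = -(-queue_depth // per_node_capacity)  # ceiling division
    # Never scale below what is already running, never above the cap.
    return max(current_nodes, min(needed, max_nodes))

# On the monitoring server (credentials supplied by its IAM role):
# autoscaling.set_desired_capacity(
#     AutoScalingGroupName="app-ebs-fastscale-group",
#     DesiredCapacity=desired_capacity(current, depth))
```

Keeping the policy in one pure function makes the scaling decision itself easy to test without touching AWS.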
25. Pay Amazon less
      ● Reserved instances can save a lot of money
      ● Spot instances are great for batch and processing, EMR, and
        Cluster Compute
      ● S3 static hosting is ridiculously inexpensive. Go that route
        for anything static.
      ● For dev work, Heroku is great; no cost for apps that do not
        scale
26. Other random advice...
27. Good stuff
      ● AWS Marketplace has a lot of good stuff
      ● My example repos: and
28. Demo time (if there is time)
      - Building a new autoscale group/app?
      - Managing infrastructure via Fabric, Jenkins, Puppet
      - Show off the Puppet systems config setup?