AWS Customer Presentation - family builder

  1. CONFIDENTIAL. Familybuilder, June 2010. David Blinder, CTO – Familybuilder.com
  2. Intro to Familybuilder
     - Familybuilder connects families through their existing online social networks
     - We provide features to facilitate communication, locate new relatives, build family trees, engage in fun activities, and more
     - We have over 26 million users who have added over 160 million family members to the application
  3. Metrics
     - Over 6.3 million users engaged monthly
     - 26 million total installs and over 160 million relatives added
     - 1.5 million impressions daily
     - Growth of 50k-70k new installs daily through virality
  4. What Led Us to AWS
     - Startup with limited financial resources
       - Infrastructure build-out
       - Setup costs
       - Contracts and commitments
     - Usage-based costing
       - Increase/decrease availability, paying only for what we use
     - Significant traffic fluctuations
       - Automation of additional resources as needed
       - Social networks' viral factors result in significant traffic fluctuations
     - The application
       - Designed as a working model, moved into production
       - The need to temporarily bridge design gaps and inefficiencies
     - Race to launch new features
       - Amazon allowed for reduced time to market
  6. AWS Evaluation Process
     - Cost of setup as well as opportunity cost
     - Ability to pay for what we are using
     - Comparison to hosted and co-located models (Rackspace, local racks)
     - Setup time and turnaround time for server provisioning, builds, and launching
     - Digital media storage: design, setup, and maintenance considerations
     - Flexibility in infrastructure to hit moving targets while scaling
     - Human resource development and cost
       - Sysadmins
       - Lead developer involvement
       - Limited learning curves and associated costs
  7. Experience Getting Started
     - Solicited the help of a third party, RightScale. At the time, the GUI and management tools did not all exist and we needed to move forward. Understanding the available services and how they could be used to scale our application took about 7 weeks to design, implement, and migrate.
     - Migrating existing user images to S3. Migration from our current ISP was time consuming, and often our processes would fail and need to be restarted. These are no longer issues.
     - Utilization spikes. Utilization spikes occurred while we refactored source code and optimized queries. Understanding the different instance types and their resource value let us bridge problems with larger instances.
  8. How This Works
  9. What We Use
     - EC2: www/load-balance instances, MySQL, spot instances. WWW units run httpd and distributed memcache on large instances; the extra memory on these units, the price point, and the processing power fit the needs of the application.
     - Load-balance instances: We run haproxy on two small instances and round-robin between them. This ensures lost instances are taken out in a bad situation.
     - MySQL: We used to run a master/slave setup on xlarge instances, but migrated to a sharded model to achieve horizontal scale when we had I/O issues on those instances. We are looking at RDS, and now recommend high-memory units for master/slave.
     - Spot instances: Communicating out to 26 million people in less than 12 hours is something you cannot do unless you have a flexible infrastructure you can command. We achieve this with spot instances, loading them locally from data pulled from production, and have almost fully automated the process.
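The sharded MySQL model described above comes down to routing each user to a fixed shard deterministically. A minimal sketch of how such routing might look (the shard count, hostname pattern, and hash choice here are hypothetical illustrations, not Familybuilder's actual scheme):

```python
import hashlib

NUM_SHARDS = 8  # hypothetical shard count

def shard_for(user_id: int) -> int:
    """Map a user id to a shard deterministically.

    A stable hash (rather than Python's randomized hash()) keeps
    the mapping identical across processes and restarts.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def dsn_for(user_id: int) -> str:
    """Build the connection target for the user's shard (hostname is invented)."""
    return f"mysql-shard-{shard_for(user_id)}.internal:3306"

# Every lookup for the same user hits the same shard:
assert shard_for(12345) == shard_for(12345)
```

The key property is that reads and writes for one user always land on one shard, so I/O load spreads across instances without cross-shard coordination for the common case.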
  10. What We Use (continued)
     - CloudFront: cached store of images on S3, served via CloudFront
     - S3: uploaded images from users, 2TB+ currently. When choosing between an ISP, a co-lo, and AWS, we looked at storage most critically. Being a family app where images of family are treasured, we needed to store this data without limit. Speaking with professional peers, both of whom came from image-based websites and maintained their own infrastructure locally, I could not imagine dealing with the upfront cost, the maintenance, and the additional human resources required to be on hand and manage it. The cost savings in human resources alone make this a no-brainer.
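The CloudFront-over-S3 pattern above means the application stores each image once in S3 and serves it through a CDN hostname, letting CloudFront cache it at the edge. A rough sketch of the key and URL construction involved (the bucket layout, hash prefix, and CDN domain are invented for illustration; the deck gives no such details):

```python
import hashlib
import posixpath

# Hypothetical CDN hostname; a real deployment would use its
# CloudFront distribution's domain or a CNAME pointing at it.
CDN_DOMAIN = "images.example-familybuilder.com"

def image_key(user_id: int, filename: str) -> str:
    """Build an S3 object key for an uploaded family photo.

    A short hash prefix spreads keys across the namespace,
    a common pattern for distributing request load on S3.
    """
    prefix = hashlib.sha1(str(user_id).encode()).hexdigest()[:4]
    return posixpath.join(prefix, str(user_id), filename)

def cdn_url(user_id: int, filename: str) -> str:
    """URL the application serves; CloudFront caches the S3 object behind it."""
    return f"https://{CDN_DOMAIN}/{image_key(user_id, filename)}"
```

Because the key is deterministic, the application never needs a database lookup to locate an image, and identical URLs stay cache-hot in CloudFront.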
  11. Benefits from AWS
     - Competitors did not get away: we maintained high availability as user participation grew, leveraging the immediacy of AWS resources
     - Grew the company within a limited budget, with limited human resource cost and without delays in infrastructure design changes
     - By-the-hour and spot-instance usage to deliver personal communication to the user base could not have been accomplished without this resource
     - S3 storage: a scalable image store is headache free!
  12. Best Practices Learned
     - Think parallel and take advantage of the resource
     - Automate as much as possible
     - Design for failure: be a pessimist when designing
     - Secure your application: set up security groups around roles
     - Secure and manage your AWS credentials (application-development specific)
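"Design for failure" in practice means assuming any instance or remote call can disappear mid-request and recovering automatically. One small illustration of the idea is retrying a flaky operation with exponential backoff (the retry counts and delays below are arbitrary examples, not values from the deck):

```python
import time

def with_retries(op, attempts=3, base_delay=0.1):
    """Run op(), retrying on failure with exponential backoff.

    Designing for failure means treating a lost instance or a
    dropped connection as the normal case, not the exception.
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller decide
            time.sleep(base_delay * (2 ** attempt))

# A call that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("instance went away")
    return "ok"

assert with_retries(flaky) == "ok"
```

The same pessimism shows up elsewhere in the deck: running two haproxy instances in round-robin so a lost load balancer is taken out of rotation rather than taking the site down.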
