VoxFiles: OpenStack Swift @ Voxel.Net


1. VoxFiles: OpenStack Swift @ Voxel.Net
   Leaving the Nest
   Judd Maltin [email_address]
2. About Voxel
   • A profitable company of people who really love this stuff (even if you don't), and that's why we're good at it.
   • Founded in 1999; first to commercially support load balancing for Linux (RAILs) in 2000.
   • Own and operate an international network connecting nearly 20 Voxel network POPs (AS29791).
   • Over 1,000 clients rely on Voxel: Fortune 5000 companies, Web 2.0 startups, carriers, media & entertainment, advertising networks, etc.
   • Our core competency is providing a 100% uptime SLA on your entire stack: ProManaged Hosting Infrastructure, Voxel IP Network, VoxCLOUD Hybrid Cloud, and VoxCAST Content Delivery.
3. VoxFiles - why?
   • Compete in the growing object-storage marketplace
   • Great position in the managed-hosting world
   • Other storage systems were not meeting customers' needs:
     ◦ unreliable, expensive, proprietary
   • Complements Voxel.Net's rich offerings:
     ◦ ProManaged Hosting Infrastructure
     ◦ Voxel IP Network
     ◦ VoxCLOUD Hybrid Cloud
     ◦ VoxCAST Content Delivery Network
4. OpenStack - why?
   • Strong, active, responsive developer community
   • Open source
   • All one language: Python
   • Big vision:
     ◦ many products
     ◦ shared components
   • Great vendor involvement! Yay, DELL!
5. What's great about VoxFiles?
   • Multi-datacenter
   • Some storage zones cross datacenters
   • Fully automated expansion of storage (demo in a few minutes)
   • We won't run out: big storage purchase made before the Thailand flooding
6. Multi-datacenter, you say?
   • Yes.
   • All datacenters are on different local power, with generator backup
   • Multiple independent routes, above 500-year flood plains
7. VoxFiles zones
   • Swift does not yet have the notion of tiered zones, so I made my own.
   • 3/4 of zones live entirely within a single datacenter
   • 1/4 of zones span datacenters
   • Rationale: zones that cross datacenters can be rebuilt more quickly after a connectivity loss. BEWARE SPLIT BRAIN!
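Since Swift (at the time) had no tiered-zone concept, the single-datacenter vs. cross-datacenter split can be encoded purely in how zone numbers are assigned. A minimal sketch of that bookkeeping, with hypothetical zone IDs and datacenter names (Swift itself only ever sees the integer zone):

```python
# Hypothetical layout: zones 1-3 each live in one datacenter ("3/4 of
# zones"), while zone 4 spans two sites ("1/4 of zones").
SINGLE_DC_ZONES = {1: "nyc", 2: "nyc", 3: "ams"}   # zone -> its datacenter
CROSS_DC_ZONES = {4: ("nyc", "ams")}               # zone -> sites it spans

def zone_datacenters(zone):
    """Return the tuple of datacenters a zone's devices may live in."""
    if zone in SINGLE_DC_ZONES:
        return (SINGLE_DC_ZONES[zone],)
    return CROSS_DC_ZONES[zone]

def crosses_datacenters(zone):
    """True for the zones that span sites (the fast-rebuild ones)."""
    return len(zone_datacenters(zone)) > 1
```

This kind of out-of-band mapping is what lets an operator reason about split-brain risk even though the ring itself treats every zone identically.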
8. VoxFiles - production
   • Multi-petabyte-ready storage nodes of commodity hardware.
   • No special hardware. Just shove drives into standard 2Us, and put SSDs and lots of RAM in the proxy/account/container servers. We don't want to be an experimental-hardware shop; we're growing too fast.
   • Keep ready-and-waiting storage offline (to save power and cooling).
   • Bring storage online (for now) in ~30TB increments as usage exceeds 50%.
   • Rebuild rings and apply weighting slowly.
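Bringing an increment online ultimately comes down to a series of `swift-ring-builder` invocations: one `add` per drive, at a deliberately low weight, followed by a `rebalance`. A sketch that just assembles the command lines (the builder filename, IP, port, and device names are made up for illustration; weights are in the ring's arbitrary relative units):

```python
def add_device_cmd(builder, zone, ip, port, device, weight):
    """Build a `swift-ring-builder add` command line for one drive.

    Swift's ring builder identifies a device as zZONE-IP:PORT/DEVICE,
    with a relative weight as the final argument.
    """
    return (f"swift-ring-builder {builder} "
            f"add z{zone}-{ip}:{port}/{device} {weight}")

def bring_node_online(builder, zone, ip, devices, start_weight=10.0):
    """Add all of a new node's drives at a low initial weight; the
    weight is raised later, once replication traffic settles."""
    cmds = [add_device_cmd(builder, zone, ip, 6000, dev, start_weight)
            for dev in devices]
    cmds.append(f"swift-ring-builder {builder} rebalance")
    return cmds

cmds = bring_node_online("object.builder", 2, "10.0.0.7",
                         ["sdb", "sdc", "sdd"])
```

Starting low and raising weights over several rebalances is what keeps a 30TB addition from triggering a replication storm.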
9. VoxFiles - environments & automation
   • We started with an Alpha environment
     ◦ VMs as proxy servers, plus 6 storage nodes
     ◦ The Alpha environment became my dev system
   • Hacked Andy's Chef recipes to fit our custom needs
     ◦ Set up proxy, storage, account, ring-builder, and Munin
     ◦ Drives, networks, OS users
     ◦ Authentication, dispersion testing, kernel settings, logging
   • I set up a production environment with Chef & RunDeck
     ◦ RunDeck only in prod
     ◦ Orchestrates ring rebuilding and node addition/removal
10. VoxFiles - auto-growth in prod (now)
   • We are still growing our existing zones.
   • Munin alerts Ops when a zone exceeds 50% usage.
   • Ops adds a storage node via RunDeck: <enter Zone #>
   • RunDeck finds a storage node <node_id> in Ubersmith(tm) inventory, and the "VoxServers hybrid cloud" powers it on.
   • RunDeck runs "knife voxel server create node_id=<node_id> hostname=storage07.blah.com", and the post-install runs chef-client. Chef sets up the node for storage.
   • RunDeck runs chef-client on the ring-builder, with low and increasing weights for the new storage devices.
   • RunDeck watches disk IO load on each box; if it stays low for 5 minutes, RunDeck applies the new ring with higher weights.
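The last step above, raising weights only after IO has been quiet for 5 minutes, is the heart of the automation. A simplified, testable sketch of that ramp loop (the load source, thresholds, and step size are assumptions; in the real setup the load signal comes from watching disk IO, and `apply_ring` stands in for a ring rebuild and push):

```python
def ramp_weights(devices, target_weight, io_load, apply_ring,
                 step=25.0, quiet_secs=300, poll=lambda s: None):
    """Gradually raise device weights toward target_weight.

    devices:    dict of device -> current weight.
    io_load():  current disk/replication load in 0.0..1.0 (injected).
    apply_ring: called with the new weights after each raise; stands in
                for rebuilding and pushing the ring.
    poll(secs): sleep hook, injectable so the loop is testable.
    """
    weights = dict(devices)
    while any(w < target_weight for w in weights.values()):
        quiet = 0
        while quiet < quiet_secs:       # require 5 quiet minutes
            if io_load() < 0.2:         # "low load" threshold (assumed)
                quiet += 60
            else:
                quiet = 0               # any spike resets the clock
            poll(60)
        for dev, w in weights.items():
            weights[dev] = min(target_weight, w + step)
        apply_ring(dict(weights))       # rebuild + apply the new ring
    return weights
```

Resetting the quiet counter on any load spike is the conservative choice: replication traffic from the previous weight bump must fully drain before the next one is applied.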
11. VoxFiles - auto-growth in prod (later)
   • Once our zones are big-enough(tm), we'll start adding capacity by adding zones.
   • Adding a zone poses similar challenges to adding storage within a zone: preventing replication storms.
   • In the "adding zone" case, we manage weights on the whole zone; otherwise, it's weights per device.
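The difference between the two cases is just the granularity of the weight knob. A small sketch of the zone-level variant, using a simplified stand-in for the builder's device list (field names here are illustrative, not the builder's actual internals):

```python
def scale_zone_weights(ring_devices, zone, factor):
    """Scale every device weight in one zone by the same factor.

    ring_devices: list of dicts like
        {"zone": 5, "device": "sdb", "weight": 100.0}
    When adding a whole new zone, the ramp is applied uniformly across
    the zone; when growing an existing zone, weights move per device.
    """
    return [dict(d, weight=d["weight"] * factor) if d["zone"] == zone
            else d
            for d in ring_devices]
```

Ramping an entire zone uniformly keeps the new zone's share of partitions growing in step, so no single new device becomes a replication hotspot.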
12. VoxFiles - basic project plan
   • Alpha: 11/11
   • Beta: 12/11
   • Production: 1/12
