VoxFiles: OpenStack Swift @ Voxel.Net

Transcript

  • 1. VoxFiles: OpenStack Swift @ Voxel.Net. Leaving the Nest. Judd Maltin, [email_address]
  • 2. About Voxel
      • A profitable company composed of people who really love this stuff (even if you don't), and that's why we're good at it.
      • Founded in 1999; first to commercially support load balancing for Linux (RAILs) in 2000.
      • Own and operate an international network connecting nearly 20 Voxel Network POPs (AS29791).
      • Over 1,000 clients rely on Voxel: Fortune 5000, Web 2.0 startups, Carriers, Media & Entertainment, Advertising Networks, etc.
      • Our core competency is providing a 100% uptime SLA on your entire stack: ProManaged Hosting Infrastructure, Voxel IP Network, VoxCLOUD Hybrid Cloud and VoxCAST Content Delivery.
  • 3. VoxFiles - why?
    • Compete in the growing Object Storage marketplace
    • Great position in the Managed Hosting world
    • Other storage systems were not meeting customers' needs
        • others are unreliable, expensive, proprietary
    • Complements Voxel.Net's rich offerings:
        • ProManaged Hosting Infrastructure
        • Voxel IP Network
        • VoxCLOUD Hybrid Cloud
        • VoxCAST Content Delivery Network
  • 4. OpenStack - why?
    • Strong, active, responsive developer community
    • Open Source
    • All One Language - Python
    • Big Vision
      • many products
      • shared components
    • Great vendor involvement!  Yay, DELL!
  • 5. What's Great About VoxFiles?
    • Multi-Datacenter
    • Some Storage Zones Cross Data-Centers
    • Fully Automated Expansion of Storage
    •     (demo in a few minutes)
    • We won't run out: Big Storage Purchase pre-Thailand Flooding
  • 6. Multi-Datacenter, you say?
    • Yes.
    • All datacenters on different local power, generator backup
    • Multiple independent routes, above 500-year flood plains
  • 7. VoxFiles Zones
    • Swift does not yet have the notion of tiered zones.
    • So I made my own.
    • 3/4 of zones are only within a single datacenter
    • 1/4 of zones span datacenters
    • Rationale:
    • Zones that cross datacenters can be rebuilt more quickly after a connectivity loss.  BEWARE SPLIT BRAIN!
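The tiering described on this slide is a convention layered on top of Swift, which at the time treated zones as flat integers. A minimal Python sketch of such a layout; the datacenter names and the exact zone split are illustrative, not Voxel's actual topology:

```python
# Hypothetical tiered-zone map: Swift zones are plain integers, so the
# single-vs-cross-datacenter distinction lives in our own bookkeeping.
# Zones 1-3 each sit in one datacenter; zone 4 spans two (assumed names).
ZONE_DATACENTERS = {
    1: {"NY"},
    2: {"NY"},
    3: {"AMS"},
    4: {"NY", "AMS"},  # cross-DC zone: faster rebuilds on connectivity
                       # loss, but exposed to split-brain
}

def cross_dc_zones(layout):
    """Return the zones that span more than one datacenter."""
    return sorted(z for z, dcs in layout.items() if len(dcs) > 1)

print(cross_dc_zones(ZONE_DATACENTERS))  # -> [4]
```

With four zones this reproduces the slide's ratio: 3/4 of zones stay inside a single datacenter, 1/4 span datacenters.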
  • 8. VoxFiles - Production
    • Multi-petabyte-ready storage nodes of commodity hardware.
    • No special hardware.  Just shove drives into standard 2Us and put SSDs and lots of RAM in the proxy/account/container servers.  We don't want to be an experimental HW shop - we're growing too fast.
    • Keep ready-and-waiting storage offline (save power/cooling).
    • Bring storage online (for now) in ~30TB increments as usage exceeds 50%.
    • Rebuild rings and apply weighting slowly.
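The "apply weighting slowly" step can be sketched as a ramp schedule: each rebalance raises a new device's ring weight by a fraction of its target, bounding how many partitions move at once. A minimal sketch; the step count and the example target weight are illustrative, not Voxel's actual values:

```python
# Ramp a new device's weight toward its target in fixed fractions so
# each ring rebalance migrates only a bounded share of partitions.
def weight_schedule(target, steps=4):
    """Yield increasing weights, ending exactly at target."""
    return [round(target * (i + 1) / steps, 2) for i in range(steps)]

# e.g. a hypothetical new drive entering the ring at weight 3000:
print(weight_schedule(3000))  # -> [750.0, 1500.0, 2250.0, 3000.0]
```

In practice each step would be followed by a rebalance and a wait for replication to settle before applying the next weight.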
  • 9. VoxFiles - Environments & Automation
    • We started with an Alpha environment
      • VMs as Proxy Servers and 6 Storage Nodes
      • Alpha environment became my Dev system
    • Hacked Andy's Chef recipes to fit our custom needs
      • Setup Proxy, Storage, Account, Ring-Builder, Munin
      • Drives, Networks, OS users
      • Authentication, Dispersion Testing, Kernel Settings, Logging
    • I set up a Production Environment with Chef & RunDeck
      • RunDeck only in Prod
      • Orchestrates ring rebuilding and node addition/removal
  • 10. VoxFiles - Auto-Growth in Prod (now)
    • We are still growing our existing zones
    • Munin alerts Ops when a Zone > 50%
    • Ops adds a storage node via RunDeck:  <enter Zone #>
    • RunDeck finds a storage node <node_id> in Ubersmith(tm) inventory, and "VoxServers hybrid cloud" powers it on.
    • RunDeck does a "knife voxel server create node_id=<node_id> hostname=storage07.blah.com" and the post-install runs chef-client.  Chef sets up the node for storage.
    • RunDeck runs chef-client on ring-builder with low and increasing weights for new storage devices.
    • RunDeck watches disk IO load on each box and, if low for 5 minutes, applies the new ring with higher weights.
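The trigger-and-gate logic in this workflow boils down to two small checks: a utilization threshold that fires the Munin alert, and a low-I/O window that must pass before the heavier ring is applied. A toy Python sketch; only the 50% threshold and the 5-minute idea come from the slide, while the load ceiling and sample format are assumptions:

```python
# Utilization trigger: alert when a zone crosses 50% used (from the slide).
UTIL_THRESHOLD = 0.50

def needs_capacity(used_tb, total_tb):
    """True when the zone's utilization exceeds the alert threshold."""
    return used_tb / total_tb > UTIL_THRESHOLD

# I/O gate: the new, heavier ring is applied only after every load
# sample in the observation window (e.g. 5 minutes of readings) stays
# below an assumed ceiling.
def safe_to_apply_ring(io_samples, max_load=0.2):
    return all(s < max_load for s in io_samples)

print(needs_capacity(16, 30))                  # -> True  (>50% used)
print(safe_to_apply_ring([0.10, 0.05, 0.15]))  # -> True
print(safe_to_apply_ring([0.10, 0.50]))        # -> False (spike seen)
```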
  • 11. VoxFiles - Auto-Growth in Prod (later)
    • Once our zones are big enough(tm), we'll start adding capacity by adding zones.
    • Adding a zone poses the same challenge as adding storage to a zone: preventing replication storms.
    • In the "adding zone" case, we manage weights on the whole zone.  Otherwise, it's weights per device.
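The zone-level versus device-level distinction can be illustrated with a small helper that scales every device in one zone by a single factor, leaving other zones untouched. The device records below are hypothetical, not Swift's actual ring-builder format:

```python
# Ramp a whole new zone at once: multiply the weight of every device in
# that zone by one factor, instead of managing each device separately.
def scale_zone(devices, zone, factor):
    """Return a new device list with weights in `zone` scaled by `factor`."""
    return [
        dict(d, weight=d["weight"] * factor) if d["zone"] == zone else d
        for d in devices
    ]

devs = [
    {"id": 1, "zone": 5, "weight": 100.0},   # new zone, ramping up
    {"id": 2, "zone": 5, "weight": 100.0},
    {"id": 3, "zone": 1, "weight": 3000.0},  # established zone, untouched
]
# One step of the ramp: double the new zone's weights (100 -> 200).
print(scale_zone(devs, 5, 2.0))
```

Keeping the whole zone on one ramp keeps replication traffic into the new zone bounded, for the same reason per-device ramps bound it when growing an existing zone.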
  • 12. VoxFiles - Basic Project Plan
    • alpha 11/11  
    • beta 12/11  
    • production 1/12