MyCloud for $100k

A simple setup to build a private or public cloud.
A cloud at the IaaS layer is simply a cluster of hypervisors with some added storage infrastructure and software to orchestrate everything. In this presentation we show some straightforward Dell hardware that could be purchased to build a single rack as the basis for a private or public cloud. It totals $100k and, coupled with open source software (CloudStack, Ceph, GlusterFS, NFS, etc.), forms the foundation of your cloud.
You will get an AWS-compatible cloud in no time and at a limited acquisition cost.

Usage Rights

CC Attribution-ShareAlike License

Hardware configurations

  • Head/storage node:
    Qty 2  Intel® Xeon® E5-2630 2.30 GHz, 15M Cache, 7.2 GT/s QPI, Turbo, 6C, 95W
    Qty 8  8GB RDIMM, 1333 MT/s, Low Volt, Dual Rank (64 GB RAM)
    Qty 1  PERC H310 Integrated RAID Controller (JBOD, RAID 0/1/5/10 supported)
    Qty 12 2TB 7.2K RPM Near-Line SAS 3.5in Hot-plug Hard Drive
    Qty 2  500GB 7.2K RPM Near-Line SAS 6Gbps 2.5in Hot-plug Hard Drive, Flex Bay
    Qty 1  iDRAC7 Enterprise
    Qty 1  Broadcom 5720 QP 1Gb Network Daughter Card
    Qty 1  Dual, Hot-plug, Redundant Power Supply (1+1), 750W
  • Compute/hypervisor node:
    Qty 2  Intel® Xeon® E5-2430 2.20 GHz, 15M Cache, 7.2 GT/s QPI, Turbo, 6C, 95W
    Qty 4  8GB RDIMM, 1333 MT/s, Low Volt, Dual Rank (32 GB RAM)
    Qty 1  PERC H310 Integrated RAID Controller (JBOD, RAID 0/1/5/10 supported)
    Qty 4  1TB 7.2K RPM SATA 3.5in Hot-plug Hard Drive
    Qty 1  iDRAC7 Enterprise
    Qty 1  On-Board Dual Gigabit Network Adapter
    Qty 1  Single, Hot-plug Power Supply, 550W
  • Switch:
    Qty 1  PCT7048, 48 port Managed Switch, 1 GbE with 10Gb and Stacking capabilities
    Qty 1  10GbE Uplink Module for SFP+, supports up to 2 SFP+
    Qty 1  SFP+, PowerConnect 10GBASE-SR Multi-Mode SFP+ Optics, LC-LC

MyCloud for $100k Presentation Transcript

  • 1. My $100k Cloud. Sebastien Goasguen (Citrix), Michael Fenn (D. E. Shaw Research), Oct 1st 2012
  • 2. Goal
    • Build a rack that can act as a private/public cloud
    • IaaS implementation from hardware to software
    • Entry-level system for SME / academic research / POC
    • Main capability: provision/manage virtual machines on-demand, AWS compliant (see the EC2-client sketch after the transcript)
  • 3. Assumptions
    • There is a machine room to put this rack in
    • We choose Dell as a vendor for no other reason than familiarity and our hope that we can get a 33% discount on the list price
    • We are going to use CloudStack as the cloud platform solution
    • And use other open source software for configuration, storage and monitoring
  • 4. Head node
    Head + storage node: Dell R720xd (2U)
    2x Intel Xeon E5-2650 2.30 GHz
    8x 8 GB RDIMM (64 GB RAM)
    12x 2TB NL-SAS Hot Plug (24 TB)
    Quad-port Broadcom 5720 1Gb
    Dual Hot-Plug Redundant Power Supply
    Per node cost w/ discount: $9,500
  • 5. Compute/Hypervisor Node
    Compute node: Dell R420 (1U)
    2x Intel Xeon E5-2430 2.20 GHz
    4x 8GB RDIMM (32 GB RAM)
    4x 1TB SATA Hot-Plug
    On-Board Dual Gigabit Network
    Per node cost w/ discount: $3,500
  • 6. Switch
    Networking: Dell PowerConnect 7048
    48-port Managed Switch, 1 GbE with 10 GbE and stacking capabilities
    1x 10 GbE Uplink Module
    Per switch cost w/ discount: $5,000
  • 7. Rack and PDUs
    • Standard air-cooled rack. The Dell 4220 rack would be a good choice.
    • The whole solution should draw around 6kW, so an 8kW UPS would be a good fit. APC has one called the Smart-UPS RT 10000.
  • 8. Total Budget (re-derived in the budget sketch after the transcript)
    • Networking (1 unit) = $5,000
    • Head node (1 unit) = $9,500
    • Compute nodes (21 units) = $73,500
    • Rack + power infrastructure = $10,000
    • Total: $98,000
    • Total: 264 cores, 736 GB of RAM, and 109 TB of storage, in 25U
  • 9. Software setup
    • OS: RHEL-like; since we used to work in High Energy Physics we will choose Scientific Linux 6.3. Not officially supported by CloudStack, but it does work.
    • Hypervisor: KVM or Xen depending on local expertise
    • Cloud Platform: Apache CloudStack (see the signed API request sketch after the transcript)
  • 10. Software setup
    • Storage: NFS for the image store for ease of setup. GlusterFS for primary storage, or a local mount point, depending on expertise.
    • Configuration management: Puppet or Chef
    • Monitoring: Zenoss Core with the CloudStack ZenPack
  • 11. CloudStack History
    • Original company VMOps (2008)
    • Open sourced (GPLv3) as CloudStack
    • Acquired by Citrix (July 2011)
    • Relicensed under ASL v2 on April 3, 2012
    • Accepted as an Apache Incubating Project on April 16, 2012 (http://www.cloudstack.org)
    • First Apache release (ACS 4.0) coming really soon!
  • 12. Multiple Contributors
    • SunGard: seven developers have joined the incubating project
    • Schuberg Philis: big contribution to building/packaging and Nicira support
    • Go Daddy: Maven builds
    • Caringo: support for their own object store
    • Basho: support for RiakCS
  • 13. Terminology (modeled in the data-structure sketch after the transcript)
    • Zone: availability zone, akin to a region; could be spread worldwide across different data centers
    • Pod: a rack or aisle in a data center
    • Cluster: group of machines with a common type of hypervisor
    • Host: a single server
    • Primary Storage: shared storage across a cluster
    • Secondary Storage: shared storage in a single Zone
  • 14. “Logical” CS deployment
    • Farm of hypervisors. Primary storage available “cluster”-wide for running VMs
    • Separate secondary storage to store VM images and data volumes
  • 15. Our deployment
  • 16. Economy (see the payback sketch after the transcript)
    • We have 252 cores of hypervisors
    • If we consider overprovisioning of 2 VMs per core, full capacity is 504 VMs
    • At $0.10 per hour for small instances, we need 1M VM-hours to get back our $100k
    • 1M / (504 * 24) ≈ 83
    • 83 days to recover the capital investment
  • 17. Optional Setup
    • Dual GigE cards allow us to do NIC bonding
    • Or to create a separate management network or storage network if need be
    • A first deployment should use CloudStack security groups (to avoid having to configure VLANs on the switch). A second deployment could try to use VLANs.
    • Run an OpenFlow controller on the head node and experiment with SDN, using Open vSwitch on the nodes
  • 18. Possible Expansion with more $$
    • Fill the rack with nodes to be used as hypervisors (no change to the software setup, just add hosts in CloudStack)
    • Fill the rack with GPU nodes for HPC (add hosts in CloudStack using the bare-metal component, PXE/IPMI)
    • Fill the rack with storage nodes set up as a Hadoop cluster on bare metal
    • Fill the rack with SSD-based storage nodes
  • 19. “Bare Metal” Hybrid deployment
    • Hypervisor cluster, plus a bare-metal cluster with specialized hardware (e.g. GPUs) or software (Hadoop)
  • 20. Info
    • Apache incubator project
    • http://www.cloudstack.org
    • #cloudstack on irc.freenode.net
    • @cloudstack on Twitter
    • http://www.slideshare.net/cloudstack
    • http://cloudstack.org/discuss/mailing-lists.html
    • Welcoming contributions and feedback. Join the fun!
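
Slide 2 calls the cloud AWS compliant. As a rough illustration of what that means for a user, here is a minimal client-side sketch that talks to the cloud through an EC2-style endpoint with boto3. Everything specific in it is an assumption rather than something from the deck: the endpoint URL, credentials, template ID and instance type are placeholders, and it presumes CloudStack's EC2-compatibility service is enabled on the head node.

```python
# Minimal client-side sketch: launching a VM through an EC2-compatible endpoint.
# Assumptions (not from the slides): the CloudStack EC2-compatibility service is
# enabled and reachable at ENDPOINT; keys, template ID and instance type are placeholders.
import boto3

ENDPOINT = "http://head-node.example.com:7080/awsapi"  # hypothetical URL

ec2 = boto3.client(
    "ec2",
    endpoint_url=ENDPOINT,
    region_name="mycloud",                # arbitrary label; the endpoint does the routing
    aws_access_key_id="API_KEY",          # CloudStack user API key (placeholder)
    aws_secret_access_key="SECRET_KEY",   # CloudStack user secret key (placeholder)
)

# Ask for one small instance from a registered template (placeholder IDs).
resp = ec2.run_instances(ImageId="ami-placeholder", InstanceType="m1.small",
                         MinCount=1, MaxCount=1)
print(resp["Instances"][0]["InstanceId"])
```

Whether a given CloudStack release accepts these exact parameters depends on its compatibility layer; the point is only that a standard AWS client can be pointed at the private endpoint instead of at Amazon.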
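
The slide 8 totals can be re-derived from the per-unit prices on slides 4 to 7 and the detailed spec list above. The snippet below only re-does that arithmetic; the 6-core CPU count and the roughly 1 TB of flex-bay disk in the head node come from the spec list, which is what makes the 264-core and 109 TB figures work out.

```python
# Re-derive the slide 8 budget and capacity totals from the per-unit figures.
compute_nodes = 21

cost = 5_000 + 9_500 + compute_nodes * 3_500 + 10_000   # switch + head node + compute + rack/power
cores = 2 * 6 + compute_nodes * 2 * 6                   # two 6-core CPUs per node, head + compute
ram_gb = 64 + compute_nodes * 32                        # 64 GB head node + 32 GB per compute node
storage_tb = 24 + 1 + compute_nodes * 4                 # 24 TB head + 2x500GB flex bay + 4 TB per compute node

print(f"cost=${cost:,}  cores={cores}  RAM={ram_gb} GB  storage={storage_tb} TB")
# -> cost=$98,000  cores=264  RAM=736 GB  storage=109 TB, matching slide 8
```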
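
Slides 9 and 10 pick Apache CloudStack as the platform. Once the management server runs on the head node, operations such as adding hosts and storage or deploying VMs go through its query API. The sketch below shows one way a signed request could look, assuming the usual CloudStack signing scheme (sorted, lower-cased query string, HMAC-SHA1, base64); the management-server URL and the key pair are placeholders.

```python
# Sketch: a signed request against the native CloudStack API running on the head node.
# Assumptions (not from the slides): URL and keys are placeholders; signing follows the
# usual CloudStack scheme (sort parameters, lower-case the query string, HMAC-SHA1, base64).
import base64
import hashlib
import hmac
import urllib.parse

import requests

API_URL = "http://head-node.example.com:8080/client/api"  # hypothetical management server
API_KEY = "API_KEY"          # placeholder user API key
SECRET_KEY = "SECRET_KEY"    # placeholder user secret key


def cloudstack(command, **params):
    """Send a signed GET request to the CloudStack API and return the parsed JSON."""
    params.update({"command": command, "apiKey": API_KEY, "response": "json"})
    # String to sign: key=value pairs sorted by key, values URL-encoded, all lower-cased.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(API_URL, params=params).json()


# Example: list the hypervisor hosts the management server knows about.
print(cloudstack("listHosts"))
```

The same helper could be used for calls such as deployVirtualMachine once templates and offerings exist; in practice a client library or the CloudMonkey CLI handles this signing for you.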
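
Slide 13's terminology is a strict containment hierarchy: a zone holds pods, a pod holds clusters, a cluster holds hosts, with primary storage shared per cluster and secondary storage shared per zone. The sketch below just encodes that shape for the single-rack deployment in this deck; the names and storage URLs are illustrative only.

```python
# Illustrative model of the slide 13 terminology:
# Zone > Pod > Cluster > Host, primary storage per cluster, secondary storage per zone.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Host:
    name: str                      # a single server, e.g. one Dell R420


@dataclass
class Cluster:
    hypervisor: str                # all hosts in a cluster share one hypervisor type
    primary_storage: str           # shared storage across the cluster (e.g. GlusterFS or NFS)
    hosts: List[Host] = field(default_factory=list)


@dataclass
class Pod:
    name: str                      # a rack (or aisle) in the data center
    clusters: List[Cluster] = field(default_factory=list)


@dataclass
class Zone:
    name: str                      # an availability zone / data center
    secondary_storage: str         # shared image store for the whole zone (e.g. NFS)
    pods: List[Pod] = field(default_factory=list)


# The single-rack cloud from this deck: one zone, one pod, one KVM cluster, 21 hosts.
rack = Zone(
    name="zone1",
    secondary_storage="nfs://head-node/secondary",
    pods=[Pod(name="rack1", clusters=[
        Cluster(hypervisor="KVM",
                primary_storage="glusterfs://head-node/primary",
                hosts=[Host(f"compute-{i:02d}") for i in range(1, 22)]),
    ])],
)
print(len(rack.pods[0].clusters[0].hosts), "hosts in", rack.name)
```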
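
Slide 16's payback estimate is straightforward to re-check; the snippet below re-does the arithmetic with the 504-VM capacity figure from the same slide and assumes, as the slide does, that every VM is billed around the clock.

```python
# Re-derive the slide 16 payback estimate.
cores = 252              # hypervisor cores (21 compute nodes x 12 cores)
vms = cores * 2          # 2 small VMs per core (overprovisioning) -> 504 VMs
price_per_hour = 0.10    # $ per small-instance hour
capital = 100_000        # $ acquisition cost

vm_hours_needed = capital / price_per_hour   # 1,000,000 VM-hours
days = vm_hours_needed / (vms * 24)          # all 504 VMs busy around the clock
print(f"{days:.0f} days to recover the capital investment")   # ~83 days
```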