My $100k Cloud

   Sebastien Goasguen - Citrix
Michael Fenn - D. E. Shaw Research
          Oct 1st 2012
Goal
• Build a rack that can act as a private/public cloud
• IaaS implementation from hardware to software
• Entry-level system for SMEs / academic research /
  proof of concept
• Main capability: provision and manage virtual
  machines on demand, with an AWS-compatible API
  (a provisioning sketch follows below)
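
As a sketch of what "provision on demand" means in practice, here is a minimal Python example of a signed CloudStack API call that deploys a VM. The endpoint, keys, and UUIDs are placeholders to fill in from your own install; the request-signing scheme (sorted, lowercased query string, HMAC-SHA1, base64) is the one CloudStack documents.

    import base64
    import hashlib
    import hmac
    import urllib.parse

    # Placeholders: take these from your own CloudStack install.
    ENDPOINT = "http://head-node:8080/client/api"
    API_KEY = "YOUR-API-KEY"
    SECRET = "YOUR-SECRET-KEY"

    def signed_url(command, **params):
        # CloudStack request signing: sort the parameters, URL-encode
        # the values, lowercase the whole query string, HMAC-SHA1 it
        # with the secret key, then base64- and URL-encode the digest.
        params.update({"apikey": API_KEY, "command": command,
                       "response": "json"})
        query = "&".join(f"{k}={urllib.parse.quote(str(v), safe='')}"
                         for k, v in sorted(params.items()))
        digest = hmac.new(SECRET.encode(), query.lower().encode(),
                          hashlib.sha1).digest()
        sig = urllib.parse.quote(base64.b64encode(digest), safe="")
        return f"{ENDPOINT}?{query}&signature={sig}"

    # The three UUIDs are hypothetical; look them up first with
    # listServiceOfferings, listTemplates and listZones.
    print(signed_url("deployVirtualMachine",
                     serviceofferingid="<offering-uuid>",
                     templateid="<template-uuid>",
                     zoneid="<zone-uuid>"))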
Assumptions
• There is a machine room to put this rack in
• We chose Dell as the vendor for no reason other
  than familiarity and the hope that we can get a
  33% discount on the list price
• We are going to use CloudStack as the cloud
  platform solution,
• and other open source software for
  configuration management, storage and monitoring.
Head node
Head + storage node: Dell R720xd
(2U)

2x Intel Xeon E5-2630 2.30 GHz
8x 8 GB RDIMM (64 GB RAM)
12x 2 TB NL-SAS hot-plug (24 TB)
Quad-port Broadcom 5720 1 GbE
Dual hot-plug redundant power
supply

Per-node cost w/ discount: $9,500
Compute/Hypervisor Node
Compute node: Dell R420 (1U)

2x Intel Xeon E5-2430 2.20 GHz
4x 8 GB RDIMM (32 GB RAM)
4x 1 TB SATA hot-plug
On-board dual Gigabit network

Per-node cost w/ discount: $3,500
Switch
Networking: Dell
PowerConnect 7048

48-port managed switch, 1 GbE,
with 10 GbE and stacking
capabilities

1x 10 GbE uplink module

Per-switch cost w/ discount:
$5,000
Rack and PDUs
• Standard air-cooled rack. The
  Dell 4220 rack would be a good
  choice.

• The whole solution should draw
  around 6 kW (a rough estimate
  follows below), so an 8 kW UPS
  would be a good fit. APC has
  one called the Smart-UPS RT
  10000.
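
A rough back-of-the-envelope check on the 6 kW figure; the per-device draws below are assumptions, not measurements.

    # Rough power budget; wattages are assumed typical draws under load.
    compute_nodes = 21
    watts_compute = 250   # assumed per R420
    watts_head = 350      # assumed for the R720xd with 12 disks
    watts_switch = 150    # assumed for the PowerConnect 7048

    total_w = compute_nodes * watts_compute + watts_head + watts_switch
    print(f"Estimated draw: {total_w / 1000:.1f} kW")  # ~5.8 kW, near 6 kW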
Total Budget
•   Networking (1 unit) = $5,000
•   Head node (1 unit) = $9,500
•   Compute nodes (21 units) = $73,500
•   Rack + power infrastructure = $10,000

• Total cost: $98,000
• Total capacity: 264 cores, 736 GB of RAM, and
  109 TB of storage, in 25U of rack space (the
  arithmetic is checked below)
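
A quick sanity check of these figures; the 12 cores per box follow the 6-core CPUs listed in the editor's notes BOM.

    # Sanity check of the bill of materials above; prices are the
    # discounted figures, and every box has 2x 6-core CPUs (E5-2630
    # in the head node, E5-2430 in the compute nodes).
    n_compute = 21
    cost = 5_000 + 9_500 + n_compute * 3_500 + 10_000
    cores = (n_compute + 1) * 12
    ram_gb = n_compute * 32 + 64
    storage_tb = n_compute * 4 + 12 * 2 + 1  # compute disks + head array + flex bay

    print(f"${cost:,}")                                 # $98,000
    print(cores, "cores,", ram_gb, "GB,", storage_tb, "TB")  # 264, 736, 109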
Software setup
• OS: RHEL-like, and since we used to work in High
  Energy Physics we chose Scientific Linux 6.3.
  Not officially supported by CloudStack, but it works.

• Hypervisor: KVM or Xen, depending on local
  expertise
• Cloud Platform: Apache CloudStack
Software setup
• Storage: NFS for the image store, for ease of
  setup. GlusterFS or a local mount point for
  primary storage, depending on expertise.

• Configuration management: Puppet or Chef
• Monitoring: Zenoss Core with the CloudStack ZenPack
CloudStack History
• Original company: VMOps (2008)
• Open-sourced (GPLv3) as CloudStack
• Acquired by Citrix (July 2011)
• Relicensed under ASL v2 April 3, 2012
• Accepted as Apache Incubating Project April
  16, 2012 (http://www.cloudstack.org)
• First Apache release (ACS 4.0) coming really soon!
Multiple Contributors
• SunGard: seven developers have joined
  the incubating project
• Schuberg Philis: big contributions to
  building/packaging and to Nicira support
• Go Daddy: Maven build system
• Caringo: support for its own object store
• Basho: support for Riak CS
Terminology
• Zone: availability zone, aka region; different
  data centers, possibly worldwide
• Pod: a rack or aisle in a data center
• Cluster: a group of machines with a common
  hypervisor type
• Host: a single server
• Primary Storage: shared storage across a cluster
• Secondary Storage: shared storage in a single
  zone
(the containment hierarchy is sketched below)
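
The containment is strict: zones hold pods, pods hold clusters, clusters hold hosts. A toy Python model of the hierarchy, illustrative only (not CloudStack's actual schema):

    from dataclasses import dataclass, field

    # Illustrative containment only; CloudStack's real schema is richer.
    @dataclass
    class Host:
        name: str

    @dataclass
    class Cluster:
        hypervisor: str                # one hypervisor type per cluster
        hosts: list[Host] = field(default_factory=list)

    @dataclass
    class Pod:                         # a rack or aisle
        clusters: list[Cluster] = field(default_factory=list)

    @dataclass
    class Zone:                        # a data center / availability zone
        pods: list[Pod] = field(default_factory=list)

    # Our single rack: one zone, one pod, one KVM cluster, 21 hosts.
    rack = Zone(pods=[Pod(clusters=[
        Cluster(hypervisor="KVM",
                hosts=[Host(f"compute{i:02d}") for i in range(1, 22)])
    ])])
    print(sum(len(c.hosts) for p in rack.pods for c in p.clusters))  # 21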
“Logical” CS deployment
• A farm of hypervisors, with primary storage
  available cluster-wide for the running VMs
• Separate secondary storage holding VM images
  and data volumes
Our deployment
Economy
• We have 252 hypervisor cores
• If we consider overprovisioning of 2 VMs per
  core, full capacity is 504 VMs.
• At $0.10 per hour for a small instance, we need
  to sell 1M VM-hours to get back our $100k.
• 1M / (504 × 24) ≈ 83

• 83 days at full occupancy to recover the capital
  investment (worked out below)
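
Spelled out, with the caveat that this assumes every VM-hour is sold:

    # Payback arithmetic; assumes 100% occupancy and that every
    # VM-hour is billed at the small-instance rate.
    vms = 252 * 2              # 2 VMs per core on 252 cores -> 504 VMs
    rate = 0.10                # $ per VM-hour
    capex = 100_000            # rounding the $98k build up to $100k

    vm_hours = capex / rate            # 1,000,000 VM-hours to sell
    days = vm_hours / (vms * 24)
    print(f"{days:.1f} days")          # ~82.7, i.e. about 83 days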
Optional Setup
• The dual GigE cards allow us to do NIC bonding
  (a config sketch follows below),
• or to create a separate management network
  or storage network if need be.
• A first deployment should use CloudStack
  security groups (to avoid having to configure
  VLANs on the switch); a second deployment
  could try VLANs.
• Run an OpenFlow controller on the head node
  and experiment with SDN, using Open vSwitch
  on the nodes.
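
Since Puppet or Chef is already in the stack, here is a minimal Python sketch of the RHEL 6-style ifcfg files a config-management tool could template out for bonding. The interface names, address, netmask and bond mode are assumptions, not part of the original design.

    # Sketch: render RHEL 6-style bonding configs (the kind of files
    # Puppet/Chef would template). Names, IP and mode are assumptions.
    BOND = """DEVICE=bond0
    IPADDR={ip}
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    """

    SLAVE = """DEVICE={nic}
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    """

    def render(ip, nics=("eth0", "eth1")):
        files = {"ifcfg-bond0": BOND.format(ip=ip)}
        for nic in nics:
            files[f"ifcfg-{nic}"] = SLAVE.format(nic=nic)
        return files  # write these under /etc/sysconfig/network-scripts/

    for name, body in render("10.0.0.11").items():
        print(f"# {name}\n{body}")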
Possible Expansion with more $$
• Fill the rack with nodes to be used as
  hypervisors (no change to the software setup,
  just add hosts in CloudStack).
• Fill the rack with GPU nodes for HPC (add
  hosts in CloudStack using the bare-metal
  PXE/IPMI component).
• Fill the rack with storage nodes set up as a
  Hadoop cluster on bare metal.
• Fill the rack with SSD-based storage nodes.
“Bare Metal” Hybrid deployment
• A hypervisor cluster alongside a bare-metal
  cluster with specialized hardware (e.g. GPUs)
  or software (Hadoop).
Info
•   Apache incubator project
•   http://www.cloudstack.org
•   #cloudstack on irc.freenode.net
•   @cloudstack on Twitter
•   http://www.slideshare.net/cloudstack
•   http://cloudstack.org/discuss/mailing-lists.html

Contributions and feedback are welcome. Join the fun!


Editor's Notes

• #5 (head node BOM):
  Qty 2: Intel® Xeon® E5-2630 2.30 GHz, 15M cache, 7.2 GT/s QPI, Turbo, 6C, 95W
  Qty 8: 8 GB RDIMM, 1333 MT/s, low volt, dual rank (64 GB RAM)
  Qty 1: PERC H310 integrated RAID controller (JBOD, RAID 0/1/5/10 supported)
  Qty 12: 2 TB 7.2K RPM Near-Line SAS 3.5in hot-plug hard drive
  Qty 2: 500 GB 7.2K RPM Near-Line SAS 6 Gbps 2.5in hot-plug hard drive, flex bay
  Qty 1: iDRAC7 Enterprise
  Qty 1: Broadcom 5720 QP 1 Gb network daughter card
  Qty 1: Dual, hot-plug, redundant power supply (1+1), 750W
• #6 (compute node BOM):
  Qty 2: Intel® Xeon® E5-2430 2.20 GHz, 15M cache, 7.2 GT/s QPI, Turbo, 6C, 95W
  Qty 4: 8 GB RDIMM, 1333 MT/s, low volt, dual rank (32 GB RAM)
  Qty 1: PERC H310 integrated RAID controller (JBOD, RAID 0/1/5/10 supported)
  Qty 4: 1 TB 7.2K RPM SATA 3.5in hot-plug hard drive
  Qty 1: iDRAC7 Enterprise
  Qty 1: On-board dual Gigabit network adapter
  Qty 1: Single, hot-plug power supply, 550W
• #7 (switch BOM):
  Qty 1: PCT7048, 48-port managed switch, 1 GbE with 10 Gb and stacking capabilities
  Qty 1: 10 GbE uplink module for SFP+, supports up to 2 SFP+
  Qty 1: SFP+, PowerConnect 10GBASE-SR multi-mode SFP+ optics, LC-LC