CloudStack Overview


CloudStack architecture overview, covering the management server, networking, storage, and the roadmap.

  • CloudStack works within multiple enterprise strategies and mandates, and supports multiple cloud strategies from a provider perspective. As an initial step beyond traditional server virtualization, many organizations look to private cloud implementations as a means to gain flexibility while still retaining control over service delivery. The private cloud may be hosted by the IT organization itself or sourced from a managed service provider, but the net goals of total control and security without compromised SLAs are achieved. For some organizations, the managed service model is stepped up one level, with all resources sourced from a hosted solution; SLA guarantees and security concerns often dictate the types of providers an enterprise will consider. At the far end of the spectrum are public cloud providers with pay-as-you-go pricing and elastic scaling. Since public clouds often abstract away details such as network topology, a hybrid cloud strategy allows IT to retain control over key aspects of its operations, such as data, while leveraging the benefits of elastic public cloud capacity.
  • The core components of a CloudStack implementation are:
    – Hosts – servers from at least one of the supported virtualization providers. CloudStack fully supports hosts from multiple providers, but does not convert VM images from one hypervisor type to another. Depending on the hypervisor, a "host" may be a higher-level concept; for example, with XenServer a CloudStack "host" is equivalent to a XenServer resource pool, and the "host" entry is the pool master.
    – Primary Storage – the hypervisor-level storage containing the deployed VM storage. Primary storage options vary by hypervisor, and depending on the hypervisor selected, CloudStack may impose requirements upon it.
    – Cluster – host groups are combined into clusters, which contain the primary storage options for the cluster. Primary storage isn't shared outside of a cluster. A CloudStack cluster does not in itself imply modification of any clustering concept within the hypervisor; for example, with XenServer a resource pool is a host to CloudStack, and CloudStack does not create a superset of cluster functionality for XenServer.
    – Pod – host groups are combined first into clusters and then into pods. For many customers, a pod represents a high-level physical concept such as a server rack.
    – Network – the logical and physical network associated with service offerings. Multiple concurrent network service offerings and topologies can be supported within CloudStack.
    – Secondary Storage – the storage system used for template and ISO management. It is also where snapshot events occur.
    – Zone – a collection of pods forming some level of service availability. While Amazon EC2 defines an availability zone as a data center, CloudStack keeps the concept more abstract, allowing cloud operators to have multiple availability zones within a given data center.
    – Management Server Farm – a grouping of CentOS/RHEL CloudStack servers forming a web farm, with an underlying MySQL cluster database. The management server farm can manage multiple zones and can be virtualized.
  • Primary storage is used for all active VM storage, both root and data disks. This storage is local to the CloudStack pod and is directly available to the hypervisor hosts in the pod. The two universally supported connection methods are NFS and iSCSI, and CloudStack manages these connections. Options also exist for FC and local storage, but these vary by hypervisor type. New for CloudStack 3.0 is OpenStack Swift integration. Secondary storage is used for all template, ISO and volume-snapshot activities. This storage is local to each CloudStack availability zone and is accessed through the CloudStack secondary storage server; this system VM connects to the underlying secondary storage device using NFS. Templates and ISOs are imported into CloudStack secondary storage through the storage system VM; the import process is over HTTP. ISOs can be marked as bootable, and templates must be of a file type that matches hypervisors within the zone. CloudStack won't convert a template from one hypervisor disk format to another.
  • When a user requests a VM instance, several steps are performed (a sketch of the corresponding API request follows this note). The user logs in, selects the desired availability zone for the instance, and then selects the desired template from the list of templates available to them; this triggers the provisioning process. Depending on the instance and zone requirements, optional network services such as routing, DHCP and load balancing are provisioned for the zone. If these services are already provisioned and can be shared by the user, shared instances are used; otherwise isolated instances of the network services are created. The template representing the root disk of the VM is copied from the zone's secondary storage to the cluster's primary storage. CloudStack attempts to localize an account's services to as few clusters as possible, partly for security reasons and partly to ensure optimal performance for provisioned services. If the instance requires any data volumes, they are created on primary storage for the cluster. Note that the storage preferences for the root volume and data volumes may differ, resulting in the volumes occupying different primary storage devices within a given cluster; for example, data disks may have attributes which place them on a primary storage device that is continuously backed up, while the root volume might be located on local storage. CloudStack then instructs the host to create and start the instance VM.
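As a concrete illustration of the trigger described above, the minimal Java sketch below shows roughly how an API client could assemble a signed deployVirtualMachine request with the zone, template and service offering the user chose. The endpoint, API key, secret key and UUID values are placeholders and error handling is omitted; the signing steps (sorted parameters, URL-encoded values, lower-cased string, HMAC-SHA1 with the secret key, Base64) broadly follow the documented CloudStack API signing approach.

    import java.net.URLEncoder;
    import java.util.*;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class DeployVmExample {
        // Illustrative values only; endpoint, keys and UUIDs are placeholders.
        static final String ENDPOINT = "http://mgmt-server:8080/client/api";
        static final String API_KEY = "your-api-key";
        static final String SECRET_KEY = "your-secret-key";

        public static void main(String[] args) throws Exception {
            // Parameters for a deployVirtualMachine call: the zone, template and
            // service offering selected by the user, as described in the note above.
            SortedMap<String, String> params = new TreeMap<>();
            params.put("command", "deployVirtualMachine");
            params.put("zoneid", "ZONE-UUID");
            params.put("templateid", "TEMPLATE-UUID");
            params.put("serviceofferingid", "OFFERING-UUID");
            params.put("response", "json");
            params.put("apikey", API_KEY);

            StringBuilder query = new StringBuilder();
            StringBuilder toSign = new StringBuilder();
            for (Map.Entry<String, String> e : params.entrySet()) {
                String pair = e.getKey() + "=" + URLEncoder.encode(e.getValue(), "UTF-8");
                if (query.length() > 0) { query.append("&"); toSign.append("&"); }
                query.append(pair);
                toSign.append(pair.toLowerCase());   // signature is computed over the sorted, lower-cased query
            }

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SECRET_KEY.getBytes("UTF-8"), "HmacSHA1"));
            String signature = Base64.getEncoder()
                    .encodeToString(mac.doFinal(toSign.toString().getBytes("UTF-8")));

            String url = ENDPOINT + "?" + query + "&signature=" + URLEncoder.encode(signature, "UTF-8");
            System.out.println(url);  // issue this GET; the JSON response carries a job id to poll
        }
    }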
  • When using XenServer, you will first add the XenServer pool master to CloudStack as a host, and CloudStack will transparently add all slave hosts to CloudStack.
  • Oracle VM (OVM) notes:
    – Limitations: no snapshots, because OVM uses the raw format for volumes; no system VMs, because OVM won't support Debian guests; a helper cluster (XenServer/KVM/VMware) is required.
    – Advantage: Oracle provides many templates with Oracle DB frameworks and applications built in, so customers can quickly deploy Oracle services.
    – Create template: create a template from the root volume of a VM.
    – Start system VM: add a helper cluster (XenServer/KVM/VMware) before creating any OVM VM; the domain router is automatically created in the helper cluster when the first OVM instance is created.
    – No OVM Manager and CloudStack mixed: ovs-agent stores data in a local database on the host.
    – Supported OS types: all Linux/Solaris templates must come from the Oracle site; Windows can be installed from ISO.
    – Oracle Cluster File System (OCFS2): Oracle's recommended solution when using iSCSI. User responsibility: set up the iSCSI device on every host and create the OCFS2 file system on every device. CloudStack responsibility: configure every OCFS2 node and add/remove nodes on demand.
  • For KVM, support is limited to RHEL 6-based KVM and Ubuntu 10.04. No other flavors of KVM are supported, including RHEV.
  • VMware notes:
    – vCenter cluster/host: a vCenter cluster is mapped directly to a CloudStack cluster under a pod, and a vCenter cluster used by CloudStack can belong to only one vCenter datacenter. Why? The vCenter datastores and vSwitches used by a vCenter cluster are scoped to the vCenter datacenter, so sharing vCenter datacenter resources outside of CloudStack would be problematic.
    – System VM bootstrap: the first generation is done by the CloudStack management server; second and later generations are done through a running SSVM.
    – SSVM (Secondary Storage VM): used for template processing and for VMware volume/snapshot/template operations.
    – Command delegation: a system VM is an extension of the CloudStack management server; a resource manager can run in the context of a system VM, with command delegation handled in the CloudStack management server.
    – Snapshots: a CloudStack snapshot is taken per volume, while a vCenter snapshot is taken per VM. To fill the gap: take a VM snapshot (if it is for a detached volume in CloudStack, create a worker VM); parse the VM snapshot metadata and build up disk-chain information per volume; create an intermediate VM on top of a selected disk chain; export the VM (full backup) to secondary storage; then clean up.
    – vCenter vSwitch: vSwitch setup and NIC bonding are done through vCenter. CloudStack creates networks (port groups) dynamically and propagates them across the cluster. Why? To support independent VM live migration both in CloudStack and in vCenter. The default number of vSwitch ports is usually not enough and needs to be extended.
  • Network isolation (Security Group, L3)
  • To Alex: what is the point of this slide?
  • External devices: why are they not sequenced?
  • Network Offerings: the administrator starts by deciding the network offerings they want to provide throughout their entire cloud. Network offerings group together a set of network services such as firewall, DHCP, DNS, etc. Network offerings allow specific network service providers to be specified, and can be tagged to specifically choose the underlying physical network. Network offerings have the following states: Disabled, Enabled, Inactive. All network offerings are created in the Disabled state and must then be configured to the correct state. Certain network offerings are for use by the system only, meaning end users cannot see them. Network offerings can be updated to enable or disable services and providers; once that is done, it is up to the administrator to reprogram all of the networks that are based on that network offering. Network offering tags cannot be updated; however, the tags on the physical networks can be updated and deleted. CloudStack ships with three default network offerings for end users: a virtual network offering, a shared network offering without security groups, and a shared network offering with security groups.
  • For latest information: http://docs.cloud.com/Knowledge_Base/Domain_Router_Security
  • To Alex: does the MS interact with OVM using XenApi?

Transcript

  • 1. CloudStack Overview. Written by: Chiradeep Vittal, Alex Huang @ Citrix. Revised by: Gavin Lee, Zhennan Sun @ TCloud Computing
  • 2. Outline • Overview of CloudStack • Problem Definition • Feature set overview • Network • Storage • MS internals • System VMs • System Interactions • Roadmap • Comparisons
  • 3. What is CloudStack? • Multi-tenant cloud orchestration platform – Turnkey – Hypervisor agnostic – Scalable – Secure – Open source, open standards – Deploys on premise or as a hosted solution – BSS, self-service portal (not ASL) – Extensive networking services • Deliver cloud services faster and cheaper ("Build your cloud the way the world's most successful clouds are built")
  • 4. CloudStack Supports Multiple Cloud Strategies. Private clouds: an on-premise enterprise cloud (dedicated resources, security and total control, internal network, managed by the enterprise or a 3rd party) or a hosted enterprise cloud (dedicated resources, security, SLA bound, 3rd-party owned and operated). Public clouds: a multi-tenant public cloud (mix of shared and dedicated resources, elastic scaling, pay as you go, access over the public internet or VPN).
  • 5. CloudStack Provides On-demand Access to Infrastructure Through a Self-Service Portal [diagram: end users and admins from Org A and Org B provisioning compute, network and storage through the portal]
  • 6. Open Flexible Platform. Compute (hypervisors): XenServer, VMware, Oracle VM, KVM, bare metal. Storage (block and object, primary and secondary): local disk, iSCSI, Fibre Channel, NFS, Swift. Network (network and network services): connection type, isolation, load balancer, firewall, VPN.
  • 7. Problem Definition• Offer a scalable, flexible, manageable IAAS platform that follows established cloud computing paradigms• IAAS – Orchestrate physical and virtual resources to offer self-service infrastructure provisioning and monitoring• Scalable – 1 -> N hypervisors / VMs / virtual resources – 1 -> N end users• Flexible – Handle new physical resource types • Hypervisors, storage, networking – Add new APIs – Add new services – Add new network models
  • 8. Problem Definition (contd)• Manageable – Hide complexity of underlying resources – Rich functional end-user and admin UI – Admin API to automate operations – Easy install, upgrade for small -> large clouds – Simple scaling, automated resilience• Established Paradigms – EC2 –inspired • Semantic variations based on cloud provider needs, hypervisor capabilities
  • 9. Feature Set Overview
  • 10. Create Custom Virtual Machines via Service Offerings Select Operating System • Windows, Linux Select Compute Offering • CPU & RAM Select Disk Offering • Volume Size Select Network Offering • Network & Services Create VM
  • 11. Dashboard Provides Overview of Consumed Resources• Running, Stopped & Total VMs• Public IPs• Private networks• Latest Events
  • 12. Virtual Machine Management. VM operations: start, stop, restart, destroy. Users get direct VM access. VM status: CPU utilized, network reads, network writes. Change service offering: e.g. 2 CPUs to 4 CPUs, 1 GB RAM to 4 GB RAM, 20 GB to 200 GB, 20 Mbps to 100 Mbps.
  • 13. Volume & Snapshot Management: add/delete volumes; create templates from volumes; take snapshots now or on a schedule (hourly, daily, weekly, monthly, …); view snapshot history.
  • 14. Network & Network Services• Create Networks and attach VMs• Acquire public IP address for NAT & load balancing• Control traffic to VM using ingress and egress firewall rules• Set up rules to load balance traffic between VMs
  • 15. Core CloudStack Components • Hosts: servers onto which services will be provisioned • Primary Storage: VM storage • Cluster: a grouping of hosts and their associated storage • Pod: collection of clusters • Network: within the same L2 switch • Secondary Storage: template, snapshot and ISO storage • Zone: collection of pods, network offerings and secondary storage • Management Server Farm: responsible for all management and provisioning tasks
  • 16. CloudStack Deployment Architecture: the hypervisor is the basic unit of CloudStack scale. A cluster consists of one or more hosts of the same hypervisor, and all hosts in a cluster have access to shared (primary) storage. A pod is one or more clusters, usually behind L2 switches. An availability zone has one or more pods and has access to secondary storage. One or more zones represent a cloud. [diagram: Internet, management server, zone with L3 core and access layer, pods containing clusters of hosts with primary storage, zone-level secondary storage]
  • 17. CloudStack Cloud Architecture: a CloudStack cloud can have one or more Availability Zones (AZs), spread across one or more data centers. [diagram: data centers each hosting one or more zones]
  • 18. Management Server Managing Multiple Zones: a single management server can manage multiple zones. Zones can be geographically distributed, but low-latency links are expected for better performance. A single MS node can manage up to 10K hosts; multiple MS nodes can be deployed as a cluster for scale or redundancy.
  • 19. Management Server Deployment Architecture: single-node deployment (one MS with a MySQL DB replicated to a backup DB) or multi-node deployment (multiple MS nodes behind a load balancer). The MS is stateless and can be deployed as a physical server or a VM. A single MS node can manage up to 10K hosts; multiple nodes can be deployed for scale or redundancy. The MS serves the user API and admin API and manages the infrastructure resources. OS support: commercial, RHEL 5.4+; FOSS, Ubuntu 10.04, Fedora 16.
  • 20. CloudStack Storage. Primary Storage: configured at the cluster level, close to hosts for better performance; stores all disk volumes for the VMs in a cluster; a cluster can have one or more primary storages; local disk, iSCSI, FC or NFS. Secondary Storage: configured at the zone level; stores all templates, ISOs and snapshots; a zone can have one or more secondary storages; NFS or OpenStack Swift.
  • 21. Understanding the Role of Storage and Templates. Primary Storage: cluster-level storage for VMs; connected directly to hosts; NFS, iSCSI, FC and local. Secondary Storage: zone-level storage for templates, ISOs and snapshots; NFS, or OpenStack Swift via the CloudStack system VM. Templates and ISOs: imported into CloudStack; can be private or public.
  • 22. Provisioning Process: 1. user requests an instance; 2. provision optional network services; 3. copy the instance template from secondary storage to primary storage on the appropriate cluster; 4. create any requested data volumes on primary storage for the cluster; 5. create the instance; 6. start the instance.
  • 23. Citrix XenServer • Integrates directly with the XenServer pool master • Snapshots at host level • System VM control channel at host level • Network management is at host level [diagram: CloudStack manager talking to the pool master of a XenServer resource pool]
  • 24. Oracle VM • Integrates with the ovs-agent on each OVM host • Snapshots at host level • System VM control channel at host level • Network management is at host level • Does not use OVM Manager • All templates must be from Oracle • CloudStack configures the OCFS2 nodes • Requires a "helper" cluster (XenServer, KVM or vSphere)
  • 25. RedHat Enterprise Linux (KVM) • Integrates with libvirt using the Cloud Agent on each KVM host • Snapshots at host level • System VM control channel at host level • Network management is at host level • Only RHEL 6, not RHEV • Also supports Ubuntu 10.04
  • 26. VMware vSphere • Integration through vCenter • System VM control channel via the CloudStack private network • Snapshot and volume management via the Secondary Storage VM • Networking via the vSphere vSwitch [diagram: CloudStack manager and vCenter managing vSphere clusters in a data center]
  • 27. Management Server Interaction with Hypervisors – XenServer via XAPI, ESX via HTTPS to vCenter, KVM and OVM via an agent. Versions: XS 5.6, 5.6 FP1, 5.6 SP2, 6.0 / ESX 4.1, 5.0 (coming) / RHEL 6.0, 6.1, 6.2 (coming) / OVM 2.2. Snapshots: incremental / full / full (not live) / none. Disk format: VHD / VMDK / QCOW2 / RAW. Storage: NFS, iSCSI, FC & local disk / NFS, iSCSI, FC & local disk / NFS, iSCSI & FC / NFS & iSCSI. Storage over-provisioning: NFS / NFS, iSCSI / NFS / none.
  • 28. Multi-tenancy & Account Management • A domain is a unit of isolation that represents a customer org, business unit or a reseller • A domain can have arbitrary levels of sub-domains • A domain can have one or more accounts • An account represents one or more users and is the basic unit of isolation • Admins can limit resources (VMs, IPs, snapshots, …) at the account or domain level [diagram: Org A admin, Reseller A admin and sub-domain Org C admin, each with accounts, account groups and users, consuming cloud resources]
  • 29. CloudStack Network
  • 30. Network Terminology • Traffic type – Guest: the tenant network to which instances are attached – Storage: the physical network which connects the hypervisor to primary storage – Management: control-plane traffic between the CloudStack management server and hypervisor clusters – Public: "outside" the cloud (usually the Internet); shared public VLANs trunked down to all hypervisors • Network type – Shared: one subnet serves multiple users (Direct: one subnet; Direct tagged: VLAN, multiple subnets) – Isolated: a different subnet for each user (Virtual, tagged) • All traffic can be multiplexed onto the same underlying physical network using VLANs – usually the management network is untagged, and the storage network is usually on a separate NIC (or bond) • The admin informs CloudStack how to map these traffic types to the underlying physical network – configure traffic labels on the hypervisor and on the admin UI
  • 31. VM Instance: choose the instantiated guest network; the IP is arbitrary. Guest Network: an instance of a network offering; Shared (created by the admin) or Isolated (created and owned by the user); one virtual router per network; cross-pod, within a zone; VLAN id picked from the pool. Physical Network: zone level; defined by NIC; assigned traffic types (Public, Guest, Management, Storage); associated by label/vswitch name; attached with a device as service provider; tagged. Network Offering: only for guest traffic; guest network type Shared or Isolated; defines a set of network services such as DHCP, firewall, VPN, NAT, …; bandwidth.
  • 32. Physical Network Operations [diagram: users and admins reach the CloudStack MS cluster (MySQL, load balancer) through the cloud API; within an availability zone, a router and L3 core switch connect access-layer switches, pods of servers and secondary storage]
  • 33. Network Isolation [diagram: web and DB VMs on multiple hosts grouped into a Web security group and a DB security group]
  • 34. Network Isolation (Security Group, L3) [diagram: guest VMs of different tenants share pod-level subnets (e.g. 10.1.0.0/24, 10.1.8.0/24, 10.1.16.0/24) behind pod L2 switches, an L3 core switch and a load balancer; isolation is enforced by security groups rather than VLANs]
  • 35. Network Isolation (VLAN, L2) [diagram: tenant VMs (V) and tenant virtual routers (R) run on hypervisors in clusters across pods; each tenant's traffic is carried on its own VLAN (e.g. VLAN 101, VLAN 102) trunked through the access switches to the core (L3) network]
  • 36. Guest virtual network [diagram: each guest virtual network (e.g. 10.1.1.0/24, gateway 10.1.1.1) has its own virtual router that holds public IPs (e.g. 65.37.141.11) and provides NAT, DHCP, load balancing and VPN for the guest VMs at 10.1.1.2-10.1.1.5]
  • 37. Guest Virtual Network With Physical Device [diagram: on one side, the CS virtual router provides the network services (NAT, DHCP, DNS, load balancing, VPN) for guest virtual network 10.1.1.1/8 on VLAN 100; on the other, external devices provide them instead: a Juniper SRX firewall and a NetScaler load balancer front the guest VMs, while the CS virtual router only provides DHCP and DNS]
  • 38. Layer-3 Guest Network [diagram: two variants, network services managed externally versus managed by CS; guest VMs in security groups on public network 65.11.0.0/16, a NetScaler load balancer providing EIP/ELB behind an L3 switch, and the CS virtual router providing DHCP, DNS and security groups]
  • 39. Multi-tier network [diagram: web, app and DB tiers on separate virtual networks (10.1.1.0/24 on VLAN 100, 10.1.2.0/24 on VLAN 1001, 10.1.3.0/24 on VLAN 141); a Juniper SRX firewall and a NetScaler load balancer front the public IPs (65.37.141.x), while CS virtual routers provide DHCP, DNS, user-data and source NAT per tier]
  • 40. Multi-tier unified [vision]: a single CS virtual router / loadbalancer provides the virtual router services – IPAM, DNS, LB (intra), site-to-site VPN (IPSec or SSL), static routes, ACLs, NAT, PF, FW (ingress & egress), BGP – across the web, app and DB tiers (virtual networks 10.1.1.0/24 on VLAN 100, 10.1.2.0/24 on VLAN 1001, 10.1.3.0/24 on VLAN 141), with connectivity to the Internet, customer premises and a monitoring VLAN
  • 41. Multi-tier unified with SDN [vision]: the same picture, with the CS virtual router / loadbalancer as a virtual appliance and the tiers carried on overlay networks (10.1.1.0/24, 10.1.2.0/24, 10.1.3.0/24) instead of VLANs
  • 42. Network Offerings• Cloud provider defines the feature set for guest networks• Toggle features or service levels – Security groups on/off – Load balancer on/off – Load balancer software/hardware – VPN, firewall, port forwarding• User chooses network offering when creating network• Enables upgrade between network offerings• Default offerings built-in – For classic CloudStack networking
  • 43. CloudStack Storage
  • 44. Storage • Primary Storage – block device to the VM – IOPS intensive – accessible host- or cluster-wide – supports storage tiering • WORM storage – secondary storage or object store for templates, ISOs and snapshot archiving – high capacity • CloudStack manages the storage between the two to achieve maximum benefit and resiliency [diagram: zone-level L3 switch and pod-level L2 switches connecting computing servers, cluster primary storage and scale-out NFS secondary storage]
  • 45. Primary Storage Support Matrix (XenServer / VMware / KVM) – Local disk: supported / supported / supported – iSCSI: supported / supported / not supported – Fibre Channel: supported / supported / not supported – NFS: supported / supported / supported – VM storage network: supported / supported / supported
  • 46. Storage Tiering • Supported via storage tags for primary storage • Specify a tag when adding a storage pool • Specify a tag when adding a disk offering • Only storage pools with the tag will be allocated for the volume (see the allocation sketch below)
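To make the tag-matching rule concrete, here is a small illustrative Java sketch (not CloudStack's actual allocator classes): a primary storage pool is a candidate for a volume only if it carries every tag required by the disk offering.

    import java.util.*;

    // Illustrative sketch of the tag-matching idea behind storage tiering.
    // Class and field names are hypothetical.
    class StoragePool {
        String name;
        Set<String> tags;
        StoragePool(String name, String... tags) {
            this.name = name;
            this.tags = new HashSet<>(Arrays.asList(tags));
        }
    }

    public class TagAllocator {
        static List<StoragePool> candidates(List<StoragePool> pools, Set<String> offeringTags) {
            List<StoragePool> result = new ArrayList<>();
            for (StoragePool p : pools) {
                if (p.tags.containsAll(offeringTags)) {   // pool must carry all tags the disk offering asks for
                    result.add(p);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<StoragePool> pools = Arrays.asList(
                    new StoragePool("nfs-bronze", "bronze"),
                    new StoragePool("iscsi-gold", "gold", "ssd"));
            // A disk offering tagged "ssd" only matches the gold pool.
            System.out.println(candidates(pools, Set.of("ssd")).get(0).name);  // iscsi-gold
        }
    }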
  • 47. WORM Storage• Write Once Read Many storage pattern is supported by two different storage types – Secondary Storage (NFS Server within an availability zone) – Object Store (Swift implementation for cross-zone)• Objective for WORM storage – High capacity, cheap storage – Easy to increase capacity• Used to store templates, ISOs, and snapshots
  • 48. Snapshots• Snapshots are used as backups for DRS• Taken on the primary storage and moved to secondary storage• Supports individual snapshots and recurring snapshots• Full snapshots on VmWare and KVM. Need help.• Incremental snapshots on XenServer• Allows backup network traffic to be specified in zone to segregate the backup network traffic from other network traffic types
  • 49. MS Internals• Architecture• Workflow• High Availability• Scalability
  • 50. Inside a Management Server [diagram: the API servlet feeds an async job queue and the API manager; the kernel invokes services and plugins (cmd.execute()), producing commands handled by the agent manager; agents/resources, local or remote, translate them to native hypervisor and network device APIs; state lives in MySQL]
  • 51. Old Architecture. Pros: agile development for existing developers; scales well horizontally. Cons: monolithic; difficult to educate new and third-party developers; easy to introduce bugs. [diagram: API layer (EC2, CloudStack, access control) over managers (virtual machine, console proxy, async job, snapshot, template, network, storage, …) over an agent manager talking to XenServer, KVM, SRX, F5, NetScaler and other resources]
  • 52. New Deployment Architecture • Scales horizontally to different pressure points • Automatically scales service VMs in zones to facilitate most efficient data path transfers • Fault isolation between API servers and Execution Servers and resources within zones
  • 53. New Architecture – API Server • The API Server isolates integration code from the Execution Server • The API Server can scale horizontally to handle traffic • Easily adds other API compatibility • Easily exposes APIs needed by third-party vendors [components: REST API over a pluggable API engine (OAM&P API, end-user API, EC2 API, other APIs added by third parties), ACL & authentication (accounts, domains and projects; ACL and limits checking), and a framework with a job queue and database access layer (OSGi)]
  • 54. New Architecture – Execution Server • The Execution Server is protected by the job queue • The kernel is kept small for stability: it only drives processes – it drives long-running VM operations, syncs between the resources managed and the DB, and generates events • Services API: storage, network, deployment planning, hypervisor handling • Plugins provide mappings of virtual entities to physical resources; third-party plugins provide vendor differentiation in CloudStack • Framework (OSGi component framework): cluster management, job management, alert & event management, transaction management, database access layer, messaging layer • Communicates with resources within the data center over a message bus
  • 55. New Architecture – Resources • Resources (hypervisor, network, storage, image & template, snapshot) are carried in service VMs to be in close network proximity to the physical resources they manage • Easily scales to utilize the most abundant resources in the data center (CPU & RAM) • Communicates with the Execution Server over a message bus (JSON) • Can be replicated for fault tolerance
  • 56. [diagram: management server internals – a REST API exposing OAM&P, end-user, EC2 and other APIs on a pluggable-service API engine; ACL & authentication with accounts, domains, projects and limits checking; security adapters, account management and connectors; console proxy management; the kernel's services API (deployment planning, network configuration, network elements, hypervisor gurus, template/HA/usage services) driving long-running VM operations, syncing managed resources with the DB and generating events; cluster, resource, job, and alert & event management with database access; an event/message bus down to hypervisor, network, storage, image and snapshot resources]
  • 57. Kernel Module• Understands how to orchestrate long running processes (i.e. VM starts, Snapshot copies, Template propagation)• Well defined process steps• Calls Plugin API to execute functionalities that it needs
  • 58. Plugins• Various ways to add more capability to CloudStack• Implements clearly defined interfaces• All operations must be idempotent• All calls are at transaction boundaries• Compiles only against the Plugin API module
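A small sketch of the idempotency requirement above, using a hypothetical firewall-style plugin: the same apply call can be retried safely (for example after a command timeout) without failing or duplicating work. The class and method names are made up for illustration.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of "all operations must be idempotent": calling the same
    // operation twice leaves the system in the same state as calling it once.
    public class FirewallPluginSketch {
        private final Map<String, String> appliedRules = new ConcurrentHashMap<>();

        /** Applies a rule; returns true whether it was created now or was already present. */
        public boolean applyRule(String ruleId, String spec) {
            String previous = appliedRules.putIfAbsent(ruleId, spec);
            if (previous != null) {
                // Rule already applied (e.g. a retried command after a timeout):
                // report success instead of failing or duplicating the rule.
                return true;
            }
            // ... push the rule to the device here ...
            return true;
        }

        public static void main(String[] args) {
            FirewallPluginSketch plugin = new FirewallPluginSketch();
            System.out.println(plugin.applyRule("r1", "allow tcp/80"));  // true, applied
            System.out.println(plugin.applyRule("r1", "allow tcp/80"));  // true, no-op on retry
        }
    }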
  • 59. Anatomy of a Plugin • REST API: optional; required only if the plugin needs to expose a configuration API to the admin • Plugin API implementation • Data access layer • ServerResource: optional; required if the plugin needs to be co-located with the resource; implements the translation layer to talk to the resource; communicates with the server component via JSON
  • 60. Anatomy of a Plugin• Can be two jars: server component to be deployed on management server and an optional ServerResource component to be deployed co- located with the resource• Server component can implement multiple Plugin APIs to affect its feature• Can expose its own API through Pluggable Service so administrators can configure the plugin• As an example, OVS plugin actually implements both NetworkGuru and NetworkElement
  • 61. Plugin Interfaces Available• NetworkGuru – Implements various network isolation technologies and ip address technologies• NetworkElement – Facilitate network services on network elements to support a VM (i.e. DNS, DHCP, LB, VPN, Port Forwarding, etc)• DeploymentPlanner – Different algorithms to place a VM and volumes.• Investigator – Ways to find out if a host is down or VM is down.• Fencer – Ways to fence off a VM if the state is unknown• UserAuthenticator – Methods of authenticating a user• SecurityChecker – ACL access• HostAllocator – Provides different ways to allocate host• StoragePoolAllocator – Provides different ways to allocate volumes
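For a feel of what these plugin points look like, here is a hedged sketch in the spirit of UserAuthenticator. The real CloudStack interfaces differ in signature; the simplified Authenticator interface and the static-password adapter below are hypothetical and only illustrate the adapter pattern these extension points follow.

    import java.util.Map;

    interface Authenticator {
        boolean authenticate(String username, String password, Long domainId);
    }

    class StaticPasswordAuthenticator implements Authenticator {
        private final Map<String, String> users;

        StaticPasswordAuthenticator(Map<String, String> users) {
            this.users = users;
        }

        @Override
        public boolean authenticate(String username, String password, Long domainId) {
            // A real adapter would consult hashed passwords in the DB, LDAP or AD.
            return password != null && password.equals(users.get(username));
        }
    }

    public class AuthenticatorDemo {
        public static void main(String[] args) {
            Authenticator auth = new StaticPasswordAuthenticator(Map.of("admin", "secret"));
            System.out.println(auth.authenticate("admin", "secret", 1L));  // true
        }
    }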
  • 62. Adding a Plugin to CloudStack• Components are configured through components.xml• Supports DAO, Manager, and Adapter patterns• Open to other component frameworks (OSGi a possibility)
  • 63. Components.xml Example
<components.xml>
  <system-integrity-checker class="com.cloud.upgrade.DatabaseUpgradeChecker">
    <checker name="ManagementServerNode" class="com.cloud.cluster.ManagementServerNode"/>
    <checker name="EncryptionSecretKeyChecker" class="com.cloud.utils.crypt.EncryptionSecretKeyChecker"/>
    <checker name="DatabaseIntegrityChecker" class="com.cloud.upgrade.DatabaseIntegrityChecker"/>
    <checker name="DatabaseUpgradeChecker" class="com.cloud.upgrade.PremiumDatabaseUpgradeChecker"/>
  </system-integrity-checker>
  <interceptor library="com.cloud.configuration.DefaultInterceptorLibrary"/>
  <management-server class="com.cloud.server.ManagementServerExtImpl" library="com.cloud.configuration.PremiumComponentLibrary">
    <adapters key="com.cloud.storage.allocator.StoragePoolAllocator">
      <adapter name="LocalStorage" class="com.cloud.storage.allocator.LocalStoragePoolAllocator"/>
      <adapter name="Storage" class="com.cloud.storage.allocator.FirstFitStoragePoolAllocator"/>
    </adapters>
    <pluggableservice name="VirtualRouterElementService" key="com.cloud.network.element.VirtualRouterElementService" class="com.cloud.network.element.VirtualRouterElement"/>
  </management-server>
</components.xml>
  • 64. ServerResource• Translation layer between CloudStack commands and resource API• May be Co-located with resource• Have no access to DB• API defined in JSON messages
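A rough sketch of the ServerResource idea above: a command arrives as a JSON-serializable object, the resource translates it into a native hypervisor or device call and returns an answer, and it never touches the database. The command, answer and client types below are simplified stand-ins, not CloudStack's actual classes.

    class StartVmCommand {
        final String vmName;
        StartVmCommand(String vmName) { this.vmName = vmName; }
    }

    class Answer {
        final boolean success;
        final String details;
        Answer(boolean success, String details) { this.success = success; this.details = details; }
    }

    interface HypervisorClient {             // stands in for XAPI, libvirt, the vCenter SDK, ...
        void powerOn(String vmName) throws Exception;
    }

    public class SketchServerResource {
        private final HypervisorClient client;

        public SketchServerResource(HypervisorClient client) { this.client = client; }

        /** Translation layer: no DB access here, only resource API calls. */
        public Answer execute(StartVmCommand cmd) {
            try {
                client.powerOn(cmd.vmName);
                return new Answer(true, cmd.vmName + " started");
            } catch (Exception e) {
                return new Answer(false, e.getMessage());
            }
        }
    }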
  • 65. DAO • SQL generation is done mostly in GenericDaoBase • Uses JPA annotations • Very little code to write for each individual DAO • Database access layer for the kernel • No support for more complicated features such as fetch strategies • You are welcome to use other types of ORM in other modules, but we would like to hear about the preferred library (Hibernate is out due to licensing issues)
  • 66. Example DAO
// ExampleVO.java
@Entity
@Table(name="example")
public class ExampleVO {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Column(name="id")
    long id;

    @Column(name="name")
    String name;

    @Column(name="value")
    String value;
}

// ExampleDao.java
public interface ExampleDao extends GenericDao<ExampleVO, Long> {
}

// ExampleDaoImpl.java
@Local(value=ExampleDao.class)
public class ExampleDaoImpl extends GenericDaoBase<ExampleVO, Long> implements ExampleDao {
    protected ExampleDaoImpl() {
    }
}
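And a hedged usage sketch for the DAO above, assuming the GenericDao base supplies the usual persist and findById operations (exact method names and return types may differ between CloudStack versions); in a real deployment the DAO would be injected by the component framework rather than passed to a constructor.

    public class ExampleManager {
        private final ExampleDao exampleDao;   // normally injected via components.xml / the component framework

        public ExampleManager(ExampleDao exampleDao) {
            this.exampleDao = exampleDao;
        }

        public ExampleVO createExample(String name, String value) {
            ExampleVO vo = new ExampleVO();
            vo.name = name;                    // fields are package-visible in this sketch
            vo.value = value;
            return exampleDao.persist(vo);     // INSERT generated by GenericDaoBase (assumed API)
        }

        public ExampleVO lookup(long id) {
            return exampleDao.findById(id);    // SELECT ... WHERE id = ? (assumed API)
        }
    }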
  • 67. Sequence Flow for deploy VM (kernel side): the end user sends Deploy VM to the REST API; the security checkers perform ACL checks; the user VM manager, virtual machine manager, network manager, storage manager and network guru allocate the entity in CloudStack (allocate VM, allocate NIC, allocate IP, allocate volume); the deploy job is scheduled and the call returns immediately with the job id and VM id; the client then queries the job result until it returns the job status (a polling sketch follows this slide).
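The flow above returns a job id immediately, and the client polls queryAsyncJobResult until the job completes. The sketch below assumes jobstatus values of 0 (pending), 1 (succeeded) and 2 (failed), which is the documented CloudStack convention; the Api interface stands in for the signed HTTP GET shown earlier, and the string matching is a shortcut in place of real JSON parsing.

    public class JobPoller {
        interface Api {                        // placeholder for the signed HTTP call shown earlier
            String call(String command, String... params) throws Exception;
        }

        static String waitForJob(Api api, String jobId) throws Exception {
            while (true) {
                String json = api.call("queryAsyncJobResult", "jobid", jobId);
                if (json.contains("\"jobstatus\":1") || json.contains("\"jobstatus\": 1")) {
                    return json;                                   // job succeeded; result is in the response
                }
                if (json.contains("\"jobstatus\":2") || json.contains("\"jobstatus\": 2")) {
                    throw new RuntimeException("deploy job failed: " + json);
                }
                Thread.sleep(3000);                                // jobstatus 0: still pending, poll again
            }
        }
    }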
  • 68. Sequence Flow for deploy VM (job execution): a job thread picks up the job and calls Start VM through the services API; the user VM manager and virtual machine manager get a deployment plan (host and storage pool) from the deployment planner; NICs are prepared (the network manager and network guru reserve resources for the NIC and notify the network elements that the NIC is about to be started in the network, via agent calls); volumes are prepared (the storage manager and template manager prepare the template on primary storage, via agent calls); the agent Start VM call is issued to the server resource; finally the job result is stored.
  • 69. High Availability
  • 70. High Availability• Service Offering contains a flag for whether HA should be supported for the VM• Does not use the native HA capability of hypervisors for XenServer and KVM• Uses adapters to fine tune HA process
  • 71. Triggering High Availability. VM HA is triggered via the following methods: • VM Sync detects out-of-band VM changes • Resource management detects that a resource is unreachable and its state cannot be determined • A VM start/stop has been sent to the resource but the resource does not return • Details of how high availability is done are at http://docs.cloudstack.org/CloudStack_Documentation/Design_Documents/CloudStack_High_Availability_-_Developers_Guide
  • 72. High Availability • Investigation – uses investigators to find out if the VM is alive or down – each investigator returns one of three states: Up, Down, Unknown • Fencing – uses fencers to fence the VM off from accessing storage to ensure the VM is not corrupted – each fencer returns one of three states: Fenced, Unable to Fence, Don't know how to fence • Restart – restarts the VM. [flowchart: if the VM has changed since the work was scheduled, cancel the work; otherwise investigate; if the VM is up the work is completed; if the VM (or its hypervisor host) is down, fence it off and start the VM; if the state is unknown, try more investigators, and if none can answer, reschedule the work. A condensed sketch of this loop follows.]
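Below is a condensed, illustrative Java sketch of that investigate/fence/restart loop. The Investigator and Fencer interfaces are simplified stand-ins for CloudStack's adapters; the three-valued answers are modeled here with Boolean, where null means "unknown" or "don't know how".

    import java.util.List;

    public class HaWorkerSketch {
        interface Investigator { Boolean isVmAlive(String vmId); }   // TRUE=up, FALSE=down, null=unknown
        interface Fencer { Boolean fence(String vmId); }             // TRUE=fenced, FALSE=can't, null=don't know how

        /** Returns true if the work is completed, false if it should be rescheduled. */
        boolean handle(String vmId, List<Investigator> investigators, List<Fencer> fencers) {
            Boolean alive = null;
            for (Investigator inv : investigators) {
                alive = inv.isVmAlive(vmId);
                if (alive != null) break;                 // first definite answer wins
            }
            if (alive == null) return false;              // still unknown: reschedule the work
            if (alive) return true;                       // VM is up: work completed

            for (Fencer f : fencers) {                    // VM is down: fence off storage access first
                Boolean fenced = f.fence(vmId);
                if (Boolean.TRUE.equals(fenced)) {
                    restart(vmId);                        // then restart it
                    return true;
                }
            }
            return false;                                 // could not fence: reschedule
        }

        private void restart(String vmId) { /* issue a start command for the VM */ }
    }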
  • 73. Scalability
  • 74. Current Status• 10k resources managed per management server node• Scales out horizontally (must disable stats collector)• Real production deployment of tens of thousands of resources• Internal testing with software simulators up to 30k physical resources with 30k VMs managed by 4 management server nodes• We believe we can at least double that scale per management server node
  • 75. Balancing Incoming Requests • Each management server has two worker thread pools for incoming requests, effectively two servers in one: executor threads provided by Tomcat, and job threads waiting on the job queue • Incoming requests that require mostly DB operations are short in duration and are executed by the executor threads, because incoming requests are already load-balanced by the load balancer • Incoming requests needing resources, which often run for a long time, are checked against ACLs by the executor threads and then queued and picked up by job threads • The number of job threads is scaled to the number of DB connections available to the management server • Requests may take a long time depending on the constraints of the resources, but they don't fail (a small sketch of the two pools follows).
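A minimal sketch of that two-pool arrangement, with made-up pool sizes: short DB-bound requests go straight to the executor pool, while long-running resource work is submitted to a fixed-size job pool sized to the available DB connections, so excess work waits in the pool's queue instead of failing.

    import java.util.concurrent.*;

    public class RequestDispatcherSketch {
        // Executor threads: short, DB-bound requests (already load-balanced upstream).
        private final ExecutorService apiExecutor = Executors.newFixedThreadPool(50);
        // Job threads: long-running resource work; pool size roughly matches the DB
        // connections available, and excess work waits in the queue rather than failing.
        private final ExecutorService jobThreads = Executors.newFixedThreadPool(20);

        public Future<?> handleShortRequest(Runnable dbWork) {
            return apiExecutor.submit(dbWork);             // ACL check + DB query, returns quickly
        }

        public Future<?> handleLongRequest(Runnable aclCheckedResourceWork) {
            return jobThreads.submit(aclCheckedResourceWork);  // queued job, picked up by a job thread
        }
    }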
  • 76. Comparison of two Approaches • Stats Collector – collects capacity statistics – fires every five minutes to collect stats about host CPU and memory capacity – smart-server / dumb-client model: the resource only collects info and the management server processes it – runs the same way on every management server • VM Sync – fires every minute – peer-to-peer model: the resource does a full sync on connection and delta syncs thereafter; the management server trusts the resource for correct information – only runs against resources connected to the management server node
  • 77. Numbers • Assume 10k hosts and 500k VMs (50 VMs per host) • Stats Collector – fires off 10k requests every 5 minutes, or 33 requests a second – bad but not too bad: it occupies 33 threads every second – but just wait: 2 management servers means 66 requests, 3 management servers means 99 requests – it gets worse as the number of management servers increases because it does not auto-balance across management servers – and worse still: because the 10k hosts are now spread across 3 management servers, while 99 requests are generated the number of threads involved is three-fold, since requests need to be routed to the right management server – it keeps each management server around 20% busy even with no load from incoming requests • VM Sync – fires off 1 request at resource connection to sync about 50 VMs – then the resource pushes, since it knows what it has pushed before and only pushes changes that are out-of-band – so essentially no threads are occupied for a much larger data set
  • 78. Resource Load Balancing • As a management server is added into the cluster, resources are rebalanced seamlessly (see the sketch after this slide): MS2 signals MS1 to hand over a resource; MS1 waits for the commands on the resource to finish; MS1 holds further commands in a queue; MS1 signals MS2 to take over; MS2 connects; MS2 signals MS1 to complete the transfer; MS1 discards its resource and flows the held commands to MS2 • Listeners are provided to business logic to listen on connection status and adjust work based on who's connected • By only working on resources that are connected to the management server the process is on, work is auto-balanced between management servers • This also reduces message routing between the management servers
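The handover in the first bullet can be sketched as a small state machine on the MS1 side. Everything below (class names, the string commands, the Consumer callback standing in for MS2) is illustrative rather than CloudStack's actual clustering code.

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class ResourceHandoffSketch {
        enum State { OWNED, TRANSFERRING, RELEASED }

        private State state = State.OWNED;
        private final Queue<String> heldCommands = new ArrayDeque<>();

        /** MS2 signals MS1 to hand over the resource. */
        public synchronized void beginTransfer() {
            state = State.TRANSFERRING;        // further commands are held from now on
        }

        public synchronized void submitCommand(String cmd) {
            if (state == State.TRANSFERRING) {
                heldCommands.add(cmd);         // hold further commands in a queue
            } else if (state == State.OWNED) {
                send(cmd);                     // normal path while MS1 still owns the resource
            } else {
                throw new IllegalStateException("resource no longer owned by this MS");
            }
        }

        /** Called once MS2 has connected and signalled "complete transfer". */
        public synchronized void completeTransfer(java.util.function.Consumer<String> ms2) {
            while (!heldCommands.isEmpty()) {
                ms2.accept(heldCommands.poll()); // flow the held commands to MS2
            }
            state = State.RELEASED;              // MS1 discards its resource
        }

        private void send(String cmd) { /* deliver to the locally connected resource */ }
    }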
  • 79. CloudStack System VMs
  • 80. CloudStack System VMs• System VMs optimize and scale the data path on behalf of CloudStack – Stateless, can be destroyed and recreated from database state – Highly Available – Communicates with Management Server over management network – Usually have 3 interfaces: control(linked-local), mgmt and public• Console Proxy VM – Provides AJAX-style HTTP-only console viewer – Grabs VNC output from hypervisor – Scales out (more spawned) as load increases – Java-based server Communicates with MS• Secondary Storage VM – Provides image (template) management services – Download from HTTP file share or Swift – Copy between zones – Scale out to handle multiple NFS mounts – Java-based server communicates with MS
  • 81. CloudStack System VMs• Virtual Router VM – Provides multiple network services – IPAM (DHCP), DNS, NAT, Source NAT, Firewall, Port Forwarding, VPN – User-data, Meta-data, guest SSH keys and password change server – Redundancy via VRRP – MS configures VR over SSH • Proxied via the hypervisor on XS and KVM
  • 82. System VM spec• Debian 6.0 ("Squeeze"), 2.6.32 kernel with the latest security patches from the Debian security APT repository. No extraneous accounts• 32-bit for enhanced performance on Xen/VMWare• Only essential software packages are installed. Services such as, printing, ftp, telnet, X, kudzu, dns, sendmail are not installed.• SSHd only listens on the private/link-local interface. SSH port has been changed to a non- standard port (3922). SSH logins only using keys (keys are generated at install time and are unique for every customer)• pvops kernel with Xen paravirt drivers + KVM virtio drivers + VMware tools for optimum performance on all hypervisors. Xen tools inclusion allows performance monitoring• Template is built from scratch and is not polluted with any old logs or history• Latest versions of haproxy, iptables, ipsec, apache from debian repository ensures improved security and speed• Latest version of jre from Sun/Oracle ensures improved security and speed
  • 83. System VM contd• SSH keys and password are unique to cloud installation• Code can be patched by restarting system vm – Mounts a special ISO file with latest code at boot – If ISO contents differ, patch and reboot• Same system vm works on XS, KVM, VMWare – Bootstrap step for the cloud is to install the template for this system vm• Ready to be re-purposed for other specialized tasks
  • 84. Interactions OVM Cluster Primary Storage vcenter Monitoring Primary CS API vSphere Cluster Storage End User UI Primary XS Cluster Storage Admin UI Clustered CloudStack XAPI Domain CS Admin & CloudStack CloudStack Admin End-user API Primary UI Management JSON KVM Cluster Storage Server NetConf Juniper SRXCloud user Nitro API{API client (Fog/etc)} VNC JSON ec2 API JSON Netscaler Cloud user Console Console {ec2 API client } Proxy VM Proxy VM NFS MySQL Server {Proxied} SSH Sec. Storage NFS NFS Sec. Storage VM Ajax HTTPS VM Console Router VM HTTP (Template Download) Router VM HTTP (Template Copy) Router VM Cloud user HTTP (Swift)
  • 85. CloudStack Roadmap • Acton (Feb 2012): Swift integration; XenServer 6 support; vSphere 5 support; NetScaler integration; refined resource management; UI refinement; LDAP/AD authentication; clustered LVM support • Bonita (Apr 2012): Open vSwitch support; VMware distributed vSwitch support; Cisco Nexus 1000v support module • Burbank (Jul 2012): inter-VLAN routing; multi-tier apps; site-to-site VPNs; AWS-style tags; VM tiers; upload volume; plugin architecture • Campo (Oct 2012): AWS-style regions; IPv6; resource scaling; dedicated resources; scalability (50K hosts); hypervisor enhancements • ? (Feb 2013): Hyper-V (Win 8)
  • 86. CloudStack vs. OpenStack vs. Eucalyptus
  • 87. CloudStack• Mainly written in Java• ASL2.0 license• Has more than 100 production clouds (Around May, 2012)• Support private/hybrid/public cloud• Scale to 30K physical host in commercial environment• Support XenServer/Vsphere/KVM/OVM/Hyper-V/Baremetal as hypervisor• Multiple geographically distributed datacenters management• Flexible and rich network functionality• Easy installation and management• Amazon EC2 API compatible• Well documented• Active community
  • 88. OpenStack• Mainly written in Python• ASL2.0 license• Support private/hybrid/public cloud• Immature for commercial usage• Support XenServer/Vsphere/KVM/Xen/Hyper-V as hypervisor• Network is a single point of failure• Weak VPN support for enterprise hybrid cloud• All inter-module communication are based on MQ• Not well documented• A bit hard to install• Amazon EC2 API partially compatible
  • 89. Eucalyptus (Open Source edition) • Mainly written in Java • GPLv3 license • Focus on private cloud • Supports KVM/Xen as hypervisors • Fully compatible with Amazon EC2 • Fully compatible with Amazon S3 via Walrus • EBS support via AoE and iSCSI • Both web UI and command-line tools for cloud administration • Well documented • Difficult to get started