
Virtualisation overview


  1. 3.2 PowerVM Virtualization plain and simple (© Copyright IBM Corporation 2009, IBM System p)
  2. Goals with Virtualization: lower costs and improve resource utilization
     • Data Center floor space reduction or…
     • Increase processing capacity in the same space
     • Environmental (cooling and energy challenges)
     • Consolidation of servers
     • Lower overall solution costs: less hardware, fewer software licenses
     • Increase business flexibility: meet ever-changing business needs with faster provisioning
     • Improve application availability: flexibility in moving applications between servers
  3. The virtualization elevator pitch: the basic elements of PowerVM
     • Micro-partitioning: makes 1 CPU look like 10
     • Dynamic LPARs: moving resources between partitions
     • Virtual I/O server: partitions can share physical adapters
     • Live partition mobility: using POWER6
     • Live application mobility: using AIX 6.1
  4. First there were servers
     • One physical server for one operating system
     • Additional physical servers added as the business grows
     (Diagram: physical view vs. users view)
  5. Then there were logical partitions
     • One physical server was divided into logical partitions
     • Each partition is assigned a whole number of physical CPUs (cores)
     • One physical server now looks like multiple individual servers to the user
     (Diagram: 8 physical CPUs split into logical partitions of 1, 3, 2, and 2 CPUs)
  6. Then came dynamic logical partitions
     • Whole CPUs can be moved from one partition to another
     • CPUs can be added to and removed from partitions without shutting the partition down
     • Memory can also be dynamically added to and removed from partitions
     (Diagram: the same 8-CPU server, with CPUs moving between partitions)
  7. Dynamic LPAR
     • Standard on all POWER5 and POWER6 systems
     • Move resources between live partitions
     (Diagram: an HMC managing four partitions (production, legacy apps, test/dev, file/print) running AIX 5L and Linux on top of the hypervisor)
  8. Now there is micro-partitioning
     • A logical partition can now have a fraction of a full CPU
     • Each physical CPU (core) can be spread across up to 10 logical partitions
     • A physical CPU can sit in a pool of CPUs that are shared by multiple logical partitions
     • One physical server can now look like many more servers to the user
     • CPU resources can also be moved dynamically between logical partitions
     (Diagram: 8 physical CPUs backing partitions of 0.2, 2.3, 1.2, 1, 0.3, 1.5, and 0.9 CPUs)
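
The arithmetic behind slide 8 is easy to model. Below is a minimal Python sketch that checks the fractional entitlements from the diagram against the 8-CPU pool; all names are illustrative, and the real allocation is of course done by the POWER hypervisor, not code like this.

```python
# Minimal sketch: fractional CPU entitlements drawn from a shared pool.

POOL_SIZE = 8.0          # physical CPUs in the shared pool (as in the diagram)
MIN_ENTITLEMENT = 0.1    # smallest share of a physical CPU a partition may hold

partitions = {
    "lpar1": 0.2, "lpar2": 2.3, "lpar3": 1.2, "lpar4": 1.0,
    "lpar5": 0.3, "lpar6": 1.5, "lpar7": 0.9,
}

total = sum(partitions.values())
assert total <= POOL_SIZE, "entitlements cannot exceed the physical pool"
assert all(e >= MIN_ENTITLEMENT for e in partitions.values())

print(f"{total:.1f} of {POOL_SIZE} physical CPUs entitled, "
      f"{POOL_SIZE - total:.1f} left in the pool")
# -> 7.4 of 8.0 physical CPUs entitled, 0.6 left in the pool
```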
  9. Micro-partitioning terminology
     • Logical partitions (LPARs) can be defined with dedicated or shared processors
     • Processors not dedicated to an LPAR are part of the pool of shared processors
     • Processing capacity for a shared LPAR is specified in terms of processing units, with as little as 1/10 of a processor
  10. Micro-partitioning – more details: let's look deeper into micro-partitioning
  11. Micro-partitioning terminology (details)
     • A physical CPU is a single "core", also called a "processor"
     • Micro-partitioning introduces the virtual CPU concept: a virtual CPU can be a fraction of a physical CPU, but cannot be more than a full physical CPU
     • IBM's simultaneous multithreading technology (SMT) enables two threads to run on the same processor at the same time; with SMT enabled, the operating system sees twice the number of processors
     • Each logical CPU appears to the operating system as a full CPU
     (Diagram: one physical CPU divided into virtual CPUs via micro-partitioning, each doubled into logical CPUs by SMT)
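
The CPU layering on slide 11 reduces to a simple multiplication. A hedged sketch, assuming the two-threads-per-core SMT of POWER5/POWER6 described above (the function name is mine):

```python
# Physical cores host virtual CPUs; SMT makes each virtual CPU appear
# as multiple logical CPUs to the operating system.

def logical_cpus(virtual_cpus: int, smt_enabled: bool) -> int:
    """Number of CPUs the operating system sees."""
    threads_per_vcpu = 2 if smt_enabled else 1  # POWER5/POWER6 SMT runs 2 threads/core
    return virtual_cpus * threads_per_vcpu

print(logical_cpus(3, smt_enabled=True))   # 6 logical CPUs, as in the diagram
print(logical_cpus(3, smt_enabled=False))  # 3
```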
  12. Micro-partitioning terminology (details)
     • The LPAR definition sets the options for processing capacity: minimum, desired, and maximum
     • The processing capacity of an LPAR can be changed dynamically, either by the administrator at the HMC or automatically by the hypervisor
     • The LPAR definition sets the behavior under load:
       - Capped: LPAR processing capacity is limited to the desired setting
       - Uncapped: LPAR is allowed to use more than it was given
  13. Basic terminology around logical partitions
     (Diagram: installed physical processors divided into deconfigured, inactive (CUoD), dedicated, and shared; dedicated processor partitions sit beside shared processor partitions that draw entitled capacity from the shared processor pool, with virtual processors and logical (SMT) processors layered on top, SMT on or off per partition)
  14. Capped and uncapped partitions
     • Capped partition: not allowed to exceed its entitlement
     • Uncapped partition: allowed to exceed its entitlement
     • Capacity weight: used for prioritizing uncapped partitions; value 0–255; a value of 0 is referred to as a "soft cap"
     Note: the CPU utilization metric has less relevance in an uncapped partition.
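
One plausible way to picture capacity weight is as a proportional share of the pool's idle capacity. The sketch below only illustrates that idea; the hypervisor's actual scheduler is more involved, and the function and partition names are made up.

```python
# Split idle processing units among uncapped LPARs by capacity weight (0-255).

def share_idle_capacity(idle_units: float, weights: dict[str, int]) -> dict[str, float]:
    """A weight of 0 acts as a 'soft cap': that LPAR gets none of the excess."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: idle_units * w / total for name, w in weights.items()}

print(share_idle_capacity(1.5, {"prod": 200, "test": 50, "batch": 0}))
# {'prod': 1.2, 'test': 0.3, 'batch': 0.0}
```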
  15. What about system I/O adapters?
     • Back in the "old" days, each partition had to have its own dedicated adapters
     • One Ethernet adapter for a network connection
     • One SCSI or HBA card to connect to local or external disk storage
     • The number of partitions was limited by the number of available adapters
     (Diagram: four logical partitions, each with its own dedicated Ethernet and SCSI adapters)
  16. Then came the Virtual I/O server (VIOS)
     • The Virtual I/O server allows partitions to share physical adapters
     • One Ethernet adapter can now provide a network connection for multiple partitions
     • Disks on one SCSI or HBA card can now be shared with multiple partitions
     • The number of partitions is no longer limited by the number of available adapters
     (Diagram: a VIOS partition owning the Ethernet and SCSI adapters on behalf of several micro-partitions connected to the Ethernet network)
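
The VIOS idea can be pictured as a many-to-one mapping from virtual adapters in client partitions to physical adapters owned by the VIOS. A toy model with made-up names, not any VIOS interface:

```python
# Client partitions reach the network and disks through virtual adapters
# that the VIOS backs with a small number of physical adapters.

physical_adapters = {"ent0": "Ethernet", "scsi0": "SCSI"}

# client partition -> [(virtual adapter, backing physical adapter in the VIOS)]
virtual_mappings = {
    "lpar1": [("vent0", "ent0"), ("vscsi0", "scsi0")],
    "lpar2": [("vent0", "ent0"), ("vscsi0", "scsi0")],
    "lpar3": [("vent0", "ent0"), ("vscsi0", "scsi0")],
}

# The partition count is no longer bounded by physical adapter slots:
print(f"{len(virtual_mappings)} partitions share "
      f"{len(physical_adapters)} physical adapters")
```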
  17. Virtual I/O server and SCSI disks
  18. Integrated Virtual Ethernet
     • Compares the Virtual I/O Shared Ethernet Adapter (LPARs reach a PCI Ethernet adapter through virtual Ethernet drivers, the Power Hypervisor's virtual Ethernet switch, and the SEA in the VIOS) with Integrated Virtual Ethernet (LPARs share the Integrated Virtual Adapter directly through the hypervisor)
     • With Integrated Virtual Ethernet, VIOS setup is not required for sharing Ethernet adapters
     (Diagram: Virtual I/O Shared Ethernet Adapter vs. Integrated Virtual Ethernet)
  19. Let's see it in action: this demo illustrates the topics just discussed
  20. (Demo screenshot)
  21. Shared processor pools: it is possible to have multiple shared processor pools. Let's dive in deeper.
  22. Multiple Shared Processor Pools
     ► Useful for multiple business units in a single company (resource allocation)
     ► Only license the relevant software based on the VSP maximum
     ► Cap the total capacity used by a group of partitions
     ► Still allow other partitions to consume capacity not used by the partitions in the VSP
     (Diagram: a physical shared pool split into VSP1 (max cap = 4), holding AIX 5L and Linux partitions with software X, Y, Z and A, B, C, and VSP2 (max cap = 2), holding an AIX 5L partition running DB2)
  23. AIX 6.1 introduces Workload Partitions
     • Workload partitions (WPARs) are yet another way to create virtual systems
     • WPARs are partitions within a partition
     • Each WPAR is isolated from the others
     • AIX 6.1 can run on POWER5 or POWER6 hardware
  24. AIX 6 Workload Partitions (details)
     • A WPAR appears to be a standalone AIX system, created entirely within a single AIX system image and entirely in software (no hardware assist or configuration)
     • Provides an isolated process environment: processes within a WPAR can only see other processes in the same partition
     • Provides an isolated file system space: a separate branch of the global file system space is created, and all of the WPAR's processes are chrooted to this branch; processes within a WPAR see files only in this branch
     • Provides an isolated network environment: separate network addresses, hostnames, and domain names; other nodes on the network see the WPAR as a standalone system
     • Provides WPAR resource controls: the amount of system memory, CPU resources, and paging space allocated to each WPAR can be set
     • Shared system resources: OS, I/O devices, shared libraries
     (Diagram: Workload Partitions A through E inside a single AIX 6 image)
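
The isolation properties and resource controls listed above can be summarized in a toy model. The field names below are invented for illustration and are not an AIX interface:

```python
# One AIX image hosting several isolated WPARs, each with its own file
# system branch, network identity, and resource limits.

from dataclasses import dataclass

@dataclass
class WPAR:
    name: str
    fs_root: str        # private branch of the global file system (chroot target)
    ip_address: str     # own network identity; looks like a standalone host
    cpu_share_pct: int  # resource controls: CPU share ...
    memory_mb: int      # ... and memory allotted to this WPAR

aix_image = [  # all WPARs share the one underlying OS, I/O devices, and libraries
    WPAR("billing", "/wpars/billing", "10.0.0.11", cpu_share_pct=40, memory_mb=2048),
    WPAR("web",     "/wpars/web",     "10.0.0.12", cpu_share_pct=30, memory_mb=1024),
]
print(aix_image)
```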
  25. Inside a WPAR
  26. Live Application Mobility
     • The ability to move a Workload Partition from one server to another
     • Provides outage avoidance and multi-system workload balancing
     • Policy-based automation through the Workload Partitions Manager can provide more efficient resource usage
     (Diagram: WPARs (billing, QA, data mining, application server, web, eMail, dev) spread across two AIX systems sharing NFS storage, with one WPAR moving between them)
  27. Live application mobility in action: let's see this technology in action with another demo (we need to exit the presentation in order to run the demo)
  28. POWER6 hardware introduced partition mobility: with POWER6 hardware, partitions can now be moved from one system to another without stopping the applications running on that partition.
  29. Partition Mobility: active and inactive LPARs
     • Active Partition Mobility: the actual movement of a running LPAR from one physical machine to another without disrupting the operation of the OS and applications running in that LPAR
       - Applicability: workload consolidation (e.g., many to one), workload balancing (e.g., move to a larger system), planned CEC outages for maintenance/upgrades, and impending CEC outages (e.g., a hardware warning received)
     • Inactive Partition Mobility: transfers a partition that is logically "powered off" (not running) from one system to another
     • Partition Mobility is supported on POWER6 with AIX 5.3, AIX 6.1, and Linux
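
The distinction between the two flavours, plus the stated support matrix, fits in a few lines. A sketch under those assumptions only; the real migration is orchestrated by the HMC, and the function names here are mine:

```python
# Active vs. inactive partition mobility, per slide 29.

SUPPORTED_OS = {"AIX 5.3", "AIX 6.1", "Linux"}

def migration_type(partition_running: bool) -> str:
    """Active mobility moves a running LPAR without disruption;
    inactive mobility transfers one that is logically powered off."""
    return "active" if partition_running else "inactive"

def can_migrate(src_cpu: str, dst_cpu: str, os_name: str) -> bool:
    # Both machines must be POWER6, and the OS must be supported.
    return src_cpu == dst_cpu == "POWER6" and os_name in SUPPORTED_OS

print(migration_type(True), can_migrate("POWER6", "POWER6", "AIX 6.1"))  # active True
```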
  30. Live partition mobility demo: the following demo shows live partition mobility (LPM) in action
  31. IBM System p offers the best of both worlds in virtualization
     • Logical partitions (LPARs): multiple OS images, up to a maximum of 254; maximum flexibility, with different OSes and OS versions in LPARs; maximum fault, security, and resource isolation
     • AIX 6 Workload Partitions (WPARs): multiple workloads within a single OS image; minimum number of OS images (one); improved administrative efficiency by reducing the number of OS images to maintain; good fault, security, and resource isolation
     • AIX Workload Partitions can be used in LPARs
     (Diagram: AIX 5.3, AIX 6, and Linux partitions on the Power Hypervisor, with a Virtual I/O Server (VIOS) providing Ethernet and Fibre Channel adapter sharing, virtualized disks, and interpartition communication alongside dedicated I/O; one AIX 6 LPAR hosts WPARs for an application server, web server, billing, test, and BI; response-time and utilization based workload and resource management spans the system)
  32. Virtualization benefits
     • Increase utilization: single-application servers often run at low average utilization levels, and their idle capacity cannot be used; virtualized servers run at high utilization levels
     • Simplify workload sizing: sizing new workloads is difficult; LPARs can be resized to match needs; capacity can be over-committed; scale-up and scale-out applications can run on the same hardware platform
     (Chart: CPU utilization from 8:00 to 4:00, showing purchased capacity well above both the peak and the average)
  33. Backup slides: still more details for those interested…
  34. Partition capacity entitlement
     • Processing units: 1.0 processing unit represents one physical processor
     • Entitled processor capacity: a commitment of capacity that is reserved for the partition; sets the upper limit of processor utilization for capped partitions; each virtual processor must be granted at least 1/10 of a processing unit of entitlement
     • Shared processor capacity is always delivered in terms of whole physical processors
     (Diagram: one physical processor providing 1.0 processing units, shown against shares of 0.5 and 0.4 processing units and the minimum requirement of 0.1 processing units)
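
The 1/10-of-a-processing-unit rule per virtual processor is easy to check mechanically. A small validation sketch, not any real HMC logic:

```python
# Each virtual processor must be backed by at least 0.1 processing units;
# 1.0 processing unit equals one physical processor.

def validate_entitlement(processing_units: float, virtual_processors: int) -> bool:
    return processing_units >= 0.1 * virtual_processors

print(validate_entitlement(0.5, 5))  # True: exactly 0.1 units per virtual processor
print(validate_entitlement(0.5, 6))  # False: would drop below the 1/10 minimum
```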
  35. Capped shared processor LPAR
     (Chart: LPAR capacity utilization over time, held at or below the entitled processor capacity and under the maximum processor capacity; labels show minimum processor capacity, utilized capacity, ceded capacity, and pool idle capacity available)
  36. Uncapped shared processor LPAR
     (Chart: the same axes and labels, but utilization may exceed the entitled processor capacity up to the maximum processor capacity by drawing on pool idle capacity)
  37. Shared processor partitions
     • Micro-Partitioning allows multiple partitions to share one physical processor: up to 10 partitions per physical processor, and up to 254 partitions active at the same time
     • A partition's resource definition includes minimum, desired, and maximum values for each resource:
       - Processor capacity and virtual processors, capped or uncapped (with a capacity weight)
       - Dedicated memory: minimum of 128 MB, in 16 MB increments
       - Physical or virtual I/O resources
     (Diagram: six LPARs sharing a pool of four physical CPUs)
  38. Understanding min/max/desired resource values
     • The desired value for a resource is given to a partition if enough of the resource is available
     • If there is not enough of the resource to meet the desired value, a lower amount is allocated
     • If there is not enough of the resource to meet the minimum value, the partition will not start
     • The maximum value is only used as an upper limit for dynamic partitioning operations
  39. Partition capacity entitlement example
     • The shared pool has 2.0 processing units available; LPARs are activated in sequence
     • Partition 1 activated: min = 1.0, max = 2.0, desired = 1.5; starts with 1.5 allocated processing units
     • Partition 2 activated: min = 1.0, max = 2.0, desired = 1.0; does not start (only 0.5 units remain, below its minimum)
     • Partition 3 activated: min = 0.1, max = 1.0, desired = 0.8; starts with 0.5 allocated processing units
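
This example can be reproduced by coding the activation rules from slide 38. A minimal sketch, assuming the pool simply hands out whatever remains when the desired value cannot be met (the function name and return convention are mine):

```python
# Activation rules: give 'desired' if the pool allows it, fall back to what
# is left if that still meets 'min', otherwise the partition does not start.

def activate(pool_free: float, minimum: float, desired: float) -> tuple[float, float]:
    """Return (allocated_units, remaining_pool); 0.0 allocated means no start."""
    if pool_free >= desired:
        return desired, pool_free - desired
    if pool_free >= minimum:
        return pool_free, 0.0
    return 0.0, pool_free

pool = 2.0
alloc1, pool = activate(pool, minimum=1.0, desired=1.5)  # 1.5 allocated, 0.5 left
alloc2, pool = activate(pool, minimum=1.0, desired=1.0)  # 0.0: does not start
alloc3, pool = activate(pool, minimum=0.1, desired=0.8)  # 0.5: all that is left
print(alloc1, alloc2, alloc3)  # 1.5 0.0 0.5, matching the slide
```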
  40. Capped and uncapped partitions
     • Capped partition: not allowed to exceed its entitlement
     • Uncapped partition: allowed to exceed its entitlement
     • Capacity weight: used for prioritizing uncapped partitions; value 0–255; a value of 0 is referred to as a "soft cap"
  41. Shared dedicated capacity
     • Today, unused capacity in dedicated partitions gets wasted
     • Dedicated processor partitions often have excess capacity that can be utilized by uncapped micro-partitions
     • With the new support, a dedicated partition donates its excess cycles to the uncapped partitions, resulting in increased resource utilization
     • The dedicated processor partition maintains the performance characteristics and predictability of the dedicated environment under load
     (Charts: utilization of a 1-way dedicated partition plus two 0.5-unit uncapped partitions, before and after shared dedicated capacity; the equivalent workload completes with less wasted dedicated capacity)
  42. WPAR Manager view of WPARs
  43. Active Memory Sharing overview
     • The next step in resource virtualization, analogous to shared processor partitions sharing the processor resources available in a pool of processors
     • Supports over-commitment of physical memory, with overflow going to a paging device: users can define a partition with a logical memory size larger than the available physical memory, and can activate a set of partitions whose aggregate logical memory size exceeds the available physical memory
     • Enables fine-grained sharing of physical memory and automated expansion and contraction of a partition's physical memory footprint based on workload demands
     • Supports OS collaborative memory management (ballooning) to reduce hypervisor paging
     • A pool of physical memory is dynamically allocated among multiple logical partitions as needed to optimize overall physical memory usage in the pool
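
The over-commitment arithmetic can be illustrated in a few lines; the pool size and partition names below are invented for illustration only.

```python
# Logical memory may exceed the physical pool; the overflow is notionally
# backed by the paging device, as slide 43 describes.

PHYSICAL_POOL_MB = 16_384  # physical memory in the shared pool (illustrative)

logical_sizes_mb = {"lpar1": 8_192, "lpar2": 8_192, "lpar3": 4_096}

aggregate = sum(logical_sizes_mb.values())
overcommit_mb = max(0, aggregate - PHYSICAL_POOL_MB)

print(f"aggregate logical memory: {aggregate} MB")
print(f"over-committed by {overcommit_mb} MB, backed by the paging device")
```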
