PowerVM Virtualization, Plain and Simple
Richard Bassemir, Senior Software Engineer, ISV Business Strategy and Enablement
Goals with Virtualization
The virtualization elevator pitch
First there were servers
(diagram: the physical view vs. the user's view of standalone servers)
Then there were logical partitions
(diagram: an 8-CPU server in the physical view, divided in the logical view into partitions of 1, 3, 2, and 2 CPUs)
Then came dynamic logical partitions
(diagram: the same 8-CPU server, with the partition sizes of 1, 3, 2, and 2 CPUs being reshaped on the fly)
Dynamic LPAR: move resources between live partitions
(diagram: an HMC and the hypervisor managing four partitions running AIX 5L and Linux: #1 Production, #2 Legacy Apps, #3 Test/Dev, #4 File/Print)
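As a sketch of what a dynamic move has to respect, here is the bookkeeping in Python; the partition names, counts, and bounds are hypothetical, and this models only the min/max check, not the HMC's actual behavior.

    # Sketch: moving whole processors between two live partitions while
    # honoring each partition's configured minimum and maximum (made-up values).
    lpars = {
        "production": {"cpus": 3, "min": 1, "max": 4},
        "test_dev":   {"cpus": 1, "min": 1, "max": 2},
    }

    def move_cpus(src, dst, qty):
        # Refuse moves that would violate either partition's profile bounds.
        if lpars[src]["cpus"] - qty < lpars[src]["min"]:
            raise ValueError(f"{src} would drop below its minimum")
        if lpars[dst]["cpus"] + qty > lpars[dst]["max"]:
            raise ValueError(f"{dst} would exceed its maximum")
        lpars[src]["cpus"] -= qty
        lpars[dst]["cpus"] += qty

    move_cpus("production", "test_dev", 1)
    print(lpars)  # production: 2 CPUs, test_dev: 2 CPUs

The min/max/desired values referenced later in this deck are exactly the bounds such a move must stay within.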
Now there is micro-partitioning
(diagram: the same 8-CPU server carved into partitions of 0.2, 2.3, 1.2, 1.0, 0.3, 1.5, and 0.9 CPUs)
Micro-partitioning terminology
Micro-partitioning: more details
Micro-partitioning terminology (details)
(diagram: one physical CPU divided by micro-partitioning into three virtual CPUs; with SMT, each virtual CPU presents two logical CPUs, and each logical CPU appears to the operating system as a full CPU)
Micro-partitioning terminology (details, continued)
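The relationships in the preceding diagram come down to simple arithmetic. A minimal sketch, assuming the two hardware threads per core that SMT provides on this generation of hardware:

    # Sketch: entitlement, virtual CPUs, and logical CPUs for one partition.
    entitled_capacity = 1.5    # processing units (1.0 = one physical processor)
    virtual_cpus = 3           # concurrent operations the OS can schedule
    smt_threads = 2            # threads per virtual CPU with SMT on

    logical_cpus = virtual_cpus * smt_threads          # what the OS sees
    units_per_vcpu = entitled_capacity / virtual_cpus  # power behind each one
    print(f"OS sees {logical_cpus} CPUs; each virtual CPU is backed "
          f"by {units_per_vcpu:.2f} processing units")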
Basic terminology around logical partitions
(diagram labels: shared processor pool; shared processor partition with SMT off; shared processor partition with SMT on; dedicated processor partition with SMT off; deconfigured; inactive (CUoD); dedicated; shared; virtual; logical (SMT); installed physical processors; entitled capacity)
Capped and uncapped partitions
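The speaker notes (note 10 below) describe how spare capacity is shared: each uncapped partition's share is its variable capacity weight divided by the sum of all uncapped weights. A small sketch with made-up partitions:

    # Sketch: dividing spare pool capacity among uncapped partitions by
    # variable capacity weight (0-255); weight 0 acts as a "soft cap".
    spare_units = 2.0                              # idle capacity right now
    weights = {"web": 128, "db": 64, "batch": 0}   # hypothetical partitions
    total_weight = sum(weights.values())

    for lpar, w in weights.items():
        share = spare_units * w / total_weight if total_weight else 0.0
        print(f"{lpar}: +{share:.2f} processing units")

Here "web" receives twice the extra capacity of "db", and "batch" is soft-capped at its entitlement.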
What about system I/O adapters?
(diagram: four logical partitions of 1, 3, 2, and 2 CPUs, each owning its own physical Ethernet and SCSI adapters)
Then came the Virtual I/O Server (VIOS)
(diagram: a VIOS partition owns the physical Ethernet and SCSI adapters and serves client partitions of 0.5, 1.1, 0.3, 1.4, and 2.1 CPUs over the Ethernet network)
Virtual I/O Server and SCSI disks
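A toy model of the virtual SCSI relationship this slide illustrates; every name below is hypothetical. On a real VIOS the mapping is typically created with the mkvdev command, but this sketch only shows the idea of one partition exporting its physical disks to clients.

    # Toy model: the VIOS owns physical disks and exports each one to a
    # client partition through a virtual SCSI server adapter.
    vscsi_map = {
        "vhost0": {"backing_device": "hdisk2", "client": "lpar1"},
        "vhost1": {"backing_device": "hdisk3", "client": "lpar2"},
    }
    for vadapter, m in vscsi_map.items():
        print(f"{vadapter}: {m['backing_device']} -> {m['client']}")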
Virtual I/O Shared Ethernet Adapter vs. Integrated Virtual Ethernet
(left diagram, Shared Ethernet Adapter: LPARs #1, #2, and #3 run virtual Ethernet drivers that reach the external network through the Power Hypervisor's virtual Ethernet switch and the SEA in the VIOS, which bridges to a PCI Ethernet adapter)
(right diagram, Integrated Virtual Ethernet: each LPAR's Ethernet driver connects through the Power Hypervisor directly to the integrated virtual adapter; VIOS setup is not required for sharing Ethernet adapters)
Let's see it in action
Shared processor pools
Multiple shared processor pools
(diagram: the physical shared pool divided into virtual shared pools: VSP1 with a maximum cap of 4 CPUs running AIX 5L software X, Y, Z; VSP2 with a maximum cap of 2 CPUs running Linux software A, B, C and AIX 5L with DB2)
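One way to picture a pool's maximum cap, as a sketch; the proportional trim is an illustrative assumption rather than the hypervisor's actual scheduling algorithm, and the names and numbers are made up.

    # Sketch: a virtual shared pool limits the total capacity its member
    # partitions can consume (useful for capping software license exposure).
    pool_max_cap = 2.0                            # processing units (VSP2)
    demand = {"linux_abc": 1.5, "aix_db2": 1.2}   # hypothetical current demand
    total_demand = sum(demand.values())

    scale = min(1.0, pool_max_cap / total_demand)
    for lpar, d in demand.items():
        print(f"{lpar}: granted {d * scale:.2f} of {d:.2f} requested units")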
AIX 6.1 introduces Workload Partitions
AIX 6 Workload Partitions (details)
(diagram: Workload Partitions A through E running inside a single AIX 6 image)
Inside a WPAR
Live Application Mobility: the ability to move a Workload Partition from one server to another
(diagram: a Workload Partitions Manager, driven by policy, relocating Workload Partitions (Billing, QA, Data Mining, Application Server, Web, eMail, and the Dev application partition) between AIX #1 and AIX #2, with shared storage on NFS)
Provides outage avoidance and multi-system workload balancing; policy-based automation can provide more efficient resource usage.
Live application mobility in action
POWER6 hardware introduced partition mobility
Partition Mobility: active and inactive LPARs
Partition Mobility is supported on POWER6 with AIX 5.3, AIX 6.1, and Linux.
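A schematic of the two flavors, reduced to their essential steps; this illustrates the general active/inactive distinction, not PowerVM's actual migration protocol.

    # Sketch: active mobility moves a running partition; inactive mobility
    # moves one that is powered off.
    def migrate(lpar, active):
        if active:
            steps = ["copy memory to the target while still running",
                     "re-copy pages dirtied during the transfer",
                     "suspend briefly and resume on the target"]
        else:
            steps = ["capture the definition of the powered-off partition",
                     "recreate it on the target",
                     "activate it there"]
        for step in steps:
            print(f"{lpar}: {step}")

    migrate("db2_lpar", active=True)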
Live partition mobility demo
IBM System p offers the best of both worlds in virtualization
(diagram: logical partitions (LPARs) on the left; on the right, AIX 6 Workload Partitions for Application Server, Web Server, Billing, Test, and BI inside a single AIX instance)
AIX Workload Partitions can be used in LPARs.
Virtualization benefits
(chart labels: purchased capacity, peak, average)
Backup slides
Partition capacity entitlement
(diagram: processing capacity is expressed in processing units; one physical processor equals 1.0 processing units, with example allocations of 0.5 and 0.4 processing units and a minimum requirement of 0.1 processing units)
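The conversions on this slide, as a sketch:

    # Sketch: processing-unit conversions for shared-processor partitions.
    units_per_processor = 1.0   # one physical processor = 1.0 processing units
    minimum_partition = 0.1     # smallest shared-processor partition allowed
    partitions_per_cpu = round(units_per_processor / minimum_partition)
    print(f"up to {partitions_per_cpu} partitions can share one processor")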
 
 
Shared processor partitions
(diagram: six LPARs time-sliced across four physical CPUs)
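The time-slicing can be pictured over the hypervisor's dispatch window (commonly described as 10 ms on this hardware generation); a sketch with hypothetical entitlements:

    # Sketch: within each dispatch window, a virtual processor receives
    # CPU time in proportion to its entitlement.
    window_ms = 10.0
    entitlement = {"lpar1": 0.5, "lpar2": 0.25, "lpar3": 0.25}  # units
    for lpar, e in entitlement.items():
        print(f"{lpar}: {e * window_ms:.1f} ms of every {window_ms:.0f} ms window")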
Understanding min/max/desired resource values
Partition capacity entitlement example
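Note 18 below states two constraints that bound the virtual processor count: a virtual processor can represent at most one physical processor, and each one must be granted at least 0.1 processing units. As a sketch, with a made-up entitlement:

    # Sketch: what an entitlement implies about online virtual processors.
    import math
    entitlement = 1.6                    # processing units for the partition
    min_vcpus = math.ceil(entitlement)   # each vCPU holds at most 1.0 unit
    max_vcpus = round(entitlement * 10)  # each vCPU needs at least 0.1 unit
    print(f"virtual processors must fall between {min_vcpus} and {max_vcpus}")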
Capped and uncapped partitions
Shared Dedicated Capacity
Dedicated processor partitions often have excess capacity that can be utilized by uncapped micro-partitions, increasing resource utilization.
(diagram: today vs. with Shared Dedicated Capacity, the equivalent workload completes while otherwise-idle dedicated cycles are put to use)
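A sketch of the donation idea with made-up numbers:

    # Sketch: a dedicated-processor partition donates its idle cycles to the
    # shared pool, where uncapped micro-partitions can absorb them.
    dedicated_cpus = 4
    busy_fraction = 0.6                       # hypothetical utilization
    donated = dedicated_cpus * (1 - busy_fraction)
    print(f"{donated:.1f} processors' worth of capacity donated to the pool")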
WPAR Manager view of WPARs
Active Memory Sharing overview
A pool of physical memory is dynamically allocated among multiple logical partitions as needed to optimize overall physical memory usage in the pool.
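The overcommitment at the heart of Active Memory Sharing, as a sketch; the partition names and sizes are illustrative:

    # Sketch: logical memory across shared-memory partitions can exceed the
    # physical pool; the hypervisor pages the excess to a paging device.
    pool_gb = 32
    logical_gb = {"lpar1": 16, "lpar2": 16, "lpar3": 16}  # hypothetical sizes
    overcommit = sum(logical_gb.values()) / pool_gb
    print(f"logical:physical = {overcommit:.1f}:1; excess is hypervisor-paged")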

Editor's Notes

  1. This presentation is targeted at a less technical audience, with the objective of explaining the power of IBM's virtualization technologies.
  2. Any one, or all, of these topics is a good reason to virtualize. Some will say you can even lower your overall headcount cost; that is somewhat true, but it requires highly skilled UNIX talent. A single UNIX admin might support perhaps 50 physical servers, and that is a lot. By comparison, two UNIX admins trained and skilled in virtualization can easily manage four 595-class systems carrying 350 to 400 LPARs, increasing the overall effectiveness of full-time staff.
  3. Allocate processors, memory, and I/O to create virtual servers: the minimum is 128 MB of memory, one CPU, and one PCI-X adapter slot. All resources can be allocated independently and moved between live partitions, and applications are notified of configuration changes. Movement can be automated using Partition Load Manager. Works with AIX 5.2+ or Linux 2.4+.
  4. Micro-partitioning allows many more logical partitions to be created, since you are no longer required to assign a full processor to a logical partition. Each partition can now be assigned just enough CPU resource to do its workload, allowing other partitions to use the remaining CPU resources.
  5. Here is the basic terminology.
  6. The next few slides go into a little more detail. Use them if the audience is interested in knowing more.
  7. Talk about the relationship between the physical CPU, the virtual CPU and the logical CPU.
  8. Explain the concept of min, desired (or entitled), and maximum. Explain how the behavior of the hypervisor is controlled by the LPAR definition.
  9. The diagram in this chart shows the relationships and new concepts in the Micro-Partitioning processor terminology used in this presentation. Virtual processors: these are the whole number of concurrent operations that the operating system can use on a partition. The processing power can be conceptualized as being spread equally across these virtual processors. Selecting the optimal number of virtual processors depends on the workload in the partition: some partitions benefit from greater concurrency, where other partitions require greater power. The maximum number of virtual processors per partition is 64. Dedicated processors: whole processors that are assigned to a single partition. If you choose to assign dedicated processors to a logical partition, you must assign at least one processor to that partition. By default, a powered-off logical partition using dedicated processors will have its processors available to the shared processing pool; when the processors are in the shared processing pool, an uncapped partition that needs more processing power can use the idle processing resources. However, when you power on the dedicated partition while the uncapped partition is using the processors, the activated partition will regain all of its processing resources. If you want to prevent dedicated processors from being used in the shared processing pool, you can disable this function using the logical partition profile properties panels on the Hardware Management Console. Shared processor pool: the POWER Hypervisor schedules shared processor partitions from a set of physical processors called the shared processor pool. By definition, these processors are not associated with dedicated partitions. Deconfigured processor: a failing processor left outside the system's configuration after a dynamic processor deallocation has occurred.
  10. A capped partition is not allowed to exceed its capacity entitlement, while an uncapped partition is; in fact, it may exceed its maximum processor capacity. An uncapped partition is only limited in its ability to consume cycles by the lack of online virtual processors and by its variable capacity weight attribute. The variable capacity weight attribute is a number between 0 and 255, which represents the relative share of extra capacity that the partition is eligible to receive; this parameter applies only to uncapped partitions. A partition's share is computed by dividing its variable capacity weight by the sum of the variable capacity weights for all uncapped partitions. Therefore, a value of 0 may be used to prevent a partition from receiving extra capacity; this is sometimes referred to as a "soft cap". There is overhead associated with the maintenance of online virtual processors, so clients should carefully consider their capacity requirements before choosing values for these attributes. In general, the minimum, desired, and maximum virtual processor attributes should parallel the minimum, desired, and maximum capacity attributes in some fashion. A special allowance should be made for uncapped partitions, since they are allowed to consume more than their entitlement: the administrator may want to define the desired and maximum virtual processor attributes some percentage above the corresponding entitlement attributes. The exact percentage is installation specific, but 25 to 50 percent seems like a reasonable number.
  11. Explain how partitions were limited by the number of physical adapters in the system.
  12. The Integrated Virtual Ethernet adapter is a standard feature of every POWER6 processor-based server. You can select from different offerings according to the specific IBM System p server; at the time of writing, the IBM System p 570 is the first server to offer this feature. The IVE consists of a physical Ethernet adapter that is connected directly to the GX+ bus of a POWER6 processor-based server instead of to a PCIe or PCI-X bus as an optional or integrated PCI adapter. This gives the IVE the high throughput and low latency of a bus embedded in the I/O controller. IVE also includes special hardware features that provide logical Ethernet adapters that can communicate directly with logical partitions (LPARs), reducing interaction with the POWER Hypervisor (PHYP). In addition to 10 Gbps speeds, the IVE can provide the familiar 1 Gbps Ethernet connectivity common on POWER5 and POWER5+ processor-based servers. Prior to IVE, virtual Ethernet provided a connection between LPARs, while an SEA on the Virtual I/O Server allowed connection to an external network. The IVE replaces the need for both the virtual Ethernet and the SEA, providing most of the function of each; it eliminates the need to move packets (using virtual Ethernet) between partitions and then through a shared Ethernet adapter (SEA) to an Ethernet port. LPARs can share IVE ports with improved performance.
  13. A System WPAR presents an environment most similar to a standalone AIX 5L system. This WPAR type runs most of the system services that would be found in a standalone system and does not share writable file systems with any other WPAR or the global system. An Application WPAR has all the process isolation that a System WPAR provides, except that it shares file system namespace with the global system and any other Application WPAR defined within the system. Other than the application itself, a typical Application WPAR runs only an additional lightweight init process within the WPAR.
  14. Note that the read-only directories are the file systems provided by the global environment.
  15. Application Mobility is an optional capability that allows an administrator to move a running WPAR from one system to another, using advanced checkpoint/restart capabilities that make the movement transparent to the end user.
  16. The demo is a Flash file viewed in a browser; it cannot be started from within the presentation. Unzip the LAM_DB2_SAP_demo.zip file and open the HTML file to start the demo.
  17. Again, it is necessary to leave the presentation to run the demo. There are two choices: for LPM with DB2 and network-attached storage, use LPM_DB2_NAS_demo.zip; for LPM with DB2 and SAP, use LPM_DB2_SAP_demo.zip. In either case, unzip the file locally and open the HTML file to start the demo.
  18. Processor capacity attributes are specified in terms of processing units. 1.0 processing unit represents one physical processor. 1.5 processing units is equivalent to one and a half physical processors. For example, a shared processor partition with 2.2 processing units has the equivalent power of 2.2 physical processors. Processor units are also used; they represent the processor percentage allocated to a partition. One processor unit represents one percent of one physical processor. One hundred processor units is equivalent to one physical processor. Shared processor partitions may be defined with a processor capacity as small as 1/10 of a physical processor. A maximum of 10 partitions may be started for each physical processor in the platform. A maximum of 254 partitions may be active at the same time. When a partition is started, the system chooses the partition’s entitled processor capacity from the specified capacity range. The value that is chosen represents a commitment of capacity that is reserved for the partition. This capacity cannot be used to start another shared partition; otherwise, capacity could be overcommitted. Preference is given to the desired value, but these values cannot always be used, because there may not be enough unassigned capacity in the system. In that event, a different value is chosen, which must be greater than or equal to the minimum capacity attribute. Otherwise, the partition cannot be started. The same basic process applies for selecting the number of online virtual processors with the extra restriction that each virtual processor must be granted at least 1/10 of a processing unit of entitlement. In this way, the entitled processor capacity may affect the number of virtual processors that are automatically brought online by the system during boot. The maximum number of virtual processors per partition is 64. The POWER Hypervisor saves and restores all necessary processor states, when preempting or dispatching virtual processors, which for simultaneous multi-threading-enabled processors means two active thread contexts. The result for shared processors is that two of the logical CPUs will always be scheduled in a physical sense together. These sibling threads are always scheduled in the same partition.
  19. Micro-partitioning allows multiple partitions to share one physical processor. A partition may be defined with a processor capacity as small as 10 processor units; this represents 1/10 of a physical processor. Each processor can be shared by up to 10 shared processor partitions. The shared processor partitions are dispatched and time-sliced on the physical processors under control of the POWER Hypervisor. Micro-partitioning is supported across the entire POWER5 product line, from entry to high-end systems. Shared processor partitions still need dedicated memory, but the partition's I/O requirements can be supported through Virtual Ethernet and the Virtual SCSI Server. Utilizing all virtualization features, support for up to 254 shared processor partitions is possible. The shared processor partitions are created and managed by the HMC. When you start creating a partition, you have to choose between a shared processor partition and a dedicated processor partition. When setting up a partition, you have to define the resources that belong to the partition, such as memory and I/O resources. For shared processor partitions, you have to specify the partition attributes that define the dimensions and performance characteristics of shared partitions: minimum, desired, and maximum processor capacity; minimum, desired, and maximum number of virtual processors; capped or uncapped; and variable capacity weight.
  20. See note 10: the backup slide repeats the capped and uncapped partition material.
  21. Virtual real memory provides, on capable Power Systems servers, the ability to overcommit the system's memory, enabling better memory utilization and dynamic memory allocation across partitions in response to partition workload. Virtual real memory helps users reduce costs because they do not have to dedicate memory to a particular logical partition; in doing so, they can reduce the total amount of memory in the system. It also allows users to "right-size" memory to their needs. Virtual Real Memory is the next step in the evolution of resource virtualization on POWER systems: the experience gained in processor virtualization is applied to the virtualization of real memory to enable better memory utilization across partitions. The hypervisor manages a Virtual Real Memory Pool, which is a portion of physical memory set aside to meet the memory residency requirements of a set of partitions defined as "shared memory partitions". The hypervisor moves page frames in and out of the system to a paging device as required to support overcommitment of physical memory, and the OS collaborates with the hypervisor to reduce hypervisor paging. The most important aspect of the VRM function is the ability to overcommit the system's memory. The virtualization of "real" main storage enables better memory utilization and dynamic memory allocation across partitions in response to partition workload. The hypervisor distributes the physical memory in the pool among these partitions based on partition configuration parameters and dynamically changes a partition's physical memory footprint based on workload demands. The hypervisor also coalesces common pages shared across shared memory partitions to reduce a partition's cache footprint and free page frames.