
IBM PowerVM Virtualization Managing and Monitoring


Learn about IBM PowerVM virtualization technology, a combination of hardware and software that supports and manages virtual environments on systems based on POWER5, POWER5+, POWER6, and POWER7 processors. It lowers energy costs through server consolidation, reduces the cost of your existing infrastructure, and provides better management of the growth, complexity, and risk of that infrastructure. For more information on Power Systems, visit http://ibm.co/Lx6hfc.

Visit the official Scribd channel of IBM India Smarter Computing at http://bit.ly/VwO86R to access more documents.

Transcript

  • 1. Front cover. IBM PowerVM Virtualization Managing and Monitoring. Provides managing and monitoring best practices focused on virtualization. Covers AIX, IBM i, and Linux for Power virtual I/O clients. Includes Virtual I/O Server 2.2 enhancements. Nicolas Guerin, Jimi Inge, Narutsugu Itoh, Robert Miciovici, Rajendra Patel, Arthur Török. ibm.com/redbooks
  • 2. International Technical Support Organization. IBM PowerVM Virtualization Managing and Monitoring. May 2012. SG24-7590-03
  • 3. Note: Before using this information and the product it supports, read the information in “Notices” on page xxvii. Fourth Edition (May 2012). This edition applies to: Version 7, Release 1 of AIX (product number 5765-G98); Version 7, Release 1 of IBM i (product number 5770-SS1); Version 2, Release 2, Modification 10, Fixpack 24, Service Pack 1 of the Virtual I/O Server; Version 7, Release 7, Modification 2 of the HMC; Version EM350, release 85 of the POWER6 System Firmware; Version AL720, release 80 of the POWER7 System Firmware. © Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
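The edition notice ties the procedures in this book to specific software levels. As a quick, minimal sketch (assuming command-line access to your own systems; the commands below are standard level-checking commands, and any sample output is illustrative rather than taken from the book), you can confirm the corresponding levels in your environment before following along:

       $ ioslevel        # on the Virtual I/O Server (padmin shell): reports the VIOS level, for example 2.2.x.x
       $ lshmc -V        # on the HMC: reports the HMC version, release, and service pack
       $ oslevel -s      # on an AIX client partition: reports the AIX technology level and service pack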
  • 4. Contents
       Figures . . . xiii
       Tables . . . xix
       Examples . . . xxi
       Notices . . . xxvii
       Trademarks . . . xxviii
       Preface . . . xxix
       The team who wrote this book . . . xxx
       Now you can become a published author, too! . . . xxxii
       Comments welcome . . . xxxii
       Stay connected to IBM Redbooks . . . xxxiii
       Summary of changes . . . xxxv
       May 2012, Fourth Edition . . . xxxv
       Part 1. PowerVM virtualization management . . . 1
       Chapter 1. Introduction . . . 3
       1.1 PowerVM Editions . . . 4
       1.1.1 PowerVM Express Edition . . . 5
       1.1.2 PowerVM Standard Edition . . . 5
       1.1.3 PowerVM Enterprise Edition . . . 6
       1.1.4 How to determine the PowerVM Edition . . . 9
       1.1.5 Software licensing . . . 9
       1.2 Maintenance strategy . . . 9
       1.3 New features for Virtual I/O Server Version 2.2 . . . 10
       Chapter 2. Virtual storage management . . . 13
       2.1 Disk mapping options . . . 14
       2.1.1 Physical volumes . . . 14
       2.1.2 Logical volumes . . . 15
       2.1.3 File-backed devices . . . 17
       2.1.4 Logical units . . . 17
       2.2 Virtual optical devices . . . 18
       2.3 Virtual tape devices . . . 18
       2.3.1 Moving the virtual tape drive . . . 19
       2.3.2 Finding the partition that holds the virtual tape drive . . . 20
  • 5. Contents (continued)
       2.3.3 Unconfiguring a virtual tape drive for local use . . . 22
       2.3.4 Unconfiguring a virtual tape drive to be moved . . . 22
       2.4 Using file-backed virtual optical devices . . . 23
       2.5 Mapping LUNs over vSCSI to hdisks . . . 27
       2.5.1 Naming conventions . . . 29
       2.5.2 Virtual device slot numbers . . . 31
       2.5.3 Tracing a configuration . . . 33
       2.6 Managing Shared Storage Pools . . . 48
       2.6.1 Creating the shared storage pool . . . 48
       2.6.2 Adding physical volumes to the shared storage pool . . . 50
       2.6.3 Creating and mapping logical units . . . 51
       2.6.4 Tracing logical units . . . 53
       2.6.5 Unmapping and removing logical units . . . 55
       2.6.6 Managing VLAN tagging . . . 57
       2.7 Replacing a disk on the Virtual I/O Server . . . 57
       2.7.1 Replacing an LV-backed disk in the mirroring environment . . . 58
       2.7.2 Replacing a mirrored storage pool-backed disk . . . 63
       2.7.3 Replacing a disk in the shared storage pool . . . 67
       2.8 Managing multiple storage security zones . . . 68
       2.9 Storage planning with migration in mind . . . 70
       2.9.1 Virtual adapter slot numbers . . . 70
       2.9.2 SAN considerations for LPAR migration . . . 72
       2.9.3 Backing devices and virtual target devices . . . 73
       2.10 Managing N_Port ID virtualization . . . 74
       2.10.1 Managing virtual Fibre Channel adapters . . . 75
       2.10.2 Replacing a Fibre Channel adapter configured with NPIV . . . 77
       2.10.3 Migrating to virtual Fibre Channel adapter environments . . . 78
       Chapter 3. Virtual network management . . . 93
       3.1 Modifying IP addresses . . . 94
       3.1.1 Virtual I/O Server . . . 94
       3.1.2 Client partitions . . . 95
       3.2 Modifying VLANs . . . 96
       3.2.1 Process overview . . . 97
       3.2.2 Hardware Management Console . . . 98
       3.2.3 Virtual I/O Server . . . 102
       3.2.4 Client partitions . . . 104
       3.3 Modifying MAC addresses . . . 106
       3.3.1 Hardware Management Console . . . 106
       3.3.2 Operating system MAC modifications . . . 109
       3.4 Managing the mapping of network devices . . . 114
       3.4.1 Virtual network adapters and VLANs . . . 115
       3.4.2 Virtual device slot numbers . . . 115
  • 6. Contents (continued)
       3.4.3 Tracing a configuration . . . 115
       3.5 SEA threading on the Virtual I/O Server . . . 121
       3.6 Tuning network throughput . . . 122
       3.6.1 Network Layers . . . 123
       3.6.2 Operating system device configuration . . . 123
       3.6.3 Tuning network payloads . . . 124
       3.6.4 Payload tuning examples . . . 130
       3.6.5 Payload tuning verification . . . 132
       3.6.6 TCP checksum offload . . . 136
       3.6.7 Largesend option . . . 136
       3.7 Shared Ethernet Adapter failover with Load Sharing . . . 138
       3.8 Quality of Service . . . 145
       3.8.1 Strict mode . . . 146
       3.8.2 Loose mode . . . 146
       3.8.3 Setting up QoS . . . 147
       3.8.4 General rules for setting modes for QoS . . . 149
       3.9 Denial of Service hardening . . . 149
       3.9.1 Solution . . . 149
       Chapter 4. Virtual I/O Server security . . . 151
       4.1 Network security . . . 152
       4.1.1 Stopping network services . . . 152
       4.1.2 Setting up the firewall . . . 152
       4.1.3 Enabling ping through the firewall . . . 155
       4.1.4 Security hardening rules . . . 156
       4.1.5 DoS hardening . . . 157
       4.2 The Virtual I/O Server as an LDAP client . . . 157
       4.2.1 Creating a key database file . . . 157
       4.2.2 Configuring the LDAP server . . . 164
       4.2.3 Configuring the Virtual I/O Server as an LDAP client . . . 170
       4.3 Network Time Protocol configuration . . . 172
       4.4 Setting up Kerberos on the Virtual I/O Server . . . 173
       4.5 Managing users . . . 175
       4.5.1 Creating a system administrator account . . . 176
       4.5.2 Creating a service representative (SR) account . . . 177
       4.5.3 Creating a read-only account . . . 177
       4.5.4 Checking the global command log (gcl) . . . 178
       4.6 Role-based access control . . . 178
       4.6.1 Authorizations . . . 179
       4.6.2 Roles . . . 187
       4.6.3 Privileges . . . 188
       4.6.4 Using role-based access control . . . 189
  • 7. Contents (continued)
       Chapter 5. Virtual I/O Server maintenance . . . 193
       5.1 Installing or migrating to Virtual I/O Server Version 2.x . . . 194
       5.1.1 Installing Virtual I/O Server Version 2.2.1.0 . . . 195
       5.1.2 Migrating from an HMC . . . 197
       5.1.3 Migrating from a DVD that is managed by an HMC . . . 198
       5.1.4 Migrating from a DVD that is managed by an IVM . . . 208
       5.2 Virtual I/O Server backup strategy . . . 210
       5.2.1 Backing up external device configuration . . . 211
       5.2.2 Backing up HMC resources . . . 211
       5.2.3 Backing up IVM resources . . . 212
       5.2.4 Backing up operating systems from the client logical partitions . . . 212
       5.2.5 Backing up the Virtual I/O Server operating system . . . 213
       5.3 Scheduling backups of the Virtual I/O Server . . . 214
       5.4 Backing up the Virtual I/O Server operating system . . . 215
       5.4.1 Backing up to tape . . . 215
       5.4.2 Backing up to a DVD-RAM . . . 216
       5.4.3 Backing up to a remote file . . . 218
       5.5 Backing up user-defined virtual devices . . . 221
       5.5.1 Backing up user-defined virtual devices using viosbr . . . 222
       5.5.2 Scheduling regular backups using the viosbr command . . . 223
       5.6 Backing up user-defined virtual devices using backupios . . . 223
       5.6.1 Backing up using IBM Tivoli Storage Manager . . . 229
       5.7 Restoring the Virtual I/O Server . . . 231
       5.7.1 Restoring the HMC configuration . . . 231
       5.7.2 Restoring other IT infrastructure devices . . . 231
       5.7.3 Restoring the Virtual I/O Server operating system . . . 231
       5.7.4 Recovering user-defined virtual devices and disk structure . . . 241
       5.7.5 Restoring the Virtual I/O Server client operating system . . . 246
       5.8 Rebuilding the Virtual I/O Server . . . 246
       5.8.1 Rebuilding the SCSI configuration . . . 249
       5.8.2 Rebuilding the network configuration . . . 252
       5.9 Updating the Virtual I/O Server . . . 254
       5.9.1 Updating a single Virtual I/O Server environment . . . 254
       5.9.2 Updating a dual Virtual I/O Server environment . . . 256
       5.10 Updating Virtual I/O Server adapter firmware . . . 266
       5.11 Error logging on the Virtual I/O Server . . . 281
       5.11.1 Redirecting error logs to other servers . . . 283
       5.11.2 Troubleshooting error logs . . . 284
       5.12 VM Storage Snapshots/Rollback . . . 284
       Chapter 6. Dynamic operations . . . 287
       6.1 Multiple Shared Processor Pools management . . . 288
       6.2 Dynamic LPAR operations . . . 293
  • 8. Contents (continued)
       6.2.1 Adding and removing processors dynamically . . . 293
       6.2.2 Adding memory dynamically . . . 297
       6.2.3 Removing memory dynamically . . . 299
       6.2.4 Adding physical adapters dynamically . . . 301
       6.2.5 Moving physical adapters dynamically . . . 304
       6.2.6 Removing physical adapters dynamically . . . 309
       6.2.7 Adding virtual adapters dynamically . . . 311
       6.2.8 Removing virtual adapters dynamically . . . 314
       6.2.9 Removing or replacing a PCI Hot Plug adapter . . . 316
       6.3 Dynamic LPAR operations on Linux for Power . . . 317
       6.3.1 Service and productivity tools for Linux for Power . . . 317
       6.4 Dynamic LPAR operations on the Virtual I/O Server . . . 332
       6.4.1 Replacing Ethernet adapters on the Virtual I/O Server . . . 332
       6.4.2 Replacing a Fibre Channel adapter on the Virtual I/O Server . . . 335
       Chapter 7. PowerVM Live Partition Mobility . . . 339
       7.1 PowerVM Live Partition Mobility requirements . . . 340
       7.1.1 HMC requirements . . . 340
       7.1.2 Common system requirements checklist . . . 341
       7.1.3 Destination system requirements checklist . . . 341
       7.1.4 Migrating partition requirements checklist . . . 342
       7.1.5 Active and inactive migrations checklist . . . 342
       7.2 Managing a live partition migration . . . 343
       7.2.1 The migration validation . . . 343
       7.2.2 Validation and migration . . . 343
       7.2.3 How to fix missing requirements . . . 347
       7.3 Live Partition Mobility and Live Application Mobility . . . 348
       Chapter 8. Partition Suspend and Resume . . . 351
       8.1 Listing volumes in the reserved storage device pool . . . 353
       8.2 Adding volume to the reserved storage device pool . . . 354
       8.3 Removing a volume from the reserved storage device pool . . . 358
       8.4 Suspending a partition . . . 361
       8.5 Shutting down a suspended partition . . . 363
       8.6 Recovering a suspended or resumed partition . . . 365
       8.7 Correcting validation errors . . . 366
       Chapter 9. System Planning Tool . . . 369
       9.1 Sample scenario . . . 370
       9.2 Preparation recommendation . . . 371
       9.3 Planning the configuration with SPT . . . 372
       9.4 Initial setup checklist . . . 381
       Chapter 10. Automated management . . . 385
  • 9. Contents (continued)
       10.1 Using System Profiles . . . 386
       10.2 Using the HMC command line interface . . . 387
       10.2.1 Configuring the Secure Shell interface . . . 387
       10.2.2 Client configuration . . . 388
       10.2.3 Initial login and shell conventions . . . 390
       10.2.4 Basic reporting . . . 391
       10.2.5 Modifying the power state of partitions and systems . . . 391
       10.2.6 Modifying profiles . . . 392
       10.2.7 Dynamic LPAR operations . . . 392
       10.3 Scheduling jobs on the Virtual I/O Server . . . 393
       Chapter 11. High-level management . . . 395
       11.1 Systems Director overview . . . 396
       11.1.1 Plug-ins included with IBM Systems Director . . . 397
       11.1.2 Plug-ins for Systems Director . . . 398
       11.1.3 IBM Systems Director editions . . . 401
       11.1.4 Choosing the management level for managed systems . . . 401
       11.2 IBM Systems Director installation on AIX . . . 402
       11.3 Log on to IBM Systems Director . . . 405
       11.4 Preparing managed systems . . . 406
       11.4.1 Hardware Management Console . . . 407
       11.4.2 Virtual I/O Server . . . 408
       11.4.3 Power Systems servers running AIX . . . 408
       11.4.4 Power Systems servers running IBM i . . . 410
       11.4.5 Power Systems servers running Linux . . . 412
       11.5 Discover managed systems . . . 412
       11.6 Collect inventory data . . . 417
       11.7 View Managed resources . . . 422
       11.8 Power Systems Management summary . . . 423
       11.9 IBM Systems Director VMControl plug-in summary . . . 426
       11.10 Manage Virtual I/O Server with IBM Systems Director . . . 430
       11.10.1 Create a virtual server . . . 430
       11.10.2 Show Virtual adapters . . . 432
       11.10.3 Topology map . . . 434
       11.10.4 Inventory . . . 435
       11.10.5 Monitor resources . . . 436
       11.11 IBM Systems Director Active Energy Manager plug-in . . . 439
       11.11.1 Basic principles of power management . . . 439
       11.11.2 Features of EnergyScale that can achieve the basic principles . . . 439
       Part 2. PowerVM virtualization monitoring . . . 441
       Chapter 12. Virtual I/O Server monitoring agents . . . 443
       12.1 IBM Tivoli Monitoring . . . 444
  • 10. Contents (continued)
       12.1.1 What to monitor . . . 444
       12.1.2 VIOS Premium agent configuration . . . 445
       12.1.3 CEC Base agent configuration . . . 446
       12.1.4 Using the Tivoli Enterprise Portal . . . 448
       12.1.5 Networking . . . 457
       12.2 Configuring the IBM Tivoli Storage Manager client . . . 457
       12.3 IBM Tivoli Usage and Accounting Manager agent . . . 458
       12.4 IBM TotalStorage Productivity Center . . . 460
       12.5 IBM Tivoli Application Dependency Discovery Manager . . . 464
       Chapter 13. Monitoring global system resource allocations . . . 465
       13.1 Hardware Management Console monitoring . . . 466
       13.1.1 Partition properties monitoring . . . 467
       13.1.2 HMC hardware information monitoring . . . 467
       13.1.3 HMC virtual storage monitoring . . . 469
       13.1.4 HMC virtual network monitoring . . . 471
       13.1.5 HMC shell scripting . . . 472
       13.2 Integrated Virtualization Manager monitoring . . . 473
       13.3 Systems Director Management Console monitoring . . . 475
       13.4 Monitoring resource allocations from a partition . . . 476
       13.4.1 Monitoring CPU and memory allocations in AIX . . . 476
       13.4.2 Monitoring CPU and memory allocations in Linux . . . 477
       Chapter 14. Monitoring commands on the Virtual I/O Server . . . 479
       14.1 Global system monitoring . . . 480
       14.2 Device inspection . . . 482
       14.3 Storage monitoring and listing . . . 482
       14.4 Shared storage pool monitoring . . . 483
       14.4.1 Cluster information commands . . . 483
       14.4.2 Pool information commands . . . 483
       14.5 Network monitoring . . . 483
       Chapter 15. CPU monitoring . . . 485
       15.1 CPU-related terminology and metrics . . . 486
       15.1.1 Common to POWER5 or later systems . . . 486
       15.1.2 Specific to POWER6 or later systems . . . 489
       15.2 CPU metrics computation . . . 492
       15.2.1 Processor Utilization of Resources Register . . . 492
       15.2.2 PURR-based metrics . . . 493
       15.2.3 System-wide tools modified for virtualization . . . 495
       15.2.4 Scaled Processor Utilization of Resources Register (SPURR) . . . 495
       15.3 Cross-partition CPU monitoring . . . 497
       15.3.1 Monitoring from AIX and Virtual I/O Server . . . 497
       15.3.2 Cross-partition CPU monitoring from IBM i . . . 504
  • 11. Contents (continued)
       15.4 AIX and Virtual I/O Server CPU monitoring . . . 506
       15.4.1 Monitoring using topas . . . 507
       15.4.2 Monitoring using nmon . . . 510
       15.4.3 Monitoring using vmstat . . . 513
       15.4.4 Monitoring using lparstat . . . 514
       15.4.5 Monitoring using sar . . . 516
       15.4.6 Monitoring using mpstat . . . 518
       15.4.7 Report generation for CPU utilization . . . 519
       15.5 IBM i CPU monitoring . . . 526
       15.6 Linux for Power CPU monitoring . . . 532
       Chapter 16. Memory monitoring . . . 535
       16.1 Introduction . . . 536
       16.2 Dedicated memory partition monitoring . . . 536
       16.2.1 AIX and Virtual I/O Server memory monitoring . . . 536
       16.2.2 IBM i memory monitoring . . . 538
       16.2.3 Linux for Power memory monitoring . . . 542
       16.3 Shared memory partition monitoring . . . 543
       16.3.1 HMC and IVM . . . 543
       16.3.2 Monitoring IBM i . . . 546
       16.3.3 Monitoring AIX . . . 549
       16.3.4 Monitoring Linux . . . 558
       16.4 Monitoring Active Memory Expansion . . . 559
       16.4.1 The amepat command . . . 559
       16.4.2 The topas command . . . 561
       16.4.3 The vmstat command . . . 562
       16.4.4 The lparstat command . . . 563
       16.4.5 The svmon command . . . 564
       Chapter 17. Virtual storage monitoring . . . 565
       17.1 Virtual I/O Server storage monitoring . . . 566
       17.1.1 Checking storage health on the Virtual I/O Server . . . 566
       17.1.2 Monitoring storage performance on the Virtual I/O Server . . . 566
       17.1.3 Shared Storage Pools monitoring . . . 567
       17.2 AIX virtual I/O client storage monitoring . . . 569
       17.2.1 Checking storage health on the AIX virtual I/O client . . . 570
       17.2.2 Monitoring storage performance on the AIX virtual I/O client . . . 573
       17.2.3 IBM i virtual I/O client storage monitoring . . . 574
       17.2.4 Checking storage health on the IBM i virtual I/O client . . . 574
       17.2.5 Monitoring storage performance on the IBM i virtual I/O client . . . 576
       17.3 Linux for Power virtual I/O client storage monitoring . . . 579
       Chapter 18. Virtual network monitoring . . . 581
       18.1 Monitoring the Virtual I/O Server . . . 582
  • 12. Contents (continued)
       18.1.1 Error logs . . . 582
       18.1.2 IBM Tivoli Monitoring . . . 582
       18.1.3 Testing your configuration . . . 582
       18.2 Virtual I/O Server networking monitoring . . . 587
       18.2.1 Describing the scenario . . . 588
       18.2.2 Advanced SEA monitoring . . . 598
       18.2.3 Using Topas . . . 604
       18.3 AIX client network monitoring . . . 604
       18.4 IBM i client network monitoring . . . 604
       18.4.1 Checking network health on the IBM i virtual I/O client . . . 605
       18.4.2 Monitoring network performance on the IBM i virtual I/O client . . . 606
       18.5 Linux network monitoring . . . 608
       Chapter 19. Third-party monitoring tools for AIX and Linux . . . 609
       19.1 The nmon utility . . . 610
       19.1.1 nmon on AIX . . . 611
       19.1.2 nmon on Linux . . . 612
       19.1.3 Additional nmon statistics . . . 612
       19.1.4 Recording with the nmon tool . . . 612
       19.2 sysstat utility . . . 613
       19.3 Ganglia tool . . . 613
       19.4 Other third party tools . . . 614
       Appendix A. Sample script for disk and NIB network checking and recovery on AIX virtual clients . . . 615
       Listing of the fixdualvio.ksh script . . . 618
       Abbreviations and acronyms . . . 623
       Related publications . . . 627
       IBM Redbooks . . . 627
       Other publications . . . 628
       Online resources . . . 629
       How to get Redbooks . . . 631
       Help from IBM . . . 631
       Index . . . 633
  • 14. Figures
       2-1 Logical versus physical drive mapping . . . 28
       2-2 Setting maximum number of virtual adapters in a partition profile . . . 32
       2-3 IBM i SST Display Disk Configuration Status panel . . . 36
       2-4 IBM i SST Display Disk Unit Details panel . . . 37
       2-5 IBM i partition profile virtual adapters configuration . . . 38
       2-6 IBM i SST Logical Hardware Resources Associated with IOP . . . 41
       2-7 IBM i SST Logical Hardware Resources disk unit serial numbers . . . 42
       2-8 IBM i SST Auxiliary Storage Hardware Resource Detail . . . 43
       2-9 AIX LVM mirroring environment with LV-backed virtual disks . . . 58
       2-10 AIX LVM mirroring with storage pool-backed virtual disks . . . 63
       2-11 Create virtual SCSI . . . 69
       2-12 Slot numbers that are identical in the source and target system . . . 71
       2-13 LUN mapped to a physical Fibre Channel adapter . . . 79
       2-14 Add Virtual Adapter to the vios1 partition . . . 80
       2-15 Create virtual Fibre Channel server adapter in the vios1 partition . . . 81
       2-16 Set Adapter IDs in the vios1 partition . . . 82
       2-17 Add a virtual adapter to the NPIV partition . . . 83
       2-18 Create virtual Fibre Channel client adapter in the NPIV partition . . . 84
       2-19 Set Adapter IDs in the NPIV partition . . . 85
       2-20 Add a new host port . . . 89
       2-21 Remove a physical Fibre Channel adapter . . . 90
       2-22 Select the adapter to be removed . . . 91
       3-1 Dynamically adding a virtual adapter to a partition . . . 99
       3-2 Modifying an existing adapter . . . 100
       3-3 Adding VLAN 200 to the additional VLANs field . . . 101
       3-4 Defining a custom MAC address . . . 107
       3-5 MAC address format . . . 108
       3-6 IBM i Display line description . . . 111
       3-7 HMC Virtual Network Management . . . 116
       3-8 Virtual Ethernet adapter slot assignments . . . 117
       3-9 IBM i Work with Communication Resources panel . . . 118
       3-10 IBM i Display Resource Details panel . . . 119
       3-11 HMC IBMi partition properties panel . . . 120
       3-12 HMC virtual Ethernet adapter properties panel . . . 121
       3-13 IBM i Work with TCP/IP Interface Status panel . . . 131
       3-14 Send Data error . . . 135
       3-15 SEA failover Primary-Backup configuration . . . 139
       3-16 SEA failover with Load Sharing . . . 140
  • 15. Figures (continued)
       4-1 The ikeyman program initial window . . . 158
       4-2 Create new key database window . . . 159
       4-3 Creating the ldap_server key . . . 159
       4-4 Setting the key database password . . . 160
       4-5 Default certificate authorities available on the ikeyman program . . . 161
       4-6 Creating a self-signed certificate initial panel . . . 162
       4-7 Self-signed certificate information . . . 163
       4-8 Default directory information tree created by mksecldap command . . . 165
       5-1 Define the System Console . . . 200
       5-2 Installation and Maintenance main menu . . . 201
       5-3 Virtual I/O Server Migration Installation and Settings . . . 202
       5-4 Change Disk Where You Want to Install . . . 203
       5-5 Virtual I/O Server Migration Installation and Settings - start migration . . . 204
       5-6 Migration Confirmation . . . 205
       5-7 Running migration . . . 206
       5-8 Set Terminal Type . . . 207
       5-9 Example of a System Plan generated from a managed system . . . 247
       5-10 IBM i Work with TCP/IP Interface Status panel . . . 258
       5-11 Virtual I/O client running MPIO . . . 260
       5-12 Virtual I/O client partition software mirroring . . . 260
       5-13 IBM i Display Disk Configuration Status panel . . . 261
       5-14 IBM Fix Central website . . . 268
       5-15 IBM Fix Central website Firmware and HMC . . . 269
       5-16 IBM Fix Central website Select by feature code . . . 270
       5-17 IBM Fix Central website Select device feature code . . . 270
       5-18 IBM Fix Central website Select device firmware fixes . . . 271
       5-19 Diagnostics aids Task Selection . . . 273
       5-20 Diagnostics aids Microcode Tasks . . . 274
       5-21 Diagnostics aids Download Microcode . . . 275
       5-22 Diagnostics aids resource selection list . . . 276
       5-23 Diagnostic aids install microcode notice . . . 277
       5-24 Diagnostics aids install microcode image source selection . . . 278
       5-25 Diagnostics aids microcode level selection . . . 279
       5-26 Diagnostics aids install microcode success message . . . 280
       5-27 Diagnostic aids successful diagnostic test . . . 281
       6-1 Shared Processor Pool . . . 289
       6-2 Modifying Shared Processor pool attributes . . . 290
       6-3 Partitions assignment to Multiple Shared Processor Pools . . . 291
       6-4 Assign a partition to a Shared Processor Pool . . . 291
       6-5 Comparing partition weights from different Shared Processor Pools . . . 292
       6-6 Add or remove processor operation . . . 294
       6-7 Defining the amount of CPU processing units for a partition . . . 295
       6-8 IBM i Work with System Activity panel . . . 296
  • 16. Figures (continued)
       6-9 Add or remove memory operation . . . 297
       6-10 Changing the total amount of memory of the partition to 5 GB . . . 298
       6-11 Dynamic LPAR operation in progress . . . 298
       6-12 Add or remove memory operation . . . 299
       6-13 Dynamically reducing 1 GB from a partition . . . 300
       6-14 LPAR overview menu . . . 301
       6-15 Add physical adapter operation . . . 302
       6-16 Select physical adapter to be added . . . 303
       6-17 I/O adapters properties for a managed system . . . 304
       6-18 Move or remove physical adapter operation . . . 306
       6-19 Selecting adapter in slot C2 to be moved to partition AIX_LPAR . . . 307
       6-20 Save current configuration . . . 308
       6-21 Remove physical adapter operation . . . 309
       6-22 Select physical adapter to be removed . . . 310
       6-23 Add virtual adapter operation . . . 311
       6-24 Dynamically adding virtual SCSI adapter . . . 312
       6-25 Virtual SCSI adapter properties . . . 313
       6-26 Virtual adapters for an LPAR . . . 314
       6-27 Remove virtual adapter operation . . . 315
       6-28 Delete virtual adapter . . . 316
       6-29 Adding a processor to a Linux partition . . . 324
       6-30 Increasing the number of virtual processors . . . 325
       6-31 DLPAR add or remove memory . . . 329
       6-32 DLPAR adding 2 GB memory . . . 330
       7-1 Partition Migration Validation . . . 344
       7-2 Partition Migration . . . 345
       7-3 Virtual Storage assignments selection . . . 346
       7-4 Partition migration validation detailed information . . . 347
       8-1 Reserved storage device pool management access menu . . . 353
       8-2 Reserved storage device pool device list . . . 353
       8-3 Edit pool operation . . . 355
       8-4 Reserved storage device pool management device . . . 355
       8-5 Reserved storage device pool management device list selection . . . 356
       8-6 Reserved storage device pool management device selection . . . 357
       8-7 Adding a device to the reserved storage device pool validation . . . 358
       8-8 Reserved storage device pool management . . . 359
       8-9 Reserved storage device pool management device . . . 360
       8-10 Removing a device from the reserved storage device pool validation . . . 360
       8-11 Starting the suspend operation . . . 361
       8-12 Options for suspend and resume . . . 361
       8-13 Activity status window . . . 362
       8-14 Suspend and resume final status . . . 362
       8-15 HMC operating status . . . 362
  • 17. Figures (continued)
       8-16 Recovering a suspended partition . . . 365
       8-17 Partition recover operation . . . 366
       9-1 The partition and slot numbering plan of virtual storage adapters . . . 370
       9-2 The partition and slot numbering plan for virtual Ethernet adapters . . . 371
       9-3 The SPT Partition properties window . . . 372
       9-4 The SPT Virtual SCSI window . . . 373
       9-5 The SPT Edit Virtual Slots window . . . 374
       9-6 System Planning Tool ready to be deployed . . . 375
       9-7 Deploy System Plan . . . 375
       9-8 Deploy System Plan Wizard . . . 375
       9-9 The System Plan validation window . . . 376
       9-10 Partitions to Deploy . . . 377
       9-11 The Deployment Steps . . . 378
       9-12 The Deployment Progress window . . . 379
       9-13 Partition profiles deployed on the HMC . . . 380
       10-1 Creating a system profile on the HMC . . . 386
       10-2 The HMC Remote Command Execution menu . . . 387
       11-1 IBM Systems Director management topology . . . 400
       11-2 IBM Systems Director login panel . . . 405
       11-3 IBM Systems Director home window . . . 406
       11-4 HMC LAN Adapter Details . . . 407
       11-5 System Discovery view . . . 413
       11-6 HMC discovery . . . 414
       11-7 Resource explorer group view . . . 415
       11-8 Basic virtualization topology view of the HMC . . . 416
       11-9 View and collect inventory . . . 417
       11-10 System selection for inventory collection . . . 418
       11-11 Scheduled inventory collection . . . 419
       11-12 Active and scheduled jobs . . . 420
       11-13 All Operating Systems inventory view . . . 421
       11-14 Resource explorer - Groups . . . 422
       11-15 Power Systems Management . . . 423
       11-16 IBM i integrated management . . . 424
       11-17 VMControl home panel . . . 429
       11-18 Platform managers and members view . . . 430
       11-19 Summary page for creating a virtual server . . . 431
       11-20 Virtual servers view . . . 432
       11-21 Virtual LAN Adapters . . . 433
       11-22 Virtual SCSI adapters . . . 433
       11-23 IBM Systems Director Topology Virtualization Basic . . . 434
       11-24 All systems view . . . 435
       11-25 Inventory summary view . . . 436
       11-26 Virtualization monitors . . . 437
  • 18. Figures (continued)
       11-27 CPU Utilization graph . . . 438
       12-1 Tivoli Enterprise Portal login using web browser . . . 449
       12-2 Tivoli Enterprise Portal login . . . 450
       12-3 Storage Mappings Workspace selection . . . 451
       12-4 ITM panel showing Storage Mappings . . . 452
       12-5 ITM panel showing Network Mappings . . . 453
       12-6 ITM window showing Top Resources Usage . . . 454
       12-7 ITM window showing CPU Utilization . . . 455
       12-8 ITM window showing System Storage Information . . . 456
       12-9 ITM window showing Network Adapter Utilization . . . 457
       13-1 Available servers managed by the HMC . . . 466
       13-2 Configuring the displayed columns on the HMC . . . 466
       13-3 Virtual adapters configuration in the partition properties . . . 467
       13-4 Virtual I/O Server hardware information context menu . . . 468
       13-5 The Virtual I/O Server virtual SCSI topology window . . . 469
       13-6 HMC Virtual Storage Management window . . . 470
       13-7 Virtual Network Management . . . 471
       13-8 Virtual Network Management - detailed information . . . 472
       13-9 IVM partitions monitoring . . . 473
       13-10 IVM virtual Ethernet configuration monitoring . . . 474
       13-11 IVM virtual storage configuration monitoring . . . 474
       13-12 SDMC Virtual Storage Management . . . 475
       15-1 16-core system with dedicated and shared CPUs . . . 487
       15-2 A Multiple Shared Processor Pool example on POWER6 . . . 490
       15-3 Shared Processor Pool attributes . . . 491
       15-4 Per-thread PURR . . . 493
       15-5 Dedicated partition’s Processor Sharing properties . . . 500
       15-6 IBM i, Allow performance information collection . . . 504
       15-7 IBM Systems Director Navigator for i Logical Partitions Overview . . . 506
       15-8 Using smitty topas for CPU utilization reporting . . . 520
       15-9 Local CEC recording attributes window . . . 521
       15-10 Report generation . . . 522
       15-11 Reporting Formats . . . 523
       15-12 IBM i WRKSYSACT command output . . . 527
       15-13 IBM i CPU Utilization and Waits Overview . . . 531
       15-14 The mpstat command output . . . 533
       16-1 IBM i WRKSYSSTS command output . . . 538
       16-2 IBM i System Director Navigator Page fault overview . . . 541
       16-3 Displaying shared memory pool utilization using the HMC . . . 545
       16-4 Displaying I/O entitled memory for a shared memory partition . . . 546
       17-1 AIX virtual I/O client using MPIO . . . 570
       17-2 AIX virtual I/O client using LVM mirroring . . . 572
       17-3 IBM i mirroring across two Virtual I/O Servers . . . 575
  • 19. 17-4 IBM i WRKDSKSTS command output . . . . . . . . . . . . . . . . . . . . . . . . . 576 17-5 IBM i Navigator Disk Overview for System Disk Pool . . . . . . . . . . . . . . 578 17-6 iostat command output showing I/O output activity . . . . . . . . . . . . . . . . 579 17-7 iostat output with -the d flag and a 5 sec interval as a parameter. . . . . 580 18-1 Network monitoring testing scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . 588 18-2 IBM i Work with TCP/IP Interface Status panel. . . . . . . . . . . . . . . . . . . 605 18-3 IBM i Work with Configuration Status panel . . . . . . . . . . . . . . . . . . . . . 605 18-4 IBM i Work with Communication Resources panel . . . . . . . . . . . . . . . . 606 19-1 The nmon LPAR statistics report for a Linux partition . . . . . . . . . . . . . . 612xviii IBM PowerVM Virtualization Managing and Monitoring
  • 20. Tables 1-1 PowerVM Editions components, editions, and hardware support . . . . . . . . 7 3-1 Required versions for dynamic VLAN modifications . . . . . . . . . . . . . . . . . 98 3-2 OSI seven layer network model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 3-3 Cap values for loose mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 4-1 Default open ports on Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . 152 4-2 Hosts in the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 4-3 Task and associated command to manage Virtual I/O Server users . . . 176 4-4 Authorizations corresponding to Virtual I/O Server commands . . . . . . . 180 4-5 RBAC commands and their descriptions . . . . . . . . . . . . . . . . . . . . . . . . 189 5-1 Virtual I/O Server backup and restore methods . . . . . . . . . . . . . . . . . . . 214 5-2 Commands to save information about Virtual I/O Server . . . . . . . . . . . . 227 5-3 Error log entry classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 6-1 Service and productivity tools description . . . . . . . . . . . . . . . . . . . . . . . . 318 7-1 Missing requirements for PowerVM Live Partition Mobility . . . . . . . . . . . 347 7-2 PowerVM Live Partition Mobility versus Live Application Mobility. . . . . . 349 8-1 Common Suspend and Resume validation errors . . . . . . . . . . . . . . . . . 366 11-1 IBM Systems Director editions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401 11-2 Terms for IBM Systems Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 11-3 Tools for monitoring resources in a virtualized environment . . . . . . . . . 442 12-1 TPC agent attributes, descriptions, and their values. . . . . . . . . . . . . . . 461 15-1 POWER5-based terminology and metrics . . . . . . . . . . . . . . . . . . . . . . 488 15-2 POWER6 or later systems specific terminology and metrics . . . . . . . . 491 15-3 IBM i CPU utilization guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529 16-1 QAPMSHRMP field details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546© Copyright IBM Corp. 2012. All rights reserved. xix
  • 22. Examples 2-1 Finding which LPAR is holding the tape drive using dsh . . . . . . . . . . . . . 21 2-2 Finding which LPAR is holding the optical drive using ssh . . . . . . . . . . . . 21 2-3 Checking the version of the Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . 23 2-4 Checking whether any virtual media repository is already defined . . . . . . 23 2-5 List of available storage pools and defining a virtual media repository . . . 23 2-6 Creating a virtual optical media disk in the virtual media repository . . . . . 24 2-7 Creating an iso image from CD/DVD drive . . . . . . . . . . . . . . . . . . . . . . . . 24 2-8 Creating an optical virtual target device . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2-9 Loading the virtual media on the virtual target device. . . . . . . . . . . . . . . . 25 2-10 Checking the virtual optical device contents on a AIX client . . . . . . . . . . 26 2-11 Loading a new disk on the virtual media device . . . . . . . . . . . . . . . . . . . 26 2-12 The fget_config command for the DS4000 series. . . . . . . . . . . . . . . . . . 30 2-13 SAN storage listing on the Virtual I/O Server version 2.1 . . . . . . . . . . . . 30 2-14 Tracing virtual SCSI storage from Virtual I/O Server . . . . . . . . . . . . . . . 33 2-15 Tracing NPIV virtual storage from the Virtual I/O Server . . . . . . . . . . . . 34 2-16 List all disk mappings in a cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 2-17 Displaying the Virtual I/O Server device mapping . . . . . . . . . . . . . . . . . . 39 2-18 Virtual I/O Server hdisk to LUN tracing . . . . . . . . . . . . . . . . . . . . . . . . . . 40 2-19 Virtual I/O Server virtual to physical Fibre Channel adapter mapping . . 43 2-20 Brocade SAN switch nameserver registration information . . . . . . . . . . . 45 2-21 DS8000 DSCLI displaying the logged in host initiators. . . . . . . . . . . . . . 46 2-22 List of SCSI disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 2-23 Information of scsi1 adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2-24 Device mapping information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2-25 Creating the cluster with one node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 2-26 Adding nodes to a cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 2-27 Checking the status of the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2-28 Listing the cluster information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2-29 List of physical volumes capable of being added . . . . . . . . . . . . . . . . . . 50 2-30 Adding the physical volume to the shared storage pool . . . . . . . . . . . . . 50 2-31 A list of the physical volumes in the shared storage pool . . . . . . . . . . . . 51 2-32 Listing the shared storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 2-33 Creating a thin and a thick logical unit . . . . . . . . . . . . . . . . . . . . . . . . . . 52 2-34 Mapping the logical unit to a vhost adapter. . . . . . . . . . . . . . . . . . . . . . . 52 2-35 Creating and mapping of a logical unit with one command. . . . . . . . . . . 52 2-36 Listing the attributes of a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 2-37 Listing the logical units in a shared storage pool . . . . . . . . . . . . . . . . . . 
53 2-38 Listing the mapping on a specific host . . . . . . . . . . . . . . . . . . . . . . . . . . 54© Copyright IBM Corp. 2012. All rights reserved. xxi
  • 23. 2-39 vhost adapters mapped to client partition 4 . . . . . . . . . . . . . . . . . . . . . . 54 2-40 Mapping information of vhost1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 2-41 Abstract from cfgassist menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 2-42 Unmapping a logical unit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 2-43 Removing the logical unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 2-44 Remove the logical unit specified by the luudid . . . . . . . . . . . . . . . . . . . 56 2-45 Find the disk to remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 2-46 Replacing a disk in the shared storage pool . . . . . . . . . . . . . . . . . . . . . . 67 2-47 Removing a NPIV Fibre Channel adapter in the Virtual I/O Server . . . . 77 2-48 Show available Fibre Channel adapters . . . . . . . . . . . . . . . . . . . . . . . . . 78 2-49 WWPN of the virtual Fibre Channel client adapter in the NPIV partition. 87 2-50 Zoning WWPN for fcs2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 3-1 Dynamically modifying the additional VLANs field . . . . . . . . . . . . . . . . . 102 3-2 Dynamically modifying VLANs field and setting the IEEE 802.1q flag . . 102 3-3 Dynamically modifying the additional VLANs field . . . . . . . . . . . . . . . . . 102 3-4 Creating the VLAN tagged interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 3-5 Creating the VLAN tagged interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 3-6 Creating a VLAN tagged interface on Linux . . . . . . . . . . . . . . . . . . . . . . 105 3-7 Removing a VLAN tagged interface on Linux . . . . . . . . . . . . . . . . . . . . . 105 3-8 Loading the 8021q module into the kernel . . . . . . . . . . . . . . . . . . . . . . . 105 3-9 Listing an adapter MAC address within AIX . . . . . . . . . . . . . . . . . . . . . . 109 3-10 Changing an adapter MAC address within AIX. . . . . . . . . . . . . . . . . . . 110 3-11 Failed changing of an adapter MAC address within AIX. . . . . . . . . . . . 111 3-12 Changing an Ethernet adapter MAC address within IBM i . . . . . . . . . . 112 3-13 Displaying an adapter MAC address within Linux . . . . . . . . . . . . . . . . . 112 3-14 Changing an adapter MAC address within Linux . . . . . . . . . . . . . . . . . 112 3-15 Displaying an adapter firmware MAC address within Linux . . . . . . . . . 113 3-16 Failed changing of an adapter MAC address in Linux . . . . . . . . . . . . . 113 3-17 Virtual Ethernet adapter slot number . . . . . . . . . . . . . . . . . . . . . . . . . . 116 3-18 Path MTU display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 3-19 The default MSS value in AIX 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 3-20 Example of no fragmentation using AIX . . . . . . . . . . . . . . . . . . . . . . . . 132 3-21 Example of fragmentation using AIX. . . . . . . . . . . . . . . . . . . . . . . . . . . 133 3-22 Example of no fragmentation using IBM i . . . . . . . . . . . . . . . . . . . . . . . 134 3-23 Example of exceeding MTU size on IBM i . . . . . . . . . . . . . . . . . . . . . . 134 3-24 No response from TRCTCPRTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 3-25 The tracepath command on Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 3-26 Largesend option for Shared Ethernet Adapter . . . . 
. . . . . . . . . . . . . . 137 3-27 Creating an SEA (ent7) with Load Sharing mode . . . . . . . . . . . . . . . . . 141 3-28 Adding a trunk adapter and changing SEA (ent6) failover mode . . . . . 142 3-29 Statistics for adapters in the Shared Ethernet Adapter . . . . . . . . . . . . . 143 3-30 Configuring QoS for an SEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 3-31 Configuring VLAN for an existing VLAN device . . . . . . . . . . . . . . . . . . 148xxii IBM PowerVM Virtualization Managing and Monitoring
  • 24. 3-32 Enabling network traffic regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503-33 Using tcptr for network traffic regulation for sendmail service. . . . . . . . 1504-1 Stopping network services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524-2 Using the viosecure command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534-3 Displaying the current rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534-4 Removing the rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544-5 Checking the rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1554-6 High level firewall settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1564-7 Creating an ldap user on the Virtual I/O Server . . . . . . . . . . . . . . . . . . . 1704-8 Log on to the Virtual I/O Server using an LDAP user . . . . . . . . . . . . . . . 1714-9 Searching the LDAP server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1714-10 Content of the /home/padmin/config/ntp.conf file . . . . . . . . . . . . . . . . . 1724-11 Start of the xntpd deamon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724-12 Too large time error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724-13 Successful ntp synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1724-14 Creating a system administrator user and checking its attributes . . . . . 1764-15 Creating a service representative account . . . . . . . . . . . . . . . . . . . . . . 1774-16 lsgcl command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1784-17 Using the mkrole command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1904-18 Using the lsrole command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1914-19 Creating a new user linked to a role . . . . . . . . . . . . . . . . . . . . . . . . . . . 1914-20 Displaying a user’s role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1914-21 Access to run command is not valid message . . . . . . . . . . . . . . . . . . . 1915-1 Backing up the Virtual I/O Server to tape . . . . . . . . . . . . . . . . . . . . . . . . 2165-2 Backing up the Virtual I/O Server to DVD-RAM . . . . . . . . . . . . . . . . . . . 2175-3 Backing up the Virtual I/O Server to the nim_resources.tar file . . . . . . . 2205-4 Backing up the Virtual I/O Server to the mksysb image . . . . . . . . . . . . . 2205-5 Performing a backup using the viosbr command . . . . . . . . . . . . . . . . . . 2235-6 Scheduling regular backups using the viosbr command. . . . . . . . . . . . . 2235-7 Sample output from the lsmap command . . . . . . . . . . . . . . . . . . . . . . . . 2255-8 Displaying shared storage pool information . . . . . . . . . . . . . . . . . . . . . . 2265-9 Restore of Virtual I/O Server to the same logical partition . . . . . . . . . . . 2365-10 Devices recovered if restored to a different server . . . . . . . . . . . . . . . . 2395-11 Using viosbr -view to display backup contents . . . . . . . . . . . . . . . . . . . 2415-12 Disks and volume groups to restore . . . . . . . . . . . . . . . . . . . . . . . . . . . 2445-13 Creating an HMC system plan from the HMC command line . . . . . . . . 2465-14 lsmap -all command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. 2505-15 The netstat -v comand on the virtual I/O client . . . . . . . . . . . . . . . . . . . 2575-16 The netstat -cdlistats command on the primary Virtual I/O Server . . . . 2585-17 The netstat -cdlistats command on the secondary Virtual I/O Server . . 2585-18 The mdstat command showing a healthy environment. . . . . . . . . . . . . 2615-19 AIX LVM Mirror Resync. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2635-20 lsdev -type adapter command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Examples xxiii
  • 25. 5-21 lsmcode -d fcs0 command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 5-22 FTP transfer of adapter firmware to the Virtual I/O Server . . . . . . . . . . 271 5-23 Unpacking the adapter firmware package on the Virtual I/O Server . . . 272 5-24 diag command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 5-25 errlog short listing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 5-26 Detailed error listing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 5-27 Content of /tmp/syslog.add file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 5-28 Creating a new error log file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 5-29 Copy errlog and view it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 5-30 snapshot create command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 5-31 snapshot rollback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 6-1 Removing the Fibre Channel adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 6-2 lscfg command on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 6-3 lsvpd command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 6-4 Display virtual SCSI and network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 6-5 List the management server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 6-6 Linux finds new processors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 6-7 The lparcfg command before adding CPU dynamically . . . . . . . . . . . . . 326 6-8 The lparcfg command after adding 0.1 CPU dynamically . . . . . . . . . . . . 327 6-9 Ready to die message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 6-10 Display of total memory in the partition before adding memory . . . . . . 328 6-11 Total memory in the partition after adding 1 GB dynamically . . . . . . . . 330 6-12 Rescanning a SCSI host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 7-1 HMC CLI migrlpar -i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 8-1 lshwres command output showing reserved storage device properties . 354 8-2 Suspending partition p71ibmi08 from the HMC command line . . . . . . . . 363 8-3 Listing partition p71ibmi08 state from the HMC command line . . . . . . . . 364 8-4 Shutting down and suspending a partition . . . . . . . . . . . . . . . . . . . . . . . 364 8-5 Verifying the state of the partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364 10-1 The default behavior of ssh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 10-2 Using host specific options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 10-3 Configuring SSH public key authentication . . . . . . . . . . . . . . . . . . . . . . 389 10-4 Running a non-interactive command . . . . . . . . . . . . . . . . . . . . . . . . . . 390 10-5 Profile modification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392 10-6 Memory dynamic operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 10-7 Virtual adapter dynamic operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
393 11-1 Installing IBM Systems Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 11-2 IBM Systems Director agent manager configuration. . . . . . . . . . . . . . . 404 11-3 Starting IBM Systems Director server . . . . . . . . . . . . . . . . . . . . . . . . . . 404 11-4 Checking the status of the IBM Systems Director server . . . . . . . . . . . 405 11-5 Starting the Virtual I/O Server’s Systems Director common agent . . . . 408 11-6 Installing IBM Director Common Agent . . . . . . . . . . . . . . . . . . . . . . . . . 409 11-7 checking the status of the common agent subsystem . . . . . . . . . . . . . 410xxiv IBM PowerVM Virtualization Managing and Monitoring
  • 26. 11-8 IBMi QSH command window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41111-9 Agent installation from QSH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41113-1 lparstat -i command output on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47613-2 Listing partition resources on Linux. . . . . . . . . . . . . . . . . . . . . . . . . . . . 47714-1 Using topas to display CPU and memory usage on the VIO . . . . . . . . 48115-1 topas -cecdisp command on Virtual I/O Server. . . . . . . . . . . . . . . . . . . 49715-2 topas -C command on virtual I/O client . . . . . . . . . . . . . . . . . . . . . . . . . 49815-3 topas -C command global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50115-4 Monitoring processor pools with topas -C . . . . . . . . . . . . . . . . . . . . . . . 50215-5 Shared pool partitions listing in topas . . . . . . . . . . . . . . . . . . . . . . . . . . 50315-6 Basic topas monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50715-7 Logical partition information report in topas (press L) . . . . . . . . . . . . . . 50815-8 Upper part of topas busiest CPU report . . . . . . . . . . . . . . . . . . . . . . . . 50915-9 Topas basic panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51015-10 Initial window of the NMON application. . . . . . . . . . . . . . . . . . . . . . . . 51015-11 Display of command help for monitoring system resources . . . . . . . . 51115-12 Monitoring CPU activity with NMON . . . . . . . . . . . . . . . . . . . . . . . . . . 51215-13 NMON monitoring of CPU and network resources . . . . . . . . . . . . . . . 51215-14 Monitoring with the vmstat command . . . . . . . . . . . . . . . . . . . . . . . . . 51315-15 Monitoring using the lparstat command . . . . . . . . . . . . . . . . . . . . . . . 51515-16 Variable processor frequency view with lparstat . . . . . . . . . . . . . . . . . 51515-17 Individual CPU Monitoring using the sar command . . . . . . . . . . . . . . 51615-18 The sar command working a previously saved file . . . . . . . . . . . . . . . 51715-19 Individual CPU Monitoring using the mpstat command . . . . . . . . . . . 51915-20 IBM i component report for component interval activity . . . . . . . . . . . 52815-21 IBM i System Report for Resource Utilization Expansion . . . . . . . . . . 52915-22 Using iostat for CPU monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53316-1 Cross-partition memory monitoring with topas -C . . . . . . . . . . . . . . . . . 53716-2 Performance rule of thumb for page faults . . . . . . . . . . . . . . . . . . . . . . 53916-3 IBM i Component Report for Storage Pool . . . . . . . . . . . . . . . . . . . . . . 53916-4 Linux monitoring memory statistics using meminfo. . . . . . . . . . . . . . . . 54216-5 Sample query execution output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54816-6 Displaying hypervisor paging information using vmstat . . . . . . . . . . . . 54916-7 Displaying hypervisor paging information using vmstat -h . . . . . . . . . . 55016-8 Shared memory partition with some free memory not backed by physical memory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55116-9 Shared memory partition not loaning memory . . . . . . . . . . . . . . . . . . . 55116-10 Shared memory partition loaning memory . . . . . . . . . . . . . . . . . . . . . 55216-11 The lparstat -m command . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55316-12 The lparstat -me command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55416-13 The topas -L command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55416-14 Displaying I/O memory entitlement using topas . . . . . . . . . . . . . . . . . 55516-15 The topas -C command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555 Examples xxv
  • 27. 16-16 Displaying shared memory pool attributes using topas. . . . . . . . . . . . 556 16-17 AMD values with lparstat command . . . . . . . . . . . . . . . . . . . . . . . . . . 557 16-18 AMD activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557 16-19 Using the amsstat command for displaying AMS metrics . . . . . . . . . . 558 16-20 Monitoring Active Memory Expansion with the amepat command . . . 560 16-21 Monitoring Active Memory Expansion with the topas command . . . . . 562 16-22 Monitoring Active Memory Expansion with the vmstat command . . . . 563 16-23 Monitoring Active Memory Expansion with the lparstat command . . . 563 16-24 Monitoring Active Memory Expansion with the svmon command . . . . 564 17-1 Monitoring I/O performance with viostat . . . . . . . . . . . . . . . . . . . . . . . . 566 17-2 Shared storage pool listing with the lssp command . . . . . . . . . . . . . . . 567 17-3 Configuring a storage pool threshold using the alert command . . . . . . 568 17-4 Viewing the Virtual I/O Server error log using errlog command . . . . . . 568 17-5 AIX lspath command output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 17-6 AIX client lsattr command to show hdisk attributes . . . . . . . . . . . . . . . . 571 17-7 Using the chdev command for setting hdisk recovery parameters . . . . 571 17-8 Check for any missing disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572 17-9 AIX command to recover from stale partitions . . . . . . . . . . . . . . . . . . . 573 17-10 Monitoring disk performance with iostat . . . . . . . . . . . . . . . . . . . . . . . 574 17-11 IBM i System Report for Disk Utilization (PRTSYSRPT) . . . . . . . . . . 577 17-12 IBM i Resource Report for Disk Utilization (PRTRSCRPT). . . . . . . . . 577 18-1 Verifying the active channel in an EtherChannel . . . . . . . . . . . . . . . . . 584 18-2 Errorlog message when the primary channel fails . . . . . . . . . . . . . . . . 585 18-3 Verifying the active channel in an EtherChannel . . . . . . . . . . . . . . . . . 585 18-4 Manual switch to primary channel using entstat . . . . . . . . . . . . . . . . . . 586 18-5 Checking for the link failure count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586 18-6 Output of entstat on SEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589 18-7 entstat -all command on SEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590 18-8 entstat -all command after file transfer attempt 1 . . . . . . . . . . . . . . . . . 592 18-9 entstat -all command after file transfer attempt 2 . . . . . . . . . . . . . . . . . 593 18-10 entstat -all command after file transfer attempt 3 . . . . . . . . . . . . . . . . 594 18-11 entstat -all command after reset of Ethernet adapters . . . . . . . . . . . . 595 18-12 entstat -all command after opening one ftp session . . . . . . . . . . . . . . 596 18-13 entstat -all command after opening two ftp session . . . . . . . . . . . . . . 597 18-14 Enabling advanced SEA monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . 599 18-15 Sample seastat statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600 18-16 seastat statistics using search criterion. . . . . . . . . . . . . . . . . . . . . . . . 603 18-17 Topas Shared Ethernet Adapter Monitor . . . . . . . . . . . . . . . . . . . . . . 604 18-18 IBM i System Report for TCP/IP Summary . . . . . . . . . . . . . . . . . . . . . 
606 18-19 IBM i Resource Report for Disk Utilization . . . . . . . . . . . . . . . . . . . . . 607 19-1 nmon output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 19-2 Using a script to update partitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616 19-3 Running the script and listing output . . . . . . . . . . . . . . . . . . . . . . . . . . . 617xxvi IBM PowerVM Virtualization Managing and Monitoring
  • 28. NoticesThis information was developed for products and services offered in the U.S.A.IBM may not offer the products, services, or features discussed in this document in other countries. Consultyour local IBM representative for information on the products and services currently available in your area.Any reference to an IBM product, program, or service is not intended to state or imply that only that IBMproduct, program, or service may be used. Any functionally equivalent product, program, or service thatdoes not infringe any IBM intellectual property right may be used instead. However, it is the usersresponsibility to evaluate and verify the operation of any non-IBM product, program, or service.IBM may have patents or pending patent applications covering subject matter described in this document.The furnishing of this document does not give you any license to these patents. You can send licenseinquiries, in writing, to:IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.The following paragraph does not apply to the United Kingdom or any other country where suchprovisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATIONPROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS ORIMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimerof express or implied warranties in certain transactions, therefore, this statement may not apply to you.This information could include technical inaccuracies or typographical errors. Changes are periodically madeto the information herein; these changes will be incorporated in new editions of the publication. IBM maymake improvements and/or changes in the product(s) and/or the program(s) described in this publication atany time without notice.Any references in this information to non-IBM websites are provided for convenience only and do not in anymanner serve as an endorsement of those websites. The materials at those websites are not part of thematerials for this IBM product and use of those websites is at your own risk.IBM may use or distribute any of the information you supply in any way it believes appropriate withoutincurring any obligation to you.Any performance data contained herein was determined in a controlled environment. Therefore, the resultsobtained in other operating environments may vary significantly. Some measurements may have been madeon development-level systems and there is no guarantee that these measurements will be the same ongenerally available systems. Furthermore, some measurement may have been estimated throughextrapolation. Actual results may vary. Users of this document should verify the applicable data for theirspecific environment.Information concerning non-IBM products was obtained from the suppliers of those products, their publishedannouncements or other publicly available sources. IBM has not tested those products and cannot confirmthe accuracy of performance, compatibility or any other claims related to non-IBM products. Questions onthe capabilities of non-IBM products should be addressed to the suppliers of those products.This information contains examples of data and reports used in daily business operations. 
To illustrate themas completely as possible, the examples include the names of individuals, companies, brands, and products.All of these names are fictitious and any similarity to the names and addresses used by an actual businessenterprise is entirely coincidental.COPYRIGHT LICENSE:This information contains sample application programs in source language, which illustrate programming© Copyright IBM Corp. 2012. All rights reserved. xxvii
  • 29. techniques on various operating platforms. You may copy, modify, and distribute these sample programs inany form without payment to IBM, for the purposes of developing, using, marketing or distributing applicationprograms conforming to the application programming interface for the operating platform for which thesample programs are written. These examples have not been thoroughly tested under all conditions. IBM,therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.TrademarksIBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International BusinessMachines Corporation in the United States, other countries, or both. These and other IBM trademarkedterms are marked on their first occurrence in this information with the appropriate symbol (® or ™),indicating US registered or common law trademarks owned by IBM at the time this information waspublished. Such trademarks may also be registered or common law trademarks in other countries. A currentlist of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtmlThe following terms are trademarks of the International Business Machines Corporation in the United States,other countries, or both: Active Memory™ GPFS™ pSeries® AIX 5L™ HACMP™ Redbooks® AIX® i5/OS® Redbooks (logo) ® BladeCenter® IBM Systems Director Active System i® DB2® Energy Manager™ System p® DS4000® IBM® System Storage® DS6000™ Micro-Partitioning® System x® DS8000® Parallel Sysplex® System z® Electronic Service Agent™ POWER Hypervisor™ Systems Director VMControl™ EnergyScale™ Power Systems™ Tivoli® Enterprise Storage Server® POWER6® z/OS® Focal Point™ POWER7® z/VM® GDPS® PowerHA® Geographically Dispersed PowerVM® Parallel Sysplex™ POWER®The following terms are trademarks of other companies:Linux is a trademark of Linus Torvalds in the United States, other countries, or both.Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,other countries, or both.Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. andother countries.UNIX is a registered trademark of The Open Group in the United States and other countries.Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, IntelSpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or itssubsidiaries in the United States and other countries.Other company, product, or service names may be trademarks or service marks of others.xxviii IBM PowerVM Virtualization Managing and Monitoring
  • 30. Preface IBM® PowerVM® virtualization technology is a combination of hardware and software that supports and manages the virtual environments on POWER5-, POWER5+, IBM POWER6®, and IBM POWER7®-based systems. Available on IBM Power Systems™, and IBM BladeCenter® servers as optional Editions, and supported by the IBM AIX®, IBM i, and Linux operating systems, this set of comprehensive systems technologies and services is designed to enable you to aggregate and manage resources using a consolidated, logical view. Deploying PowerVM virtualization and IBM Power Systems offers you the following benefits: Lower energy costs through server consolidation Reduced cost of your existing infrastructure Better management of the growth, complexity, and risk of your infrastructure To achieve this goal, PowerVM virtualization provides the following technologies: Virtual Ethernet Shared Ethernet Adapter Virtual SCSI IBM Micro-Partitioning® technology Multiple Shared-Processor Pools N_Port Identifier Virtualization PowerVM Live Partition Mobility IBM Active Memory™ Sharing Active Memory Expansion Partition Suspend and Resume Shared Storage Pools This IBM Redbooks® publication is an extension of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. It provides an organized view of best practices for managing and monitoring your PowerVM environment with respect to virtualized resources managed by the Virtual I/O Server. This publication is divided into two parts: Part 1 focuses on system management. It provides best practices to optimize the resources, and illustrates these practices using practical examples. It also details how to secure and maintain your virtual environments. Part 2 describes how to monitor a PowerVM virtualization infrastructure. Rather than presenting a list of tools, it addresses practical situations to help© Copyright IBM Corp. 2012. All rights reserved. xxix
you select and use the monitoring tool that best shows the resources you are interested in. Reminders of the key PowerVM features are also provided.

The team who wrote this book

This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Poughkeepsie Center.

Nicolas Guerin is an IT Specialist working for IBM France in Aubiere. He has 15 years of experience in the Information Technology field. His areas of expertise include AIX, system performance and tuning, PowerVM, IBM PowerHA®, Power Systems, and SAN. He has been working for IBM for 17 years. He is an IBM Certified Advanced Technical Expert - pSeries® and AIX and an IBM Certified Systems Expert - pSeries. He is a teacher at the University of Clermont-Ferrand. He has coauthored three IBM Redbooks publications.

Jimi Inge is a Lead Service Architect in Sweden. He has 15 years of experience in the IT industry. He worked at IBM for 11 years and currently works for Tieto, an ISV and IBM Business Partner. Jimi's areas of expertise include the design of infrastructure and services with a focus on Power Systems, IBM i, and capacity. He is a regular speaker at IBM events in Europe, and has published a number of articles on virtualization, capacity, and IBM i. He received the IBM Smarter Business Partner of the Year 2011 award for a solution built on virtualization and Power Systems.

Narutsugu Itoh is a Senior IT Specialist working in technical support at IBM Japan. He provides pre-sales technical consultation and implementation of IBM Power Systems and virtualization environments. He has 16 years of experience with IBM Power Systems, and his areas of expertise include Power Systems, PowerVM, VIOS, AIX, Linux, and SAN. He also has experience in early support of Power Systems, PowerVM, and AIX. He is an author of the IBM PowerVM Live Partition Mobility Redbooks publication.

Robert Miciovici is an IT Specialist at IBM Romania. He has 15 years of experience in IT, and holds a degree in IT Engineering. His areas of expertise include infrastructure architecture and service delivery based on AIX, Power Systems, PowerVM, SAN, and IBM Total Storage. He is an IBM Certified Advanced Technical Expert - Power Systems with AIX v2. He teaches AIX and PowerVM classes and workshops.

Rajendra Patel works at IBM US in Austin. He has been working at IBM for 16 years and currently works on the Virtual I/O Server as a Support and Development Specialist. His areas of expertise include AIX, system performance and tuning, database tuning (Oracle, IBM DB2®, Sybase) on AIX and Power Systems, PowerVM, graphics performance, SAN, networking, VMware, IBM Systems Director, VMControl, Windows, and Linux.

Arthur Török is an IT Specialist at IBM Lab Services in Hungary. He has 10 years of experience in the IT industry and has been working with AIX and Power Systems for the past 5 years. Currently he is working as a consultant for IBM Systems Director with VMControl and Active Energy Manager. His areas of expertise include AIX, Power Systems, and PowerVM.

The project that produced this publication was managed by:
Scott Vetter, PMP

Thanks to the following people for their contributions to this project:
David Bennin
Richard M. Conway
Ann Lund
Alfred Schwab
Ted Sullivan
IBM US
Bruno Blanchard
IBM France

The authors of the first edition are:
Tomas Baublys, IBM Germany
Damien Faure, Bull France
Jackson Alfonso Krainer, IBM Brazil
Michael Reed, IBM US

The authors of the second edition are:
Ingo Dimmer, IBM Germany
Volker Haug, IBM Germany
Thierry Huché, IBM France
Anil K Singh, IBM India
Morten Vågmo, IBM Norway

The authors of the third edition are:
Stuart Devenish, IBM Australia
Ingo Dimmer, IBM Germany
Rafael Folco, IBM Brazil
Mark Roy, Sysarb, Inc. Australia
Stephane Saleur, IBM France
Oliver Stadler, IBM Switzerland
Naoya Takizawa, IBM Japan

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
  • 34. Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html Preface xxxiii
Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7590-03 for IBM PowerVM Virtualization Managing and Monitoring as created or updated on June 21, 2012.

May 2012, Fourth Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
Shared Storage Pool Thick provisioning, see 2.6.1, “Creating the shared storage pool” on page 48.
Shared Ethernet Adapter with load sharing, see 3.7, “Shared Ethernet Adapter failover with Load Sharing” on page 138.
Snapshot and rollback, see 5.12, “VM Storage Snapshots/Rollback” on page 284.
Active Memory Deduplication, see “Monitoring Active Memory De-duplication” on page 556.

Changed information
Multi-node Shared Storage Pool, see Chapter 2, “Virtual storage management” on page 13.
Suspend and resume for IBM i, see Chapter 8, “Partition Suspend and Resume” on page 351.
Part 1. PowerVM virtualization management

Part 1 describes best practices to manage your Power Systems and PowerVM environment by providing a summary of the different PowerVM Editions and the maintenance strategies. It also includes the new functions of the Virtual I/O Server Version 2.2.

The following topics are covered:
Virtual storage management
Virtual network
Virtual I/O Server security
Virtual I/O Server maintenance
Dynamic operations
PowerVM Live Partition Mobility
System Planning Tool
Automated management
Chapter 1. Introduction

This chapter describes the available PowerVM Editions. It also provides an overview of the new Virtual I/O Server Version 2.2 features and PowerVM enhancements.

This chapter contains the following sections:
PowerVM Editions
Maintenance strategy
New features for Virtual I/O Server Version 2.2
1.1 PowerVM Editions

Virtualization technology is offered in three editions on Power Systems: PowerVM Express Edition, PowerVM Standard Edition, and PowerVM Enterprise Edition.

All Power Systems servers can utilize a limited set of base virtualization functions, often referred to as logical partitioning, by using either the Hardware Management Console (HMC), the Integrated Virtualization Manager (IVM), or the Systems Director Management Console (SDMC). Logical partitions enable clients to run separate workloads in separate partitions on the same physical server. This helps lower costs and improve energy efficiency.

Logical Partitions (LPARs) are designed to be shielded from each other to provide a high level of data security and increased application availability. This has been certified by the United States National Institute of Standards and Technology (NIST) and the United States National Security Agency (NSA), under the National Information Assurance Partnership (NIAP) program. The complete certification report and others can be found at the following URL:
http://www.niap-ccevs.org/st/vid10299/

Dynamic LPAR operations allow clients to reallocate system resources among application partitions without rebooting. This simplifies overall systems administration and workload balancing, and enhances availability.

PowerVM Editions extend the base system functions to include IBM Micro-Partitioning and Virtual I/O Server capabilities. These are designed to allow businesses to increase system utilization, while helping to ensure that applications continue to get the resources they need. Micro-Partitioning technology can help lower costs by allowing the system to be finely tuned to consolidate multiple independent workloads. Micro-partitions can be defined as small as 1/10th of a processor, and can be changed in increments as small as 1/100th of a processor. Up to 10 micro-partitions can be created per core on a server.

The Virtual I/O Server allows for the sharing of expensive disk, tape, and optical devices, as well as communications and Fibre Channel adapters, to help drive down complexity, and system and administrative expenses. Also included is support for Multiple Shared Processor Pools. This support allows for automatic, nondisruptive balancing of processing power between partitions assigned to the shared pools. The result is increased throughput and the potential to reduce processor-based software licensing costs. Shared Dedicated Capacity is also included, which in turn optimizes the use of processor cycles.
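The dynamic LPAR operations described above can be driven from the HMC command line as well as from its graphical interface. The following sketch is illustrative only: the managed system name (p750_sys1) and partition name (aix_lpar1) are hypothetical, and the flags that are available can vary with the HMC level in use.

# Add 0.5 processing units to a running micro-partition without a reboot
chhwres -r proc -m p750_sys1 -o a -p aix_lpar1 --procunits 0.5

# Add one virtual processor to the same partition
chhwres -r proc -m p750_sys1 -o a -p aix_lpar1 --procs 1

# Add 1024 MB of memory
chhwres -r mem -m p750_sys1 -o a -p aix_lpar1 -q 1024

# Verify the resulting processor allocation
lshwres -r proc -m p750_sys1 --level lpar --filter "lpar_names=aix_lpar1"

Changes made this way take effect immediately but are not written back to the partition profile; update the profile as well if the new values are to survive a shutdown and reactivation of the partition.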
An uncapped partition enables the processing capacity of that partition to exceed its entitled capacity when the shared processing pool has available resources. This means that idle processor resource within a server can be used by any uncapped partition, resulting in an overall increase of the physical processor resource utilization. However, in an uncapped partition, the total number of virtual processors configured limits the total amount of physical processor resource that the partition can potentially consume.

As an example, assume that a server has eight physical processors. The uncapped partition is configured with two processing units (the equivalent of two physical processors) as its entitled capacity and four virtual processors. In this example, the partition is only able to use a maximum of four physical processors. This is because a single virtual processor can only consume a maximum equivalent of one physical processor. A dynamic LPAR operation to add more virtual processors would be required to enable the partition to potentially use more physical processor resources.

In the following sections, the three PowerVM Editions are described in detail.

1.1.1 PowerVM Express Edition

PowerVM Express Edition is offered only on the following server models:
520 and 550
710, 720, 730, 740, and 750
PS700, PS701, and PS702 Blades

It is designed for clients who want an introduction to advanced virtualization features at a highly affordable price. With PowerVM Express Edition, clients can create up to three partitions on a server (two client partitions and one for the Virtual I/O Server and Integrated Virtualization Manager). They can use virtualized disk and optical devices, and try out the shared processor pool. All virtualization features, such as Micro-Partitioning, Shared Processor Pool, Virtual I/O Server, PowerVM Lx86, Shared Dedicated Capacity, N_Port ID Virtualization, and Virtual Tape, can be managed by using the Integrated Virtualization Manager.

1.1.2 PowerVM Standard Edition

For clients ready to get the full value from their server, IBM offers PowerVM Standard Edition. This provides the most complete virtualization functionality for UNIX and Linux in the industry. This option is available for all IBM Power Systems servers.
  • 43. With PowerVM Standard Edition, you can create up to 254 partitions on a server. You can use virtualized disk and optical devices, and try out the shared processor pool. All virtualization features, such as Micro-Partitioning, Shared Processor Pool, Virtual I/O Server, PowerVM Lx86, Shared Dedicated Capacity, N_Port ID Virtualization, and Virtual Tape, can be managed by using an Hardware Management Console or the Integrated Virtualization Manager. The PowerVM Standard Edition also includes support for Multiple Shared Processor Pools, Shared Storage Pools, and Suspend/Resume.1.1.3 PowerVM Enterprise Edition PowerVM Enterprise Edition is offered exclusively on POWER6 (or later) servers. It includes all the features of PowerVM Standard Edition, plus Live Partition Mobility, Active Memory Sharing and Active Memory Expansion. Live Partition Mobility allows for the movement of a running partition from one POWER6 (or later) technology-based server to another with no application downtime. This results in better system utilization, improved application availability, and energy savings. With Live Partition Mobility, planned application downtime due to regular server maintenance can be a thing of the past. For more information about Live Partition Mobility, see Chapter 7, “PowerVM Live Partition Mobility” on page 339. Active Memory Sharing is an IBM PowerVM advanced memory virtualization technology that provides system memory virtualization capabilities to IBM Power Systems, allowing multiple logical partitions to share a common pool of physical memory. Active Memory Sharing can be exploited to increase memory utilization on the system either by decreasing the system memory requirement or by allowing the creation of additional logical partitions on an existing system. Active Memory Sharing allows overcommitment of memory resources. Because logical memory is mapped to physical memory depending on logical partitions’ memory demand, the sum of all logical partitions’ logical memory can exceed the shared memory pool’s size. Active Memory Sharing is available on POWER6 or later processor based systems, and supports AIX, IBM i and Linux. For more information about Active Memory Sharing, see IBM PowerVM Virtualization Active Memory Sharing, REDP-4470.6 IBM PowerVM Virtualization Managing and Monitoring
Active Memory Expansion is a technology that allows more data to be placed into memory; it uses memory compression technology to transparently compress in-memory data.

Table 1-1 describes each component of the PowerVM Editions feature, the editions in which each component is included, and the processor-based hardware on which each component is available.

Table 1-1 PowerVM Editions components, editions, and hardware support

Micro-Partitioning technology
Description: The ability to allocate processors to logical partitions in increments of 0.01, thus allowing multiple logical partitions to share the system's processing power.
PowerVM Edition: Express Edition, Standard Edition, Enterprise Edition
Hardware: POWER5 and later

Virtual I/O Server
Description: Software that facilitates the sharing of physical I/O resources between client logical partitions within the server.
PowerVM Edition: Express Edition, Standard Edition, Enterprise Edition
Hardware: POWER5 and later

Lx86
Description: A product that makes a Power system compatible with x86 applications. This extends the application support for Linux on Power systems, allowing applications that are available on x86 but not on Power systems to be run on the Power system.
PowerVM Edition: Express Edition, Standard Edition, Enterprise Edition
Hardware: POWER6 and later

Integrated Virtualization Manager
Description: The graphical interface of the Virtual I/O Server management partition on some servers that are not managed by a Hardware Management Console (HMC).
PowerVM Edition: Express Edition, Standard Edition, Enterprise Edition
Hardware: POWER5 and later

Suspend/Resume
Description: Provides long-term suspension of partition state (memory, NVRAM, and VSP state) and resumption on the same or a separate server.
PowerVM Edition: Standard Edition, Enterprise Edition
Hardware: POWER7 and later

N_Port ID Virtualization
Description: Provides direct access to Fibre Channel adapters from multiple client partitions.
PowerVM Edition: Express Edition, Standard Edition, Enterprise Edition
Hardware: POWER6 and later

Multiple Shared Processor Pools
Description: Allows creating a set of micro-partitions with the purpose of controlling the processor capacity that can be consumed from the physical shared-processor pool.
PowerVM Edition: Standard Edition, Enterprise Edition
Hardware: POWER6 and later

Shared Storage Pools
Description: A cluster of one Virtual I/O Server partition connected to the same shared storage pool and having access to distributed storage.
PowerVM Edition: Standard Edition, Enterprise Edition
Hardware: POWER5 and later

Live Partition Mobility
Description: The ability to migrate an active or inactive AIX or Linux logical partition from one system to another.
PowerVM Edition: Enterprise Edition
Hardware: POWER6 and later

Active Memory Sharing
Description: Memory virtualization technology that allows multiple logical partitions to share a common pool of physical memory.
PowerVM Edition: Enterprise Edition
Hardware: POWER6 and later
  • 46. 1.1.4 How to determine the PowerVM Edition As mentioned, PowerVM Edition is available in three options: Express Standard Enterprise You can determine the Edition available on your server by reviewing the VET code on the POD website at the following URL: http://www-912.ibm.com/pod/pod Use bits 25-28 from the VET code listed on the website. Here are examples of VET codes: 450F28E3D581AF727324000000000041FA B905E3D284DF097DCA1F00002C0000418F 0F0DA0E9B40C5449CA1F00002c20004102 The highlighted values are as follows: 0000 = PowerVM Express Edition 2c00 = PowerVM Standard Edition 2c20 = PowerVM Enterprise Edition1.1.5 Software licensing From a software licensing perspective, vendors have different pricing structures on which they license their applications running in an uncapped partition. Because an application has the potential of using more processor resources than the partition’s entitled capacity, many software vendors that charge on a per-processor basis require additional processor licenses to be purchased based on the possibility that the application might consume more processor resource than it is entitled. When deciding to implement an uncapped partition, check with your software vendor for more information about their licensing terms. For more information about software licensing, see the Software licensing in a virtualized environment section in the IBM Redbooks publication IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.1.2 Maintenance strategy Having a maintenance strategy is important in any computing environment. It is often recommended that you consider guidelines for updates and changes before going into production. Therefore, when managing a complex virtualization Chapter 1. Introduction 9
  • 47. configuration on a Power Systems server that runs dozens of partitions managed by different departments or customers, you have to plan maintenance window requests. There is no ideal single maintenance strategy for every enterprise and situation. Each maintenance strategy must be individually developed based on the business availability goals. PowerVM offers various techniques that enable you to avoid the need for service window requests. For example, Live Partition Mobility can be used to move a workload from one server to another without interruption. Dual Virtual I/O Server configuration allows Virtual I/O Server maintenance without disruption for clients. Combining Power Systems virtualization and SAN technologies allows you to create flexible and responsive implementation in which any hardware or software can be exchanged and upgraded. Cross-platform tools such as IBM Systems Director, Tivoli®, and Extreme Cloud Administration Toolkit (xCAT) offer single management interfaces for multiple physical and virtual systems. The Hardware Management Console allows you to manage multiple virtual systems on Power Systems. The Integrated Virtualization Manager manages virtual systems on a single server. The advantages provided by virtualization (infrastructure simplification, energy savings, flexibility, and responsiveness) also include manageability. Even if the managed virtual environment looks advanced, keep in mind that a virtualized environment replaces not a single server, but dozens and sometimes hundreds of hard-to-manage, minimally utilized, stand-alone servers.1.3 New features for Virtual I/O Server Version 2.2 IBM PowerVM technology has been enhanced to boost the flexibility of Power Systems servers with support for the following features: Role-based access control. You can define roles based on job functions in an organization by using role-based access control (RBAC). Dynamic add or remove of VLANs. You can add, remove, or modify the existing set of virtual local area networks (VLAN) for a virtual Ethernet adapter that is assigned to an active partition by using the Hardware Management Console (HMC).10 IBM PowerVM Virtualization Managing and Monitoring
  • 48. Support for Universal Serial Bus (USB) (DAT320)-attached tape devices. Support for POWER7 offerings. The Virtual I/O Server now supports the POWER7 offerings. Shared Storage Pools. Shared storage pools are storage pools that provide distributed storage access to the Virtual I/O Server logical partitions in a cluster, where each cluster can have up to four Virtual I/O Servers. Partition Suspend and Resume. You can suspend an AIX or Linux logical partition with its operating system and applications, and store its virtual server state to persistent storage. At a later time, you can resume the operation of the logical partition on the same or another server. Chapter 1. Introduction 11
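Many of these features depend on the installed Virtual I/O Server level. As a quick check before planning to use them, you can display the level with the ioslevel command; the output shown here is only an example:
$ ioslevel
2.2.1.3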
  • 50. 2 Chapter 2. Virtual storage management The Virtual I/O Server maps physical storage to virtual I/O clients. This chapter outlines the best practices for managing disk, tape, and optical storage in the virtual environment, keeping track of physical storage and allocating it to virtual I/O clients. The chapter describes maintenance scenarios such as replacing a physical disk on the Virtual I/O Server that is used as a backing device. It also covers migration scenarios, including moving a partition with virtual storage from one server to another and moving existing storage (for example, physical or dedicated) into the virtual environment where possible. This chapter contains the following sections: Disk mapping options Virtual optical devices Virtual tape devices Using file-backed virtual optical devices Mapping LUNs over vSCSI to hdisks Managing Shared Storage Pools Replacing a disk on the Virtual I/O Server Managing multiple storage security zones Storage planning with migration in mind Managing N_Port ID virtualization© Copyright IBM Corp. 2012. All rights reserved. 13
  • 51. 2.1 Disk mapping options The Virtual I/O Server presents disk storage to virtual I/O clients as virtual SCSI disks. These virtual disks must be mapped to physical storage by the Virtual I/O Server. There are several ways to perform this mapping, each with its own benefits: Physical volumes Logical volumes File backed devices Logical units The general rule for choosing between these options for dual Virtual I/O Server configurations is that disk devices being accessed through a SAN should be exported as physical volumes or logical units, with storage allocation managed in the SAN. If you need additional flexibility, better storage utilization, or simple administration for SAN disks, consider using logical units. Internal and SCSI-attached disk devices should be exported with either logical volumes or storage pools so that storage can be allocated in the server. Remember: When using IBM i client partitions, dedicated physical volumes (hdisks) should be mapped on the Virtual I/O Server for the best performance. Up to 16 virtual disk LUNs and up to 16 virtual optical LUNs are supported per IBM i virtual SCSI client adapter. This section explains how to map physical storage to file-backed devices. The mapping of physical storage to physical volumes and logical volumes is covered in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.2.1.1 Physical volumes The Virtual I/O Server can export physical volumes intact to virtual I/O clients. This method of exporting storage has several advantages over logical volumes: Physical disk devices can be exported from two or more Virtual I/O Servers concurrently for multipath redundancy. The code path for exporting physical volumes is shorter, which might lead to better performance. Physical disk devices can be moved from one Virtual I/O Server to another with relative ease. In certain cases, existing LUNs from physical servers can be migrated into the virtual environment with the data intact.14 IBM PowerVM Virtualization Managing and Monitoring
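As a simple illustration of this approach, a whole physical volume can be exported to a client with a single command on the Virtual I/O Server. The following sketch assumes that hdisk5 is the SAN LUN to be exported and that vhost0 is the virtual SCSI server adapter that serves the client; device and target names on your system will differ:
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev client1_rootvg
client1_rootvg Available
The resulting mapping can be verified afterward with the lsmap -vadapter vhost0 command.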
  • 52. One consideration for exporting physical volumes is that the size of the device is not managed by the Virtual I/O Server, and the Virtual I/O Server does not allow partitioning of a single device among multiple clients. This is generally only a concern for internal and SCSI-attached disks. There is no general requirement to subdivide SAN-attached disks because storage allocation can be managed at the storage server. In the SAN environment, provision and allocate LUNs for each LPAR on the storage servers and export them from the Virtual I/O Server as physical volumes. When a SAN disk is available, all storage associated with a virtual I/O client should be stored in the SAN, including rootvg and paging space. This makes management simpler because partitions will not be dependent on both internal logical volumes and external LUNs. It also makes it easier to move virtual servers from one Virtual I/O Server to another. For more information about this topic and a prerequisite if you want to use Live Partition Mobility, see Chapter 7, “PowerVM Live Partition Mobility” on page 339.2.1.2 Logical volumes The Virtual I/O Server can export logical volumes to virtual I/O clients. This method does have some advantages over physical volumes: Logical volumes can subdivide physical disk devices between clients. The logical volume interface is familiar to those who have AIX experience. Important: The rootvg on the Virtual I/O Server should not be used to host exported logical volumes because manual intervention might be required. Certain types of software upgrades and system restores might alter the logical volume to target device mapping for logical volumes within rootvg. When an internal or SCSI-attached disk is used, the logical volume manager (LVM) enables disk devices to be subdivided between virtual I/O clients. For small servers, this enables several virtual servers to share internal disks or RAID arrays. Best practices for exporting logical volumes The Integrated Virtualization Manager (IVM) and HMC-managed environments present two separate interfaces for storage management under different names. The storage pool interface under the IVM is essentially the same as the logical volume manager interface under the HMC, and in some cases, the documentation uses the terms interchangeably. The remainder of this chapter uses the term volume group to refer to both volume groups and storage pools, Chapter 2. Virtual storage management 15
  • 53. and the term logical volume to refer to both logical volumes and storage pool backing devices. Tip: The storage pool commands are also available on the HMC. Logical volumes enable the Virtual I/O Server to subdivide a physical volume between multiple virtual I/O clients. In many cases, the physical volumes used will be internal disks, or RAID arrays built of internal disks. A single volume group should not contain logical volumes used by virtual I/O clients and logical volumes used by the Virtual I/O Server operating system. Keep Virtual I/O Server file systems within the rootvg, and use other volume groups to host logical volumes for virtual I/O clients. A single volume group or logical volume cannot be accessed by two Virtual I/O Servers concurrently. Do not attempt to configure MPIO on virtual I/O clients for VSCSI devices that reside on logical volumes. If redundancy is required in logical volume configurations, use LVM mirroring on the virtual I/O client to mirror across logical volumes on different Virtual I/O Servers. Although logical volumes that span multiple physical volumes are supported, a logical volume should reside wholly on a single physical volume for optimum performance. To guarantee this, volume groups can be composed of single physical volumes. Remember: Keeping an exported storage pool backing device or logical volume on a single hdisk results in optimized performance. When exporting logical volumes to clients, the mapping of individual logical volumes to virtual I/O clients is maintained in the Virtual I/O Server. The additional level of abstraction provided by the logical volume manager makes it important to track the relationship between physical disk devices and virtual I/O clients. For more information, see 2.5, “Mapping LUNs over vSCSI to hdisks” on page 27. Storage pools When managed by the Integrated Virtualization Manager (IVM), the Virtual I/O Server can export storage pool backing devices to virtual I/O clients. This method is similar to logical volumes, and has some advantages over physical volumes: Storage pool backing devices can subdivide physical disk devices between separate clients. The storage pool interface is easy to use through IVM.16 IBM PowerVM Virtualization Managing and Monitoring
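As an illustration of the storage pool approach, the following sketch creates a storage pool on an internal disk and then creates and maps a 20 GB backing device for a client in one step. The pool, disk, size, and adapter names are examples only, and you should verify the command options on your Virtual I/O Server level:
$ mksp -f clientpool hdisk2
$ mkbdsp -sp clientpool 20G -bd client1_lv -vadapter vhost0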
  • 54. Important: The default storage pool in IVM is the root volume group of the Virtual I/O Server. Be careful not to allocate backing devices within the root volume group because certain types of software upgrades and system restores might alter the logical volume to target device mapping for logical volumes in rootvg, requiring manual intervention. Systems in a single server environment under the management of IVM are often not attached to a SAN, and these systems typically use internal and SCSI-attached disk storage. The IVM interface allows storage pools to be created on physical storage devices so that a single physical disk device can be divided among several virtual I/O clients. As with logical volumes, storage pool backing devices cannot be accessed by multiple Virtual I/O Servers concurrently, so they cannot be used with MPIO on the virtual I/O client. If redundancy is required, use LVM mirroring on the virtual I/O client.2.1.3 File-backed devices Starting with Version 1.5 of Virtual I/O Server, there is a feature called file-backed virtual SCSI devices. This feature provides additional flexibility for provisioning and managing virtual SCSI devices. In addition to backing a virtual SCSI device (disk or optical) by physical storage, a virtual SCSI device can be backed by a file. File-backed virtual SCSI devices continue to be accessed as standard SCSI-compliant storage. Remember: If LVM mirroring is used in the client, make sure that each mirror copy is placed on a separate disk, and mirrored across storage pools.2.1.4 Logical units Starting with Virtual I/O Server Version 2.2.0.11, Fix Pack 24 Service Pack 1, or later, there is a new feature called logical units. This feature provides flexibility for provisioning and better storage utilization for disk devices being accessed through a SAN. Logical units are created in a shared storage pool, can be thin or thick provisioned and are accessed as standard SCSI storage on a client partition. Virtual disks from a shared storage pool support the persistent reservation. Chapter 2. Virtual storage management 17
  • 55. The shared storage pool has to be defined on a Virtual I/O Server cluster specific to this purpose. Starting with version 2.2.1.3, Fix Pack 25 Service Pack 1, a Virtual I/O Server cluster can have 1 to 4 nodes, and storage devices can be shared among them to create a shared storage pool with a maximum capacity of 128 TB. Supported sizes for the logical units allocated from the pool are from 1 GB to 4 TB. With thin provisioned logical units, only a minimal amount of space is initially used on the physical disks. However, if the logical unit is created as thick provisioned, it immediately occupies space on the disks according to its defined size.2.2 Virtual optical devices The Virtual I/O Server support for virtual optical devices allows sharing of a physical CD or DVD drive assigned to the Virtual I/O Server between multiple AIX, IBM i, and Linux client partitions. The shared optical drive can be accessed only by one virtual I/O client partition at a time. For further information about setting up and managing virtual optical devices for client partitions, see Chapter 3, Setting up virtualization: the basics, in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.2.3 Virtual tape devices Virtual tape devices are assigned and operated similarly to virtual optical devices. Only one virtual I/O client can have access at a time. The advantage of a virtual tape device is that you do not have to move the parent SCSI adapter between virtual I/O clients. Restriction: The virtual tape drive cannot be moved to another Virtual I/O Server because client SCSI adapters cannot be created in a Virtual I/O Server. If you want the tape drive in another Virtual I/O Server, the virtual device must be unconfigured and the parent SAS adapter must be unconfigured and moved using dynamic LPAR. The physical tape drive is a SAS device, but it is mapped to virtual clients as a virtual SCSI device. If the tape drive is to be used locally in the parent Virtual I/O Server, the virtual device must be unconfigured first.18 IBM PowerVM Virtualization Managing and Monitoring
  • 56. The following sections describe how to manage a shared virtual tape device for use between multiple partitions. For information about how to set up a virtual tape device on the Virtual I/O Server and client partitions, see Chapter 3 of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.2.3.1 Moving the virtual tape drive Before you can use a virtual tape drive in a particular client partition, you must move it if another client partition currently holds the virtual tape drive that is backed by the physical tape drive you want to use. Tip: Moving the virtual optical device is similar to the following scenario for moving the virtual tape drive. Moving the virtual tape drive on AIX If your documentation does not provide the vscsi adapter number, you can find it by using the lscfg|grep Cn command, where n is the slot number of the virtual client adapter from the HMC. 1. Use the rmdev -Rl vscsin command to change the vscsi adapter and the tape drive to a defined state in the AIX client partition that holds the drive. Adding the -d option also removes the adapter from the ODM. 2. Using the cfgmgr command in the target LPAR will make the drive available. Remember: If the tape drive is not assigned to another LPAR, the drive will show up as an install device in the SMS menu. Moving the virtual tape drive on IBM i Deallocating or allocating a shared virtual tape drive on IBM i works the same way as for a shared virtual optical device, which is described in more detail in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. To deallocate a shared virtual device on the IBM i client partition, it should be varied off first, then released for use by another Virtual I/O Server client partition by resetting the IBM i virtual IOP. To allocate a shared virtual device, if the virtual IOP is in an inoperative state, the virtual IOP needs to be re-IPLed first. When the IOP is operational, the shared virtual device can be varied on. Chapter 2. Virtual storage management 19
  • 57. Moving the virtual tape drive on Linux If your documentation does not provide the vscsi adapter number, you can find it by using the lscfg|grep Cn command, where n is the slot number of the virtual client adapter from the HMC. Important: If the virtual SCSI server adapter on the Virtual I/O Server is configured with the Any client partition can connect option and shared among Linux client partitions, the virtual tape drive cannot be moved from the running Linux client partition because the virtual SCSI adapter on the Linux client cannot be removed by Linux commands. If you want to remove the virtual SCSI adapter on the Linux, remove the virtual SCSI adapter using the DLPAR operation on the HMC. In this situation, another way to move the virtual tape drive is to shut down the Linux client partition holding the virtual tape drive. If you want to move the virtual tape drive on the Linux client partition without the DLPAR or shutdown operation, configure each virtual SCSI server-client pairs for each client partitions without the Any client partition can connect option. 1. Type echo 1 > /sys/block/st0/device/delete to remove the tape drive from the Linux client partition that holds the drive. 2. On the Virtual I/O Server, remove the virtual target device for the virtual tape drive and map the physical tape drive to the target client partition. 3. Type echo “- - -” > /sys/class/scsi_host/hostX/scan to recognize the tape drive on the target LPAR where X stands for the SCSI bus you want to scan.2.3.2 Finding the partition that holds the virtual tape drive You can use the dsh command to find the AIX client currently holding the drive, as shown in Example 2-1 on page 21. dsh is installed by default in AIX. You can use dsh with rsh, ssh, or Kerberos authentication as long as dsh can run commands without being prompted for a password. When using SSH, a key exchange is required to be able to run commands without being prompted for password. The dshbak command sorts the output by target system.20 IBM PowerVM Virtualization Managing and Monitoring
  • 58. Set the DSH_REMOTE_CMD=/usr/bin/ssh variable if you use SSH forauthentication:# export DSH_REMOTE_CMD=/usr/bin/ssh# export DSH_LIST=<file listing lpars># dsh lsdev -Cc tape | dshbakExample 2-1 Finding which LPAR is holding the tape drive using dsh# dsh lsdev -Cc tape | dshbakHOST: app-aix61-TL2-------------------rmt0 Defined Virtual Tape DriveHOST: db-aix61-TL2------------------rmt0 Available Virtual Tape DriveIt is useful to put the DSH_LIST and DSH_REMOTE_CMD definitions in .profileon your admin server. You can change the file containing names of target virtualservers without redefining DSH_LIST. Tip: If some partitions do not appear in the list, it is usually because the drive has never been assigned to the partition or was completely removed with the -d option.Alternatively, you can use the ssh command as shown in Example 2-2.Example 2-2 Finding which LPAR is holding the optical drive using ssh# for i in db-aix61-TL2 app-aix61-TL2> do> echo $i; ssh $i lsdev -Cc tape> donedb-aix61-TL2rmt0 Available Virtual Tape Driveapp-aix61-TL2rmt0 Defined Virtual Tape DriveIf you have Linux or/and IBM i client, to find the Partition ID of the partitionholding the drive, use the lsmap -all command on the Virtual I/O Server. Tip: AIX6 offers a graphical interface to system management called IBM Systems Console for AIX. This has a menu setup for dsh. Chapter 2. Virtual storage management 21
  • 59. 2.3.3 Unconfiguring a virtual tape drive for local use Follow these steps to unconfigure the virtual tape drive when it is going to be used in the Virtual I/O Server for local backups: 1. Release the drive from the partition holding it. 2. Unconfigure the virtual device in the Virtual I/O Server with the rmdev -dev name -ucfg command. 3. When you finish using the drive locally, use the cfgdev command in the Virtual I/O Server to restore the drive as a virtual drive.2.3.4 Unconfiguring a virtual tape drive to be moved Follow these steps to unconfigure the virtual tape drive in one Virtual I/O Server when it is going to be moved physically to another partition. 1. Release the drive from the partition holding it. 2. Unconfigure the virtual device in the Virtual I/O Server. 3. Unconfigure the SAS adapter recursively with the rmdev -dev adapter -recursive -ucfg command. The correct adapter can be identified with the lsdev -slots command. Tip: You can use the lsdev -slots command to display all adapters that could be subject to dynamic LPAR operations. If you cannot display the correct adapter using the lsdev -slots command, use the lsdev -dev adapter -parent command to find the correct parent adapter. 4. Use the HMC to move the adapter to the target partition. 5. Run the cfgmgr command on an AIX partition, or the cfgdev command for a Virtual I/O Server partition, to configure the drive. 6. When finished, remove the SAS adapter recursively. 7. Use the HMC to move the adapter back to the Virtual I/O Server. 8. Run the cfgdev command to configure the drive and the virtual SCSI adapter in the Virtual I/O Server partition. This will make the virtual tape drive available again for client partitions.22 IBM PowerVM Virtualization Managing and Monitoring
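For reference, steps 2, 3, and 8 of this procedure might look like the following on the Virtual I/O Server command line. The device names vttape0 and sissas0 are examples; use the names reported by the lsmap -all and lsdev -slots commands on your system:
$ rmdev -dev vttape0 -ucfg
$ lsdev -slots
$ rmdev -dev sissas0 -recursive -ucfg
After the adapter has been moved back to the Virtual I/O Server with the HMC, reconfigure the devices:
$ cfgdev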
  • 60. 2.4 Using file-backed virtual optical devices With file-backed devices it is possible, for example, to use an ISO image as a virtual device and share it among all the partitions on your system, such as a virtualized optical drive. First, verify that the version of Virtual I/O Server is 1.5 or later. The virtual media repository is used to store virtual optical media that can be conceptually inserted into file-backed virtual optical devices. To check this, log into the Virtual I/O Server using the padmin user and run the ioslevel command. The output of the command should be similar to Example 2-3. Example 2-3 Checking the version of the Virtual I/O Server $ ioslevel 2.1.0.1-FP-20.0 After you are sure that you are running the right version of the Virtual I/O Server, you can check whether a virtual media repository has already been created. If it has, you can use the lsrep command to list and display information about it. If you receive output similar to Example 2-4, it means that you do not have a virtual media repository set up yet. Example 2-4 Checking whether any virtual media repository is already defined $ lsrep The DVD repository has not been created yet. Use the mkrep command to define it. The command creates the virtual media repository in the specific storage pool. To list the storage pools defined on the Virtual I/O Server, use the lssp command. Restriction: The virtual media repository cannot be created in a shared storage pool. In Example 2-5, there is only one storage pool defined (rootvg). This storage pool can be used to create a virtual media repository with 14 GB (enough space to fit 3 DVD images of 4.7 GB each). After that, recheck the virtual media repository definition with the lsrep command. Example 2-5 List of available storage pools and defining a virtual media repository $ lssp Pool Size(mb) Free(mb) Alloc Size(mb) BDs Type Chapter 2. Virtual storage management 23
  • 61. rootvg 69888 46848 128 0 LVPOOL $ mkrep -sp rootvg -size 14G Virtual Media Repository Created Repository created within "VMLibrary_LV" logical volume $ lsrep Size(mb) Free(mb) Parent Pool Parent Size Parent Free 14277 14277 rootvg 69888 32512 Next, copy the ISO image to the Virtual I/O Server. You can use Secure Copy (SCP) to accomplish this. After the file is uploaded to the Virtual I/O Server, you can create a virtual optical media disk in the virtual media repository by using the mkvopt command. Tip: By default, the virtual optical disk is created as DVD-RAM media. If the -ro flag is specified with the mkvopt command, the disk is created as DVD-ROM media. In Example 2-6, a virtual optical media disk named ibm_directory_cd1 is created from the tds61-aix-ppc64-cd1.iso ISO image located on the /home/padmin directory. Using the -f flag with the mkvopt command will copy the ISO file from its original location into the repository. Thus, after executing this command, the file from the /home/padmin directory can be removed because it will be stored on the virtual media repository. You can check the repository configuration by using the lsrep command. Example 2-6 Creating a virtual optical media disk in the virtual media repository $ mkvopt -name ibm_directory_cd1 -file /home/padmin/tds61-aix-ppc64-cd1.iso $ rm tds61-aix-ppc64-cd1.iso rm: Remove tds61-aix-ppc64-cd1.iso? y $ lsrep Size(mb) Free(mb) Parent Pool Parent Size Parent Free 14278 13913 rootvg 69888 32512 Name File Size Optical Access ibm_directory_cd1 365 None rw Alternatively, you can create an ISO image directly from the CD/DVD drive as shown in Example 2-7. The default path for the ISO image that is created will be /var/vio/VMLibrary. Thus, file.iso will be stored as /var/vio/VMLibrary/file.iso. Example 2-7 Creating an iso image from CD/DVD drive $ mkvopt -name file.iso -dev cd0 -ro $ lsrep24 IBM PowerVM Virtualization Managing and Monitoring
  • 62. Size(mb) Free(mb) Parent Pool Parent Size Parent Free 10198 5715 clientvg 355328 340992 Name File Size Optical Access file.iso 4483 None ro $ ls /var/vio/VMLibrary/ file.iso lost+found Now that a file-backed virtual optical device has been created, you need to map it to the virtual server adapter. Because it is not possible to use the optical virtual device created in Example 2-6 on page 24 as an input to the mkvdev command, a special virtual device is required. This is a special type of virtual target device known as a virtual optical device. You can create it by using the mkvdev command with the -fbo flag. The creation of this new virtual adapter is shown in Example 2-8. Example 2-8 Creating an optical virtual target device $ mkvdev -fbo -vadapter vhost1 vtopt0 Available The virtual optical device cannot be used until the virtual media is loaded into the device. Use the loadopt command to load the media as shown in Example 2-9.Example 2-9 Loading the virtual media on the virtual target device$ loadopt -disk ibm_directory_cd1 -vtd vtopt0$ lsmap -vadapter vhost1SVSA Physloc Client Partition ID--------------- -------------------------------------------- ------------------vhost1 U9117.MMA.100F6A0-V1-C20 0x00000002VTD vnim_rvgStatus AvailableLUN 0x8100000000000000Backing device hdisk12Physloc U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-L12000000000000VTD vtopt0Status AvailableLUN 0x8300000000000000Backing device /var/vio/VMLibrary/ibm_directory_cd1Physloc U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-L13000000000000 When this is done, it is needed to recognize the virtual optical device on the client partition. If you select this unit as an installation media location, you will be able Chapter 2. Virtual storage management 25
  • 63. to see the contents of the ISO file on the Virtual I/O Server and use it as a DVD-RAM unit On the AIX client, run the cfgmgr command to configure the virtual optical device and a new CD-ROM unit will appear on AIX. On the IBM i client, with the default system value QAUTOCFG=1 setting, the mapped file-backed virtual optical device is automatically configured and available as an optical device with type-model 632C-002 and OPTxx resource name. On the Linux client, type echo “- - -” > /sys/class/scsi_host/hostX/scan to configure the virtual optical device where X stands for the SCSI bus you want to scan. Example 2-10 displays the new cd1 device that we created in our AIX client, along with a list of its contents.Example 2-10 Checking the virtual optical device contents on a AIX client# cfgmgr# lsdev -C |grep cdcd0 Defined Virtual SCSI Optical Served by VIO Servercd1 Available Virtual SCSI Optical Served by VIO Server# lscfg -vl cd1 cd1 U9117.MMA.100F6A0-V2-C20-T1-L830000000000 Virtual SCSI Optical Served byVIO Server# installp -L -d /dev/cd1gskjs:gskjs.rte:7.0.3.30::I:C:::::N:AIX Certificate and SSL Java only Base Runtime::::0::gsksa:gsksa.rte:7.0.3.30::I:C:::::N:AIX Certificate and SSL Base Runtime ACME Toolkit::::0::gskta:gskta.rte:7.0.3.30::I:C:::::N:AIX Certificate and SSL Base Runtime ACME Toolkit::::0:: After you are done with one disk, you might want to insert another disk. To do so, you must create a new virtual optical media as described in Example 2-6 on page 24. In Example 2-11, we create three more virtual disk media. The example illustrates how to check which virtual media device is loaded. It also shows how to unload a virtual media device (ibm_directory_cd1) from the virtual target device (vtopt0) and load a new virtual media device (ibm_directory_cd13) into it. To unload the media, use the unloadopt command with the -vtd flag and the virtual target device. To load a new virtual media, reuse the loadopt command as described in Example 2-9 on page 25. Example 2-11 Loading a new disk on the virtual media device $ lsrep Size(mb) Free(mb) Parent Pool Parent Size Parent Free26 IBM PowerVM Virtualization Managing and Monitoring
  • 64. 14279 13013 rootvg 69888 32384 Name File Size Optical Access ibm_directory_cd1 365 vtopt0 rw ibm_directory_cd2 308 None rw ibm_directory_cd3 351 None rw ibm_directory_cd4 242 None rw $ unloadopt -vtd vtopt0 $ loadopt -disk ibm_directory_cd3 -vtd vtopt0 $ lsrep Size(mb) Free(mb) Parent Pool Parent Size Parent Free 14279 13013 rootvg 69888 32384 Name File Size Optical Access ibm_directory_cd1 365 None rw ibm_directory_cd2 308 None rw ibm_directory_cd3 351 vtopt0 rw ibm_directory_cd4 242 None rw Note that you do not need to unload virtual media each time you want to load a new virtual media. Instead, you can create more than one virtual target device (as shown in Example 2-8 on page 25) and load individual virtual media on new virtual optical devices (as shown in Example 2-9 on page 25).2.5 Mapping LUNs over vSCSI to hdisks An important aspect of managing a virtual environment is keeping track of which virtual objects correspond to which physical objects. This is particularly challenging in the storage arena, where individual virtual servers can have hundreds of virtual disks. This mapping is critical to managing performance and to understanding which systems will be affected by hardware maintenance. Chapter 2. Virtual storage management 27
  • 65. As illustrated in Figure 2-1, virtual disks can be mapped to physical disks as physical volumes or as logical volumes. Logical volumes can be mapped from volume groups or storage pools. Logical Volume Mapping Physical Disk Mapping Client Partition Client Partition hdisk1 hdisk1 Client VSCSI Client VSCSI Adapter Adapter Server VSCSI Server VSCSI Virtual Adapter Virtual Adapter Target Target Free Physical SCSI Physical FC LV1 Adapter hdisk15 Adapter LV2 SCSI Disk Server Partition Server Partition FC Switch Physical Resources FC LUN Virtual ResourcesFigure 2-1 Logical versus physical drive mapping Depending on which method you choose, you might need to track the following information: Virtual I/O Server: – Server host name. – Physical disk location. – Physical adapter device name. – Physical hdisk device name. – Cluster name (for shared storage pool export only). – Volume group or storage pool name (for logical volume or storage pool export only).28 IBM PowerVM Virtualization Managing and Monitoring
  • 66. – Logical volume or storage pool backing device name (for logical volume or storage pool export only). – Virtual SCSI adapter slot. – Virtual SCSI adapter device name. – Virtual target device. Virtual I/O client: – Client host name. – Virtual SCSI adapter slot. – Virtual SCSI adapter device name. – Virtual disk device name. You can use the System Planning Tool (SPT) for planning and documenting your configuration. The SPT system plan can be deployed through an HMC or the Integrated Virtualization Manager (IVM). It ensures correct naming and numbering. For more information about deploying a SPT system plan, see Chapter 9, “System Planning Tool” on page 369.2.5.1 Naming conventions A well-planned naming convention is key to managing information. One strategy for reducing the amount of data that must be tracked is to make settings match on the virtual I/O client and server wherever possible. This can include corresponding volume group, logical volume, and virtual target device names. Integrating the virtual I/O client host name into the virtual target device name can simplify tracking on the server. When using Fibre Channel disks on a storage server that supports LUN naming, this feature can be used to make it easier to identify LUNs. Commands such as lssdd for the IBM System Storage® DS8000® and DS6000™ series storage servers, and the fget_config or mpio_get_config command for the IBM DS4000® series, can be used to match hdisk devices with LUN names. Chapter 2. Virtual storage management 29
  • 67. The fget_config command is part of a storage device driver. Therefore, prior to Virtual I/O Server version 2.1, you cannot use the fget_config command from the ioscli command line. Instead, you must use the oem_setup_env command as shown in Example 2-12. Example 2-12 The fget_config command for the DS4000 series $ oem_setup_env # fget_config -Av ---dar0--- User array name = FAST200 dac0 ACTIVE dac1 ACTIVE Disk DAC LUN Logical Drive utm 31 hdisk4 dac1 0 Server1_LUN1 hdisk5 dac1 1 Server1_LUN2 hdisk6 dac1 2 Server-520-2-LUN1 hdisk7 dac1 3 Server-520-2-LUN2 In many cases, using LUN names can be simpler than tracing devices using Fibre Channel world wide port names and numeric LUN identifiers. The Virtual I/O Server version 2.1 uses MPIO as a default device driver. Example 2-13 shows the listing of a DS4800 disk subsystem. Proper User Label naming in the SAN makes it much easier to track the LUN-to-hdisk relation. Example 2-13 SAN storage listing on the Virtual I/O Server version 2.1 $ oem_setup_env # mpio_get_config -Av Frame id 0: Storage Subsystem worldwide name: 60ab800114632000048ed17e Controller count: 2 Partition count: 1 Partition 0: Storage Subsystem Name = ITSO_DS4800 hdisk LUN # Ownership User Label hdisk6 0 A (preferred) VIOS1 hdisk7 1 A (preferred) AIX61 hdisk8 2 B (preferred) AIX53 hdisk9 3 A (preferred) SLES10 hdisk10 4 B (preferred) RHEL52 hdisk11 5 A (preferred) IBMi61_0 hdisk12 6 B (preferred) IBMi61_130 IBM PowerVM Virtualization Managing and Monitoring
  • 68. hdisk13 7 A (preferred) IBMi61_0m hdisk14 8 B (preferred) IBMi61_1m2.5.2 Virtual device slot numbers After you establish the naming conventions, also establish slot numbering conventions for the virtual I/O adapters. All Virtual SCSI and Virtual Ethernet devices have slot numbers. In complex systems, there will tend to be far more storage devices than network devices because each virtual SCSI device can only communicate with one server or client. One common example of slot number assignment is to reserve lower slot numbers for Ethernet adapters. To avoid mixing slot number ranges, allow for growth both for Ethernet and SCSI devices. Virtual I/O Servers typically have many more virtual adapters than client partitions. Remember: Several disks can be mapped to the same server-client SCSI adapter pair. Management can be simplified by keeping slot numbers consistent between the virtual I/O client and server. However, when partitions are moved from one server to another, this might not be possible. In environments with only one Virtual I/O Server, add storage adapters incrementally starting with slot 21 and higher. When clients are attached to two Virtual I/O Servers, the adapter slot numbers should be alternated from one Virtual I/O Server to the other. The first Virtual I/O Server should use odd numbered slots starting at 21, and the second should use even numbered slots starting at 22. In a two-server scenario, allocate slots in pairs, with each client using two adjacent slots such as 21 and 22, or 33 and 34. Set the maximum virtual adapters number to at least 100. As shown in Figure 2-2 on page 32, the default value is 10 when you create an LPAR. The appropriate number for your environment depends on the number of virtual servers and adapters expected on each system. Each unused virtual adapter slot consumes a small amount of memory, so the allocation should be balanced. Use the System Planning Tool available from the following URL to plan memory requirements for your system configuration: http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.html Chapter 2. Virtual storage management 31
  • 69. Important: When you plan for the number of virtual I/O slots on your LPAR, the maximum number of virtual adapter slots available on a partition is set by the partition’s profile. To increase the maximum number of virtual adapters you must change the profile, stop the partition (not just a reboot), and start the partition. To add new virtual I/O clients without shutting down the LPAR or Virtual I/O Server partition, leave plenty of room for expansion when setting the maximum number of slots. The maximum number of virtual adapters should not be set higher than 1024 as it can cause performance problems.Figure 2-2 Setting maximum number of virtual adapters in a partition profile32 IBM PowerVM Virtualization Managing and Monitoring
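If you prefer the HMC command line for this change, the maximum number of virtual adapters is an attribute of the partition profile. The following example is only a sketch in which the managed system name, profile name, and partition name are placeholders; remember that the partition must be shut down and reactivated with the changed profile before the new value takes effect:
$ chsyscfg -r prof -m <managed-system> -i "name=default,lpar_name=vios1,max_virtual_slots=200"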
  • 70. Because virtual SCSI connections operate at memory speed, there is generally no performance gain from adding multiple adapters between a Virtual I/O Server and client. However, multiple virtual adapters should be configured when you are using the multiple storage subsystems for availability. See 4.5.1 AIX LVM mirroring in the client partition, of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 for more details. For AIX virtual I/O client partitions, each adapter pair can handle up to 85 virtual devices with the default queue depth of three. For IBM i clients, up to 16 virtual disk and 16 optical devices are supported. For Linux clients, by default, up to 192 virtual SCSI targets are supported. In situations where virtual devices per partition are expected to exceed that number, or where the queue depth on certain devices might be increased above the default, reserve additional adapter slots for the Virtual I/O Server and the virtual I/O client partition. When tuning queue depths, the VSCSI adapters have a fixed queue depth. There are 512 command elements, of which 2 are used by the adapter, 3 are reserved for each VSCSI LUN for error recovery, and the rest are used for I/O requests. Thus, with the default queue depth of 3 for VSCSI LUNs, that allows for up to 85 LUNs to use an adapter: (512 - 2) / (3 + 3) = 85 rounding down. If you need higher queue depths for the devices, the number of LUNs per adapter is reduced. For example, if you want to use a queue depth of 25, that allows 510/28 = 18 LUNs per adapter for an AIX client partition. For Linux clients, the maximum number of LUNs per virtual SCSI adapter is decided by the max_id and max_channel parameters. The max_id is set to 3 by default and can be increased to 7. The max_channel is set to 64 by default, which is the maximum value. With the default values, the Linux client can have 3 * 64 = 192 virtual SCSI targets. Note that if you overload an adapter, your performance will be reduced.2.5.3 Tracing a configuration It might become necessary to manually trace a client virtual disk back to the physical hardware. AIX virtual storage configuration tracing AIX virtual storage (including NPIV and logical units from a shared storage pools) can be traced from Virtual I/O Server using the lsmap command. Example 2-14 illustrates tracing virtual SCSI storage from the Virtual I/O Server.Example 2-14 Tracing virtual SCSI storage from Virtual I/O Server$ lsmap -allSVSA Physloc Client Partition ID Chapter 2. Virtual storage management 33
  • 71. --------------- -------------------------------------------- ------------------vhost0 U9117.MMA.101F170-V1-C21 0x00000003VTD aix61_rvgStatus AvailableLUN 0x8100000000000000Backing device hdisk7Physloc U789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L1000000000000SVSA Physloc Client Partition ID--------------- -------------------------------------------- ------------------vhost1 U9117.MMA.101F170-V1-C22 0x00000004VTD NO VIRTUAL TARGET DEVICE FOUND Example 2-15 illustrates how to trace NPIV storage devices from the Virtual I/O Server. ClntID shows the LPAR ID as seen from the HMC. ClntName depicts the host name. For more information about NPIV, see 2.10, “Managing N_Port ID virtualization” on page 74. Example 2-15 Tracing NPIV virtual storage from the Virtual I/O Server $ lsmap -npiv -all Name Physloc ClntID ClntName ClntOS ============= ================================== ====== ============== ======= vfchost0 U9117.MMA.101F170-V1-C31 3 AIX61 AIX Status:LOGGED_IN FC name:fcs3 FC loc code:U789D.001.DQDYKYW-P1-C6-T2 Ports logged in:2 Flags:a<LOGGED_IN,STRIP_MERGE> VFC client name:fcs2 VFC client DRC:U9117.MMA.101F170-V3-C31-T1 Name Physloc ClntID ClntName ClntOS ============= ================================== ====== ============== ======= vfchost1 U9117.MMA.101F170-V1-C32 4 Status:NOT_LOGGED_IN FC name:fcs3 FC loc code:U789D.001.DQDYKYW-P1-C6-T2 Ports logged in:0 Flags:4<NOT_LOGGED> VFC client name: VFC client DRC:34 IBM PowerVM Virtualization Managing and Monitoring
  • 72. Example 2-16 shows how to trace storage assigned from a shared storage pool.Example 2-16 List all disk mappings in a cluster$ lsmap -clustername ssp_cluster -allPhysloc Client Partition ID------------------------------------------------- ------------------U8233.E8B.061AA6P-V33-C136 0x00000024VTD vtscsi0LUN 0x8100000000000000Backing device sspdisk06.f578cbb7b4d930ccbe3abc27f8f62376Physloc Client Partition ID------------------------------------------------- ------------------U8233.E8B.100EF5R-V1-C104 0x00000004VTD vtscsi0LUN 0x8100000000000000Backing device sspdisk01.198d854abebe7e965214d8360eae60feThe IBM Systems Hardware Information Center contains a guide to tracingvirtual disks, which is available at the following address:http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphb1/iphb1_vios_managing_mapping.htmIBM i virtual storage configuration tracingThis sections describes how to trace the configuration of virtual SCSI and virtualFibre Channel LUNs from the IBM i client perspective down to the actual physicalstorage resource.IBM i virtual SCSI disk configuration tracingVirtual SCSI LUNs always show up as disk units with device type 6B22 model050 on the IBM i client, regardless of which storage subsystem ultimatelyprovides the backing physical storage. Chapter 2. Virtual storage management 35
  • 73. Figure 2-3 shows an example of the disk configuration from a new IBM i client after SLIC install that is set up for mirroring its disk units across two Virtual I/O Servers. Display Disk Configuration Status Serial Resource ASP Unit Number Type Model Name Status 1 Mirrored 1 Y3WUTVVQMM4G 6B22 050 DD001 Active 1 YYUUH3U9UELD 6B22 050 DD004 Resume Pending 2 YD598QUY5XR8 6B22 050 DD003 Active 2 YTM3C79KY4XF 6B22 050 DD002 Resume Pending Press Enter to continue. F3=Exit F5=Refresh F9=Display disk unit details F11=Disk configuration capacity F12=Cancel Figure 2-3 IBM i SST Display Disk Configuration Status panel36 IBM PowerVM Virtualization Managing and Monitoring
  • 74. To trace the IBM i disk units to the corresponding SCSI devices on the Virtual I/OServer, we look at the disk unit details information as shown in Figure 2-4. Display Disk Unit Details Type option, press Enter. 5=Display hardware resource information details Serial Sys Sys I/O I/O OPT ASP Unit Number Bus Card Adapter Bus Ctl Dev Compressed 1 1 Y3WUTVVQMM4G 255 21 0 1 0 No 1 1 YYUUH3U9UELD 255 22 0 2 0 No 1 2 YD598QUY5XR8 255 21 0 2 0 No 1 2 YTM3C79KY4XF 255 22 0 1 0 No F3=Exit F9=Display disk units F12=CancelFigure 2-4 IBM i SST Display Disk Unit Details panel Tip: To trace down an IBM i virtual disk unit to the corresponding virtual target device (VTD) and backing hdisk on the Virtual I/O Server, use the provided system card Sys Card and controller Ctl information from the IBM i client as follows: Sys Card shows the IBM i virtual SCSI client adapter slot as configured in the IBM i partition profile Ctl XOR 0x80 corresponds to the virtual target device LUN information about the Virtual I/O Server Chapter 2. Virtual storage management 37
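The XOR arithmetic from this tip can be checked quickly in any shell. This is only an illustration of the calculation, not a Virtual I/O Server command; here Ctl 1 maps to LUN 0x81:
$ printf "0x%X\n" $((0x01 ^ 0x80))
0x81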
  • 75. The following example illustrates how to trace the IBM i mirrored load source (disk unit 1) reported on IBM i at Sys Card 21 Ctl 1 and Sys Card 22 Ctl 2 down to the devices on the two Virtual I/O Servers and the SAN storage system. 1. Look at the virtual adapter mapping in the IBM i partition properties on the HMC, as shown in Figure 2-5. Because the Sys Card 21 and 22 information from the IBM i client corresponds to the virtual SCSI adapter slot numbers, the partition properties information shows that the IBM i client virtual SCSI client adapters 21 and 22 connect to the virtual SCSI server adapters 23 and 23 of the Virtual I/O Server partitions vios1 and vios2.Figure 2-5 IBM i partition profile virtual adapters configuration38 IBM PowerVM Virtualization Managing and Monitoring
  • 76. 2. Knowing the corresponding virtual SCSI server adapter slots 23 and 23, look at the device mapping on our two Virtual I/O Servers. The lsmap command on Virtual I/O Server vios1 shows the device mapping between physical and virtual devices shown in Example 2-17.Example 2-17 Displaying the Virtual I/O Server device mapping$ lsdev -slots# Slot Description Device(s)U789D.001.DQDYKYW-P1-T1 Logical I/O Slot pci4 usbhc0 usbhc1U789D.001.DQDYKYW-P1-T3 Logical I/O Slot pci3 sissas0U9117.MMA.101F170-V1-C0 Virtual I/O Slot vsa0U9117.MMA.101F170-V1-C2 Virtual I/O Slot vasi0U9117.MMA.101F170-V1-C11 Virtual I/O Slot ent2U9117.MMA.101F170-V1-C12 Virtual I/O Slot ent3U9117.MMA.101F170-V1-C13 Virtual I/O Slot ent4U9117.MMA.101F170-V1-C21 Virtual I/O Slot vhost0U9117.MMA.101F170-V1-C22 Virtual I/O Slot vhost1U9117.MMA.101F170-V1-C23 Virtual I/O Slot vhost2U9117.MMA.101F170-V1-C24 Virtual I/O Slot vhost3U9117.MMA.101F170-V1-C25 Virtual I/O Slot vhost4U9117.MMA.101F170-V1-C50 Virtual I/O Slot vhost5U9117.MMA.101F170-V1-C60 Virtual I/O Slot vhost6$ lsmap -vadapter vhost2SVSA Physloc Client Partition ID--------------- -------------------------------------------- ------------------vhost2 U9117.MMA.101F170-V1-C23 0x00000005VTD IBMi61_0Status AvailableLUN 0x8100000000000000Backing device hdisk11Physloc U789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L5000000000000VTD IBMi61_1Status AvailableLUN 0x8200000000000000Backing device hdisk12Physloc U789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L6000000000000 3. Because the IBM i client disk unit 1 is connected to Sys Card 21 and vios1 shows Ctl 1, we find the corresponding virtual target device LUN on the Virtual I/O Server as follows: Ctl 1 XOR 0x80 = 0x81. That is, LUN 0x81, which is backed by hdisk11, corresponds to the disk unit 1 of our IBM i client whose mirror side is connected to vios1. Chapter 2. Virtual storage management 39
  • 77. 4. To locate hdisk11 and its physical disk and LUN on the SAN storage system, use the lsdev command to see its kind of multipath device. After determining that it is a MPIO device, use the mpio_get_config command as shown in Example 2-18 to finally determine that our IBM i disk unit 1 corresponds to LUN 5 on our DS4800 storage subsystem. Example 2-18 Virtual I/O Server hdisk to LUN tracing $ lsdev -dev hdisk11 name status description hdisk11 Available MPIO Other DS4K Array Disk $ oem_setup_env # mpio_get_config -Av Frame id 0: Storage Subsystem worldwide name: 60ab800114632000048ed17e Controller count: 2 Partition count: 1 Partition 0: Storage Subsystem Name = ITSO_DS4800 hdisk LUN # Ownership User Label hdisk6 0 A (preferred) VIOS1 hdisk7 1 A (preferred) AIX61 hdisk8 2 B (preferred) AIX53 hdisk9 3 A (preferred) SLES10 hdisk10 4 B (preferred) RHEL52 hdisk11 5 A (preferred) IBMi61_0 hdisk12 6 B (preferred) IBMi61_1 hdisk13 7 A (preferred) IBMi61_0m hdisk14 8 B (preferred) IBMi61_1m IBM i virtual Fibre Channel disk configuration tracing Virtual Fibre Channel LUNs from NPIV-attached IBM System Storage DS8000 storage systems show up as disk units with their native device type 2107 and model Axx as configured on the DS8000. They report in under a virtual IOP/IOA device type-model 6B25-001 for the virtual Fibre Channel client adapter as can40 IBM PowerVM Virtualization Managing and Monitoring
  • 78. be seen from the following IBM i System Service Tools (SST) panel fromHardware Service Manager for Logical Hardware Resources shown inFigure 2-6. Logical Hardware Resources Associated with IOP Type options, press Enter. 2=Change detail 4=Remove 5=Display detail 6=I/O debug 7=Verify 8=Associated packaging resource(s) Resource Opt Description Type-Model Status Name Virtual IOP 6B25-001 Operational CMB09 Virtual Storage IOA 6B25-001 Operational DC04 Disk Unit 2107-A02 Operational DD002 Disk Unit 2107-A02 Operational DD003 Disk Unit 2107-A02 Operational DD004 Disk Unit 2107-A02 Operational DD012 F3=Exit F5=Refresh F6=Print F8=Include non-reporting resources F9=Failed resources F10=Non-reporting resources F11=Display serial/part numbers F12=CancelFigure 2-6 IBM i SST Logical Hardware Resources Associated with IOP Chapter 2. Virtual storage management 41
  • 79. To trace down the IBM i disk units to the corresponding DS8000 volume IDs, select F11=Display serial/part numbers to display the disk unit serial numbers as shown in Figure 2-7. The IBM i disk unit serial number 50-XXXXYYY includes the 4-digit DS8000 volume ID XXXX followed by a 3-digit suffix YYY which is by default composed of the last 3-digits from the DS8000 world-wide node name (WWNN) or, on older DS8000 machines, the default 001 or a user-defined number. Logical Hardware Resources Associated with IOP Type options, press Enter. 2=Change detail 4=Remove 5=Display detail 6=I/O debug 7=Verify 8=Associated packaging resource(s) Serial Part Opt Description Type-Model Number Number Virtual IOP 6B25-001 00-00000 Virtual Storage IOA 6B25-001 00-00000 Disk Unit 2107-A02 50-1000001 Disk Unit 2107-A02 50-1001001 Disk Unit 2107-A02 50-1100001 Disk Unit 2107-A02 50-1101001 F3=Exit F5=Refresh F6=Print F8=Include non-reporting resources F9=Failed resources F10=Non-reporting resources F11=Display logical address F12=Cancel Figure 2-7 IBM i SST Logical Hardware Resources disk unit serial numbers42 IBM PowerVM Virtualization Managing and Monitoring
  • 80. Selecting option 5=Display detail for the virtual IOA displays the IBM i virtual Fibre Channel client adapter’s world-wide port name C05076030398000E and slot number 41 we configured for the IBM i client partition on the HMC as shown in Figure 2-8. Auxiliary Storage Hardware Resource Detail Description . . . . . . . . . . . . : Virtual Storage IOA Type-model . . . . . . . . . . . . : 6B25-001 Status . . . . . . . . . . . . . . : Operational Serial number . . . . . . . . . . . : 00-00000 Part number . . . . . . . . . . . . : Resource name . . . . . . . . . . . : DC04 Port . . . . . . . . . . . . . . . : 0 Worldwide port name . . . . . . . : C05076030398000E Physical location . . . . . . . . . : U8233.E8B.061AA6P-V6-C41 SPD bus . . . . . . . . . . . . . . : System bus . . . . . . . . . . . : 255 System board . . . . . . . . . . : 128 System card . . . . . . . . . . . : 41 Storage . . . . . . . . . . . . . . : I/O adapter . . . . . . . . . . . : I/O bus . . . . . . . . . . . . . : 127 Controller . . . . . . . . . . . : More... F3=Exit F5=Refresh F6=Print F9=Change detail F11=Display additional port information F12=Cancel Figure 2-8 IBM i SST Auxiliary Storage Hardware Resource Detail With knowing the IBM i partition’s virtual Fibre Channel adapter resource location U8233.E8B.061AA6P-V6-C41, i.e. slot 41, or resource name DC04, we can trace down the corresponding physical Fibre Channel adapter with its location and WWPN (shown in the network address field) used on the Virtual I/O Server using the lsmap -all -npiv and lsdev -dev fcsX -vpd command as shown in Example 2-19.Example 2-19 Virtual I/O Server virtual to physical Fibre Channel adapter mapping$ lsmap -all -npivName Physloc ClntID ClntName ClntOS------------- ---------------------------------- ------ -------------- -------vfchost0 U8233.E8B.061AA6P-V1-C36 7 P7_2_AIX AIXStatus:LOGGED_INFC name:fcs0 FC loc code:U5802.001.0086848-P1-C2-T1Ports logged in:2Flags:a<LOGGED_IN,STRIP_MERGE> Chapter 2. Virtual storage management 43
  • 81. VFC client name:fcs0 VFC client DRC:U8233.E8B.061AB2P-V3-C36-T1Name Physloc ClntID ClntName ClntOS------------- ---------------------------------- ------ -------------- -------vfchost1 U8233.E8B.061AA6P-V1-C41 6 IBM i IBM iStatus:LOGGED_INFC name:fcs0 FC loc code:U5802.001.0086848-P1-C2-T1Ports logged in:1Flags:a<LOGGED_IN,STRIP_MERGE>VFC client name:DC04 VFC client DRC:U8233.E8B.061AA6P-V6-C41$ lsdev -dev fcs0 -vpd fcs0 U5802.001.0086848-P1-C2-T1 8Gb PCI Express Dual Port FC Adapter(df1000f114108a03) Part Number.................10N9824 Serial Number...............1B02104269 Manufacturer................001B EC Level....................D76482B Customer Card ID Number.....577D FRU Number..................10N9824 Device Specific.(ZM)........3 Network Address.............10000000C99FC71E ROS Level and ID............02781174 Device Specific.(Z0)........31004549 Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........09030909 Device Specific.(Z4)........FF781116 Device Specific.(Z5)........02781174 Device Specific.(Z6)........07731174 Device Specific.(Z7)........0B7C1174 Device Specific.(Z8)........20000000C99FC71E Device Specific.(Z9)........US1.11X4 Device Specific.(ZA)........U2D1.11X4 Device Specific.(ZB)........U3K1.11X4 Device Specific.(ZC)........00000000 Hardware Location Code......U5802.001.0086848-P1-C2-T1 PLATFORM SPECIFIC Name: fibre-channel Model: 10N9824 Node: fibre-channel@0 Device Type: fcp Physical Location: U5802.001.0086848-P1-C2-T144 IBM PowerVM Virtualization Managing and Monitoring
  • 82. To locate the virtual Fibre Channel client adapter login in the SAN switch, look atthe name server registrations on the switch as shown with the nsshow commandfor a Brocade FOS SAN switch in Example 2-20. The Pid column informationshows the Fibre Channel address in the form DDPPNN with DD=switch domainin hex, PP = switch port in hex, NN = sequential number for the physical or virtualFC port connected to the switch port in hex. We can see that the IBM i virtualFibre Channel adapter with WWPN C05076030398000E is logged in to switchdomain 01, port 00 as NPIV port number 02 of the physical adapter WWPN10000000C99FC71E.Example 2-20 Brocade SAN switch nameserver registration informationitso-aus-san-01:admin> nsshow{ Type Pid COS PortName NodeName TT N 010000; 2,3;10:00:00:00:c9:9f:c7:1e;20:00:00:00:c9:9f:c7:1e; na Fabric Port Name: 20:00:00:05:1e:02:aa:c1 Permanent Port Name: 10:00:00:00:c9:9f:c7:1e Port Index: 0 Share Area: No Device Shared in Other AD: No Redirect: No N 010001; 2,3;c0:50:76:03:03:9e:00:0a;c0:50:76:03:03:9e:00:0a; na Fabric Port Name: 20:00:00:05:1e:02:aa:c1 Permanent Port Name: 10:00:00:00:c9:9f:c7:1e Port Index: 0 Share Area: No Device Shared in Other AD: No Redirect: No N 010002; 2,3;c0:50:76:03:03:98:00:0e;c0:50:76:03:03:98:00:0e; na Fabric Port Name: 20:00:00:05:1e:02:aa:c1 Permanent Port Name: 10:00:00:00:c9:9f:c7:1e Port Index: 0 Share Area: No Device Shared in Other AD: No Redirect: No N 010100; 2,3;10:00:00:00:c9:9f:c7:1f;20:00:00:00:c9:9f:c7:1f; na Fabric Port Name: 20:01:00:05:1e:02:aa:c1 Permanent Port Name: 10:00:00:00:c9:9f:c7:1f Port Index: 1 Share Area: No Device Shared in Other AD: No Redirect: No... Chapter 2. Virtual storage management 45
  • 83. Using the lshostconnect -login command from the DS8000 DSCLI we can see that the IBM i virtual Fibre Channel adapter with WWPN C05076030398000E is logged in to DS8000 host adapter port I0201 as shown in Example 2-21. Example 2-21 DS8000 DSCLI displaying the logged in host initiators dscli> lshostconnect -login Date/Time: 17. Dezember 2010 17:23:07 CET IBM DSCLI Version: 6.5.15.19 DS: IBM.2107-75BALB1 WWNN WWPN ESSIOport LoginType Name ID ======================================================================== ... C0507603039E000A C0507603039E000A I0201 SCSI AIX_NPIV_1 000F C05076030398000E C05076030398000E I0201 SCSI IBMi_NPIV 0014 20000000C99FC3F6 10000000C99FC3F6 I0201 SCSI P7_2_vios1_1 0004 ... Linux virtual SCSI disk configuration tracing Virtual SCSI disks in a Linux client partition can be traced by following steps. 1. On a Linux client partition, use the lsscsi command to display the information about virtual SCSI disks as shown in Example 2-22. In this example, [1:0:1:1] means that sda is Host: scsi1, Channel: 00, Target: 01, and Lun: 00. Example 2-22 List of SCSI disks [root@Power7-2-RHEL ~]# lsscsi -v [1:0:1:0] disk AIX VDASD 0001 /dev/sda dir: /sys/bus/scsi/devices/1:0:1:0 [/sys/devices/vio/30000036/host1/target1:0:1/1:0:1:0] [2:0:1:0] disk AIX VDASD 0001 /dev/sdb dir: /sys/bus/scsi/devices/2:0:1:0 [/sys/devices/vio/30000037/host2/target2:0:1/2:0:1:0] [3:0:0:0] disk IBM 2107900 .278 /dev/sdc dir: /sys/bus/scsi/devices/3:0:0:0 [/sys/devices/vio/30000038/host3/rport-3:0-0/target3:0:0/3:0:0:0] [4:0:0:0] disk IBM 2107900 .278 /dev/sdd dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/vio/30000039/host4/rport-4:0-0/target4:0:0/4:0:0:0] [root@Power7-2-RHEL ~]# lsscsi -c Attached devices: Host: scsi1 Channel: 00 Target: 01 Lun: 00 Vendor: AIX Model: VDASD Rev: 0001 Type: Direct-Access ANSI SCSI revision: 03 Host: scsi2 Channel: 00 Target: 01 Lun: 00 Vendor: AIX Model: VDASD Rev: 0001 Type: Direct-Access ANSI SCSI revision: 03 Host: scsi3 Channel: 00 Target: 00 Lun: 0046 IBM PowerVM Virtualization Managing and Monitoring
  • 84. Vendor: IBM Model: 2107900 Rev: .278 Type: Direct-Access ANSI SCSI revision: 05 Host: scsi4 Channel: 00 Target: 00 Lun: 00 Vendor: IBM Model: 2107900 Rev: .278 Type: Direct-Access ANSI SCSI revision: 05 a. Example 2-23 shows how to display the information of the location code of the virtual SCSI adapter corresponding to Host: scsi1, the partition name of the Virtual I/O Server which has the corresponding virtual SCSI host adapter and the vhost name of the virtual SCSI host adapter. Example 2-23 Information of scsi1 adapter [root@Power7-2-RHEL ~]# cat /sys/class/scsi_host/host1/vhost_loc U8233.E8B.061AB2P-V5-C54-T1 [root@Power7-2-RHEL ~]# cat /sys/class/scsi_host/host1/partition_name P7_2_vios1 [root@Power7-2-RHEL ~]# cat /sys/class/scsi_host/host1/vhost_name vhost1 2. On the Virtual I/O Server, use the lsmap command to display the device mapping between physical and virtual devices as shown in Example 2-24. In our example, the backing device of vhost1 has LUN 0x81 and 0x81 corresponds Target: 01 of the sda information in step1. Therefore, the sda on the Linux client partition corresponds hdisk19 of the Virtual I/O Server.Example 2-24 Device mapping information$ lsmap -vadapter vhost1SVSA Physloc Client Partition ID--------------- -------------------------------------------- ------------------vhost1 U8233.E8B.061AB2P-V1-C54 0x00000001VTD rhel_hd19Status AvailableLUN 0x8100000000000000Backing device hdisk19Physloc U5802.001.0087356-P1-C2-T1-W500507630410412C-L4011401600000000Mirrored false For NPIV devices, type cat /sys/class/scsi_host/hostX/port_loc_code to display the physical location code of the backing physical Fibre Channel adapter where the X stands for scsi adapter number in step2. Chapter 2. Virtual storage management 47
  • 85. 2.6 Managing Shared Storage Pools Shared storage pools can contain only SAN-attached disks; any SAN subsystem can be used as long as its devices are supported by the multipath driver on the Virtual I/O Server. If you need to increase the free space in the shared storage pool, you can either add an additional physical volume or you can replace an existing volume with a bigger one. Physical disks cannot be removed from the shared storage pool. When using thin provisioned devices the total size of logical units can be larger than the size of the shared storage pool. However, the free space in the shared storage pool shrinks as the actual physical usage of the logical units grows. In this case you need to add an additional physical volume to the shared storage pool. 2.6.1 Creating the shared storage pool This section provides a description of the prerequisites and steps for creating the shared storage pool. Prerequisites These are the prerequisites at the time of writing: Virtual I/O Server version 2.2.1.3 Fix Pack 25, Service Pack 1 or newer. HMC version 7.7.4.0 or newer. The minimum storage required by your storage vendor. All SAN provisioned disks must be zoned to all Fibre Channel adapters on the Virtual I/O servers that will be members of the shared storage pool cluster. Set the Fibre Channel adapter parameters as follows: chdev -dev fscsi0 -attr dyntrk=yes -perm chdev -dev fscsi0 -attr fc_err_recov=fast_fail -perm The disks need to have the reserve policy set to “no_reserve”; a quick way to check and set this is shown in the sketch after this list. However, for repository disks, this is taken care of by the Cluster Aware AIX (CAA) layer on the Virtual I/O Server. One disk of minimum 10 GB is used as the repository disk for the cluster. At least one LUN of 10 GB or greater must be used to create the shared storage pool. TCP/IP communication is required, and host name resolution (DNS or host file based) must be set up between the members of the cluster. 48 IBM PowerVM Virtualization Managing and Monitoring
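Before creating the cluster, you can verify the disk and adapter prerequisites from the Virtual I/O Server command line. The following is a sketch only; the fscsi and hdisk names are placeholders that depend on your configuration:
$ lsdev -dev fscsi0 -attr
$ lsdev -dev hdisk2 -attr reserve_policy
$ chdev -dev hdisk2 -attr reserve_policy=no_reserve
The first command lists the Fibre Channel adapter attributes so that you can confirm dyntrk and fc_err_recov, and the last two commands check and, if necessary, change the reserve policy of a candidate shared storage pool disk.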
  • 86. Creating the cluster and adding nodesAfter all the prerequisites are met, create the cluster by issuing the clustercommand with the parameters shown in Example 2-25.Example 2-25 Creating the cluster with one node$ cluster -create -clustername ssp_cluster -repopvs hdisk1 -spname ssp_pool_1-sppvs hdisk2 hdisk3 -hostname p71vios1Cluster ssp_cluster has been created successfully. Requirements: The host name specified must have a fully qualified name, and must be resolved by DNS from all VIO nodes configured for Shared Storage Pool (SSP). For example: host p71vios1.austin.ibm.com p71vios1j.austin.ibm.com is 9.53.100.36 host 9.53.100.36 p71vios1.austin.ibm.com is 9.53.100.36In the example, the cluster is named ssp_cluster and the shared storage poolssp_pool_1. hdisk1 is the repository disk, and hdisk2 and hdisk3 are added intothe shared storage pool. Restriction: Only one cluster can be defined per Virtual I/O Server, and only one shared storage pool can be defined per cluster.Add nodes to the cluster (up to a maximum of 4) as shown in Example 2-26Example 2-26 Adding nodes to a cluster$ cluster -addnode -clustername ssp_cluster -hostname p71vios2Partition p71vios2 has been added to the ssp_cluster cluster.$ cluster -addnode -clustername ssp_cluster -hostname p72vios1Partition p72vios1 has been added to the ssp_cluster cluster.$ cluster -addnode -clustername ssp_cluster -hostname p72vios2Partition p72vios2 has been added to the ssp_cluster cluster.To check the status of the cluster and the nodes, use the cluster command asshown in Example 2-27 on page 50 Chapter 2. Virtual storage management 49
  • 87. Example 2-27 Checking the status of the cluster $ cluster -status -clustername ssp_cluster Cluster Name State ssp_cluster OK Node Name MTM Partition Num State Pool State p71vios1 8233-E8B02100EF5R 1 OK OK p71vios2 8233-E8B02100EF5R 2 OK OK p72vios1 8233-E8B02061AA6P 33 OK OK p72vios2 8233-E8B02061AA6P 34 OK OK To make sure that a cluster is defined, use the command in Example 2-28 to list the current configuration. Example 2-28 Listing the cluster information $ cluster -list Cluster Name Cluster ID ssp_cluster b2326428161811e1bc83001a64bb69482.6.2 Adding physical volumes to the shared storage pool Before you start, ensure that there are valid candidates for being part of the shared storage pool. Example 2-29 shows how to display a list of physical volumes capable of being added. Example 2-29 List of physical volumes capable of being added $ lspv -clustername ssp_cluster -capable PV NAME SIZE(MB) PVUDID hdisk3 51200 200B75BALB1101107210790003IBMfcp hdisk7 30720 200B75BALB1101507210790003IBMfcp hdisk19 51200 200B75BALB1100307210790003IBMfcp hdisk20 102400 200B75BALB1110007210790003IBMfcp hdisk21 51200 200B75BALB1110107210790003IBMfcp hdisk22 51200 200B75BALB1110207210790003IBMfcp To add a physical volume to the shared storage pool, use the chsp command as shown in Example 2-30. Example 2-30 Adding the physical volume to the shared storage pool $ chsp -add -clustername ssp_cluster -sp ssp_pool_1 hdisk7 Current request action progress: % 5 Current request action progress: % 8050 IBM PowerVM Virtualization Managing and Monitoring
  • 88. Current request action progress: % 100 To display the physical volumes in the shared storage pool, use the lspv command as shown in Example 2-31. Note: The shared storage pool can contain up to 256 physical disks. Example 2-31 A list of the physical volumes in the shared storage pool $ lspv -clustername ssp_cluster -sp ssp_pool_1 PV NAME SIZE(MB) PVUDID hdisk2 51200 3E213600A0B8000291B0800006F0707E6A2120F1815 FAStT03IBMfcp hdisk3 51200 200B75BALB1101107210790003IBMfcp hdisk4 51200 200B75BALB1101207210790003IBMfcp hdisk5 51200 200B75BALB1101307210790003IBMfcp To display the size of the shared storage pool and free space, use the lssp command as shown in Example 2-32. Example 2-32 Listing the shared storage pool $ lssp -clustername ssp_cluster Pool Size(mb) Free(mb) TotalLUSize(mb) LUs Type PoolID ssp_pool_1 459840 422716 153600 9 CLPOOL FFFFFFFFAC101564000000004ECD63412.6.3 Creating and mapping logical units Two types of logical units can be created in a shared storage pool, thin and thick. The default logical unit is thin, meaning it will only use a minimal initial space on the physical disk and it will not significantly reduce the size of the pool. In case of a thick unit the actual size of the logical unit will be allocated on the physical disks from the shared storage pool and this will be reflected when checking the size of the pool. Thick provisioning is similar to taking a slice from the storage subsystem and assigning it to a virtual server. Logical units from a shared storage pool can be assigned to AIX, IBM i and Linux partitions. Chapter 2. Virtual storage management 51
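Because a thick logical unit allocates its full size in the pool immediately, it is worth confirming that the pool has enough free space before creating one. Assuming the cluster name used in this section, a quick check is:
$ lssp -clustername ssp_cluster
Comparing the Free(mb) value before and after the operation shows how much space was actually allocated.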
  • 89. To create a logical unit use the mkbdsp command as shown in Example 2-33. To create a thick unit specify the -thick parameter. Example 2-33 Creating a thin and a thick logical unit $ mkbdsp -clustername ssp_cluster -sp ssp_pool_1 10G -bd sspdisk09 Lu Name:sspdisk09 Lu Udid:8d35def91f4b434f776bfa1131af7052 $ mkbdsp -clustername ssp_cluster -sp ssp_pool_1 10G -bd sspdisk09 -thick Lu Name:sspdisk09 Lu Udid:1702a77d0c4fbb6725e2478e76a28a81 $ lssp -clustername ssp_cluster -sp ssp_pool_1 -bd Lu Name Size(mb) ProvisionType Lu Udid sspdisk09 10240 THIN 8d35def91f4b434f776bfa1131af7052 sspdisk09 10240 THICK 1702a77d0c4fbb6725e2478e76a28a81 Tip: Logical unit names do not have to be unique, however it makes administration easier. In case of non unique names you have to use the Lu Udid to identify the logical unit. To map the logical unit use the same command as shown in Example 2-34. Notice the usage of the -luudid parameter. Example 2-34 Mapping the logical unit to a vhost adapter $ mkbdsp -clustername ssp_cluster -sp ssp_pool_1 -bd sspdisk09 -vadapter vhost0 Specified LU is not unique. Please select the LU UDID from the below list. LU Name LU UDID sspdisk09 8d35def91f4b434f776bfa1131af7052 sspdisk09 1702a77d0c4fbb6725e2478e76a28a81 $ mkbdsp -clustername ssp_cluster -sp ssp_pool_1 -luudid 8d35def91f4b434f776bfa1131af7052 -vadapter vhost0 Assigning file "8d35def91f4b434f776bfa1131af7052" as a backing device. VTD:vtscsi10 You can also create and assign the logical unit in one command as shown in Example 2-35. Example 2-35 Creating and mapping of a logical unit with one command $ mkbdsp -clustername ssp_cluster -sp ssp_pool_1 10G -bd sspdisk09 -vadapter vhost0 Lu Name:sspdisk09 Lu Udid:aceabaf43ad0a9b19bc18e29fdc27dea52 IBM PowerVM Virtualization Managing and Monitoring
  • 90. Assigning file "sspdisk09" as a backing device. VTD:vtscsi11 After mapping is complete you should discover the disk on the client partition and check that its parameters are correct. Pay particular attention to the queue_depth value. The default of 3 can safely be increased because there are usually several physical backing devices behind every virtual disk in a shared storage pool. Example 2-36 shows the default value. Example 2-36 Listing the attributes of a disk root@p71aix90 /tmp/sysdir/agents # lsattr -El hdisk1 PCM PCM/friend/vscsi Path Control Module False algorithm fail_over Algorithm True hcheck_cmd test_unit_rdy Health Check Command True hcheck_interval 0 Health Check Interval True hcheck_mode nonactive Health Check Mode True max_transfer 0x40000 Maximum TRANSFER Size True pvid 00f70ef5fbfb8bd40000 Physical volume identifier False queue_depth 3 Queue DEPTH True reserve_policy no_reserve Reserve Policy True The recommended starting value for queue_depth is 32, and it can be increased if the avg_wqsz value reported by the iostat -D command is consistently above 0. To change the value use chdev -l hdisk1 -a queue_depth=32 -P and restart the system. The value cannot be changed while the disk is in use. Remember: The SCSI limitation is 512 command elements for each vscsi adapter, out of which 2 are reserved for the adapter and 3 for each vdisk. You can use this formula to calculate how many virtual disks a vscsi adapter can support: virtual_disks=(512-2)/(Q+3) where Q equals the queue_depth of each virtual disk. For example, with a queue_depth of 32 each vscsi adapter can support (512-2)/(32+3), that is, about 14 virtual disks. 2.6.4 Tracing logical units To list the logical units in a shared storage pool, use the lssp command as shown in Example 2-37. Example 2-37 Listing the logical units in a shared storage pool $ lssp -clustername ssp_cluster -sp ssp_pool_1 -bd Lu Name Size(mb) ProvisionType Lu Udid sspdisk01 30720 THIN 198d854abebe7e965214d8360eae60fe sspdisk02 30720 THIN b64b1ad9f28fd2052f0355a4b3fd8481 Chapter 2. Virtual storage management 53
  • 91. Use lsmap to display the mapping of logical units to virtual adapters and partitions as shown in Example 2-38. Notice the name of the backing devices. The Client Partition ID is shown in hexadecimal, you need to convert it to decimal to find the corresponding id on the HMC. Example 2-38 Listing the mapping on a specific host $ lsmap -clustername ssp_cluster -hostname p71vios1 Physloc Client Partition ID ------------------------------------------------- ------------------ U8233.E8B.100EF5R-V1-C104 0x00000004 VTD vtscsi0 LUN 0x8100000000000000 Backing device sspdisk01.198d854abebe7e965214d8360eae60fe Physloc Client Partition ID ------------------------------------------------- ------------------ U8233.E8B.100EF5R-V1-C105 0x00000005 VTD vtscsi1 LUN 0x8100000000000000 Backing device sspdisk02.b64b1ad9f28fd2052f0355a4b3fd8481 Tip: You can display the mapping for all members of a cluster by changing the hostname in the command shown above. If you use the -all parameter the output will include mappings for all members. Please consider the case when the Virtual I/O Servers in the cluster are on different physical servers, the Client Partition ID can be the same. To display the detailed mapping information, use the lsmap -all command. If you just want to find the vhost adapter mapped to a partition you can filter on the Partition ID as shown in Example 2-39. Example 2-39 vhost adapters mapped to client partition 4 $ lsmap -all|grep 0x00000004 vhost1 U8233.E8B.100EF5R-V1-C104 0x00000004 vhost5 U8233.E8B.100EF5R-V1-C904 0x00000004 To get the details for the vhost adapter only use lsmap as shown in Example 2-40 Example 2-40 Mapping information of vhost1 $ lsmap -vadapter vhost1 SVSA Physloc Client Partition ID --------------- ---------------------------- ------------------ vhost1 U8233.E8B.100EF5R-V1-C104 0x0000000454 IBM PowerVM Virtualization Managing and Monitoring
  • 92. VTD vtscsi0 Status Available LUN 0x8100000000000000 Backing device sspdisk01.198d854abebe7e965214d8360eae60fe Physloc Mirrored N/A You can also get a combined view (as shown in Example 2-41) from the cfgassist menu by selecting Shared Storage Pools  Manage Logical Units in Storage Pool  List Logical Unit Maps. Example 2-41 Abstract from cfgassist menu SVSA(VHOST) Physloc Client ID VTD Name Backing dev(LU) Name -------------------------- ---------- -------- --------------- U8233.E8B.061AA6P-V33-C136 0x00000024 vtscsi0 sspdisk06 U8233.E8B.100EF5R-V1-C104 0x00000004 vtscsi0 sspdisk01 U8233.E8B.100EF5R-V2-C104 0x00000004 vtscsi0 sspdisk01 U8233.E8B.100EF5R-V1-C105 0x00000005 vtscsi1 sspdisk02 U8233.E8B.100EF5R-V2-C106 0x00000006 vtscsi1 sspdisk03 U8233.E8B.100EF5R-V1-C106 0x00000006 vtscsi2 sspdisk03 U8233.E8B.100EF5R-V2-C103 0x00000003 vtscsi2 sspdisk04 U8233.E8B.100EF5R-V1-C103 0x00000003 vtscsi3 sspdisk04 U8233.E8B.100EF5R-V2-C105 0x00000005 vtscsi3 sspdisk02 U8233.E8B.100EF5R-V1-C111 0x0000000b vtscsi4 sspdisk05 U8233.E8B.100EF5R-V2-C111 0x0000000b vtscsi4 sspdisk05 U8233.E8B.100EF5R-V1-C190 0x0000005a vtscsi7 sspdisk07 U8233.E8B.100EF5R-V2-C190 0x0000005a vtscsi7 sspdisk07 U8233.E8B.100EF5R-V1-C190 0x0000005a vtscsi9 sspdisk99 U8233.E8B.100EF5R-V1-C103 0x00000003 vtscsi10 sspdisk092.6.5 Unmapping and removing logical units To unmap a logical unit, use the rmbdsp command with -vtd option. Example 2-42 shows unmapping of sspdisk09 from vhost0. Example 2-42 Unmapping a logical unit $ rmbdsp -vtd vtscsi10 vtscsi10 deleted $ lsmap -vadapter vhost0 SVSA Physloc Client Partition ID --------------- ---------------------------- ------------------ vhost0 U8233.E8B.100EF5R-V1-C103 0x00000003 VTD vtscsi3 Chapter 2. Virtual storage management 55
  • 93. Status Available LUN 0x8100000000000000 Backing device sspdisk04.54d3f8476cda67dad4f298b4be7a5f43 Physloc Mirrored N/A Important: Ensure the virtual scsi disk is removed from the client virtual server prior to unmapping. To remove the logical unit, use the rmbdsp command as shown in Example 2-43. Example 2-43 Removing the logical unit $ rmbdsp -clustername ssp_cluster -sp ssp_pool_1 -bd sspdisk09 Logical unit sspdisk09 with udid "420b2e3c9ab070139149cba71193fb12" is removed. Remember: If there is a virtual adapter mapped to the logical unit, it will automatically be removed by the command above. The shared storage pool can have multiple logical units with the same name where the above example will not work. If you want to remove one of multiple logical units with the same name, specify the logical unit unique device identifier (udid) as shown in Example 2-44. Example 2-44 Remove the logical unit specified by the luudid LU Name LU UDID sspdisk09 baa5d06f3b1563eca389e53f754ccfba sspdisk09 eb7131ee20811b2d7a839a212357cfae $ rmbdsp -clustername ssp_cluster -sp ssp_pool_1 -luudid eb7131ee20811b2d7a839a212357cfae Logical unit with udid "eb7131ee20811b2d7a839a212357cfae" is removed.56 IBM PowerVM Virtualization Managing and Monitoring
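When several Virtual I/O Servers share the pool, it is good practice, before running the rmbdsp commands shown above, to confirm that no node in the cluster still has a virtual target device mapped to the logical unit. A quick check, assuming the cluster name used in this section and the logical unit name sspdisk09, is:
$ lsmap -clustername ssp_cluster -all | grep sspdisk09
If nothing is returned, the logical unit is no longer mapped on any node and can be removed safely.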
  • 94. 2.6.6 Managing VLAN tagging Cluster Aware AIX (CAA) is designed so that the network interfaces configured for cluster communication cannot be on tagged VLANs. The following are your options when working with this design. Generally, use option 1. 1. A physical adapter used in a SEA configuration can be used by both the Virtual I/O Server (VIOS) and the client LPARs. The VIOS clients can be on tagged VLANs. To elaborate, for cluster communication between the VIO server nodes in the cluster, the IP address should be configured on the SEA interface. It should not be on a VLAN device created from any additional VLAN id of the SEA. However, the SEA can have VLAN tagging configured with additional VLAN ids added to the SEA. Also, the client LPARs can have the IP configured on any of these VLAN device interfaces. 2. Assign multiple adapters to the VIOS, and dedicate one adapter to cluster communication that is not in a tagged VLAN. Dedicate the other adapter to clients. 3. Configure a pair of VIOSs for storage and another pair for networking. 2.7 Replacing a disk on the Virtual I/O Server If it becomes necessary to replace a disk on the Virtual I/O Server, you must first identify the virtual I/O clients affected and the target disk drive. Important: Before replacing a disk device using this procedure, make sure that the disk can be hot-swapped. If you run in a single Virtual I/O Server environment without disk mirroring on the virtual I/O clients, replacing a non-RAID protected physical disk requires data to be restored. This also applies if you have the same disk exported through two Virtual I/O Servers using MPIO. MPIO by itself does not protect against outages due to disk replacement. You should evaluate protecting the data on a disk or LUN using either mirroring or RAID technology. This section covers the following disk replacement procedures in a dual Virtual I/O Server environment using software mirroring on the client: 2.7.1, “Replacing an LV-backed disk in the mirroring environment” on page 58 2.7.2, “Replacing a mirrored storage pool-backed disk” on page 63 Chapter 2. Virtual storage management 57
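Before starting either procedure, capture the configuration that will have to be recreated after the disk is replaced. A minimal sketch, using the hdisk, vhost, and logical volume names from the examples that follow:
$ lspv
$ lsmap -vadapter vhost0
$ lslv vioc_1_rootvg
The lspv output shows which volume group or storage pool the failing disk belongs to, lsmap shows the virtual target device and backing device served to the client, and lslv reports the number of logical partitions (LPs) and the physical partition size (PP SIZE) needed to recreate a backing device of exactly the original size.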
  • 95. 2.7.1 Replacing an LV-backed disk in the mirroring environment In the logical volume (LV)-backed mirroring scenario, we want to replace hdisk2 on the Virtual I/O Server, which is part of its volume group vioc_rootvg_1, and which contains the LV vioc_1_rootvg mapped to the virtual target device vtscsi0. It has the following attributes: The size is 32 GB. The virtual disk is software mirrored on the virtual I/O client. The failing disk on the AIX virtual I/O client is hdisk1. The virtual SCSI adapter on the virtual I/O client is vscsi1. The volume group on the AIX virtual I/O client is rootvg. Figure 2-9 shows the setup using an AIX virtual I/O client as an example. However, the following replacement procedure covers an AIX client, an IBM i client, and a Linux client. VSCSI VSCSI VSCSI VSCSI VIOS 1 VIOS 2 hdisk0 hdisk1 Logical volume Logical volume "vioc_rootvg_1" "vioc_rootvg_1" hdisk2 LVM hdisk2 Mirroring rootvg Client PartitionFigure 2-9 AIX LVM mirroring environment with LV-backed virtual disks Check the state of the client disk first for an indication that a disk replacement might be required: On the AIX client, use the lsvg -pv volumegroup command to check whether the PV STATE information shows missing. On the IBM i client, enter the STRSST command and log in to System Service Tools (SST) selecting the options 3. Work with disk units  1. Display disk configuration  1. Display disk configuration status to check if a mirrored disk unit shows suspended. On the Linux client, type cat /proc/mdstat to check the disk status. For more details, see Section 5.6.4 Linux client mirroring of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.58 IBM PowerVM Virtualization Managing and Monitoring
  • 96. Important: Before replacing the disk, document the virtual I/O client, logical volume (LV), backing devices, vhost and vtscsi associated, and the size of the LV mapped to the vtscsi device. See 2.5, “Mapping LUNs over vSCSI to hdisks” on page 27 for more information about managing this.Procedure to replace a physical disk on the Virtual I/O ServerTo replace the physical disk on the Virtual I/O Server, perform these steps:1. Identify the physical disk drive with the diagmenu command.2. Select Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.). In the next list, select Hot Plug Task.3. In this list, select SCSI and SCSI RAID Hot Plug Manager and select Identify a Device Attached to a SCSI Hot Swap Enclosure Device.4. In the next list, select the hdisk and press Enter. A window similar to the one shown in Example 2-45 opens. Note that this is an example from a p5-570 with internal disks. Example 2-45 Find the disk to remove IDENTIFY DEVICE ATTACHED TO SCSI HOT SWAP ENCLOSURE DEVICE 802483 The following is a list of devices attached to SCSI Hot Swap Enclosure devices. Selecting a slot will set the LED indicator to Identify. Make selection, use Enter to continue. [MORE...4] slot 3+------------------------------------------------------+ slot 4¦ ¦ slot 5¦ ¦ slot 6¦ The LED should be in the Identify state for the ¦ ¦ selected device. ¦ ¦ ¦ ses1 ¦ Use Enter to put the device LED in the ¦ slot 1¦ Normal state and return to the previous menu. ¦ slot 2¦ ¦ slot 3¦ ¦ slot 4¦ ¦ slot 5¦ ¦ slot 6¦ ¦ [BOTTOM] ¦ ¦ ¦ F3=Cancel F10=Exit Enter ¦ F1=Help +------------------------------------------------------+ Chapter 2. Virtual storage management 59
  • 97. 5. For an AIX virtual I/O client: a. Unmirror the rootvg, as follows: # unmirrorvg -c 1 rootvg hdisk1 0516-1246 rmlvcopy: If hd5 is the boot logical volume, please run chpv -c <diskname> as root user to clear the boot record and avoid a potential boot off an old boot image that may reside on the disk from which this logical volume is moved/removed. 0301-108 mkboot: Unable to read file blocks. Return code: -1 0516-1132 unmirrorvg: Quorum requirement turned on, reboot system for this to take effect for rootvg. 0516-1144 unmirrorvg: rootvg successfully unmirrored, user should perform bosboot of system to reinitialize boot records. Then, user must modify bootlist to just include: hdisk0. b. Reduce the AIX client rootvg: # reducevg rootvg hdisk1 c. Remove hdisk1 device from the AIX client configuration: # rmdev -l hdisk1 -d hdisk1 deleted 6. On the Virtual I/O Server, remove the vtscsi/vhost association: $ rmdev -dev vtscsi0 vtscsi0 deleted 7. On the Virtual I/O Server, reduce the volume group. If you get an error, as in the following example, and you are sure that you have only one hdisk per volume group, you can use the deactivatevg and exportvg commands. Important: If you use the exportvg command, it will delete all the logical volumes inside the volume group’s ODM definition. If your volume group contains more than one hdisk, the logical volumes on this hdisk are also affected. Use the lspv command to check. In this case, it is safe to use the exportvg vioc_rootvg_1 command: $ lspv NAME PVID VG STATUS hdisk0 00c478de00655246 rootvg active hdisk1 00c478de008a399b rootvg active hdisk2 00c478de008a3ba1 vioc_rootvg_1 active hdisk3 00c478deb4b0d4b0 None60 IBM PowerVM Virtualization Managing and Monitoring
  • 98. $ reducevg -rmlv -f vioc_rootvg_1 hdisk2 Some error messages may contain invalid information for the Virtual I/O Server environment. 0516-062 lqueryvg: Unable to read or write logical volume manager record. PV may be permanently corrupted. Run diagnostics 0516-882 reducevg: Unable to reduce volume group. $ deactivatevg vioc_rootvg_1 Some error messages may contain invalid information for the Virtual I/O Server environment. 0516-062 lqueryvg: Unable to read or write logical volume manager record. PV may be permanently corrupted. Run diagnostics $ exportvg vioc_rootvg_18. On the Virtual I/O Server, remove the hdisk device: $ rmdev -dev hdisk2 hdisk2 deleted9. Replace the physical disk drive.10.On the Virtual I/O Server, configure the new hdisk device with the cfgdev command and check the configuration using the lspv command to determine that the new disk is configured: $ cfgdev $ lspv NAME PVID VG STATUS hdisk2 none None hdisk0 00c478de00655246 rootvg active hdisk1 00c478de008a399b rootvg active hdisk3 00c478deb4b0d4b0 None11.On the Virtual I/O Server, extend the volume group with the new hdisk using the mkvg command if you only have one disk per volume group. Use the extendvg command if you have more disks per volume group. Here, we have only one volume group per disk. If the disk has a PVID, use the -f flag on the mkvg command. $ mkvg -vg vioc_rootvg_1 hdisk2 vioc_rootvg_1 0516-1254 mkvg: Changing the PVID in the ODM. Tip[: As an alternative to the manual disk replacement Steps 7 - 11, on the Virtual I/O Server you can use the replphyvol command. Chapter 2. Virtual storage management 61
  • 99. 12.On the Virtual I/O Server, recreate the logical volume of exactly the original size for the vtscsi device: $ mklv -lv vioc_1_rootvg vioc_rootvg_1 32G vioc_1_rootvg Remember: For an IBM i client partition, do not attempt to determine the size for the virtual target device from the IBM i client partition because due to the 8-to-9 sector conversion (520-byte sectors on IBM i versus 512-byte sectors on the Virtual I/O Server), the IBM i client shows less capacity for the disk unit than what actually needs to be configured on the Virtual I/O Server. If you did not record the size for the LV before the Virtual I/O Server disk failure, determine the size from the output of the lslv logicalvolume command on the Virtual I/O Server of the active mirror side by multiplying the logical partitions (LPs) with the physical partition size (PP SIZE) information. 13.On the Virtual I/O Server, check that the LV does not span disks: $ lslv -pv vioc_1_rootvg vioc_1_rootvg:N/A PV COPIES IN BAND DISTRIBUTION hdisk2 512:000:000 42% 000:218:218:076:000 14.On the Virtual I/O Server, recreate the virtual device: $ mkvdev -vdev vioc_1_rootvg -vadapter vhost0 vtscsi0 Available 15.For an AIX virtual I/O client: a. Reconfigure the new hdisk1. If the parent device is unknown, then you can execute the cfgmgr command without any parameters: # cfgmgr -l vscsi1 b. Extend the rootvg: # extendvg rootvg hdisk1 0516-1254 extendvg: Changing the PVID in the ODM. c. Mirror the rootvg again: # mirrorvg -c 2 rootvg hdisk1 0516-1124 mirrorvg: Quorum requirement turned off, reboot system for this to take effect for rootvg. 0516-1126 mirrorvg: rootvg successfully mirrored, user should perform bosboot of system to initialize boot records. Then, user must modify bootlist to include: hdisk0 hdisk1.62 IBM PowerVM Virtualization Managing and Monitoring
  • 100. d. Initialize boot records and set the bootlist: # bosboot -a bosboot: Boot image is 18036 512 byte blocks. # bootlist -m normal hdisk0 hdisk1 16.For an IBM i client, verify in SST that the previously suspended disk unit automatically changed its mirrored state to resuming and finally active when the mirror re-synchronization completed. 17.For a Linux client, see Section 5.6.4, Linux client mirroring in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 to recover the mirroring.2.7.2 Replacing a mirrored storage pool-backed disk In the storage pool backed mirroring scenario, we want to replace hdisk2 on the Virtual I/O Server in the storage pool vioc_rootvg_1, which contains the backing device vioc_1_rootvg associated to vhost0. It has the following attributes: The size is 32 GB. The virtual disk is software-mirrored on the virtual I/O client. The volume group on the AIX virtual I/O client is rootvg. Tip: The storage pool and backing device concept on the Virtual I/O Server is similar to the volume group and logical volume concept from AIX, but it hides some of its complexity. Figure 2-10 shows the setup using an AIX virtual I/O client as an example. However, the following replacement procedure covers an AIX client, an IBM i client, and a Linux client. VSCSI VSCSI VSCSI VSCSI VIOS 1 VIOS 2 hdisk0 hdisk1 Backing device Backing device "vioc_1_rootvg" "vioc_1_rootvg" Storage pool Storage pool "vioc_rootvg_1" hdisk2 “vioc_rootvg_1” hdisk2 LVM Mirroring rootvg Client PartitionFigure 2-10 AIX LVM mirroring with storage pool-backed virtual disks Chapter 2. Virtual storage management 63
  • 101. Check the state of the client disk first for an indication that a disk replacement might be required: On the AIX client, use the lsvg -pv volumegroup command to check if the PV STATE information shows missing. On the IBM i client, enter the STRSST command and log in to System Service Tools (SST). Select the options 3. Work with disk units  1. Display disk configuration  1. Display disk configuration status to check if a mirrored disk unit shows suspended. On the Linux client, type cat /proc/mdstat to check the disk status. For more detail, see Section 5.6.4 Linux client mirroring of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. Important: Before replacing the disk, document the virtual I/O client, logical volume (LV), backing devices, vhost and vtscsi associated, and the size of the backing device. See 2.5, “Mapping LUNs over vSCSI to hdisks” on page 27 for more information about managing this. Procedure to replace a physical disk on the Virtual I/O Server To replace a physical disk on the Virtual I/O Server, perform these steps: 1. Identify the physical disk drive: see Step 1 on page 59. 2. For an AIX virtual I/O client: a. Unmirror the rootvg as follows: # unmirrorvg -c 1 rootvg hdisk1 0516-1246 rmlvcopy: If hd5 is the boot logical volume, please run chpv -c <diskname> as root user to clear the boot record and avoid a potential boot off an old boot image that may reside on the disk from which this logical volume is moved/removed. 0301-108 mkboot: Unable to read file blocks. Return code: -1 0516-1132 unmirrorvg: Quorum requirement turned on, reboot system for this to take effect for rootvg. 0516-1144 unmirrorvg: rootvg successfully unmirrored, user should perform bosboot of system to reinitialize boot records. Then, user must modify bootlist to just include: hdisk0. b. Reduce the AIX client rootvg: # reducevg rootvg hdisk1 c. Remove hdisk1 device from the AIX client configuration: # rmdev -l hdisk1 -d hdisk1 deleted64 IBM PowerVM Virtualization Managing and Monitoring
  • 102. 3. On the Virtual I/O Server, remove the backing device: $ rmbdsp -bd vioc_1_rootvg vtscsi0 deleted4. Remove the disk from the disk pool. If you receive an error message, such as in the following example, and you are sure that you have only one hdisk per storage pool, you can use the deactivatevg and exportvg commands. Important: If you use exportvg, it will delete all the logical volumes inside the volume group. If your volume group contains more than one hdisk, the logical volumes on this hdisk are also affected. Use lspv to check. In this case, it is safe to use exportvg vioc_rootvg_1: $ lspv NAME PVID VG STATUS hdisk0 00c478de00655246 rootvg active hdisk1 00c478de008a399b rootvg active hdisk2 00c478de008a3ba1 vioc_rootvg_1 active hdisk3 00c478deb4b0d4b0 None $ chsp -rm -f -sp vioc_rootvg_1 hdisk2 Some error messages may contain invalid information for the Virtual I/O Server environment. 0516-062 lqueryvg: Unable to read or write logical volume manager record. PV may be permanently corrupted. Run diagnostics 0516-882 reducevg: Unable to reduce volume group. $ deactivatevg vioc_rootvg_1 Some error messages may contain invalid information for the Virtual I/O Server environment. 0516-062 lqueryvg: Unable to read or write logical volume manager record. PV may be permanently corrupted. Run diagnostics $ exportvg vioc_rootvg_15. On the Virtual I/O Server, remove the hdisk device: $ rmdev -dev hdisk2 hdisk2 deleted6. Replace the physical disk drive. Chapter 2. Virtual storage management 65
  • 103. 7. On the Virtual I/O Server, configure the new hdisk device using cfgdev and check the configuration using lspv to determine that the new disk is configured: $ cfgdev $ lspv NAME PVID VG STATUS hdisk2 none None hdisk0 00c478de00655246 rootvg active hdisk1 00c478de008a399b rootvg active hdisk3 00c478deb4b0d4b0 None 8. On the Virtual I/O Server, add the hdisk to the storage pool using the chsp command when you have more than one disk per storage pool. If you have only one hdisk per storage pool, use the mksp command: $ mksp vioc_rootvg_1 hdisk2 vioc_rootvg_1 0516-1254 mkvg: Changing the PVID in the ODM. Tip: As an alternative to the manual disk replacement steps 4 to 7, on the Virtual I/O Server you may use the replphyvol command. 9. On the Virtual I/O Server, recreate the backing device of exactly the original size and attach it to the virtual device: $ mkbdsp -sp vioc_rootvg_1 32G -bd vioc_1_rootvg -vadapter vhost0 Creating logical volume "vioc_1_rootvg" in storage pool "vioc_rootvg_1". vtscsi0 Available vioc_1_rootvg Remember: For an IBM i client partition, do not attempt to determine the size for the virtual target backing device from the IBM i client partition because, due to the 8-to-9 sector conversion (520-byte sectors on IBM i versus 512-byte sectors on the Virtual I/O Server), the IBM i client shows less capacity for the disk unit than what actually needs to be configured on the Virtual I/O Server. If you did not record the size for the backing device before the Virtual I/O Server disk failure, determine the size from the output of lslv backingdevice on the Virtual I/O Server of the active mirror side by multiplying the logical partitions (LPs) with the physical partition size (PP SIZE) information. 10.On the Virtual I/O Server, check that the backing device does not span disks in the storage pool. Here, we have only one hdisk per storage pool. 66 IBM PowerVM Virtualization Managing and Monitoring
  • 104. 11.For an AIX virtual I/O client: a. Reconfigure the new hdisk1. If the parent device is unknown, type cfgmgr without any parameters: # cfgmgr -l vscsi1 b. Extend the rootvg: # extendvg rootvg hdisk1 0516-1254 extendvg: Changing the PVID in the ODM. c. Re-establish mirroring of the AIX client rootvg: # mirrorvg -c 2 rootvg hdisk1 0516-1124 mirrorvg: Quorum requirement turned off, reboot system for this to take effect for rootvg. 0516-1126 mirrorvg: rootvg successfully mirrored, user should perform bosboot of system to initialize boot records. Then, user must modify bootlist to include: hdisk0 hdisk1. d. Initialize the AIX boot record and set the bootlist: # bosboot -a bosboot: Boot image is 18036 512 byte blocks. # bootlist -m normal hdisk0 hdisk1 12.For an IBM i client, verify in SST that the previously suspended disk unit automatically changed its mirrored state to resuming and finally to active when the mirror re-synchronization completed. 13.For a Linux client, see Section 5.6.4 Linux client mirroring of IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 to recover the mirroring.2.7.3 Replacing a disk in the shared storage pool You can not remove a physical disk from a shared storage pool, however you can replace it with another of the same or bigger size. After replacing it the device will be removed from the shared storage pool and can be removed from the Virtual I/O Server. To replace a disk use the chsp command shown in Example 2-46. Example 2-46 Replacing a disk in the shared storage pool $ chsp -replace -clustername ssp_cluster -sp ssp_pool_1 -oldpv hdisk7 -newpv hdisk10 Current request action progress: % 5 Current request action progress: % 20 Current request action progress: % 40 Chapter 2. Virtual storage management 67
  • 105. Current request action progress: % 60 Current request action progress: % 80 Current request action progress: % 1002.8 Managing multiple storage security zones Remember: Security in a virtual environment depends on the integrity of the Hardware Management Console and the Virtual I/O Server. Access to the HMC and Virtual I/O Server must be closely monitored because they are able to modify existing storage assignments and establish new storage assignments on virtual servers within the managed systems. When you plan for multiple storage security zones in a SAN environment, study the enterprise security policy for the SAN environment and the current SAN configuration. If separate security zones or disk subsystems share SAN switches, the virtual SCSI devices can share the HBAs because the hypervisor firmware acts in a manner similar to a SAN switch. If a LUN is assigned to a partition by the Virtual I/O Server, it cannot be used or seen by any other partitions. The hypervisor is designed in such a way that no operation within a client partition can gain control of or use a shared resource that is not assigned to the client partition. When you assign a LUN in the SAN environment for the different partitions, remember that the zoning is done by the Virtual I/O Server. Therefore, in the SAN environment, assign all the LUNs to the HBAs used by the Virtual I/O68 IBM PowerVM Virtualization Managing and Monitoring
  • 106. Server. The Virtual I/O Server assigns the LUNs (hdisk) to the virtual SCSIserver adapters (vhost) that are associated to the virtual SCSI client adapters(vscsi) used by the partitions. See Figure 2-11.Figure 2-11 Create virtual SCSIIf accounting or security audits are made from the LUN assignment list, you willnot see the true owners of LUNs because all the LUNs are assigned to the sameHBA. This might cause audit remarks.You can produce the same kind of list from the Virtual I/O Server using lsmap.However, if it is a business requirement that the LUN mapping be at the storagelist, then you must use different HBA pairs for each account and security zone.You can still use the same Virtual I/O Server because this will not affect thesecurity policy.If it is a security requirement, and not a hardware issue, that security zones ordisk subsystems do not share SAN switches, you cannot share an HBA. In thiscase, you cannot use multiple Virtual I/O Servers to virtualize the LUN in onemanaged system because the hypervisor firmware will act as a single SANswitch. Chapter 2. Virtual storage management 69
  • 107. Exception: The discussion of this section is only applied to the environment using virtual SCSI devices. If you use N_Port ID Virtualization (NPIV) in a Virtual I/O Server, NPIV devices share the physical Fibre Channel adapters but have individual virtual Fibre Channel adapters. Therefore, the security zone and the LUN mapping can be configured for each virtual Fibre Channel adapters similar to the environment in which client partitions have physical Fibre Channel adapters.2.9 Storage planning with migration in mind Managing storage resources during an LPAR move can be more complex than managing network resources. Careful planning is required to ensure that the storage resources belonging to an LPAR are in place on the target system. This section assumes that you have fairly good knowledge of PowerVM Live Partition Mobility. For more details about that subject, see the IBM Redbooks publication IBM PowerVM Live Partition Mobility, SG24-7460.2.9.1 Virtual adapter slot numbers Virtual SCSI and virtual Fibre Channel adapters are tracked by slot number and partition ID on both the Virtual I/O Server and client. The number of virtual adapters in a Virtual I/O Server must equal the sum of the virtual adapters in a70 IBM PowerVM Virtualization Managing and Monitoring
  • 108. client partition that it serves. The Virtual I/O Server vhost or vfchost adapter slot numbers are not required to match the client partition vscsi or fscsi slot numbers as shown for virtual SCSI in Figure 2-12.Figure 2-12 Slot numbers that are identical in the source and target system You can apply any numbering scheme as long as server-client adapter pairs match. To avoid interchanging types and slot numbers, reserve a range of slot numbers for each type of virtual adapter. This is also important when partitions are moved between systems. Important: Do not increase the maximum number of adapters for a partition beyond 1024. Chapter 2. Virtual storage management 71
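Each side of a virtual SCSI or virtual Fibre Channel pair reports its own slot number as the -Cnn part of its location code, so the pairing can be checked from the command line. A sketch, assuming the vscsi1 and vhost1 device names used elsewhere in this chapter:
# lscfg -l vscsi1           (on the client partition)
$ lsdev -dev vhost1 -vpd    (on the Virtual I/O Server)
The connection itself, that is, which server slot a client adapter points to, is defined in the partition profile, so verify it on the HMC if the two sides do not match your records.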
  • 109. 2.9.2 SAN considerations for LPAR migration All storage associated with a partition to be moved should be hosted on LUNs in a Fibre Channel SAN and exported as physical volumes in the Virtual I/O Server. The LUNs used by the LPAR to be moved must be visible to both the source and target Virtual I/O Servers. This can involve zoning, LUN masking, or other configuration of the SAN fabric and storage devices, which is beyond the scope of this document. If multipath software is employed in the Virtual I/O Servers on the source system, the same multipath software must be in place on the target Virtual I/O Servers. For example, if SDDPCM (or SDD or Powerpath) is in use on the source server, the same level must also be in use on the target server. Storage should not be migrated between multipath environments during an LPAR move because this might affect the visibility of the unique tag on the disk devices. Another important consideration is whether to allow concurrent access to the LUNs used by the LPAR. By default, the Virtual I/O Server acquires a SCSI reserve on a LUN when its hdisk is configured as a virtual SCSI target device. This means that only one Virtual I/O Server can export the LUN to an LPAR at a time. The SCSI reserve prevents the LPAR from being started on multiple systems at once, which could lead to data corruption. The SCSI reserve does make the move process more complicated, because configuration is required on both the source and target Virtual I/O Servers between the time that the LPAR is shut down on the source and activated on the target server. Turning off the SCSI reserve on the hdisk devices associated with the LUNs makes it possible to move the LPAR with no configuration changes on the Virtual I/O Servers during the move. However, it raises the possibility of data corruption if the LPAR is accidentally activated on both servers concurrently. Important: To eliminate the possibility of booting the operating system on two servers concurrently, which could result in data corruption, leave the SCSI reserve active on the hdisks in rootvg. Query the SCSI reserve setting with the lsdev command and modify it with the chdev command on the Virtual I/O Server. The exact setting can differ for various types of storage. The setting for LUNs on IBM storage servers should not be no_reserve. $ lsdev -dev hdisk7 -attr reserve_policy value72 IBM PowerVM Virtualization Managing and Monitoring
  • 110. no_reserve $ chdev -dev hdisk7 -attr reserve_policy=single_path hdisk7 changed $ lsdev -dev hdisk7 -attr reserve_policy value single_path Consult the documentation from your storage vendor for the reserve setting on other types of storage. Remember: In a dual Virtual I/O Server configuration, both servers must have access to the same LUNs. In this case, the reserve policy must be set to no_reserve on the LUNs on both Virtual I/O Servers. In situations where the LPAR normally participates in concurrent data access, such as an IBM GPFS™ cluster, the SCSI reserve should remain deactivated on hdisks that are concurrently accessed. These hdisks should be in separate volume groups, and the reserve should be active on all hdisks in rootvg to prevent concurrent booting of the partition. If the storage of the virtual servers targeted for live partition mobility is provisioned solely from a shared storage pool LPM operations will work between the Virtual I/O servers included in the shared storage pool cluster without any additional configuration.2.9.3 Backing devices and virtual target devices The source and destination partitions must have access to the same backing devices from the Virtual I/O Servers on the source and destination system. Each backing device must have a corresponding virtual target device. The virtual target device refers to a SCSI target for the backing disk or LUN. The destination server is the system to which the partition is moving. Tip: Fibre Channel LUNs might have different hdisk device numbers on the source and destination Virtual I/O Server. The hdisk device numbers increment as new devices are discovered, so the order of attachment and number of other devices can influence the hdisk numbers assigned. Use the WWPN and LUN number in the device physical location to map corresponding hdisk numbers on the source and destination partitions. Use the lsmap command on the source Virtual I/O Server to list the virtual target devices that must be created on the destination Virtual I/O Server and Chapter 2. Virtual storage management 73
  • 111. corresponding backing devices. If the vhost adapter numbers for the source Virtual I/O Server are known, run lsmap with the -vadapter flag for the adapter or adapters. Otherwise, run lsmap with the -all flag, and any virtual adapters attached to the source partition should be noted. The following listing is for an IBM System Storage DS4000 series device: $ lsmap -all SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost0 U9117.570.107CD9E-V1-C4 0x00000007 VTD lpar07_rootvg LUN 0x8100000000000000 Backing device hdisk5 Physloc U7879.001.DQD186N-P1-C3-T1-W200400A0B8110D0F-L0 The Physloc identifier for each backing device on the source Virtual I/O Server can be used to identify the appropriate hdisk device on the destination Virtual I/O Server from the output of the lsdev -vpd command. In certain cases with multipath I/O and multi-controller storage servers, the Physloc string can vary by a few characters, depending on which path or controller is in use on the source and destination Virtual I/O Server. $ lsdev -vpd -dev hdisk4 hdisk4 U787A.001.DNZ00XY-P1-C5-T1-W200500A0B8110D0F-L0 3542 (20 0) Disk Array Device PLATFORM SPECIFIC Name: disk Node: disk Device Type: block Make a note of the hdisk device on the destination Virtual I/O Server that corresponds to each backing device on the source Virtual I/O Server.2.10 Managing N_Port ID virtualization N_Port ID Virtualization (NPIV) is an industry standard technology for virtualizing physical Fibre Channel adapters to be shared by multiple partitions. The following sections describe how to manage NPIV in a Virtual I/O Server environment.74 IBM PowerVM Virtualization Managing and Monitoring
  • 112. For more details about NPIV and its configuration for IBM Power Systems Virtual I/O Server client partitions see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940 available at: http://www.redbooks.ibm.com/abstracts/sg247940.html?Open2.10.1 Managing virtual Fibre Channel adapters This section provides management information for servers managed by an HMC and for servers managed by IVM. Virtual Fibre Channel for HMC-managed systems On HMC-managed systems, you can dynamically add and remove virtual Fibre Channel adapters to and from the Virtual I/O Server partition and each virtual I/O client partition. You can also view information about the virtual and physical Fibre Channel adapters and the WWPNs by using Virtual I/O Server commands. To enable NPIV on the managed system, you create the required virtual Fibre Channel adapters and connections as follows: 1. You use the HMC to create virtual Fibre Channel server adapters on the Virtual I/O Server partition and associate them with virtual Fibre Channel client adapters on the virtual I/O client partitions. 2. On the HMC, you create virtual Fibre Channel client adapters on each virtual I/O client partition and associate them with virtual Fibre Channel server adapters on the Virtual I/O Server partition. When you create a virtual Fibre Channel client adapter on a client logical partition, the HMC generates a pair of unique WWPNs for the virtual Fibre Channel client adapter. 3. Then you map the virtual Fibre Channel server adapters on the Virtual I/O Server to the physical port of the physical Fibre Channel adapter by running the vfcmap command on the Virtual I/O Server. The IBM POWER Hypervisor™ generates WWPNs based on the range of names available for use with the prefix in the vital product data on the managed system. This 6-digit prefix comes with the purchase of the managed system and includes 32,000 pairs of WWPNs. When you delete a virtual Fibre Channel client adapter from a virtual I/O client partition, the hypervisor does not reuse the WWPNs that are assigned to the virtual Fibre Channel client adapter on the client logical partition. Remember: The POWER hypervisor does not reuse the deleted WWPNs when generating WWPNs for virtual Fibre Channel adapters in the future. If you run out of WWPNs, you must obtain an activation code that includes another prefix with another 32,000 pairs of WWPNs. Chapter 2. Virtual storage management 75
  • 113. Tip: For more information about how to obtain the activation code, contact your IBM sales representative or your IBM Business Partner representative. Virtual Fibre Channel for IVM-managed systems On systems that are managed by the Integrated Virtualization Manager (IVM), you can dynamically change the physical ports that are assigned to a logical partition. You can also dynamically change the logical partitions that are assigned to a physical port. Furthermore, you can view information about the virtual and physical Fibre Channel adapters and the WWPNs. To use NPIV on the managed system, you assign logical partitions directly to the physical ports of the physical Fibre Channel adapters. You can assign multiple logical partitions to one physical port. When you assign a logical partition to a physical port, the IVM automatically creates the following connections: The IVM creates a virtual Fibre Channel server adapter on the management partition and associates it with the virtual Fibre Channel adapter on the logical partition. The IVM generates a pair of unique WWPNs and creates a virtual Fibre Channel client adapter on the logical partition. The IVM assigns the WWPNs to the virtual Fibre Channel client adapter on the logical partition, and associates the virtual Fibre Channel client adapter on the logical partition with the virtual Fibre Channel server adapter on the management partition. The IVM connects the virtual Fibre Channel server adapter on the management partition to the physical port on the physical Fibre Channel adapter. The IVM generates WWPNs based on the range of names available for use with the prefix in the vital product data on the managed system. This 6–digit prefix comes with the purchase of the managed system and includes 32,000 pairs of WWPNs. When you remove the connection between a logical partition and a physical port, the hypervisor deletes the WWPNs that are assigned to the virtual Fibre Channel client adapter on the logical partition. Remember: The IVM does not reuse the deleted WWPNs when generating WWPNs for virtual Fibre Channel client adapters in the future. If you run out of WWPNs, you must obtain an activation code that includes another prefix with 32,000 pairs of WWPNs.76 IBM PowerVM Virtualization Managing and Monitoring
  • 114. Tip: For more information about how to obtain the activation code, contact your IBM sales representative or your IBM Business Partner representative.2.10.2 Replacing a Fibre Channel adapter configured with NPIV This section shows a procedure to deactivate and remove a NPIV Fibre Channel adapter. This procedure can be used for removing or replacing such adapters. Example 2-47 illustrates how to remove the adapter in the Virtual I/O Server. The adapter must be unconfigured or removed from the operating system before it can be physically removed: First identify the adapter to be removed. For a dual port card, both ports must be removed. In the Virtual I/O Server, the mappings must be unconfigured. The Fibre Channel adapters and their child devices must be unconfigured or deleted. If deleted, they are recovered with the cfgdev command for the Virtual I/O Server or the cfgmgr command in AIX. The adapter can then be removed using the diagmenu command in the Virtual I/O Server or the diag command in AIX. Example 2-47 Removing a NPIV Fibre Channel adapter in the Virtual I/O Server $ lsdev -dev fcs4 -child name status description fcnet4 Defined Fibre Channel Network Protocol Device fscsi4 Available FC SCSI I/O Controller Protocol Device $ lsdev -dev fcs5 -child name status description fcnet5 Defined Fibre Channel Network Protocol Device fscsi5 Available FC SCSI I/O Controller Protocol Device $ rmdev -dev vfchost0 -ucfg vfchost0 Defined $ rmdev -dev vfchost1 -ucfg vfchost1 Defined $ rmdev -dev fcs4 -recursive -ucfg fscsi4 Defined fcnet4 Defined fcs4 Defined $ rmdev -dev fcs5 -recursive -ucfg fscsi5 Defined Chapter 2. Virtual storage management 77
  • 115. fcnet5 Defined fcs5 Defined diagmenu In the DIAGNOSTIC OPERATING INSTRUCTIONS menu, press Enter and select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Replace/Remove a PCI Hot Plug Adapter. Select the adapter to be removed and follow the instructions on the panel. Important: When replacing a physical NPIV adapter in the Virtual I/O Server, the virtual WWPNs are retained and no new mappings, zoning, or LUN assignments need to be updated. 2.10.3 Migrating to virtual Fibre Channel adapter environments This section explains how to migrate your existing AIX environment to an NPIV-based environment. Note: For a Linux environment, migrating from a physical Fibre Channel adapter to a virtual Fibre Channel adapter is possible in a similar fashion to AIX. If you use SLES, you should switch the disk mounting method from by-id to by-uuid. Migrating from a physical to a virtual Fibre Channel adapter You can migrate any rootvg or non-rootvg disk assigned from a LUN that is mapped through a physical Fibre Channel adapter to a virtual Fibre Channel mapped environment. The following steps explain how to perform the migration. In the example, vios1 is the name for the Virtual I/O Server partition and NPIV is the name for the virtual I/O client partition. 1. Example 2-48 shows that a physical Fibre Channel adapter with two ports is assigned in the NPIV partition. Example 2-48 Show available Fibre Channel adapters # lscfg |grep fcs + fcs0 U789D.001.DQDYKYW-P1-C1-T1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) + fcs1 U789D.001.DQDYKYW-P1-C1-T2 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) 78 IBM PowerVM Virtualization Managing and Monitoring
  • 116. # lscfg |grep disk * hdisk0 U789D.001.DQDYKYW-P1-C1-T2-W203200A0B811A662-L0 MPIO Other DS4K Array Disk # lspath Enabled hdisk0 fscsi1 A LUN is mapped to this physical Fibre Channel adapter within the IBM DS4800 storage system, as shown in Figure 2-13. There is one path available to hdisk0.Figure 2-13 LUN mapped to a physical Fibre Channel adapter Chapter 2. Virtual storage management 79
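Before making any changes, record the WWPNs of the physical ports that the LUN is currently zoned to, because the SAN zoning will later be extended with the WWPN of the new virtual Fibre Channel client adapter. A sketch, using the adapter names from step 1:
# lscfg -vl fcs0 | grep Network
# lscfg -vl fcs1 | grep Network
The Network Address field in the output is the WWPN of each physical port.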
  • 117. 2. Add a virtual Fibre Channel server adapter to the Virtual I/O Server partition. In the HMC, select your managed server and the Virtual I/O Server partition vios1. Click Tasks  Dynamic Logical Partitioning  Virtual Adapters as shown in Figure 2-14.Figure 2-14 Add Virtual Adapter to the vios1 partition80 IBM PowerVM Virtualization Managing and Monitoring
  • 118. 3. Click Actions  Create  Fibre Channel Adapter as shown in Figure 2-15.Figure 2-15 Create virtual Fibre Channel server adapter in the vios1 partition Chapter 2. Virtual storage management 81
  • 119. 4. Enter the Adapter ID for the virtual Fibre Channel server adapter, the name of the partition it should be connected to, and the Client Adapter ID for the slot number of the virtual Fibre Channel client adapter. Then click OK as shown in Figure 2-16. Figure 2-16 Set Adapter IDs in the vios1 partition 5. Click OK.82 IBM PowerVM Virtualization Managing and Monitoring
  • 120. 6. Add a virtual Fibre Channel client adapter to the virtual I/O client partition. In the HMC, select your managed server and the partition NPIV. Click Tasks  Dynamic Logical Partitioning  Virtual Adapters as shown in Figure 2-17.Figure 2-17 Add a virtual adapter to the NPIV partition Chapter 2. Virtual storage management 83
  • 121. 7. Click Actions  Create  Fibre Channel Adapter as shown in Figure 2-18.Figure 2-18 Create virtual Fibre Channel client adapter in the NPIV partition84 IBM PowerVM Virtualization Managing and Monitoring
  • 122. 8. Enter the Adapter ID for the virtual Fibre Channel client adapter, the name of the Virtual I/O Server partition it should be connected to, and the Server adapter ID for the slot number of the virtual Fibre Channel server adapter. Then click OK as shown in Figure 2-19. Figure 2-19 Set Adapter IDs in the NPIV partition Tip: After adding the virtual Fibre Channel client adapter by DLPAR, you should save the partition profile so that the adapter added is stored to the partition profile. Do not edit the existing partition profile to add the virtual Fibre Channel client adapter added by DLPAR for next boot because the WWPNs of the adapter added by editing the profile is different from the adapter added by DLPAR. Starting with HMC 7.7.3 the existing profile can be overwritten (previously you needed to specify a new profile name)9. Click OK.10.Log in as padmin to the Virtual I/O Server partition vios1.11.Check all available virtual Fibre Channel server adapters with the lsdev command: $ lsdev -dev vfchost* name status description vfchost0 Available Virtual FC Server Adapter vfchost1 Available Virtual FC Server Adapter vfchost2 Available Virtual FC Server Adapter vfchost3 Available Virtual FC Server Adapter Chapter 2. Virtual storage management 85
  • 123. vfchost4 Available Virtual FC Server Adapter vfchost5 Available Virtual FC Server Adapter 12.Run cfgdev to configure the previously added virtual Fibre Channel server adapter: $ cfgdev 13.Run lsdev again to show the newly configured virtual Fibre Channel server adapter vfchost6: $ lsdev -dev vfchost* name status description vfchost0 Available Virtual FC Server Adapter vfchost1 Available Virtual FC Server Adapter vfchost2 Available Virtual FC Server Adapter vfchost3 Available Virtual FC Server Adapter vfchost4 Available Virtual FC Server Adapter vfchost5 Available Virtual FC Server Adapter vfchost6 Available Virtual FC Server Adapter 14.Determine the slot number for vfchost6 using the lsdev command: $ lsdev -dev vfchost6 -vpd vfchost6 U9117.MMA.101F170-V1-C55 Virtual FC Server Adapter Hardware Location Code......U9117.MMA.101F170-V1-C55 PLATFORM SPECIFIC Name: vfc-server Node: vfc-server@30000037 Device Type: fcp Physical Location: U9117.MMA.101F170-V1-C55 Tip: As previously defined in the HMC, vfchost6 is available in slot 55. 15.Map the virtual Fibre Channel server adapter vfchost6 with the physical Fibre Channel adapter fcs3 by using the vfcmap command: $ vfcmap -vadapter vfchost6 -fcp fcs3 vfchost6 changed 16.Check the mapping by using the lsmap -all -npiv command: $ lsmap -npiv -vadapter vfchost6 Name Physloc ClntID ClntName ClntOS ============= ================================== ====== ================ vfchost6 U9117.MMA.101F170-V1-C55 1286 IBM PowerVM Virtualization Managing and Monitoring
  • 124. Status:NOT_LOGGED_IN FC name:fcs3 FC loc code:U789D.001.DQDYKYW-P1-C6-T2 Ports logged in:0 Flags:4<NOT_LOGGED> VFC client name: VFC client DRC: e. Log in to the virtual I/O client partition named NPIV.17.Run cfgmgr to configure the previously defined virtual Fibre Channel client adapter. Check all available Fibre Channel adapters by using the lsdev command: # lscfg |grep fcs + fcs2 U9117.MMA.101F170-V12-C5-T1 Virtual Fibre Channel Client Adapter + fcs0 U789D.001.DQDYKYW-P1-C1-T1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) + fcs1 U789D.001.DQDYKYW-P1-C1-T2 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) A new virtual Fibre Channel client adapter fcs2 has been added to the operating system.18.Get the WWPN of this Virtual Fibre Channel client adapter by using the lscfg command as shown in Example 2-49. Example 2-49 WWPN of the virtual Fibre Channel client adapter in the NPIV partition # lscfg -vl fcs2 fcs2 U9117.MMA.101F170-V12-C5-T1 Virtual Fibre Channel Client Adapter Network Address.............C05076000AFE0034 ROS Level and ID............ Device Specific.(Z0)........ Device Specific.(Z1)........ Device Specific.(Z2)........ Device Specific.(Z3)........ Device Specific.(Z4)........ Device Specific.(Z5)........ Device Specific.(Z6)........ Device Specific.(Z7)........ Device Specific.(Z8)........C05076000AFE0034 Device Specific.(Z9)........ Hardware Location Code......U9117.MMA.101F170-V12-C5-T1 Chapter 2. Virtual storage management 87
  • 125. 19.Log in to your SAN switch and zone the WWPN of the virtual Fibre Channel client adapter as shown in Example 2-50 for the IBM 2109-F32. Example 2-50 Zoning WWPN for fcs2 itsosan02:admin> portloginshow 15 Type PID World Wide Name credit df_sz cos ===================================================== fe 660f02 c0:50:76:00:0a:fe:00:34 40 2048 c scr=3 fe 660f01 c0:50:76:00:0a:fe:00:14 40 2048 c scr=3 fe 660f00 10:00:00:00:c9:74:a4:75 40 2048 c scr=3 ff 660f02 c0:50:76:00:0a:fe:00:34 12 2048 c d_id=FFFFFC ff 660f01 c0:50:76:00:0a:fe:00:14 12 2048 c d_id=FFFFFC ff 660f00 10:00:00:00:c9:74:a4:75 12 2048 c d_id=FFFFFC itsosan02:admin> zoneadd "vios1", "c0:50:76:00:0a:fe:00:34" itsosan02:admin> cfgsave You are about to save the Defined zoning configuration. This action will only save the changes on Defined configuration. Any changes made on the Effective configuration will not take effect until it is re-enabled. Do you want to save Defined zoning configuration only? (yes, y, no, n): [no] y Updating flash ... itsosan02:admin> cfgenable npiv You are about to enable a new zoning configuration. This action will replace the old zoning configuration with the current configuration selected. Do you want to enable npiv configuration (yes, y, no, n): [no] y zone config "npiv" is in effect Updating flash ... itsosan02:admin> zoneshow Defined configuration: cfg: npiv vios1; vios2 zone: vios1 20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:14; 10:00:00:00:c9:74:a4:95; c0:50:76:00:0a:fe:00:34 zone: vios2 C0:50:76:00:0A:FE:00:12; 20:43:00:a0:b8:11:a6:62 Effective configuration: cfg: npiv zone: vios1 20:32:00:a0:b8:11:a6:62 c0:50:76:00:0a:fe:00:14 10:00:00:00:c9:74:a4:95 c0:50:76:00:0a:fe:00:34 zone: vios2 c0:50:76:00:0a:fe:00:12 20:43:00:a0:b8:11:a6:6288 IBM PowerVM Virtualization Managing and Monitoring
  • 126. 20.Add a new host port for the WWPN to the DS4800 as shown in Figure 2-20.Figure 2-20 Add a new host port 21.Run cfgmgr again to define a second path to the existing disk: # lspath Enabled hdisk0 fscsi1 Enabled hdisk0 fscsi2 22.Before you can remove the physical Fibre Channel adapter, you must remove the device from the operating system by using the rmdev command: # rmdev -dl fcs0 -R fcnet0 deleted fscsi0 deleted fcs0 deleted # rmdev -dl fcs1 -R fcnet1 deleted fscsi1 deleted fcs1 deleted Remember: You must remove two fcs devices because this is a two-port Fibre Channel adapter. Chapter 2. Virtual storage management 89
  • 127. 23.Use the bosboot and bootlist commands to update your boot record and boot list: # bosboot -ad /dev/hdisk0 bosboot: Boot image is 39570 512 byte blocks. # bootlist -m normal hdisk0 # bootlist -m normal -o hdisk0 blv=hd5 24.On the HMC, select the partition NPIV and then select Tasks  Dynamic Logical Partitioning  Physical Adapters  Move or Remove as shown in Figure 2-21.Figure 2-21 Remove a physical Fibre Channel adapter90 IBM PowerVM Virtualization Managing and Monitoring
  • 128. 25.Select the physical adapter that you want to remove from the list and click OK as shown in Figure 2-22. Figure 2-22 Select the adapter to be removed Important: Make sure that the adapter to be removed is defined as desired in the partition profile. Otherwise, it cannot be removed.Migrating from vSCSI to NPIVMigrating from a LUN that is mapped to a virtual I/O client through the Virtual I/OServer is not supported. You cannot remove the vSCSI mapping and then remapit to a WWPN coming from the virtual Fibre Channel adapter. Chapter 2. Virtual storage management 91
  • 129. If you want to migrate rootvg disks, you have four options: using mirroring, installing alternate disks, using the migratepv command, or using NIM backup and restore. This section explains how to use each option. Mirroring You can create additional disks mapped over NPIV. Then you can add these disks to the rootvg by using the following command: $ lspv hdisk0 00ca58bd2ed277ef rootvg active hdisk1 00ca58bd2f512b88 None $ extendvg -f rootvg hdisk1 $ lspv hdisk0 00ca58bd2ed277ef rootvg active hdisk1 00ca58bd2f512b88 rootvg active $ After you have added hdisk1 to the rootvg, you can mirror hdisk0 to hdisk1 by using the mirrorvg command and boot from hdisk1. Installing alternate disks With this option you first create additional disks mapped over NPIV, and then use the alternate disk installation method. For more information about this option, search for alternate disk installation in the IBM AIX Information Center at: http://publib.boulder.ibm.com/infocenter/systems/scope/aix/index.jsp Using the migratepv command If you want to migrate a disk, you can create additional disks mapped over NPIV. Then you can migrate the disks by using the migratepv command, which moves physical partitions from one physical volume to one or more physical volumes. Using NIM backup and restore You can back up the rootvg onto a NIM server and then restore it to a new disk mapped over NPIV. For detailed information about NIM backup and restore, see NIM from A to Z in AIX 5L™, SG24-7296.92 IBM PowerVM Virtualization Managing and Monitoring
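To illustrate the mirroring option described above, the following minimal sketch completes the migration using the hdisk0 and hdisk1 names from the earlier lspv output; these are standard AIX LVM commands, but verify them against your own disk layout before use:
# mirrorvg rootvg hdisk1              (mirror all rootvg logical volumes onto the NPIV-backed disk)
# bosboot -ad /dev/hdisk1             (create a boot image on the new disk)
# bootlist -m normal hdisk1 hdisk0    (boot from the new disk first)
After a successful reboot from hdisk1, the original disk can optionally be removed from the volume group:
# unmirrorvg rootvg hdisk0
# reducevg rootvg hdisk0
# bootlist -m normal hdisk1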
  • 130. 3 Chapter 3. Virtual network management Network connectivity in the virtual environment is extremely flexible. This chapter describes how to perform common configuration tasks related to network configuration. We discuss how to change the IP or the VLAN in a virtualized environment, along with mapping management and tuning packet sizes for best performance. It is assumed you are well-versed in setting up a virtual network environment. To obtain detailed information about this task, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. This chapter contains the following sections: Modifying IP addresses Modifying VLANs Modifying MAC addresses Managing the mapping of network devices SEA threading on the Virtual I/O Server Tuning network throughput Shared Ethernet Adapter failover with Load Sharing Quality of Service Denial of Service hardening© Copyright IBM Corp. 2012. All rights reserved. 93
• 131. 3.1 Modifying IP addresses Many hosts, regardless of operating system, can have numerous IP addresses depending on the number of network interfaces that are configured. Generally, one of these IP addresses is considered to be the primary IP address and is registered against the host name of the system. This is the interface that is used for the majority of administrative tasks and it is the address that is registered in any directory services such as DNS. The remaining IP addresses are often used for specific tasks such as point-to-point connections or access to other networks. This section describes how to change the IP addresses assigned to the Virtual I/O Server and client partitions and the implications of these changes. Important: If the IP address you are modifying is the address used for RMC connectivity between the HMC and your partition, take care to ensure that this connectivity still exists after the address changes. Otherwise, the ability to perform DLPAR operations on your partition will be disabled.3.1.1 Virtual I/O Server The primary IP address of the Virtual I/O Server is used for: RMC communication for dynamic LPAR operations on the Virtual I/O Server. Remote access to the Virtual I/O Server through telnet or Secure Shell (SSH). NIM operations. Usually, this address is configured in one of these two ways: It is configured on a stand-alone interface that is dedicated solely to system administration. It is configured on top of a Shared Ethernet Adapter (SEA) device. Using either method, the IP address is transparent to client partitions and can be changed without affecting their operation. For example, if the IP address must be changed on en5 from 9.3.5.108 to 9.3.5.109 and the host name must be changed from VIO_Server1 to VIO_Server2, use the following command: mktcpip -hostname VIO_Server2 -inetaddr 9.3.5.109 -interface en5 94 IBM PowerVM Virtualization Managing and Monitoring
  • 132. If you only want to change the IP address or the gateway of a network interface, you can also use the chtcpip command: chtcpip -interface en5 -inetaddr 9.3.5.109 To change the adapter at the same time, such as from en5 to en8: First delete the TCP/IP definitions on en5 using the rmtcpip command. Then run mktcpip on en8. Finally it is also possible to make these changes using the cfgassist menu system. Important: If the IP address you are modifying is configured on top of an Shared Ethernet Adapter (SEA) device, take care not to modify or remove the layer-2 device (the ent device) because this will disrupt any traffic being serviced by the SEA. Only changes to the layer-3 device (the en device) are transparent to clients of the SEA.3.1.2 Client partitions Client partition IP addresses can be changed as you would in a physical environment. There is no specific requirement to modify any configuration on the Virtual I/O Server as long as there is no requirement to change the VLAN configuration to support the new IP address. VLAN modifications are covered in 3.2, “Modifying VLANs” on page 96. AIX The primary interface on an AIX partition is used for the same tasks as on a Virtual I/O Server, and the process to modify IP addresses is identical. For an AIX virtual I/O client, to change the IP address on a virtual Ethernet adapter use SMIT or the mktcpip command. In this example, we change the IP address from 9.3.5.113 to 9.3.5.112 and the host name from lpar03 to lpar02. The virtual Ethernet adapter can be modified in the same way you modify a physical adapter, using the following command: mktcpip -h lpar02 -a 9.3.5.112 -i en0 Chapter 3. Virtual network management 95
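Returning to the Virtual I/O Server example in 3.1.1, moving the administration IP address from en5 to en8 might look like the following sketch. The netmask and gateway values are assumptions for illustration only and must match your own network:
$ rmtcpip -f -interface en5
$ mktcpip -hostname VIO_Server1 -inetaddr 9.3.5.108 -interface en8 -netmask 255.255.254.0 -gateway 9.3.4.1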
  • 133. IBM i For an IBM i virtual I/O client, change the IP address on a physical or virtual Ethernet adapter using the following procedure: 1. Add a new TCP/IP interface with the new IP address (9.3.5.123) to an existing Ethernet line description (ETH01) by using the ADDTCPIFC command as follows: ADDTCPIFC INTNETADR(9.3.5.123) LIND(ETH01) SUBNETMASK(255.255.254.0) 2. Start the new TCP/IP interface using the STRTCPIFC command as follows: STRTCPIFC INTNETADR(9.3.5.123) 3. The TCP/IP interface with the old IP address (9.3.5.119) can now be ended and removed using the ENDTCPIFC and RMVTCPIFC commands as follows: ENDTCPIFC INTNETADR(9.3.5.119) RMVTCPIFC INTNETADR(9.3.5.119) Alternatively, you can use the CFGTCP command. Choosing the option 1. Work with TCP/IP interfaces allows a menu-based change of TCP/IP interfaces. To change the host name for an IBM i virtual I/O client, use the CHGTCPDMN command. Linux Changing the IP address of an interface using the ifconfig command will not persist through a reboot of the operating system. For systems running Red Hat, the fastest method is to use the system-config-network application to make the changes. On SUSE systems, use the yast or yast2 applications. Both are menu driven systems that work in the command shell and update the necessary configuration files.3.2 Modifying VLANs Changing the VLAN configuration of a partition can be a more in-depth process than changing the IP address of an interface. Because VLANs are configured lower in the networking model than IP, changes are likely to be required in more than one place to ensure the desired connectivity is achieved. In addition, there are many ways to configure VLANs in a system to suit your network infrastructure. It is important to have a strong understanding of VLANs before commencing this section. For an introduction and more in-depth description of VLANs, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.96 IBM PowerVM Virtualization Managing and Monitoring
  • 134. 3.2.1 Process overview Regardless of the operating system, there are generally three places where VLAN modifications to a partition need to be made: The Hardware Management Console. Changes to the virtual Ethernet adapter configuration for partitions are performed in the HMC. VLAN IDs are added and removed here. On the Shared Ethernet Adapter in the Virtual I/O Server. The SEA is the bridge between the internal hypervisor networks and external networks. If a partition needs to participate in a VLAN that is external to the hypervisor, access to the external network will be provided by a Shared Ethernet Adapter. Within the operating system. Additional configuration within the target operating system might be required for the changes to be effective. In the HMC, the three methods for configuring a VLAN in a partition are: Add an additional virtual Ethernet adapter which has the Port VLAN ID (PVID) set to the required VLAN ID. This requires no additional configuration within the operating system. All Ethernet traffic originating from this interface uses the PVID assigned to the adapter. Add an additional virtual Ethernet Adapter to the partition with the required VLAN included in the 802.1q Additional VLANs field. This method requires an operating system that is capable of 802.1q VLAN tagging. Typically, a pseudo interface is configured within the operating system on top of the base adapter to tag traffic with the required VLAN ID. Modify the list of 802.1q VLANs on an existing virtual Ethernet adapter to include the required VLAN. As with the previous method, this method requires an operating system that is capable of 802.1q VLAN tagging. The ability to dynamically modify an existing virtual Ethernet adapter using the DLPAR functionality is dependent on the target operating system and system firmware. If your environment doesn’t support dynamic VLAN modifications, perform either of the following tasks: – Unconfigure and remove the adapter from the partition, then re-add it with the correct list of VLANs included. This will disrupt traffic using the interface. – Modify the adapter in the partition profile and re-activate the partition. Chapter 3. Virtual network management 97
• 135. Table 3-1 shows the current supported versions of components required to perform dynamic VLAN modifications. Table 3-1 Required versions for dynamic VLAN modifications Component Required version for dynamic VLAN HMC 7.7.2 System Firmware efw7.2 Virtual I/O Server 2.2.0.10 FP24 AIX AIX V6.1 TL 6 IBM i Not supported Linux Not supported Reminder: The virtual Ethernet adapter supports 20 VLAN IDs in addition to the PVID. If you require more than 20 VLANs, additional virtual Ethernet adapters will be needed. Tip: IBM i V7R1 TR3 or later will support 802.3ad, often referred to as Link Aggregation, EtherChannel, or LACP. 3.2.2 Hardware Management Console All operations can be performed through the HMC graphical interface or on the command line. Only examples of dynamic VLAN modifications are shown in this section because the procedures for DLPAR add and remove operations are well defined elsewhere. The following examples show how to dynamically modify the virtual Ethernet adapter in slot 5 of the partition named P7_2_vios2, running on the managed system POWER7_2-061AB2P, to include VLAN ID 200. 98 IBM PowerVM Virtualization Managing and Monitoring
  • 136. Dynamic VLAN modification in the GUI To modify the VLAN using the GUI, perform the following steps: 1. Navigate to the partition and to the Dynamic Logical Partitioning menu and select Virtual Adapters as depicted in Figure 3-1.Figure 3-1 Dynamically adding a virtual adapter to a partition Chapter 3. Virtual network management 99
  • 137. 2. Select the virtual Ethernet adapter to be modified and click File  Edit as shown in Figure 3-2. Figure 3-2 Modifying an existing adapter100 IBM PowerVM Virtualization Managing and Monitoring
  • 138. 3. Modify the list of VLANs to include the required VLANs. In the example in Figure 3-3, VLAN ID 200 is being added. More than one ID can be added by supplying a comma separated list in the New VLAN ID field. To remove VLANs, select them in the Additional VLANs list and click Remove. Figure 3-3 Adding VLAN 200 to the additional VLANs field After the DLPAR operation has completed, the new VLAN will be available on the virtual adapter.Dynamic VLAN modification in the CLIThe following examples show the HMC CLI method of modifying VLANs usingthe same systems as the previous section. Chapter 3. Virtual network management 101
  • 139. The command in Example 3-1 shows the command to modify the virtual Ethernet adapter in slot 5 of the partition named P7_2_vios2, running on the managed system POWER7_2-061AB2P. The operation is adding the VLAN ID 200 to the additional vlans field. Example 3-1 Dynamically modifying the additional VLANs field hscroot@hmc9:~> chhwres -r virtualio --rsubtype eth -m POWER7_2-061AB2P -o s -p P7_2_vios2 -s 5 -a "addl_vlan_ids+=200" The command in Example 3-2 is an extension of the previous example and slightly more complicated. The operation is enabling the IEEE 802.1q capability and adding the VLAN ID 200 to the additional VLANs field in a single operation. Example 3-2 Dynamically modifying VLANs field and setting the IEEE 802.1q flag chhwres -r virtualio --rsubtype eth -m POWER7_2-061AB2P -o s -p P7_2_vios2 -s 5 -a "ieee_virtual_eth=1,addl_vlan_ids+=200" The command in Example 3-3 demonstrates removing VLAN ID 200 from the configuration. Example 3-3 Dynamically modifying the additional VLANs field chhwres -r virtualio --rsubtype eth -m POWER7_2-061AB2P -o s -p P7_2_vios2 -s 5 -a "ieee_virtual_eth=1,addl_vlan_ids-=200"3.2.3 Virtual I/O Server Modifying VLANs in the Virtual I/O Server environment is generally required for one of the following reasons: The Virtual I/O Server is required to participate in a particular VLAN. A client partition requires access to a particular VLAN through a Shared Ethernet Adapter. A combination of these scenarios. If the Virtual I/O Server is required to participate in the VLAN, use either of the following methods: Add a new virtual Ethernet adapter with the PVID field set to the required VLAN ID. All traffic originating on this adapter will use the PVID. If access to external networks is required for this VLAN, at least one Virtual I/O Server on the managed system must have a Shared Ethernet Adapter capable of bridging the VLAN.102 IBM PowerVM Virtualization Managing and Monitoring
  • 140. Add or modify an existing adapter such that the required VLAN is listed in the 802.1q Additional VLAN fields. Then use the mkvdev command to create a VLAN tagged interface over the base adapter. This method also works if the virtual Ethernet adapter is a member of a Shared Ethernet Adapter. The mkvdev syntax is demonstrated in Example 3-4. Important: It is not supported to dynamically modify additional VLAN ID field to add a VLAN ID that already exists on another trunk adapter within the same virtual switch on a Virtual I/O Server, because it may cause unpredictable behavior.If a client partition requires access to a particular VLAN through a SharedEthernet Adapter, use either of the following methods to enable the VLAN on theSEA: If your system is not capable of dynamic VLAN modifications, or if you have reached the limit of 20 additional VLANs per virtual Ethernet adapter, add an additional virtual Ethernet Adapter with the required VLAN listed in the 802.1q Additional VLAN field. Then add the new adapter into an existing SEA configuration using the chdev command to modify the virt_adapters field of the SEA device. The SEA will immediately begin bridging the new VLAN ID without interruption to existing traffic. If your system is capable of dynamic VLAN modifications, select the virtual Ethernet adapter that is a member of the SEA and modify the list of 802.1q Additional VLANs to include (or exclude) the required VLAN. The changes will take effect immediately without affecting existing traffic on the SEA.Example 3-4 demonstrates the use of the mkvdev command to create a VLANtagged interface over the ent9 interface. In this example, ent9 is a SharedEthernet Adapter.Example 3-4 Creating the VLAN tagged interface$ lsdev -dev ent9name status descriptionent9 Available Shared Ethernet Adapter$ mkvdev -vlan ent9 -tagid 200ent10 Availableen10et10$ lsdev -dev ent10 -attrattribute value description user_settablebase_adapter ent9 VLAN Base Adapter True Chapter 3. Virtual network management 103
• 141. vlan_priority 0 VLAN Priority True vlan_tag_id 200 VLAN Tag ID True Important: If your system doesn’t support dynamic VLAN modifications and you are modifying the VLAN list of a virtual Ethernet adapter that is configured in a SEA with ha_mode enabled, the HMC will not allow you to reconfigure the list of VLANs on that interface. You will need to add an additional virtual Ethernet adapter and modify the virt_adapters list of the SEA, or modify the profile of both Virtual I/O Servers and re-activate both Virtual I/O Servers at the same time.3.2.4 Client partitions The process of modifying the VLAN configuration in client partitions is similar to modifying VLANs in the Virtual I/O Server, without the complexity of the Shared Ethernet Adapter. AIX To enable an AIX partition to participate in a particular VLAN, use either of the following methods: Add a new virtual Ethernet adapter with the PVID field set to the required VLAN ID. All traffic originating on this adapter will use the PVID. Add or modify an existing adapter such that the required VLAN is listed in the 802.1q Additional VLAN fields. Then use either the smitty vlan fastpath or the mkdev command to create a VLAN tagged interface over the base adapter. The mkdev command syntax is demonstrated in Example 3-5. Example 3-5 demonstrates the use of the mkdev command on AIX. This example creates a VLAN tagged interface for VLAN 200, using ent1 as the base adapter. Example 3-5 Creating the VLAN tagged interface P7_2_AIX:/ # mkdev -c adapter -s vlan -t eth -a base_adapter=ent1 -a vlan_tag_id=200 ent2 Available P7_2_AIX:/ # /usr/lib/methods/defif en2 et2 IBM i To enable an IBM i partition to participate in a particular VLAN, an additional adapter needs to be added to the profile which has the PVID field set to the 104 IBM PowerVM Virtualization Managing and Monitoring
  • 142. required VLAN. IBM i doesn’t support 802.1q VLAN tagging. For an IBM ioperating system to participate in multiple VLANs, an adapter per VLAN needs tobe configured.LinuxTo enable a Linux partition to participate in a particular VLAN, use either of thefollowing methods: Add a new virtual Ethernet adapter with the PVID field set to the required VLAN ID. All traffic originating on this adapter will use the PVID. Add or modify an existing adapter such that the required VLAN is listed in the 802.1q Additional VLAN fields. Then use the vconfig command to create a VLAN tagged interface over the base adapter. The vconfig command syntax is demonstrated in Example 3-6.The command in Example 3-6 creates an interface on top of eth0 that will tagframes with VLAN 200. The resulting device is eth0.200.Example 3-6 Creating a VLAN tagged interface on Linux[root@P7-1-RHEL ~]# vconfig add eth0 200Added VLAN with VID == 200 to IF -:eth0:-[root@P7-1-RHEL ~]# ifconfig eth0.200eth0.200 Link encap:Ethernet HWaddr 22:5C:2A:1A:23:02 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)To remove this device, use the vconfig command again as show in Example 3-7.Example 3-7 Removing a VLAN tagged interface on Linux[root@P7-1-RHEL ~]# vconfig rem eth0.200Removed VLAN -:eth0.200:-If you receive an error about the 8021q module not being loaded, use themodprobe command to load the 8021q module as shown in Example 3-8.Example 3-8 Loading the 8021q module into the kernel[root@P7-1-RHEL ~]# modprobe 8021q Chapter 3. Virtual network management 105
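The vconfig configuration shown in Example 3-6 does not persist across a reboot. On Red Hat based distributions, such as the RHEL partition used in this example, one way to make the VLAN interface persistent is an interface configuration file; the following is only a sketch, and the IP address and netmask values are placeholders for illustration:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0.200
DEVICE=eth0.200
VLAN=yes
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.200.0.100
NETMASK=255.255.255.0
On SUSE systems, an equivalent persistent configuration can be created with the yast network module.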
  • 143. 3.3 Modifying MAC addresses In a Power Systems server, the hardware MAC address of a virtual Ethernet adapter is automatically generated by the HMC when it is defined. Enhancements introduced in POWER7 servers allow the LPAR administrator to: Specify the hardware MAC address of the virtual Ethernet adapter at creation time. Restrict the range of addresses that are allowed to be configured by the operating system within the LPAR. These features further improve the flexibility and security of the PowerVM networking stack. The following examples show how to configure these new features.3.3.1 Hardware Management Console When creating a virtual Ethernet adapter, the Advanced tab of the Create Virtual Ethernet page contains the settings related to custom MAC addresses. Defining a custom MAC address The MAC Address field displays a value of auto-assigned unless you choose to specify a custom MAC address by selecting the Override check box.106 IBM PowerVM Virtualization Managing and Monitoring
  • 144. Figure 3-4 shows the Create Virtual Ethernet Adapter window with the Overrideoption selected, and the custom MAC address set to 06:00:00:00:00:AA.Figure 3-4 Defining a custom MAC addressThe MAC address of an Ethernet adapter is a twelve-character (6 bytes)hexadecimal string. The valid characters are from 0 to 9 and A to F, and thecharacters are not case-sensitive.There are two rules for custom MAC addresses: Bit 1 of byte zero of the MAC address is reserved for Ethernet multicasting and must always be 0. Bit 2 of byte zero of the MAC address must always be 1 because it indicates that the MAC is a locally administered address. Chapter 3. Virtual network management 107
• 145. Without a strong understanding of the MAC address format, these rules can be difficult to interpret. Figure 3-5 shows their locations. Figure 3-5 MAC address format (the 6-byte address is shown with byte 0 as the most significant byte; within byte 0, bit 1 distinguishes unicast (0) from multicast (1) addresses and bit 2 distinguishes universally (0) from locally (1) administered addresses) These rules mean that valid MAC addresses must conform to the following format, where x is a hexadecimal value (0-9 or A-F). x2:xx:xx:xx:xx:xx x6:xx:xx:xx:xx:xx xA:xx:xx:xx:xx:xx xE:xx:xx:xx:xx:xx Restricting allowable MAC addresses The MAC Address Restriction option allows the administrator to fine-tune the operating system’s ability to override the hardware MAC address of the adapter. These options are independent of the choice to override the auto-assignment of the hardware MAC address. The restrictions can be used with an auto-assigned MAC address or with a custom MAC address. The options are: Allow all O/S Defined MAC Addresses Permits the operating system to override the hardware MAC address with any valid MAC address. Deny all O/S Defined MAC Addresses Denies any modifications to the MAC address of this adapter by the operating system. 108 IBM PowerVM Virtualization Managing and Monitoring
• 146. Specify Allowable O/S Defined MAC Addresses Allows the administrator to define up to four addresses that are allowed to be used in an override by the operating system on this adapter. Note that if you are configuring a custom hardware MAC address on this adapter, it doesn’t need to be included in this list. Important: The rules in the previous section regarding custom hardware MAC addresses do not apply to addresses defined by the operating system. If the Allow all O/S Defined MAC Addresses option is used, the operating system can define any hexadecimal value as the MAC address.3.3.2 Operating system MAC modifications This section gives examples of how to perform the following tasks on AIX, IBM i, and Linux: Display the current MAC address of an adapter. Modify the MAC address within the operating system, and show the result of this operation if the MAC address modification is not permitted. For all examples, an adapter was configured with a custom hardware MAC address of 06:00:00:00:00:AA in the HMC, and then later changed on the operating system to 06:00:00:00:00:BB. AIX To show the current MAC address of an Ethernet adapter in AIX, use the entstat or lscfg command as shown in Example 3-9. Example 3-9 Listing an adapter MAC address within AIX P7_2_AIX:/ # entstat -d ent2 | grep "Hardware Address" Hardware Address: 06:00:00:00:00:aa P7_2_AIX:/ # lscfg -vl ent2 | grep "Network Address" Network Address.............0600000000AA Using either command, it is not possible to determine whether the MAC address is the true hardware address or an operating system defined address. Note that this is still true when the output is not filtered as in the previous example. Chapter 3. Virtual network management 109
  • 147. Modifications to an Ethernet adapter MAC address by AIX are controlled by two parameters on the layer-2 (ent) device, they are: use_alt_addr This parameter enables the alternate address. Valid values are yes and no. alt_addr The address to use, represented as a hex value. As an example, the MAC address 06:00:00:00:00:BB would be entered as 0x0600000000BB. If the value of use_alt_addr is no, the adapter is using the hardware MAC address. If the value is yes, the address specified in the alt_addr parameter is in use. Example 3-10 shows changing the MAC from the hardware address of 06:00:00:00:00:AA to the operating system defined address of 06:00:00:00:00:BB and back again. We can tell that 06:00:00:00:00:AA is the true hardware address as the use_alt_addr field is initially set to no. The device in this example is ent2 and it has been configured in the HMC to “Allow all O/S Defined MAC addresses”. Example 3-10 Changing an adapter MAC address within AIX P7_2_AIX:/ # lsattr -El ent2 -a use_alt_addr -a alt_addr use_alt_addr no Enable Alternate Ethernet Address True alt_addr 0x000000000000 Alternate Ethernet Address True P7_2_AIX:/ # entstat -d ent2 | grep "Hardware Address" Hardware Address: 06:00:00:00:00:aa P7_2_AIX:/ # chdev -l ent2 -a use_alt_addr=yes -a alt_addr=0x0600000000BB ent2 changed P7_2_AIX:/ # entstat -d ent2 | grep "Hardware Address" Hardware Address: 06:00:00:00:00:bb P7_2_AIX:/ # chdev -l ent2 -a use_alt_addr=no ent2 changed P7_2_AIX:/ # entstat -d ent2 | grep "Hardware Address" Hardware Address: 06:00:00:00:00:aa110 IBM PowerVM Virtualization Managing and Monitoring
• 148. In Example 3-11, the same adapter has been redefined with the "Deny all O/S Defined MAC addresses" option. The example shows that the layer-2 device (ent2) is still allowed to be modified; however, the layer-3 device (en2) has been deconfigured by the AIX kernel and is in the Defined state, no longer available for use. Example 3-11 Failed changing of an adapter MAC address within AIX P7_2_AIX:/ # entstat -d ent2 | grep "Hardware Address" Hardware Address: 06:00:00:00:00:aa P7_2_AIX:/ # chdev -l ent2 -a use_alt_addr=yes -a alt_addr=0x0600000000BB ent2 changed P7_2_AIX:/ # entstat -d ent2 entstat: 0909-003 Unable to connect to device ent2, errno = 22 P7_2_AIX:/ # lsdev | egrep "en2|ent2" en2 Defined Standard Ethernet Network Interface ent2 Available Virtual I/O Ethernet Adapter (l-lan) IBM i To display the current MAC address of an Ethernet adapter in IBM i, use the DSPLIND line_description CL command as shown in Figure 3-6. Line description . . . . . . . . . : ETH02 Option . . . . . . . . . . . . . . : *BASIC Category of line . . . . . . . . . : *ELAN Resource name . . . . . . . . . . : CMN05 Online at IPL . . . . . . . . . . : *YES Vary on wait . . . . . . . . . . . : *NOWAIT Network controller . . . . . . . . : ETH02NET Local adapter address . . . . . . : 0600000000AA Exchange identifier . . . . . . . : 056E970F Ethernet standard . . . . . . . . : *ETHV2 Line speed . . . . . . . . . . . . : *AUTO Current line speed . . . . . . . . : 1G Duplex . . . . . . . . . . . . . . : *AUTO Current duplex . . . . . . . . . . : *FULL Serviceability options . . . . . . : *NONE Maximum frame size . . . . . . . . : 1496 Figure 3-6 IBM i Display line description Chapter 3. Virtual network management 111
  • 149. To change an Ethernet adapter’s MAC address in IBM i: End any TCP interfaces associated with the Ethernet line description. Vary off the line description. Change the adapter’s address. Vary on the line description again. Start any associated TCP interfaces again. Example 3-12 shows the CL commands we used to change the MAC address of the (virtual) Ethernet adapter in IBM i. Example 3-12 Changing an Ethernet adapter MAC address within IBM i ENDTCPIFC INTNETADR(172.16.20.196) VRYCFG CFGOBJ(ETH02) CFGTYPE(*LIN) STATUS(*OFF) CHGLINETH LIND(ETH02) ADPTADR(0600000000BB) VRYCFG CFGOBJ(ETH02) CFGTYPE(*LIN) STATUS(*ON) STRTCPIFC INTNETADR(172.16.20.196) When trying to change the MAC address of a virtual Ethernet adapter that was defined with the Deny all O/S Defined MAC addresses option on the HMC, changing the MAC address in the line description still works, but trying to vary on the line description fails with a CPI59F1 message Line ETH02 failed. Internal system failure. Linux To see the current hardware address of an Ethernet adapter in Linux, use the ifconfig command as show in Example 3-13. Example 3-13 Displaying an adapter MAC address within Linux [root@Power7-2-RHEL /]# ifconfig eth1 | grep "HWaddr" eth1 Link encap:Ethernet HWaddr 06:00:00:00:00:AA The ifconfig command can also be used to modify the address. Example 3-14 shows changing the hardware address from 06:00:00:00:00:AA to 06:00:00:00:00:BB. The device in this example is eth1, and it has been configured in the HMC to “Allow all O/S Defined MAC addresses.” Example 3-14 Changing an adapter MAC address within Linux [root@Power7-2-RHEL /]# ifconfig eth1 | grep "HWaddr" eth1 Link encap:Ethernet HWaddr 06:00:00:00:00:AA [root@Power7-2-RHEL /]# ifconfig eth1 hw ether 06:00:00:00:00:BB112 IBM PowerVM Virtualization Managing and Monitoring
  • 150. [root@Power7-2-RHEL /]# ifconfig eth1 | grep "HWaddr"eth1 Link encap:Ethernet HWaddr 06:00:00:00:00:BBThe ifconfig command can only show you the current MAC address. Usingifconfig, it is not possible to determine whether this is the true hardware MACaddress or an operating-system-defined MAC address. Note that this is true evenwhen the output is not filtered as in the previous example. On Power Systemsservers, it is possible to determine this information from the /proc file system asshown in Example 3-15. In this example, the adapter was configured in slot 3,hence the details in the 30000003 file are relevant to our adapter (the leastsignificant digits are the hex value of the adapter slot).Example 3-15 Displaying an adapter firmware MAC address within Linux[root@Power7-2-RHEL /]# grep MAC /proc/net/ibmveth/*/proc/net/ibmveth/30000002:Current MAC: 6E:8D:DA:FD:46:02/proc/net/ibmveth/30000002:Firmware MAC: 6E:8D:DA:FD:46:02/proc/net/ibmveth/30000003:Current MAC: 06:00:00:00:00:BB/proc/net/ibmveth/30000003:Firmware MAC: 06:00:00:00:00:AAIn Example 3-16, the same adapter has been redefined with the “Deny all O/SDefined MAC addresses” option. Similarly to AIX, we can see that the MACaddress change appears to work on both the eth1 interface and in the /proc file.However, when we attempt to enable the interface with an IP configuration, itfails. When we revert it back to the original hardware MAC, the commandsucceeds.Example 3-16 Failed changing of an adapter MAC address in Linux[root@Power7-2-RHEL /]# ifconfig eth1 | grep "HWaddr"eth1 Link encap:Ethernet HWaddr 06:00:00:00:00:AA[root@Power7-2-RHEL /]# ifconfig eth1 hw ether 06:00:00:00:00:BB[root@Power7-2-RHEL /]# ifconfig eth1 | grep "HWaddr"eth1 Link encap:Ethernet HWaddr 06:00:00:00:00:BB[root@Power7-2-RHEL /]# grep MAC /proc/net/ibmveth/*/proc/net/ibmveth/30000002:Current MAC: 6E:8D:DA:FD:46:02/proc/net/ibmveth/30000002:Firmware MAC: 6E:8D:DA:FD:46:02/proc/net/ibmveth/30000003:Current MAC: 06:00:00:00:00:BB/proc/net/ibmveth/30000003:Firmware MAC: 06:00:00:00:00:AA[root@Power7-2-RHEL /]# ifconfig eth1 172.200.0.100 netmask 255.255.255.0 upSIOCSIFFLAGS: Machine is not on the networkSIOCSIFFLAGS: Machine is not on the network Chapter 3. Virtual network management 113
  • 151. [root@Power7-2-RHEL /]# ifconfig eth1 hw ether 06:00:00:00:00:AA [root@Power7-2-RHEL /]# ifconfig eth1 172.200.0.100 netmask 255.255.255.0 up3.4 Managing the mapping of network devices One of the keys to managing a virtual environment is keeping track of what virtual objects correspond to what physical objects. In the network area, this can involve physical and virtual network adapters, and VLANs that span across hosts and switches. This mapping is critical for managing performance and to understand what systems will be affected by hardware maintenance. In environments that require redundant network connectivity, this section focuses on the SEA failover method in preference to the Network Interface Backup method of providing redundancy. Depending on whether you choose to use 802.1Q tagged VLANs, you might need to track the following information: For the virtual I/O Server: – Server host name – Physical adapter device name – Switch port – SEA device name – Virtual adapter device name – Virtual adapter slot number – Port virtual LAN ID (in tagged and untagged usages) – Additional virtual LAN IDs For the virtual I/O client: – Client host name – Virtual adapter device name – Virtual adapter slot number – Port virtual LAN ID (in tagged and untagged usages) – Additional virtual LAN IDs Because of the number of fields to be tracked, you should use a spreadsheet or database program to track this information. Record the data when the system is installed, and track it over time as the configuration changes.114 IBM PowerVM Virtualization Managing and Monitoring
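On the Virtual I/O Server, much of this mapping information can be collected from the command line when the configuration is recorded or audited. A minimal sketch, assuming ent4 is the Shared Ethernet Adapter device and ent2 is a virtual adapter (substitute your own device names):
$ lsmap -all -net               (lists each SEA with its backing physical device and trunk virtual adapters)
$ lsdev -dev ent4 -attr         (shows the SEA attributes, including pvid, real_adapter, and virt_adapters)
$ lsdev -dev ent2 -vpd          (the physical location code of a virtual adapter ends in C followed by its slot number)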
  • 152. 3.4.1 Virtual network adapters and VLANs Virtual network adapters operate at memory speed. In many cases where additional physical adapters are needed, there is no need for additional virtual adapters. However, transfers that remain inside the virtual environment can benefit from using large MTU sizes on separate adapters. This can lead to improved performance and reduced CPU utilization for transfers that remain inside the virtual environment. The POWER Hypervisor supports tagged VLANs that can be used to separate traffic in the system. Separate adapters can be used to accomplish the same goal. Which method you choose, or a combination of both, should be based on common networking practice in your data center.3.4.2 Virtual device slot numbers Virtual storage and virtual network devices have a unique slot number. In complex systems, there tend to be far more storage devices than network devices because each virtual SCSI device can only communicate with one server or client. Slot numbers through 20 should be reserved for network devices on all LPARs to keep the network devices grouped together. In some complex network environments with many adapters, more slots might be required for networking. The maximum number of virtual adapter slots per LPAR should be increased above the default value of 10 when you create an LPAR. The appropriate number for your environment depends on the number of LPARs and adapters expected on each system. Each unused virtual adapter slot consumes a small amount of memory, so the allocation should be balanced with expected requirements. To plan memory requirements for your system configuration, use the System Planning Tool available at: http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.html3.4.3 Tracing a configuration Despite the best intentions in record keeping, it sometimes becomes necessary to manually trace virtual network connections back to the physical hardware. AIX Virtual Ethernet configuration tracing For an AIX virtual I/O client partition with multiple virtual network adapters, the slot number of each adapter can be determined by using the adapter physical Chapter 3. Virtual network management 115
  • 153. location from the lscfg command. In the case of virtual adapters, this field includes the card slot following the letter C, as shown in the Example 3-17. Example 3-17 Virtual Ethernet adapter slot number # lscfg -l ent* ent0 U9117.MMA.101F170-V3-C2-T1 Virtual I/O Ethernet Adapter (l-lan) You can use the slot numbers from the physical location field to trace back through the HMC Virtual Network Management option and determine what connectivity and VLAN tags are available on each adapter as illustrated in Figure 3-7: 1. From the HMC Systems Management  Servers view, select your Power Systems server and select Configuration  Virtual Network Management as shown in Figure 3-7.Figure 3-7 HMC Virtual Network Management116 IBM PowerVM Virtualization Managing and Monitoring
  • 154. 2. To determine where your AIX virtual I/O client Ethernet adapter is connected to, select a VLAN. In our example, the AIX61 LPAR virtual I/O client adapter ent0 in slot2 is on VLAN1 as shown in Figure 3-8.Figure 3-8 Virtual Ethernet adapter slot assignments Restriction: The HMC Virtual Network Management function currently does not support IBM i partitions. Chapter 3. Virtual network management 117
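The VLAN assignments can also be listed from the HMC command line, which works for AIX, IBM i, and Linux partitions alike. A sketch; the managed system name below is taken from the chhwres examples earlier in this chapter, so substitute the name of your own server:
hscroot@hmc9:~> lshwres -r virtualio --rsubtype eth --level lpar -m POWER7_2-061AB2P -F lpar_name,slot_num,port_vlan_id,addl_vlan_ids
Each line of output shows one virtual Ethernet adapter with its partition name, slot number, PVID, and any additional 802.1q VLAN IDs.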
  • 155. IBM i Virtual Ethernet configuration tracing For an IBM i virtual I/O client partition with virtual Ethernet adapters, the slot number of each adapter can be determined by using the adapter location information: 1. To display the adapter location information, use the WRKHDWRSC *CMN command. Select option 7=Display resource detail for the virtual Ethernet adapter (type 268C) as shown in Figure 3-9. Type options, press Enter. 5=Work with configuration descriptions 7=Display resource detail Opt Resource Type Status Text CMB06 6B03 Operational Comm Processor LIN03 6B03 Operational Comm Adapter CMN02 6B03 Operational Comm Port CMB07 6B03 Operational Comm Processor LIN01 6B03 Operational Comm Adapter CMN03 6B03 Operational Comm Port CMB08 268C Operational Comm Processor LIN02 268C Operational LAN Adapter 7 CMN01 268C Operational Ethernet Port Figure 3-9 IBM i Work with Communication Resources panel118 IBM PowerVM Virtualization Managing and Monitoring
  • 156. The location field includes the card slot following the letter C as shown withslot 2 in Figure 3-10. Display Resource Detail System:E101F170 Resource name . . . . . . . : CMN01 Text . . . . . . . . . . . . : Ethernet Port Type-model . . . . . . . . . : 268C-002 Serial number . . . . . . . : 00-00000 Part number . . . . . . . . : Location : U9117.MMA.101F170-V5-C2-T1 Logical address: SPD bus: System bus 255 System board 128 More...Figure 3-10 IBM i Display Resource Details panelYou can use this slot number from the physical location field to trace backthrough the HMC partition properties and determine what connectivity andVLAN tags are available on each adapter. Chapter 3. Virtual network management 119
  • 157. 2. From the HMC Systems Management  Servers view, select your Power Systems server. Also select your IBM i partition and choose Properties to open the partition properties window as shown in Figure 3-11.Figure 3-11 HMC IBMi partition properties panel120 IBM PowerVM Virtualization Managing and Monitoring
  • 158. 3. Selecting the IBM i client virtual Ethernet adapter and choosing Actions  Properties shows, for our example, that the IBM i client virtual Ethernet adapter in slot 2 is on VLAN1, as shown in Figure 3-12. Figure 3-12 HMC virtual Ethernet adapter properties panel3.5 SEA threading on the Virtual I/O Server The Virtual I/O Server enables you to virtualize both disk and network traffic for AIX, IBM i, and Linux operating system-based clients. The main difference between these types of traffic is their persistence. If the Virtual I/O Server has to move network data around, it must do this immediately because network data has no persistent storage. For this reason, the network services provided by the Virtual I/O Server (such as the Shared Ethernet Adapter) run with the highest priority. Disk data for virtual SCSI devices is run at a lower priority than the network because the data is stored on the disk and there is less of a danger of losing data due to timeouts. The devices are also normally slower. The shared Ethernet process of the Virtual I/O Server prior to Version 1.3 runs at the interrupt level that was optimized for high performance. With this approach, it ran with a higher priority than the virtual SCSI if there was high network traffic. If the Virtual I/O Server did not provide enough CPU resource for both, the virtual SCSI performance could experience a degradation of service. Starting with Virtual I/O Server Version 1.3, the Shared Ethernet function is implemented using kernel threads. This enables a more even distribution of the processing power between virtual disk and network. Chapter 3. Virtual network management 121
  • 159. This threading can be turned on and off per Shared Ethernet Adapter (SEA) by changing the thread attribute and can be changed while the SEA is operating without any interruption to service. A value of 1 indicates that threading is to be used and 0 indicates the original interrupt method: $ lsdev -dev ent2 -attr thread value 0 $ chdev -dev ent2 -attr thread=1 ent2 changed $ lsdev -dev ent2 -attr thread value 1 Using threading requires a minimal increase of CPU usage for the same network throughput; but with the burst nature of network traffic, enabling threading (this is now the default) is generally a good idea. By this, we mean that network traffic will come in spikes, as users log on or as web pages load, for example. These spikes might coincide with disk access. For example, a user logs on to a system, generating a network activity spike, because during the logon process some form of password database stored on the disk will most likely be accessed or the user profile read. The one scenario where you should consider disabling threading is where you have a Virtual I/O Server dedicated for network and another dedicated for disk. This is only a good configuration when mixing extreme disk and network loads together on a CPU constricted server. Usually the network CPU requirements will be higher than those for disk. In addition, you will probably have the disk Virtual I/O Server setup to provide a network backup with SEA failover if you want to remove the other Virtual I/O Server from the configuration for scheduled maintenance. In this case, you will have both disk and network running through the same Virtual I/O Server, so use threading.3.6 Tuning network throughput This section looks at tuning the amount of data that can be transmitted through the network stack in a Power Systems server. The details in this chapter are not specific to Power Systems servers: they are standard networking protocols and concepts. In most cases in IP networking, it is desirable to transmit the largest sized network payloads possible to maximise bandwidth and reduce protocol122 IBM PowerVM Virtualization Managing and Monitoring
• 160. overheads. There are numerous places where this can be tuned. We will cover information about jumbo frames, the maximum transfer unit (MTU), and maximum segment size (MSS). We also describe the path MTU discovery changes in AIX Version 5.3 and later, and other performance variables.3.6.1 Network Layers The network layering concept is a framework for implementing network protocols. There are generally two models used: the Open Systems Interconnection (OSI) model and the TCP/IP model. This chapter references the OSI model. Although the specifics of each layer are outside the scope of this book, Table 3-2 provides a quick guide for understanding the concepts in this chapter. See network-specific publications or the Internet for more information. Table 3-2 OSI seven layer network model Layer and Name Common Protocols 7: Application layer HTTP / Telnet / SSH 6: Presentation layer SSL / MIME 5: Session layer Sockets and Remote Procedure Call (RPC) 4: Transport layer Transmission Control Protocol (TCP) 3: Network layer Internet Protocol (IP) 2: Data link layer Ethernet / Frame Relay 1: Physical layer IEEE 802.x 3.6.2 Operating system device configuration All operating systems configure network devices in a different manner. Some differentiate between layers whereas others do not. This section provides a reference for each operating system that runs on the Power platform. Chapter 3. Virtual network management 123
  • 161. AIX AIX differentiates between layer-2 and layer-3 devices as follows: en Layer-3 device for Ethernet version 2. This is the most commonly used layer-3 interface. et Layer-3 device for IEEE 802.3 Ethernet. Not used as often as the en device. ent Layer-2 device. IBM i The IBM i operating system uses a single interface for both layers. ETH Layer-2 and layer-3 device. Linux The Linux operating system uses a single interface for both layers. eth Layer-2 and layer-3 device.3.6.3 Tuning network payloads Tuning network payloads can result in significant throughput increases especially in configurations using Network Attached Storage (NAS). For this section, it is important to understand the differentiation between jumbo frames and the maximum transmission unit. They are not the same thing, however they are closely related. Often the term is used interchangeably and it can be difficult to describe one in the absence of the other. Use the following information as a reference in this chapter: Jumbo frames Refers to the Ethernet payload size. Configured at layer-2 of the OSI networking model. Generally refers to a 9000 byte payload. Maximum transfer unit The maximum size of the IP datagram. Configured at layer-3 of the OSI networking model. Maximum segment size The maximum segment refers to the size of the payload of the Transmission Control Protocol (TCP) packet. Configured at layer-4 of the OSI networking model.124 IBM PowerVM Virtualization Managing and Monitoring
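Before changing any of these values, it is useful to confirm what is currently in effect on each operating system. The following quick checks are a sketch; the device and line description names (en0, ETH02, eth0) are placeholders for your own interfaces:
AIX:    lsattr -El en0 -a mtu         (netstat -in also shows the MTU per interface)
IBM i:  DSPLIND LIND(ETH02)           (check the Maximum frame size field)
Linux:  ip link show eth0             (the mtu value appears in the output; ifconfig eth0 shows it as well)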
  • 162. Jumbo framesTraditional Ethernet specifications defined a payload of 1500 bytes. This is theamount of data that can be delivered by every Ethernet frame on the wire. Theterm jumbo frames refers to increasing the payload area of the frame beyond the1500 byte standard. Technically speaking, the term is accurate for any sizebeyond 1500 bytes. However, manufacturers have generally agreed on a 9000byte payload as a standard. Jumbo Frames on Power based systems use a 9000byte payload.In an Internet Protocol network, the IP datagram is encapsulated within thepayload area of the Ethernet frame. The maximum size of the IP datagram,including all headers, data, and padding, is called the Maximum TransmissionUnit (MTU). The MTU is discussed in “Maximum transfer unit” on page 126.Enabling Jumbo Frames on a physical Ethernet adapter under AIX requires youto modify the jumbo_frames parameter on the layer-2 (ent) device.It is important to point out that there is no attribute for jumbo frames on a virtualEthernet adapter. Traffic through a virtual Ethernet adapter is handled by thePower hypervisor through memory buffers. This virtualized implementationdoesn’t interact with traditional layer-1 mediums such as cable, and thereforemany Ethernet specific attributes such as jumbo frames have limited relevance toa virtual adapter.If a virtual Ethernet adapter needs to communicate to an external network, theShared Ethernet Adapter on the Virtual I/O Server handles the framing of thetraffic and jumbo frames needs to be enabled on the SEA.Shared Ethernet Adapter and jumbo framesThe primary purpose of the SEA is to bridge network communication betweenthe virtual I/O clients and an external network. The SEA is capable of bridgingtraffic that require jumbo frames. To do so the jumbo_frames attribute has to beenabled on both the physical adapter layer-2 device and the Shared EthernetAdapter layer-2 device.It is important to note that although the SEA can bridge jumbo frames, it cannotgenerate them from its own layer-3 interface. The layer-3 interfaces associatedwith the SEA are generally meant for administration traffic.See 3.6.4, “Payload tuning examples” on page 130 for information about how toset the jumbo frames parameters. Chapter 3. Virtual network management 125
  • 163. Important: Before you enable jumbo frames on a physical adapter or SEA, ensure the other devices in your network (or VLAN) are also configured for jumbo frames. Errors will occur if your system attempts to transmit Ethernet frames into a layer 2 network that is not configured to handle them. Maximum transfer unit The Maximum Transfer Unit (MTU) value of the Internet Protocol denotes the maximum size of an IP datagram. Ideally the value of the MTU is one that will fit within the payload area of the underlying layer-2 protocol, in this case Ethernet. If the MTU is greater than the payload size of the layer-2 protocol, the IP datagram will need to be fragmented to be delivered. As you can see, the MTU size can affect the network performance between source and target systems. The use of large MTU sizes allows the operating system to send fewer packets of a larger size to reach the same network throughput. The larger packets reduce the processing required in the operating system because each packet requires the same amount of overhead but delivers a greater payload. On the other hand, incorrectly configuring the MTU will result in fragmentation and potentially undeliverable packets. This can reduce performance significantly so care needs to be taken to ensure the correct value is chosen. Note that if the workload is only sending small messages, the larger MTU size might not result in an increase in performance, though it should not decrease performance. See 3.6.4, “Payload tuning examples” on page 130 for information about how to set the MTU. Path MTU discovery Every network link has a maximum packet size described by the MTU. The datagrams can be transferred from one system to another through many links with different MTU values. If the source and destination system have different MTU values, it can cause fragmentation or dropping of packets when the smallest MTU for the link is selected. The smallest MTU for all the links in a path is called the path MTU, and the process of determining the smallest MTU along the entire path from the source to the destination is called path MTU discovery (PMTUD). With AIX Version 5.2 or earlier, the Internet Control Message Protocol (ICMP) echo request and ICMP echo reply packets are used to discover the path MTU using IPv4. The basic procedure is simple. When one system tries to optimize its transmissions by discovering the path MTU, it sends packets of its maximum126 IBM PowerVM Virtualization Managing and Monitoring
• 164. size. If these do not fit through one of the links between the two systems, a notification from this link is sent back saying what maximum size this link will support. The notifications return an ICMP “Destination Unreachable” message to the source of the IP datagram, with a code indicating “fragmentation needed and DF set” (type 3, code 4).
When the source receives the ICMP message, it lowers the send MSS and tries again using this lower value. This is repeated until the maximum possible value for all of the link steps is found.
Possible outcomes during the path MTU discovery procedure include:
 The packet can get across all the links to the destination system without being fragmented.
 The source system can get an ICMP message from any hop along the path to the destination system, indicating that the MSS is too large and not supported by this link.
This ICMP echo request and reply procedure has a few considerations. Some system administrators do not use path MTU discovery because they believe that there is a risk of denial of service (DoS) attacks.
Also, if you already use path MTU discovery, routers or firewalls can block the ICMP messages being returned to the source system. In this case, the source system does not have any messages from the network environment and sets the default MSS value, which might not be supported across all links.
The discovered MTU value is stored in the routing table using a cloning mechanism in AIX Version 5.2 or earlier, so it cannot be used for multipath routing. This is because the cloned route is always used instead of alternating between the two multipath network routes. For this reason, you can see the discovered MTU value using the netstat -rn command.
Beginning with AIX Version 5.3, there are changes in the procedure for path MTU discovery. The ICMP echo reply and request packets are not used anymore. AIX Version 5.3 uses TCP packets and UDP datagrams rather than ICMP echo reply and request packets. In addition, the discovered MTU is not stored in the routing table. Therefore, it is possible to enable multipath routing to work with path MTU discovery.
 Chapter 3. Virtual network management 127
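Path MTU discovery on AIX is controlled through the no command. The following is a minimal sketch showing how to display and change the relevant tunables; tcp_pmtu_discover is the TCP option (it also appears in the output in Example 3-19 on page 129), udp_pmtu_discover is the equivalent option for UDP, and the values shown are only illustrative:
# no -o tcp_pmtu_discover
tcp_pmtu_discover = 1
# no -p -o tcp_pmtu_discover=1
# no -p -o udp_pmtu_discover=1
A value of 1 enables path MTU discovery and 0 disables it. The -p flag makes the change persistent across reboots.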
  • 165. When one system tries to optimize its transmissions by discovering the path MTU, a pmtu entry is created in a Path MTU (PMTU) table. You can display this table using the pmtu display command as shown in Example 3-18. To avoid the accumulation of pmtu entries, unused pmtu entries will expire and be deleted when the pmtu_expire time is exceeded. Example 3-18 Path MTU display # pmtu display dst gw If pmtu refcnt redisc_t exp ------------------------------------------------------------------------- 9.3.4.148 9.3.5.197 en0 1500 1 22 0 9.3.4.151 9.3.5.197 en0 1500 1 5 0 9.3.4.154 9.3.5.197 en0 1500 3 6 0 9.3.5.128 9.3.5.197 en0 1500 15 1 0 9.3.5.129 9.3.5.197 en0 1500 5 4 0 9.3.5.171 9.3.5.197 en0 1500 1 1 0 9.3.5.197 127.0.0.1 lo0 16896 18 2 0 192.168.0.1 9.3.4.1 en0 1500 0 1 0 192.168.128.1 9.3.4.1 en0 1500 0 25 5 9.3.5.230 9.3.5.197 en0 1500 2 4 0 9.3.5.231 9.3.5.197 en0 1500 0 6 4 127.0.0.1 127.0.0.1 lo0 16896 10 2 0 Path MTU table entry expiration is controlled by the pmtu_expire option of the no command. The pmtu_expire option is set to 10 minutes by default. For IBM i, path MTU discovery is enabled by default for negotiation of larger frame transfers. To change the IBM i path MTU discovery setting, use the CHGTCPA command. IPv6 never sends ICMPv6 packets to detect the PMTU. The first packet of a connection always starts the process. In addition, IPv6 routers are designed to never fragment packets and always return an ICMPv6 Packet too big message if they are unable to forward a packet because of a smaller outgoing MTU. Therefore, for IPv6, no changes are necessary to make PMTU discovery work with multi path routing. TCP MSS The maximum segment size (MSS) corresponds to the payload area of the TCP packet. This is the IP MTU size minus IP and TCP header information. The MSS is the largest data or payload that the TCP layer can send to the destination system. When a connection is established, each system announces an MSS value. If one system does not receive an MSS from the other system, it uses the default MSS value.128 IBM PowerVM Virtualization Managing and Monitoring
  • 166. In AIX Version 5.2 or earlier, the default MSS value was 512 bytes, but startingwith AIX Version 5.3 1460 bytes is supported as the default value.The no -a command displays the value of the default MSS as tcp_mssdflt. OnAIX you receive the information shown in Example 3-19.Example 3-19 The default MSS value in AIX 6.1# no -a |grep tcp tcp_bad_port_limit = 0 tcp_ecn = 0 tcp_ephemeral_high = 65535 tcp_ephemeral_low = 32768 tcp_finwait2 = 1200 tcp_icmpsecure = 0 tcp_init_window = 0 tcp_inpcb_hashtab_siz = 24499 tcp_keepcnt = 8 tcp_keepidle = 14400 tcp_keepinit = 150 tcp_keepintvl = 150 tcp_limited_transmit = 1 tcp_low_rto = 0 tcp_maxburst = 0 tcp_mssdflt = 1460 tcp_nagle_limit = 65535 tcp_nagleoverride = 0 tcp_ndebug = 100 tcp_newreno = 1 tcp_nodelayack = 0 tcp_pmtu_discover = 1 tcp_recvspace = 16384 tcp_sendspace = 16384 tcp_tcpsecure = 0 tcp_timewait = 1 tcp_ttl = 60 tcprexmtthresh = 3For IBM i, the default MTU size specified by default in the Ethernet linedescription’s maximum frame size parameter is 1496 bytes, which means1500 bytes for non-encapsulated TCP/IP packets.If the source network does not receive an MSS when the connection is firstestablished, the system uses the default MSS value. Most network environmentsare Ethernet, and this can support at least a 1500 byte MTU. Chapter 3. Virtual network management 129
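If the AIX default MSS needs to be changed, for example on a network segment where every system uses an MTU of 9000, the tcp_mssdflt option can be adjusted with the no command. This is only a sketch; the value of 8960 is illustrative and corresponds to an MTU of 9000 minus 20 bytes of IP header and 20 bytes of TCP header:
# no -o tcp_mssdflt
tcp_mssdflt = 1460
# no -p -o tcp_mssdflt=8960
The -p flag makes the change persistent across reboots. Because the MSS is normally announced when the connection is established, changing the default is only needed when the announced value cannot be relied upon.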
• 167. 3.6.4 Payload tuning examples
Now that we have covered the theory, let’s look at how to configure these options within the operating systems that run on Power hardware.
AIX
AIX differentiates between layer-2 and layer-3 devices. To configure your system to use an MTU of 9000 with jumbo frames on a physical adapter, you need to configure both devices. If you are using a virtual Ethernet adapter, you only need to configure the MTU of the layer-3 device to enable the larger packet size to work on inter-partition networks within the hypervisor. Remember, to send the larger sized packets outside the managed system without fragmentation, the associated SEA on the Virtual I/O Server must be configured to bridge jumbo frames.
To configure an MTU of 9000 on a layer-3 interface, use the following command:
$ chdev -l <en_device> -a mtu=9000
To configure jumbo frames on a layer-2 interface, use the following command:
$ chdev -l <ent_device> -a jumbo_frames=yes
In AIX, network settings can be configured in three places:
 Globally, using the no command.
 Per interface, using the chdev command on the specific device. This is the most common approach.
 Using the ifconfig command in the same manner as on most UNIX operating systems. However, these changes will not persist through a reboot.
IBM i
For an IBM i virtual I/O client with the default setting of the MTU size defined in the Ethernet line description, use the following procedure to increase it to MTU 9000.
1. End the TCP/IP interface for the virtual Ethernet adapter using the ENDTCPIFC command:
ENDTCPIFC INTNETADR(9.3.5.119)
2. Vary off the virtual Ethernet adapter line description using the VRYCFG command:
VRYCFG CFGOBJ(ETH01) CFGTYPE(*LIN) STATUS(*OFF)
130 IBM PowerVM Virtualization Managing and Monitoring
  • 168. 3. Change the corresponding virtual Ethernet adapter line description using the CHGLIND command: CHGLINETH LIND(ETH01) MAXFRAME(8996)4. Vary on the virtual Ethernet adapter line description again using the VRYCFG command: VRYCFG CFGOBJ(ETH01) CFGTYPE(*LIN) STATUS(*ON)5. Start the TCP/IP interface again using the STRTCPIFC command: STRTCPIFC INTNETADR(9.3.5.119)6. Verify that jumbo frames are enabled on the IBM i virtual I/O client using the WRKTCPSTS *IFC command and selecting F11=Display interface status, as shown in Figure 3-13. Internet Subnet Type of Line Opt Address Mask Service MTU Type 9.3.5.119 255.255.254.0 *NORMAL 8992 *ELAN 127.0.0.1 255.0.0.0 *NORMAL 576 *NONE Figure 3-13 IBM i Work with TCP/IP Interface Status panel Remember: Using the default setting of *ALL for the Ethernet standard parameter allows for a maximum frame size of 1496 or 8996 when protocols like SNA or TCP are required. Setting the line description Ethernet standard to *ETHV2 allows using the full Ethernet MTU size of 1500 or 9000.LinuxTo configure MTU 9000 on any interface in Linux, use the ifconfig command.This also sets the driver to use jumbo frames, although this is transparent to theadministrator.[root@Power7-2-RHEL ~]# ifconfig eth0 mtu 9000 up[root@Power7-2-RHEL ~]# ifconfig eth0eth0 Link encap:Ethernet HWaddr 6E:8D:DA:FD:46:02 inet addr:172.16.20.174 Bcast:172.16.23.255 Mask:255.255.252.0 UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:1479508 errors:0 dropped:0 overruns:0 frame:0 TX packets:386984 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 Chapter 3. Virtual network management 131
  • 169. RX bytes:323734168 (308.7 MiB) TX bytes:38172890 (36.4 MiB) Interrupt:18 Virtual I/O Server To configure a Shared Ethernet Adapter for the purpose of bridging jumbo frames, first ensure the physical adapter has been configured correctly using the chdev command as follows. $ chdev -dev <physical_adapter> -attr jumbo_frames=yes Then create the Shared Ethernet Adapter using the jumbo_frames=yes parameter. Substitute the appropriate adapter values for your system. $ mkvdev -sea ent1 -vadapter ent8 -default ent8 -defaultid 2 -attr ctl_chan=ent7 ha_mode=auto accounting=enabled jumbo_frames=yes You can also use the chdev command to modify the SEA device after creation if required: $ chdev -dev <sea_ent> -attr jumbo_frames=yes3.6.5 Payload tuning verification After you’ve configured your payloads at the various layers of the networking stack, you need to ensure the configuration works as intended by testing fragmentation. Each operating system has tools to check this configuration. The basic idea is to send a packet to a remote host and ensure the packet is not fragmented in transit. Programs generally use Internet Control Message Protocol (ICMP) packets and set the Do Not Fragment (DNF) flag to measure this. AIX and Virtual I/O Server The traceroute command on AIX and Virtual I/O Server can help determine if fragmentation is occurring along a given path. Example 3-20 shows a trace between two AIX systems though an interface configured with an MTU of 1500. Note there are no messages regarding fragmentation. Example 3-20 Example of no fragmentation using AIX # traceroute 172.16.20.172 trying to get source for 172.16.20.172 source should be 172.16.20.92 traceroute to 172.16.20.172 (172.16.20.172) from 172.16.20.92 (172.16.20.92), 30 hops max outgoing MTU = 1500 1 172.16.20.172 (172.16.20.172) 1 ms 0 ms 0 ms132 IBM PowerVM Virtualization Managing and Monitoring
• 170. The systems in the previous example are on separate managed systems. In Example 3-21 the MTU of the source system is increased to 9000. No checks were made on other network devices such as Shared Ethernet Adapters or the network switches to see if they were configured for jumbo frames. Notice the fragmentation that occurs when attempting to trace this path. In this example, the Shared Ethernet Adapters were not configured correctly, which caused this behavior.
Example 3-21 Example of fragmentation using AIX
# chdev -l en0 -a mtu=9000
en0 changed
# traceroute 172.16.20.172
trying to get source for 172.16.20.172
source should be 172.16.20.92
traceroute to 172.16.20.172 (172.16.20.172) from 172.16.20.92 (172.16.20.92), 30 hops max
outgoing MTU = 8166
 1  P7_1_AIX (172.16.20.92)  0 ms
fragmentation required, trying new MTU = 8146
 1   0 ms
fragmentation required, trying new MTU = 4464
 1   0 ms
fragmentation required, trying new MTU = 4444
 1   0 ms
fragmentation required, trying new MTU = 4352
 1   0 ms
fragmentation required, trying new MTU = 4332
 1   0 ms
fragmentation required, trying new MTU = 2048
 1   0 ms
fragmentation required, trying new MTU = 2028
 1   0 ms
fragmentation required, trying new MTU = 2002
 1   0 ms
fragmentation required, trying new MTU = 1982
 1   0 ms
fragmentation required, trying new MTU = 1536
 1   0 ms
fragmentation required, trying new MTU = 1516
 1   0 ms
fragmentation required, trying new MTU = 1500
 1  172.16.20.172 (172.16.20.172)  0 ms  0 ms  0 ms
 Chapter 3. Virtual network management 133
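When this behavior is seen, verify the jumbo frames configuration on each Virtual I/O Server in the path before retrying the trace. A quick check is to list the attribute on both the physical adapter and the SEA; the device names ent1 and ent8 are only examples:
$ lsdev -dev ent1 -attr | grep jumbo
$ lsdev -dev ent8 -attr | grep jumbo
Both devices should report jumbo_frames yes. If they do not, set the attribute with the chdev command as described in 3.6.4, “Payload tuning examples” on page 130.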
  • 171. IBM i The Trace TCP/IP Route command, TRCTCPRTE, in IBM i can help determine if fragmentation is occurring along a given path. Example 3-22 shows a trace between two systems though an interface configured with an MTU of 1500 on the source system. In the TRCTCPRTE command, we specify: PKTLEN(1500) To set the packet length we want to use. FRAGMENT(*NO) To set on the Do Not Fragment option in the IP header of the probe packet. Note that there are no messages indicating any errors in the results returned by the TRCTCPRTE command. This indicates that all components in the network can handle a packet size of 1500. Example 3-22 Example of no fragmentation using IBM i TRCTCPRTE RMTSYS(172.16.20.90) PKTLEN(1500) FRAGMENT(*NO) Probing possible routes to 172.16.20.90 using *ANY interface. 1 172.16.20.90 0.101 0.064 0.068 In Example 3-23, we attempt to send a packet that is larger than the MTU size configured on the source system. Notice the error message indicating the frame was not sent. Example 3-23 Example of exceeding MTU size on IBM i TRCTCPRTE RMTSYS(172.16.20.90) PKTLEN(9000) FRAGMENT(*NO) Probing possible routes to 172.16.20.90 using *ANY interface. *RAWSEND socket operation code 2 failed. Error number 3432, Message size out of range..134 IBM PowerVM Virtualization Managing and Monitoring
  • 172. Selecting the error message and pressing F1 reveals that it was a Send Dataerror, suggesting the frame did not leave the IBM i partition, as shown inFigure 3-14. Message ID . . . . . . : TCP3263 Severity . . . . . . . : 40 Message . . . . : *RAWSEND socket operation code 2 failed. Error number 3432, Message size out of range.. Cause . . . . . : The socket operation codes are: 1 - Create socket. 2 - Send data. 3 - Receive data. 4 - Bind socket to port 0, IP address 172.16.20.90. 5 - Listen operation. 6 - Connect to destination socket. 7 - Accept incoming connection. Recovery . . . : Correct the error and try the request again. If the problem persists, contact service. BottomFigure 3-14 Send Data errorDiagnosing fragmentation in the network using only IBM i is more difficult. Whensending packets smaller than the MTU defined in IBM i, but larger than the MTUsize configured in an external network component, the error shown inExample 3-24 occurs. However, the same error is issued for other reasons, suchas the target blocking ICMP.Example 3-24 No response from TRCTCPRTETRCTCPRTE RMTSYS(172.16.20.90) RANGE(1) PKTLEN(7000) FRAGMENT(*NO)Probing possible routes to 172.16.20.90 using *ANY interface.1 * * *2 * * *3 * * *4 * * *... omitted lines ...29 * * *30 * * *IBM i cannot determine what that real cause of the error was. To eliminate othercauses of the problem, you can try and send a small packet. It is unlikely thatsmall MTU sizes would be configured in any network. If sending a small packetsucceeds but larger packets fail, the likely cause of the failure is an MTU settingsomewhere in the network. Chapter 3. Virtual network management 135
  • 173. Linux The tracepath command traces a network path and displays the MTU value of each hop. Example 3-25 shows the tracepath command failing with an MTU of 9000 configured. The MTU is then changed to 1500 and the trace works. Example 3-25 The tracepath command on Linux [root@Power7-2-RHEL ~]# tracepath 172.16.20.90 1: 172.16.20.174 (172.16.20.174) 0.056ms pmtu 9000 1: no reply 2: no reply 3: no reply 4: no reply <interupted> [root@Power7-2-RHEL ~]# ifconfig eth0 mtu 1500 up [root@Power7-2-RHEL ~]# tracepath 172.16.20.90 1: 172.16.20.174 (172.16.20.174) 0.075ms pmtu 1500 1: 172.16.20.90 (172.16.20.90) 0.270ms reached Resume: pmtu 1500 hops 1 back 1 The ping command can also be used with the -s and -M options on Linux to determine the MTU between hosts.3.6.6 TCP checksum offload The TCP checksum offload option enables the network adapter to verify the TCP checksum when transmitting and receiving, which saves the host CPU from having to compute the checksum. This feature is used to detect a corruption of data in the packet during transmission. This option is enabled by default on virtual Ethernet adapters. If the physical adapter in a SEA is using checksum offload, ensure the setting is also set on the virtual Ethernet adapters in the SEA. It can be enabled or disabled using the attribute chksum_offload of the adapter. PCI-X Gigabit Ethernet Adapters can operate at wire speed with the option set so it is enabled by default.3.6.7 Largesend option The Gigabit or higher Ethernet adapters for IBM Power Systems support TCP segmentation offload (also called largesend). In largesend environments, TCP sends a big chunk of data to the adapter when TCP knows that the adapter supports largesend. A physical ethernet adapter breaks up this big TCP packet into multiple smaller TCP packets that fit the outgoing MTU size of the adapter, thus saving system processor load and increasing network throughput.136 IBM PowerVM Virtualization Managing and Monitoring
• 174. You can apply this TCP largesend capability to virtual Ethernet adapters and Shared Ethernet Adapters (SEA). It helps you reduce the processor utilization of the Virtual I/O Servers significantly. The TCP largesend capability is extended from the virtual I/O client all the way down to the physical adapter of the VIOS. The TCP stack on the virtual I/O client determines whether the Virtual I/O Server supports largesend. If the Virtual I/O Server supports TCP largesend, the virtual I/O client sends a big TCP packet directly to the Virtual I/O Server.
You can enable or disable the largesend option on the SEA by using the VIOS CLI chdev command. To enable it, use the -attr largesend=1 option as shown in Example 3-26. If you need to disable it, use the -attr largesend=0 option. As of Virtual I/O Server Version 2.2, the largesend option is not set on the SEA by default, and you need to enable it explicitly.
Example 3-26 Largesend option for Shared Ethernet Adapter
$ chdev -dev ent6 -attr largesend=1
ent6 changed
$ lsdev -dev ent6 -attr
attribute     value    description                                                         user_settable
accounting    disabled Enable per-client accounting of network statistics                  True
ctl_chan      ent5     Control Channel adapter for SEA failover                            True
gvrp          no       Enable GARP VLAN Registration Protocol (GVRP)                       True
ha_mode       auto     High Availability Mode                                              True
jumbo_frames  no       Enable Gigabit Ethernet Jumbo Frames                                True
large_receive no       Enable receive TCP segment aggregation                              True
largesend     1        Enable Hardware Transmit TCP Resegmentation                         True
lldpsvc       no       Enable IEEE 802.1qbg services                                       True
netaddr       0        Address to ping                                                     True
pvid          10       PVID to use for the SEA device                                      True
pvid_adapter  ent4     Default virtual adapter to use for non-VLAN-tagged packets         True
qos_mode      disabled N/A                                                                 True
real_adapter  ent1     Physical adapter associated with the SEA                            True
thread        1        Thread mode enabled (1) or disabled (0)                             True
virt_adapters ent4     List of virtual adapters associated with the SEA (comma separated)  True
 Chapter 3. Virtual network management 137
• 175. The physical adapter in the SEA must also be enabled for TCP largesend so that the segmentation offload from the virtual I/O client to the SEA works. The large_send option of a physical Ethernet adapter is enabled by default. You can check the status of a physical adapter by using the lsdev command on a VIOS, and change the attribute by using the chdev command.
$ lsdev -dev ent1 -attr | grep large
large_send yes Enable hardware TX TCP resegmentation True
To enable the largesend option for virtual Ethernet adapters on a virtual I/O client, use the ifconfig command on the interface (not on the adapter), as follows:
# ifconfig en0 largesend
To check whether the largesend option is enabled on the interface, use the following command:
# ifconfig en0
en0: flags=1e080863,4c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
        inet 172.16.21.104 netmask 0xfffffc00 broadcast 172.16.23.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
It can be disabled using the following command:
# ifconfig en0 -largesend
Note: After the operating system is restarted, the largesend option that was enabled on an interface with the ifconfig command on the virtual I/O client is reset. If you need the largesend option on a virtual Ethernet adapter to be enabled automatically when the operating system starts, add an ifconfig command entry to an initialization file such as /etc/rc.net.
3.7 Shared Ethernet Adapter failover with Load Sharing
The Virtual I/O Server Version 2.2.1.0, or later, provides a load sharing function that enables use of the bandwidth of the backup Shared Ethernet Adapter (SEA). The traditional SEA failover configuration provides redundancy by configuring a primary and backup SEA pair on the Virtual I/O Servers (VIOS). The backup SEA is in standby mode, and is used when the primary SEA fails. The bandwidth of the backup SEA is not used in normal operation.
138 IBM PowerVM Virtualization Managing and Monitoring
  • 176. Figure 3-15 shows a basic SEA failover configuration. All network packets of all Virtual I/O clients are bridged by the primary VIOS. Power System LPAR1 LPAR2 LPAR3 Virtual Virtual Virtual Ethernet Ethernet Ethernet VID = 10 VID = 20 VID = 30 POWER Hypervisor Trunk Adapter A Trunk Adapter B Trunk Adapter C Trunk Adapter D VID = 10, 20 VID = 30 VID = 10, 20 VID = 30 VIOS1 Priority = 1 Priority = 1 Priority = 2 Priority = 2 VIOS2 (Primary) Control Control (Backup) Channel Channel Shared Ethernet Adapter (SEA) Shared Ethernet Adapter (SEA) Priority = 1 Priority = 2 Physical Physical Ethernet Ethernet Adapter Adapter Ethernet Network Inactive Trunk AdapterFigure 3-15 SEA failover Primary-Backup configuration If you need more detailed information for the SEA failover concepts and how to configure SEA failover environment, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. Chapter 3. Virtual network management 139
  • 177. On the other hand, SEA failover with Load Sharing makes effective use of the backup SEA bandwidth, as shown in Figure 3-16. In this example, network packets of LPAR1 and LPAR2 are bridged by VIOS2, and LPAR3 is bridged by VIOS1. Power System LPAR1 LPAR2 LPAR3 Virtual Virtual Virtual Ethernet Ethernet Ethernet VID = 10 VID = 20 VID = 30 POWER Hypervisor Trunk Adapter A Trunk Adapter B Trunk Adapter C Trunk Adapter D VID = 10, 20 VID = 30 VID = 10, 20 VID = 30 VIOS1 Priority = 1 Priority = 1 Priority = 2 Priority = 2 VIOS2 (Primary) Control Control (Backup) Channel Channel Shared Ethernet Adapter (SEA) Shared Ethernet Adapter (SEA) Priority = 1 Priority = 2 Physical Physical Ethernet Ethernet Adapter Adapter Ethernet Network Inactive Trunk AdapterFigure 3-16 SEA failover with Load Sharing Prerequisites and requirements for SEA failover with Load Sharing are as follows:  Both primary and backup Virtual I/O Servers are at Version 2.2.1.0, or later.  Two or more trunk adapters are configured for the primary and backup SEA pair.  Load Sharing mode must be enabled on both primary and backup SEA pair.  The virtual local area network (VLAN) definitions of the trunk adapters are identical between the primary and backup SEA pair. Important: You need to set the same priority to all trunk adapters under one SEA. The primary and backup priority definitions are set at the SEA level, not at trunk adapters level.140 IBM PowerVM Virtualization Managing and Monitoring
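The Virtual I/O Server level prerequisite can be checked quickly with the ioslevel command on each Virtual I/O Server; the output shown here is only an example:
$ ioslevel
2.2.1.1
Any level of 2.2.1.0 or later supports load sharing mode.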
• 178. You can check these prerequisites and requirements against the sample SEA failover with Load Sharing configuration shown in Figure 3-16 on page 140. Both VIOS1 and VIOS2 should be at Version 2.2.1.0, or later. Two trunk adapters, Adapter A and B, are configured on the primary SEA on VIOS1, and Adapter C and D are configured on the backup SEA on VIOS2. All of the VLAN definitions of the trunk adapters match. The primary SEA on VIOS1 has Adapter A with VLANs 10 and 20, and the backup SEA on VIOS2 has Adapter C with VLANs 10 and 20. The same applies to Adapters B and D.
Configuring SEA failover with Load Sharing mode is the same as configuring SEA in failover mode. You set ha_mode to sharing instead of auto when you create the SEA (Example 3-27).
Example 3-27 Creating an SEA (ent7) with Load Sharing mode
$ mkvdev -sea ent1 -vadapter ent4,ent5 -default ent4 -defaultid 10 -attr ha_mode=sharing ctl_chan=ent6
ent7 Available
$ lsdev -dev ent7 -attr
attribute     value     description                                                         user_settable
accounting    disabled  Enable per-client accounting of network statistics                  True
ctl_chan      ent6      Control Channel adapter for SEA failover                            True
gvrp          no        Enable GARP VLAN Registration Protocol (GVRP)                       True
ha_mode       sharing   High Availability Mode                                              True
jumbo_frames  no        Enable Gigabit Ethernet Jumbo Frames                                True
large_receive no        Enable receive TCP segment aggregation                              True
largesend     1         Enable Hardware Transmit TCP Resegmentation                         True
lldpsvc       no        Enable IEEE 802.1qbg services                                       True
netaddr       0         Address to ping                                                     True
pvid          10        PVID to use for the SEA device                                      True
pvid_adapter  ent4      Default virtual adapter to use for non-VLAN-tagged packets         True
qos_mode      disabled  N/A                                                                 True
real_adapter  ent1      Physical adapter associated with the SEA                            True
thread        1         Thread mode enabled (1) or disabled (0)                             True
virt_adapters ent4,ent5 List of virtual adapters associated with the SEA (comma separated)  True
If you have already configured an SEA failover environment in failover mode, you can change the ha_mode attribute from auto to sharing dynamically by using the chdev command. You can also add a new trunk adapter to the existing SEA if needed, as shown in Example 3-28 on page 142. If you need to disable load sharing while it is running, set ha_mode to any value other than sharing, for example standby, auto, or disable.
 Chapter 3. Virtual network management 141
  • 179. Important: To create or enable the SEA failover with Load Sharing, you have to enable the load sharing mode on the primary SEA first before enabling load sharing mode on the backup SEA. To change the ha_mode from sharing to auto, disable the load sharing mode, and set ha_mode to auto on the primary SEA first. Then set it on the backup to minimize the chance of a broadcast storm of the SEA.Example 3-28 Adding a trunk adapter and changing SEA (ent6) failover modeAdding a trunk adaper to the SEA. ent6: SEA ent4: trunk adapter which is already part of the SEA ent7: new trunk adapter adding to the SEA$ chdev -dev ent6 -attr virt_adapters=ent4,ent7ent6 changedChanging the SEA to Load Sharing mode.$ chdev -dev ent6 -attr ha_mode=sharingent6 changed$ lsdev -dev ent6 -attrattribute value description user_settableaccounting disabled Enable per-client accounting of network statistics Truectl_chan ent5 Control Channel adapter for SEA failover Truegvrp no Enable GARP VLAN Registration Protocol (GVRP) Trueha_mode sharing High Availability Mode Truejumbo_frames no Enable Gigabit Ethernet Jumbo Frames Truelarge_receive no Enable receive TCP segment aggregation Truelargesend 1 Enable Hardware Transmit TCP Resegmentation Truelldpsvc no Enable IEEE 802.1qbg services Truenetaddr 0 Address to ping Truepvid 10 PVID to use for the SEA device Truepvid_adapter ent4 Default virtual adapter to use for non-VLAN-tagged packets Trueqos_mode disabled N/A Truereal_adapter ent1 Physical adapter associated with the SEA Truethread 1 Thread mode enabled (1) or disabled (0) Truevirt_adapters ent4,ent7 List of virtual adapters associated with the SEA (comma separated) True142 IBM PowerVM Virtualization Managing and Monitoring
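Following the ordering rule in the Important box, reverting from load sharing to the standard failover behavior can also be done dynamically. A minimal sketch, assuming the SEA is named ent6 on both Virtual I/O Servers (adapt the device names to your configuration):
On the primary Virtual I/O Server:
$ chdev -dev ent6 -attr ha_mode=auto
ent6 changed
Then on the backup Virtual I/O Server:
$ chdev -dev ent6 -attr ha_mode=auto
ent6 changed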
  • 180. The entstat command provides detailed information for the current SEA statussuch as State, Trunk Adapter Priority, and VLAN IDs. The output of entstatconsists of some statistics for physical and virtual adapters in the SEA, as shownin Example 3-29.Example 3-29 Statistics for adapters in the Shared Ethernet Adapter$ entstat -all ent6-------------------------------------------------------------ETHERNET STATISTICS (ent6) :Device Type: Shared Ethernet AdapterHardware Address: 00:1a:64:bb:69:49Elapsed Time: 0 days 10 hours 44 minutes 30 secondsTransmit Statistics: Receive Statistics:-------------------- -------------------...--------------------------------------------------------------Statistics for adapters in the Shared Ethernet Adapter ent6--------------------------------------------------------------...VLAN Ids : ent4: 10 ent7: 30 130Real Side Statistics: Packets received: 34275...Type of Packets Received: ... Limbo Packets: 0 State: PRIMARY_SH Bridge Mode: Partial VID shared: 10 Number of Times Server became Backup: 0 Number of Times Server became Primary: 1 High Availability Mode: Sharing Priority: 1--------------------------------------------------------------Real Adapter: ent1ETHERNET STATISTICS (ent1) :Device Type: 4-Port 10/100/1000 Base-TX PCI-X Adapter (14101103)...--------------------------------------------------------------Virtual Adapter: ent4ETHERNET STATISTICS (ent4) :Device Type: Virtual I/O Ethernet Adapter (l-lan)... Chapter 3. Virtual network management 143
  • 181. Virtual I/O Ethernet Adapter (l-lan) Specific Statistics: --------------------------------------------------------- RQ Length: 4481 Trunk Adapter: True Priority: 1 Active: True Filter MCast Mode: False ... Port VLAN ID: 10 VLAN Tag IDs: None ... -------------------------------------------------------------- Virtual Adapter: ent7 ETHERNET STATISTICS (ent7) : Device Type: Virtual I/O Ethernet Adapter (l-lan) ... Virtual I/O Ethernet Adapter (l-lan) Specific Statistics: --------------------------------------------------------- RQ Length: 4481 Trunk Adapter: True Priority: 1 Active: False Filter MCast Mode: False ... Port VLAN ID: 130 VLAN Tag IDs: 30 ... -------------------------------------------------------------- Control Channel Adapter: ent5 ... This example shows that the SEA consists of one physical adapter, two trunk adapters with VLAN ID 10 and 30, and one control channel adapter. The details are as follows: State: PRIMARY_SH The “_SH“ means that the SEA is running in load sharing mode. You can also see the status as High Availability Mode: Sharing. Priority:1 Active: True This shows that the trunk adapter is configured as a part of the primary SEA and the adapter is activated. Priority:1 Active: False This shows that the trunk adapter is configured as a part of the primary SEA and the adapter is deactivated.144 IBM PowerVM Virtualization Managing and Monitoring
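Because the full entstat output is long, it can be convenient to filter it down to the fields discussed above. The following one-liner is only a convenience sketch; the device name ent6 matches the previous example, and the Priority and Active fields are reported once for each trunk adapter:
$ entstat -all ent6 | grep -E "State:|High Availability Mode|Priority:|Active:"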
  • 182. Tip: The load sharing algorithm automatically determines which trunk adapters will be activated and will treat network packets for VLANs in the SEA pair. You can not specify the active trunk adapters of the SEAs manually in the load sharing mode.3.8 Quality of Service The Shared Ethernet Adapter is capable of enforcing Quality of Service (QoS), based on the IEEE 802.1q standard. This section explains how QoS works for SEA and how it can be configured. SEA QoS provides a means whereby the VLAN tagged egress traffic is prioritized among 7 priority queues. However, note that QoS only comes into play when contention is present. As explained in 3.5, “SEA threading on the Virtual I/O Server” on page 121, each SEA instance has certain threads (currently 7) for multiprocessing. Each thread will have 9 queues to take care of network jobs. Each queue will take care of jobs at a different priority level. One queue is kept aside that is used when QoS is disabled. Important: QoS works only for tagged packets; that is, all packets emanating from the VLAN pseudo device of the virtual I/O client. Therefore, because virtual Ethernet does not tag packets, its network traffic cannot be prioritized. The packets will be placed in queue 0, which is the default queue at priority level 1. Each thread will independently follow the same algorithm to determine from which queue to send a packet. A thread will sleep when there are no packets on any of the 9 queues. Note the following points: If QoS is enabled, SEA will check the priority value of all tagged packets and put that packet in the corresponding queue. If QoS is not enabled, then regardless of whether the packet is tagged or untagged, SEA will ignore the priority value and place all packets in the disabled queue. This will ensure that the packets being enqueued while QoS is disabled will not be sent out of order when QoS is enabled. Chapter 3. Virtual network management 145
• 183. When QoS is enabled, there are two algorithms to schedule jobs: strict mode and loose mode.
3.8.1 Strict mode
In strict mode, all packets from higher priority queues will be sent before any from a lower priority queue. The SEA will examine the highest priority queue for any packets to send out. If there are any packets to send, the SEA will send that packet. If there are no packets to send in a higher priority queue, the SEA will then check the next highest priority queue for any packets to send out, and so on. After sending out a packet from the highest priority queue with packets, the SEA will start the algorithm over again. This allows high priorities to always be serviced before those of the lower priority queues.
3.8.2 Loose mode
It is possible, in strict mode, that lower priority packets will never be serviced if there are always higher priorities. To address this issue, the loose mode algorithm was devised. With loose mode, if the number of bytes allowed has already been sent out from one priority queue, then the SEA will check all lower priorities at least once for packets to send before sending out packets from the higher priority again.
When initially sending out packets, the SEA will check its highest priority queue. It will continue to send packets out from the highest priority queue until either the queue is empty or the cap is reached. After either of those two conditions has been met, the SEA will then move on to service the next priority queue. It will continue using the same algorithm until either of the two conditions has been met in that queue. At that point, it moves on to the next priority queue. On a fully saturated network, this allocates certain percentages of bandwidth to each priority.
The caps for each priority are distinct and non-configurable. A cap is placed on each priority level so that after a number of bytes is sent for each priority level, the following level is serviced. This method ensures that all packets are eventually sent. More important traffic is given less bandwidth with this mode than with strict mode. However, the caps in loose mode are such that more bytes are sent for the more important traffic, so it still gets more bandwidth than less important traffic.
Set loose mode using this command:
chdev -dev <sea_device> -attr qos_mode=loose
146 IBM PowerVM Virtualization Managing and Monitoring
  • 184. The cap for each priority level is shown in Table 3-3. Table 3-3 Cap values for loose mode Priority Cap in KB 1 256 2 128 0 64 3 32 4 16 5 8 6 4 7 23.8.3 Setting up QoS QoS for SEA can be configured by using the chdev command. The attribute to be configured is qos_mode, and its value can be disabled, loose, or strict. In Example 3-30, ent5 is an SEA and it has been enabled for loose mode QoS monitoring.Example 3-30 Configuring QoS for an SEA# lsattr -El ent5accounting enabled Enable per-client accounting of network statistics Truectl_chan ent3 Control Channel adapter for SEA failover Truegvrp no Enable GARP VLAN Registration Protocol (GVRP) Trueha_mode auto High Availability Mode Truejumbo_frames no Enable Gigabit Ethernet Jumbo Frames Truelarge_receive no Enable receive TCP segment aggregation Truelargesend 0 Enable Hardware Transmit TCP Resegmentation Truenetaddr 0 Address to ping Truepvid 1 PVID to use for the SEA device Truepvid_adapter ent2 Default virtual adapter to use for non-VLAN-tagged packets Trueqos_mode disabled N/A Truereal_adapter ent0 Physical adapter associated with the SEA Truethread 1 Thread mode enabled (1) or disabled (0) Truevirt_adapters ent2 List of virtual adapters associated with the SEA (comma separated) True# chdev -l ent5 -a qos_mode=looseent5 changed Chapter 3. Virtual network management 147
  • 185. # lsattr -El ent5accounting enabled Enable per-client accounting of network statistics Truectl_chan ent3 Control Channel adapter for SEA failover Truegvrp no Enable GARP VLAN Registration Protocol (GVRP) Trueha_mode auto High Availability Mode Truejumbo_frames no Enable Gigabit Ethernet Jumbo Frames Truelarge_receive no Enable receive TCP segment aggregation Truelargesend 0 Enable Hardware Transmit TCP Resegmentation Truenetaddr 0 Address to ping Truepvid 1 PVID to use for the SEA device Truepvid_adapter ent2 Default virtual adapter to use for non-VLAN-tagged packets Trueqos_mode loose N/A Truereal_adapter ent0 Physical adapter associated with the SEA Truethread 1 Thread mode enabled (1) or disabled (0) Truevirt_adapters ent2 List of virtual adapters associated with the SEA (comma separated) True Next, you can set the priority for the existing VLAN device by using smitty vlan and selecting the desired VLAN device. You will see a panel like Example 3-31, where you can set the VLAN priority level. Example 3-31 Configuring VLAN for an existing VLAN device Change / Show Characteristics of a VLAN Type or select values in entry fields. Press Enter AFTER making all desired changes. [Entry Fields] VLAN Name ent1 VLAN Base Adapter [ent0] + VLAN Tag ID [20] +# VLAN Priority [0] +# F1=Help F2=Refresh F3=Cancel F4=List F5=Reset F6=Command F7=Edit F8=Image F9=Shell F10=Exit Enter=Do148 IBM PowerVM Virtualization Managing and Monitoring
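The same priority can also be set from the command line instead of SMIT by changing the VLAN device with the chdev command. This is a sketch only; the attribute name vlan_priority is assumed here, so verify it on your AIX level with lsattr -El on the VLAN device before relying on it, and add the -P flag if the device is currently in use so that the change is applied at the next restart:
# chdev -l ent1 -a vlan_priority=2
ent1 changed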
  • 186. 3.8.4 General rules for setting modes for QoS The following general rules apply to setting modes for QoS. Use strict mode for the following conditions: When maintaining priority is more important than preventing “starvation”. When the network administrator has a thorough understanding of the network traffic. When the network administrator understands the possibility of overhead and bandwidth starvation, and knows how to prevent this from occurring. Use loose mode for the following condition: When preventing starvation is more important than maintaining priority.3.9 Denial of Service hardening A Denial of Service (DoS) attack targets a machine and makes it unavailable. The target machine is bombarded with fake network communication requests for a service (like ftp, telnet, and so on) that cause it to allocate resources for each request. For every request, a port is held busy and process resources are allocated. Eventually, the target machine is exhausted of all its resources and becomes unresponsive. Like other operating systems, VIOS/AIX was vulnerable to DoS attacks. To corporations, network security is paramount and it was unacceptable to have servers bogged down by DoS attacks. Tip: Users can set DoS hardening rules on default ports by using the viosecure -level high command. See 4.1.4, “Security hardening rules” on page 156 for more details.3.9.1 Solution One solution, adopted from IBM z/OS®, is to limit the total number of active connections an application has at one time. This puts a restriction on the number of address spaces created by forking applications such as ftpd, telnetd, and so on. A fair share algorithm is also provided, based on the percentage of remaining available connections already held by a source IP address. A fair share algorithm will enforce TCP traffic regulations policies. Chapter 3. Virtual network management 149
  • 187. To utilize network traffic regulation, you need to enable it first. Example 3-32 shows how to enable network traffic regulation. Example 3-32 Enabling network traffic regulation # no -p -p tcptr_enable=1 # no -a |grep tcptr tcptr_enable = 1 The tcptr command can be used to display the current policy for various services, and modify it. For the Virtual I/O Server, you need to execute it from the root shell. It has the following syntax: tcptr -add <start_port> <end_port> <max> <div> tcptr -delete <start_port> <end_port> tcptr -show Where: <start_port> This is the starting TCP port for this policy. <end_port> This is the ending TCP port for this policy. <max> This is the maximum pool of connections for this policy. <div> This is the divisor (<32) governing the available pool. Example 3-33 shows how to regulate network traffic for port 25 (sendmail service). Example 3-33 Using tcptr for network traffic regulation for sendmail service # tcptr -show policy: Error failed to allocate memory (1) root @ core13: 6.1.2.0 (0841A_61D) : / # tcptr -add 25 25 1000 StartPort=25 EndPort=25 MaxPool=1000 Div=0 (0) root @ core13: 6.1.2.0 (0841A_61D) : / # tcptr -show TCP Traffic Regulation Policies: StartPort=25 EndPort=25 MaxPool=1000 Div=0 Used=0150 IBM PowerVM Virtualization Managing and Monitoring
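A policy can be removed again with the -delete form of the command. A brief sketch, removing the sendmail policy created above and confirming the result (run from the root shell, as before):
# tcptr -delete 25 25
# tcptr -show
After the delete, the port 25 policy is no longer listed.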
  • 188. 4 Chapter 4. Virtual I/O Server security This chapter describes how to harden Virtual I/O Server security using the viosecure command provided in Version 1.3 and later. This chapter includes the following sections: Network security The Virtual I/O Server as an LDAP client Network Time Protocol configuration Setting up Kerberos on the Virtual I/O Server Managing users Role-based access control© Copyright IBM Corp. 2012. All rights reserved. 151
  • 189. 4.1 Network security If your Virtual I/O Server has an IP address assigned after installation, certain network services are running and open by default. The services in the listening open state are listed in Table 4-1. Table 4-1 Default open ports on Virtual I/O Server Port number Service Purpose 21 FTP Unencrypted file transfer 22 SSH Secure shell and file transfer 23 Telnet Unencrypted remote login 111 rpcbind NFS connection 657 RMC RMC connections (used for dynamic LPAR operations) In most cases the secure shell (SSH) service for remote login and the secure copy (SCP) for copying files should be sufficient for login and file transfer. Telnet and FTP are not using encrypted communication and can be disabled. Port 657 for RMC must be left open if you are considering using dynamic LPAR operations. This port is used for the communication between the logical partition and the Hardware Management Console.4.1.1 Stopping network services To stop Telnet and FTP and prevent them from starting automatically after reboot, use the stopnetsvc command as shown in Example 4-1. Example 4-1 Stopping network services $ stopnetsvc telnet 0513-127 The telnet subserver was stopped successfully. $ stopnetsvc ftp 0513-127 The ftp subserver was stopped successfully.4.1.2 Setting up the firewall The Virtual I/O Server firewall is not enabled by default. To enable the Virtual I/O Server firewall with the default configuration that enables the services wbem-https, wbem-http, wbem-rmi, rmc, https, http,152 IBM PowerVM Virtualization Managing and Monitoring
  • 190. domain, ssh, ftp and ftp-data, you can use the viosecure -firewall on -reloadcommand as shown in Example 4-2.Example 4-2 Using the viosecure command$ viosecure -firewall on -reload Tip: Default rules are loaded from the /home/ios/security/viosecure.ctl file.To display the current rules, use the viosecure -firewall view command asshown in Example 4-3.Example 4-3 Displaying the current rules$ viosecure -firewall viewFirewall ON ALLOWED PORTS Local RemoteInterface Port Port Service IPAddress ExpirationTime(seconds)--------- ---- ---- ------- --------- ---------------all 5989 any wbem-https 0.0.0.0 0all 5988 any wbem-http 0.0.0.0 0all 5987 any wbem-rmi 0.0.0.0 0all any 657 rmc 0.0.0.0 0all 657 any rmc 0.0.0.0 0all 443 any https 0.0.0.0 0all any 427 svrloc 0.0.0.0 0all 427 any svrloc 0.0.0.0 0all 80 any http 0.0.0.0 0all any 53 domain 0.0.0.0 0all 22 any ssh 0.0.0.0 0all 21 any ftp 0.0.0.0 0all 20 any ftp-data 0.0.0.0 0 Chapter 4. Virtual I/O Server security 153
  • 191. A common approach to designing a firewall or IP filter is to determine ports that are necessary for operation, to determine sources from which those ports will be accessed, and to close everything else. Assume we have hosts on our network as listed in Table 4-2. Table 4-2 Hosts in the network Host IP Address Comment VIO Server 172.16.20.171 Hardware Management 172.16.20.111 For dynamic LPAR and for monitoring, Console RMC communication should be allowed to VIOS. NIM Server, Management 172.16.20.41 For administration, SSH server communication should be allowed to VIOS. Administrators workstation 172.16.254.38 SSH communication can be allowed from the administrator’s workstation, but it is better use a “jump” to the management server. Therefore, our firewall would consist of the following rules: 1. Allow RMC from the Hardware Management console. 2. Allow SSH from NIM or the administrator’s workstation. 3. Deny anything else. To deploy this scenario, we issue the viosecure -firewall command to remove all existing default rules and apply new rules. Tip: You can also set up a firewall from the configuration menu accessed by the cfgassist command. 1. Turn off the firewall first so you do not accidentally lock yourself out: $ viosecure -firewall off 2. Remove any existing allow rules as shown in Example 4-4. Example 4-4 Removing the rules $ viosecure -firewall deny -port 0 The port for the allow rule was not found in the database154 IBM PowerVM Virtualization Managing and Monitoring
  • 192. 3. Now set your allow rules. They are going to be inserted before the deny all rule and be matched first. Change the IP addresses used in the example to match your network. $ viosecure -firewall allow -port 657 -address 172.16.20.111 $ viosecure -firewall allow -port 22 -address 172.16.20.41 $ viosecure -firewall allow -port 657 -address 172.16.254.38 4. Check your rules. Your output should look like Example 4-5. Example 4-5 Checking the rules $ viosecure -firewall view Firewall OFF ALLOWED PORTS Local Remote Interface Port Port Service IPAddress Expiration Time(seconds) --------- ---- ---- ------- --------- --------------- all 22 any ssh 172.16.254.38 0 all 22 any ssh 172.16.20.41 0 all 657 any rmc 172.16.20.111 0 5. Turn on your firewall and test connections: $ viosecure -firewall on Important: Lockout can occur if you restrict (that is, if you add a deny rule for) the protocol through which you are connected to the machine. To avoid lockout, configure the firewall using the virtual terminal connection, not the network connection. Our rule set allows the desired network traffic only and blocks any other requests. The rules set with the viosecure command only apply to inbound traffic. However, this setup will also block any ICMP requests, thus making it impossible to ping the Virtual I/O Server or to get any ping responses. This might be an issue if you are using the ping command to determine Shared Ethernet Adapter (SEA) failover or for EtherChannel.4.1.3 Enabling ping through the firewall As described in 4.1.2, “Setting up the firewall” on page 152, our sample firewall setup also blocks all incoming ICMP requests. If you need to enable ICMP for a Shared Ethernet Adapter configuration or Monitoring, use the oem_setup_env command and root access to define ICMP rules. Chapter 4. Virtual I/O Server security 155
  • 193. We can create additional ICMP rules that will allow pings by using two commands: /usr/sbin/genfilt -v 4 -a P -s 0.0.0.0 -m 0.0.0.0 -d 0.0.0.0 -M 0.0.0.0 -g n -c icmp -o eq -p 0 -O any -P 0 -r L -w I -l N -t 0 -i all -D echo_reply and: /usr/sbin/genfilt -v 4 -a P -s 0.0.0.0 -m 0.0.0.0 -d 0.0.0.0 -M 0.0.0.0 -g n -c icmp -o eq -p 8 -O any -P 0 -r L -w I -l N -t 0 -i all -D echo_request4.1.4 Security hardening rules The viosecure command can also be used to configure security hardening rules. Users can enforce either the preconfigured security levels or choose to customize them, based on their requirements. Currently preconfigured rules are high, medium, and low. Each rule has a number of security policies that can be enforced as shown in the following command: $ viosecure -level low -apply Processedrules=44 Passedrules=42 Failedrules=2 Level=AllRules Input file=/home/ios/security/viosecure.xml Alternatively, users can choose the policies they want as shown in Example 4-6 (the command has been truncated because of its length).Example 4-6 High level firewall settings$ viosecure -level high1. hls_ISSServerSensorLite:Enable RealSecure Server Sensor Lite: Enables high level policies for RealSecure Server SensorLite2. hls_ISSServerSensorFull:Enable RealSecure Server Sensor Full: Enables high level policies for RealSecure Server SensorFull3. hls_tcptr:TCP Traffic Regulation High: Enforces denial-of-service mitigation on popular ports.4. hls_rootpwdintchk:Root Password Integrity Check: Makes sure that the root password being set is not weak5. hls_sedconfig:Enable SED feature: Enable Stack Execution Disable feature6. hls_removeguest:Remove guest account: Removes guest account and its files7. hls_chetcftpusers:Add root user in /etc/ftpusers file: Adds root username in /etc/ftpusers file8. hls_xhost:Disable X-Server access: Disable access control for X-Server9. hls_rmdotfrmpathnroot:Remove dot from non-root path: Removes dot from PATH environment variable from files .profile,.kshrc, .cshrc and .login in users home directory10. hls_rmdotfrmpathroot:Remove dot from path root: Remove dot from PATH environment variable from files .profile, .kshrc,.cshrc and .login in roots home directory? 1,211. hls_loginherald:Set login herald: Set login herald in default stanza12. hls_crontabperm:Crontab permissions: Ensures roots crontab jobs are owned and writable only by root13. hls_limitsysacc:Limit system access: Makes root the only user in cron.allow file and removes the cron.deny file14. hls_core:Set core file size: Specifies the core file size to 0 for root15. hls_umask:Object creation permissions: Specifies default object creation permissions to 07716. hls_ipsecshunports:Guard host against port scans: Shuns vulnerable ports for 5 minutes to guard the host against portscans17. hls_ipsecshunhost:Shun host for 5 minutes: Shuns the hosts for 5 minutes, which tries to access un-used ports156 IBM PowerVM Virtualization Managing and Monitoring
• 194. 18. hls_sockthresh:Network option sockthresh: Set network option sockthresh's value to 60
19. hls_tcp_tcpsecure:Network option tcp_tcpsecure: Set network option tcp_tcpsecure's value to 7
20. hls_sb_max:Network option sb_max: Set network option sb_max's value to 1MB
To view the current security rules, use the viosecure -view command. To undo all security policies, use the viosecure -undo command.
4.1.5 DoS hardening
To overcome Denial of Service attacks, a new feature was implemented in a previous release of Virtual I/O Server. For more information about this topic, see 3.9, “Denial of Service hardening” on page 149.
4.2 The Virtual I/O Server as an LDAP client
The Lightweight Directory Access Protocol defines a standard method for accessing and updating information about a directory (a database) either locally or remotely in a client-server model. The LDAP method is used by a cluster of hosts to allow centralized security authentication and access to user and group information.
Virtual I/O Server Version 1.4 introduced LDAP authentication for the Virtual I/O Server's users, and Version 1.5 of the Virtual I/O Server added support for secure LDAP authentication using a secure sockets layer (SSL). LDAP is packaged on the Virtual I/O Server Expansion Pack media.
The steps necessary to create an SSL certificate, set up a server, and then configure the Virtual I/O Server as a client are described in the following sections.
4.2.1 Creating a key database file
All the steps described here assume that an IBM Tivoli Directory Server and the GSKit file sets are installed on one server in the environment. More information about the IBM Tivoli Directory Server can be found at:
http://www.ibm.com/software/tivoli/products/directory-server/
 Chapter 4. Virtual I/O Server security 157
  • 195. To create the key database file and certificate (self-signed for simplicity in this example), follow these steps: 1. Ensure that the GSKit and gsk7ikm are installed on the LDAP server as follows: # lslpp -l |grep gsk gskjs.rte 7.0.3.30 COMMITTED AIX Certificate and SSL Java gsksa.rte 7.0.3.30 COMMITTED AIX Certificate and SSL Base gskta.rte 7.0.3.30 COMMITTED AIX Certificate and SSL Base 2. Start the gsk7ikm utility with X Window. This is located in /usr/bin/gsk7ikm, which is a symbolic link to /usr/opt/ibm/gskta/bin/gsk7ikm. A window like the one shown in Figure 4-1 will appear.Figure 4-1 The ikeyman program initial window158 IBM PowerVM Virtualization Managing and Monitoring
  • 196. 3. Click Key Database File  New. A window similar to the one in Figure 4-2 will appear. Figure 4-2 Create new key database window4. On the same window, change the Key database type to CMS, change the File Name (to ldap_server.kdb, in this example), and set the Location to a directory where the keys can be stored (/etc/ldap, in this example). The final window will be similar to Figure 4-3. Figure 4-3 Creating the ldap_server key5. Click OK.6. A new window will appear. Enter the key database file password, and confirm it. Remember this password because it is required when the database file is edited. In this example the key database password was set to passw0rd.7. Accept the default expiration time. Chapter 4. Virtual I/O Server security 159
  • 197. 8. If you want the password to be masked and stored in a stash file, select Stash the password to a file. A stash file can be used by certain applications so that the application does not have to know the password to use the key database file. The stash file has the same location and name as the key database file and has an extension of *.sth. The panel should be similar to the one shown in Figure 4-4. Figure 4-4 Setting the key database password160 IBM PowerVM Virtualization Managing and Monitoring
  • 198. 9. Click OK. This completes the creation of the key database file. There is a set of default signer certificates. These are the default certificate authorities that are recognized. This is shown in Figure 4-5.Figure 4-5 Default certificate authorities available on the ikeyman program Chapter 4. Virtual I/O Server security 161
  • 199. 10.At this time, the key could be exported and sent to a certificate authority to be validated and then used. In this example, for simplicity reasons, the key is signed using a self-signed certificate. To create a self-signed certificate, click Create  New Self-Signed Certificate. A window similar to the one in Figure 4-6 will appear. Figure 4-6 Creating a self-signed certificate initial panel 11.Type a name in the Key Label field that GSKit can use to identify this new certificate in the key database. In this example the key is labeled ldap_server. 12.Accept the defaults for the Version field (X509V3) and for the Key Size field. 13.Type your company name in the Organization field.162 IBM PowerVM Virtualization Managing and Monitoring
  • 200. 14.Complete any optional fields or leave them blank: the default for the Country field and 365 for the Validity Period field. The window should look like the one in Figure 4-7. Figure 4-7 Self-signed certificate information15.Click OK. GSKit generates a new public and private key pair and creates the certificate. This completes the creation of the LDAP client’s personal certificate. It is displayed in the Personal Certificates section of the key database file. Next, the LDAP Server’s certificate must be extracted to a Base64-encoded ASCII data file.16.Highlight the self-signed certificate that was just created.17.Click Extract Certificate.18.Select Base64-encoded ASCII data as the type.19.Type a certificate file name for the newly extracted certificate. The certificate file’s extension is usually *.arm.20.Type the location where you want to store the extracted certificate and then click OK.21.Copy this extracted certificate to the LDAP server system.This file will only be used if the key database is going to be used as an SSL in aweb server. This can happen when the LDAP administrator decides to manage Chapter 4. Virtual I/O Server security 163
  • 201. the LDAP through its web interface. Then this *.arm file can be transferred to your PC and imported to the web browser. You can find more about the GSKit at: http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic= /com.ibm.itame.doc_5.1/am51_webinstall223.htm4.2.2 Configuring the LDAP server Because the key database was generated, it can now be used to configure the LDAP server. In the following example, we use a LDAP server on AIX. IBM i and Linux can also be used for LDAP server instead of AIX, depending on your situation. For further information about IBM i LDAP server support, see the IBM i Information Center at: http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp Navigate to IBM i 7.1 Information Center  Networking  TCP/IP applications, protocols, and services  IBM Tivoli Directory Server for IBM i (LDAP). For Linux, see the relevant product documentation. The mksecldap command is used to set up an AIX system as an LDAP server or client for security authentication and data management. A description of how to set up the AIX system as an LDAP server is provided in this section. Remember that all file sets of the IBM Tivoli directory Server 6.1 have to be installed before configuring the system as an LDAP server. When installing the LDAP server file set, the LDAP client file set and the backend DB2 software are automatically installed as well. No DB2 preconfiguration is required to run this command for the LDAP server setup. When the mksecldap command is run to set up a server, the command does the following: 1. Creates the DB2 instance with ldapdb2 as the default instance name. 2. Because in this case the IBM Directory Server 6.1 is being configured, an LDAP server instance with the default name of ldapdb2 is created. A prompt is displayed for the encryption seed to create the key files. The input encryption seed must be at least 12 characters. 3. Creates a DB2 database with ldapdb2 as the default database name.164 IBM PowerVM Virtualization Managing and Monitoring
4. Creates the base DN (o=ibm in this example). The directory information tree that is created by default in this example is shown in Figure 4-8.
Figure 4-8 Default directory information tree created by the mksecldap command (o=ibm, with ou=People and ou=Groups beneath it)
5. Because the -u NONE flag was not specified, the data from the security database on the local host is exported into the LDAP database. Because the -S option was used and followed by rfc2307aix, the mksecldap command exports users and groups using this schema.
6. The LDAP administrator DN is set to cn=admin and the password is set to passw0rd.
7. Because the -k flag was used, the server will use SSL (Secure Sockets Layer).
8. The plugin libldapaudit.a is installed. This plugin supports an AIX audit of the LDAP server.
9. The LDAP server is started after all the above steps are completed.
10.The LDAP process is added to /etc/inittab so that the LDAP server starts after a reboot.
The command and its output are shown here:
# mksecldap -s -a cn=admin -p passw0rd -S rfc2307aix -d o=ibm -k /etc/ldap/ldap_server.kdb -w passw0rd
ldapdb2's New password:
Enter the new password again:
Enter an encryption seed to generate key stash files:
You have chosen to perform the following actions:
GLPICR020I A new directory server instance ldapdb2 will be created.
GLPICR057I The directory server instance will be created at: /home/ldapdb2.
GLPICR013I The directory server instance's port will be set to 389.
GLPICR014I The directory server instance's secure port will be set to 636.
GLPICR015I The directory instance's administration server port will be set to 3538.
GLPICR016I The directory instance's administration server secure port will be set to 3539.
GLPICR019I The description will be set to: IBM Tivoli Directory Server Instance V6.1.
GLPICR021I Database instance ldapdb2 will be configured.
  • 203. GLPICR028I Creating directory server instance: ldapdb2.GLPICR025I Registering directory server instance: ldapdb2.GLPICR026I Registered directory server instance: : ldapdb2.GLPICR049I Creating directories for directory server instance: ldapdb2.GLPICR050I Created directories for directory server instance: ldapdb2.GLPICR043I Creating key stash files for directory server instance: ldapdb2.GLPICR044I Created key stash files for directory server instance: ldapdb2.GLPICR040I Creating configuration file for directory server instance: ldapdb2.GLPICR041I Created configuration file for directory server instance: ldapdb2.GLPICR034I Creating schema files for directory server instance: ldapdb2.GLPICR035I Created schema files for directory server instance: ldapdb2.GLPICR037I Creating log files for directory server instance: ldapdb2.GLPICR038I Created log files for directory server instance: ldapdb2.GLPICR088I Configuring log files for directory server instance: ldapdb2.GLPICR089I Configured log files for directory server instance: ldapdb2.GLPICR085I Configuring schema files for directory server instance: ldapdb2.GLPICR086I Configured schema files for directory server instance: ldapdb2.GLPICR073I Configuring ports and IP addresses for directory server instance: ldapdb2.GLPICR074I Configured ports and IP addresses for directory server instance: ldapdb2.GLPICR077I Configuring key stash files for directory server instance: ldapdb2.GLPICR078I Configured key stash files for directory server instance: ldapdb2.GLPICR046I Creating profile scripts for directory server instance: ldapdb2.GLPICR047I Created profile scripts for directory server instance: ldapdb2.GLPICR069I Adding entry to /etc/inittab for the administration server for directory instance:ldapdb2.GLPICR070I Added entry to /etc/inittab for the administration server for directory instance:ldapdb2.GLPICR118I Creating runtime executable for directory server instance: ldapdb2.GLPICR119I Created runtime executable for directory server instance: ldapdb2.GLPCTL074I Starting admin daemon instance: ldapdb2.GLPCTL075I Started admin daemon instance: ldapdb2.GLPICR029I Created directory server instance: : ldapdb2.GLPICR031I Adding database instance ldapdb2 to directory server instance: ldapdb2.GLPCTL002I Creating database instance: ldapdb2.GLPCTL003I Created database instance: ldapdb2.GLPCTL017I Cataloging database instance node: ldapdb2.GLPCTL018I Cataloged database instance node: ldapdb2.GLPCTL008I Starting database manager for database instance: ldapdb2.GLPCTL009I Started database manager for database instance: ldapdb2.GLPCTL049I Adding TCP/IP services to database instance: ldapdb2.GLPCTL050I Added TCP/IP services to database instance: ldapdb2.GLPICR081I Configuring database instance ldapdb2 for directory server instance: ldapdb2.GLPICR082I Configured database instance ldapdb2 for directory server instance: ldapdb2.GLPICR052I Creating DB2 instance link for directory server instance: ldapdb2.GLPICR053I Created DB2 instance link for directory server instance: ldapdb2.GLPICR032I Added database instance ldapdb2 to directory server instance: ldapdb2.You have chosen to perform the following actions:GLPDPW004I The directory server administrator DN will be set.166 IBM PowerVM Virtualization Managing and Monitoring
  • 204. GLPDPW005I The directory server administrator password will be set.GLPDPW009I Setting the directory server administrator DN.GLPDPW010I Directory server administrator DN was set.GLPDPW006I Setting the directory server administrator password.GLPDPW007I Directory server administrator password was set.You have chosen to perform the following actions:GLPCDB023I Database ldapdb2 will be configured.GLPCDB024I Database ldapdb2 will be created at /home/ldapdb2GLPCDB035I Adding database ldapdb2 to directory server instance: ldapdb2.GLPCTL017I Cataloging database instance node: ldapdb2.GLPCTL018I Cataloged database instance node: ldapdb2.GLPCTL008I Starting database manager for database instance: ldapdb2.GLPCTL009I Started database manager for database instance: ldapdb2.GLPCTL026I Creating database: ldapdb2.GLPCTL027I Created database: ldapdb2.GLPCTL034I Updating the database: ldapdb2GLPCTL035I Updated the database: ldapdb2GLPCTL020I Updating the database manager: ldapdb2.GLPCTL021I Updated the database manager: ldapdb2.GLPCTL023I Enabling multi-page file allocation: ldapdb2GLPCTL024I Enabled multi-page file allocation: ldapdb2GLPCDB005I Configuring database ldapdb2 for directory server instance: ldapdb2.GLPCDB006I Configured database ldapdb2 for directory server instance: ldapdb2.GLPCTL037I Adding local loopback to database: ldapdb2.GLPCTL038I Added local loopback to database: ldapdb2.GLPCTL011I Stopping database manager for the database instance: ldapdb2.GLPCTL012I Stopped database manager for the database instance: ldapdb2.GLPCTL008I Starting database manager for database instance: ldapdb2.GLPCTL009I Started database manager for database instance: ldapdb2.GLPCDB003I Added database ldapdb2 to directory server instance: ldapdb2.You have chosen to perform the following actions:GLPCSF007I Suffix o=ibm will be added to the configuration file of the directory serverinstance ldapdb2.GLPCSF004I Adding suffix: o=ibm.GLPCSF005I Added suffix: o=ibm.GLPSRV034I Server starting in configuration only mode.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPSRV155I The DIGEST-MD5 SASL Bind mechanism is enabled in the configuration file.GLPCOM021I The preoperation plugin is successfully loaded from libDigest.a.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM024I The extended Operation plugin is successfully loaded from libtranext.a.GLPCOM023I The postoperation plugin is successfully loaded from libpsearch.a.GLPCOM024I The extended Operation plugin is successfully loaded from libpsearch.a.GLPCOM025I The audit plugin is successfully loaded from libldapaudit.a.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM023I The postoperation plugin is successfully loaded from libpsearch.a.GLPCOM024I The extended Operation plugin is successfully loaded from libpsearch.a. Chapter 4. Virtual I/O Server security 167
  • 205. GLPCOM022I The database plugin is successfully loaded from libback-config.a.GLPCOM024I The extended Operation plugin is successfully loaded from libloga.a.GLPCOM024I The extended Operation plugin is successfully loaded from libidsfget.a.GLPSRV180I Pass-through authentication is disabled.GLPCOM003I Non-SSL port initialized to 389.Stopping the LDAP server.GLPSRV176I Terminated directory server instance ldapdb2 normally.GLPSRV041I Server starting.GLPCTL113I Largest core file size creation limit for the process (in bytes): 1073741312(Softlimit) and -1(Hard limit).GLPCTL121I Maximum Data Segment(Kbytes) soft ulimit for the process was 131072 and it ismodified to the prescribed minimum 262144.GLPCTL119I Maximum File Size(512 bytes block) soft ulimit for the process is -1 and theprescribed minimum is 2097151.GLPCTL122I Maximum Open Files soft ulimit for the process is 2000 and the prescribed minimum is500.GLPCTL121I Maximum Physical Memory(Kbytes) soft ulimit for the process was 32768 and it ismodified to the prescribed minimum 262144.GLPCTL121I Maximum Stack Size(Kbytes) soft ulimit for the process was 32768 and it is modifiedto the prescribed minimum 65536.GLPCTL119I Maximum Virtual Memory(Kbytes) soft ulimit for the process is -1 and the prescribedminimum is 1048576.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM024I The extended Operation plugin is successfully loaded from libtranext.a.GLPCOM024I The extended Operation plugin is successfully loaded from libldaprepl.a.GLPSRV155I The DIGEST-MD5 SASL Bind mechanism is enabled in the configuration file.GLPCOM021I The preoperation plugin is successfully loaded from libDigest.a.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM024I The extended Operation plugin is successfully loaded from libtranext.a.GLPCOM023I The postoperation plugin is successfully loaded from libpsearch.a.GLPCOM024I The extended Operation plugin is successfully loaded from libpsearch.a.GLPCOM025I The audit plugin is successfully loaded from libldapaudit.a.GLPCOM025I The audit plugin is successfully loaded from/usr/ccs/lib/libsecldapaudit64.a(shr.o).GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM023I The postoperation plugin is successfully loaded from libpsearch.a.GLPCOM024I The extended Operation plugin is successfully loaded from libpsearch.a.GLPCOM022I The database plugin is successfully loaded from libback-config.a.GLPCOM024I The extended Operation plugin is successfully loaded from libevent.a.GLPCOM024I The extended Operation plugin is successfully loaded from libtranext.a.GLPCOM023I The postoperation plugin is successfully loaded from libpsearch.a.GLPCOM024I The extended Operation plugin is successfully loaded from libpsearch.a.GLPCOM022I The database plugin is successfully loaded from libback-rdbm.a.GLPCOM010I Replication plugin is successfully loaded from libldaprepl.a.GLPCOM021I The preoperation plugin is successfully loaded from libpta.a.GLPSRV017I Server configured for secure connections only.GLPSRV015I Server configured to use 636 as the secure port.GLPCOM024I The extended Operation plugin is successfully loaded from libloga.a.GLPCOM024I The extended Operation plugin is successfully loaded from libidsfget.a.168 IBM PowerVM Virtualization Managing and Monitoring
  • 206. GLPSRV180I Pass-through authentication is disabled.GLPCOM004I SSL port initialized to 636.Migrating users and groups to LDAP server.# At this point a query can be issued to the LDAP server to test its functionality. The ldapsearch command is used to retrieve information from the LDAP server and to execute an SSL search on the server that was just started. It can be used in the following way:/opt/IBM/ldap/V6.1/bin/ldapsearch -D cn=admin -w passw0rd -h localhost -Z -K/etc/ldap/ldap_server.kdb -p 636 -b "cn=SSL,cn=Configuration" "(ibm-slapdSslAuth=*)"cn=SSL, cn=Configurationcn=SSLibm-slapdSecurePort=636ibm-slapdSecurity=SSLOnlyibm-slapdSslAuth=serverauthibm-slapdSslCertificate=noneibm-slapdSslCipherSpec=AESibm-slapdSslCipherSpec=AES-128ibm-slapdSslCipherSpec=RC4-128-MD5ibm-slapdSslCipherSpec=RC4-128-SHAibm-slapdSslCipherSpec=TripleDES-168ibm-slapdSslCipherSpec=DES-56ibm-slapdSslCipherSpec=RC4-40-MD5ibm-slapdSslCipherSpec=RC2-40-MD5ibm-slapdSslFIPSProcessingMode=falseibm-slapdSslKeyDatabase=/etc/ldap/ldap_server.kdbibm-slapdSslKeyDatabasePW={AES256}31Ip2qH5pLxOIPX9NTbgvA==ibm-slapdSslPKCS11AcceleratorMode=noneibm-slapdSslPKCS11Enabled=falseibm-slapdSslPKCS11Keystorage=falseibm-slapdSslPKCS11Lib=libcknfast.soibm-slapdSslPKCS11TokenLabel=noneobjectclass=topobjectclass=ibm-slapdConfigEntryobjectclass=ibm-slapdSSL In this example the SSL configuration is retrieved from the server. Note that the database key password is stored in a cryptographic form: ({AES256}31Ip2qH5pLxOIPX9NTbgvA==). After the LDAP server has been shown to be working, the Virtual I/O Server can be configured as a client. Chapter 4. Virtual I/O Server security 169
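Optionally, you can also confirm that the local users and groups were migrated into the directory. The following ldapsearch invocation is a sketch only; the entries returned depend on which users existed on the LDAP server system when mksecldap was run:
/opt/IBM/ldap/V6.1/bin/ldapsearch -D cn=admin -w passw0rd -h localhost -Z -K /etc/ldap/ldap_server.kdb -p 636 -b "ou=People,o=ibm" -s one "objectclass=*" uid
Each entry returned under ou=People,o=ibm represents one migrated user account.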
4.2.3 Configuring the Virtual I/O Server as an LDAP client
The first thing to be checked on the Virtual I/O Server before configuring it as a secure LDAP client is whether the ldap.max_crypto_client file sets are installed. To check this, issue the lslpp command on the Virtual I/O Server as root as follows:
# lslpp -l |grep ldap
  ldap.client.adt            5.2.0.0  COMMITTED  Directory Client SDK
  ldap.client.rte            5.2.0.0  COMMITTED  Directory Client Runtime (No
  ldap.max_crypto_client.adt
  ldap.max_crypto_client.rte
  ldap.client.rte            5.2.0.0  COMMITTED  Directory Client Runtime (No
If the file sets are not installed, proceed with the installation before going forward with these steps. These file sets can be found on the Virtual I/O Server Expansion Pack media, together with the GSKit file sets gskjs.rte, gskjt.rte, gsksa.rte, and gskta.rte that provide the SSL support. The Expansion Pack media comes with the Virtual I/O Server installation media.
Transfer the database key from the LDAP server to the Virtual I/O Server. In this example, ldap_server.kdb and ldap_server.sth were transferred from /etc/ldap on the LDAP server to /etc/ldap on the Virtual I/O Server.
On the Virtual I/O Server, the mkldap command is used to configure it as an LDAP client. To configure the Virtual I/O Server as a secure LDAP client of the LDAP server that was previously configured, use the following command:
$ mkldap -bind cn=admin -passwd passw0rd -host NIM_server -base o=ibm -keypath /etc/ldap/ldap_server.kdb -keypasswd passw0rd -port 636
To check whether the secure LDAP configuration is working, create an LDAP user using the mkuser command with the -ldap flag, and then use the lsuser command to check its characteristics as shown in Example 4-7. Note that the registry of the user is now stored on the LDAP server.
Example 4-7 Creating an ldap user on the Virtual I/O Server
$ mkuser -ldap itso
itso's Old password:
itso's New password:
Enter the new password again:
$ lsuser itso
  • 208. itso roles=Admin account_locked=false expires=0 histexpire=0 histsize=0loginretries=0 maxage=0 maxexpired=-1 maxrepeats=8 minage=0 minalpha=0mindiff=0 minlen=0 minother=0 pwdwarntime=330 registry=LDAP SYSTEM=LDAPWhen the user itso tries to log in, its password has to be changed as shown inExample 4-8.Example 4-8 Log on to the Virtual I/O Server using an LDAP userlogin as: itsoitso@9.3.5.108s password:[LDAP]: 3004-610 You are required to change your password. Please choose a new one.WARNING: Your password has expired.You must change your password now and login again!Changing password for "itso"itsos Old password:itsos New password:Enter the new password again:Another way to test whether the configuration is working is to use the ldapsearchcommand to do a search on the LDAP directory. In Example 4-9, this commandis used to search for the characteristics of the o=ibm object.Example 4-9 Searching the LDAP server$ ldapsearch -b o=ibm -h NIM_server -D cn=admin -w passw0rd -s base -p 636 -K/etc/ldap/ldap_server.kdb -N ldap_server -P passw0rd objectclass=*o=ibmobjectclass=topobjectclass=organizationo=ibmThe secure LDAP connection between the LDAP server and the Virtual I/OServer is now configured and operational. Chapter 4. Virtual I/O Server security 171
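If problems occur later, the LDAP client daemon on the Virtual I/O Server can be inspected or recycled from the root shell. The following is a sketch using the standard AIX secldapclntd control scripts, which assumes they are present at the installed Virtual I/O Server level:
$ oem_setup_env
# ls-secldapclntd
# restart-secldapclntd
# exit
The ls-secldapclntd command reports the status of the client daemon, including the LDAP server it is connected to, and restart-secldapclntd recycles the daemon after configuration changes.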
  • 209. 4.3 Network Time Protocol configuration A synchronized time is important for error logging, Kerberos, and various monitoring tools. The Virtual I/O Server has an NTP client installed. To configure it you can create or edit the configuration file /home/padmin/config/ntp.conf using the following command as shown in Example 4-10: $ vi /home/padmin/config/ntp.conf Example 4-10 Content of the /home/padmin/config/ntp.conf file server ptbtime1.ptb.de server ptbtime2.ptb.de driftfile /home/padmin/config/ntp.drift tracefile /home/padmin/config/ntp.trace logfile /home/padmin/config/ntp.log After it is configured, you start the xntpd service using the startnetsvc command as shown in Example 4-11. Example 4-11 Start of the xntpd deamon $ startnetsvc xntpd 0513-059 The xntpd Subsystem has been started. Subsystem PID is 123092. After the daemon is started, check your ntp.log file. If it shows messages similar to those in Example 4-12, you have to set the time manually first. Example 4-12 Too large time error $ cat config/ntp.log 5 Dec 13:52:26 xntpd[516180]: SRC stop issued. 5 Dec 13:52:26 xntpd[516180]: exiting. 5 Dec 13:56:57 xntpd[516188]: synchronized to 9.3.4.7, stratum=3 5 Dec 13:56:57 xntpd[516188]: time error 3637.530348 is way too large (set clock manually) In order to set the date on the Virtual I/O Server, use the chdate command: $ chdate 1206093607 $ Thu Dec 6 09:36:16 CST 2007 If the synchronization is successful, your log in /home/padmin/config/ntp.log should look like Example 4-13. Example 4-13 Successful ntp synchronization 6 Dec 09:48:55 xntpd[581870]: synchronized to 9.3.4.7, stratum=2 6 Dec 10:05:34 xntpd[581870]: time reset (step) 998.397993 s172 IBM PowerVM Virtualization Managing and Monitoring
  • 210. 6 Dec 10:05:34 xntpd[581870]: synchronisation lost 6 Dec 10:10:54 xntpd[581870]: synchronized to 9.3.4.7, stratum=2 Remember: In Virtual I/O Server version 1.5.2.0 and earlier, the default configuration file used by the startnetsvc xntpd command, and the rc.tcpip startup file can differ. This might cause unpredictable results when rebooting a partition. See APAR IZ13781. In subsequent releases the default file does not differ.4.4 Setting up Kerberos on the Virtual I/O Server In order to use Kerberos on the Virtual I/O Server, you first have to install the Kerberos krb5.client.rte file set from the Virtual I/O Server Expansion Pack. You then have to insert the first expansion pack media in the DVD drive. In case the drive is mapped for the other partitions to access it, you have to unmap it on the Virtual I/O Server with the rmvdev command, as follows: $ lsmap -all | grep cd Backing device cd0 $ rmvdev -vdev cd0 vtopt0 deleted You can then run the installp command. We use the oem_setup_env command to do this because installp must run with the root login. $ echo "installp -agXYd /dev/cd0 krb5.client.rte" | oem_setup_env +-----------------------------------------------------------------------------+ Pre-deinstall Verification... +-----------------------------------------------------------------------------+ Verifying selections...done [ output part removed for clarity purpose ] Installation Summary -------------------- Name Level Part Event Result ------------------------------------------------------------------------------- krb5.client.rte 1.4.0.3 USR APPLY SUCCESS krb5.client.rte 1.4.0.3 ROOT APPLY SUCCESS The Kerberos client file sets are now installed on the Virtual I/O Server. The login process to the operating system remains unchanged. Therefore, you must configure the system to use Kerberos as the primary means of user authentication. Chapter 4. Virtual I/O Server security 173
  • 211. To configure the Virtual I/O Server to use Kerberos as the primary means of user authentication, run the mkkrb5clnt command with the following parameters: $ oem_setup_env # mkkrb5clnt -c KDC -r realm -a admin -s server -d domain -A -i database -K -T # exit The mkkrb5clnt command parameters are: -c Sets the Kerberos Key Center (KDC) that centralizes authorizations. -r Sets the Kerberos realm. -s Sets the Kerberos admin server. -K Specifies Kerberos to be configured as the default authentication scheme. -T Specifies the flag to acquire server admin TGT based admin ticket. For integrated login, the -i flag requires the name of the database being used. For LDAP, use the load module name that specifies LDAP. For local files, use the keyword files. For example, to configure the VIO_Server1 Virtual I/O Server to use the ITSC.AUSTIN.IBM.COM realm, the krb_master admin and KDC server, the itsc.austin.ibm.com domain, and the local database, type the following: $ oem_setup_env # mkkrb5clnt -c krb_master.itsc.austin.ibm.com -r ITSC.AUSTIN.IBM.COM -s krb_master.itsc.austin.ibm.com -d itsc.austin.ibm.com -A -i files -K -T Password for admin/admin@ITSC.AUSTIN.IBM.COM: Configuring fully integrated login Authenticating as principal admin/admin with existing credentials. WARNING: no policy specified for host/VIO_Server1@ITSC.AUSTIN.IBM.COM; defaulting to no policy. Note that policy may be overridden by ACL restrictions. Principal "host/VIO_Server1@ITSC.AUSTIN.IBM.COM" created. Administration credentials NOT DESTROYED. Making root a Kerberos administrator Authenticating as principal admin/admin with existing credentials. WARNING: no policy specified for root/VIO_Server1@ITSC.AUSTIN.IBM.COM; defaulting to no policy. Note that policy may be overridden by ACL restrictions. Enter password for principal "root/VIO_Server1@ITSC.AUSTIN.IBM.COM": Re-enter password for principal "root/VIO_Server1@ITSC.AUSTIN.IBM.COM": Principal "root/VIO_Server1@ITSC.AUSTIN.IBM.COM" created. Administration credentials NOT DESTROYED. Configuring Kerberos as the default authentication scheme174 IBM PowerVM Virtualization Managing and Monitoring
Cleaning administrator credentials and exiting.
# exit
This example results in the following actions:
1. Creates the /etc/krb5/krb5.conf file. Values for realm name, Kerberos admin server, and domain name are set as specified on the command line. Also, this updates the paths for the default_keytab_name, kdc, and kadmin log files.
2. The -i flag configures fully integrated login. The database entered is the location where AIX user identification information is stored. This is different from the Kerberos principal storage. The storage where Kerberos principals are stored is set during the Kerberos configuration.
3. The -K flag configures Kerberos as the default authentication scheme. This allows the users to be authenticated with Kerberos at login time.
4. The -A flag adds an entry in the Kerberos database to make root an admin user for Kerberos.
5. The -T flag acquires the server admin TGT-based admin ticket.
If a system is installed in a different DNS domain from the KDC, the following additional actions must be performed:
1. Edit the /etc/krb5/krb5.conf file and add another entry after [domain_realm].
2. Map the separate domain to your realm. For example, if you want to include a client that is in the abc.xyz.com domain into your MYREALM realm, the /etc/krb5/krb5.conf file includes the following additional entry:
[domain_realm]
.abc.xyz.com = MYREALM

4.5 Managing users
When the Virtual I/O Server is installed, the only user type that is active is the prime administrator (padmin), which can create additional user IDs with the following roles:
System administrator
Service representative
Development engineer
Restriction: You cannot create the prime administrator (padmin) user ID. It is automatically created and enabled after the Virtual I/O Server is installed.
  • 213. Table 4-3 lists the user management tasks available on the Virtual I/O Server and the commands you must run to accomplish each task. Table 4-3 Task and associated command to manage Virtual I/O Server users Task Command Create a system administrator user ID mkuser Create a service representative (SR) user ID mkuser with the -sr flag Create a development engineer (DE) user ID mkuser with the -de flag Create a LDAP user mkuser with the -ldap flag List a user’s attributes lsuser Change a user’s attributes chuser Switch to another user su Remove a user rmuser4.5.1 Creating a system administrator account In Example 4-14 we show how to create a system administration account with the default values and then check its attributes. Example 4-14 Creating a system administrator user and checking its attributes $ mkuser johng johngs New password: Enter the new password again: $ lsuser johng johng roles=Admin account_locked=false expires=0 histexpire=0 histsize=0 loginretries=0 maxage=0 maxexpired=-1 maxrepeats=8 minage=0 minalpha=0 mindiff=0 minlen=0 minother=0 pwdwarntime=330 registry=files SYSTEM=compat The system administrator account has access to all commands except: cleargcl lsfailedlogin lsgcl mirrorios mkuser oem_setup_env rmuser shutdown unmirrorios176 IBM PowerVM Virtualization Managing and Monitoring
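If an existing account later needs its attributes adjusted, the padmin user can do this with the chuser command. The following is a sketch only; the attribute names correspond to those shown in the lsuser output above, and the values are example choices:
$ chuser -attr maxage=13 loginretries=5 johng
$ lsuser johng
Another user's password can also be reset by padmin with the passwd command, for example passwd johng.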
  • 214. 4.5.2 Creating a service representative (SR) account In Example 4-15, we have created a service representative (SR) account. This type of account enables a service representative to run commands required to service the system without being logged in as root. This includes the following command types: Run diagnostics, including service aids (for example, hot plug tasks, certify, format, and so forth). Run all commands that can be run by a group system. Configure and unconfigure devices that are not busy. Use the service aid to update the system microcode. Perform the shutdown and reboot operations. The preferred SR login user name is qserv. Example 4-15 Creating a service representative account $ mkuser -sr qserv qservs New password: Enter the new password again: $ lsuser qserv qserv roles=SRUser account_locked=false expires=0 histexpire=0 histsize=0 loginretries=0 maxage=0 maxexpired=-1 maxrepeats=8 minage=0 minalpha=0 mindiff=0 minlen=0 minother=0 pwdwarntime=330 registry=files SYSTEM=compat When the server representative user logs in to the system for the first time, it is asked to change its password. After changing it, the diag menu is automatically loaded. It can then execute any task from that menu, or get out of it and execute commands on the command line.4.5.3 Creating a read-only account The Virtual I/O Server mkuser command allows read-only accounts to be created. Read-only accounts are able to view everything a system administrator account can view, but they cannot change anything. Auditors are usually given read-only accounts. Read-only accounts are created by padmin with the following command: $ mkuser -attr pgrp=view auditor Tip: A read-only account will not be able to even write on its own home directory, but it can view all configuration settings. Chapter 4. Virtual I/O Server security 177
  • 215. 4.5.4 Checking the global command log (gcl) After the users and their roles are set up, it is important to periodically check what they have been doing on the Virtual I/O Server. We accomplish this with the lsgcl command. The lsgcl command lists the contents of the global command log (gcl). This log contains a listing of all commands that have been executed by all Virtual I/O Server users. Each listing contains the date and time of execution, and the user ID the command was executed from. Example 4-16 shows the output of this command on our Virtual I/O Server. Restriction: The lsgcl command can only be executed by the prime administrator (padmin) user. Example 4-16 lsgcl command output Nov 16 2007, 17:12:26 padmin ioslevel Nov 16 2007, 17:25:55 padmin updateios -accept -dev /dev/cd0 ... Nov 20 2007, 15:26:34 padmin uname -a Nov 20 2007, 15:29:26 qserv diagmenu Nov 20 2007, 16:16:11 padmin lsfailedlogin Nov 20 2007, 16:25:51 padmin lsgcl Nov 20 2007, 16:28:52 padmin passwd johng Nov 20 2007, 16:30:40 johng lsmap -all Nov 20 2007, 16:30:53 johng lsmap -vadapter vhost0 Nov 20 2007, 16:32:11 padmin lsgcl4.6 Role-based access control With Virtual I/O Server Version 2.2, and later, a system administrator can define roles based on job functions in an organization by using role-based access control (RBAC). A system administrator can use role-based access control (RBAC) to define roles for users in the Virtual I/O Server. A role confers a set of permissions or authorizations to the assigned user. Thus, a user can only perform a specific set of system functions depending on the access rights that user is given. For example, if the system administrator creates the role UserManagement with authorization to access user management commands (Example 4-19 on page 191) and assigns this role to a user, that user can manage users on the system but has no further access rights.178 IBM PowerVM Virtualization Managing and Monitoring
  • 216. The benefits of using role-based access control with the Virtual I/O Server are as follows: Splitting system management functions Providing better security by granting only necessary access rights to users Implementing and enforcing system management and access control consistently Managing and auditing system functions with ease Role-based access control is based on the concepts of authorizations, roles, and privileges. An overview of these concepts is provided in the following sections, followed by an example of using role-based access control.4.6.1 Authorizations The Virtual I/O Server creates authorizations that closely emulate the authorizations of the AIX operating system. The authorizations emulate naming conventions and descriptions, but are only applicable to the Virtual I/O Server specific requirements. By default, the padmin user is granted all the authorizations on the Virtual I/O Server, and can run all the commands. The other types of users (created by using the mkuser command) retain their command execution permissions. The mkauth command creates a new user-defined authorization in the authorization database. You can create authorization hierarchies by using a dot (.) in the auth parameter to create an authorization of the form ParentAuth.SubParentAuth.SubSubParentAuth.... All parent elements in the auth parameter must exist in the authorization database before the authorization is created. The maximum number of parent elements that you can use to create an authorization is eight. You can set authorization attributes when you create authorizations through the Attribute=Value parameter. Every authorization that you create must have a value for the id authorization attribute. If you do not specify the id attribute using the mkauth command, the command automatically generates a unique ID for the authorization. If you specify an ID, the value must be unique and greater than 15000. The IDs 1 - 15000 are reserved for system-defined authorizations. The system-defined authorizations in the Virtual I/O Server start with vios. Hence, user-defined authorizations must not start with vios. or aix. Because the authorizations that start with vios. and aix. are considered system-defined authorizations, users cannot add any further hierarchies to these authorizations. Chapter 4. Virtual I/O Server security 179
  • 217. Unlike in the AIX operating system, users cannot create authorizations for all Virtual I/O Server commands. In the AIX operating system, an authorized user can create a hierarchy of authorizations for all the commands. However, in the Virtual I/O Server, authorizations can only be created for the commands or scripts owned by the user. Users cannot create any authorizations that start with vios. or aix. because they are considered system-defined authorizations. Hence, users cannot add any further hierarchies to these authorizations. Authorization names must not begin with a dash (-), plus sign (+), at sign (@), or tilde (~). They must not contain spaces, tabs, or newline characters. You cannot use the keywords ALL, default, ALLOW_OWNER, ALLOW_GROUP, ALLOW_ALL, or an asterisk (*) as an authorization name. Do not use the following characters within an authorization string: : (colon) " (quotation mark) # (number sign) , (comma) = (equal sign) (backslash) / (forward slash) ? (question mark) (single quotation mark) ` (grave accent) Table 4-4 lists the authorizations corresponding to the Virtual I/O Server commands. The vios and subsequent child authorizations, for example vios and vios.device, are not used. If a user is given a role that has either the parent or subsequent child authorization, for example vios or vios.device, that user will have access to all the subsequent children authorizations and their related commands. For example, a role with the authorization vios.device gives the user access to all vios.device.config and vios.device.manage authorizations and their related commands. Table 4-4 Authorizations corresponding to Virtual I/O Server commands Command Authorization activatevg vios.lvm.manage.varyon alert vios.system.cluster.alert alt_root_vg vios.lvm.change.altrootvg artexdiff vios.system.rtexpert.diff artexget vios.system.rtexpert.get artexlist vios.system.rtexpert.list180 IBM PowerVM Virtualization Managing and Monitoring
  • 218. Command Authorizationartexmerge vios.system.rtexpert.mergeartexset vios.system.rtexpert.setbackup vios.fs.backupbackupios vios.install.backupbootlist vios.install.bootlistcattracerpt vios.system.trace.formatcfgassist vios.security.cfgassistcfgdev vios.device.configcfglnagg vios.network.config.lnaggcfgnamesrv vios.system.dnscfgsvc vios.system.config.agentchauth vios.security.auth.changechbdsp vios.device.manage.backing.changechdate vios.system.config.date.changechdev vios.device.manage.changechkdev vios.device.manage.checkchlang vios.system.config.localechlv vios.lvm.manage.changechpath vios.device.manage.path.changechrep vios.device.manage.repos.changechrole vios.security.role.changechsp -defaulta vios.device.manage.spool.changechtcpip vios.network.tcpip.changechuser vios.security.user.changechvg vios.lvm.manage.changechvopt vios.device.manage.optical.changecl_snmp vios.security.manage.snmp.query Chapter 4. Virtual I/O Server security 181
  • 219. Command Authorization cleandisk vios.system.cluster cluster vios.system.cluster.create cplv vios.lvm.manage.copy cpvdi vios.lvm.manage.copy deactivatevg vios.lvm.manage.varyoff diagmenu vios.system.diagnostics dsmc vios.system.manage.tsm entstat vios.network.stat.ent errlog vios.system.log.view exportvg vios.lvm.manage.export extendlv vios.lvm.manage.extend extendvg vios.lvm.manage.extend fcstat vios.network.stat.fc fsck vios.fs.check hostmap vios.system.config.address hostname vios.system.config.hostname importvg vios.lvm.manage.import invscout vios.system.firmware.scout ioslevel vios.system.level ldapadd vios.security.manage.ldap.add ldapsearch vios.security.manage.ldap.search ldfware vios.system.firmware.load license vios.system.license.view license -accept vios.system.license loadopt vios.device.manage.optical.load loginmsg vios.security.user.login.msg lsauth vios.security.auth.list182 IBM PowerVM Virtualization Managing and Monitoring
  • 220. Command Authorizationlsdev vios.device.manage.listlsfailedlogin vios.security.user.login.faillsfware vios.system.firmware.listlsgcl vios.security.log.listlslparinfo vios.system.lpar.listlslv vios.lvm.manage.listlsmap vios.device.manage.map.phyvirtlsnetsvc vios.network.service.listlsnports vios.device.manage.listlspath vios.device.manage.listlspv vios.device.manage.listlsrep vios.device.manage.repos.listlsrole vios.security.role.listlssecattr vios.security.cmd.listlssp vios.device.manage.spool.listlssvc vios.system.config.listlssw vios.system.software.listlstcpip vios.network.tcpip.listlsuserb vios.security.user.listlsvg vios.lvm.manage.listlsvopt vios.device.manage.optical.listmigratepv vios.device.manage.migratemirrorios vios.lvm.manage.mirrorios.createmkauth vios.security.auth.createmkbdsp vios.device.manage.backing.createmkkrb5clnt vios.security.manage.kerberos.createmkldap vios.security.manage.ldap.create Chapter 4. Virtual I/O Server security 183
  • 221. Command Authorization mklv vios.lvm.manage.create mklvcopy vios.lvm.manage.mirror.create mkpath vios.device.manage.path.create mkrep vios.device.manage.repos.create mkrole vios.security.role.create mksp vios.device.manage.spool.create mktcpip vios.network.tcpip.config mkuser vios.security.user.create mkvdev vios.device.manage.create mkvdev -lnagg vios.device.manage.create.lnagg mkvdev -sea vios.device.manage.create.sea mkvdev -vdev vios.device.manage.create.virtualdisk mkvdev -vlan vios.device.manage.create.vlan mkvg vios.lvm.manage.create mkvopt vios.device.manage.optical.create motd vios.security.user.msg mount vios.fs.mount netstat vios.network.tcpip.list optimizenet vios.network.config.tune oem_platform_level vios.system.level oem_setup_env vios.oemsetupenv passwdc vios.security.passwd pdump vios.system.dump.platform ping vios.network.ping postprocesssvc vios.system.config.agent prepdev vios.device.config.prepare redefvg vios.lvm.manage.reorg184 IBM PowerVM Virtualization Managing and Monitoring
  • 222. Command Authorizationreducevg vios.lvm.manage.changerefreshvlan vios.network.config.refvlanremote_management vios.system.manage.remotereplphyvol vios.device.manage.replacerestore vios.fs.backuprestorevgstruct vios.lvm.manage.restorermauth vios.security.auth.removermbdsp vios.device.manage.backing.removermdev vios.device.manage.removermlv vios.lvm.manage.removermlvcopy vios.lvm.manage.mirror.removermpath vios.device.manage.path.removermrep vios.device.manage.repos.removermrole vios.security.role.removermsecattr vios.security.cmd.removermsp vios.device.manage.spool.removermtcpip vios.network.tcpip.removermuser vios.security.user.removermvdev vios.device.manage.removermvopt vios.device.manage.optical.removerolelist vios.security.role.listsavevgstruct vios.lvm.manage.savesave_base vios.device.manage.saveinfoseastat vios.network.stat.seasetkst vios.security.kst.setsetsecattr vios.security.cmd.setshowmount vios.fs.mount.show Chapter 4. Virtual I/O Server security 185
  • 223. Command Authorization shutdown vios.system.boot.shutdown snap vios.system.trace.format snmp_info vios.security.manage.snmp.info snmpv3_ssw vios.security.manage.snmp.switch snmp_trap vios.security.manage.snmp.trap startnetsvc vios.network.service.start startsvc vios.system.config.agent.start startsysdump vios.system.dump stopnetsvc vios.network.service.stop stopsvc vios.system.config.agent.stop stoptrace vios.system.trace.stop svmon vios.system.stat.memory syncvg vios.lvm.manage.sync sysstat vios.system.stat.list topas vios.system.config.topas topasrec vios.system.config.topasrec tracepriv vios.security.priv.trace traceroute vios.network.route.trace uname vios.system.uname unloadopt vios.device.manage.optical.unload unmirrorios vios.lvm.manage.mirrorios.remove unmount vios.fs.unmount updateios vios.install vasistat vios.network.stat.vasi vfcmap vios.device.manage.map.virt viosbr vios.system.backup.cfg viosbr -view vios.system.backup.cfg.view186 IBM PowerVM Virtualization Managing and Monitoring
  • 224. Command Authorization viosecure vios.security.manage.firewall viostat vios.system.stat.io vmstat vios.system.stat.memory wkldagent vios.system.manage.workload.agent wkldmgr vios.system.manage.workload.manager wkldout vios.system.manage.workload.process a. Other options of the chsp command can be run by all. b. Any user can run this command to view a minimal set of user attributes. However, only users with this authorization can view all the user attributes. c. Users can change their own password without having this authorization. This authorization is required only if the user wants to change the password of other users.4.6.2 Roles The Virtual I/O Server retains its current roles and will have the appropriate authorizations assigned to the roles. Additional roles that closely emulate the roles in the AIX operating system can be created. The roles emulate naming conventions and descriptions, but are only applicable to the Virtual I/O Server specific requirements. Users cannot view, use, or modify any of the default roles in the Virtual I/O Server. The following roles are the default roles in the AIX operating system. These roles are unavailable to the Virtual I/O Server users, and are not displayed. AccountAdmin BackupRestore DomainAdmin FSAdmin SecPolicy SysBoot SysConfig isso sa so The following roles are the default roles in the Virtual I/O Server: Admin DEUser Chapter 4. Virtual I/O Server security 187
  • 225. PAdmin RunDiagnostics SRUser SYSAdm ViewOnly The mkrole command creates a role. The newrole parameter must be a unique role name. You cannot use the ALL or default keywords as the role name. Every role must have a unique role ID that is used for security decisions. If you do not specify the id attribute when you create a role, the mkrole command automatically assigns a unique ID to the role. There is no standard naming convention for roles. However, existing names of roles cannot be used for creating roles. The role parameter cannot contain spaces, tabs, or newline characters. To prevent inconsistencies, restrict role names to characters in the POSIX portable file name character set. You cannot use the keywords ALL or default as a role name. Do not use the following characters within a role-name string: : (colon) " (quotation mark) # (number sign) , (comma) = (equal sign) (backslash) / (forward slash) ? (question mark) (single quotation mark) ` (grave accent)4.6.3 Privileges A Privilege is an attribute of a process through which the process can bypass specific restrictions and limitations of the system. Privileges are associated with a process, and are acquired by running a privileged command. Privileges are defined as bit-masks in the operating system kernel and enforce access control over privileged operations. For example, the privilege bit PV_KER_TIME might control the kernel operation to modify the system date and time. Nearly 80 privileges are included with the operating system kernel and provide granular control over privileged operations. You can acquire the least privilege required to perform an operation through division of privileged operations in the kernel. This feature leads to enhanced security because a process hacker can only get access to one or two privileges in the system, and not to root user privileges.188 IBM PowerVM Virtualization Managing and Monitoring
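To see how these concepts relate in practice, you can inspect the privileged command database and trace the privileges that a single command actually uses. The following is a minimal sketch that assumes the AIX-style options of the lssecattr and tracepriv commands are available at your Virtual I/O Server level:
$ lssecattr -c ALL
$ tracepriv -ef ioslevel
The first command lists the security attributes of all commands in the privileged command database, and the second runs the ioslevel command and records the privileges it attempted to use. Both commands appear later in Table 4-5.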
  • 226. Authorizations and roles are a user-level tool to configure user access to privileged operations. Privileges are the restriction mechanism used in the operating system kernel to determine if a process has authorization to perform an action. Hence, if a user is in a role session that has an authorization to run a command, and that command is run, a set of privileges are assigned to the process. There is no direct mapping of authorizations and roles to privileges. Access to several commands can be provided through an authorization. Each of those commands can be granted a different set of privileges.4.6.4 Using role-based access control Table 4-5 lists the commands related to role-based access control (RBAC). Table 4-5 RBAC commands and their descriptions Command Description chauth Modifies attributes of the authorization that is identified by the newauth parameter chrole Changes attributes of the role identified by the role parameter lsauth Displays attributes of user-defined and system-defined authorizations from the authorization database lsrole Displays the role attributes lssecattr Lists the security attributes of one or more commands, devices, or processes mkauth Creates new user-defined authorizations in the authorization database mkrole Creates new roles rmauth Removes the user-defined authorization identified by the auth parameter rmrole Removes the role identified by the role parameter from the roles database rmsecattr Removes the security attributes for a command, a device, or a file entry that is identified by the Name parameter from the appropriate database rolelist Provides role and authorization information to the caller about the roles assigned to them setkst Reads the security databases and loads the information from the databases into the kernel security tables Chapter 4. Virtual I/O Server security 189
  • 227. Command Description setsecattr Sets the security attributes of the command, device, or process that are specified by the Name parameter swrole Creates a role session with the roles that are specified by the Role parameter tracepriv Records the privileges that a command attempts to use when the command is run We will use some of these commands to create a new role. We will create a role called UserManagement with authorization to access user management commands and assigns this role to a user, so that user can manage users on the system but has no further access rights. This scenario may arise in a business where user access management is administered by a single team across all operating systems. This team would need user access management rights on all operating systems, but perhaps no authority to do anything else. This new role will be given access to the following commands: passwd chuser mkuser lsuser lsfailedlogin loginmsg motd rmuser First, we create the new role using the mkrole command as shown in Example 4-17. Example 4-17 Using the mkrole command $ mkrole authorizations=vios.security.passwd,vios.security.user.change,vios.security.us er.create,vios.security.user.list,vios.security.user.login.fail,vios.security.u ser.login.msg,vios.security.user.msg,vios.security.user.remove UserAccessManagement The values for the authorizations parameter were obtained from Table 4-4 on page 180.190 IBM PowerVM Virtualization Managing and Monitoring
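Depending on the Virtual I/O Server level, changes to the authorization and role databases might not take effect until the kernel security tables are refreshed. If required, this can be done with the setkst command described in Table 4-5:
$ setkst
Running setkst reloads the security databases into the kernel security tables so that the new role is recognized.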
  • 228. To confirm the role has been created correctly, we can use the lsrole commandto display the role’s attributes as shown in Example 4-18.Example 4-18 Using the lsrole command$ lsrole UserAccessManagementUserAccessManagementauthorizations=vios.security.passwd,vios.security.user.change,vios.security.user.create,vios.security.user.list,vios.security.user.login.fail,vios.security.user.login.msg,vios.security.user.msg,vios.security.user.remove rolelist= groups=visibility=1 screens=* dfltmsg= msgcat= auth_mode=INVOKER id=21Next we create a new user (uam1), linking the new user to the newly created role(UserAccessManagement) using the mkuser command as shown inExample 4-19.Example 4-19 Creating a new user linked to a role$ mkuser -attr roles=UserAccessManagement uam1uam1s New password:Enter the new password again:If we wanted to add existing users to the new role, we can use the chusercommand.To verify that the user is now linked to the role, use the lsuser command asshown in Example 4-20.Example 4-20 Displaying a user’s role$ lsuser uam1uam1 roles=UserAccessManagement default_roles= account_locked=false expires=0histexpire=0 histsize=0 loginretries=0 maxage=0 maxexpired=-1 maxrepeats=8minage=0 minalpha=0 mindiff=0 minlen=0 minother=0 pwdwarntime=330registry=files SYSTEM=compatIf the new user logs on and attempts to execute any command other than theones specified through the UserAccessManagement role, they will receive theerror shown in Example 4-21.Example 4-21 Access to run command is not valid message$ ioslevelAccess to run command is not valid. Chapter 4. Virtual I/O Server security 191
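When the role is no longer needed, for example after the test user has been validated, it can be removed again. The following cleanup commands are a sketch; the user and role names match the example above:
$ rmuser uam1
$ rmrole UserAccessManagement
The rmuser command removes the test user, and rmrole removes the user-defined role from the roles database.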
  • 230. 5 Chapter 5. Virtual I/O Server maintenance As with all other servers included in an enterprise’s data recovery program, you need to back up and update the Virtual I/O Server logical partition. This chapter includes the following sections: Installing or migrating to Virtual I/O Server Version 2.x Virtual I/O server back up strategy Scheduling backups of the Virtual I/O Server Backing up the Virtual I/O Server operating system Backing up user-defined virtual devices Backing up user-defined virtual devices using backupios Restoring the Virtual I/O Server Rebuilding the Virtual I/O Server Updating the Virtual I/O Server Updating Virtual I/O Server adapter firmware Error logging on the Virtual I/O Server VM Storage Snapshots/Rollback© Copyright IBM Corp. 2012. All rights reserved. 193
  • 231. 5.1 Installing or migrating to Virtual I/O Server Version 2.x There are four procedures for installing or migrating to Virtual I/O Server Version 2.x: “Installing Virtual I/O Server Version 2.2.1.0” on page 195. “Migrating from an HMC” on page 197. “Migrating from a DVD that is managed by an HMC” on page 198. “Migrating from a DVD that is managed by an IVM” on page 208. Restriction: A migration to Virtual I/O Server Version 2.x is only supported if you run Virtual I/O Server Version 1.3 or later. If you are running Virtual I/O Server Version 1.2 or earlier, you need to apply the latest Virtual I/O Server Version 1.5 Fix Pack before migrating. Before you begin a migration, back up your existing Virtual I/O Server installation and then follow the steps for the installation method that you choose. There are two DVDs shipped with every new Virtual I/O server 2.x order: Virtual I/O Server Version 2.x Migration DVD. Virtual I/O Server Version 2.x Installation DVD. Customers with a Software Maintenance Agreement (SWMA) can order both sets of the Virtual I/O Server Version 2.x media from the following website: http://www.ibm.com/servers/eserver/ess/ProtectedServlet.wss In a redundant Virtual I/O Server environment, you can install or migrate one Virtual I/O Server at a time to avoid any interruption of service. The client LPAR can be up and running through the migration process of each of the Virtual I/O Servers. Tip: Use Fix Central to check for any available Fix Pack: http://www-933.ibm.com/support/fixcentral/ The following sections explain in greater detail how to install or migrate to a Virtual I/O Server Version 2.x environment.194 IBM PowerVM Virtualization Managing and Monitoring
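Before starting any of these procedures, record the current level of each Virtual I/O Server and take a backup that can be restored if the migration fails. The commands below are a sketch; the NFS server name and mount point are example values:
$ ioslevel
$ mount nfs_server:/export/backups /mnt
$ backupios -file /mnt/vios1_backup.mksysb -mksysb
The backupios command with the -file and -mksysb flags writes an installable mksysb image of the Virtual I/O Server to the specified file, which can later be restored through a NIM server. Backing up to a directory without the -mksysb flag instead creates a nim_resources.tar file that the HMC installios command can use.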
  • 232. 5.1.1 Installing Virtual I/O Server Version 2.2.1.0 For multi-node SSP support you will need to install at least Virtual I/O Server Version 2.2.1.0 and apply the latest available Fix Pack and Service Pack from IBM Fix Central, the current minimum supported version is 2.2.1.3 FP25 SP01. Please make sure to check out the Release Notes at: http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.ai x.ntl%2FRELNOTES%2FGI11-4302-06.htm Pay extra attention to the following modifications: VIOS Version 2.2.1.0 installation changes The VIOS software is distributed on two DVDs. When you boot from DVD 1, you are prompted to insert DVD 2. If you want to install another language fileset after the initial installation is complete, insert the second DVD into the DVD drive and refer to the CLI chlang command. Beginning with VIOS Version 2.2.1.0, the media no longer ships on a single DVD disc. Therefore, installing using the installios or OS_install commands now prompts the user to switch the media. The installios and OS_install commands on the following products have been updated to reflect this change and are now required to install VIOS Version 2.2.1.0 and later: HMC Version 7 Release 7.4.0, or later. AIX Version 6.1 Technology Level 7, or later. AIX Version 7.1 Technology Level 1, or later. Memory requirements The minimum memory requirement for VIOS Version 2.2.1.0 varies based on the configuration. A general rule for a minimum current memory requirement for VIOS Version 2.2.1.0 is 512 MB. A smaller minimum current memory requirement might support a configuration with a very small number of devices or a small maximum memory configuration. VIOS Version 2.2.1.0 requires the minimum current memory requirement to increase as the maximum memory configuration or the number of devices scales upward, or both. Larger maximum memory configurations or additional devices scale up the minimum current memory requirement. If the minimum memory requirement is not increased along with the maximum memory configuration, the partition freezes during the initial program load (IPL). Chapter 5. Virtual I/O Server maintenance 195
ROOTVG requirements for release 2.2.x and beyond
VIOS now requires a minimum of 30 GB of disk space for installation. Ensure that the disk allocated for the VIOS installation contains at least 30 GB of available space before you attempt to install VIOS.
Host Ethernet Adapter memory requirements
Configurations that contain one or more Host Ethernet Adapters (HEA) require more memory than the 512 MB minimum. Each logical HEA port that is configured requires an additional 102 MB of memory. The minimum memory requirement for configurations with one or more HEA ports, where n is the number of HEA ports, is 512 MB + n x 102 MB.
It is advisable to check out the FP25 Release Notes at:
http://www-01.ibm.com/support/docview.wss?uid=isg400000800
Before you begin, make sure that you have the Virtual I/O Server Version 2.2.1.0 Installation DVD available, and then follow the Virtual I/O Server installation procedure described in IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
Creating a NIM mksysb for VIOS 2.2.1.0 from the DVDs (there is a THIRD mksysb image on the second DVD of the distribution) is useful for NIM installations. Appending the third image to the result of the concatenation of the two mksysb images from the first DVD yielded a successful SPOT creation, as follows:
1. Mount VIOS 2.2.1.0 DVD 1 of 2
2. cd /usr/sys/inst.images
3. cat mksysb_image mksysb_image2 > /
4. Unmount the first DVD
5. Mount VIOS 2.2.1.0 DVD 2 of 2
6. cd /usr/sys/inst.images
7. cat mksysb_image >> /
Then proceed with NIM resource creation.
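After the combined mksysb file has been copied to the NIM master, the NIM resources can be defined and used for the installation. The following commands are a sketch only; the resource names and file system locations are example values, and they assume the combined image was saved as /export/mksysb/vios_2.2.1.0.mksysb on the NIM master:
# nim -o define -t mksysb -a server=master -a location=/export/mksysb/vios_2.2.1.0.mksysb vios_2210_mksysb
# nim -o define -t spot -a server=master -a source=vios_2210_mksysb -a location=/export/spot vios_2210_spot
The mksysb resource is defined first and the SPOT is then created from it; both resources can afterwards be allocated to the Virtual I/O Server client partition for a NIM installation.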
  • 234. 5.1.2 Migrating from an HMC Before you begin the migration from an HMC, make sure that the following requirements are fulfilled: An HMC is attached to the system. The HMC version is at a minimum level of V7R7.4.0 or later, and the server firmware is at the appropriate level. You have the Virtual I/O Server Version 2.x Migration DVD. You have hmcsuperadmin authority. Run the backupios command and save the mksysb image to a secure location. To start the Virtual I/O Server migration, follow these steps: 1. Insert the Virtual I/O Server Version 2.x Migration DVD into the DVD drive of the HMC. 2. If the HMC is communicating with the Flexible Service Processor (FSP) through a private network channel (for example, HMC (eth0) = 192.168.0.101) and the installation is over a private network, then the INSTALLIOS_PRIVATE_IF=eth0 variable will need to be exported to force the network installation over that private interface. Not doing so will prevent the client logical partition from successfully running a BOOTP from the HMC. The INSTALLIOS_PRIVATE_IF variable is not required for all public network installations. To use the variable, type the following command from the HMC command line: export INSTALLIOS_PRIVATE_IF=interface where interface is the network interface through which the installation should take place. 3. To begin the migration installation, enter the following command from the HMC command line: installios 4. Choose the server where your Virtual I/O Server partition is located. 5. Select the Virtual I/O Server partition you want to migrate. 6. Select the partition profile. 7. Enter the source of the installation images. The default installation media is /dev/cdrom. 8. Enter the IP address of the Virtual I/O Server partition. 9. Enter the subnet mask of the Virtual I/O Server partition. Chapter 5. Virtual I/O Server maintenance 197
10.Enter the IP address of the gateway.
11.Enter the speed for the Ethernet interface.
12.Enter the information indicating whether it is full duplex or half duplex.
Remember: The installios command defaults to the network setting of 100 Mbps/full duplex for its speed and duplex setting. Check the network switch configuration or consult the network administrator to see what the correct speed/duplex setting is in your environment.
13.Enter no if prompted to have the client’s network configured after the installation.
14.The information for all available network adapters is retrieved. At that point the Virtual I/O Server partition reboots. Choose the correct physical Ethernet adapter.
15.Enter the appropriate language and locale.
16.Verify that your settings are correct. If so, press Enter and proceed with the installation.
After the migration is complete, the Virtual I/O Server partition is restarted to the configuration that it had before the migration installation. Run the ioslevel command and verify that the migration was successful and it is at the expected level.
Tip: After you successfully migrate to Virtual I/O Server Version 2.x, if you had manually added multi-path drivers such as IBM SDD or IBM SDDPCM, then you need to remove them and install the corresponding version for an AIX Version 6.1 kernel. See your multi-path driver vendor’s documentation for the correct replacement procedure.
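To verify the result, the level can be checked from the padmin shell, and the root shell can be used to confirm which multi-path driver file sets are currently installed. The output shown is illustrative only and depends on your environment:
$ ioslevel
2.2.1.0
$ oem_setup_env
# lslpp -l | grep -i sdd
# exit
If SDD or SDDPCM file sets from the previous release are still listed, replace them as described in the tip above before putting the Virtual I/O Server back into production.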
• 236. Important: Do not use the updateios command to migrate the Virtual I/O Server.
To start the Virtual I/O Server migration, follow these steps:
1. Insert the Virtual I/O Server Version 2.x Migration DVD into the DVD drive assigned to the Virtual I/O Server partition.
2. Shut down the Virtual I/O Server partition as follows:
– On a Virtual I/O Server command line, execute the command shutdown -force and wait for the shutdown to complete.
or
– Check the Virtual I/O Server partition on the HMC menu by clicking Systems Management → Servers → <name_of_server>.
– Click Tasks → Operations → Shutdown.
– In the Shutdown menu, select delayed, click OK, and wait for the shutdown to complete.
3. Activate the Virtual I/O Server partition and boot it into the SMS menu by clicking Tasks → Operations → Activate.
4. In the subsequent window, select the correct profile, select Open a terminal window or console session, and then select SMS as the Boot mode in the Advanced selection. Click OK.
5. A console window opens and the partition boots into the SMS main menu.
6. In the SMS menu, enter 5 for option 5. Select Boot Options. Then press Enter.
7. Enter 1 for option 1. Select Install/Boot device and press Enter.
8. Enter 3 for option 3. CD/DVD and press Enter.
9. Select 6 for option 6. List all devices and press Enter.
10.Select the installation drive and press Enter.
11.Enter 2 for option 2. Normal Mode Boot and press Enter.
12.Enter 1 for option Yes and press Enter.
Chapter 5. Virtual I/O Server maintenance 199
  • 237. 13.The partition will now boot from the Migration DVD. Figure 5-1 shows the menu appearing after a few moments. Select the desired console and press Enter. Figure 5-1 Define the System Console 14.Type 1 in the next panel and press Enter to use English during the installation.200 IBM PowerVM Virtualization Managing and Monitoring
  • 238. 15.The migration proceeds and the main menu will appear as shown in Figure 5-2. Figure 5-2 Installation and Maintenance main menu16.Type 1 to select option 1 Start Install Now with Default Settings, or verify the installation settings by choosing option 2 Change/Show Installation Settings and Install. Then press Enter. Chapter 5. Virtual I/O Server maintenance 201
  • 239. 17.Figure 5-3 shows the Virtual I/O Server Installation and Settings menu. Figure 5-3 Virtual I/O Server Migration Installation and Settings Type option 1 to verify the system settings.202 IBM PowerVM Virtualization Managing and Monitoring
  • 240. 18.Figure 5-4 shows the menu where you can select the disks for migration. In our example we had a mirrored Virtual I/O Server Version 1.5 environment and therefore we used option 1. Figure 5-4 Change Disk Where You Want to Install Tip: Here you can see that the existing Virtual I/O Server is reported as an AIX 5.3 system. Be aware that other disks, not part of the Virtual I/O Server rootvg, can also have AIX 5.3 installed. The first two disks shown in Figure 5-4 are internal SAS disks where the existing Virtual I/O Server 1.x resides. hdisk4 is another Virtual I/O Server installation on a SAN LUN. Chapter 5. Virtual I/O Server maintenance 203
  • 241. 19.Type 0 to continue and start the migration as shown in Figure 5-5. Figure 5-5 Virtual I/O Server Migration Installation and Settings - start migration204 IBM PowerVM Virtualization Managing and Monitoring
  • 242. 20.The migration will start and then prompt you for a final confirmation as shown in Figure 5-6. At this point you can still stop the migration and boot up your existing Virtual I/O Server Version 1.x environment. Figure 5-6 Migration Confirmation Chapter 5. Virtual I/O Server maintenance 205
  • 243. 21.Type 0 to continue with the migration. After a few seconds, the migration will start as shown in Figure 5-7. Figure 5-7 Running migration The running migration process might take some time to complete.206 IBM PowerVM Virtualization Managing and Monitoring
• 244. 22.After the migration has completed, you must set the terminal type. Enter vt320 and press Enter as shown in Figure 5-8. Figure 5-8 Set Terminal Type
23.Accept the license agreements.
24.After accepting the license agreements, exit the menu by pressing F10 (or using Esc+0). You will see the Virtual I/O Server login screen.
25.Log in as padmin and verify the new Virtual I/O Server version with the ioslevel command (see the example after these steps).
26.Check the configuration of all disks and Ethernet adapters on the Virtual I/O Server and the mapping of the virtual resources to the virtual I/O clients. Use the lsmap -all and lsdev -virtual commands.
27.Start the client partitions.
Verify the Virtual I/O Server environment, document the update, and create a new backup of your Virtual I/O Server.
Chapter 5. Virtual I/O Server maintenance 207
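For example, a quick post-migration check might look similar to the following. The ioslevel output shown here is only an illustration; the level you see depends on the Migration DVD that was used:
$ ioslevel
2.2.1.4
$ lsdev -virtual
$ lsmap -all
Compare the virtual SCSI mappings and the Shared Ethernet Adapter definitions in the lsdev and lsmap output against the documentation you collected before the migration.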
• 245. Remember: After you successfully migrate to Virtual I/O Server Version 2.1, and if you had manually added multi-path drivers (such as IBM SDD or IBM SDDPCM) on your previous Virtual I/O Server Version 1.x, you must remove them and install the corresponding version for an AIX Version 6.1 kernel. See your multi-path driver vendor’s documentation for the correct replacement procedure.
5.1.4 Migrating from a DVD that is managed by an IVM
Before you begin the migration from a DVD using the Integrated Virtualization Manager (IVM), make sure that the following requirements are fulfilled: A DVD drive is assigned to the Virtual I/O Server partition and you have the Virtual I/O Server Version 2.x Migration DVD. The Virtual I/O Server is currently at version 1.3 or later. The partition profile data for the management partition and its clients is backed up before you back up the Virtual I/O Server. Use the bkprofdata command to save the partition configuration data to a secure location.
Restriction: The IVM configuration in Virtual I/O Server 2.x is not backward-compatible. If you want to revert to an earlier version of the Virtual I/O Server, you must restore the partition configuration data from the backup file.
Run the backupios command, and save the mksysb image to a secure location.
To start the Virtual I/O Server migration, follow these steps:
1. This step is for a blade server environment only: Access the Virtual I/O Server logical partition using the management module of the blade server:
a. Verify that all logical partitions except the Virtual I/O Server logical partition are shut down.
b. Insert the Virtual I/O Server Migration DVD into the DVD drive assigned to your Virtual I/O Server partition.
c. Use telnet to connect to the management module of the blade server on which the Virtual I/O Server logical partition is located.
d. Enter the command env -T system:blade[x], where x is the specific number of the blade to be migrated.
208 IBM PowerVM Virtualization Managing and Monitoring
• 246. e. Enter the console command.
f. Log in to the Virtual I/O Server as the padmin user.
g. Enter the shutdown -restart command.
h. When the system management services (SMS) logo appears, select 1 to enter the SMS menu.
i. Skip to step 3.
2. This step is for a non-blade server environment only. Access the Virtual I/O Server partition using the Advanced System Management Interface (ASMI) with a Power Systems server that is not managed by an HMC:
a. Verify that all logical partitions except the Virtual I/O Server logical partition are shut down.
b. Insert the Virtual I/O Server Migration DVD into the Virtual I/O Server logical partition.
c. Log in to the ASCII terminal to communicate with the Virtual I/O Server. If you need assistance, see Access the ASMI without an HMC at: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphby/ascii.htm
d. Sign on to the Virtual I/O Server using the padmin user.
e. Enter the shutdown -restart command.
f. When the SMS logo appears, select 1 to enter the SMS menu.
3. Select the boot device:
a. Select option 5 Select Boot Options and press Enter.
b. Select option 1 Select Install/Boot Device and press Enter.
c. Select IDE and press Enter.
d. Select the device number that corresponds to the DVD and press Enter. (You can also select List all devices, select the device number from a list, and press Enter.)
e. Select Normal mode boot.
f. Select Yes to exit SMS.
4. Install the Virtual I/O Server: Follow the steps described in 5.1.3, “Migrating from a DVD that is managed by an HMC” on page 198, beginning with step 13.
Chapter 5. Virtual I/O Server maintenance 209
• 247. Remember: After you successfully migrate to Virtual I/O Server Version 2.x, and if you had manually added multi-path drivers (such as IBM SDD or IBM SDDPCM) on your previous Virtual I/O Server Version 1.x, you need to remove them and install the corresponding version for an AIX Version 6.1 kernel. See your multi-path driver vendor’s documentation for the correct replacement procedure, or see the following paper: http://www-01.ibm.com/support/search.wss?rs=540&tc=ST52G7&dc=DA480+DB100&dtm
5.2 Virtual I/O Server backup strategy
A complete disaster recovery strategy for the Virtual I/O Server should include backing up several components such that you can recover the virtual devices and their physical backing devices. The Virtual I/O Server contains the following types of information that you need to back up:
The Virtual I/O Server operating system includes the base code, applied fix packs, custom device drivers to support disk subsystems, Kerberos, and LDAP client configurations. All of this information is backed up when you use the backupios command.
User-defined virtual devices include metadata, such as virtual device mappings, that define the relationship between the physical environment and the virtual environment. This data can be saved in two ways:
– If you plan to restore the configuration to the same Virtual I/O Server partition from which it was backed up, you can use the viosbr command.
– If you plan to restore the configuration to a separate Virtual I/O Server (for example, in case of a disaster), you can save the data as described in 5.6, “Backing up user-defined virtual devices using backupios” on page 223 to a location that is automatically backed up when you use the backupios command.
The user-defined virtual device configuration will also be restored from the backup created using the backupios command. Nevertheless, regular backups using the viosbr command should be taken because they allow you to restore user-defined virtual device configurations in an easy way. They also allow you to reinstall the Virtual I/O Server and then reapply the user-defined virtual device configuration.
210 IBM PowerVM Virtualization Managing and Monitoring
• 248. In a complete disaster recovery scenario, the Virtual I/O Server can be restored to a new or repaired system. You must then back up both the Virtual I/O Server and user-defined virtual devices.
Restriction: It is not possible to restore a suspended partition to a separate server. The only way to move a suspended partition to a different server is by using partition mobility. However, this requires both the source and the target systems to be available.
Furthermore, in this situation, you must also back up the following components of your environment to fully recover your Virtual I/O Server configuration:
External device configurations, such as Storage Area Network (SAN) devices.
Resources defined on the Hardware Management Console (HMC) or on the Integrated Virtualization Manager (IVM), such as processor and memory allocations, and physical or virtual adapter configuration.
The operating systems and applications running in the client logical partitions.
5.2.1 Backing up external device configuration
Planning should be included in the end-to-end backup strategy for the event that a natural or man-made disaster destroys a complete site. This is probably part of your disaster recovery strategy, but consider it in the complete backup strategy. The backup strategy for this depends on the hardware specifics of the storage, networking equipment, and SAN devices, to name but a few. Examples of the type of information you will need to record include the network Virtual Local Area Network (VLAN) or Logical Unit Number (LUN) information from a storage subsystem. This information is beyond the scope of this document, but it is mentioned here to make you aware that a complete disaster recovery solution for a physical or virtual server environment has a dependency on this information. The method to collect and record the information depends not only on the vendor and model of the infrastructure systems at the primary site, but also on what is present at the disaster recovery site.
5.2.2 Backing up HMC resources
If the system is managed by an HMC, the HMC information needs to be backed up. The definition of the Virtual I/O Server logical partition on the HMC includes, for example, how much CPU and memory and what physical adapters are to be used. In addition to this, you have the virtual device configuration (for example,
Chapter 5. Virtual I/O Server maintenance 211
  • 249. virtual Ethernet adapters and to which virtual LAN ID they belong) that needs to be captured. Note that, especially if you are planning for disaster recovery, you might have to rebuild selected HMC profiles from scratch on new hardware. In this case, it is important to have detailed documentation of the configuration, such as how many Ethernet cards are needed. Using the system plans and the viewer can help record such information, but you should check that this is appropriate and that it records all the information needed in every case. Starting with HMC V7, you can save the current system configuration to an HMC system plan. The system plan can be redeployed to rebuild the complete partition configuration. Tip: Check that the system plan is valid by viewing the report. Look for a message saying that the system plan cannot be deployed (in red). Refer to the mksysplan command and the HMC interface for more information. Also note that an HMC backup must be restored on hardware supporting that level of backup. In a disaster recovery scenario, this is worth a check.5.2.3 Backing up IVM resources If the system is managed by the Integrated Virtualization Manager, you need to back up your partition profile data for the management partition and its clients before you back up the Virtual I/O Server operating system. To do so, from the Service Management menu, click Backup/Restore. The Backup/Restore page is displayed. Then click Generate Backup. This operation can also be done from the Virtual I/O Server. To do so, enter this command: bkprofdata -o backup -f /home/padmin/profile.bak5.2.4 Backing up operating systems from the client logical partitions Backing up and restoring the operating systems and applications running in the client logical partitions is a separate topic. This is because the Virtual I/O Server manages only the devices and the linking of these devices along with the Virtual I/O operating system itself. The AIX, IBM i, or Linux operating system-based clients of the Virtual I/O Server should have a backup strategy independently defined as part of your existing server backup strategy.212 IBM PowerVM Virtualization Managing and Monitoring
• 250. For example, if you have an AIX 6.1 server made up of virtual disks and virtual networks, you would still have an mksysb, savevg, or equivalent strategy in place to back up the system. This backup strategy can rely on the virtual infrastructure, for example, backing up to a Network Installation Manager (NIM) or IBM Tivoli Storage Manager server over a virtual network interface through a physical Shared Ethernet Adapter.
5.2.5 Backing up the Virtual I/O Server operating system
The Virtual I/O Server operating system consists of the base code, fix packs, custom device drivers to support disk subsystems, and user-defined customization. An example of user-defined customization can be as simple as changing the Message of the Day or the security settings. These settings, after an initial setup, will probably not change apart from the application of fix packs, so a sensible time to back up the Virtual I/O Server is after fix packs have been applied or configuration changes have been made.
Although we discuss the user-defined virtual devices in the next section, it is worth noting that the backup of the Virtual I/O Server will capture some of this data. With this fact in mind, you can define the schedule for the Virtual I/O operating system backups to occur more frequently to cover both the Virtual I/O operating system and the user-defined devices in one single step.
Starting with the release of Virtual I/O Server Version 1.3, you can schedule jobs with the crontab command. You can schedule the following backup steps to take place at regular intervals using this command. The backupios command creates a backup of the Virtual I/O Server to a bootable tape, a DVD, or a file system (local or a remotely mounted Network File System).
Remember: Be aware of the following points: Virtual device mappings (that is, customized metadata) are backed up by default. Nothing special needs to happen. Client data is not backed up. Contents of the virtual media repository can be excluded by using the -nomedialib flag.
Chapter 5. Virtual I/O Server maintenance 213
• 251. You can back up and restore the Virtual I/O Server by the means listed in Table 5-1.
Table 5-1 Virtual I/O Server backup and restore methods
Backup method | Media | Restore method
To tape | Tape | From tape
To DVD | DVD-RAM | From DVD
To remote file system | nim_resources.tar image | From an HMC using the Network Installation Management on Linux (NIMOL) facility and the installios command
To remote file system | mksysb image | From an AIX NIM server and a standard mksysb system installation
Tivoli Storage Manager | mksysb image | Tivoli Storage Manager
5.3 Scheduling backups of the Virtual I/O Server
You can schedule regular backups of the Virtual I/O Server and user-defined virtual devices to ensure that your backup copy accurately reflects the current configuration. To ensure that your backup of the Virtual I/O Server accurately reflects your current running Virtual I/O Server, you should back up the Virtual I/O Server each time that its configuration changes. For example:
Changing the Virtual I/O Server, for example installing a fix pack.
Adding, deleting, or changing the external device configuration, such as changing the SAN configuration.
Adding, deleting, or changing resource allocations and assignments for the Virtual I/O Server, such as memory, processors, or virtual and physical devices.
Adding, deleting, or changing user-defined virtual device configurations, such as virtual device mappings.
You can then back up your Virtual I/O Server manually after any of these modifications. You can also schedule backups on a regular basis using the crontab function. You therefore create a script for backing up the Virtual I/O Server, and save it in a
214 IBM PowerVM Virtualization Managing and Monitoring
• 252. directory that is accessible to the padmin user ID. For example, create a script called backup and save it in the /home/padmin directory. Ensure that your script includes commands for backing up the Virtual I/O Server and saving information about user-defined virtual devices. Mounting the image directory of your NIM server and posting the backups to this directory is an approach to quickly deploy a backup if the need arises.
Then create a crontab file entry that runs the backup script at regular intervals. For example, to run backup every Saturday at 2:00 a.m., type the following commands:
$ crontab -e
0 2 * * 6 /home/padmin/backup
When you are finished, remember to save and exit.
5.4 Backing up the Virtual I/O Server operating system
The following topics explain the options that can be used to back up the Virtual I/O Server.
5.4.1 Backing up to tape
You can back up the Virtual I/O Server base code, applied fix packs, custom device drivers to support disk subsystems, and some user-defined metadata to tape. If the system is managed by the Integrated Virtualization Manager, you need to back up your partition profile data for the management partition and its clients before you back up the Virtual I/O Server. To do so, see 5.2.3, “Backing up IVM resources” on page 212.
You can find the device name on the Virtual I/O Server by typing the following command:
$ lsdev -type tape
name status description
rmt0 Available Other SCSI Tape Drive
If the device is in the Defined state, type the following command where dev is the name of your tape device:
cfgdev -dev dev
Chapter 5. Virtual I/O Server maintenance 215
  • 253. Run the backupios command with the -tape option. Specify the path to the device. Use the -accept flag to automatically accept licences. For example: backupios -tape /dev/rmt0 -accept Example 5-1 illustrates a backupios command execution to back up the Virtual I/O Server on a tape. Example 5-1 Backing up the Virtual I/O Server to tape $ backupios -tape /dev/rmt0 Creating information file for volume group volgrp01. Creating information file for volume group storage01. Backup in progress. This command can take a considerable amount of time to complete, please be patient... Creating information file (/image.data) for rootvg. Creating tape boot image.............. Creating list of files to back up. Backing up 44950 files........................... 44950 of 44950 files (100%) 0512-038 mksysb: Backup Completed Successfully.5.4.2 Backing up to a DVD-RAM You can back up the Virtual I/O Server base code, applied fix packs, custom device drivers to support disk subsystems, and some user-defined metadata to DVD. If the system is managed by the Integrated Virtualization Manager, then you need to back up your partition profile data for the management partition and its clients before you back up the Virtual I/O Server. To do so, see 5.2.3, “Backing up IVM resources” on page 212. To back up the Virtual I/O Server to one or more DVDs, you generally use DVD-RAM media. Vendor disk drives might support burning to additional disk types, such as CD-RW and DVD-R. See the documentation for your drive to determine which disk types are supported. DVD-RAM media can support both -cdformat and -udf format flags. DVD-R media only supports the -cdformat.216 IBM PowerVM Virtualization Managing and Monitoring
  • 254. The DVD device cannot be virtualized and assigned to a client partition when youuse the backupios command. Remove the device mapping from the client beforeproceeding with the backup.You can find the device name on the Virtual I/O Server by typing the followingcommand:$ lsdev -type opticalname status descriptioncd0 Available SATA DVD-RAM DriveIf the device is in the Defined state, type the following command where dev is thename of your CD or DVD device:cfgdev -dev devRun the backupios command with the -cd option. Specify the path to the device.Use the -accept flag to automatically accept licenses. For example:backupios -cd /dev/cd0 -acceptExample 5-2 illustrates a backupios command execution to back up the VirtualI/O Server on a DVD-RAM.Example 5-2 Backing up the Virtual I/O Server to DVD-RAM$ backupios -cd /dev/cd0 -udf -acceptCreating information file for volume group volgrp01.Creating information file for volume group storage01.Backup in progress. This command can take a considerable amount of timeto complete, please be patient...Initializing mkcd log: /var/adm/ras/mkcd.log...Verifying command parameters...Creating image.data file...Creating temporary file system: /mkcd/mksysb_image...Creating mksysb image...Creating list of files to back up.Backing up 44933 files.........44933 of 44933 files (100%)0512-038 mksysb: Backup Completed Successfully.Populating the CD or DVD file system...Copying backup to the CD or DVD file system.............................................................................................Building chrp boot image... Chapter 5. Virtual I/O Server maintenance 217
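Relating to the device mapping removal mentioned at the beginning of this section, the following is a minimal sketch of how an optical device mapping backed by cd0 could be located and removed before the backup, and recreated afterward. The vhost adapter name is a placeholder; check your own lsmap output for the correct adapter:
$ lsmap -all | grep -p cd0
$ rmvdev -vdev cd0
After the backup completes, recreate the mapping with a command similar to mkvdev -vdev cd0 -vadapter vhostN, where vhostN is the virtual SCSI server adapter that previously held the mapping.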
  • 255. Tip: If the Virtual I/O Server does not fit on one DVD, then the backupios command provides instructions for disk replacement and removal until all the volumes have been created.5.4.3 Backing up to a remote file The major difference for this type of backup compared to tape or DVD media is that all of the previous commands resulted in a form of bootable media that can be used to directly recover the Virtual I/O Server. Backing up the Virtual I/O Server base code, applied fix packs, custom device drivers to support disk subsystems, and some user-defined metadata to a file will result in either: A nim_resources.tar file that contains all the information needed for a restore. This is the preferred solution if you intend to restore the Virtual I/O Server on the same system. This backup file can both be restored by the HMC or a NIM server. An mksysb image. This solution is preferred if you intend to restore the Virtual I/O Server from a Network Installation Management (NIM) server. Tip: The mksysb backup of the Virtual I/O Server can be extracted from the tar file created in a full backup, so either method is appropriate if the restoration method uses a NIM server. Whichever method you choose, if the system is managed by the Integrated Virtualization Manager, you need to back up your partition profile data for the management partition and its clients before you back up the Virtual I/O Server. To do so, see 5.2.3, “Backing up IVM resources” on page 212. Mounting the remote file system You can use the backupios command to write to a local file on the Virtual I/O Server, but the more common scenario is to perform a backup to a remote NFS-based storage. The ideal situation might be to use the NIM server as the destination because this server can be used to restore these backups. In the following example, a NIM server has a host name of nim_server and the Virtual I/O Server is vios1. The first step is to set up the NFS-based storage export on the NIM server. Here, we export a file system named /export/ios_backup, and in this case, /etc/exports looks similar to the following: $ mkdir /export/ios_backup218 IBM PowerVM Virtualization Managing and Monitoring
  • 256. $ mknfsexp -d /export/ios_backup -B -S sys,krb5p,krb5i,krb5,dh -t rw -r vios1$ grep lpar01 /etc/exports/export/ios_backup -sec=sys:krb5p:krb5i:krb5:dh,rw,root=vios1 Important: The NFS server must have the root access NFS attribute set on the file system exported to the Virtual I/O Server logical partition for the backup to succeed. In addition, make sure that the name resolution is functioning from the NIM server to the Virtual I/O Server and back again (reverse resolution) for both the IP and host name. To edit the name resolution on the Virtual I/O Server, use the hostmap command to manipulate the /etc/hosts file or the cfgnamesrv command to change the DNS parameters. The backup of the Virtual I/O Server can be large, so ensure that the system ulimits parameter in the /etc/security/limits file on the NIM server is set to -1 and therefore will allow the creation of large files.With the NFS export and name resolution set up, the file system needs to bemounted on the Virtual I/O Server. You can use the mount command:$ mkdir /mnt/backup$ mount nim_server:/export/ios_backup /mnt/backup Remember: The remote file system should be mounted automatically at bootup of the Virtual I/O Server to simplify the scheduling of regular backups.Backing up to a nim_resources.tar fileAfter the remote file system is mounted, you can start the backup operation tothe nim_resources.tar file.Backing up the Virtual I/O Server to a remote file system creates thenim_resources.tar image in the directory you specify. The nim_resources.tarfile contains all the necessary resources to restore the Virtual I/O Server,including the mksysb image, the bosinst.data file, the network boot image, andthe Shared Product Object Tree (SPOT) resource.The backupios command empties the target_disks_stanza section ofbosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb filegenerated by the command to be cloned to another logical partition. If you plan touse the nim_resources.tar image to install to a specific disk, then you need torepopulate the target_disk_stanza section of bosinst.data and replace this file inthe nim_resources.tar image. All other parts of the nim_resources.tar imagemust remain unchanged. Chapter 5. Virtual I/O Server maintenance 219
  • 257. Run the backupios command with the -file option. Specify the path to the target directory. For example: backupios -file /mnt/backup Example 5-3 illustrates a backupios command execution to back up the Virtual I/O Server on a nim_resources.tar file. Example 5-3 Backing up the Virtual I/O Server to the nim_resources.tar file $ backupios -file /mnt/backup Creating information file for volume group storage01. Creating information file for volume group volgrp01. Backup in progress. This command can take a considerable amount of time to complete, please be patient... This command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC as described in “Restoring from a nim_resources.tar file with the HMC” on page 232. Remember: The argument for the backupios -file command is a directory. The nim_resources.tar file is stored in this directory. Backing up to an mksysb file Alternatively, after the remote file system is mounted, you can start the backup operation to an mksysb file. The mksysb image is an installable image of the root volume group in a file. Run the backupios command with the -file option. Specify the path to the target directory and specify the -mksysb parameter. For example: backupios -file /mnt/backup -mksysb Example 5-4 illustrates a backupios command execution to back up the Virtual I/O Server on a mksysb file. Example 5-4 Backing up the Virtual I/O Server to the mksysb image $ backupios -file /mnt/VIOS_BACKUP_13Oct2008.mksysb -mksysb /mnt/VIOS_BACKUP_13Oct2008.mksysb doesnt exist. Creating /mnt/VIOS_BACKUP_13Oct2008.mksysb Creating information file for volume group storage01.220 IBM PowerVM Virtualization Managing and Monitoring
• 258. Creating information file for volume group volgrp01.
Backup in progress. This command can take a considerable amount of time to complete, please be patient...
Creating information file (/image.data) for rootvg.
Creating list of files to back up...
Backing up 45016 files...........................
45016 of 45016 files (100%)
0512-038 savevg: Backup Completed Successfully.
Remember: If you intend to use a NIM server for the restoration, it must be running a level of AIX that can support the Virtual I/O Server installation. For this reason, the NIM server should be running the latest technology level and service packs at all times. For the restoration of any backups of a Virtual I/O Server Version 2.1, your NIM server needs to be at the latest AIX Version 6.1 level. For a Virtual I/O Server 1.x environment, your NIM server needs to be at the latest AIX Version 5.3 level.
5.5 Backing up user-defined virtual devices
After you have backed up the Virtual I/O Server operating system, you still need to back up the user-defined virtual devices:
If you are restoring to the same server, information such as the data structures (storage pools or volume groups and logical volumes) held on non-rootvg disks might still be available.
If you are restoring to new hardware, these devices cannot be automatically recovered because the disk structures will not exist.
If the physical devices exist in the same location and structures such as logical volumes are intact, the virtual devices such as virtual target SCSI and Shared Ethernet Adapters are recovered during the restoration. In the disaster recovery situation where these disk structures do not exist and network cards are at different location codes, you need to make sure to back up the following:
Any user-defined disk structures such as storage pools or volume groups and logical volumes.
The linking of the virtual devices to the physical devices.
Chapter 5. Virtual I/O Server maintenance 221
• 259. These devices will mostly be created at the Virtual I/O Server build and deploy time, but will change depending on when new clients are added or changes are made. For this reason, a weekly schedule or a manual backup procedure when configuration changes are made is appropriate.
User-defined virtual devices include metadata, such as virtual device mappings, that define the relationship between the physical and the virtual environment. You can back up this data in the following two ways:
Saving the configuration information to a location that is automatically backed up when the backupios command is run. This is required if you want to restore the configuration to a separate Virtual I/O Server.
Using the viosbr command you can save the user-defined virtual device configuration and restore it to the same Virtual I/O Server from where it was backed up.
The following sections describe both methods in more detail.
5.5.1 Backing up user-defined virtual devices using viosbr
The viosbr command backs up all of the data that is relevant for recovering a Virtual I/O Server after a reinstallation, such as:
Logical devices: For example, logical volume or file-backed storage pool, virtual media repository, or paging space device configurations
Virtual devices: For example, Shared Ethernet Adapter, virtual SCSI server, and virtual Fibre Channel server adapter configurations
Device attributes: For example, device attributes for disk, network, or Fibre Channel devices
Tip: At the time of writing, backing up and restoring of shared storage pool configurations using the viosbr command did not yet work, although the respective command flags like -clustername are already available in the viosbr command.
222 IBM PowerVM Virtualization Managing and Monitoring
• 260. Example 5-5 shows an example of the viosbr command. Using the -file option, you can specify the name of the backup file. By default, the file will be written to the cfgbackups subdirectory in the home directory of the padmin user. You can also specify an alternate location by specifying a directory and file name like -file /tmp/backup. After the backup has finished, you can display the backup file using the viosbr -view -list command.
Example 5-5 Performing a backup using the viosbr command
$ viosbr -backup -file backup_1
Backup of this node(P7_1_vios2) successfull
$ viosbr -view -list
backup_1.tar.gz
5.5.2 Scheduling regular backups using the viosbr command
Using the viosbr command, regular backups of the user-defined virtual device configuration can be scheduled on a daily, weekly, or monthly basis. Example 5-6 shows how to schedule a daily backup where 10 versions of backup files will be kept.
Example 5-6 Scheduling regular backups using the viosbr command
$ viosbr -backup -file backup -frequency daily -numfiles 10
Backup of this node(P7_1_vios2) successfull
5.6 Backing up user-defined virtual devices using backupios
For situations where you need to restore the user-defined virtual device configuration to another Virtual I/O Server, for example during disaster recovery, you have to use the backupios command. The following three categories of configuration data are required to rebuild the Virtual I/O Server configuration after a restore:
Disk structures: These are user-defined disk structures like the volume group information.
Device mappings: The mappings define the link between the physical devices and the virtual devices.
Chapter 5. Virtual I/O Server maintenance 223
  • 261. Additional information Virtual I/O Server configuration data like network routing information, tuning settings, and security settings. Backing up disk structures with savevgstruct Use the savevgstruct command to back up user-defined disk structures. This command writes a backup of the structure of a named volume group (and therefore storage pool) to the /home/ios/vgbackups directory. For example, assume you have the following storage pools: $ lssp Pool Size(mb) Free(mb) Alloc Size(mb) BDs rootvg 139776 107136 128 0 storage01 69888 69760 64 1 volgrp01 69888 69760 64 1 Then you run the savevgstruct storage01 command to back up the structure in the storage01 volume group: $ savevgstruct storage01 Creating information file for volume group storage01. Creating list of files to back up. Backing up 6 files 6 of 6 files (100%) 0512-038 savevg: Backup Completed Successfully. The savevgstruct command is automatically called before the backup commences for all active non-rootvg volume groups or storage pools on a Virtual I/O Server when the backupios command is run. Because this command is called before the backup commences, the volume group structures will be included in the system backup. For this reason, you can use the backupios command to back up the disk structure as well, so the frequency that this command runs might increase. Remember: The volume groups or storage pools need to be activated for the backup to succeed. Only active volume groups or storage pools are automatically backed up by the backupios command. Use the lsvg or lssp command to list and activatevg to activate the volume groups or storage pools if necessary before starting the backup.224 IBM PowerVM Virtualization Managing and Monitoring
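The volume group structures saved by savevgstruct (or automatically by backupios) can later be listed and restored with the restorevgstruct command. The following is a minimal sketch; hdisk5 is a hypothetical empty target disk and must be replaced with the disk on which you want the volume group recreated:
$ restorevgstruct -ls
$ restorevgstruct -vg storage01 hdisk5
The -ls flag lists the volume group structure backups stored in /home/ios/vgbackups, and the -vg flag recreates the named volume group and its logical volume structures on the specified disk or disks.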
• 262. Backing up virtual devices linking information
The last item to back up is the linking information. You can gather this information from the output of the lsmap command as shown in Example 5-7.
Example 5-7 Sample output from the lsmap command
$ lsmap -net -all
SVEA Physloc
------ --------------------------------------------
ent2 U9117.MMA.101F170-V1-C11-T1
SEA ent5
Backing device ent0
Status Available
Physloc U789D.001.DQDYKYW-P1-C4-T1
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U9117.MMA.101F170-V1-C21 0x00000003
VTD aix61_rvg
Status Available
LUN 0x8100000000000000
Backing device hdisk7
Physloc U789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L1000000000000
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1 U9117.MMA.101F170-V1-C22 0x00000004
VTD aix53_rvg
Status Available
LUN 0x8100000000000000
Backing device hdisk8
Physloc U789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L2000000000000
From this example, the vhost0 device in slot 21 on the HMC (the C21 value in the location code) is linked to the hdisk7 device through the virtual target device named aix61_rvg. The vhost1 device holds the hdisk8 device through the virtual target device named aix53_rvg. For the network, the virtual Ethernet adapter ent2 is linked to the physical Ethernet adapter ent0, making ent5 the Shared Ethernet Adapter.
Chapter 5. Virtual I/O Server maintenance 225
  • 263. Consideration: The previous output does not gather information such as SEA control channels (for SEA Failover), IP addresses to ping, and whether threading is enabled for the SEA devices. These settings and any other changes that have been made (for example MTU settings) must be documented separately, as explained later in this section. Important: It is also vitally important to use the slot numbers as a reference for the virtual SCSI and virtual Ethernet devices, not the vhost number or ent number. The vhost and ent devices are assigned by the Virtual I/O Server because they are found at boot time or when the cfgdev command is run. If you add in more devices after subsequent boots or with the cfgdev command, these will be sequentially numbered. In the vhost0 example, the important information to note is that it is not vhost0 but that the virtual SCSI server in slot 21 (the C21 value in the location code) is mapped to a LUN disk hdisk7. The vhost and ent numbers are assigned sequentially by the Virtual I/O Server at initial discovery time and should be treated with caution to rebuild user-defined linking devices. Backing up Shared Storage Pool information If you are using a Shared Storage Pool as the cluster configuration, the storage pool and the logical unit configuration should be collected. You can get this information using the lscluster command and the lssp command as shown in Example 5-8. The lscluster command displays the cluster name and the repository and storage pool disks. The lssp command displays the pool name and the logical unit configuration. Example 5-8 Displaying shared storage pool information $ lscluster -d Storage Interface Query Cluster Name: clusterA Cluster uuid: 8e167044-0155-11e0-a50f-f61aa6a64371 Number of nodes reporting = 1 Number of nodes expected = 1 Node P7_1_vios1 Node uuid = 3a783312-f36f-11df-b987-00145ee9e161 Number of disk discovered = 3 cldisk2 state : UP uDid : 200B75BALB1111507210790003IBMfcp226 IBM PowerVM Virtualization Managing and Monitoring
  • 264. uUid : 5d7eace7-5213-07d6-e080-bcc62ff95386 type : CLUSDISK cldisk1 state : UP uDid : 200B75BALB1111407210790003IBMfcp uUid : 42499f99-5563-89ae-9453-07ff800a7e91 type : CLUSDISK caa_private0 state : UP uDid : uUid : 1264a0af-4a7e-fb93-62d7-6a33d5f17f35 type : REPDISK$ lssp -clustername clusterAPool Size(mb) Free(mb) LUs Type PoolIDpoolA 40704 40409 2 CLPOOL 2683031503901849366$ lssp -clustername clusterA -sp poolA -bdLu(Disk) Name Size(MB) Lu Udidtest1 106beea81a3d25723c9e3d1e72df34a296test2 20971d5586da279b2fa9a15d089c812514Backing up additional informationYou should also save the information about network settings, adapters, users,and security settings to the /home/padmin directory by running each command inconjunction with the tee command as follows: command | tee /home/padmin/filename Where: – command is the command that produces the information you want to save. – filename is the name of the file to which you want to save the information.The /home/padmin directory is backed up using the backupios command, andtherefore it is a good location to collect configuration information prior to abackup. Table 5-2 provides a summary of the commands that help you to savethe information.Table 5-2 Commands to save information about Virtual I/O Server Command Information provided (and saved) cfgnamesrv -ls Shows all system configuration database entries related to domain name server information used by local resolver routines. Chapter 5. Virtual I/O Server maintenance 227
• 265. Command: Information provided (and saved)
entstat -all devicename: Shows Ethernet driver and device statistics for the device specified. devicename is the name of a device. Run this command for each device whose attributes or statistics you want to save.
hostmap -ls: Shows all entries in the system configuration database.
ioslevel: Shows the current maintenance level of the Virtual I/O Server.
lsdev -dev devicename -attr: Shows the attributes of the device specified. devicename is the name of a device. Run this command for each device whose attributes or statistics you want to save. You generally want to save the customized device attributes. Try to keep track of them when managing the Virtual I/O Server.
lsdev -type adapter: Shows information about physical and logical adapters.
lsuser: Shows a list of all attributes of all system users.
netstat -routinfo: Shows the routing tables, including the user-configured and current costs of each route.
netstat -state: Shows the state of the network including errors, collisions, and packets transferred.
optimizenet -list: Shows characteristics of all network tuning parameters, including the current and reboot value, range, unit, type, and dependencies.
viosecure -firewall view: Shows a list of allowed ports.
viosecure -view -nonint: Shows all of the security level settings for non-interactive mode.
228 IBM PowerVM Virtualization Managing and Monitoring
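For example, the routing table and the customized attributes of one of the backing disks from the earlier lsmap output could be captured before a backup with commands similar to the following (substitute your own device and file names):
$ netstat -routinfo | tee /home/padmin/netstat_routinfo.txt
$ lsdev -dev hdisk7 -attr | tee /home/padmin/hdisk7_attr.txt
Because the files are written to /home/padmin, they are included automatically the next time the backupios command is run.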
  • 266. 5.6.1 Backing up using IBM Tivoli Storage Manager You can use the IBM Tivoli Storage Manager to automatically back up the Virtual I/O Server on regular intervals, or you can perform incremental backups. IBM Tivoli Storage Manager automated backup You can automate backups of the Virtual I/O Server using the crontab command and the Tivoli Storage Manager scheduler. Before you start, complete the following tasks: Ensure that you configured the Tivoli Storage Manager client on the Virtual I/O Server. If it is not configured, see 12.2, “Configuring the IBM Tivoli Storage Manager client” on page 457. Ensure that you are logged into the Virtual I/O Server as the administrator (padmin). To automate backups of the Virtual I/O Server, complete the following steps: 1. Write a script that creates an mksysb image of the Virtual I/O Server and save it in a directory that is accessible to the padmin user ID. For example, create a script called backup and save it in the /home/padmin directory. If you plan to restore the Virtual I/O Server to a separate system than the one from which it was backed up, ensure that your script includes commands for saving information about user-defined virtual devices. For more information, see the following tasks: – For instructions about how to create an mksysb image, see “Backing up to an mksysb file” on page 220. – For instructions about how to save user-defined virtual devices, see 5.5, “Backing up user-defined virtual devices” on page 221. 2. Create a crontab file entry that runs the backup script on a regular interval. For example, to create an mksysb image every Saturday at 2:00 a.m., type the following commands: $ crontab -e 0 2 * * 6 /home/padmin/backup When you are finished, remember to save and exit. 3. Work with the Tivoli Storage Manager administrator to associate the Tivoli Storage Manager client node with one or more schedules that are part of the policy domain. This task is not performed on the Tivoli Storage Manager client on the Virtual I/O Server, but by the Tivoli Storage Manager administrator on the Tivoli Storage Manager server. Chapter 5. Virtual I/O Server maintenance 229
  • 267. 4. Start the client scheduler and connect to the server schedule using the dsmc command as follows: dsmc -schedule 5. If you want the client scheduler to restart when the Virtual I/O Server restarts, add the following entry to the /etc/inittab file: itsm::once:/usr/bin/dsmc sched > /dev/null 2>&1 # TSM scheduler IBM Tivoli Storage Manager incremental backup You can back up the Virtual I/O Server at any time by performing an incremental backup with the Tivoli Storage Manager. Perform incremental backups in situations where the automated backup does not suit your needs. For example, before you upgrade the Virtual I/O Server, perform an incremental backup to ensure that you have a backup of the current configuration. Then, after you upgrade the Virtual I/O Server, perform another incremental backup to ensure that you have a backup of the upgraded configuration. Before you start, complete the following tasks: Ensure that you configured the Tivoli Storage Manager client on the Virtual I/O Server. For instructions, see 12.2, “Configuring the IBM Tivoli Storage Manager client” on page 457. Ensure that you have an mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server to a separate system than the one from which it was backed up, ensure that the mksysb includes information about user-defined virtual devices. For more information, see the following tasks: – For instructions about how to create an mksysb image, see “Backing up to an mksysb file” on page 220. – For instructions about how to save user-defined virtual devices, see 5.5, “Backing up user-defined virtual devices” on page 221. To perform an incremental backup of the Virtual I/O Server, run the dsmc command. For example: dsmc -incremental sourcefilespec where sourcefilespec is the directory path to where the mksysb file is located. For example, /home/padmin/mksysb_image.230 IBM PowerVM Virtualization Managing and Monitoring
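The backup script referenced in step 1 of the automated backup procedure (and in 5.3, “Scheduling backups of the Virtual I/O Server”) is not listed in this book. A minimal sketch follows; the NFS mount point, the backup file name, and the use of the full ioscli path are assumptions for illustration and must be adapted to your environment:
#!/usr/bin/ksh
# /home/padmin/backup - hypothetical example backup script
# Save the user-defined virtual device configuration (for same-server restores)
/usr/ios/cli/ioscli viosbr -backup -file vios_config
# Create an mksysb image of the Virtual I/O Server on the NFS-mounted backup file system
/usr/ios/cli/ioscli backupios -file /mnt/backup -mksysb
A script such as this can then be driven either by the crontab entry shown earlier or by the Tivoli Storage Manager scheduler.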
• 268. 5.7 Restoring the Virtual I/O Server
With all of the different backups described and the frequency discussed, we now describe how to rebuild the server from scratch. The situation we work through is a Virtual I/O Server hosting an AIX operating system-based client partition running on virtual disk and network. We work through the restore from the uninstalled bare metal Virtual I/O Server upward and discuss where each backup strategy will be used. This complete end-to-end solution is only for this extreme disaster recovery scenario. If you need to back up and restore a Virtual I/O Server onto the same server, the restoration of the operating system is probably of most interest.
5.7.1 Restoring the HMC configuration
In the most extreme case of a natural or man-made disaster that has destroyed or rendered unusable an entire data center, systems might have to be restored to a disaster recovery site. In this case, you need another HMC and server location to which to recover your settings. You should also have a disaster recovery server in place with your HMC profiles ready to start recovering your systems. The details of this are beyond the scope of this document but would, along with the following section, be the first steps for a disaster recovery.
5.7.2 Restoring other IT infrastructure devices
All other IT infrastructure devices, such as network routers, switches, storage area networks, and DNS servers, to name just a few, also need to be part of an overall IT disaster recovery solution. Having mentioned them, we say no more about them apart from making you aware that not just the Virtual I/O Server but the whole IT infrastructure will rely on these common services for a successful recovery.
5.7.3 Restoring the Virtual I/O Server operating system
This section details how to restore the Virtual I/O Server. We describe how to recover from a complete disaster. If you are restoring to a separate system, and if that system is managed by the Integrated Virtualization Manager, you need to restore your partition profile data for the management partition and its clients before you restore the Virtual I/O Server.
Chapter 5. Virtual I/O Server maintenance 231
  • 269. To do so, from the Service Management menu, select Backup/Restore. The Backup/Restore page is displayed. Then click Restore Partition Configuration. Restoring from DVD backup The backup procedures described in this chapter created bootable media that you can use to restore as stand-alone backups. Insert the first DVD into the DVD drive and boot the Virtual I/O server partition into SMS mode, making sure the DVD drive is assigned to the partition. Using the SMS menus, select the option to install from the DVD drive and work through the usual installation procedure. Consideration: If the DVD backup spanned multiple disks during the install, you will be prompted to insert the next disk in the set with a message similar to the following: Please remove volume 1, insert volume 2, and press the ENTER key. Restoring from tape backup The procedure for the tape is similar to the DVD procedure. Because this is a bootable media, simply place the backup media into the tape drive and boot the Virtual I/O Server partition into SMS mode. Select to install from the tape drive and follow the same procedure as previously described. Restoring from a nim_resources.tar file with the HMC If you made a full backup of the Virtual I/O Server to a nim_resources.tar file, you can use the HMC to restore it using the installios command. To do so, the tar file must be located either on the HMC, an NFS-accessible directory, or a DVD. To make the nim_resources.tar file accessible for restore, we performed the following steps: 1. We created a directory named backup using the mkdir /home/padmin/backup command. 2. We checked that the NFS server was exporting a file system with the showmount nfs_server command. 3. We mounted the NFS-exported file system onto the /home/padmin/backup directory. 4. We copied the tar file created in “Backing up to a nim_resources.tar file” on page 219 to the NFS mounted directory using the following command: $ cp /home/padmin/backup_loc/nim_resources.tar /home/padmin/backup232 IBM PowerVM Virtualization Managing and Monitoring
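For reference, steps 2 and 3 of this procedure might look similar to the following on the Virtual I/O Server. The NFS server name and the exported directory are hypothetical and must match your environment:
$ showmount nfs_server
$ mount nfs_server:/export/ios_backup /home/padmin/backup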
  • 270. At this stage, the backup is ready to be restored to the Virtual I/O Server partitionusing the installios command on the HMC or an AIX partition that is a NIMserver. The restore procedure will shut down the Virtual I/O Server partition if it isstill running. The following is an example of the command help:hscroot@hmc1:~> installios -?installios: usage: installios [-s managed_sys -S netmask -p partition -r profile -i client_addr -d source_dir -m mac_addr -g gateway [-P speed] [-D duplex] [-n] [-l language]] | -uUsing the installios command, the -s managed_sys option requires the HMCdefined system name, the -p partition option requires the name of the Virtual I/OServer partition, and the -r profile option requires the partition profile you wantto use to boot the Virtual I/O Server partition during the recovery.If you do not specify the -m flag and include the MAC address of the Virtual I/OServer being restored, the restore will take longer because the installioscommand shuts down the Virtual I/O Server and boots it in SMS to determine theMAC address. The following is an example of the use of this command:hscroot@hmc1:~> installios -s MT_B_p570_MMA_101F170 -S 255.255.254.0 -p vios1-r default -i 9.3.5.111 -d 9.3.5.5:/export_fs -m 00:02:55:d3:dc:34 -g 9.3.4.1 Tip: If you do not input a parameter, the installios command will prompt you for one. hscroot@hmc1:~> installios The following objects of type "managed system" were found. Please select one: 1. MT_B_p570_MMA_101F170 2. MT_A_p570_MMA_100F6A0 3. p550-SN106629E Enter a number (1-3): 1 The following objects of type "virtual I/O server partition" were found. Please select one: 1. vios2 2. vios1 Enter a number (1-2):At this point, open a terminal console on the server to which you are restoring incase user input is required. Then run the installios command as describedabove. Chapter 5. Virtual I/O Server maintenance 233
• 271. Following this command, NIMOL on the HMC takes over the NIM process and mounts the exported file system to process the backupios tar file created on the Virtual I/O Server previously. NIMOL on the HMC then proceeds with the installation of the Virtual I/O Server and a reboot of the partition completes the install.
Tips: The configure client network setting must be set to no when prompted by the installios command. This is because the physical adapter we are installing the backup through might already be used by an SEA and the IP configuration will fail if this is the case. Log in and configure the IP if necessary after the installation using a console session. If the command seems to be taking a long time to restore, this is most commonly caused by a speed or duplex misconfiguration in the network.
Restoring from a file with the NIM server
The installios command is also available on the NIM server, but at present it only supports installations from the base media of the Virtual I/O Server. The method we used from the NIM server was to install the mksysb image. This can either be the mksysb image generated with the -mksysb flag in the backupios command shown previously, or you can extract the mksysb image from the nim_resources.tar file. Whatever method you use, after you have stored the mksysb file on the NIM server, you must create a NIM mksysb resource as shown:
# nim -o define -t mksysb -a server=master -a location=/export/mksysb/VIOS_BACKUP_13Oct2008.mksysb VIOS_mksysb
# lsnim VIOS_mksysb
VIOS_mksysb resources mksysb
After the NIM mksysb resource has been successfully created, generate a SPOT from the NIM mksysb resource or use the SPOT available at the latest AIX technology and service pack level. To create the SPOT from the NIM mksysb resource, run the following command:
# nim -o define -t spot -a server=master -a location=/export/spot/ -a source=VIOS_mksysb VIOS_SPOT
Creating SPOT in "/export/spot" on machine "master" from "VIOS_mksysb" ...
Restoring files from BOS image. This may take several minutes ...
# lsnim VIOS_SPOT
VIOS_SPOT resources spot
234 IBM PowerVM Virtualization Managing and Monitoring
  • 272. With the SPOT and the mksysb resources defined to NIM, you can install theVirtual I/O Server from the backup. If the Virtual I/O Server partition you areinstalling is not defined to NIM, make sure that it is now defined as a machineand enter the smitty nim_bosinst fast path command. Select the NIM mksysbresource and SPOT defined previously. Important: Note that the Remain NIM client after install field must be set to no. If this is not set to no, the last step for the NIM installation is to configure an IP address onto the physical adapter through which the Virtual I/O Server has just been installed. This IP address is used to register with the NIM server. If this is the adapter used by an existing Shared Ethernet Adapter (SEA), it will cause error messages to be displayed. If this is the case, reboot the Virtual I/O Server if necessary, and then login to it using a terminal session and remove any IP address information and the SEA. After this, recreate the SEA and configure the IP address back for the SEA interface.Now that you have set up the NIM server to push out the backup image, theVirtual I/O Server partition needs to have the remote IPL setup completed. Forthis procedure, see the section “Installing with Network Installation Management”under the Installation and Migration category of the IBM System p® and AIXInformation Center at:http://publib16.boulder.ibm.com/pseries/index.htm Tip: One of the main causes of installation problems using NIM is the NFS exports from the NIM server. Make sure that the /etc/exports file is correct on the NIM server.The installation of the Virtual I/O Server should complete, but here is a bigdifference between restoring to the existing server and restoring to a newdisaster recovery server. One of the NIM install options is to preserve the NIMdefinitions for resources on the target. With this option, NIM attempts to restoreany virtual devices that were defined in the original backup. This depends on thesame devices being defined in the partition profile (virtual and physical) such thatthe location codes have not changed.This means that virtual target SCSI devices and Shared Ethernet Adaptersshould all be recovered without any need to recreate them (assuming the logicalpartition profile has not changed). If restoring to the same machine, there is adependency that the non-rootvg volume groups are present to be imported andany logical volume structure contained on these is intact. Chapter 5. Virtual I/O Server maintenance 235
  • 273. To demonstrate this, we operated a specific test scenario: A Virtual I/O Server was booted from a diagnostics CD and the Virtual I/O Server operating system disks were formatted and certified, destroying all data (this was done for demonstration purposes). The other disks containing volume groups and storage pools were not touched. Using a NIM server, the backup image was restored to the initial Virtual I/O Server operating system disks. Examining the virtual devices after the installation, the virtual target devices and Shared Ethernet Adapters are all recovered as shown in Example 5-9. Example 5-9 Restore of Virtual I/O Server to the same logical partition $ lsdev -virtual name status description ent2 Available Virtual I/O Ethernet Adapter (l-lan) ent3 Available Virtual I/O Ethernet Adapter (l-lan) ent4 Available Virtual I/O Ethernet Adapter (l-lan) ent6 Available Virtual I/O Ethernet Adapter (l-lan) vasi0 Available Virtual Asynchronous Services Interface (VASI) vbsd0 Available Virtual Block Storage Device (VBSD) vhost0 Available Virtual SCSI Server Adapter vhost1 Available Virtual SCSI Server Adapter vhost2 Available Virtual SCSI Server Adapter vhost3 Available Virtual SCSI Server Adapter vhost4 Available Virtual SCSI Server Adapter vsa0 Available LPAR Virtual Serial Adapter IBMi61_0 Available Virtual Target Device - Disk IBMi61_1 Available Virtual Target Device - Disk aix53_rvg Available Virtual Target Device - Disk aix61_rvg Available Virtual Target Device - Disk rhel52 Available Virtual Target Device - Disk sles10 Available Virtual Target Device - Disk vtopt0 Defined Virtual Target Device - File-backed Optical vtopt1 Defined Virtual Target Device - File-backed Optical vtopt2 Defined Virtual Target Device - File-backed Optical vtopt3 Available Virtual Target Device - Optical Media vtscsi0 Defined Virtual Target Device - Disk vtscsi1 Defined Virtual Target Device - Logical Volume ent5 Available Shared Ethernet Adapter ent7 Available Shared Ethernet Adapter $ lsmap -all SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost0 U9117.MMA.101F170-V1-C21 0x00000003 VTD aix61_rvg236 IBM PowerVM Virtualization Managing and Monitoring
  • 274. Status AvailableLUN 0x8100000000000000Backing device hdisk7PhyslocU789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L1000000000000SVSA Physloc Client PartitionID--------------- -------------------------------------------- ------------------vhost1 U9117.MMA.101F170-V1-C22 0x00000004VTD aix53_rvgStatus AvailableLUN 0x8100000000000000Backing device hdisk8PhyslocU789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L2000000000000SVSA Physloc Client PartitionID--------------- -------------------------------------------- ------------------vhost2 U9117.MMA.101F170-V1-C23 0x00000005VTD IBMi61_0Status AvailableLUN 0x8100000000000000Backing device hdisk11PhyslocU789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L5000000000000VTD IBMi61_1Status AvailableLUN 0x8200000000000000Backing device hdisk12PhyslocU789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L6000000000000SVSA Physloc Client PartitionID--------------- -------------------------------------------- ------------------vhost3 U9117.MMA.101F170-V1-C24 0x00000006VTD rhel52Status AvailableLUN 0x8100000000000000Backing device hdisk10PhyslocU789D.001.DQDYKYW-P1-C2-T2-W201300A0B811A662-L4000000000000 Chapter 5. Virtual I/O Server maintenance 237
  • 275. SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost4 U9117.MMA.101F170-V1-C60 0x00000003 VTD NO VIRTUAL TARGET DEVICE FOUND $ lsmap -net -all SVEA Physloc ------ -------------------------------------------- ent2 U9117.MMA.101F170-V1-C11-T1 SEA ent5 Backing device ent0 Status Available Physloc U789D.001.DQDYKYW-P1-C4-T1 If you restore to a different logical partition where you have defined similar virtual devices from the HMC recovery step provided previously, you will find that there are no linking devices. Consideration: The devices will always be different between machines because the machine serial number is part of the virtual device location code for virtual devices. For example: $ lsdev -dev ent4 -vpd ent4 U8204.E8A.10FE411-V1-C11-T1 Virtual I/O Ethernet Adapter (l-lan) Network Address.............C21E4467D40B Displayable Message.........Virtual I/O Ethernet Adapter (l-lan) Hardware Location Code......U8204.E8A.10FE411-V1-C11-T1 PLATFORM SPECIFIC Name: l-lan Node: l-lan@3000000b Device Type: network Physical Location: U8204.E8A.10FE411-V1-C11-T1 This is because the backing devices are not present for the linking to occur; the physical location codes have changed, and thus the mapping fails. Example 5-10 on page 239 shows the same restore of the Virtual I/O Server originally running on a Power 570 onto a Power 550 that has the same virtual devices defined in the same slots.238 IBM PowerVM Virtualization Managing and Monitoring
  • 276. Example 5-10 Devices recovered if restored to a different server$ lsdev -virtualname status descriptionent2 Available Virtual I/O Ethernet Adapter (l-lan)vhost0 Available Virtual SCSI Server Adaptervsa0 Available LPAR Virtual Serial Adapter$ lsmap -all -netSVEA Physloc------ --------------------------------------------ent4 U8204.E8A.10FE411-V1-C11-T1SEA ent6Backing device ent0Status AvailablePhysloc U78A0.001.DNWGCV7-P1-C5-T1$ lsmap -allSVSA Physloc Client PartitionID--------------- -------------------------------------------- ------------------vhost0 U9117.MMA.101F170-V1-C10 0x00000003VTD NO VIRTUAL TARGET DEVICE FOUNDYou now need to recover the user-defined virtual devices and any backing diskstructure.Restoring with IBM Tivoli Storage ManagerYou can use the IBM Tivoli Storage Manager to restore the mksysb image of theVirtual I/O Server. Restriction: The IBM Tivoli Storage Manager can only restore the Virtual I/O Server to the system from which it was backed up.First, you restore the mksysb image of the Virtual I/O Server using the dsmccommand on the Tivoli Storage Manager client. Restoring the mksysb imagedoes not restore the Virtual I/O Server. You then need to transfer the mksysbimage to another system and convert the mksysb image to an installable format.Before you start, complete the following tasks:1. Ensure that the system to which you plan to transfer the mksysb image is running AIX.2. Ensure that the system running AIX has a DVD-RW or CD-RW drive. Chapter 5. Virtual I/O Server maintenance 239
  • 277. 3. Ensure that AIX has the cdrecord and mkisofs RPMs downloaded and installed. To download and install the RPMs, see the AIX Toolbox for Linux Applications website at: http://www.ibm.com/systems/p/os/aix/linux Restriction: Interactive mode is not supported on the Virtual I/O Server. You can view session information by typing the dsmc command on the Virtual I/O Server command line. To restore the Virtual I/O Server using Tivoli Storage Manager, complete the following tasks: 1. Determine which file you want to restore by running the dsmc command to display the files that have been backed up to the Tivoli Storage Manager server: dsmc -query 2. Restore the mksysb image using the dsmc command. For example: dsmc -restore sourcefilespec where sourcefilespec is the directory path to the location where you want to restore the mksysb image. For example, /home/padmin/mksysb_image. 3. Transfer the mksysb image to a server with a DVD-RW or CD-RW drive by running the following File Transfer Protocol (FTP) commands: a. Run the following command to make sure that the FTP server is started on the Virtual I/O Server: startnetsvc ftp b. Open an FTP session to the server with the DVD-RW or CD-RW drive: ftp server_hostname where server_hostname is the host name of the server with the DVD-RW or CD-RW drive. c. At the FTP prompt, change the installation directory to the directory where you want to save the mksysb image. d. Set the transfer mode to binary, running the binary command. e. Turn off interactive prompting using the prompt command. f. Transfer the mksysb image to the server. Run the mput mksysb_image command. g. Close the FTP session after transferring the mksysb image by typing the quit command. 4. Write the mksysb image to CD or DVD using the mkcd or mkdvd commands.240 IBM PowerVM Virtualization Managing and Monitoring
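As an illustration of step 4, the following sketch writes the transferred image to DVD. The image path and the optical device name are examples only, and the available flags should be checked in the mkcd and mkdvd documentation for your AIX level:
# mkdvd -m /mksysb_images/VIOS_BACKUP.mksysb -d /dev/cd1
The -m flag points at the previously restored mksysb image and -d at the writable optical drive.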
  • 278. Reinstall the Virtual I/O Server using the CD or DVD that you just created. For instructions, see “Restoring from DVD backup” on page 232. Or reinstall the Virtual I/O server from a NIM server. For more information, see “Restoring from a file with the NIM server” on page 234.5.7.4 Recovering user-defined virtual devices and disk structure The recovery of the user-defined virtual devices depends on the backup method that was used. If you used the viosbr command to perform the backup the viosbr command can be used to restore the configuration as described in “Restoring user-defined virtual devices using viosbr” on page 241. If the configuration data was saved as part of a backupios command operation you can restore the user-defined virtual device configuration manually using the saved information as described in “Manually restoring user-defined virtual devices” on page 244. Restoring user-defined virtual devices using viosbr Before restoring a backup file that was generated using the viosbr command you can display the content of the file using the -view option of the viosbr command. For example the command viosbr -view -file backup_1.tar.gz as shown in Example 5-11 will display the entities that were backed up in 5.5.1, “Backing up user-defined virtual devices using viosbr” on page 222. Using the -mapping option, the mappings to the virtual adapters can be displayed similar like they are shown by the lsmap command. Example 5-11 Using viosbr -view to display backup contents $ viosbr -view -file backup_1.tar.gz Details in: =============================================================== Controllers: ============ Name Phys Loc ---- -------- iscsi0 sissas0 U5802.001.0086848-P1-C1-T1 pager0 U8233.E8B.061AA6P-V2-C32769-L0-L0 vasi0 U8233.E8B.061AA6P-V2-C32769 vbsd0 U8233.E8B.061AA6P-V2-C32769-L0 sata0 U5802.001.0086848-P1-C1-T1 Chapter 5. Virtual I/O Server maintenance 241
  • 279. fcs0 U5802.001.0086848-P1-C3-T1 fcs1 U5802.001.0086848-P1-C3-T2 . . (Lines omitted for clarity) . Physical Volumes: ================= Name Phys Loc ---- -------- hdisk20 U5802.001.0086848-P1-C3-T2-W500507630419C12C-L4011401700000000 hdisk21 U5802.001.0086848-P1-C3-T2-W500507630419C12C-L4011401800000000 hdisk22 U5802.001.0086848-P1-C3-T1-W500507630414C12C-L4011401900000000 . . (Lines omitted for clarity) . Optical Devices: ================ Name Phys Loc ---- -------- Tape Devices: ============= Name Phys Loc ---- -------- Ethernet Interfaces: ==================== Name ---- en0 en1 en2 en3 en4 en5 en6 en7 Storage Pools: ============== SP Name PV Name ------- ------- rootvg hdisk0 my_vg hdisk8 lv_pool hdisk7242 IBM PowerVM Virtualization Managing and Monitoring
  • 280. caavg_private caa_private0File Backed Storage Pools:==========================Name Parent SP---- ---------fb_pool lv_poolOptical Repository:===================Name Parent SP---- ---------VMLibrary lv_poolShared Ethernet Adapters:=========================Name Physical Adapter Default Adapter Virtual Adapters---- ---------------- --------------- ----------------ent7 ent0 ent5 ent5Virtual Server Adapters:========================SVSA Phys Loc VTD---- -------- ---vhost0 U8233.E8B.061AA6P-V2-C34 vtscsi0 vtscsi3 vtopt0vhost1 U8233.E8B.061AA6P-V2-C54 vtscsi4 vtscsi5vhost2 U8233.E8B.061AA6P-V2-C55vhost3 U8233.E8B.061AA6P-V2-C64vhost4 U8233.E8B.061AA6P-V2-C65Cluster:========Cluster State------- -----cluster0 UPCluster Name Cluster ID------------ ---------------------------------- Chapter 5. Virtual I/O Server maintenance 243
  • 281. Attribute Name Attribute Value -------------- --------------- node_uuid 6a0463ce-fc8b-11df-82df-00145ee9e395 clvdisk 1264a0af-4a7e-fb93-62d7-6a33d5f17f35 To perform the restore, the command viosbr -restore -file backup_1.tar.gz can be used. By adding the -inter option the user is prompted before a configuration change is applied. Note that viosbr is not purposed to restore the cluster configuration if cluster -delete or -rmnode was issued. Manually restoring user-defined virtual devices On our original Virtual I/O Server partition, we used two additional disks in a non-rootvg volume group. If these were SAN disks or physical disks that were directly mapped to client partitions, we could simply restore the virtual device links. However, if we had a logical volume or storage pool structure on the disks, we need to restore this structure first. To do this, you need to use the volume group data files. The volume group or storage pool data files should have been saved as part of the backup process earlier. These files should be located in the /home/ios/vgbackups directory if you performed a full backup using the savevgstruct command. The following command lists all of the available backups: $ restorevgstruct -ls total 104 -rw-r--r-- 1 root staff 51200 Oct 21 14:22 extra_storage.data The restorevgstruct command restores the volume group structure onto the empty disks. In Example 5-12, there are new blank disks and the same storage01 and datavg volume groups to restore. Example 5-12 Disks and volume groups to restore $ lspv NAME PVID VG STATUS hdisk0 00c1f170d7a97dec old_rootvg hdisk1 00c1f170e170ae72 clientvg active hdisk2 00c1f170e170c9cd clientvg active hdisk3 00c1f170e170dac6 None hdisk4 00c1f17093dc5a63 None hdisk5 00c1f170e170fbb2 None hdisk6 00c1f170de94e6ed rootvg active244 IBM PowerVM Virtualization Managing and Monitoring
  • 282. hdisk7 00c1f170e327afa7 Nonehdisk8 00c1f170e3716441 Nonehdisk9 none Nonehdisk10 none Nonehdisk11 none Nonehdisk12 none Nonehdisk13 none Nonehdisk14 none Nonehdisk15 00c1f17020d9bee9 None$ restorevgstruct -vg extra_storage hdisk15hdisk15extra_storagetestlvWill create the Volume Group: extra_storageTarget Disks: Allocation Policy: Shrink Filesystems: no Preserve Physical Partitions for each Logical Volume: noAfter you restore all of the logical volume structures, the only remaining step is torestore the virtual devices linking the physical backing device to the virtual. Torestore these, use the lsmap outputs recorded from the backup steps in 5.5,“Backing up user-defined virtual devices” on page 221, or build documentation.As previously noted, it is important to use the slot numbers and backing deviceswhen restoring these links.The restoration of the Shared Ethernet Adapters will need the linking of thecorrect virtual Ethernet adapter to the correct physical adapter. Usually, thephysical adapters are placed into a VLAN in the network infrastructure of theorganization. It is important that the correct virtual VLAN is linked to the correctphysical VLAN. Any network support team or switch configuration data can helpwith this task.The disaster recovery restore involves a bit more manual recreating of virtuallinking devices (vtscsi and SEA) and relies on good user documentation. If thereis no multipath setup on the Virtual I/O Server to preserve, another solution is acompletely new installation of the Virtual I/O Server from the installation mediaand then restore from the build documentation.After running the mkvdev commands to recreate the mappings, the Virtual I/OServer will host virtual disks and networks that can be used to rebuild the AIX,IBM i or Linux clients. Chapter 5. Virtual I/O Server maintenance 245
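For illustration, the linking devices are recreated with mkvdev commands of the following form, using the device names recorded in your documentation (the names shown here are examples taken from earlier in this chapter):
$ mkvdev -vdev hdisk7 -vadapter vhost0 -dev aix61_rvg
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
$ lsmap -vadapter vhost0
The lsmap output can then be compared against the saved build documentation to confirm that every virtual target device and Shared Ethernet Adapter is linked to the intended backing device and slot.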
  • 283. 5.7.5 Restoring the Virtual I/O Server client operating system After you have the Virtual I/O Server operational and all of the devices recreated, you are ready to start restoring any AIX, IBM i or Linux clients. The procedure for this should already be defined in your organization and, most likely, will be identical to that for any server using dedicated disk and network resources. The method depends on the solution employed and should be defined by you. For AIX clients, this information is available in the IBM Systems Information Center at: http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/ com.ibm.aix.baseadmn/doc/baseadmndita/backmeth.htm For IBM i clients, information about system backup and recovery is available in the IBM Systems Information Center at: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/index.jsp?t opic=/rzahg/rzahgbackup.htm5.8 Rebuilding the Virtual I/O Server This section describes what to do if there are no valid backup devices or backup images. In this case, you must install a new Virtual I/O Server. In the following discussion, we assume that the partition definitions of the Virtual I/O Server and of all clients on the HMC are still available. We describe how we rebuilt our configuration of network and SCSI configurations. It is useful to generate a System Plan on the HMC as documentation of partition profiles, settings, slot numbers, and so on. Example 5-13 shows the command to create a System Plan for a managed system. Note that the file name must have the extension *.sysplan. Example 5-13 Creating an HMC system plan from the HMC command line hscroot@hmc1:~> mksysplan -f p570.sysplan -m MT_B_p570_MMA_101F170246 IBM PowerVM Virtualization Managing and Monitoring
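The system plan files already stored on the HMC can also be listed from the HMC command line, which is a quick way to confirm that the plan was created; verify the exact options against your HMC release:
hscroot@hmc1:~> lssysplan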
  • 284. To view the System Plan, select System Plans. Then select the System Plan that you want to see, and select View System Plan. A browser window is opened where you are prompted for the user name and password of the HMC. Figure 5-9 shows a System Plan generated from a managed system.Figure 5-9 Example of a System Plan generated from a managed system In addition to the regular backups using the backupios command, document the configuration of the following topics using the commands provided: Network settings Commands: netstat -state netstat -routinfo netstat -routtable lsdev -dev Device -attr cfgnamsrv -ls hostmap -ls Chapter 5. Virtual I/O Server maintenance 247
  • 285. optimizenet -list entstat -all Device All physical and logical volumes, SCSI devices Commands: lspv lsvg lsvg -lv VolumeGroup All physical and logical adapters Command: lsdev -type adapter The mapping between physical and logical devices and virtual devices Commands: lsmap -all lsmap -all -net Code levels, users and security Commands: ioslevel viosecure -firewall view viosecure -view -nonint With this information, you can reconfigure your Virtual I/O Server manually. In the following sections, we describe the commands we needed to get the necessary information and the commands that rebuilt the configuration. The important information from the command outputs is highlighted. In your environment the commands may differ from those shown as examples. To start rebuilding the Virtual I/O Server, you must know which disks are used for the Virtual I/O Server itself and for any assigned volume groups for virtual I/O. The lspv command lists that the Virtual I/O Server was installed on hdisk0. The first step is to install the new Virtual I/O Server from the installation media onto disk hdisk0. $ lspv hdisk0 00c0f6a0f8a49cd7 rootvg active hdisk1 00c0f6a02c775268 None hdisk2 00c0f6a04ab4fd01 None hdisk3 00c0f6a04ab558cd None hdisk4 00c0f6a0682ef9e0 None hdisk5 00c0f6a067b0a48c None hdisk6 00c0f6a04ab5995b None hdisk7 00c0f6a04ab66c3e None hdisk8 00c0f6a04ab671fa None248 IBM PowerVM Virtualization Managing and Monitoring
  • 286. hdisk9 00c0f6a04ab66fe6 None hdisk10 00c0f6a0a241e88d None hdisk11 00c0f6a04ab67146 None hdisk12 00c0f6a04ab671fa None hdisk13 00c0f6a04ab672aa None hdisk14 00c0f6a077ed3ce5 None hdisk15 00c0f6a077ed5a83 None See PowerVM Virtualization on IBM System p: Introduction and Configuration, SG24-7940, for the installation procedure. The further rebuild of the Virtual I/O Server is done in two steps: 1. Rebuilding the SCSI configuration. 2. Rebuilding the network configuration. These steps are explained in greater detail in the following sections.5.8.1 Rebuilding the SCSI configuration The lspv command also shows us that there is an additional volume group located on the Virtual I/O Server (datavg): $ lspv hdisk0 00c0f6a0f8a49cd7 rootvg active hdisk1 00c0f6a02c775268 None hdisk2 00c0f6a04ab4fd01 None hdisk3 00c0f6a04ab558cd datavg active hdisk4 00c0f6a0682ef9e0 None hdisk5 00c0f6a067b0a48c None hdisk6 00c0f6a04ab5995b None hdisk7 00c0f6a04ab66c3e None hdisk8 00c0f6a04ab671fa None hdisk9 00c0f6a04ab66fe6 None hdisk10 00c0f6a0a241e88d None hdisk11 00c0f6a04ab67146 None hdisk12 00c0f6a04ab671fa None hdisk13 00c0f6a04ab672aa None hdisk14 00c0f6a077ed3ce5 None hdisk15 00c0f6a077ed5a83 None The following command imports this information into the new Virtual I/O Server system’s ODM: importvg -vg datavg hdisk3 Chapter 5. Virtual I/O Server maintenance 249
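Before recreating any virtual device mappings, it is worth confirming that the import was successful by listing the volume group and the logical volumes it contains, using commands already shown in this chapter:
$ lsvg
$ lsvg -lv datavg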
  • 287. Example 5-14 shows the mapping between the logical and physical volumes and the virtual SCSI server adapters. Example 5-14 lsmap -all command $ lsmap -all SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost0 U9117.MMA.100F6A0-V1-C15 0x00000002 VTD vcd Status Available LUN 0x8100000000000000 Backing device cd0 Physloc U789D.001.DQDWWHY-P4-D1 SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost1 U9117.MMA.100F6A0-V1-C20 0x00000002 VTD vnim_rvg Status Available LUN 0x8100000000000000 Backing device hdisk12 Physloc U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-L12000000000000 VTD vnimvg Status Available LUN 0x8200000000000000 Backing device hdisk13 Physloc U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-L13000000000000 SVSA Physloc Client Partition ID --------------- -------------------------------------------- ------------------ vhost2 U9117.MMA.100F6A0-V1-C25 0x00000003 VTD vdb_rvg Status Available LUN 0x8100000000000000 Backing device hdisk8 Physloc U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-LE000000000000250 IBM PowerVM Virtualization Managing and Monitoring
• 288. SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost3          U9117.MMA.100F6A0-V1-C40                     0x00000004
VTD                   vapps_rvg
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6
Physloc               U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-LC000000000000
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost4          U9117.MMA.100F6A0-V1-C50                     0x00000005
VTD                   vlnx_rvg
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk10
Physloc               U789D.001.DQDWWHY-P1-C2-T1-W200400A0B8110D0F-L10000000000000
Virtual SCSI server adapter vhost0 (defined on slot 15 in HMC) has one Virtual Target Device vcd. It is mapping the optical device cd0 to vhost0.
Virtual SCSI server adapter vhost1 (defined on slot 20 in HMC) has two Virtual Target Devices, vnim_rvg and vnimvg. They are mapping the physical volumes hdisk12 and hdisk13 to vhost1.
Virtual SCSI server adapter vhost2 (defined on slot 25 in HMC) has vdb_rvg as a Virtual Target Device. It is mapping the physical volume hdisk8 to vhost2.
Virtual SCSI server adapter vhost3 (defined on slot 40 in HMC) has vapps_rvg as a Virtual Target Device. It is mapping the physical volume hdisk6 to vhost3.
Virtual SCSI server adapter vhost4 (defined on slot 50 in HMC) has vlnx_rvg as a Virtual Target Device. It is mapping the physical volume hdisk10 to vhost4.
These commands are used to create the Virtual Target Devices that we need:
mkvdev -vdev cd0 -vadapter vhost0 -dev vcd
mkvdev -vdev hdisk12 -vadapter vhost1 -dev vnim_rvg
mkvdev -vdev hdisk13 -vadapter vhost1 -dev vnimvg
mkvdev -vdev hdisk8 -vadapter vhost2 -dev vdb_rvg
mkvdev -vdev hdisk6 -vadapter vhost3 -dev vapps_rvg
mkvdev -vdev hdisk10 -vadapter vhost4 -dev vlnx_rvg
Chapter 5. Virtual I/O Server maintenance 251
  • 289. Tip: The names of the Virtual Target Devices are generated automatically, except when you define a name using the -dev flag of the mkvdev command.5.8.2 Rebuilding the network configuration After successfully rebuilding the SCSI configuration, we rebuild the network configuration. The netstat -state command shows us that en4 is the only active network adapter:$ netstat -stateName Mtu Network Address ZoneID Ipkts Ierrs Opkts Oerrs Collen4 1500 link#2 6a.88.8d.e7.80.d 4557344 0 1862620 0 0en4 1500 9.3.4 vios1 4557344 0 1862620 0 0lo0 16896 link#1 4521 0 4634 0 0lo0 16896 127 loopback 4521 0 4634 0 0lo0 16896 ::1 0 4521 0 4634 0 0 With the lsmap -all -net command, we determine that ent5 is defined as a Shared Ethernet Adapter mapping physical adapter ent0 to virtual adapter ent2: $ lsmap -all -net SVEA Physloc ------ -------------------------------------------- ent2 U9117.MMA.101F170-V1-C11-T1 SEA ent5 Backing device ent0 Status Available Physloc U789D.001.DQDYKYW-P1-C4-T1 SVEA Physloc ------ -------------------------------------------- ent4 U9117.MMA.101F170-V1-C13-T1 SEA NO SHARED ETHERNET ADAPTER FOUND The information for the default gateway address is provided by the netstat -routinfo command: $ netstat -routinfo Routing tables Destination Gateway Flags Wt Policy If Cost Config_Cost Route Tree for Protocol Family 2 (Internet):252 IBM PowerVM Virtualization Managing and Monitoring
  • 290. default 9.3.4.1 UG 1 - en4 0 0 9.3.4.0 vios1 UHSb 1 - en4 0 0 => 9.3.4/23 vios1 U 1 - en4 0 0 vios1 loopback UGHS 1 - lo0 0 0 9.3.5.255 vios1 UHSb 1 - en4 0 0 127/8 loopback U 1 - lo0 0 0 Route Tree for Protocol Family 24 (Internet v6): ::1 ::1 UH 1 - lo0 0 0 To list the subnet mask, we use the lsdev -dev en4 -attr command: $ lsdev -dev en4 -attr attribute value description user_settable alias4 IPv4 Alias including Subnet Mask True alias6 IPv6 Alias including Prefix Length True arp on Address Resolution Protocol (ARP) True authority Authorized Users True broadcast Broadcast Address True mtu 1500 Maximum IP Packet Size for This Device True netaddr 9.3.5.111 Internet Address True netaddr6 IPv6 Internet Address True netmask 255.255.254.0 Subnet Mask True prefixlen Prefix Length for IPv6 Internet Address True remmtu 576 Maximum IP Packet Size for REMOTE Networks True rfc1323 Enable/Disable TCP RFC 1323 Window Scaling True security none Security Level True state up Current Interface Status True tcp_mssdflt Set TCP Maximum Segment Size True tcp_nodelay Enable/Disable TCP_NODELAY Option True tcp_recvspace Set Socket Buffer Space for Receiving True tcp_sendspace Set Socket Buffer Space for Sending True The last information we need is the default virtual adapter and the default PVID for the Shared Ethernet Adapter. This is shown by the lsdev -dev ent5 -attr command:$ lsdev -dev ent5 -attrattribute value descriptionuser_settableaccounting disabled Enable per-client accounting of network statistics Truectl_chan ent3 Control Channel adapter for SEA failover Truegvrp no Enable GARP VLAN Registration Protocol (GVRP) Trueha_mode auto High Availability Mode Truejumbo_frames no Enable Gigabit Ethernet Jumbo Frames Truelarge_receive no Enable receive TCP segment aggregation Truelargesend 0 Enable Hardware Transmit TCP Resegmentation True Chapter 5. Virtual I/O Server maintenance 253
  • 291. netaddr 0 Address to ping Truepvid 1 PVID to use for the SEA device Truepvid_adapter ent2 Default virtual adapter to use for non-VLAN-tagged packets Trueqos_mode disabled N/A Truereal_adapter ent0 Physical adapter associated with the SEA Truethread 1 Thread mode enabled (1) or disabled (0) Truevirt_adapters ent2 List of virtual adapters associated with the SEA (comma separated) True Consideration: In this example, the IP of the Virtual I/O Server is not configured on the Shared Ethernet Adapter (ent5) but on another adapter (ent4). This avoids network disruption between the Virtual I/O Server and any other partition on the same system when the physical card (ent0) used as the Shared Ethernet Adapter is replaced. The following commands recreated our network configuration: $ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 $ mktcpip -hostname vios1 -inetaddr 9.3.5.111 -interface en5 -start -netmask 255.255.254.0 -gateway 9.3.4.1 These steps complete the basic rebuilding of the Virtual I/O Server.5.9 Updating the Virtual I/O Server Two scenarios for updating a Virtual I/O Server are described in this section. A dual Virtual I/O Server environment (useful when performing Virtual I/O Server software upgrades and service) is recommended to provide a continuous connection of your clients to their virtual I/O resources. For clients using non-critical virtual resources, or when you have service windows that allow a Virtual I/O Server to be rebooted, then you can use a single Virtual I/O Server scenario. For the dual Virtual I/O Server scenario, if you are using SAN LUNs and MPIO or IBM i mirroring on the clients, the maintenance on the Virtual I/O Server will not cause additional work after the update on the client side.5.9.1 Updating a single Virtual I/O Server environment When applying routine service that requires a reboot in a single Virtual I/O Server environment, you need to plan downtime and shut down every client partition using virtual storage provided by this Virtual I/O Server.254 IBM PowerVM Virtualization Managing and Monitoring
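To identify which client partitions are affected before scheduling the outage, list the virtual SCSI mappings and note the client partition IDs served by this Virtual I/O Server. A simple sketch from the padmin command line:
$ lsmap -all | grep -E "^vhost|Client Partition"
Each vhostX line shows the hexadecimal ID of the client partition it serves; the virtual Ethernet configuration can be reviewed with the lsmap -all -net command.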
  • 292. Tip: Before starting an upgrade, take a backup of the Virtual I/O Server and the virtual I/O clients if a current backup is not available. To back up the Virtual I/O Server, use the backupios command. Then document the virtual Ethernet and SCSI devices before the update.To avoid complications during an upgrade or update, check the environmentbefore upgrading or updating the Virtual I/O Server. The following list is a sampleof useful commands for the virtual I/O client and Virtual I/O Server:lsvg rootvg On the Virtual I/O Server and AIX virtual I/O client, check for stale PPs and stale PV.cat /proc/mdstat On the Linux client using mirroring, check for faulty disks.multipath -ll On the Linux client using MPIO, check the paths.lsvg -pv rootvg On the Virtual I/O Server, check for missing disks.netstat -cdlistats On the Virtual I/O Server, check that the Link status is Up on all used interfaces.errpt On the AIX virtual I/O client, check for CPU, memory, disk, or Ethernet errors, and resolve them before continuing.dmesg, messages On the Linux virtual I/O client, check for CPU, memory, disk, or Ethernet errors, and resolve them before continuing.netstat -v On the virtual I/O client, check that the Link status is Up on all used interfaces.Running update on a single Virtual I/O ServerThere are several options for downloading and installing a Virtual I/O Serverupdate: download iso-images, packages, or install from CD. Tip: You can get the latest available updates for the Virtual I/O Server and check also the recent installation instructions, from the following website: http://www-933.ibm.com/support/fixcentral/To update the Virtual I/O Server, follow these steps:1. Perform a backup of the Virtual I/O Server.2. Shut down the virtual I/O clients connected to the Virtual I/O Server, or disable any virtual resource that is in use. Chapter 5. Virtual I/O Server maintenance 255
• 293. 3. If you use the Virtual Media Repository, unload all media images using the lsvopt and unloadvopt commands.
4. If previous updates have been applied to the Virtual I/O Server, you have to commit them with this command:
$ updateios -commit
This command does not provide any progress information, but you can run the following command in another terminal window to follow the progress:
$ tail -f install.log
If the command hangs, simply interrupt it with Ctrl + C and run it again until you see the following output:
$ updateios -commit
There are no uncommitted updates.
5. Apply the update with the updateios command. Use /dev/cd0 for CD or any directory containing the files. You can also mount an NFS directory with the mount command:
$ mount <name_of_remote_server>:/software/AIX/VIO-Server /mnt
$ updateios -dev /mnt -install -accept
6. Reboot the Virtual I/O Server when the update has finished:
$ shutdown -restart
7. Verify the new level with the ioslevel command.
8. Check the configuration of all disks and Ethernet adapters on the Virtual I/O Server.
9. Start the client partitions. Verify the Virtual I/O Server environment, document the update, and create a backup of your updated Virtual I/O Server.
5.9.2 Updating a dual Virtual I/O Server environment
When applying an update to the Virtual I/O Server in a properly configured dual Virtual I/O Server environment, you can do so without downtime to the virtual I/O services and without any disruption to continuous availability.
Tip: Back up the Virtual I/O Servers and the virtual I/O clients if a current backup is not available, and document the virtual Ethernet and SCSI devices before the update. This reduces the time needed for a recovery scenario.
256 IBM PowerVM Virtualization Managing and Monitoring
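One simple way to capture that documentation is to save the current device mappings and code level to files in the padmin home directory before starting; the file names used here are examples only:
$ lsmap -all > lsmap_all.before_update
$ lsmap -all -net > lsmap_net.before_update
$ ioslevel > ioslevel.before_update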
  • 294. Checking network healthIt is a best practice to check the virtual Ethernet and disk devices on the VirtualI/O Server and virtual I/O client before starting the update on either of the VirtualI/O Servers. Check the physical adapters to verify connections. As shown inExample 5-15, Figure 5-10 on page 258, Example 5-16 on page 258, andExample 5-17 on page 258, all the virtual adapters are up and running.Example 5-15 The netstat -v comand on the virtual I/O clientnetstat -v.. (Lines omitted for clarity).Virtual I/O Ethernet Adapter (l-lan) Specific Statistics:---------------------------------------------------------RQ Length: 4481No Copy Buffers: 0Filter MCast Mode: FalseFilters: 255 Enabled: 1 Queued: 0 Overflow: 0LAN State: OperationalHypervisor Send Failures: 0 Receiver Failures: 0 Send Errors: 0Hypervisor Receive Failures: 0ILLAN Attributes: 0000000000003002 [0000000000002000]. (Lines omitted for clarity) Chapter 5. Virtual I/O Server maintenance 257
  • 295. Figure 5-10 shows the Work with TCP/IP Interface Status panel. Work with TCP/IP Interface Status System:E101F170 Type options, press Enter. 5=Display details 8=Display associated routes 9=Start 10=End 12=Work with configuration status 14=Display multicast groups Internet Network Line Interface Opt Address Address Description Status 9.3.5.119 9.3.4.0 ETH01 Active 127.0.0.1 127.0.0.0 *LOOPBACK Active Figure 5-10 IBM i Work with TCP/IP Interface Status panel Example 5-16 shows the primary Virtual I/O Server being checked. Example 5-16 The netstat -cdlistats command on the primary Virtual I/O Server $ netstat -cdlistats . . (Lines omitted for clarity) . Virtual I/O Ethernet Adapter (l-lan) Specific Statistics: --------------------------------------------------------- RQ Length: 4481 No Copy Buffers: 0 Trunk Adapter: True Priority: 1 Active: True Filter MCast Mode: False Filters: 255 Enabled: 1 Queued: 0 Overflow: 0 LAN State: Operational . . (Lines omitted for clarity) Example 5-17 shows the shows the secondary Virtual I/O Server being checked. Example 5-17 The netstat -cdlistats command on the secondary Virtual I/O Server $ netstat -cdlistats . . (Lines omitted for clarity) . Virtual I/O Ethernet Adapter (l-lan) Specific Statistics: --------------------------------------------------------- RQ Length: 4481258 IBM PowerVM Virtualization Managing and Monitoring
  • 296. No Copy Buffers: 0Trunk Adapter: True Priority: 2 Active: FalseFilter MCast Mode: FalseFilters: 255 Enabled: 1 Queued: 0 Overflow: 0LAN State: Operational.. (Lines omitted for clarity)Checking storage healthChecking the disk status depends on how the disks are shared from the VirtualI/O Server.Checking the storage health in the MPIO environmentIf you have an MPIO setup on your virtual I/O Server clients similar to Figure 5-11on page 260, run the following commands before and after the first Virtual I/OServer update to verify the disk path status:lspath On the AIX virtual I/O client, check all the paths to the disks. They should all be in the enabled state.multipath -ll Check the paths on the Linux client.lsattr -El hdisk0 On the virtual I/O client, check the MPIO heartbeat for hdisk0; the attribute hcheck_mode is set to nonactive; and that hcheck_interval is 60. If you run IBM SAN storage, check that reserve_policy is no_reserve. Other storage vendors might require other values for reserve_policy. This command should be executed on all disks on the Virtual I/O Server. Chapter 5. Virtual I/O Server maintenance 259
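A short sketch of these checks on an AIX virtual I/O client follows; hdisk0 is an example disk name and the required reserve_policy value depends on the storage vendor, as noted above:
# lspath -l hdisk0
# lsattr -El hdisk0 -a hcheck_mode -a hcheck_interval -a reserve_policy
# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P
The -P flag on chdev defers the attribute change until the next reboot, which is necessary while the disk is in use.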
  • 297. SAN Switch SAN Switch FC FC VIOS 1 VIOS 2 VSCSI VSCSI VSCSI VSCSI MPIO Client Partition Figure 5-11 Virtual I/O client running MPIO Checking storage health in the mirroring environment Figure 5-12 shows the concept of a mirrored infrastructure. VIOS 1 VIOS 2 VSCSI VSCSI VSCSI VSCSI Mirror Client Partition Figure 5-12 Virtual I/O client partition software mirroring260 IBM PowerVM Virtualization Managing and Monitoring
  • 298. If you use mirroring on your virtual I/O clients, verify a healthy mirroring status forthe disks shared from the Virtual I/O Server with the following procedures.On the AIX virtual I/O client:lsvg rootvg Verify there are no stale PPs, and the quorum must be off.lsvg -p rootvg Verify there is no missing hdisk. Note: The fixdualvio.ksh script provided in Appendix A, “Sample script for disk and NIB network checking and recovery on AIX virtual clients” on page 615 is a useful tool for performing a health check.On the IBM i virtual I/O client: Run STRSST and login to System Service Tools. Select options 3. Work with disk units  1. Display disk configuration  1. Display disk configuration status and verify all virtual disk units (type 6B22) are in mirrored Active state as shown in Figure 5-13. Display Disk Configuration Status Serial Resource ASP Unit Number Type Model Name Status 1 Mirrored 1 Y3WUTVVQMM4G 6B22 050 DD001 Active 1 YYUUH3U9UELD 6B22 050 DD004 Active 2 YD598QUY5XR8 6B22 050 DD003 Active 2 YTM3C79KY4XF 6B22 050 DD002 Active Figure 5-13 IBM i Display Disk Configuration Status panel On the Linux virtual I/O client: cat /proc/mdstat Check the mirror status. A healthy environment is shown in Example 5-18. Example 5-18 The mdstat command showing a healthy environment cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sdb3[1] sda3[0] 1953728 blocks [2/2] [UU] md2 : active raid1 sdb4[1] sda4[0] 21794752 blocks [2/2] [UU] Chapter 5. Virtual I/O Server maintenance 261
• 299. md0 : active raid1 sdb2[1] sda2[0]
98240 blocks [2/2] [UU]
After checking the environment and resolving any issues, back up the Virtual I/O Server and virtual I/O client if a current backup is not available.
Step-by-step update
To update a dual Virtual I/O Server environment, perform the following steps:
1. Find the standby Virtual I/O Server and run the netstat -cdlistats command. At the end of the output, locate the priority of the Shared Ethernet Adapter and whether it is active. In this case, the standby adapter is not active, so you can begin the upgrade of this server.
$ netstat -cdlistats
.
. (Lines omitted for clarity)
.
Trunk Adapter: True
Priority: 2 Active: False
Filter MCast Mode: False
Filters: 255 Enabled: 1 Queued: 0 Overflow: 0
LAN State: Operational
.
. (Lines omitted for clarity)
If you need to change the active adapter, use the following command to put it in backup mode manually:
$ chdev -dev entXX -attr ha_mode=standby
2. All Interim Fixes must be removed before the update is applied. To remove the Interim Fixes, perform the following steps:
a. Become root on the Virtual I/O Server:
$ oem_setup_env
b. List all Interim Fixes installed:
# emgr -P
c. Remove each Interim Fix by label:
# emgr -r -L <label name>
d. Exit the root shell:
# exit
262 IBM PowerVM Virtualization Managing and Monitoring
  • 300. 3. Apply the update from VD or a remote directory with the updateios command and press y to start the update. $ updateios -dev /mnt -install -accept . (Lines omitted for clarity) Continue the installation [y|n]?4. Reboot the standby Virtual I/O Server when the update completes: $ shutdown -force -restart SHUTDOWN PROGRAM Mon Oct 13 21:57:23 CDT 2008 Wait for Rebooting... before stopping. Error reporting has stopped.5. After the reboot, verify the software level: $ ioslevel 1.5.2.1-FP-11.16. For an AIX MPIO environment, as shown in Figure 5-11 on page 260, run the lspath command on the virtual I/O client and verify that all paths are enabled. For an AIX LVM mirroring environment, as shown in Figure 5-12 on page 260, run the varyonvg command as shown in Example 5-19, and the volume group should begin to sync. If not, run the syncvg -v <VGname> command on the volume groups that used the virtual disk from the Virtual I/O Server environment to synchronize each volume group, where <VGname> is the name of the Volume Group. For the IBM i client mirroring environment, you can proceed to the next step. No manual action is required on IBM i client side because IBM i automatically resumes the suspended mirrored disk units as soon as the updated Virtual I/O Server resumes operations. Consideration: IBM i tracks changes for a suspended mirrored disk unit for a limited time, allowing it to resynchronize changed pages only. In our experience, IBM i did not require a full mirror resynchronize when rebooting the Virtual I/O Server. But this may not be the case for any reboot taking an extended amount of time. Example 5-19 AIX LVM Mirror Resync # lsvg -p rootvg rootvg: PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION Chapter 5. Virtual I/O Server maintenance 263
• 301. hdisk0            active            511         488         102..94..88..102..102
hdisk1            missing           511         488         102..94..88..102..102
# varyonvg rootvg
# lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            511         488         102..94..88..102..102
hdisk1            active            511         488         102..94..88..102..102
# lsvg rootvg
VOLUME GROUP:   rootvg                   VG IDENTIFIER:  00c478de00004c0000000006b8b6c15e
VG STATE:       active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:      1022 (65408 megabytes)
MAX LVs:        256                      FREE PPs:       976 (62464 megabytes)
LVs:            9                        USED PPs:       46 (2944 megabytes)
OPEN LVs:       8                        QUORUM:         1
TOTAL PVs:      2                        VG DESCRIPTORS: 3
STALE PVs:      0                        STALE PPs:      0
ACTIVE PVs:     2                        AUTO ON:        yes
MAX PPs per VG: 32512                    MAX PPs per PV: 1016
MAX PVs:        32                       LTG size (Dynamic): 256 kilobyte(s)
AUTO SYNC:      no
HOT SPARE:      no                       BB POLICY:      relocatable
For a Linux client mirroring environment, follow these steps for every md-device (md0, md1, md2):
a. Set the disk faulty (repeat the steps for all mdx devices):
# mdadm --manage --set-faulty /dev/md2 /dev/sda4
b. Remove the device (the same partition that was just marked faulty):
# mdadm --manage --remove /dev/md2 /dev/sda4
c. Rescan the device (choose the corresponding path):
# echo 1 > /sys/class/scsi_device/0:0:1:0/device/rescan
264 IBM PowerVM Virtualization Managing and Monitoring
  • 302. d. Hot-add the device to mdadm: # mdadm --manage --add /dev/md2 /dev/sda4 e. Check the sync status and wait for it to be finished: # cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sda3[0] sdb3[1] 1953728 blocks [2/2] [UU] md2 : active raid1 sda4[2] sdb4[1] 21794752 blocks [2/1] [_U] [=>...................] recovery = 5.8% (1285600/21794752) finish=8.2min speed=41470K/sec md0 : active raid1 sda2[0] sdb2[1] 98240 blocks [2/2] [UU]7. If you use Shared Ethernet Adapter Failover, shift the standby and primary connections to the Virtual I/O Server with the chdev command and check with the netstat -cdlistats command whether the state has changed, as shown in this example: $ chdev -dev ent4 -attr ha_mode=standby ent4 changed $ netstat -cdlistats . . (Lines omitted for clarity) . Trunk Adapter: True Priority: 1 Active: False Filter MCast Mode: False . . (Lines omitted for clarity) After you verify the network and storage health again on all Virtual I/O Server active client partitions, proceed as follows to update the other Virtual I/O Server as well.8. Remove all interim fixes on the second Virtual I/O Server to be updated.9. Apply the update to the second Virtual I/O Server, which is now the standby Virtual I/O Server, by using the updateios command.10.Reboot the second Virtual I/O Server with the shutdown -restart command.11.Check the new level with the ioslevel command.12.For an AIX MPIO environment as shown in Figure 5-11 on page 260, run the lspath command on the virtual I/O client and verify that all paths are enabled. For an AIX LVM environment, as shown in Figure 5-12 on page 260, run the varyonvg command, and the volume group should begin to synchronize. If it does not, use the syncvg -v <VGname> command on the volume groups that Chapter 5. Virtual I/O Server maintenance 265
  • 303. used the virtual disk from the Virtual I/O Server environment to synchronize each volume group, where <VGname> is the name of the Volume Group. For the IBM i client mirroring environment, you can proceed to the next step. No manual action is required on IBM i client side because IBM i automatically resumes the suspended mirrored disk units as soon as the updated Virtual I/O Server resumes operations. For the Linux mirroring environment, manually resynchronize the mirror again (see step 6). 13.If you use Shared Ethernet Adapter Failover, reset the Virtual I/O Server role to primary with the chdev command: $ chdev -dev ent4 -attr ha_mode=auto ent4 changed 14.After verifying the network and storage health again, create another backup this time from both updated Virtual I/O Servers before considering the update process complete.5.10 Updating Virtual I/O Server adapter firmware This section describes how to update the firmware for I/O adapters owned by the Virtual I/O Server. Remember: When an IBM i partition is assigned an adapter device, the IBM i Licensed Internal Code (SLIC) maintains the firmware. When an adapter device is assigned to the VIOS, the process is manual, even though IBM i is using the device. This can cause mismatches between IBM i and the firmware of adapters managed by the Virtual I/O Server. Perform the following steps to check for and update to the latest available I/O adapter firmware: 1. Log in to the Virtual I/O Server and run the lsdev -type adapter command to list the installed adapters as shown in Example 5-20.Example 5-20 lsdev -type adapter command$ lsdev -type adaptername status descriptionent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)ent2 Available Virtual I/O Ethernet Adapter (l-lan)ent3 Available Shared Ethernet Adapterfcs0 Available 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)266 IBM PowerVM Virtualization Managing and Monitoring
  • 304. fcs1 Available 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)pager0 Available Pager Kernel Extensionsissas0 Available PCI-X266 Planar 3Gb SAS Adaptervasi0 Available Virtual Asynchronous Services Interface (VASI)vbsd0 Available Virtual Block Storage Device (VBSD)vfchost0 Available Virtual FC Server Adaptervhost0 Available Virtual SCSI Server Adaptervhost1 Available Virtual SCSI Server Adaptervhost2 Available Virtual SCSI Server Adaptervhost3 Available Virtual SCSI Server Adaptervhost4 Available Virtual SCSI Server Adaptervsa0 Available LPAR Virtual Serial Adapter 2. Run the lsmcode -d adapter_name command to list the currently installed adapter firmware as shown for the Fibre Channel adapter fcs0 in Example 5-21 Example 5-21 lsmcode -d fcs0 command $ oem_setup_env # lsmcode -d fcs0 DISPLAY MICROCODE LEVEL 802111 fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) The current microcode level for fcs0 is 110105. Use Enter to continue. 3. Note the current microcode level for the adapter and go to the following IBM Fix Central support website to check if there is a newer version available: http://www.ibm.com/support/fixcentral Chapter 5. Virtual I/O Server maintenance 267
  • 305. 4. Select Power for Product Group, Firmware and HMC for Product, enter your corresponding Machine type-model, and click Continue to proceed as shown in Figure 5-14.Figure 5-14 IBM Fix Central website268 IBM PowerVM Virtualization Managing and Monitoring
  • 306. 5. Select Device Firmware and click Continue to proceed as shown in Figure 5-15.Figure 5-15 IBM Fix Central website Firmware and HMC Chapter 5. Virtual I/O Server maintenance 269
• 307. 6. Select Select by feature code, or if you do not know the adapter feature code, select Select by device type and click Continue to proceed as shown in Figure 5-16.
Figure 5-16 IBM Fix Central website Select by feature code
7. Enter the adapter’s feature code and click Continue as shown in Figure 5-17.
Figure 5-17 IBM Fix Central website Select device feature code
270 IBM PowerVM Virtualization Managing and Monitoring
  • 308. 8. Check the displayed version of the available latest RPM firmware package for the adapter, and if it is newer than the currently installed one checked in step 2 above, click Description to display the Firmware Description File including installation instructions before clicking Continue as shown in Figure 5-18 to proceed with accepting the license agreement and starting its download.Figure 5-18 IBM Fix Central website Select device firmware fixes 9. Transfer the downloaded adapter firmware package *.aix.rpm file to the Virtual I/O Server using binary FTP as shown in Example 5-22. Example 5-22 FTP transfer of adapter firmware to the Virtual I/O Server D:tmp>ftp 172.16.20.190 Connected to 172.16.20.190. 220 P6_1_vios1 FTP server (Version 4.2 Tue Sep 14 20:17:37 CDT 2010) ready. User (172.16.20.190:(none)): padmin 331 Password required for padmin. Password: 230-Last unsuccessful login: Sat Dec 4 10:06:27 EST 2010 on ssh from 172.16.254.34 230-Last login: Sat Dec 4 13:59:08 EST 2010 on ftp from ::ffff:172.16.254.34 230 User padmin logged in. ftp> bin 200 Type set to I. ftp> put df1000f114108a03-111304.aix.rpm Chapter 5. Virtual I/O Server maintenance 271
  • 309. 200 PORT command successful. 150 Opening data connection for df1000f114108a03-111304.aix.rpm. 226 Transfer complete. ftp: 655470 bytes sent in 15,33Seconds 42,76Kbytes/sec. ftp> bye 221 Goodbye. 10.On the Virtual I/O Server, unpack the adapter firmware package *.aix.rpm file using the rpm -ihv --ignoreos package_name command as shown in Example 5-23, which adds the adapter firmware file to the /etc/microcode directory. Example 5-23 Unpacking the adapter firmware package on the Virtual I/O Server $ oem_setup_env # rpm -ihv --ignoreos df1000f114108a03-111304.aix.rpm pci.df1000f114108a03 ################################################## # ls -l /etc/microcode/df* -rwxr-xr-x 1 root system 920768 Jul 10 2009 /etc/microcode/df1000f114108a03.111304 11.Start the diagnostic service aids for updating the adapter’s firmware by running the diag command, then press Enter to continue on the DIAGNOSTIC OPERATING INSTRUCTIONS panel as shown in Example 5-24. Example 5-24 diag command # diag DIAGNOSTIC OPERATING INSTRUCTIONS VERSION 6.1.6.2 801001 LICENSED MATERIAL and LICENSED INTERNAL CODE - PROPERTY OF IBM (C) COPYRIGHTS BY IBM AND BY OTHERS 1982, 2010. ALL RIGHTS RESERVED. These programs contain diagnostics, service aids, and tasks for the system. These procedures should be used whenever problems with the system occur which have not been corrected by any software application procedures available. In general, the procedures will run automatically. However, sometimes you will be required to select options, inform the system when to continue, and do simple tasks. Several keys are used to control the procedures:272 IBM PowerVM Virtualization Managing and Monitoring
  • 310. - The Enter key continues the procedure or performs an action. - The Backspace key allows keying errors to be corrected. - The cursor keys are used to select an option. Press the F3 key to exit or press Enter to continue.12.Select Task Selection and press Enter as shown in Figure 5-19. FUNCTION SELECTION 801002 Move cursor to selection, then press Enter. Diagnostic Routines This selection will test the machine hardware. Wrap plugs and other advanced functions will not be used. Advanced Diagnostics Routines This selection will test the machine hardware. Wrap plugs and other advanced functions will be used. Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.) This selection will list the tasks supported by these procedures. Once a task is selected, a resource menu may be presented showing all resources supported by the task. Resource Selection This selection will list the resources in the system that are supported by these procedures. Once a resource is selected, a task menu will be presented showing all tasks that can be run on the resource(s). F1=Help F10=Exit F3=Previous Menu Figure 5-19 Diagnostics aids Task Selection Chapter 5. Virtual I/O Server maintenance 273
  • 311. 13.Select Microcode Tasks and press Enter as shown in Figure 5-20. TASKS SELECTION LIST 801004 From the list below, select a task by moving the cursor to the task and pressing Enter. To list the resources for the task highlighted, press List. [MORE...20] Display Service Hints Display Software Product Data Display or Change Bootlist Format Media Gather System Information Hot Plug Task Identify and Attention Indicators Local Area Network Analyzer Log Repair Action Microcode Tasks Periodic Diagnostics RAID Array Manager [MORE...1] F1=Help F4=List F10=Exit Enter F3=Previous Menu Figure 5-20 Diagnostics aids Microcode Tasks274 IBM PowerVM Virtualization Managing and Monitoring
  • 312. 14.Move the cursor to Download Microcode and press Enter as shown in Figure 5-21. Microcode Tasks 801004 Move cursor to desired item and press Enter. Display Microcode Level Download Latest Available Microcode Download Microcode Generic Microcode Download F1=Help F4=List F10=Exit Enter F3=Previous Menu Figure 5-21 Diagnostics aids Download Microcode Chapter 5. Virtual I/O Server maintenance 275
  • 313. 15.Move the cursor to each I/O adapter port to be updated and press Enter, then press F7=Commit to start the adapter firmware download as shown in Figure 5-22. RESOURCE SELECTION LIST 801006 From the list below, select any number of resources by moving the cursor to the resource and pressing Enter. To cancel the selection, press Enter again. To list the supported tasks for the resource highlighted, press List. Once all selections have been made, press Commit. To avoid selecting a resource, press Previous Menu. [TOP] All Resources This selection will select all the resources currently displayed. U789D.001.DQDYKYW- + fcs0 P1-C1-T1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) + fcs1 P1-C1-T2 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) ent0 P1-C4-T1 2-Port 10/100/1000 Base-TX PCI-X Adapter [MORE...13] F1=Help F4=List F7=Commit F10=Exit F3=Previous Menu Figure 5-22 Diagnostics aids resource selection list276 IBM PowerVM Virtualization Managing and Monitoring
  • 314. 16.Read through the notice and press Enter to continue as shown in Figure 5-23. INSTALL MICROCODE 802113 fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) Please stand by. +------------------------------------------------------+ | | | | | [TOP] | | *** NOTICE *** NOTICE *** NOTICE *** | | | | The microcode installation occurs while the | | adapter and any attached drives are available for | | use. It is recommended that this installation | | be scheduled during non-peak production periods. | | | | As with any microcode installation involving | | [MORE...5] | | | | F3=Cancel F10=Exit Enter | F3=Cancel +------------------------------------------------------+Figure 5-23 Diagnostic aids install microcode notice Chapter 5. Virtual I/O Server maintenance 277
  • 315. 17.Select /etc/microcode as the install source and press Enter as shown in Figure 5-24. INSTALL MICROCODE 802114 fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) Select the source of the microcode image. Make selection, use Enter to continue. file system /etc/microcode optical media (ISO 9660 file system format) cd0 F1=Help F10=Exit F3=Previous Menu Figure 5-24 Diagnostics aids install microcode image source selection278 IBM PowerVM Virtualization Managing and Monitoring
  • 316. 18.Select the new adapter firmware microcode image for installation and press Enter to proceed as shown in Figure 5-25. INSTALL MICROCODE 802116 fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) The current microcode level for fcs0 is 110105. Available levels to install are listed below. Select the microcode level to be installed. Use Help for explanations of "M", "L", "C" and "P" . Make selection, use Enter to continue. M 111304 F1=Help F10=Exit F3=Previous Menu Figure 5-25 Diagnostics aids microcode level selection Chapter 5. Virtual I/O Server maintenance 279
  • 317. 19.The successful installation of the new adapter microcode is shown in Figure 5-26. INSTALL MICROCODE 802118 fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) Installation of the microcode has completed successfully. The current microcode level for fcs0 is 111304. Please run diagnostics on the adapter to ensure that it is functioning properly. Use Enter to continue. F3=Cancel F10=Exit Enter Figure 5-26 Diagnostics aids install microcode success message 20.Steps 16-19 are repeated for each additional adapter port selected. After all adapter ports are updated, exit the diagnostic aids by pressing F10=Exit. 21.As indicated on the Install Microcode panel, run diagnostics on each I/O adapter that was updated to ensure that it is functioning properly by making the following selections from the diagnostic aids main menu: a. Diagnostic Routines b. System Verification c. Select all updated adapter resources to test and press F7=Commit280 IBM PowerVM Virtualization Managing and Monitoring
  • 318. An example of a successful diagnostics test is shown in Figure 5-27. TESTING COMPLETE on Sat Dec 4 15:51:36 EST 2010 801010 No trouble was found. The resources tested were: - sysplanar0 System Planar - fcs0 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) U789D.001.DQDYKYW-P1-C1-T1 - fcs1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03) U789D.001.DQDYKYW-P1-C1-T2 Use Enter to continue. F3=Cancel F10=Exit Enter Figure 5-27 Diagnostic aids successful diagnostic test5.11 Error logging on the Virtual I/O Server Error logging on the Virtual I/O Server uses the same error logging facility as AIX. The error logging daemon is started with the errdemon command. This daemon reads error records from the /dev/error device and writes them to the error log in /var/adm/ras/errlog. Errdemon also performs specified notifications in the notification database /etc/objrepos/errnotify. The command to display binary error logs is errlog. See Example 5-25 for a short error listing. Example 5-25 errlog short listing $ errlog IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION 4FC8E358 1015104608 I O hdisk8 CACHED DATA WILL BE LOST IF CONTROLLER B6267342 1014145208 P H hdisk12 DISK OPERATION ERROR Chapter 5. Virtual I/O Server maintenance 281
  • 319. DF63A4FE 1014145208 T S vhost2 Virtual SCSI Host Adapter detected an B6267342 1014145208 P H hdisk12 DISK OPERATION ERROR DF63A4FE 1014145208 T S vhost2 Virtual SCSI Host Adapter detected an B6267342 1014145208 P H hdisk11 DISK OPERATION ERROR B6267342 1014145208 P H hdisk11 DISK OPERATION ERROR C972F43B 1014111208 T S vhost4 Misbehaved Virtual SCSI ClientB6267342 B6267342 1014164108 P H hdisk14 DISK OPERATION ERROR To obtain all the details listed for each event, you can use errlog -ls as shown in Example 5-26. Example 5-26 Detailed error listing $ errlog -ls |more ---------------------------------------------------------------------- LABEL: SC_DISK_PCM_ERR7 IDENTIFIER: 4FC8E358 Date/Time: Wed Oct 15 10:46:33 CDT 2008 Sequence Number: 576 Machine Id: 00C1F1704C00 Node Id: vios1 Class: O Type: INFO WPAR: Global Resource Name: hdisk8 Description CACHED DATA WILL BE LOST IF CONTROLLER FAILS Probable Causes USER DISABLED CACHE MIRRORING FOR THIS LUN User Causes CACHE MIRRORING DISABLED Recommended Actions ENABLE CACHE MIRRORING ...282 IBM PowerVM Virtualization Managing and Monitoring
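Over time the error log can accumulate a large number of entries. On the Virtual I/O Server, old entries can be pruned with the errlog command. The following sketch assumes the -rm flag, which takes an age in days, is available on your Virtual I/O Server level (verify with the errlog man page):
$ errlog -rm 30
This removes all error log entries older than 30 days.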
  • 320. All errors are divided into the classes listed in Table 5-3. Table 5-3 Error log entry classes Error log entry class Description H Hardware error S Software error O Operator messages (logger) U Undetermined error class5.11.1 Redirecting error logs to other servers In certain cases you may need to redirect error logs to one central instance, for example to be able to run automated error log analysis in one place. For the Virtual I/O Server you need to set up redirecting error logs to syslog first and then assign the remote syslog host in the syslog configuration. To redirect error logs to syslog, create the file /tmp/syslog.add with the content shown in Example 5-27. Note: Before redirecting errors logs to syslog, you must first become the root user on the Virtual I/O server. Run the command: $ oem_setup_env Example 5-27 Content of /tmp/syslog.add file errnotify: en_pid = 0 en_name = "syslog" en_persistenceflg = 1 en_method = "/usr/bin/errpt -a -l $1 |/usr/bin/fgrep -v ERROR_ID TIMESTAMP| /usr/bin/logger -t ERRDEMON -p local1.warn" Use the odmadd command to add the configuration to the ODM: # odmadd /tmp/syslog.add In the syslog file, you can redirect all messages to any other server running syslogd and accepting remote logs. Simply add the following line to your /etc/syslog.conf file: *.debug @9.3.5.115 Chapter 5. Virtual I/O Server maintenance 283
  • 321. Restart your syslog daemon using the following command: # stopsrc -s syslogd 0513-044 The syslogd Subsystem was requested to stop. # startsrc -s syslogd 0513-059 The syslogd Subsystem has been started. Subsystem PID is 520236.5.11.2 Troubleshooting error logs If your error log becomes corrupted, you can always move the file, and a new clean error log file will be created as shown in Example 5-28. Example 5-28 Creating a new error log file $ oem_setup_env # /usr/lib/errstop # mv /var/adm/ras/errlog /var/adm/ras/errlog.bak # /usr/lib/errdemon If you want to back up your error log to an alternate file and view it later, use the command shown in Example 5-29. Example 5-29 Copy errlog and view it $ oem_setup_env # cp /var/adm/ras/errlog /tmp/errlog.save # errpt -i /tmp/errlog.save IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION 4FC8E358 1015104608 I O hdisk8 CACHED DATA WILL BE LOST IF CONTROLLER B6267342 1014145208 P H hdisk12 DISK OPERATION ERROR DF63A4FE 1014145208 T S vhost2 Virtual SCSI Host Adapter detected an B6267342 1014145208 P H hdisk12 DISK OPERATION ERROR DF63A4FE 1014145208 T S vhost2 Virtual SCSI Host Adapter detected an B6267342 1014145208 P H hdisk11 DISK OPERATION ERROR B6267342 1014145208 P H hdisk11 DISK OPERATION ERROR C972F43B 1014111208 T S vhost4 Misbehaved Virtual SCSI ClientB6267342 B6267342 1014164108 P H hdisk14 DISK OPERATION ERROR5.12 VM Storage Snapshots/Rollback VM Storage Snapshots/Rollback is a new function that allows multiple point-in-time snapshots of individual virtual machine storage. These point-in-time copies can be used to quickly roll back a virtual machine to a particular snapshot image. This functionality can be used to capture a VM image for cloning purposes or before applying maintenance.284 IBM PowerVM Virtualization Managing and Monitoring
  • 322. As a sample usage scenario, the CE checks on the client LPAR for prerequisites needed for an upcoming hardware upgrade:root@p71aix03 /root # lslpp -f devices.pci.2b102725.X11 Fileset File -----------------------------------------------------------------------Path: /usr/lib/objrepos devices.pci.2b102725.X11 7.1.1.0 /usr/lpp/gai /usr/lpp/gai/pci2b102725/loadddx /usr/lpp/gai/pci2b102725The customer has the Virtual I/O Server taking snapshots of the sspdisk04 LU during minimal I/O workload (every night at midnight) as shown in Example 5-30.Example 5-30 snapshot create commandsnapshot -clustername ssp_cluster -spname ssp_pool_1 -lu sspdisk04 -create snap03_01 Tip: The snapshot must be created under minimal I/O load on the source LU.Due to a power outage, the data on the LPAR’s rootvg gets corrupted. The CE notices that the prerequisite fileset is no longer present:root@p71aix03 /root # lslpp -f devices.pci.2b102725.X11lslpp: Fileset devices.pci.2b102725.X11 not installed.root@p71aix03 /root # lslpp -l devices.pci.2b102725.X11lslpp: Fileset devices.pci.2b102725.X11 not installed. ls -l /usr/lpp/gai/pci2b102725/loadddx/usr/lpp/gai/pci2b102725/loadddx not foundThe CE shuts down the client LPAR to restore the snapshot as shown in Example 5-31.Example 5-31 snapshot rollbacksnapshot -clustername ssp_cluster -spname ssp_pool_1 -rollback snap03_01 -lu sspdisk04 Restriction: You cannot perform a rollback operation on a snapshot while the LU is in use. Therefore, when the client LPAR is in Running state, to rollback you must shut down the client LPAR to Not Activated state.Check fileset integrity after the snapshot rollback on the client LPAR:root@p71aix03 /root # lslpp -f devices.pci.2b102725.X11 Fileset File Chapter 5. Virtual I/O Server maintenance 285
  • 323. ----------------------------------------------------------------------- Path: /usr/lib/objrepos devices.pci.2b102725.X11 7.1.1.0 /usr/lpp/gai /usr/lpp/gai/pci2b102725/loadddx /usr/lpp/gai/pci2b102725 root@p71aix03 /root # ls -l /usr/lpp/gai/pci2b102725/loadddx -rw-r--r-- 1 bin bin 371219 Jul 25 23:46 /usr/lpp/gai/pci2b102725/loadddx286 IBM PowerVM Virtualization Managing and Monitoring
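When a snapshot is no longer needed, it can be deleted to release space in the shared storage pool. The following is a sketch that mirrors the create and rollback syntax shown in Example 5-30 and Example 5-31; verify the -delete option and its argument order with the snapshot command documentation on your Virtual I/O Server level:
snapshot -clustername ssp_cluster -spname ssp_pool_1 -delete snap03_01 -lu sspdisk04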
  • 324. 6 Chapter 6. Dynamic operations This chapter discusses how to set up a shared processor pool, and how to change resources dynamically, which can be useful when maintaining your virtualized environment. With this goal, the focus is on the following operations valid for AIX, IBM i, and Linux operating systems: Multiple Shared Processor Pools Addition of resources Movement of adapters between partitions Removal of resources Replacement of resource This chapter contains the following sections: Multiple Shared Processor Pools management Dynamic LPAR operations Dynamic LPAR operations on Linux for Power Dynamic LPAR operations on the Virtual I/O Server© Copyright IBM Corp. 2012. All rights reserved. 287
  • 325. 6.1 Multiple Shared Processor Pools management With the POWER6 and POWER7 systems, you can now define Multiple Shared Processor Pools (MSPPs) and assign the shared partitions to any of these MSPPs. The configuration of this feature is simple. You simply have to set the properties of a processor pool. To set up a shared processor pool (SPP), complete these steps. 1. In the HMC navigation pane, open Systems Management and click Servers. 2. In the content pane, select the managed system of the shared processor pool you want to configure. Click the Task button and select Configuration  Shared Processor Pool Management.288 IBM PowerVM Virtualization Managing and Monitoring
  • 326. Figure 6-1 lists the available shared processor pools.Figure 6-1 Shared Processor Pool 3. Click the name of the shared processor pool that you want to configure. Chapter 6. Dynamic operations 289
  • 327. 4. Enter the maximum number of processing units that you want the logical partitions in the shared processor pool to use in the Maximum processing units field. If desired, change the name of the shared processor pool in the Pool name field and enter the number of processing units that you want to reserve for uncapped logical partitions in the shared processor pool in the Reserved processing units field (Figure 6-2). Requirement: The name of the shared processor pool must be unique on the managed system. When you are done, click OK. Figure 6-2 Modifying Shared Processor pool attributes 5. Repeat steps 3 and 4 for any other shared processor pools that you want to configure. 6. Click OK. After this procedure (modifying the processor pool attributes) is complete, assign logical partitions to the configured shared processor pools. You can assign a logical partition to a shared processor pool when you create a logical partition. Alternatively, you can reassign existing logical partitions from their current shared290 IBM PowerVM Virtualization Managing and Monitoring
  • 328. processor pools to the (new) shared processor pools that you configured using the following procedure: 1. Click the Partitions tab and select the partition name as shown in Figure 6-3.Figure 6-3 Partitions assignment to Multiple Shared Processor Pools 2. Select to which SPP this partition should be assigned as shown in Figure 6-4. Figure 6-4 Assign a partition to a Shared Processor Pool Chapter 6. Dynamic operations 291
  • 329. Remember: The default Shared Processor Pool is the one with ID 0. This cannot be changed, and it has default configuration values that cannot be changed. When you no longer want to use a Shared Processor Pool, you can deconfigure the shared processor pool by using this procedure to set the maximum number of processing units and reserved number of processing units to 0. Before you can deconfigure a shared processor pool, you must reassign all logical partitions that use the shared processor pool to other shared processor pools. Calibrating the shared partitions’ weight You should pay attention to the values you provide for the partition weight when you define shared partition characteristics. Indeed, in case the partitions within an SPP need more processor resources, the extra resources that will be donated from the other idle partitions in the other SPPs are distributed to the partitions based on their weight. The partitions with the highest weight will gain more processor resources (Figure 6-5). SPP0 SPP1 SPP2 A B C D F G H I Weight Weight Weight Weight Weight Weight Weight Weight 10 20 40 30 10 20 40 30Figure 6-5 Comparing partition weights from different Shared Processor Pools Considering the example shown in Figure 6-5, if the partitions C, D, and H require extra processing resources, these extra resources will be distributed based on their weight value even though they are not all in the same SPP. Based on the weight value shown in this example, partition C and H will get most of the available shared resources (of equal amounts), and partition D will receive a lesser share. In situations where your workload on partition H (or another partition) needs more system resources, set its weight value by taking into account the weight of the partitions in the other SPPs.292 IBM PowerVM Virtualization Managing and Monitoring
  • 330. In summary, if several partitions from different SPPs compete to get additional resources, the partitions with the highest weight will be served first. You must therefore pay attention when you define a partition’s weight and make sure that its value is reasonable compared to all of the other partitions in the shared processor pools. For more detailed information, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.6.2 Dynamic LPAR operations The following sections explain how to perform dynamic LPAR operations for AIX and IBM i. Considerations: When using IBM i 6.1 dynamic LPAR operations with virtual adapters, make sure that SLIC PTFs MF45568 and MF45473 have been applied or you are at Cum Level C9111610. HMC communicates with partitions using RMC. Therefore, you need to make sure RMC port has not been restricted in firewall settings. For more details see 4.1.2, “Setting up the firewall” on page 152.6.2.1 Adding and removing processors dynamically Follow these steps to add or remove processors dynamically: 1. Select the logical partition where you want to initiate a dynamic LPAR operation, then select Dynamic Logical Partitioning  Processor  Add or Remove as shown in Figure 6-6 on page 294. Chapter 6. Dynamic operations 293
  • 331. Figure 6-6 Add or remove processor operation 2. On HMC Version 7, you do not have to define a certain amount of CPU to be removed or added to the partition. Simply indicate the total number of processor units to be assigned to the partition. You can change processing units and the virtual processors of the partition to be more or less than the294 IBM PowerVM Virtualization Managing and Monitoring
  • 332. current value. The values for these fields must be between the Minimum and Maximum values defined for them on the partition profile. Figure 6-7 shows a partition being set with 0.5 processing units and 1 virtual processor. Figure 6-7 Defining the amount of CPU processing units for a partition3. Click OK when done. Tip: In this example, a partition using Micro-partition technology was used. However, this process is also valid for dedicated partitions where you move dedicated processors. Chapter 6. Dynamic operations 295
  • 333. From an IBM i partition, you can display the current virtual processors and processing capacity by using the WRKSYSACT command as shown in Figure 6-8. Work with System Activity P71I04 12/07/11 11:08:21 Automatic refresh in seconds . . . . . . . . . . . . . . . . . . . 5 Job/Task CPU filter . . . . . . . . . . . . . . . . . . . . . . . . .10 Elapsed time . . . . . . : 00:00:21 Average CPU util . . . . : .1 Virtual Processors . . . . : 12 Maximum CPU util . . . . . : 1.2 Overall DB CPU util . . . : .0 Minimum CPU util . . . . . : .0 Average CPU rate . . . . . : 102.1 Current processing capacity: 5.00 Type options, press Enter. 1=Monitor job 5=Work with job Total Total DB Job or CPU Sync Async CPU Opt Task User Number Thread Pty Util I/O I/O Util CRTPFRDTA QSYS 010945 0000000E 50 .0 127 92 .0 QPADEV0004 JIMIOFR 010963 00000012 1 .0 23 0 .0 CAS QCPMGTDIR 010816 000000BF 25 .0 45 22 .0 QYPSPFRCOL QSYS 010185 00000009 1 .0 0 1 .0 SMXCAGER01 99 .0 0 1 .0 SMXCAGER02 99 .0 0 1 .0 More... F3=Exit F10=Update list F11=View 2 F12=Cancel F19=Automatic refresh F24=More keysFigure 6-8 IBM i Work with System Activity panel296 IBM PowerVM Virtualization Managing and Monitoring
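The same processor changes can be made from the HMC command line with chhwres, which is useful when scripting dynamic LPAR operations. A hedged sketch: p570_170 and AIX61 are placeholders for the managed system and partition names, -o a adds resources and -o r removes them, and quantities are given with --procunits (processing units) and --procs (virtual processors):
hscroot@hmc1:~> chhwres -r proc -m p570_170 -o a -p AIX61 --procunits 0.5
hscroot@hmc1:~> chhwres -r proc -m p570_170 -o r -p AIX61 --procs 1
As with the GUI, the resulting values must stay within the minimum and maximum limits defined in the partition profile.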
  • 334. 6.2.2 Adding memory dynamically To dynamically add additional memory to the logical partition as shown in Figure 6-9, follow these steps: 1. Select the partition and then select Dynamic Logical Partitioning  Memory  Add or Remove.Figure 6-9 Add or remove memory operation Chapter 6. Dynamic operations 297
  • 335. 2. Change the total amount of memory to be assigned to the partition. Note that on HMC Version 7 you do not provide the amount of additional memory that you want to add to the partition, but the total amount of memory that you want to assign to the partition. In Figure 6-10 the total amount of memory allocated to the partition was changed to 5 GB. Figure 6-10 Changing the total amount of memory of the partition to 5 GB 3. Click OK when you are done. A status window as shown in Figure 6-11 displays. Figure 6-11 Dynamic LPAR operation in progress298 IBM PowerVM Virtualization Managing and Monitoring
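Memory can be added from the HMC command line in the same way. A sketch with placeholder system and partition names; for memory, chhwres takes the quantity to add or remove in megabytes with the -q flag:
hscroot@hmc1:~> chhwres -r mem -m p570_170 -o a -p AIX61 -q 1024
This example adds 1 GB to the partition, subject to the minimum and maximum memory limits in the partition profile.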
  • 336. Remember: For an IBM i partition dynamically added memory is added to the base memory pool (system pool 2 as shown in the WRKSYSSTS or WRKSHRPOOL panel) and dynamically distributed to other memory pools when using the default automatic performance adjustment (QPFRADJ=2 system value setting).6.2.3 Removing memory dynamically The following steps describe the dynamic removal of memory from a logical partition: 1. Select the logical partition where you want to initiate a dynamic LPAR operation. The first window in any dynamic operation will be similar to Figure 6-12.Figure 6-12 Add or remove memory operation Chapter 6. Dynamic operations 299
  • 337. For our AIX partition, the memory settings before the operation are: # lsattr -El mem0 goodsize 5120 Amount of usable physical memory in Mbytes False size 5120 Total amount of physical memory in Mbytes False The graphical user interface to change the memory allocated to a partition is the same one used to add memory in Figure 6-10 on page 298. On HMC Version 7, you do not choose the amount to remove from the partition as you did in the previous versions of HMC. Now you just change the total amount of memory to be assigned to the partition. In the command output shown, the partition has 5 GB and you want to remove, for example, 1 GB from it. To do so, simply change the total amount of memory to 4 GB, as shown in Figure 6-13. Figure 6-13 Dynamically reducing 1 GB from a partition 2. Click OK when done. The following command shows the effect of the memory deletion on our AIX partition: # lsattr -El mem0300 IBM PowerVM Virtualization Managing and Monitoring
  • 338. goodsize 4096 Amount of usable physical memory in Mbytes False size 4096 Total amount of physical memory in Mbytes False Note: For an IBM i partition, dynamically removed memory is removed from the base memory pool and only to the extent of leaving the minimum amount of memory required in the base pool as determined by the base storage pool minimum size (QBASPOOL system value).6.2.4 Adding physical adapters dynamically Follow these steps to add physical adapters dynamically: 1. Log in to HMC and select the system-managed name. On the right, select the partition where you want to execute a dynamic LPAR operation, as shown in Figure 6-14.Figure 6-14 LPAR overview menu Chapter 6. Dynamic operations 301
  • 339. 2. On the Tasks menu on the right side of the window, select Dynamic Logical Partitioning  Physical Adapters  Add as shown in Figure 6-15.Figure 6-15 Add physical adapter operation302 IBM PowerVM Virtualization Managing and Monitoring
  • 340. 3. The next window will look like the one in Figure 6-16. Select the physical adapter you want to add to the partition.Figure 6-16 Select physical adapter to be added 4. Click OK when done. Chapter 6. Dynamic operations 303
  • 341. 6.2.5 Moving physical adapters dynamically To move a physical adapter, you first have to release the adapter in the partition that currently owns it. 1. Use the HMC to list which partition owns the adapter. In the left menu, select Systems Management and then click the system’s name. In the right menu, select Properties. 2. Select the I/O tab on the window that will appear, as shown in Figure 6-17. You can see each I/O adapter for each partition.Figure 6-17 I/O adapters properties for a managed system Remove devices that belong to the adapter, such as optical drives, as well. The optical drive often needs to be moved to another partition. For an AIX partition, use the lsslot -c slot command as root user to list adapters and their members. In the Virtual I/O Server you can use the lsdev -slots command as padmin user as follows: $ lsdev -slots # Slot Description Device(s)304 IBM PowerVM Virtualization Managing and Monitoring
  • 342. U789D.001.DQDYKYW-P1-T1 Logical I/O Slot pci4 usbhc0 usbhc1U789D.001.DQDYKYW-P1-T3 Logical I/O Slot pci3 sissas0U9117.MMA.101F170-V1-C0 Virtual I/O Slot vsa0U9117.MMA.101F170-V1-C2 Virtual I/O Slot vasi0U9117.MMA.101F170-V1-C11 Virtual I/O Slot ent2U9117.MMA.101F170-V1-C12 Virtual I/O Slot ent3U9117.MMA.101F170-V1-C13 Virtual I/O Slot ent4U9117.MMA.101F170-V1-C14 Virtual I/O Slot ent6U9117.MMA.101F170-V1-C21 Virtual I/O Slot vhost0U9117.MMA.101F170-V1-C22 Virtual I/O Slot vhost1U9117.MMA.101F170-V1-C23 Virtual I/O Slot vhost2U9117.MMA.101F170-V1-C24 Virtual I/O Slot vhost3U9117.MMA.101F170-V1-C25 Virtual I/O Slot vhost4U9117.MMA.101F170-V1-C50 Virtual I/O Slot vhost5U9117.MMA.101F170-V1-C60 Virtual I/O Slot vhost6For an AIX partition, use the rmdev -l pcin -d -R command to remove theadapter from the configuration, that is, release it to be able to move it anotherpartition. In the Virtual I/O Server, you can use the rmdev -dev pcin -recursivecommand (n is the adapter number).For an IBM i partition, vary off any devices using the physical adapter beforemoving it to another partition by using a VRYCFG command such as VRYCFGCFGOBJ(TAP02) CFGTYPE(*DEV) STATUS(*OFF) to release the tape drivefrom using the physical adapter. To see which devices are attached to whichadapter, use a WRKHDWRSC command such as WRKHDWRSC *STG forstorage devices. Select option 7=Display resource detail for an adapterresource to see its physical location (slot) information. Select option 9=Workwith resources to list the devices attached to it.Example 6-1 shows how to remove a Fibre Channel adapter from an AIXpartition that was virtualized and does not need this adapter any more.Example 6-1 Removing the Fibre Channel adapter# lsslot -c pci# Slot Description Device(s)U789D.001.DQDYKYW-P1-C2 PCI-E capable, Rev 1 slot with 8x lanes fcs0 fcs1U789D.001.DQDYKYW-P1-C4 PCI-X capable, 64 bit, 266MHz slot ent0 ent1# rmdev -dl fcs0 -Rfcnet0 deletedfscsi0 deletedfcs0 deleted# rmdev -dl fcs1 -Rfcnet1 deletedfscsi1 deletedfcs1 deleted Chapter 6. Dynamic operations 305
  • 343. After the adapter has been deleted in the virtual I/O client, the physical adapter can be moved to another partition by using the HMC using the following steps: 1. Select the partition that currently holds the adapter and then select Dynamic Logical Partitioning  Physical Adapters  Move or Remove (see Figure 6-18). The adapter must not be set as required in the profile. To change the setting from required to desired, you must update the profile.Figure 6-18 Move or remove physical adapter operation306 IBM PowerVM Virtualization Managing and Monitoring
  • 344. 2. Select the adapter to be moved and select the receiving partition as shown in Figure 6-19.Figure 6-19 Selecting adapter in slot C2 to be moved to partition AIX_LPAR 3. Click OK to execute. 4. For an AIX partition, run the cfgmgr command (cfgdev in the Virtual I/O Server) in the receiving partition to make the adapter and its devices available. An IBM i partition, by default, automatically discovers and configures new devices attached to it if the system value QAUTOCFG is set to 1. Therefore, they only need to be varied on by using the VRYCFG command before they are used. Chapter 6. Dynamic operations 307
  • 345. 5. To reflect the change across restarts of the partitions, remember to update the profiles of both partitions. Alternatively, use the Configuration  Save Current Configuration option to save the changes to a new profile as shown in Figure 6-20.Figure 6-20 Save current configuration308 IBM PowerVM Virtualization Managing and Monitoring
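Physical adapter moves can also be driven from the HMC command line. In the sketch below the system and partition names and the DRC index value are placeholders; the slot is identified by its DRC index, which you can first look up with lshwres. Confirm the flags on your HMC level before using them:
hscroot@hmc1:~> lshwres -r io --rsubtype slot -m p570_170 -F drc_index,drc_name,lpar_name
hscroot@hmc1:~> chhwres -r io -m p570_170 -o m -p VIO_Server1 -t AIX_LPAR -l 21010202
Here -o m requests a move, -t names the receiving partition, and -l gives the DRC index of the slot reported by lshwres.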
  • 346. 6.2.6 Removing physical adapters dynamically To remove virtual adapters from a partition dynamically, follow these steps: 1. On the HMC, select the partition to remove the adapter from and choose Dynamic Logical Partitioning  Physical Adapters  Move or Remove (Figure 6-21).Figure 6-21 Remove physical adapter operation Chapter 6. Dynamic operations 309
  • 347. 2. Select the adapter you want to delete and don’t select any partition in Move to partition selection box as shown in Figure 6-22.Figure 6-22 Select physical adapter to be removed 3. Click OK when done.310 IBM PowerVM Virtualization Managing and Monitoring
  • 348. 6.2.7 Adding virtual adapters dynamically The following steps illustrate one way to add virtual adapters dynamically: 1. Log in to HMC and then select the system-managed name. In the right window, select the partition where you want to execute a dynamic LPAR operation. 2. On the Tasks menu on the right side of the window, select Dynamic Logical Partitioning  Virtual Adapters as shown in Figure 6-23.Figure 6-23 Add virtual adapter operation Chapter 6. Dynamic operations 311
  • 349. 3. The next window will look like the one in Figure 6-24. Click Actions and select Create  SCSI Adapter.Figure 6-24 Dynamically adding virtual SCSI adapter312 IBM PowerVM Virtualization Managing and Monitoring
  • 350. 4. Figure 6-25 shows the window after selecting SCSI Adapter. Type the slot adapter number of the new virtual SCSI being created, then select whether this new SCSI adapter can be accessed by any client partition or only by a specific one. In this case, as an example, we are only allowing the SCSI client adapter in slot 2 of the AIX61 partition to access it. Consideration: In this example we used a different slot numbering for the client and the server virtual SCSI adapter. Be sure to create an overall numbering scheme. Figure 6-25 Virtual SCSI adapter properties Chapter 6. Dynamic operations 313
  • 351. 5. The newly created adapter is listed in the adapters list as shown in Figure 6-26.Figure 6-26 Virtual adapters for an LPAR 6. Click OK when done. To reflect the change across restart of the partition, remember to update the profile of partition.6.2.8 Removing virtual adapters dynamically To remove virtual adapters from a partition dynamically, follow these steps: 1. For AIX, unconfigure the devices and unconfigure the virtual adapter itself on the AIX. For IBM i, vary-off any devices attached to the virtual client adapter. Remember: First remove all associated virtual client adapters in the virtual I/O clients before removing a virtual server adapter in the Virtual I/O Server.314 IBM PowerVM Virtualization Managing and Monitoring
  • 352. 2. On the HMC, select the partition to remove the adapter from and click Dynamic Logical Partitioning  Virtual Adapters (Figure 6-27).Figure 6-27 Remove virtual adapter operation Chapter 6. Dynamic operations 315
  • 353. 3. Select the adapter you want to delete and select Actions  Delete. (Figure 6-28).Figure 6-28 Delete virtual adapter 4. Click OK when done.6.2.9 Removing or replacing a PCI Hot Plug adapter The PCI Hot Plug feature enables you to remove host-based adapters without shutting down the partition. Replacing an adapter might be needed, for example, when you exchange 2 Gb Fibre Channel adapters for a 4 Gb Fibre Channel adapter, or when you need to make configuration changes or updates. For virtual Ethernet adapters in the virtual I/O client, redundancy must be enabled either through Shared Ethernet failover being enabled on the Virtual I/O Servers, or through Network Interface Backup being configured if continuous network connectivity is required. If there is no redundancy for Ethernet, the replace operation can be done while the virtual I/O client is still running, but it will316 IBM PowerVM Virtualization Managing and Monitoring
  • 354. lose network connectivity during replacement. For virtual I/O clients that have redundant paths to their virtual disks and are not mirroring these disks, it is necessary to shut them down while the adapter is being replaced. On the Virtual I/O Server, in both cases there will be child devices connected to the adapter because the adapter would be in use before. Therefore, the child devices and the adapter will have to be unconfigured, before the adapter can be removed or replaced. Normally there is no need to remove the child devices (for example disks and mapped disks, also known as Virtual Target Devices) in the case of a Fibre Channel adapter replacement, but they have to be unconfigured (set to the defined state) before the adapter they rely on can be replaced.6.3 Dynamic LPAR operations on Linux for Power This section explains how to run dynamic LPAR operations in Linux for Power.6.3.1 Service and productivity tools for Linux for Power Virtualization and hardware support in Linux for Power is realized through Open Source drivers included in the standard Linux Kernel for 64-bit POWER-based systems. However, IBM provides additional tools for virtualization management. These tools are useful for exploiting advanced features and hardware diagnostics. These tools are called Service and productivity tools for Linux for Power, and are provided as a no-cost download for all supported distributions and systems. The tools include Reliable Scalable Cluster Technology (RSCT) daemons used for communication with the Hardware Management Console. Some packages are Open Source and are included on the distribution media. However, the website download offers the latest version. The URL for service and productivity tools is: http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags Consideration: Package names and dependencies can vary between one Linux distribution and another. See the Service and productivity tools website for more detailed information about each Linux release. Chapter 6. Dynamic operations 317
  • 355. Table 6-1 provides details about each package. Table 6-1 Service and productivity tools description Tool name Description llibrtas Platform Enablement Library (base tool) The librtas package contains a library that allows applications to access certain functionality provided by platform firmware. This functionality is required by many of the other higher-level service and productivity tools. This package is open source and shipped by both Red Hat and Novell SUSE. src SRC is a facility for managing daemons on a system. It provides a standard command interface for defining, undefining, starting, stopping, querying status, and controlling trace for daemons. This package is currently IBM proprietary. rsct.core and Reliable Scalable Cluster Technology (RSCT) core and rsct.core.utils utilities The RSCT packages provide the Resource Monitoring and Control (RMC) functions and infrastructure needed to monitor and manage one or more Linux systems. RMC provides a flexible and extensible system for monitoring numerous aspects of a system. It also allows customized responses to detected events. This package is currently IBM proprietary. csm.core and Cluster Systems Management (CSM) core and client csm.client The CSM packages provide for the exchange of host-based authentication security keys. These tools also set up distributed RMC features on the Hardware Management Console (HMC). This package is currently IBM proprietary. devices.chrp.base Service Resource Manager (ServiceRM) .ServiceRM Service Resource Manager is a Reliable, Scalable, Cluster Technology (RSCT) resource manager that creates the Serviceable Events from the output of the Error Log Analysis Tool (diagela). ServiceRM then sends these events to the Service IBM Focal Point™ on the Hardware Management Console (HMC). This package is currently IBM proprietary.318 IBM PowerVM Virtualization Managing and Monitoring
  • 356. Tool name DescriptionDynamicRM DynamicRM (Productivity tool) Dynamic Resource Manager is a Reliable Scalable Cluster Technology (RSCT) resource manager that allows a Hardware Management Console (HMC) to do the following: Dynamically add or remove processors or I/O slots from a running partition Concurrently update system firmware Perform certain shutdown operations on a partition Show how virtual disks map to disk names in Linux and to virtual Ethernet interfaces, and support migration from POWER6-based to POWER7-based servers. This package is currently IBM proprietary.lsvpd / libvpd Hardware Inventory The lsvpd package contains the lsvpd, lscfg, and lsmcode commands. These commands, along with a boot-time scanning script named update-lsvpd-db, constitute a hardware inventory system. The lsvpd command provides Vital Product Data (VPD) about hardware components to higher-level serviceability tools. The lscfg command provides a more human-readable format of the VPD, and system-specific information. This package is open source, and shipped by both Red Hat and Novell SuSE.servicelog Service Log (service tool) The Service Log package creates a database to store system-generated events that might require service. The package includes tools for querying the database. This package is open source, and shipped by both Red Hat and Novell SuSE.ppc64-diag Error Log Analysis This tool provides automatic analysis and notification of errors reported by the platform firmware on IBM systems. This RPM analyzes errors written to /var/log/platform. If a corrective action is required, notification is sent to the Service Focal Point on the Hardware Management Console (HMC), if so equipped, or to users subscribed for notification through the file /etc/diagela/mail_list. The Serviceable Event sent to the Service Focal Point and listed in the email notification may contain a Service Request Number. This number is listed in the Diagnostics manual Information for Multiple Bus Systems. This package is currently IBM proprietary. Chapter 6. Dynamic operations 319
  • 357. Tool name Description powerpc-utils Service Aids The utilities in the powerpc-utils and powerpc-utils-papr packages enable several RAS (Reliability, Availability, and Serviceability) features. Among others, these utilities include the update_flash command for installing system firmware updates, the serv_command for modifying various serviceability policies, the usysident and usysattn utilities for manipulating system LEDs, the bootlist command for updating the list of devices from which the system will boot, and the snap command for capturing extended error data to aid analysis of intermittent errors. This package is open source. IBMinvscout Inventory Scout This tool surveys one or more systems for hardware and software information. The gathered data can be used by web services such as the Microcode Discovery Service, which generates a report indicating if installed microcode needs to be updated. This package is currently IBM proprietary. IBM Installation Toolkit for Linux for Power As an alternative to manually installing additional packages as described in “Installing Service and Productivity tools” on page 321, you can use the IBM Linux for Power Installation Toolkit. The IBM Installation Toolkit for Linux for Power is a bootable CD that provides access to the additional packages that you need to install to provide additional capabilities of your server. It also allows you to set up an installation server to make your customized operating system installation files available for other server installations. Download the IBM Installation Toolkit for Linux for Power iso image from: http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html The IBM Installation Toolkit for Linux for Power simplifies the Linux for Power installation by providing a wizard that allows you to install and configure Linux for Power machines in just a few steps. It supports DVD and network-based installs by providing an application to create and manage network repositories containing Linux and IBM value-added packages. The IBM Installation Toolkit includes: The Welcome Center, the main toolkit application, which is a centralized user interface for system diagnostics, Linux, and RAS Tools installation; microcode update; and documentation.320 IBM PowerVM Virtualization Managing and Monitoring
  • 358. System Tools, which is an application to create and manage network repositories for Linux and IBM RAS packages. The POWER Advance Toolchain, which is a technology preview toolchain that provides decimal floating point support, Power architecture c-library optimizations, optimizations in the gcc compiler for POWER, and performance analysis tools. Microcode packages. More than 20 RAS Tools packages. More than 60 Linux for Power user guides and manuals.Installing Service and Productivity toolsDownload Service and Productivity tools at:http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.htmlSelect your distribution and whether you have an HMC-connected system.Download all the packages in one directory and run the rpm -Uvh<filename>.rpm command for each package individually. Depending on yourLinux distribution, version, and installation choice, you will be prompted formissing dependencies. Keep your software installation source for yourdistribution available and accessible. Important: Install the packages in the same order as listed at the Service and Productivity tools website. This avoids install and setup issues.Service and Productivity tools examplesAfter the packages are installed, run the vpdupdate command to initialize theVital Product Data database.Now you can list hardware with the lscfg command and see proper locationcodes as shown in Example 6-2.Example 6-2 lscfg command on Linux[root@localhost ~]# lscfgINSTALLED RESOURCE LISTThe following resources are installed on the machine.+/- = Added or deleted from Resource List.* = Diagnostic support not available. Model Architecture: chrp Model Implementation: Multiple Processor, PCI Bus Chapter 6. Dynamic operations 321
  • 359. + sys0 System Object + sysplanar0 System Planar + eth0 U9117.MMA.101F170-V5-C2-T1 Interpartition Logical LAN + eth1 U9117.MMA.101F170-V5-C3-T1 Interpartition Logical LAN + scsi0 U9117.MMA.101F170-V5-C21-T1 Virtual SCSI I/O Controller + sda U9117.MMA.101F170-V5-C21-T1-L1-L0 Virtual SCSI Disk Drive (21400 MB) + scsi1 U9117.MMA.101F170-V5-C22-T1 Virtual SCSI I/O Controller + sdb U9117.MMA.101F170-V5-C22-T1-L1-L0 Virtual SCSI Disk Drive (21400 MB) + mem0 Memory + proc0 Processor You can use the lsvpd command to display vital product data (for example, the firmware level), as shown in Example 6-3. Example 6-3 lsvpd command [root@localhost ~]# lsvpd *VC 5.0 *TM IBM,9117-MMA *SE IBM,02101F170 *PI IBM,02101F170 *OS Linux 2.6.18-53.el5 To display virtual adapters, use the lsvio command as shown in Example 6-4. Example 6-4 Display virtual SCSI and network [root@linuxlpar ~]# lsvio -s scsi0 U9117.MMA.101F170-V5-C21-T1 scsi1 U9117.MMA.101F170-V5-C22-T1 [root@linuxlpar ~]# lsvio -e eth0 U9117.MMA.101F170-V5-C2-T1 eth1 U9117.MMA.101F170-V5-C3-T1 Dynamic LPAR with Linux for Power Dynamic logical partitioning is needed to change the physical or virtual resources assigned to the partition without reboot or disruption. If you create or assign a new virtual adapter, a dynamic LPAR operation is needed to make the operating system aware of this change. On Linux-based systems, existing hot plug and322 IBM PowerVM Virtualization Managing and Monitoring
  • 360. udev mechanisms are utilized for dynamic LPAR operations, so all the changesoccur dynamically and there is no need to run a configuration manager.Dynamic LPAR requires a working IP connection to the Hardware ManagementConsole (port 657) and the following additional packages installed on the Linuxsystem as described in “Installing Service and Productivity tools” on page 321: librtas, src, rsct.core and rsct.core.utils, csm.core and csm.client, powerpc-utils-papr, devices.chrp.base.ServiceRM, DynamicRM, rpa-pci-hotplug, rpa-dlparIf you encounter any dynamic LPAR problems, try to ping your HMC first. If theping is successful, try to list the rmc connection as shown in Example 6-5.Example 6-5 List the management server[root@linuxlpar ~]# lsrsrc IBM.ManagementServerResource Persistent Attributes for IBM.ManagementServerresource 1: Name = "9.3.5.128" Hostname = "9.3.5.128" ManagerType = "HMC" LocalHostname = "9.3.5.115" ClusterTM = "9078-160" ClusterSNum = "" ActivePeerDomain = "" NodeNameList = {"linuxlpar"} Chapter 6. Dynamic operations 323
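You can also verify the connection from the HMC side. A sketch, assuming the HMC restricted shell; lspartition -dlpar lists the partitions that the HMC currently considers capable of dynamic LPAR operations, so a partition missing from the list (or shown as inactive) usually points to an RMC communication problem:
hscroot@hmc1:~> lspartition -dlpar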
  • 361. Adding processors dynamically After the tools are installed, and depending on the available shared system resources, you can use the HMC to add (virtual) processors and memory to the desired partition (adding processing units does not require dynamic LPAR) as shown in Figure 6-29.Figure 6-29 Adding a processor to a Linux partition324 IBM PowerVM Virtualization Managing and Monitoring
  • 362. From the panel shown in Figure 6-30, you can increase the number ofprocessors.Figure 6-30 Increasing the number of virtual processorsYou will be able to receive the following messages on the client if you run thetail -f /var/log/messages command as shown in Example 6-6.Example 6-6 Linux finds new processorsDec 2 11:26:08 linuxlpar : drmgr: /usr/sbin/drslot_chrp_cpu -c cpu -a -q 60 -pent_capacity -w 5 -d 1Dec 2 11:26:08 linuxlpar : drmgr: /usr/sbin/drslot_chrp_cpu -c cpu -a -q 1 -w5 -d 1Dec 2 11:26:08 linuxlpar kernel: Processor 2 found.Dec 2 11:26:09 linuxlpar kernel: Processor 3 found. Chapter 6. Dynamic operations 325
  • 363. In addition to the messages in the log directory file, you can monitor the changes by executing cat /proc/ppc64/lparcfg. Example 6-7 shows that the partition had 0.5 CPU as its entitled capacity. Example 6-7 The lparcfg command before adding CPU dynamically lparcfg 1.7 serial_number=IBM,02101F170 system_type=IBM,9117-MMA partition_id=7 R4=0x32 R5=0x0 R6=0x80070000 R7=0x800000040004 BoundThrds=1 CapInc=1 DisWheRotPer=5120000 MinEntCap=10 MinEntCapPerVP=10 MinMem=128 MinProcs=1 partition_max_entitled_capacity=100 system_potential_processors=16 DesEntCap=50 DesMem=2048 DesProcs=1 DesVarCapWt=128 DedDonMode=0 partition_entitled_capacity=50 group=32775 system_active_processors=4 pool=0 pool_capacity=400 pool_idle_time=0 pool_num_procs=0 unallocated_capacity_weight=0 capacity_weight=128 capped=0 unallocated_capacity=0 purr=19321795696 partition_active_processors=1 partition_potential_processors=2 shared_processor_mode=1 The user of this partition added 0.1 CPU dynamically. Two LPAR configuration attributes that reflect the changes associated with addition/deletion of CPUs are partition_entitled_capacity and partition_active_processors. The change in 326 IBM PowerVM Virtualization Managing and Monitoring
  • 364. the values of lparcfg attributes, after the addition of 0.1 CPU, is shown inExample 6-8.Example 6-8 The lparcfg command after adding 0.1 CPU dynamicallylparcfg 1.7serial_number=IBM,02101F170system_type=IBM,9117-MMA... omitted lines ...partition_entitled_capacity=60group=32775system_active_processors=4pool=0pool_capacity=400pool_idle_time=0pool_num_procs=0unallocated_capacity_weight=0capacity_weight=128capped=0unallocated_capacity=0purr=19666496864partition_active_processors=2partition_potential_processors=2shared_processor_mode=1Removing processors dynamicallyTo remove a processor dynamically, repeat the preceding steps, decrease thenumber of processors, and run the dmesg command. The messages will be asshown in Example 6-9.Example 6-9 Ready to die messageIRQ 18 affinity broken off cpu 0IRQ 21 affinity broken off cpu 0cpu 0 (hwid 0) Ready to die...cpu 1 (hwid 1) Ready to die... Consideration: The DLPAR changes made to the partition attributes (CPU and memory) are not saved to the current active profile. Therefore, users can save the current partition configuration by selecting partition and clicking Name  Configuration  Save Current Configuration and then, during the next reboot, using the saved partition profile as the default profile. An alternative method is to make the same changes in the default profile. Chapter 6. Dynamic operations 327
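If you want to watch these attributes change while a dynamic LPAR operation is running, standard Linux tools are enough. A simple illustration using the attribute names shown in Example 6-7 and Example 6-8:
# watch -n 5 'grep -E "partition_entitled_capacity|partition_active_processors" /proc/ppc64/lparcfg'
The displayed values should update once the HMC operation completes.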
  • 365. Adding memory dynamically Adding memory is supported by Red Hat Enterprise (RHEL5.0 or later) and Novell SUSE (SLES10 or later) Linux distributions. Important: You must ensure that powerpc-utils-papr rpm is installed. See the productivity download site for the release information. Before you add or remove memory, you can obtain the partition’s current memory information by executing cat /proc/meminfo as shown in Example 6-10. Example 6-10 Display of total memory in the partition before adding memory [root@VIOCRHEL52 ~]# cat /proc/meminfo | head -3 MemTotal: 2057728 kB MemFree: 1534720 kB Buffers: 119232 kB328 IBM PowerVM Virtualization Managing and Monitoring
  • 366. From the Hardware Management Console of the partition, click System Management  your Server  Dynamic Logical Partitioning  Memory  Add or Remove. The entire navigation process is illustrated in Figure 6-31.Figure 6-31 DLPAR add or remove memory Chapter 6. Dynamic operations 329
  • 367. After you select the Add or Remove option, you receive the display shown in Figure 6-32. Figure 6-32 DLPAR adding 2 GB memory Enter the desired memory for the partition in the Assigned Memory box by increasing the value (in the example, it is increased by 1 GB), then click OK. After the action completes, you can verify the addition or removal of memory by executing the command cat /proc/meminfo as shown in Example 6-11. Example 6-11 Total memory in the partition after adding 1 GB dynamically [root@VIOCRHEL52 ~]# cat /proc/meminfo | head -3 MemTotal: 3106304 kB MemFree: 2678528 kB Buffers: 65024 kB330 IBM PowerVM Virtualization Managing and Monitoring
  • 368. Remember: The DLPAR changes made to the partition attributes (CPU and memory) are not saved to the current active profile. Therefore, you can save the current partition configuration by selecting partition Name  Configuration  Save Current Configuration. Then, during the next reboot, use the saved partition profile as the default profile. An alternative method is to make the same changes in the default profile.Managing virtual SCSI changes in LinuxIf a new virtual SCSI adapter is added to a Linux partition and dynamic LPAR isfunctional, this adapter and any attached disks are immediately ready for use.Sometimes you might need to add a virtual target device to an existing virtualSCSI adapter. In this case, the operation is not a dynamic LPAR operation. Theadapter itself does not change: instead, an additional new disk is attached to thesame adapter. You must issue a scan command to recognize this new disk andrun the dmesg command to see the result as shown in Example 6-12.Example 6-12 Rescanning a SCSI host adapter# echo "- - -" > /sys/class/scsi_host/host0/scan# dmesg# SCSI device sdb: 585937500 512-byte hdwr sectors (300000 MB)sdb: Write Protect is offsdb: Mode Sense: 2f 00 00 08sdb: cache data unavailablesdb: assuming drive cache: write throughSCSI device sdb: 585937500 512-byte hdwr sectors (300000 MB)sdb: Write Protect is offsdb: Mode Sense: 2f 00 00 08sdb: cache data unavailablesdb: assuming drive cache: write through sdb: sdb1 sdb2sd 0:0:2:0: Attached scsi disk sdbsd 0:0:2:0: Attached scsi generic sg1 type 0The added disk is recognized and ready to use as /dev/sdb.If you are using software mirroring on the Linux client and one of the adapterswas set faulty due to Virtual I/O Server maintenance, you might need to rescanthe disk after both Virtual I/O Servers are available again. Use the followingcommand to issue a disk rescan:echo 1 > /sys/bus/scsi/drivers/sd/<SCSI-ID>/block/device/rescan Chapter 6. Dynamic operations 331
  • 369. More examples and detailed information about SCSI scanning is provided in the IBM Linux for Power Wiki page at: http://www-941.ibm.com/collaboration/wiki/display/LinuxP/SCSI+-+Hot+add %2C+remove%2C+rescan+of+SCSI+devices6.4 Dynamic LPAR operations on the Virtual I/O Server This section discusses two maintenance tasks for a Virtual I/O Server partition: Ethernet adapter replacement on the Virtual I/O Server Replacing a Fibre Channel adapter on the Virtual I/O Server If you want to change any processor, memory, or I/O configuration, follow the steps described in 6.2, “Dynamic LPAR operations” on page 293, because they are similar to ones required for the Virtual I/O Server.6.4.1 Replacing Ethernet adapters on the Virtual I/O Server You can perform the replace and remove functions by using the diagmenu command. Follow these steps: 1. Enter diagmenu and press Enter. 2. Read the Diagnostics Operating Instructions and press Enter to continue. 3. Select Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.) and press Enter. 4. Select Hot Plug Task and press Enter. 5. Select PCI Hot Plug Manager and press Enter. 6. Select Replace/Remove a PCI Hot Plug Adapter and press Enter. 7. Select the adapter you want to replace and press Enter. 8. Select replace in the Operation field and press Enter. 9. Before a replace operation is performed, the adapter can be identified by a blinking LED at the adapter card. You will see the following message: The visual indicator for the specified PCI slot has been set to the identify state. Press Enter to continue or enter x to exit.332 IBM PowerVM Virtualization Managing and Monitoring
  • 370. If there are still devices connected to the adapter and a replace or removeoperation is performed on that device, there will be error messages displayed indiagmenu:The visual indicator for the specified PCI slot has been set to the identifystate. Press Enter to continue or enter x to exit.The specified slot contains device(s) that are currentlyconfigured. Unconfigure the following device(s) and try again.pci5ent0ent1ent2ent3These messages mean that devices that are dependent on this adapter have tobe unconfigured first.To replace a single Physical Ethernet adapter that is part of a Shared EthernetAdapter, follow these steps:1. Use the diagmenu command to unconfigure the Shared Ethernet Adapter. Then select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Unconfigure a Device. You should get output similar to the following: Device Name Move cursor to desired item and press Enter. Use arrow keys to scroll. [MORE...16] en7 Defined Standard Ethernet Network Interface ent0 Available 05-20 4-Port 10/100/1000 Base-TX PCI-X Adapt ent1 Available 05-21 4-Port 10/100/1000 Base-TX PCI-X Adapt ent2 Available 05-30 4-Port 10/100/1000 Base-TX PCI-X Adapt ent3 Available 05-31 4-Port 10/100/1000 Base-TX PCI-X Adapt ent4 Available Virtual I/O Ethernet Adapter (l-lan) ent5 Available Virtual I/O Ethernet Adapter (l-lan) ent6 Available Shared Ethernet Adapter et0 Defined 05-20 IEEE 802.3 Ethernet Network Interface et1 Defined 05-21 IEEE 802.3 Ethernet Network Interface [MORE...90] Select the Shared Ethernet Adapter (in this example, ent6), and in the following dialogue choose to keep the information about the database: Type or select values in entry fields. Press Enter AFTER making all desired changes. Chapter 6. Dynamic operations 333
  • 371. [Entry Fields] * Device Name [ent6] + Unconfigure any Child Devices no + KEEP definition in database yes + Press Enter to accept the changes. The system will show that the adapter is now defined: ent6 Defined 2. Perform the same operation on the physical adapter (in this example ent0, ent1, ent2, ent3, and pci5) with the difference that now “Unconfigure any Child devices” has to be set to yes. 3. Run the diagmenu command. Then select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Replace/Remove a PCI Hot Plug adapter, and select the physical adapter. You see an output panel similar to the following: Command: running stdout: yes stderr: no Before command completion, additional instructions may appear below. The visual indicator for the specified PCI slot has been set to the identify state. Press Enter to continue or enter x to exit. Press Enter as directed and the next message will appear: The visual indicator for the specified PCI slot has been set to the action state. Replace the PCI card in the identified slot and press Enter to continue. Enter x to exit. Exiting now leaves the PCI slot in the removed state. 4. Locate the blinking adapter, replace it, and press Enter. The window will display the message Replace Operation Complete. 5. Run the diagmenu command. Then select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Configure a Defined Device and select the physical Ethernet adapter ent0 that was replaced. 6. Repeat the Configure operation for the Shared Ethernet Adapter. Note that this method changes if the physical Ethernet adapter is part of a Network Interface Backup Configuration or an IEE 802.3ad link aggregation.334 IBM PowerVM Virtualization Managing and Monitoring
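After the Shared Ethernet Adapter is configured again, it is good practice to confirm that the bridge is active before closing the maintenance window. A short sketch using the device name from this example (ent6 is the Shared Ethernet Adapter); run as padmin on the Virtual I/O Server:
$ lsmap -all -net
$ entstat -all ent6
lsmap -all -net shows the Shared Ethernet Adapter with its backing physical and virtual adapters, and the entstat output lets you confirm that the replaced physical adapter has an active link and is passing traffic.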
  • 372. 6.4.2 Replacing a Fibre Channel adapter on the Virtual I/O Server For Virtual I/O Servers, have at least two Fibre Channel adapters attached for redundant access to FC-attached disks. This allows for concurrent maintenance because the multipathing driver of the attached storage subsystem is supposed to handle any outage of a single Fibre Channel adapter. This section explains how to hot-plug a Fibre Channel adapter connected to a IBM DS4000 series storage device. Depending on the storage subsystem used and the multipathing driver installed, your results may be different. If there are disks mapped to the virtual SCSI adapters, these devices have to be unconfigured first because there is no automatic configuration method used to define them. 1. Use the diagmenu command to unconfigure devices that are dependent on the Fibre Channel adapter. Run diagmenu and then select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Unconfigure a device. Select the disk (or disks) in question and set its state to Defined as shown: Unconfigure a Device Device Name Move cursor to desired item and press Enter. Use arrow keys to scroll. [MORE...43] hdisk6 Available 04-08-02 3542 (200) Disk Array Device hdisk9 Defined 09-08-00-4,0 16 Bit LVD SCSI Disk Drive inet0 Available Internet Network Extension iscsi0 Available iSCSI Protocol Device lg_dumplv Defined Logical volume lo0 Available Loopback Network Interface loglv00 Defined Logical volume lpar1_rootvg Available Virtual Target Device - Disk lpar2_rootvg Available Virtual Target Device - Disk lvdd Available LVM Device Driver [MORE...34] 2. Perform that task for every mapped disk (Virtual Target Device). Then set the state of the Fibre Channel Adapter to Defined also as shown: Unconfigure a Device Device Name ? Move cursor to desired item and press Enter. Use arrow keys to scroll. [MORE...16] et1 Defined 05-09 IEEE 802.3 Ethernet Network Inter et2 Defined IEEE 802.3 Ethernet Network Inter et3 Defined IEEE 802.3 Ethernet Network Inter Chapter 6. Dynamic operations 335
  • 373. et4 Defined IEEE 802.3 Ethernet Network Inter fcnet0 Defined 04-08-01 Fibre Channel Network Protocol De fcnet1 Defined 06-08-01 Fibre Channel Network Protocol De fcs0 Available 04-08 FC Adapter fcs1 Available 06-08 FC Adapter? fscsi0 Available 04-08-02 FC SCSI I/O Controller Protocol D fscsi1 Available 06-08-02 FC SCSI I/O Controller Protocol D? [MORE...61] Be sure to set Unconfigure any Child Devices to Yes. This will unconfigure the fcnet0 and fscsi0 devices, and the RDAC driver device dac0 as shown: Type or select values in entry fields. Press Enter AFTER making all desired changes. [Entry Fields] * Device Name [fcs0] Unconfigure any Child Devices yes KEEP definition in database yes Following is the output of that command, showing the other devices unconfigured: COMMAND STATUS Command: OK stdout: yes stderr: no Before command completion, additional instructions may appear below. fcnet0 Defined dac0 Defined fscsi0 Defined fcs0 Defined 3. Run diagmenu, and select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Replace/Remove a PCI Hot Plug Adapter. 4. Select the adapter to be replaced. Set the operation to replace, then press Enter. You will be presented with the following dialogue: COMMAND STATUS Command: running stdout: yes stderr: no Before command completion, additional instructions may appear below. The visual indicator for the specified PCI slot has been set to the identify state. Press Enter to continue or enter x to exit.336 IBM PowerVM Virtualization Managing and Monitoring
  • 374. 5. Press Enter as directed and the following message will appear: The visual indicator for the specified PCI slot has been set to the action state. Replace the PCI card in the identified slot and press Enter to continue. Enter x to exit. Exiting now leaves the PCI slot in the removed state. 6. Locate the blinking adapter, replace it, and press Enter. The system will display the message Replace Operation Complete. 7. Run diagmenu, and select Task Selection  Hot Plug Task  PCI Hot Plug Manager  Install/Configure Devices Added After IPL. 8. Press Enter. This calls the cfgdev command internally and sets all previously unconfigured devices back to Available. 9. When a Fibre Channel adapter has been replaced, update the zoning on the Fibre Channel switch and define the WWPN of the replacement adapter to the storage subsystem; the replacement adapter cannot access the disks on the storage subsystem until this is done. For IBM DS4000 storage subsystems, switch the LUN mappings back to their original controllers because they may have been redistributed to balance I/O load. Chapter 6. Dynamic operations 337
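The unconfigure and reconfigure steps in this procedure can also be driven from the Virtual I/O Server command line instead of the diagmenu panels. The following is a minimal sketch only; the virtual target device and adapter names (lpar1_rootvg, lpar2_rootvg, fcs0) are taken from the example above and will differ on your system:
$ rmdev -dev lpar1_rootvg -ucfg          # set a virtual target device to Defined
$ rmdev -dev lpar2_rootvg -ucfg
$ rmdev -dev fcs0 -recursive -ucfg       # set the adapter and its child devices to Defined
(replace the adapter with the PCI Hot Plug Manager as described in steps 3 to 6)
$ cfgdev                                 # reconfigure the adapter and the unconfigured devices
$ lsmap -all                             # verify that the virtual SCSI mappings are back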
  • 375. 338 IBM PowerVM Virtualization Managing and Monitoring
  • 376. 7 Chapter 7. PowerVM Live Partition Mobility PowerVM Live Partition Mobility allows for the movement of an active (running) or inactive (shut down) partition from one system to another with no application downtime. This results in higher system utilization, improved application availability, and energy savings. With PowerVM Live Partition Mobility, planned application downtime due to regular server maintenance can be a thing of the past. PowerVM Live Partition Mobility requires systems with POWER6 or newer processors running AIX or Linux operating systems and PowerVM Enterprise Edition. Live Partition Mobility is not supported for IBM i. For more information regarding Live Partition Mobility see IBM PowerVM Live Partition Mobility, SG24-7460. This chapter includes the following sections: PowerVM Live Partition Mobility requirements Managing a live partition migration Live Partition Mobility and Live Application Mobility© Copyright IBM Corp. 2012. All rights reserved. 339
  • 377. 7.1 PowerVM Live Partition Mobility requirements To prepare for PowerVM Live Partition Mobility, check the following requirements before attempting a partition migration.7.1.1 HMC requirements PowerVM Live Partition Mobility can include one or more HMCs: Both the source and destination systems are managed by the same HMC (or redundant HMC pair). In this case, the HMC must be at Version 7 Release 3.2 or later. The source system is managed by one HMC and the destination system is managed by a separate HMC. In this case, both the source HMC and the destination HMC must meet the following requirements: – The source HMC and the destination HMC must be connected to the same network so that they can communicate with each other. – The source HMC and the destination HMC must be at Version 7, Release 3.4. Use the lshmc command to display the HMC version: hscroot@hmc1:~> lshmc -V "version= Version: 7 Release: 3.4.0 Service Pack: 0 HMC Build level 20080929.1 ","base_version=V7R3.4.0 – A secure shell (SSH) connection must be set up correctly between the two HMCs. Run the following command from the source system HMC to configure the ssh authentication to the destination system HMC (note that 9.3.5.180 is the IP address of the destination HMC): hscroot@hmc1:~> mkauthkeys -u hscroot --ip 9.3.5.180 --g Enter the password for user hscroot on the remote host 9.3.5.180: Run the following command from the source system HMC to verify the ssh authentication to the destination system HMC: hscroot@hmc1:~> mkauthkeys -u hscroot --ip 9.3.5.180 --test340 IBM PowerVM Virtualization Managing and Monitoring
  • 378. 7.1.2 Common system requirements checklist The common system requirements are listed here: Both source and destination systems are POWER6-based systems or later. The PowerVM Enterprise Edition license code must be installed on both systems. Systems have a Virtual I/O Server installed with Version 1.5.1.1 or later. You can check this by running the ioslevel command on the Virtual I/O Server: $ ioslevel 2.1.0.0 POWER6-based systems must have a firmware level of 01Ex320 or later, where x is an S for BladeCenter, an L for Low End servers, an M for Midrange servers, or an H for High End servers. You can check this on the HMC by running the lslic command: hscroot@hmc1:~> lslic -m MT_A_p570_MMA_100F6A0 -t sys -F perm_ecnumber_primary 01EM320 All POWER7-based systems are supported. The systems must have the same logical memory block size. This can be checked using the Advanced System Management Interface (ASMI). At least one of the source and one of the destination Virtual I/O Servers must be set as a Mover Service Partition (MSP). You can check the Virtual I/O Server settings in their partition properties on the HMC. A Virtual I/O Server that is part of a Shared Storage Pool cluster can also be an MSP. Consideration: Setting a Virtual I/O Server as mover service partition automatically creates a Virtual Asynchronous Services Interface (VASI) adapter on this Virtual I/O Server. Both Virtual I/O Servers should have their clocks synchronized. See the Time reference in the Setting tab in the Virtual I/O Server properties on the HMC. 7.1.3 Destination system requirements checklist Check that the following requirements are met on your destination system: The amount of memory available on the destination system is greater than or equal to the amount of memory used by the mobile partition on the source system. Chapter 7. PowerVM Live Partition Mobility 341
  • 379.  If the mobile partition uses dedicated processors, the destination system must have at least this number of available processors.  If the mobile partition is assigned to a Shared-Processor Pool, the destination system must have enough spare entitlement to allocate it to the mobile partition in the destination Shared-Processor Pool.  The destination system must have at least one Virtual I/O Server that has access to all the network and storage resources used by the migrating partition.  The destination system must have a virtual switch configured with the same name as the source system.7.1.4 Migrating partition requirements checklist Check that the migrating partition is ready:  All of the migrating partitions I/O resources must be virtual and able to be configured through a Virtual I/O Server on the destination system. Virtual SCSI devices must be backed by SAN volumes.  The migrating partition can only use virtual Ethernet for network communication. Systems using IVE cannot be migrated.  The partition is not designated as a redundant error path reporting partition. Check this in the partition properties on the HMC. Changing this setting currently requires a reboot of the partition.  The partition is not part of an LPAR workload group. A partition can be dynamically removed from a group. Check this in the properties of the partition of the HMC.  The partition has a unique name. A partition cannot be migrated if any partition exists with the same name on the destination system.  The additional virtual adapter slots for this partition (slot ID higher or equal to 2) do not appear as required in the partition profile. Check this in the properties of the partition of the HMC.7.1.5 Active and inactive migrations checklist You can perform an active partition migration if the following requirements are met. If this is not the case, you can still run an inactive partition migration:  The partition is in the Running state.  The partition does not have any dedicated adapters.342 IBM PowerVM Virtualization Managing and Monitoring
  • 380. The partition does not use huge pages. Check this in the advanced properties of the partition on the HMC. The partition does not use the Barrier Synchronization Register. Check that the “number of BSR arrays” is set to zero (0) in the memory properties of the partition on the HMC. Changing this setting currently requires a reboot of the partition. The operating system must be at one of the following levels: – AIX 5.3 TL 7 or later – Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later – SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later 7.2 Managing a live partition migration In addition to these requirements, use a high speed Ethernet link between the systems involved in the partition migration. A minimum of 1 Gbps link is preferable between the Mover Service Partitions. There are no architected maximum distances between systems for PowerVM Live Partition Mobility. The maximum distance is dictated by the network and storage configuration used by the systems. Standard long-range network and storage performance considerations apply. 7.2.1 The migration validation The migration validation process verifies that the migration of a given partition from one server to another specified server meets all the compatibility requirements and therefore has a good chance of succeeding. Because validation is integrated into the migration wizard, you can also start directly with the Migrate operation. If problems are detected, they are reported the same way as for the validation operation. The migration operation will not proceed if any conditions are detected that might cause it not to be successful. 7.2.2 Validation and migration The migration operation uses a wizard to get the required information. This wizard is accessed from the HMC using the following steps: 1. Select the partition to migrate and select Operations  Mobility  Validate. Chapter 7. PowerVM Live Partition Mobility 343
  • 381. 2. In the Migration Validation wizard, fill in the remote HMC and Remote User fields. 3. Click Refresh Destination System to view the list of available systems on the remote HMC and select the destination system. 4. In the Profile Name field, you can type a profile name that differs from the names currently created for the mobile partition. This profile will be overwritten with the current partition configuration. Click Validate as shown in Figure 7-1. If you do not specify a profile name, the same name as the current profile will be used on the destination system. Figure 7-1 Partition Migration Validation344 IBM PowerVM Virtualization Managing and Monitoring
  • 382. 5. After the migration validation is successful, choose to migrate the partition by clicking Migrate as shown in Figure 7-2.Figure 7-2 Partition Migration Chapter 7. PowerVM Live Partition Mobility 345
  • 383. If you have already configured a dual Virtual I/O Server (VIOS) environment for serviceability on the source system, you need two or more VIOSs on the destination system for Live Partition Mobility. When you migrate a mobile partition (a virtual I/O client partition) in a dual VIOS environment by using the HMC GUI, you need to select which VIOSs will be used for the mobile partition on the destination system. Figure 7-3 shows the virtual storage assignments in the validation wizard, with two VIOSs selected on the destination system. In this example, the VIOS named p72vios1 is selected for slot ID 11 of the mobile partition, and the VIOS p72vios2 is selected for slot ID 21. Figure 7-3 Virtual Storage assignments selection You can also migrate a mobile partition from the source to the destination system by using the migrlpar HMC command line interface (CLI). If you migrate a mobile partition by using the HMC CLI, you can migrate it without specifying a destination VIOS pair. The HMC automatically makes the selection if you do not specify the destination VIOSs for Live Partition Mobility. If you need to specify the VIOSs on the destination system, use migrlpar with the -i or -f flag, as shown in Example 7-1. Example 7-1 HMC CLI migrlpar -i hscroot@hmc:~> migrlpar -o m -m "Source System" -t "Destination System" --id "mobile partition ID" --redundantvios 1 --mpio 1 -w 5 -i ""virtual_scsi_mappings=11/p72vios1/,21/p72vios2/",dest_msp_name=p72vios1,source_msp_name=p71vios1" For more detailed information, see IBM PowerVM Live Partition Mobility, SG24-7460, and the HMC Manual Reference Pages for migrlpar at: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7edm/migrlpar.html 346 IBM PowerVM Virtualization Managing and Monitoring
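The migrlpar command can also be used to validate a migration from the command line before the partition is actually moved. The following is a sketch only; the system names and partition ID are placeholders, and any problems found are reported in the same way as for the GUI validation and can be corrected as described in 7.2.3:
hscroot@hmc:~> migrlpar -o v -m "Source System" -t "Destination System" --id "mobile partition ID"
The -o v flag requests validation only; no migration is started.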
  • 384. 7.2.3 How to fix missing requirements In the validation summary window, you can check the missing mandatory requirements that were detected by selecting the Errors report. Additional information related to these errors is available by selecting the Detailed information report. It often helps to determine the precise origin of the missing requirements, as shown in Figure 7-4. Figure 7-4 Partition migration validation detailed information The most common missing requirements can be treated as shown in Table 7-1. Table 7-1 Missing requirements for PowerVM Live Partition Mobility
Partition validation error message reported: The HMC was unable to find a valid mover service partition. Correction: Check that a Virtual I/O Server on the source and the destination system has “Mover service partition” checked in its general properties. Check that both Virtual I/O Servers can communicate with each other through the network.
Partition validation error message reported: Can not get physical device location - vcd is backed by optical. Correction: The partition probably has access to the CD/DVD drive through a virtual device. Temporarily remove the mapping of the virtual CD/DVD drive on the Virtual I/O Server using rmdev. Chapter 7. PowerVM Live Partition Mobility 347
  • 385. Table 7-1 Missing requirements for PowerVM Live Partition Mobility (continued)
Partition validation error message reported: The migrating partition's virtual SCSI adapter xx cannot be hosted by the existing Virtual I/O Server (VIOS) partitions on the destination managed system. Correction: Check the Detailed information tab: If there is a message that mentions “Missing Begin Tag reserve policy mismatched”, check that the reserve policy of the storage disks being moved is set to no_reserve on the source and destination Virtual I/O Servers with a command similar to: echo “lsattr -El hdiskxxx" | oem_setup_env You can fix it with the command: chdev -dev hdiskxxx -attr reserve_policy=no_reserve If not, check in your SAN zoning that the destination Virtual I/O Server can access the same LUNs as the source. For IBM storage, you can check the LUNs you have access to by running the following command on the Virtual I/O Servers: echo "fget_config -Av" | oem_setup_env For other vendors, contact your representative.
For further information about setting up a system for PowerVM Live Partition Mobility, see IBM PowerVM Live Partition Mobility, SG24-7460. 7.3 Live Partition Mobility and Live Application Mobility Beginning with AIX Version 6, it is possible to group applications running on the same AIX image together with their disk data and network configuration into a configuration known as a Workload Partition (WPAR). Workload Partitions are migration capable. Given two running AIX images that share a common file system, the administrator can decide to actively migrate a workload between operating systems while still keeping the applications running. This is called Live Application Mobility. Live Application Mobility and Live Partition Mobility differ as follows: Live Application Mobility is a feature of the AIX operating system and will function on all systems that support AIX Version 6 or later. PowerVM Live Partition Mobility is a PowerVM feature that works for AIX and Linux operating systems that operate on POWER6 or later servers. This feature requires PowerVM Enterprise Edition.348 IBM PowerVM Virtualization Managing and Monitoring
  • 386. The differences between Live Application Mobility and PowerVM Live Partition Mobility are shown in Table 7-2. Table 7-2 PowerVM Live Partition Mobility versus Live Application Mobility
PowerVM Live Partition Mobility | Live Application Mobility
Requires SAN storage | Uses NFS-mounted file systems to access data
Requires POWER6 based systems or later | Requires POWER4 or later
Requires PowerVM Enterprise license | Requires the WPAR migration manager and the Application Mobility activation/license
Can move any supported OS | Can move the applications running in a WPAR on an AIX 6 or later system only
Moves the entire OS | Does not move the OS
Any application can run on the system | Restrictions apply to applications that can run in a WPAR
The resource allocations move with the migrated partition | The administrator might have to adapt the resources allocated to the source and destination partitions
Chapter 7. PowerVM Live Partition Mobility 349
  • 387. 350 IBM PowerVM Virtualization Managing and Monitoring
  • 388. 8 Chapter 8. Partition Suspend and Resume This chapter describes listing, adding, or removing devices in the reserved storage device pool. It also addresses shutting down a suspended partition, and recovering a suspended or resumed partition. Common validation error messages are also described. Tip: Partition Resume and Suspend are supported for IBM i V7R1 TR2 and HMC 7.3 or later. For more information, see the IBM Power Systems Hardware Information Center for Suspend and Resume requirements and configuration: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topi c=/p7hat/iphatphibreqs.htm Restriction: A reserved storage device pool must be used in the Partition Suspend and Resume capability on a PowerVM Standard Edition environment. It must also be used in a PowerVM Enterprise Edition environment with a partition not configured to use Active Memory Sharing.© Copyright IBM Corp. 2012. All rights reserved. 351
  • 389. This chapter includes the following sections: Listing volumes in the reserved storage device pool Adding volume to the reserved storage device pool Removing a volume from the reserved storage device pool Suspending a partition Shutting down a suspended partition Recovering a suspended or resumed partition Correcting validation errors352 IBM PowerVM Virtualization Managing and Monitoring
  • 390. 8.1 Listing volumes in the reserved storage device pool To list physical volumes in the reserved storage device pool by using the Hardware Management Console (HMC), perform these steps: On the HMC, select the managed system where the reserved storage device pool is located. Then click Configuration  Virtual Resources  Reserved Storage Device Pool Management as shown in Figure 8-1.Figure 8-1 Reserved storage device pool management access menu The list of devices in the reserved storage device pool is displayed as shown in Figure 8-2.Figure 8-2 Reserved storage device pool device list Chapter 8. Partition Suspend and Resume 353
  • 391. From Hardware Management Console command-line interface, run lshwres with the --rsubtype rsdev flag. This command shows the reserved storage device used to save suspension data for partition as shown in Example 8-1. Example 8-1 lshwres command output showing reserved storage device properties hscroot@hmc6:~> lshwres -r rspool -m POWER7_1-SN100EF5R --rsubtype rsdev device_name=hdisk11,vios_name=p71vios01,vios_id=1,size=51200,type=phys,state=Ina ctive,phys_loc=U78A0.001.DNWKFYH-P1-C2-T1-W500507630410412C-L4011401200000000,is _redundant=1,redundant_device_name=hdisk11,redundant_vios_name=p71vios02,redunda nt_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U78A0.001.DNWKFYH-P1-C3 -T1-W500507630410412C-L4011401200000000,lpar_id=none,device_selection_type=auto device_name=hdisk10,vios_name=p71vios01,vios_id=1,size=51200,type=phys,state=Ina ctive,phys_loc=U78A0.001.DNWKFYH-P1-C2-T1-W500507630410412C-L4011401100000000,is _redundant=1,redundant_device_name=hdisk10,redundant_vios_name=p71vios02,redunda nt_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U78A0.001.DNWKFYH-P1-C3 -T1-W500507630410412C-L4011401100000000,lpar_id=none,device_selection_type=auto device_name=hdisk20,vios_name=p71vios01,vios_id=1,size=102400,type=phys,state=In active,phys_loc=U78A0.001.DNWKFYH-P1-C2-T1-W500507630410412C-L4011400000000000,i s_redundant=1,redundant_device_name=hdisk20,redundant_vios_name=p71vios02,redund ant_vios_id=2,redundant_state=Inactive,redundant_phys_loc=U78A0.001.DNWKFYH-P1-C 3-T1-W500507630410412C-L4011400000000000,lpar_id=none,device_selection_type=auto8.2 Adding volume to the reserved storage device pool To add a physical volume to the reserved storage device pool by using the HMC, perform these steps: 1. On the HMC, select the managed system where the reserved storage device pool is located. Then click Configuration  Virtual Resources  Reserved Storage Device Pool Management as shown in Figure 8-1 on page 353.354 IBM PowerVM Virtualization Managing and Monitoring
  • 392. 2. Click Edit Pool as shown in Figure 8-3.Figure 8-3 Edit pool operation 3. Click Select Device(s) as shown in Figure 8-4.Figure 8-4 Reserved storage device pool management device Chapter 8. Partition Suspend and Resume 355
  • 393. 4. Select the device type, and optionally select the minimum and maximum device size, as shown in Figure 8-5. Then click Refresh to display the list of available devices. Figure 8-5 Reserved storage device pool management device list selection Important: The size of the volume must be at least as large as the maximum memory specified in the profile for the suspending partition. 356 IBM PowerVM Virtualization Managing and Monitoring
  • 394. 5. Select the devices to add to the reserved storage device pool as shown in Figure 8-6, then click OK. Figure 8-6 Reserved storage device pool management device selection Chapter 8. Partition Suspend and Resume 357
  • 395. 6. Review the list of devices in the reserved storage device pool as shown in Figure 8-7. Click OK to complete the device addition operation.Figure 8-7 Adding a device to the reserved storage device pool validation You can now list the devices as explained in section 8.1, “Listing volumes in the reserved storage device pool” on page 353.8.3 Removing a volume from the reserved storagedevice pool To remove a physical volume from the reserved storage device pool by using the HMC, perform these steps: 1. On the HMC, select the managed system where the reserved storage device pool is located. Then click Configuration  Virtual Resources  Reserved Storage Device Pool Management as shown in Figure 8-1 on page 353.358 IBM PowerVM Virtualization Managing and Monitoring
  • 396. 2. Click Edit Pool as shown in Figure 8-8.Figure 8-8 Reserved storage device pool management Chapter 8. Partition Suspend and Resume 359
  • 397. 3. Select the device you want to remove, then click Remove as shown in Figure 8-9.Figure 8-9 Reserved storage device pool management device 4. Review the list of devices in the reserved storage device pool as shown in Figure 8-10, then click OK to complete the device removal operation.Figure 8-10 Removing a device from the reserved storage device pool validation360 IBM PowerVM Virtualization Managing and Monitoring
  • 398. You can now list the devices and volumes in the reserved storage device pool. For more information, see 8.1, “Listing volumes in the reserved storage device pool” on page 353.8.4 Suspending a partition To suspend a running partition, perform these steps: 1. On the HMC, select the partition you want to suspend. Then click Operation  Suspend Operations  Suspend as shown in Figure 8-11. Figure 8-11 Starting the suspend operation 2. Select the VIO servers and click Suspend as shown in Figure 8-12. Figure 8-12 Options for suspend and resume Chapter 8. Partition Suspend and Resume 361
  • 399. Status per activity is shown in a separate window. A validation is done during this phase. The suspend status window is shown in Figure 8-13. Figure 8-13 Activity status window When all steps are completed, you receive a confirmation window as shown in Figure 8-14. Figure 8-14 Suspend and resume final status The partition is now suspended and has a status of Suspended, as shown in Figure 8-15. Figure 8-15 HMC operating status362 IBM PowerVM Virtualization Managing and Monitoring
  • 400. 8.5 Shutting down a suspended partition After it is in a suspended state, a partition can be changed with one of these operations: Resumed Returns partition to the state it was in before being suspended. Shutdown Invalidates the suspend state and moves the partition to a state of powered off. This operation has a similar impact as an immediate shutdown on the application hosted by the partition, and the partition itself. If the storage device that contains the partition state is available, all saved virtual server adapter configuration entries are restored. Migrated Allows a partition to be moved from one server to another. This function uses PowerVM Live Partition Mobility, which requires PowerVM Enterprise Edition. This section describes how to suspend and then shut down a suspended partition from the HMC command-line interface. Important: Avoid shutting down a suspended partition. In this case, resume the partition, then perform the shutdown on a running partition if possible. When a partition is suspended, there are two types of shutdowns: Normal HMC reconfigures all virtual server adapters at shutdown of suspended partition. Force Available if virtual server adapter reconfiguration faces an unrecoverable error. From Hardware Management Console command-line interface, perform these steps: 1. Run chlparstate command to suspend the partition p71ibmi08 as shown in Example 8-2. Example 8-2 Suspending partition p71ibmi08 from the HMC command line hscroot@hmc6:~> chlparstate -o suspend -m POWER7_1-SN100EF5R -p p71ibmi08 Chapter 8. Partition Suspend and Resume 363
  • 401. 2. Using the lssyscfg command, view the state of the partition p71ibmi08 as shown in Example 8-3. Example 8-3 Listing partition p71ibmi08 state from the HMC command line hscroot@hmc6:~> lssyscfg -r lpar -m POWER7_1-SN100EF5R -F name,state --filter "lpar_names=p71ibmi08" p71ibmi08,Suspended Tips: The suspend operation removes all virtual devices mapping from the Virtual I/O Servers for the suspended partition. The suspend operation removes all virtual server adapters used by the suspended partition from the Virtual I/O Servers. The physical volumes used as backing devices by the suspended partition are displayed as volumes that are available for use. 3. Using chlparstate command, shut down the suspended partition p71ibmi08 with the default normal option as shown in Example 8-4. Example 8-4 Shutting down and suspending a partition hscroot@hmc6:~> chlparstate -m POWER7_1-SN100EF5R -o shutdown -p p71ibmi08 4. Using lssyscfg command, view the state of the partition p71ibmi08 as shown in Example 8-5. Example 8-5 Verifying the state of the partition hscroot@hmc6:~> lssyscfg -r lpar -m POWER7_1-SN100EF5R -F name,state --filter "lpar_names=p71ibmi08" p71ibmi08,Not Activated Tips: The shutdown operation of a suspended partition with the Normal option recreates all virtual server adapters used by the suspended partition on the Virtual I/O Servers. The shutdown operation of a suspended partition with the Normal option recreates all virtual device mapping on the Virtual I/O Servers used by the suspended partition. If the partition is not a shared memory partition, all devices used to store partition suspend data are released.364 IBM PowerVM Virtualization Managing and Monitoring
  • 402. 8.6 Recovering a suspended or resumed partition This section describes how to recover a partition from a failed Suspend or Resume operation. The recover operation might have to be issued in the following circumstances: Suspend or Resume is taking a long time and the user ends the operation abruptly. The user is not able to cancel a Suspend or Resume operation successfully. Initiating a Suspend or Resume operation has resulted in an extended error that indicates that the partition state is not valid. When issuing a recover, the HMC determines the last successful step in the previous operation from progress data. Both suspend and resume store the operation progress on both the HMC and the hypervisor. Depending on the last successful step, the HMC either completes or rolls back the operation. Exception: If no progress data is available, the recover operation must be run with the force option. The HMC will recover as much as possible. To run the recover on the HMC, perform the following operations: 1. Select the partition to recover, then click Operations  Suspend Operations  Recover as shown in Figure 8-16. Figure 8-16 Recovering a suspended partition Chapter 8. Partition Suspend and Resume 365
  • 403. 2. Select the type of operation you want to recover from in the Target Operation menu. Then click OK as shown in Figure 8-17. Figure 8-17 Partition recover operation After completing the recover operation, the partition returns to a Running or a Suspended state. This state depends on the last successful step and the type of operation you are recovering from. 8.7 Correcting validation errors Table 8-1 shows a list of the most common validation errors and how to correct them. Table 8-1 Common Suspend and Resume validation errors
Message ID: HSC0A929. Validation error message: There is no non-redundant device available in the reserved storage device pool that can be used by this partition. This partition requires a device with a size of at least 4318 MB. Add a device of at least that size to the reserved storage device pool, then try the operation again. Correction: There is no device available in the reserved storage pool to run the Suspend operation. Add a physical volume with the required size to the reserved storage device pool. 366 IBM PowerVM Virtualization Managing and Monitoring
  • 404. Table 8-1 Common Suspend and Resume validation errors (continued)
Message ID: HSCLA930. Validation error message: There is no redundant device available in the reserved storage device pool that can be used by this partition. This partition requires a device with a size of at least 4318 MB. Add a device of at least that size to the reserved storage device pool, then try the operation again. Correction: There is no redundant device available in the reserved storage pool to run the Suspend operation. Add a redundant physical volume that can be seen by both Virtual I/O Servers in the reserved storage device pool.
Message ID: HSCLA27C. Validation error message: The operation to get the physical device location for adapter U8233.E8B.061AB2P-V1-C36 on the virtual I/O server partition P7_2_vios1 has failed. The partition command is: migmgr -f get_adapter -t vscsi -s U8233.E8B.061AB2P-V1-C36 -w 13857705808736681994 -W 13857705808736681995 -d 1 The partition standard error is: child process returned error. Correction: During a suspend operation in an NPIV environment, there is no SAN zoning of the virtual Fibre Channel WWPNs. Perform the SAN zoning of the virtual Fibre Channel WWPNs.
Message ID: HSCLA319. Validation error message: The migrating partition's virtual Fibre Channel client adapter 36 cannot be hosted by the existing Virtual I/O Server (VIOS) partitions on the destination managed system. To migrate the partition, set up the necessary VIOS host on the destination managed system, then try the operation again. Correction: The resume operation in an NPIV environment requires proper zoning of the WWPNs. Ensure both WWPNs are in the same zone.
Chapter 8. Partition Suspend and Resume 367
  • 405. 368 IBM PowerVM Virtualization Managing and Monitoring
  • 406. 9 Chapter 9. System Planning Tool This chapter describes how you can use the PC-based System Planning Tool (SPT) to create a configuration to be deployed on a system. When deploying the partition profiles, assigned resources are generated on the HMC or in IVM. The Virtual I/O Server operating system can be installed during the deployment process. In the scenario described in this chapter, the Virtual I/O Server, AIX, IBM i, and Linux operating systems are all installed using DVD or NIM. SPT is available as a download from the System Planning Tool website for no additional charge. The generated system plan can be viewed from the SPT on a PC, or directly on an HMC. After you save your changes, the configuration can be deployed to the HMC or IVM. For detailed information about the SPT, see the following address: http://www.ibm.com/systems/support/tools/systemplanningtool For further information about how to create and deploy a system plan, see IBM PowerVM Virtualization Introduction and Configuration, SG24-7940. This chapter includes the following sections: Sample scenario Preparation recommendation Planning the configuration with SPT Initial setup checklist© Copyright IBM Corp. 2012. All rights reserved. 369
  • 407. 9.1 Sample scenario This scenario shows the system configuration used for this book. Figure 9-1 shows the basic layout of partitions and the slot numbering of virtual adapters. An additional virtual SCSI server adapter with slot number 60 is added for the virtual tape and one client SCSI adapter with slot number 60 is added to the aix61 and aix53 partitions (not shown in Figure 9-1). Figure 9-1 The partition and slot numbering plan of virtual storage adapters (diagram showing the two Virtual I/O Servers, the AIX V6.1, AIX V5.3, IBM i 6.1, RHEL, and SLES client partitions, their virtual SCSI slot numbers, and the MPIO and mirroring paths through the SAN switches to the storage controllers) Tip: You are not required to have the same slot numbering for the server and client adapters: it simply makes it easier to keep track. 370 IBM PowerVM Virtualization Managing and Monitoring
  • 408. Figure 9-2 shows the basic layout of partitions and the slot numbering of virtual Ethernet adapters. Figure 9-2 The partition and slot numbering plan for virtual Ethernet adapters (diagram showing the two Virtual I/O Servers with their Shared Ethernet Adapters and control channel on PVID 90, and the AIX, IBM i, and Linux client partitions on PVID 1) 9.2 Preparation recommendation When you deploy a System Plan on the HMC, the configuration is validated against the installed system (adapter and disk features and their slot numbers, amount of physical and CoD memory, number/type of physical and CoD CPU and more). To simplify matching the SPT System Plan and the physical system, follow these steps: 1. Create a System Plan of the physical system on the HMC or IVM. 2. Export the System Plan to SPT on your PC and convert it to SPT format. Tip: If the System Plan cannot be converted, use the System Plan to manually create the compatible configuration. Chapter 9. System Planning Tool 371
  • 409. 3. Use this System Plan as a template and customize it to meet your requirements. 4. Import the completed SPT System Plan to the HMC or IVM and deploy it.9.3 Planning the configuration with SPT Figure 9-3 shows the Partition properties window where you can add or modify partition profiles. Notice the Processor and Memory tabs for setting those properties.Figure 9-3 The SPT Partition properties window A useful feature in SPT is the ability to edit virtual adapter slot numbers and check consistency for server-client slot numbers. As a general rule, increase the372 IBM PowerVM Virtualization Managing and Monitoring
  • 410. maximum number of virtual slots for the partitions. Figure 9-4 shows the VirtualSCSI window. Click Edit Virtual Slots to open the window.Figure 9-4 The SPT Virtual SCSI window Chapter 9. System Planning Tool 373
  • 411. 5. Figure 9-5 shows the Edit Virtual Slots window. Here the two Virtual I/O Servers are listed on the left side and the client partitions are listed on the right side. The maximum number of virtual adapters is 1024 for the Virtual I/O Servers. In the SCSI area, you can check that the server adapters from the Virtual I/O Servers match the client partitions.Figure 9-5 The SPT Edit Virtual Slots window Note: If your configuration includes the use of Virtual Fibre Channel, their slot numbers can also be edited on this window.374 IBM PowerVM Virtualization Managing and Monitoring
  • 412. 6. When all elements of the System Plan are completed, you can import it to the HMC to be deployed. Figure 9-6 shows the HMC with the imported SPT System Plan ready to be deployed. Figure 9-6 System Planning Tool ready to be deployed 7. Select Deploy as shown in Figure 9-7. Figure 9-7 Deploy System Plan 8. Select Deploy System Plan to start the Deploy System Plan Wizard, and follow the directions. Figure 9-8 shows the first menu window where you can select the System Plan and target managed system. Figure 9-8 Deploy System Plan Wizard Chapter 9. System Planning Tool 375
  • 413. 9. The next step is validation. The selected System Plan is validated against the target Managed System. If the validation Status reports Failed, the cause is usually found in the Validation Messages. The most common reason for failure is a mismatch between SPT configuration and the Managed System configuration. The validation window is shown in Figure 9-9.Figure 9-9 The System Plan validation window376 IBM PowerVM Virtualization Managing and Monitoring
  • 414. 10.Next, the partition profiles to be deployed are selected as shown in Figure 9-10. In this case, all partition profiles will be deployed.Figure 9-10 Partitions to Deploy Chapter 9. System Planning Tool 377
  • 415. 11.Next window shows the Deployment Step Order as shown in Figure 9-11. Click Next to continue to the deployment progress steps.Figure 9-11 The Deployment Steps Important: The HMC must be prepared with the correct resources for operating system installation. You can load the Virtual I/O Server operating system onto the HMC from DVD or NIM using the OS_install command. You can define it as a resource using the defsysplanres command, or using the HMC graphical user interface by clicking HMC Management  Manage Install resources.378 IBM PowerVM Virtualization Managing and Monitoring
  • 416. 12.When the preparation steps have been completed, click Deploy to start the deployment. The deployment progress is logged as shown in Figure 9-12.Figure 9-12 The Deployment Progress window Chapter 9. System Planning Tool 379
  • 417. Figure 9-13 shows the partition profiles when deployed on the HMC. These partitions are now ready for installation of the operating systems.Figure 9-13 Partition profiles deployed on the HMC All profiles are created with physical and virtual adapter assignments. In this scenario, the operating system for the Virtual I/O Servers or any of the client partitions was not installed in the deployment process. After the System Plan has been deployed and the configuration has been customized, a System Plan should be created from the HMC. The HMC generated System Plan provides excellent documentation of the installed system. This System Plan can also be used as a backup of the managed system configuration. Important: The first page of the System Plan might be marked as follows: This system plan is not valid for deployment. This means that it cannot be used to restore the configuration. In case a System Plan cannot be generated using the HMC graphical user interface, you can use the following HMC command: HMC restricted shell> mksysplan -m <managed system> -f <filename>.sysplan --noprobe380 IBM PowerVM Virtualization Managing and Monitoring
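In addition to the Deploy System Plan wizard, a System Plan that has been imported to the HMC can also be listed and deployed from the HMC command line. The following is a sketch only; the file and system names are placeholders and only the most basic flags are shown, so check the lssysplan and deploysysplan man pages on the HMC for the full syntax:
hscroot@hmc1:~> lssysplan
hscroot@hmc1:~> deploysysplan -f <filename>.sysplan -m <managed system>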
  • 418. 9.4 Initial setup checklist This section contains a high level listing of common steps for the initial setup of a new system using SPT. Customize the list to fit your environment. 1. Make a System Plan from the HMC of the new system. Delete the pre-installed partition if the new system comes with such a partition. This System Plan is a baseline for configuring the new system. It will have the adapter’s slot assignment, CPU, and memory configurations. 2. Export the System Plan from the HMC into SPT. In SPT, the file must be converted to SPT format. 3. Complete the configuration as much as possible in SPT.  Add one Virtual I/O Server partition if using virtual I/O.  Add one more Virtual I/O Server for a dual configuration, if required. Dual Virtual I/O Server provides higher serviceability.  Add the client partition profiles.  Assign CPU and memory resources to all partitions.  Create the required configurations for storage and network in SPT.  Add virtual storage as local disks or SAN disks.  Configure SCSI connections for MPIO or mirroring if you are using a dual Virtual I/O Server configuration.  Configure virtual networks and SEA for attachment to external networks.  For a dual Virtual I/O Server configuration, configure SEA failover, or Network Interface Backup (NIB) as appropriate for virtual network redundancy.  Assign virtual IVE network ports if an IVE adapter is installed.  Create a virtual server adapter for virtual DVD and for virtual tape if a tape drive is installed.  Apply your slot numbering structure according to your plan. 4. Import the SPT System Plan into the HMC and deploy it to have the profiles generated. Alternatively, profiles can be generated directly on the HMC. Chapter 9. System Planning Tool 381
  • 419. 5. If using SAN disks, create and map them to the host or host group of the Fibre Channel adapters.  If using Dual Virtual I/O Servers, the reserve_policy must be changed from single_path to no_reserve.  SAN disks must be mapped to all Fibre Channel adapters that will be target in Partition Mobility. 6. Install the first Virtual I/O Server from DVD or NIM.  Upgrade the Virtual I/O Server if updates are available.  Mirror the rootvg disk.  Create or install SSH keys. The SSH subsystem is installed in the Virtual I/O Server by default.  Configure time protocol services.  Add users.  Set the security level and firewall settings if required. 7. Configure an internal network connected to the external network by configuring a Shared Ethernet Adapter (SEA).  Consider adding a separate virtual adapter to the Virtual I/O Server to carry the IP address instead of assigning it to the SEA. 8. Create a backup of the Virtual I/O Server to local disk by using the backupios command. 9. Map disks to the client partitions with the mkvdev command.  Map local disks or local partitions.  Map SAN disks.  Map SAN NPIV disks. 10.Map the DVD drive to a virtual DVD for the client partitions by using the mkvdev command. 11.If available, map the tape drive to a virtual tape drive for the client partitions by using the mkvdev command. 12.Add a client partition to be a NIM server to install the AIX and Linux partitions. If a NIM server is already available, skip this step.  Boot a client partition to SMS and install AIX from the virtual DVD.  Configure NIM on the client partition.  Let the NIM resources reside in a separate volume group. The rootvg volume group should be kept as compact as possible.382 IBM PowerVM Virtualization Managing and Monitoring
  • 420. 13.Copy the base mksysb image to the NIM server and create the required NIM resources.14.If using dual Virtual I/O Servers, perform a NIM install of the second Virtual I/O Server from the base backup of the first Virtual I/O Server. If a single Virtual I/O Server is used, go directly to step number 20.15.Configure the second Virtual I/O Server.16.Map disks from the second Virtual I/O Server to the client partitions.17.Configure SEA failover for network redundancy on the first Virtual I/O Server.18.Configure SEA failover for network redundancy on the second Virtual I/O Server.19.Test that SEA failover is operating correctly.20.Install the operating system on the client partitions using NIM or the virtual DVD.  Configure NIB if this is used for network redundancy.  If using MPIO, change the hcheck_interval parameter with the chdev command to have the state of paths updated automatically.  Test that NIB failover is configured correctly in client partitions if NIB is used for network redundancy.  Test mirroring if this is used for disk redundancy.21.Create a system backup of both Virtual I/O Servers using the backupios command.22.Document the Virtual I/O Server environment.  List virtual SCSI, NPIV, and network mappings with the lsmap command.  List network definitions.  List security settings.  List user definitions.23.Create a system backup of all client partitions.24.Create a System Plan of the installed configuration from the HMC as documentation and backup.25.Save the profiles on the HMC. Select the Managed System and click Configuration  Manage Partition Data  Backup. Enter the name of your profile backup. This backup is a record of all partition profiles on the Managed System.26.Back up HMC information to DVD, to a remote system, or to a remote site. Click HMC Management  Back up HMC Data. In the menu, select a target Chapter 9. System Planning Tool 383
  • 421. for the backup and follow the provided instructions. The backup contains all HMC settings such as user, network, security and profile data. 27.Start collecting performance data. It is valuable to collect long-term performance data to have a baseline of performance history.384 IBM PowerVM Virtualization Managing and Monitoring
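As an illustration of steps 5, 9, and 20 in the checklist above, the following commands show the reserve policy change, the mapping of a SAN disk to a client partition, and the MPIO health check setting on the AIX client. This is a sketch only; the disk, adapter, and virtual target device names (hdisk2, vhost0, lpar1_rootvg, hdisk0) are placeholders that will differ on your system:
$ chdev -dev hdisk2 -attr reserve_policy=no_reserve      (on each Virtual I/O Server, step 5)
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpar1_rootvg (map the disk to the client, step 9)
# chdev -l hdisk0 -a hcheck_interval=60 -P               (on the AIX client partition, step 20; takes effect after a reboot)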
  • 422. 10 Chapter 10. Automated management This chapter provides an overview of how to automate and streamline the management of partitions with the HMC. The following operations are discussed: Automating remote operations using the HMC command line interface Scheduling jobs on a Virtual I/O Server The number of commands available on the HMC makes it impractical to document them all here; this chapter is designed to help you configure the interface and get you started with common tasks. This chapter includes the following sections: 10.1, “Using System Profiles” on page 386 10.2, “Using the HMC command line interface” on page 387 10.3, “Scheduling jobs on the Virtual I/O Server” on page 393© Copyright IBM Corp. 2012. All rights reserved. 385
  • 423. 10.1 Using System Profiles You can make the startup of the partitions easier by placing them in a system profile and starting this system profile. These system profiles contain logical partitions and an associated partition profile to use. The menu to create a system profile is shown in Figure 10-1. To access the menu select a Managed System, click Configuration  Manage System Profiles. In this menu you can select New to add a system profile. Give the system profile a name and select the partitions to be included in the profile.Figure 10-1 Creating a system profile on the HMC386 IBM PowerVM Virtualization Managing and Monitoring
  • 424. Remember: When a Virtual I/O Server is part of a system profile, the system will automatically start this partition first. For more information about system profiles, see the IBM Systems Hardware Information Center.10.2 Using the HMC command line interface In an environment with numerous partitions or servers, it is convenient to perform operations using the HMC Command Line Interface (CLI) instead of the Graphical User Interface (GUI). It is possible to perform almost all operations from the command line that can be done in the graphical interface. Using the CLI enables scripting, can reduce human error when making numerous changes, and allows for the efficient repetition of common tasks. From a system with an SSH client, you can either login interactively to the HMC, or issue remote commands to the HMC in the same manner you would if accessing a remote Unix or Linux server.10.2.1 Configuring the Secure Shell interface You must use the ssh protocol to access the HMC command line interface remotely. In addition, remote command execution must be enabled on the HMC. It is found in the HMC Management panel as Remote Command Execution. Check the box Enable remote command execution using the ssh facility as shown in Figure 10-2. Figure 10-2 The HMC Remote Command Execution menu Chapter 10. Automated management 387
  • 425. 10.2.2 Client configuration Any SSH client program can be used to connect to the HMC. This section shows a few tips that can make remote administration easier when using the openSSH client from a Unix or Linux based system. Host specific client customizations It is unlikely that you will be logged into your workstation or central server with the same username as you would use to access the HMC. This means that every time you invoke an ssh connection to the HMC, you need to specify the username to use on the command line using the -l flag or username@host syntax. Failure to do so results in the ssh client trying to use your current username on the HMC. Example 10-1 shows the default behavior of the ssh client: the connection is attempted using the username of the local user, in this case the user root. The second connection uses the username@host syntax. Example 10-1 The default behavior of ssh [root@Power7-2-RHEL ~]# ssh hmc9 root@hmc9's password: [root@Power7-2-RHEL ~]# ssh hscroot@hmc9 hscroot@hmc9's password: You can specify host specific parameters in the ~/.ssh/config file to tailor this behavior. Example 10-2 shows that we have configured our configuration file so that all connections to systems that have host names beginning with hmc are to use the username hscroot. Now the username doesn't need to be specified every time a connection is attempted. Example 10-2 Using host specific options [root@Power7-2-RHEL ~]# cat ~/.ssh/config Host hmc* user hscroot [root@Power7-2-RHEL ~]# [root@Power7-2-RHEL ~]# ssh hmc9 Password: Last login: Thu Dec 9 21:35:17 2010 from 172.16.254.38 hscroot@hmc9:~> See the ssh_config man page for more customizable parameters. 388 IBM PowerVM Virtualization Managing and Monitoring
  • 426. Public key authenticationFor scripting purposes or your own convenience, you might want to configureyour SSH client to use SSH public key authentication. This allows you to log inremotely without being prompted for a password, while still providing a stronglevel of security.The procedure for this is available in the IBM Systems Hardware InformationCenter at:http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/ipha1/settingupsecurescriptexecution.htmExample 10-3 shows is a working example of the configuration.Example 10-3 Configuring SSH public key authentication[root@Power7-2-RHEL ~]# ssh-keygenGenerating public/private rsa key pair.Enter file in which to save the key (/root/.ssh/id_rsa):Created directory /root/.ssh.Enter passphrase (empty for no passphrase):Enter same passphrase again:Your identification has been saved in /root/.ssh/id_rsa.Your public key has been saved in /root/.ssh/id_rsa.pub.The key fingerprint is:e0:a6:74:65:84:49:e6:a9:ab:31:3a:a0:6c:5f:9d:fb root@Power7-2-RHEL[root@Power7-2-RHEL ~]#[root@Power7-2-RHEL ~]# scp .ssh/id_rsa.pub hmc9:Password:id_rsa.pub100% 400 0.4KB/s 00:00[root@Power7-2-RHEL ~]# ssh hmc9Password:Last login: Fri Dec 10 10:30:32 2010 from 172.16.20.174hscroot@hmc9:~> KEY=`cat id_rsa.pub`hscroot@hmc9:~> mkauthkeys -a "$KEY"hscroot@hmc9:~> exitexitConnection to hmc9 closed.[root@Power7-2-RHEL ~]#[root@Power7-2-RHEL ~]# ssh hmc9 ls -altotal 56drwxr-xr-x 5 hscroot hmc 4096 Dec 10 10:35 .drwxr-xr-x 5 root root 4096 Nov 29 16:37 ..-rw------- 1 hscroot hmc 18111 Dec 10 10:35 .bash_history-r-xr-xr-x 1 root root 94 Nov 16 13:07 .bash_profile Chapter 10. Automated management 389
  • 427. -r-xr-xr-x 1 root root 327 Nov 16 13:07 .bashrc drwxr-xr-x 2 hscroot users 4096 Jun 2 2009 .fonts drwxr-xr-x 2 hscroot users 4096 Jun 2 2009 .mozilla drwxr-xr-x 2 root hmc 4096 Jun 2 2009 .ssh -rw-r--r-- 1 hscroot hmc 37 Nov 16 13:08 .version -rw-r--r-- 1 hscroot hmc 400 Dec 10 10:35 id_rsa.pub -rw-r--r-- 1 root root 0 Dec 2 16:30 jobsMonitorThread.txt Running non-interactive commands The SSH client is capable of passing commands to a remote SSH server in a non-interactive manner. This is useful for scripting. Example 10-4 shows a simple example of a non-interactive command. Example 10-4 Running a non-interactive command [root@Power7-2-RHEL ~]# /usr/bin/ssh hmc9 lssyscfg -r sys -F name POWER7_2-061AB2P POWER7_1-061AA6P There are two points to notice in the previous example: The full path to the SSH client is used. This is generally a good practice to ensure you are invoking the correct client and not an alias or similarly named file in your path. The remote command is enclosed in single quotes. This is sufficient for basic commands because the shell on the local system will not attempt to interpret anything between the quotes. When scripting you might find that you need to use double quotes or even escape quotes with the operator depending on which shell you want to interpret the commands. For more information about this, see any shell programming book or there is plenty of information available on the World Wide Web.10.2.3 Initial login and shell conventions After you’ve connected to the HMC, you will notice the interface is a restricted version of the bash shell. A few tips for first time users: Pressing the Tab key twice will show all the commands available. Command names follow the regular IBM naming convention for Unix interfaces: commands starting with ls list information, and commands starting with ch change parameters.390 IBM PowerVM Virtualization Managing and Monitoring
  • 428. There are two methods to get help on the command line: either run the command with no parameters, or use the man page. Most of the ls commands provide a lot of information. The -F flag is useful in limiting the fields in the result set for readability. The -F flag allows you to use your own delimiter when specifying fields. If you use colons in your command, the output will be delimited with semicolons. This is useful for scripting because some results include commas in the result data and this can make parsing difficult if you have used commas as your delimiter.10.2.4 Basic reporting We will now cover common commands to get you started. The following command lists all managed systems visible to the HMC: lssyscfg -r sys Most ls commands require the name of a managed system. You can use the -F flag to limit the output to just the name field. This is one command that is worth remembering because you will use it a lot. Where you see <managed system name> in further examples, a name from this output is required: lssyscfg -r sys -F name The following command lists all the partitions in a managed system: lssyscfg -r lpar -m <managed system name> To list all of the partitions visible to an HMC, use a simple for loop: for i in `lssyscfg -r sys -F name` ; do lssyscfg -r lpar -m ${i} -F name; done The following command lists all the physical adapters in a managed system: lshwres -m <managed system name> -r io --rsubtype slot -F unit_phys_loc,phys_loc,description,lpar_name10.2.5 Modifying the power state of partitions and systems Changing the power state of partitions and servers is done using the chsysstate command. In systems that support suspend and resume, the chlparstate command is used to manage these capabilities. It is possible to shutdown a partition using the chlparstate command. To power on a system to partition standby, run the following command, where the managed system name is the name of the server as shown on the HMC: Chapter 10. Automated management 391
  • 429. chsysstate -m <managed system name> -o onstandby -r sys To monitor the status of the server startup, use the lsrefcode command and check the LED status codes: lsrefcode -r sys -m <managed system name> -F refcode Execute the following command to activate all partitions in the System Profile named all_lpars. When there are Virtual I/O Servers in the System Profile, these will automatically be activated before client partitions. If client partitions appear to be started first, they will wait for the Virtual I/O Servers to be started. chsysstate -m <managed system name> -o on -r sysprof -n all_lpars Run the following command to shut down a partition immediately: chsysstate -m <managed system> -r lpar -o shutdown -n <lpar name> --immed10.2.6 Modifying profiles Modifications to profiles is done with the chsyscfg command. Commands to modify profiles can become quite long. However, when you are familiar with the syntax it is efficient to use the command line, especially if multiple updates are required. The man page provides more examples and a detailed syntax guide. Example 10-5 shows how to change the Normal profile on partition 9 in order to: Decrease the minimum processing units by 0.1. Set the desired processing units to 0.2. Increase the maximum processing units by 0.2. Example 10-5 Profile modification chsyscfg -r prof -m POWER7_2-061AB2P -i "name=Normal,lpar_id=9,min_proc_units-=0.1,desired_proc_units=0.2,max_proc_unit s+=0.2"10.2.7 Dynamic LPAR operations The chhwres command is used to perform dynamic LPAR operations. The man page has many examples of how to use the chhwres command. A few are listed here.392 IBM PowerVM Virtualization Managing and Monitoring
  • 430. Example 10-6 shows how to increase the memory of the partition with ID 1 by 128 MB, with a time-out of 10 minutes. Example 10-6 Memory dynamic operation chhwres -r mem -m <managed system name> -o a --id 1 -q 128 -w 10 Example 10-7 shows how to add a virtual Ethernet adapter with a port VLAN ID of 4 to the partition with ID 3. The adapter also has the trunk flag enabled and trunks VLAN 5 and 6 with a priority of 1. Example 10-7 Virtual adapter dynamic operation chhwres -r virtualio -m POWER7_2-061AB2P -o a --id 3 --rsubtype eth -a "ieee_virtual_eth=1,port_vlan_id=4,"addl_vlan_ids=5,6",is_trunk=1,trunk_priority=1" 10.3 Scheduling jobs on the Virtual I/O Server Starting with Virtual I/O Server Version 1.3, the crontab command is available to enable you to submit, edit, list, and remove cron jobs. A cron job is a command run by the cron daemon at regularly scheduled intervals, such as system tasks, nightly security checks, analysis reports, and backups. With the Virtual I/O Server, a cron job can be submitted by specifying the crontab command with the -e flag. The crontab command invokes an editing session that enables you to modify the padmin user's crontab file and create entries for each cron job in this file. Tip: When scheduling jobs, use the padmin user's crontab file. You cannot create or edit other users' crontab files. When you finish creating entries and exit the file, the crontab command copies it into the /var/spool/cron/crontabs directory and places it in the padmin file. The following syntax is available to the crontab command: crontab [-e padmin | -l padmin | -r padmin | -v padmin] -e padmin This edits a copy of the padmin's crontab file. When editing is complete, the file is copied into the crontab directory as the padmin's crontab file. -l padmin This lists the padmin's crontab file. Chapter 10. Automated management 393
10.3 Scheduling jobs on the Virtual I/O Server

Starting with Virtual I/O Server Version 1.3, the crontab command is available to enable you to submit, edit, list, and remove cron jobs. A cron job is a command run by the cron daemon at regularly scheduled intervals, such as system tasks, nightly security checks, analysis reports, and backups.

With the Virtual I/O Server, a cron job can be submitted by specifying the crontab command with the -e flag. The crontab command invokes an editing session that enables you to modify the padmin user's crontab file and create entries for each cron job in this file.

Tip: When scheduling jobs, use the padmin user's crontab file. You cannot create or edit other users' crontab files.

When you finish creating entries and exit the file, the crontab command copies it into the /var/spool/cron/crontabs directory and places it in the padmin file. The following syntax is available for the crontab command:

crontab [ -e padmin | -l padmin | -r padmin | -v padmin ]

-e padmin   Edits a copy of the padmin user's crontab file. When editing is complete, the file is copied into the crontab directory as the padmin user's crontab file.
-l padmin   Lists the padmin user's crontab file.
-r padmin   Removes the padmin user's crontab file from the crontab directory.
-v padmin   Lists the status of the padmin user's cron jobs.
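As an illustration of what the padmin crontab file might contain, the following sketch schedules a nightly viosbr configuration backup and a weekly backupios mksysb backup. These entries are assumptions for illustration only; adjust the schedules, target file names, and the ioscli path to your environment and Virtual I/O Server level.

   # Fields: minute  hour  day-of-month  month  weekday  command
   # Save the Virtual I/O Server configuration every night at 02:00.
   0 2 * * * /usr/ios/cli/ioscli viosbr -backup -file nightly_cfg

   # Create a mksysb backup to a file every Sunday at 03:00.
   0 3 * * 0 /usr/ios/cli/ioscli backupios -file /home/padmin/vios_backup.mksysb -mksysb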
Chapter 11. High-level management

This chapter describes the IBM Systems Director high-level management tool. This chapter contains the following sections:

- Systems Director overview
- IBM Systems Director installation on AIX
- Log on to IBM Systems Director
- Preparing managed systems
- Discover managed systems
- Collect inventory data
- View Managed resources
- Power Systems Management summary
- IBM Systems Director VMControl plug-in summary
- Manage Virtual I/O Server with IBM Systems Director
- IBM Systems Director Active Energy Manager plug-in
11.1 Systems Director overview

This section provides an overview of IBM Systems Director, a short description of how to install it on AIX, and how to get started with discovering and managing resources.

IBM Systems Director 6.3 is a platform management foundation that streamlines the way physical and virtual systems are managed across a multi-system environment. Leveraging industry standards, IBM Systems Director supports multiple operating systems and virtualization technologies across IBM and non-IBM platforms, including PowerVM, KVM, z/VM®, VMware, and Windows Server 2008 x64 Editions with the Hyper-V role enabled.

IBM Systems Director is an easy-to-use, point-and-click, simplified management solution. Through a single user interface, IBM Systems Director provides systems management personnel with a single point of control, helping reduce IT management complexity and cost. It uses consistent views for visualizing managed systems and determines how these systems relate to one another while identifying their individual status, thus helping to correlate technical resources with business needs.

The IBM Systems Director web and command-line interfaces provide a consistent interface focused on these common tasks:

- Discovering, navigating, and visualizing servers, storage, and network resources, with a detailed inventory and relationships to the other network resources.
- Notifying users of problems that occur on systems, with the ability to navigate to the source of the problem.
- Notifying users when systems need updates, and distributing and installing updates on a schedule.
- Analyzing real-time data for systems, and setting critical thresholds that notify the administrator of emerging problems.
- Configuring settings of a single system, and creating a configuration plan that can apply those settings to multiple systems.
- Updating Systems Director and installed plug-ins to add new features and function to the base capabilities.
- Managing the life cycle of virtual resources.

Additional information can be found in the IBM Systems Director Information Center at:
http://publib.boulder.ibm.com/infocenter/director/pubs/index.jsp
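Many of these tasks can also be driven from the Systems Director command-line interface (smcli) on the management server. The following is a minimal sketch, not taken from this book: the IP address, profile, and system name are placeholders, and the option flags shown are assumptions that can vary between Systems Director releases, so confirm them with the command help on your server.

   # Discover an endpoint by IP address (the address is a placeholder).
   smcli discover -i 10.1.1.50

   # List the systems that Systems Director currently knows about.
   smcli lssys

   # Collect inventory for a discovered system (profile and system name
   # are illustrative assumptions).
   smcli collectinv -p "All Inventory" -n vios1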
11.1.1 Plug-ins included with IBM Systems Director

Base plug-ins (also called managers) provided with IBM Systems Director deliver core capabilities to manage the full life cycle of IBM and non-IBM server, storage, network, and virtualization systems. These are:

Discovery Manager
   Discovers virtual and physical systems and related resources, and collects inventory data about the hardware and software installed on systems.
Status Manager
   Provides health status, alerts, and monitoring of system resources, with automatic notifications for hardware events or exceeded thresholds.
Update Manager
   Notifies, downloads, distributes, and installs updates for systems.
Automation Manager
   Provides tools to notify an administrator or run predefined tasks automatically when certain events occur.
Configuration Manager
   Configures systems after installation by applying templates to servers, storage, and network resources.
Remote Access Manager
   Provides tools that support running and monitoring applications and services on remote systems. Also gives access to a remote console, a command line, and file transfer features on target systems.
Storage Management
   Lifecycle management for selected IBM System Storage and EMC storage systems, including discovery, health and status monitoring, configuration, updates, and virtualization.
Network Management
   Management functions for network devices, including discovery, inventory, health and status monitoring, and configuration.
Chassis Management
   Lifecycle management of your IBM BladeCenter chassis and related resources, including discovery, health and status monitoring, configuration, updates, and virtualization.
IBM System x® Management
   Lifecycle management for modular System x systems and related resources, including discovery, health and status monitoring, configuration, updates, and virtualization.
Power Systems Management
   Lifecycle management for IBM Power Systems and HMC or IVM platform managers, including discovery, health and status monitoring, configuration, updates, and virtualization.
IBM System z® Management
   Discovers System z hosted virtual servers and provides status information about them.

11.1.2 Plug-ins for Systems Director

Systems Director allows you to extend the base platform with additional plug-ins. Some are preinstalled but not activated; others have to be installed separately. The plug-ins can be used free of charge for 90 days after activation; after that, a license must be purchased and installed.

VMControl
   Installed by default, it provides advanced management of virtual environments across multiple virtualization technologies and hardware platforms, including customization and deployment of virtual images (VMControl Standard edition) and management of virtual workloads in system pools (VMControl Enterprise edition).
Active Energy Manager
   Provides advanced energy and thermal monitoring of IBM servers and IBM BladeCenter products, plus a range of intelligent data center devices. Active Energy Manager collects historical power consumption and temperature data for all supported devices and provides customized reporting. It also supports power policies that cap or statically or dynamically manage power consumption and, in certain circumstances, performance.
Storage Control
   Integrated management, based on IBM Tivoli Storage Productivity Center, of an expanded set of storage subsystems and Fibre Channel switches. For a complete list of supported devices, check the Information Center link provided above.
Network Control
   Discovery, inventory collection, and monitoring of network devices, including launching vendor applications for configuration of network devices.
AIX Profile Manager
   Manages the configuration of multiple remote AIX systems.
Service and Support Manager
   Automatically detects serviceable hardware problems on your monitored endpoint systems and collects supporting data for them. Provides call-home support through the integrated IBM Electronic Service Agent™.
PowerHA SystemMirror
   Manages PowerHA SystemMirror capabilities through a browser interface.
WPAR Manager
   Provides a centralized point of control for managing AIX workload partitions.

There are three main components of a Systems Director 6.3 deployment:

Systems Director Server
   Acts as the central management server. It can be deployed on AIX, Windows Server 2003, Windows Server 2008, Red Hat, or SUSE Linux. Starting with Version 6.3, IBM Systems Director ships with IBM DB2 ESE V9.7 FP4 as the managed database, replacing Apache Derby as the default database. Oracle or SQL Server can also be used as managed databases. Migration from IBM Systems Director 6.2.x is supported and is performed automatically where Apache Derby was used in the older version. If the managed database is Oracle or Microsoft SQL Server, the default migration behavior is to continue using the existing database, but it can be changed to migrate to managed IBM DB2.
Management Console
   Provides a common web browser-based interface that gives administrators the full set of Systems Director tools and functions for all supported platforms.
Common and Platform Agents
   Reside on managed servers running supported operating systems. Common Agent enables use of the full suite of Systems Director 6.3 services. Platform Agent implements a subset of these and provides a smaller-footprint option for servers that perform limited functions or are equipped with less powerful processors. Agentless support is provided for x86 servers that run older versions of Windows or Linux operating systems, along with other devices that conform to the Distributed Component Object Model (DCOM), Secure Shell (SSH), and Simple Network Management Protocol (SNMP) specifications.

A single Systems Director server can manage and monitor thousands of physical and logical endpoints, whether agentless or installed with Common or Platform Agents. There is no limit to the number of SNMP devices that can be managed. For large data center environments, Systems Director can be set up as a hierarchical management system in which a Systems Director global server manages the data and status of multiple Systems Director domain servers.
The overall IBM Systems Director 6.3 management topology is summarized in Figure 11-1.

Figure 11-1   IBM Systems Director management topology (management consoles connect through the web interface to a Systems Director global server, which manages multiple Systems Director domain servers; the servers run on AIX, Linux, or Windows)

IBM Systems Director is provided at no additional charge for use on IBM systems. You can purchase additional IBM Systems Director Server licenses for installation on non-IBM servers. To extend IBM Systems Director capabilities, several optional plug-ins (see 11.1.2, "Plug-ins for Systems Director" on page 398) can be purchased and integrated. Systems Director also provides tight upward integration with IBM Tivoli products such as Tivoli Monitoring, Tivoli Common Reporting, and Tivoli Application Dependency Discovery Manager, and with other third-party enterprise service management solutions.

For further information, see the following IBM Systems Director site:
http://www.ibm.com/systems/software/director
11.1.3 IBM Systems Director editions

IBM Systems Director 6.3 is delivered in three editions that provide levels of systems management capability to suit different needs, from small businesses to large enterprise customers. Table 11-1 summarizes the capabilities of the IBM Systems Director editions, ranging from the no-charge Express edition, through the Standard edition, to the Enterprise edition, which provides the highest level of systems management capability.

Table 11-1   IBM Systems Director editions

Feature                                                   Express  Standard  Enterprise
Visualize physical/virtual system relationships           yes      yes       yes
Monitor system health                                     yes      yes       yes
Provide threshold and error alerts                        yes      yes       yes
Update operating system and firmware                      yes      yes       yes
Simplify deployment with virtual images                            yes       yes
Control energy use within existing capacity                        yes       yes
Monitor network system health with servers and storage             yes       yes
Automate configuration and placement for new workloads                       yes
Manage workload availability end-to-end (systems pools)                      yes
Understand capacity                                                           yes
Analyze and report historical performance                                    yes

11.1.4 Choosing the