Midwest POWER Systems Newsletter
Volume 1, Issue 2, July 2009
Devoted to our POWER Systems (p, i) customers in OH, PA, KY, IN, MI

In this Issue
• Power Systems Tech University
• New JS23/JS43 Blade Servers
• IBM Systems Director
• SLES 11 for POWER
• POWER i (i-6.1 Enhancements)
• VIOS Tips
• HMC Tips
• Active Memory Sharing
• SEA & Broadcast Storms
• Power5/Power6 Firmware Update

Power Systems Tech University – Orlando, FL, Sept 21-25

The IBM Power Systems Technical University featuring IBM AIX/Linux and IBM i is the premiere Power technical conference. Don't miss the opportunity to gain new skills, obtain an update on what's new, network with colleagues, and meet with our subject matter experts in a professional networking collaboration of technical gurus and industry professionals like yourself.

Choose from over 250 technical sessions that cover:
• New features, tips, and best practices for IBM POWER6™, IBM AIX® V6.1, and IBM i v6.1
• Workload consolidation delivering maximum ROI
• IBM PowerVM software to deliver industry-leading virtualization
• How to improve server utilization and share I/O resources for better total cost of ownership (TCO)
• Consolidation of Intel® and IBM i applications into a single chassis with the new blade servers and the IBM System i™ platform
• Best practices for a smooth migration to IBM POWER6 and IBM i v6.1 or IBM AIX V6.1
• Dynamic reallocation tools and techniques to handle changing business cycles and surges in demand
• IBM Power Architecture® technology and the steps needed for a greener IT operation and financial advantage
• Enabling security, high performance, and high availability for 24x7 operations
• Tools that simplify the life of the administrator

Visit Power University for agenda, costs, and sign-up information. Looking forward to seeing you in Orlando, FL in September. Or contact Rick Milton firstname.lastname@example.org

JS23 & JS43 Blades – New!!

On April 28th, 2009 IBM announced two new POWER6 blades – the JS23 and JS43. These latest additions to the Power Systems family pack a lot of punch. Both “J” blades run POWER6 4.2GHz processors and support the AIX 5.3, AIX 6.1, Linux, and IBM i (v6) operating systems.

PowerVM Standard Edition is still included free of charge. Enterprise Edition is also available, which adds Live Partition Mobility and Active Memory Sharing capabilities.

The JS23 has 4 cores and supports up to 64GB of RAM (in eight DIMM slots). It supports one of the following disk drives: a 73, 146, or 300GB 10K RPM spinning drive, or a 69GB solid state disk. Dual-port gigabit Ethernet, 4 USB ports, and 1 Serial over LAN port are integrated. Two PCI-E expansion cards are also available, CFFh and CIOv. The rPerf and CPW numbers are 36.28 and 14,400, respectively.

Adding feature code 8446 converts the JS23 into the double-wide JS43 model. The blade is now double wide and has twice the number of features – 8 cores, 128GB of RAM (sixteen DIMM slots), two spinning or solid state disks, 2x integrated dual-port gigabit Ethernet, and up to four PCI-E expansion cards (2x CFFh; 2x CIOv). The rPerf and CPW numbers are 68.20 and 24,050, respectively.

CFFh expansion cards:
QLogic Ethernet & 4Gb Fibre Channel (#8252)
4x InfiniBand DDR (#8258)
QLogic 8Gb Fibre Channel (#8271)
Voltaire 4x InfiniBand DDR (#8298)

CIOv expansion cards:
Emulex 8Gb Fibre Channel (#8240)
QLogic 4Gb Fibre Channel (#8241)
QLogic 8Gb Fibre Channel (#8242)
3Gb SAS Pass-through (#8246)

Supported operating system levels:
AIX 5.3: TL07-SP9, TL08-SP7, TL09-SP4, & TL10
AIX 6.1: TL00-SP9, TL01-SP5, TL02-SP4, & TL03
SLES 10 Service Pack 2 & SLES 11
RHEL 4.6, 4.7, 5.1, 5.2, & 5.3
IBM i 6.1

JS23 / JS43 Home Page
JS23 / JS43 ServerProven (Compatibility) web page
3rd Quarter Midwest Newsletter 6/16/2010 Page 1 of 8
New Release of IBM Systems Director 6.1.1
To update Systems Director, follow the steps below:
1. Log on to Systems Director.
2. Go to the Manage tab.
3. Click View updates (upper right-hand link).
4. Check for updates.
5. Select all and install updates.
6. Recycle the Systems Director services.
Systems Director 6.1 feature spotlight
Acquires, distributes, and installs required firmware, device drivers, and operating system updates using predefined policies. This includes compliance status to indicate which managed systems may require critical updates. The IBM Systems Director product itself is now updated via Update Manager, which allows customers to see which updates or configurations have been applied to a system.
Download, manage, and apply recommended updates for:
Power Systems – AIX, p, Linux, i5/OS, HMC, and system firmware
System x and BladeCenter – firmware and drivers
Updating AIX – uses a NIM server to upgrade AIX client partitions:
AIX updates are downloaded and staged on the Systems Director server.
Updates are then copied to the NIM server's /exports directory; make /exports a separate filesystem, or else / will fill up.
Systems Director then works with the NIM server to install the AIX update on the AIX client partition, rebooting the server if necessary.
Note – Systems Director assumes that the NIM server was used to create the AIX client partition, meaning that the NIM server needs to know about the AIX client partition. If the NIM server does not know about the AIX client partition, you need to create a NIM client machine definition and, on the AIX client partition, run:
niminit -a name=Client_Hostname -a master=NIMmaster_Hostname
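The niminit registration step above can be sketched as follows. The host names used here are placeholders of my choosing, not real systems; on an actual AIX client partition you would run the resulting command directly, so this sketch only assembles and prints it:

```shell
#!/bin/sh
# Sketch of the NIM client registration described above.
# CLIENT and MASTER are placeholder host names (assumptions);
# on a real AIX client partition you would run the command itself.
CLIENT=aixlpar01
MASTER=nimmaster01
cmd="niminit -a name=$CLIENT -a master=$MASTER"
echo "$cmd"   # prints: niminit -a name=aixlpar01 -a master=nimmaster01
```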
Updating Linux, HMC, firmware, etc… will use the Systems Director staging area to update the client system.
Contributed by Doug Herman (email@example.com).
VIOS Tips: Two commands that might be of assistance in a VIOS environment: sysstat and lsgcl. sysstat lists the current load on the system, similar to the AIX uptime command, while lsgcl lists previously run VIOS commands in date-stamped format.
03:49PM up 80 days, 3:11, 2 users, load average: 0.00, 0.01, 0.08
User tty login@ idle JCPU PCPU what
padmin vty0 27Jun09 17days 20:54 20:54 -rksh
# lsgcl | grep mkvdev (lists all VIOS commands with mkvdev in the command)
Jun 23 2007, 11:22:17 padmin mkvdev -vdev djdata -vadapter vhost7 -dev
Jul 3 2007, 14:17:14 padmin mkvdev -vdev aixgui -vadapter vhost0 -dev
Jul 25 2007, 11:59:14 padmin mkvdev -vdev sandboxlv -vadapter vhost9 -dev
Jul 25 2007, 12:08:29 padmin mkvdev -sea ent1 -vadapter ent2 -default ent2
Jun 3 2008, 10:04:08 padmin mkvdev -vdev cd1 -vadapter vhost3
Mar 4 2009, 16:14:31 padmin mkvdev -fbo -vadapter vhost5
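Because lsgcl output is plain date-stamped text, it is easy to post-process. A minimal sketch using a saved copy of output in the format shown above (the sample file and its contents are illustrative, not taken from a real VIOS):

```shell
#!/bin/sh
# Count logged SEA creations in saved lsgcl output.
# The sample lines mirror the format shown above; on a real VIOS you
# would pipe lsgcl straight into grep, e.g.: lsgcl | grep -c 'mkvdev -sea'
cat > /tmp/gcl_sample.txt <<'EOF'
Jun 23 2007, 11:22:17 padmin mkvdev -vdev djdata -vadapter vhost7
Jul 25 2007, 12:08:29 padmin mkvdev -sea ent1 -vadapter ent2 -default ent2
Mar 4 2009, 16:14:31 padmin mkvdev -fbo -vadapter vhost5
EOF
sea_count=$(grep -c 'mkvdev -sea' /tmp/gcl_sample.txt)
echo "$sea_count SEA creation(s) logged"   # prints: 1 SEA creation(s) logged
```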
SLES 11 now Available for POWER
SLES 11 delivers mission-critical support for Power to help improve service, reduce cost, and manage risk. For customers who have been waiting for Linux to approach AIX in terms of exploiting POWER6 capabilities, now may be the time to consider running Linux on Power. IBM controls development of AIX, so Linux generally lags behind and catches up as the open-source community and the respective distributions approve Power-specific code. SLES 11 is Novell's latest offering and represents a huge leap in many key areas. If you are a Red Hat fan, be patient, as the distros tend to leapfrog each other with features at major releases. IBM is actively working with each distro to provide new, Power-specific code.
Demonstrating our commitment to Linux, IBM has made the following contributions to SLES 11 for Power:
•DLPAR Memory Remove (this has been a long time coming)
•PowerVM Active Memory Sharing
•Dynamic Heterogeneous Multi-Path I/O
•N_Port ID Virtualization (NPIV)
•IBM Installation Toolkit for Linux v3.1
•IBM PowerVM Lx86 V1.3.1
Note that Live Partition Mobility is not missing from this list; it has been available on POWER6 since it was announced, using SLES 10 SP1 and Red Hat 5.2.
Contributed by Kevin McCombs firstname.lastname@example.org
HMC Tip – Using vtmenu to access your lpars
The HMC command vtmenu can be used to access any lpar within any managed frame within your environment.
For those administrators who either don’t care for the time aspect of navigating through the HMC Gui or just prefer
the command line in general, the vtmenu command provides this access.
Administrators can either telnet or ssh to the HMC and upon login, simply enter the ‘vtmenu’ command. If your HMC is
controlling multiple servers or frames, you will be presented with a list of these frames. Once you select a frame, you will then be presented with a list of the partitions or LPARs within that frame (see screen shot below). Once you've selected an LPAR, you'll simply be presented with a console login screen.
See IBM's InfoCenter web pages regarding vtmenu for further capabilities.
Contributed by Rick Beach email@example.com

iSeries Redbooks Website
IBM iSeries IP Networks: Dynamic! (May 5, 2004, SG24-6718-00)
V5 TCP/IP Applications on the IBM iSeries (May 11, 2004, SG24-6321-00)
Using IBM WebSphere Host Access Transformation Services V5 (May 17, 2004, SG24-6099-00)
Lotus Domino 6 Multi-Versioning Support on the IBM iSeries (May 21, 2004, SG24-6940-00)
IBM i5 and iSeries System Handbook: IBM i5/OS Version 5 Release 3 (May 28, 2004, GA19-5486-25)
Domino 6 for iSeries Best Practices Guide (June 18, 2004, SG24-6937-00)
The IBM TotalStorage Tape Libraries Guide for Open Systems (June 25, 2004, SG24-5946-02)
IBM and PeopleSoft Technology Foundation: Ensuring a High Quality of Service (June 29, 2004, SG24-6308-00)
Student Edition WebSphere Development Studio for iSeries V5.0 (July 15, 2004, SG24-7086-00)
Active Memory Sharing – New POWER Systems Feature
IBM announced Active Memory Sharing, a feature of PowerVM virtualization technology that offers customers greater efficiency through intelligent management of memory allocation while improving application performance at the same time. Combined with the dynamic allocation of processors to LPARs, dynamically allocating memory according to workload characteristics improves flexibility, optimizes resource utilization, and lowers TCO.
PowerVM Active Memory Sharing is designed to increase memory utilization on systems that are running partitions
with variable memory requirements. Instead of dedicating memory to partitions, Active Memory Sharing can
automatically flow the memory between partitions as their memory demands change. For example, systems with
partitions that serve different parts of the world or day and night workloads can have memory automatically moved
from the partition that is winding down to the partition that is ramping up, improving memory utilization and system performance.
A shared memory pool is similar to a shared processor pool, and all memory activities (memory allocation and de-allocation) take place within the shared memory pool. The size of the shared memory pool can be changed dynamically, and up to 128 logical partitions can be assigned to a memory pool. If the amount of memory required by the partitions in the shared memory pool exceeds the amount of memory available in the pool, paging will occur to the paging device owned by the VIOS.
This new virtualization feature is optionally configurable on a partition basis, enabling Power servers to support a combination of dedicated and shared memory partitions. Active Memory Sharing is only provided with PowerVM Enterprise Edition.
Minimum system requirements to use AMS are:
* An IBM Power System server based on the POWER6 processor
* Enterprise Edition of PowerVM
* Firmware level 340_070
* HMC V7R3.4.0 Service Pack 2
* Virtual I/O Server Version 2.1.1 for both HMC and IVM managed systems
* OS - AIX 6.1-TL03 or IBM i 6.1 or SUSE Linux Enterprise Server 11 on client LPARs
* All resources on client LPARs should be virtualized (processors and I/O devices)
* Paging devices on one or more Virtual I/O Servers
[Figure: partitions with dedicated memory vs. partitions with shared memory]
1. White paper on AMS
2. Redpaper PowerVM Virtualization Active Memory Sharing REDP-4470-00 http://www.redbooks.ibm.com
Contributed by Ravi Singh (firstname.lastname@example.org).
SEA Failover and Avoiding Broadcast Storms
The two common methods available to provide virtual I/O client partition network redundancy in dual Virtual I/O Servers
configurations are Network Interface Backup (NIB) and Shared Ethernet Adapter (SEA) Failover. Network interface Backup is
implemented at the client LPAR using two virtual ent devices. Shared Ethernet Adapter Failover was introduced at VIOS 1.2
Fix Pack 7. It has become the more popular method of choice. Shared Ethernet Adapter failover provides redundancy by
configuring a backup Shared Ethernet Adapter on a different Virtual I/O Server logical partition that can be used if the primary
Shared Ethernet Adapter fails. The client LPAR network connectivity continues without disruption.
Typical SEA Failover Configuration
Shared Ethernet Adapter (SEA) Failover is implemented on the Virtual I/O Server using a bridging (layer-2) approach to access
external networks. SEA Failover supports IEEE 802.1Q VLAN-tagging, unlike Network Interface Backup. With SEA Failover,
two Virtual I/O Servers provide the bridging function of the Shared Ethernet Adapter, automatically failing over if one Virtual I/O Server is unavailable or the Shared Ethernet Adapter is unable to access the external network through its physical Ethernet adapter.
A broadcast storm is a situation where one message that is broadcast across a network results in multiple responses. Each
response generates more responses, causing excessive transmission of broadcast messages. Severe broadcast storms can block
all other network traffic, but they can usually be prevented by carefully configuring a network to block illegal broadcast messages.
How to avoid a potential broadcast storm
1. Upgrade your VIOS code to current levels. The latest fix packs are available from IBM Fix Central; see the VIOS Support Site for current VIOS fixes/levels.
2. When setting up SEA Failover, be sure to create the standby SEA failover in one step using the mkvdev command.
mkvdev -sea ent#_physical -vadapter ent#_data_veth -default ent#_data_veth -defaultid ent#_data_veth_vid -attr ha_mode=auto ctl_chan=ent#_ctl_chan
Do not create the standby SEA as a regular SEA and then chdev it to failover mode. If the SEA is configured in two steps, one to create the SEA adapter and a second chdev to add the ha_mode value, it can cause a broadcast storm.
3. Talk to your network engineers about implementing BPDU Guard on the switch ports. The Shared Ethernet Adapter is designed to prevent network loops. However, as an additional precaution, you can enable Bridge Protocol Data Unit (BPDU) Guard on the switch ports connected to the physical adapters of the Shared Ethernet Adapter. BPDU Guard detects looped Spanning Tree Protocol BPDU packets and shuts down the port. This helps prevent broadcast storms on the network.
Other related information can be found at:
1) InfoCenter: Shared Ethernet Adapter Failover (SEA)
2) Document from Cisco regarding Spanning Tree: Understanding Spanning Tree Protocol
Contributed by Tony Garone email@example.com
IBM i 6.1 Enhancements – 2Q 2009

• IBM i supports additional IBM POWER6 hardware
o Virtual Tape support enables IBM i partitions to back up directly to a PowerVM VIOS-attached tape drive, saving hardware costs and management time.
o IBM i supports additional options for customers looking to implement SAN solutions. Enhancements include smart Fibre Channel attachment of DS8000 extended to Power5 systems, new attachment options for DS6800, and support for DS5000 with PowerVM.
o The i Edition Express for BladeCenter S configuration has been updated to increase the minimum memory from 2GB to 4GB on the PowerBlade JS12 Express blade and to include a new Intelligent Copper Pass-Thru Module.
o For more information: Announcement Letter 209-078
• IBM i integration with BladeCenter and System x
o i 6.1 supports the Microsoft software initiator service with select models of BladeCenter and System x servers, providing the same level of integration while saving the expense of the iSCSI hardware adapter. Supported with Microsoft® Windows 2008 Server and Windows 2003 Server.
o For more information: IBM i Integration with BladeCenter and System x
• Two new features join the DB2® Web Query for i family
o DB2 Web Query for i is enhanced with two new features to enable integration between the DB2 Web Query reporting environment and Microsoft products. DB2 Web Query Adapter for Microsoft SQL Server provides connectivity from DB2 Web Query in IBM i to remote SQL databases. DB2 Web Query Spreadsheet Client provides enhanced capabilities for users of Microsoft Excel 2002 or later.
o For more information: Announcement Letter
• DB2 Storage Engine for MySQL
o IBM and MySQL are delivering a SQL storage engine for MySQL on i5/OS. With a DB2 storage engine, applications written to MySQL will run on i5/OS and the data will be stored in DB2. This will allow you to implement online and transactional MySQL applications while storing all of the data in a single, easy-to-manage DB2 database.
o For more information: Using IBM DB2 for i as a Storage Engine of MySQL Redbook
• The IBM Temporary Software License for i
o The IBM Temporary Software License for i is enhanced to offer temporary licensing of i5/OS and IBM i processors, users, and application servers.
o For more information: Announcement Letter 209-085

Contributed by John Bizon firstname.lastname@example.org

Solid State Device (SSD) Announcement

On April 28th, IBM announced support for Solid State Disk (SSD) on IBM i 6.1. With no seek time or rotational delays, SSDs can deliver substantially better I/O performance than hard disk drives (HDDs), bridging the gap between memory and disk speeds. On many systems, HDD capacity utilization is held low to help ensure higher I/O performance and more consistent response time. Often, for performance-sensitive workloads, this is less than 30-50% of capacity. SSD capacity utilization is not restricted and can run much closer to 100% without a performance impact.

Mixing SSD and HDD can provide a cost-effective and highly efficient solution. It is typical for databases to have a large percentage of data which is infrequently used (“cold”) and a small percentage which is frequently used (“hot”). Only 10-20% of the data may be considered hot, but it can be responsible for 80-90% of the disk activity. Since SSD offers the best performance, it should be focused on hot data. HDD offers a lower storage cost, so it should be focused on cold data. This allows the use of larger HDDs and/or running them at a higher percent of capacity, reducing the total quantity of drives needed.

To help you identify and move hot and cold data to the appropriate disk drives, IBM has included a “Trace and Balance” function as part of IBM i 6.1. It monitors an ASP (Auxiliary Storage Pool) to determine hot and cold data. Upon request, it automatically moves hot data to SSD and cold data to HDD. You can monitor and rebalance an ASP at any time. A few key OS files are automatically placed on SSD, and you can also specify specific database objects to be placed on SSDs.

[Figure: application response time vs. transactions/minute for 72 HDD + 16 SSD, with no balancing and with data balanced]

Not only are SSDs capable of driving tens of thousands of I/O operations per second (IOPS), as opposed to hundreds for HDDs, they can also provide a more reliable system. Compare the chance of one SSD failing versus the chance of one out of several HDDs failing (using the scenario of a few SSDs and a few larger HDDs replacing many smaller HDDs). SSDs have no moving parts, which adds to their reliability. Using sophisticated management of flash memory, IBM can avoid overusing storage locations (wear leveling). Finally, there is a large amount (80%) of spare storage in IBM SSDs on standby to extend drive life. For more information refer to: Performance Value of SSD using IBM i

Contributed by Dean Woodke email@example.com
Power5/Power6 Firmware Updates – Naming Conventions
IBM POWER systems firmware levels follow the naming convention PPNNSSS_FFF_DDD:
· PP = package identifier
a) If this value is 01, it identifies server (system) firmware.
b) On Power5 systems, if this value is 02, it identifies power subsystem firmware (Bulk Power Code).
· NN = machine type/model group
The server firmware for each machine type/model group is given its own unique 2-character code. In IBM Power Systems™, the server firmware for the individual machine type/model groups is identified by one of the following: EA, EH, EM, EL, ES. In Power6 Systems™, Bulk Power Code is identified by either EB or EP.
· SSS = release level indicator (e.g., 310)
· FFF = service pack level within that release (this number is incremental and increases with each service pack)
· DDD = release or service pack level of the last disruptive level
Release Levels and Service Packs consist of a cover letter, an XML file and the firmware
RPM file (for example, 01EM310_001_001.xml and 01EM310_001_001.rpm).
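Given that convention, a firmware file name can be split into its fields with standard shell tools. A quick sketch using the sample name above:

```shell
#!/bin/sh
# Split a firmware name of the form PPNNSSS_FFF_DDD into its fields.
# The sample name comes from the example above: the leading "01EM310"
# packs PP, NN, and SSS together; FFF and DDD follow after underscores.
fw=01EM310_001_001
pp=$(echo "$fw" | cut -c1-2)     # package identifier: 01 = server firmware
nn=$(echo "$fw" | cut -c3-4)     # machine type/model group code, e.g. EM
sss=$(echo "$fw" | cut -c5-7)    # release level indicator
fff=$(echo "$fw" | cut -d_ -f2)  # service pack level within the release
ddd=$(echo "$fw" | cut -d_ -f3)  # last disruptive release/service pack level
echo "$pp $nn $sss $fff $ddd"    # prints: 01 EM 310 001 001
```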
Firmware packages also have severity definitions; here's how they are classified:
HIPER (High Impact/PERvasive) – Should be installed as soon as possible.
SPE (SPEcial attention) – Should be installed at earliest convenience. Fixes for low-potential, high-impact problems.
ATT (ATTention) – Should be installed at earliest convenience. Fixes for low-potential, low- to medium-impact problems.
PE (Programming Error) – Can install when convenient. Fixes minor problems.
When should you consider updating firmware? There are some natural points at which firmware should be evaluated for a potential update:
· When a subscription notice advises of a critical or HIPER (highly pervasive) fix.
· When one of the twice-yearly Release Levels is released.
· Whenever new hardware is introduced into the environment.
· Anytime HMC firmware levels are adjusted.
· Whenever an outage is scheduled for a system which otherwise has limited opportunity to update or upgrade.
· When your system's firmware level is approaching end-of-service or “End of Service Pack Support”.
· If other similar hardware systems are being upgraded and firmware consistency can be maximized by a more homogeneous firmware level.
· On a “twice yearly” cycle if firmware has not been updated or upgraded within the last year.
Finally, be sure to check out the “Service and support best practices for POWER Systems” web page at:
On this page is a link to firmware best practices labeled:
IBM Power Systems System Firmware (Microcode) Service Strategies and Best Practices
Contributed by Ross Coniglio (firstname.lastname@example.org).
Your Midwest POWER Systems contacts:
Cleveland, OH: Kevin McCombs email@example.com
Cincinnati, OH (and No. KY): Ross Coniglio firstname.lastname@example.org
Columbus, OH: Rick Beach email@example.com
Indianapolis, IN: Brett Murphy firstname.lastname@example.org
Pittsburgh, PA: email@example.com
Flint, MI: Dean Woodke firstname.lastname@example.org
Detroit, MI: John Bizon email@example.com
Detroit, MI: Ravi Singh firstname.lastname@example.org
Detroit, MI: Rick Milton email@example.com
Detroit, MI: Doug Herman firstname.lastname@example.org
Manager: Brian Richmond email@example.com