Managing Disk Subsystems using IBM TotalStorage Productivity Center (SG24-7097)
Transcript

Front cover

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Install and customize Productivity Center for Disk
Install and customize Productivity Center for Replication
Use Productivity Center to manage your storage

Mary Lovelace, Jason Bamford, Dariusz Ferenc, Madhav Vaze

ibm.com/redbooks
International Technical Support Organization

Managing Disk Subsystems using IBM TotalStorage Productivity Center

September 2005, SG24-7097-01
Note: Before using this information and the product it supports, read the information in “Notices” on page ix.

Second Edition (September 2005)

This edition applies to Version 2 Release 1 of IBM TotalStorage Productivity Center (product numbers 5608-TC1, 5608-TC4, 5608-TC5).

© Copyright International Business Machines Corporation 2004, 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
Trademarks
Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome

Chapter 1. IBM TotalStorage Productivity Center overview
1.1 Introduction to IBM TotalStorage Productivity Center
  1.1.1 Standards organizations and standards
1.2 IBM TotalStorage Open Software family
1.3 IBM TotalStorage Productivity Center
  1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
  1.3.2 Fabric subject matter expert: Productivity Center for Fabric
  1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
  1.3.4 Replication subject matter expert: Productivity Center for Replication
1.4 IBM TotalStorage Productivity Center
  1.4.1 Productivity Center for Disk and Productivity Center for Replication
  1.4.2 Event services
1.5 Taking steps toward an On Demand environment

Chapter 2. Key concepts
2.1 Standards organizations and standards
  2.1.1 CIM/WEB management model
2.2 Storage Networking Industry Association
  2.2.1 The SNIA Shared Storage Model
  2.2.2 SMI Specification
  2.2.3 Integrating existing devices into the CIM model
  2.2.4 CIM Agent implementation
  2.2.5 CIM Object Manager
2.3 Common Information Model (CIM)
  2.3.1 How the CIM Agent works
2.4 Service Location Protocol (SLP)
  2.4.1 SLP architecture
  2.4.2 SLP service agent
  2.4.3 SLP user agent
  2.4.4 SLP directory agent
  2.4.5 Why use an SLP DA?
  2.4.6 When to use DAs
  2.4.7 SLP configuration recommendation
  2.4.8 Setting up the Service Location Protocol Directory Agent
  2.4.9 Configuring SLP Directory Agent addresses
2.5 Productivity Center for Disk and Replication architecture

Chapter 3. TotalStorage Productivity Center suite installation
3.1 Installing the IBM TotalStorage Productivity Center
  3.1.1 Configurations
  3.1.2 Installation prerequisites
  3.1.3 TCP/IP ports used by TotalStorage Productivity Center
  3.1.4 Default databases created during install
3.2 Pre-installation check list
  3.2.1 User IDs and security
  3.2.2 Certificates and key files
3.3 Services and service accounts
  3.3.1 Starting and stopping the managers
  3.3.2 Uninstall Internet Information Services
  3.3.3 SNMP install
3.4 IBM TotalStorage Productivity Center for Fabric
  3.4.1 The computer name
  3.4.2 Database considerations
  3.4.3 Windows Terminal Services
  3.4.4 Tivoli NetView
  3.4.5 Personal firewall
  3.4.6 Change the HOSTS file
3.5 Installation process
  3.5.1 Prerequisite product install: DB2 and WebSphere
  3.5.2 Installing IBM Director
  3.5.3 Tivoli Agent Manager
  3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
  3.5.5 IBM TotalStorage Productivity Center for Disk
  3.5.6 IBM TotalStorage Productivity Center for Replication
  3.5.7 IBM TotalStorage Productivity Center for Fabric

Chapter 4. CIMOM installation and configuration
4.1 Introduction
4.2 Planning considerations for Service Location Protocol
  4.2.1 Considerations for using SLP DAs
  4.2.2 SLP configuration recommendation
4.3 General performance guidelines
4.4 Planning considerations for CIMOM
  4.4.1 CIMOM configuration recommendations
4.5 Installing CIM agent for ESS
  4.5.1 ESS CLI install
  4.5.2 ESS CIM Agent install
  4.5.3 Post installation tasks
4.6 Configuring the ESS CIM Agent for Windows
  4.6.1 Registering ESS devices
  4.6.2 Register ESS server for Copy Services
  4.6.3 Restart the CIMOM
  4.6.4 CIMOM user authentication
4.7 Verifying connection to the ESS
  4.7.1 Problem determination
  4.7.2 Confirming the ESS CIMOM is available
  4.7.3 Setting up the Service Location Protocol Directory Agent
  4.7.4 Configuring IBM Director for SLP discovery
  4.7.5 Registering the ESS CIM Agent to SLP
  4.7.6 Verifying and managing CIMOM availability
4.8 Installing CIM agent for IBM DS4000 family
  4.8.1 Verifying and managing CIMOM availability
4.9 Configuring CIMOM for SAN Volume Controller
  4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
  4.9.2 Registering the SAN Volume Controller host in SLP
4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
  4.10.1 SLP registration and slptool
  4.10.2 Persistency of SLP registration
  4.10.3 Configuring slp.reg file

Chapter 5. TotalStorage Productivity Center common base use
5.1 Productivity Center common base: Introduction
5.2 Launching TotalStorage Productivity Center
5.3 Exploiting Productivity Center common base
  5.3.1 Configure MDM
  5.3.2 Launch Device Manager
  5.3.3 Discovering new storage devices
  5.3.4 Manage CIMOMs
  5.3.5 Manually removing old CIMOM entries
5.4 Performing volume inventory
5.5 Working with ESS
  5.5.1 Changing the display name of an ESS
  5.5.2 ESS volume inventory
  5.5.3 Assigning and unassigning ESS volumes
  5.5.4 Creating new ESS volumes
  5.5.5 Launch device manager for an ESS device
5.6 Working with SAN Volume Controller
  5.6.1 Changing the display name of a SAN Volume Controller
  5.6.2 Working with SAN Volume Controller mdisks
  5.6.3 Creating new mdisks on supported storage devices
  5.6.4 Create and view SAN Volume Controller vdisks
5.7 Working with DS4000 family or FAStT storage
  5.7.1 Changing the display name of a DS4000 or FAStT
  5.7.2 Working with DS4000 or FAStT volumes
  5.7.3 Creating DS4000 or FAStT volumes
  5.7.4 Assigning hosts to DS4000 and FAStT volumes
  5.7.5 Unassigning hosts from DS4000 or FAStT volumes
5.8 Event Action Plan Builder
  5.8.1 Applying an Event Action Plan to a managed system or group
  5.8.2 Exporting and importing Event Action Plans

Chapter 6. TotalStorage Productivity Center for Disk use
6.1 Performance Manager GUI
6.2 Exploiting Performance Manager
  6.2.1 Performance Manager data collection
  6.2.2 Using IBM Director Scheduler function
  6.2.3 Reviewing data collection task status
  6.2.4 Managing Performance Manager database
  6.2.5 Performance Manager gauges
  6.2.6 ESS thresholds
  6.2.7 Data collection for SAN Volume Controller
  6.2.8 SAN Volume Controller thresholds
6.3 Exploiting gauges
  6.3.1 Before you begin
  6.3.2 Creating gauges example
  6.3.3 Zooming in on the specific time period
  6.3.4 Modify gauge to view array level metrics
  6.3.5 Modify gauge to review multiple metrics in same chart
6.4 Performance Manager command line interface
  6.4.1 Performance Manager CLI commands
  6.4.2 Sample command outputs
6.5 Volume Performance Advisor (VPA)
  6.5.1 VPA introduction
  6.5.2 The provisioning challenge
  6.5.3 Workload characterization and workload profiles
  6.5.4 Workload profile values
  6.5.5 How the Volume Performance Advisor makes decisions
  6.5.6 Enabling the trace logging for Director GUI interface
6.6 Getting started
  6.6.1 Workload profiles
  6.6.2 Using VPA with predefined workload profile
  6.6.3 Launching VPA tool
  6.6.4 ESS user validation
  6.6.5 Configuring VPA settings for the ESS disk space request
  6.6.6 Choosing workload profile
  6.6.7 Choosing candidate locations
  6.6.8 Verify settings for VPA
  6.6.9 Approve recommendations
  6.6.10 VPA loopback after Implement Recommendations selected
6.7 Creating and managing workload profiles
  6.7.1 Choosing workload profiles
6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager
  6.8.1 Installing IBM Director Console
  6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console
  6.8.3 Installing Remote Console for Performance Manager function
  6.8.4 Launching Remote Console for TotalStorage Productivity Center

Chapter 7. TotalStorage Productivity Center for Fabric use
7.1 TotalStorage Productivity Center for Fabric overview
  7.1.1 Zoning overview
  7.1.2 Supported switches for zoning
  7.1.3 Deployment
  7.1.4 Enabling zone control
  7.1.5 TotalStorage Productivity Center for Disk eFix
  7.1.6 Installing the eFix
7.2 Installing Fabric remote console
7.3 TotalStorage Productivity Center for Disk integration
7.4 Launching TotalStorage Productivity Center for Fabric

Chapter 8. TotalStorage Productivity Center for Replication use
8.1 TotalStorage Productivity Center for Replication overview
  8.1.1 Supported Copy Services
  8.1.2 Replication session
  8.1.3 Storage group
  8.1.4 Storage pools
  8.1.5 Relationship of group, pool, and session
  8.1.6 Copyset and sequence concepts
8.2 Exploiting TotalStorage Productivity Center for Replication
  8.2.1 Before you start
  8.2.2 Creating a storage group
  8.2.3 Modifying a storage group
  8.2.4 Viewing storage group properties
  8.2.5 Deleting a storage group
  8.2.6 Creating a storage pool
  8.2.7 Modifying a storage pool
  8.2.8 Deleting a storage pool
  8.2.9 Viewing storage pool properties
  8.2.10 Storage paths
  8.2.11 Point-in-Time Copy: Creating a session
  8.2.12 Creating a session: Verifying source-target relationship
  8.2.13 Continuous Synchronous Remote Copy: Creating a session
  8.2.14 Managing a Point-in-Time Copy
  8.2.15 Managing a Continuous Synchronous Remote Copy
8.3 Using Command Line Interface (CLI) for replication
  8.3.1 Session details
  8.3.2 Starting a session
  8.3.3 Suspending a session
  8.3.4 Terminating a session

Chapter 9. Problem determination
9.1 Troubleshooting tips: Host configuration
  9.1.1 IBM Director logfiles
  9.1.2 Using Event Action Plans
  9.1.3 Restricting discovery scope in TotalStorage Productivity Center
  9.1.4 Following discovery using Windows raswatch utility
  9.1.5 DB2 database checking
  9.1.6 IBM WebSphere tracing and logfile browsing
  9.1.7 SLP and CIM Agent problem determination
  9.1.8 Enabling SLP tracing
  9.1.9 ESS registration
  9.1.10 Viewing Event entries
9.2 Replication Manager problem determination
  9.2.1 Diagnosing an indications problem
  9.2.2 Restarting the replication environment
9.3 Enabling trace logging
  9.3.1 Enabling WebSphere Application Server trace
9.4 Enabling trace logging
  9.4.1 ESS user authentication problem
  9.4.2 SVC data collection task failure due to previous running task

Chapter 10. Database management and reporting
10.1 DB2 database overview
10.2 Database purging in TotalStorage Productivity Center
  10.2.1 Performance Manager database panel
10.3 IBM DB2 tool suite
  10.3.1 Command Line Tools
  10.3.2 Development Tools
  10.3.3 General Administration Tools
  10.3.4 Monitoring Tools
10.4 DB2 Command Center overview
  10.4.1 Command Center navigation example
10.5 DB2 Command Center custom report example
  10.5.1 Extracting LUN data report
  10.5.2 Command Center report
10.6 Exporting collected performance data to a file
  10.6.1 Control Center
  10.6.2 Data extraction tools, tips and reporting methods
10.7 Database backup and recovery overview
10.8 Backup example

Appendix A. TotalStorage Productivity Center DB2 table formats
A.1 Performance Manager tables
  A.1.1 VPVPD table
  A.1.2 VPCFG table
  A.1.3 VPVOL table
  A.1.4 VPCCH table

Appendix B. Worksheets
B.1 User IDs and passwords
  B.1.1 Server information
  B.1.2 User IDs and passwords to lock the key files
B.2 Storage device information
  B.2.1 IBM Enterprise Storage Server
  B.2.2 IBM FAStT
  B.2.3 IBM SAN Volume Controller

Appendix C. Event management
C.1 Event management introduction
  C.1.1 Understanding events and event actions
  C.1.2 Understanding event filters
  C.1.3 Event Actions
  C.1.4 Event Data Substitution
  C.1.5 Updating Event Plans, Filters, and Actions

Related publications
IBM Redbooks
Other publications
Online resources
How to get IBM Redbooks
Help from IBM
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529viii Managing Disk Subsystems using IBM TotalStorage Productivity Center
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product, and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements, or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious, and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing, or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright IBM Corp. 2004, 2005. All rights reserved. ix
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

Eserver®, DB2®, OS/390®, e-business on demand™, Enterprise Storage Server®, QMF™, iSeries™, ESCON®, Redbooks™, z/OS®, FlashCopy®, Redbooks (logo)™, AIX®, Informix®, S/390®, Cloudscape™, Intelligent Miner™, Tivoli Enterprise™, Cube Views™, IBM®, Tivoli Enterprise Console®, CICS®, Lotus®, Tivoli®, DataJoiner®, MVS™, TotalStorage®, DB2 Universal Database™, NetView®, WebSphere®

The following terms are trademarks of other companies:

Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

Excel, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

EJB, Java, JDBC, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.
Preface

IBM® TotalStorage® Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server®, and FAStT.

TotalStorage Productivity Center includes the IBM Tivoli® Bonus Pack for SAN Management, bringing together device management with fabric management to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator can configure storage devices, manage the devices, and view the Storage Area Network from a single point. This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities.

This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation, configuration, and use of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. It is intended for anyone who wants to learn about TotalStorage Productivity Center and how it complements an on demand environment, and for those planning to install and use the product.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Mary Lovelace is a Consulting IT Specialist in the International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and systems support.

Jason Bamford is a Certified IT Specialist in the IBM Software Business, United Kingdom. He has 21 years of customer experience in finance, commercial, and public sector accounts, deploying mid-range systems on AIX®, Windows®, and other UNIX® variants. An IBM employee for the past eight years, Jason specializes in IBM software storage products and is a subject matter expert in the UK for Tivoli Storage Manager.

Dariusz Ferenc is a Technical Support Specialist with the Storage Systems Group at IBM Poland. He has been with IBM for four years and has nearly 10 years of experience in storage systems. He provides technical support in the CEMA region and is an IBM Certified Specialist in various storage products. His responsibilities include providing technical support and designing storage solutions. Darek holds a degree in Computer Science from the Poznan University of Technology, Poland.

Madhav Vaze is an Accredited Senior IT Specialist and ITS Storage Engagement Lead in Singapore, specializing in storage solutions for Open Systems. Madhav has 19 years of experience in the IT services industry and five years of experience in IBM storage hardware and software. He has acquired the Brocade BCFP and SNIA professional certifications.
The team: Dariusz, Jason, Mary, Madhav

Thanks to the following people for their contributions to this project:

Sangam Racherla
International Technical Support Organization, San Jose Center

Bob Haimowitz
ITSO Raleigh Center

Diana Duan
Michael Liu
Richard Kirchofer
Paul Lee
Thiha Than
Bill Warren
Martine Wedlake
IBM San Jose, California

Mike Griese
Technical Support Marketing Lead

Scott Drummond
Program Director, Storage Networking

Curtis Neal
Scott Venuti
Open Systems Demo Center, San Jose

Russ Smith
Storage Software Project Management

Jeff Ottman
Systems Group TotalStorage Education Architect

Doug Dunham
Tivoli Swat Team
Ramani Routray
Almaden Research Center

The original authors of this book are:

Ivan Aliprandi
William Andrews
John A. Cooper
Daniel Demer
Werner Eggli
Tom Smythe
Peter Zerbini

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an Internet note to: redbook@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE Building 80-E2, 650 Harry Road, San Jose, California 95120-6099
Chapter 1. IBM TotalStorage Productivity Center overview

IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage open software family, designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), and IBM TotalStorage DS4000 series.

TotalStorage Productivity Center is a solution for customers with storage management requirements who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface.

While the focus of this book is the IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication components of the IBM TotalStorage Productivity Center, this chapter provides an overview of the entire IBM TotalStorage Open Software Family.
1.1 Introduction to IBM TotalStorage Productivity Center

The IBM TotalStorage Productivity Center consists of software components that enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the recent standard issued by the Storage Networking Industry Association (SNIA). The standard addresses the interoperability of storage hardware and software within a SAN.

1.1.1 Standards organizations and standards

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN management standards bodies

Key standards for storage management are:

- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 of the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).
1.2 IBM TotalStorage Open Software family

The IBM TotalStorage Open Software Family is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management.

The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is a complete range of IBM storage hardware and devices, providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. Storage virtualization is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services.

The next layer is composed of:

- Storage infrastructure management, to help enterprises understand and proactively manage their storage infrastructure in the on demand world
- Hierarchical storage management, to help control growth
- Archive management, to manage the cost of storing huge quantities of data
- Recovery management, to ensure recoverability of data

The top layer is storage orchestration, which automates workflows to help eliminate human error.

Figure 1-2 Enabling customers to move toward On Demand

Previously we discussed the next steps, or entry points, into an On Demand environment. The IBM software products that represent these entry points, and that comprise the IBM TotalStorage Open Software Family, are shown in Figure 1-3 on page 4.
Figure 1-3 IBM TotalStorage open software family

1.3 IBM TotalStorage Productivity Center

The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs.

The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager).

Taking a closer look at storage infrastructure management (see Figure 1-4 on page 5), we focus on four subject matter experts who empower storage administrators to do their work effectively:

- Data subject matter expert
- SAN Fabric subject matter expert
- Disk subject matter expert
- Replication subject matter expert
Figure 1-4 Centralized, automated storage infrastructure management

1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data

The Data subject matter expert has intimate knowledge of how storage is used, for example whether the data is used by a file system or a database application. Figure 1-5 on page 6 shows the role of the Data subject matter expert, which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).
Figure 1-5 Monitor and configure the storage infrastructure: Data area

Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time as well as actual hardware resources. IT managers need ways to make their administrators more efficient and to utilize their storage resources more efficiently. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control.

Features of the TotalStorage Productivity Center for Data are:

- Automated identification of the storage resources in an infrastructure, and analysis of how effectively those resources are being used.
- File-system and file-level evaluation, which uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up, and managed.
- Automated control through customizable policies, with actions that can include centralized alerting, distributed responsibility, and fully automated response.
- Prediction of future growth and future at-risk conditions from historical information.

Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently and to predict future storage growth more accurately.
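The file-level evaluation described above can be illustrated with a small sketch. This is not the product's implementation: it is a hypothetical Python example that walks a directory tree and flags files that have not been modified within a cutoff period as archive candidates. The function name, the `age_days` and `min_size_bytes` parameters, and the returned structure are assumptions for illustration only.

```python
import os
import time

def find_archive_candidates(root, age_days=180, min_size_bytes=0):
    """Walk a directory tree and flag stale files as archive candidates.

    A file is a candidate when its last-modified time is older than
    `age_days` and it is at least `min_size_bytes` large. Returns a list
    of (path, size, age_in_days) tuples, largest files first.
    """
    cutoff = time.time() - age_days * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < cutoff and st.st_size >= min_size_bytes:
                age = (time.time() - st.st_mtime) / 86400
                candidates.append((path, st.st_size, round(age)))
    # Largest files first: these represent the biggest potential savings
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates
```

A report like this is conceptually what the product's customizable policies act on: the listed files could be archived, deleted, or surfaced in a centralized alert.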
TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:

- Storage from a host perspective: manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files
- Storage from an application perspective: monitor and manage the storage activity inside different database entities, including instance, tablespace, and table
- Storage utilization, and provide chargeback information

Architecture

The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts.

With TotalStorage Productivity Center for Data, you can:

- Monitor virtually any host
- Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network

For more information, refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

1.3.2 Fabric subject matter expert: Productivity Center for Fabric

The storage infrastructure management for Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool. The tool must have a single point of operation and must be able to perform all the tasks from the SAN. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center. The Fabric subject matter expert is the expert in the SAN.
Its role is:

- Discovery of fabric information
- Providing the ability to specify fabric policies:
  - Which HBAs to use for each host and for what purpose
  - Objectives for zone configuration (for example, shielding host HBAs from one another, and performance)
- Automatically modifying the zone configuration

TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in:

- Improved application availability:
  - Predicting storage network failures before they happen, enabling preventive maintenance
  - Accelerated problem isolation when failures do happen
- Optimized storage resource utilization, by reporting on storage network performance
- Enhanced storage personnel productivity: Tivoli SAN Manager creates a single point of control, administration, and security for the management of heterogeneous storage networks

Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert.

Figure 1-6 Monitor and configure the storage infrastructure: Fabric area

TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. It can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric:

- Manages fabric devices (switches) through outband management
- Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host)
- Monitors the network and collects events and traps
- Launches vendor-provided SAN element management applications from the TotalStorage Productivity Center for Fabric Console
- Discovers and manages iSCSI devices
- Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor)

TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.
TotalStorage Productivity Center for Fabric components

The major components of the TotalStorage Productivity Center for Fabric include:

- A manager or server, running on a SAN managing server
- Agents, running on one or more managed hosts
- A management console, which is by default on the Manager system, plus optional additional remote consoles
- Outband agents, consisting of vendor-supplied MIBs for SNMP

There are two additional components which are not included in the TotalStorage Productivity Center:

- IBM Tivoli Enterprise™ Console (TEC), which is used to receive TotalStorage Productivity Center for Fabric generated events. Once forwarded to TEC, these can then be consolidated with events from other applications and acted on according to enterprise policy.
- IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for the purpose of analysis, in order to give management the ability to access and analyze information about its business.

The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent.

TotalStorage Productivity Center for Fabric Server

- Performs initial discovery of the environment:
  - Gathers and correlates data from agents on managed hosts
  - Gathers data from SNMP (outband) agents
  - Graphically displays SAN topology and attributes
- Provides customized monitoring and reporting through NetView®
- Reacts to operational events by changing its display
- (Optionally) forwards events to Tivoli Enterprise Console® or SNMP managers

TotalStorage Productivity Center for Fabric Agent

Gathers information about:

- SANs, by querying switches and devices for attribute and topology information
- Host-level storage, such as file systems and LUNs
- Events and other information detected by HBAs

and forwards topology and event information to the Manager.

Discover SAN components and devices

TotalStorage Productivity Center for Fabric uses two methods to discover information about the SAN: outband discovery and inband discovery.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs which support SNMP.
In outband discovery, all communications occur over the IP network:

- TotalStorage Productivity Center for Fabric requests information over the IP network from a switch, using SNMP queries on the device.
- The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used:

- TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host.
- That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network.
- The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network.

TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.

TotalStorage Productivity Center for Fabric benefits

TotalStorage Productivity Center for Fabric discovers the SAN infrastructure and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can report on faults on components (either individually or in groups, or "smartsets", of components). This helps increase data availability for applications, so the company can either be more efficient or maximize the opportunity to produce revenue.
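To illustrate how the two discovery paths complement each other, here is a hypothetical Python sketch (not product code) of correlating device records discovered outband (SNMP over IP) with records discovered inband (agents querying over the Fibre Channel paths). Keying devices by worldwide name (WWN) and the attribute names used here are assumptions for illustration.

```python
def merge_discovery(outband, inband):
    """Correlate device records from outband (SNMP/IP) and inband
    (agent/Fibre Channel) discovery into one topology view.

    Each input maps a device WWN to a dict of attributes. Inband data
    carries richer host-level detail, so it wins on conflicting keys,
    but a device seen by only one path is still kept, which is why
    monitoring can continue over IP when Fibre Channel is unavailable.
    """
    merged = {}
    for wwn in set(outband) | set(inband):
        record = {}
        record.update(outband.get(wwn, {}))
        record.update(inband.get(wwn, {}))  # inband detail takes precedence
        # Remember which discovery path (or paths) saw this device
        record["sources"] = sorted(
            src for src, seen in (("inband", wwn in inband),
                                  ("outband", wwn in outband)) if seen)
        merged[wwn] = record
    return merged
```

A device present in both maps ends up with the union of attributes and `sources == ["inband", "outband"]`; a device reachable only over IP still appears, tagged `["outband"]`.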
TotalStorage Productivity Center for Fabric helps the storage administrator:

- Prevent faults in the SAN infrastructure through reporting and proactive maintenance
- Identify and resolve problems in the storage infrastructure quickly when a problem does occur
- Provide fault isolation of SAN links

For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848.

1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk

The Disk subject matter expert allows you to manage disk systems. It discovers and classifies all disk systems that exist and draws a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor and configure disk systems, create disks, and perform LUN masking of disks. It also performs performance trending and performance threshold I/O analysis for both real disks and virtual disks, and provides automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the Performance Manager component of the IBM TotalStorage Multiple Device Manager).

The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 11. The disk systems monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk.
Figure 1-7 Monitor and configure the storage infrastructure: Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.

Functions

TotalStorage Productivity Center for Disk provides the following functions:

- Collect data from devices. The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2® database tables.
- Configure performance thresholds. You can use the Productivity Center for Disk to set performance thresholds for each device type.
Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you to take action before a critical event occurs. Chapter 1. IBM TotalStorage Productivity Center overview 11
You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
- Monitor performance metrics across storage subsystems from a single console.
- Receive timely alerts to enable event action based on customer policies.
- View performance data from the Productivity Center for Disk database. You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, once defined, a gauge can be "started", which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
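The gauge workflow just described — a named definition, a date/time range, and a query against the performance database — can be sketched as follows. This is an illustrative sketch only: the table layout, column names, and sample values are invented, and Python's sqlite3 stands in for the DB2 repository the product actually uses.

```python
import sqlite3
from datetime import datetime, timedelta

def build_sample_db():
    # In-memory stand-in for the DB2 performance tables (names invented).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE perf_samples (device TEXT, metric TEXT, ts TEXT, value REAL)")
    base = datetime(2005, 9, 1, 12, 0, 0)
    for i in range(6):  # one sample every 5 minutes
        conn.execute("INSERT INTO perf_samples VALUES (?, ?, ?, ?)",
                     ("ESS-01", "io_rate",
                      (base + timedelta(minutes=5 * i)).isoformat(), 100.0 + i))
    return conn

def gauge_query(conn, device, metric, start, end):
    """Fetch the series of samples a gauge would display for one metric."""
    return conn.execute(
        "SELECT ts, value FROM perf_samples"
        " WHERE device = ? AND metric = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (device, metric, start.isoformat(), end.isoformat())).fetchall()

conn = build_sample_db()
data = gauge_query(conn, "ESS-01", "io_rate",
                   datetime(2005, 9, 1, 12, 0), datetime(2005, 9, 1, 12, 15))
print(len(data))  # → 4
```

Changing the date/time range after the gauge is displayed simply reissues the query with new bounds, which is why only data still present in the database can be viewed.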
- Focus on storage optimization through identification of the best LUN. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables which are user controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN.

For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk refer to Chapter 6, "TotalStorage Productivity Center for Disk use" on page 227.

1.3.4 Replication subject matter expert: Productivity Center for Replication

The Replication subject matter expert's job is to provide a single point of control for all replication activities. This role is filled by the TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, the Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure the source and target volume relationships are set up. Given a set of source volumes that represent an application, the Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application. Productivity Center for Replication will start up all replication pairs and monitor them to completion.
If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, then resync them and resume the replication. The Productivity Center for Replication provides complete management of the replication process.

The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication.

Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area

Functions
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and Point-in-Time Copy (also known as FlashCopy®). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:
- Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
- Set up a group for replication.
- Create, save, and name a replication task.
- Schedule a replication session with the user interface:
  – Create Session Wizard
  – Select Source Group
  – Select Copy Type
  – Select Target Pool
  – Save Session
- Start a replication session.

A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions refer to Chapter 8, "TotalStorage Productivity Center for Replication use" on page 355.

1.4 IBM TotalStorage Productivity Center

All the subject matter experts, for Data, Fabric, Disk, and Replication, are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family. The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs.

The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using existing storage management products — Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk and Productivity Center for Replication — from one physical place. The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 15.
Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM's e-business On Demand technology. In an On Demand environment we need the ability to provide IT resources on demand, when the resources are needed by an application to support the customer's business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area, automation like that which the IBM TotalStorage Productivity Center launch pad provides.

Automation means workflow. Workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM uses the IBM Tivoli Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy resource requests in the e-business on demand™ environment in the server arena. The IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager provide the provisioning in the e-business On Demand environment.

1.4.1 Productivity Center for Disk and Productivity Center for Replication

The Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console. This software solution is designed specifically for managing networked storage components based on SMI-S, including:
- IBM TotalStorage SAN Volume Controller
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Fibre Array Storage Technology (FAStT)
- IBM TotalStorage DS4000 series
- SMI-S enabled devices
Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and with SAN Management products from other vendors.

In a SAN environment, multiple devices work together to create a storage solution. The Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 17 provides an overview of Productivity Center for Disk and Productivity Center for Replication.
Figure 1-11 Productivity Center overview (components shown: Performance Manager, Replication Manager, and Device Manager, layered on IBM Director, IBM TotalStorage Productivity Center for Fabric, WebSphere Application Server, and DB2)

The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:
- Device Manager: common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
- Performance Manager: provided by Productivity Center for Disk
- Replication Manager: provided by Productivity Center for Replication

Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset, configuration, and availability data from the supported devices; and providing a limited topology view of the storage usage relationships between those devices.

The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console as shown in Figure 1-12 on page 18.
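SLP-based discovery yields service URLs that identify each CIM agent's access point. As an illustration of the kind of information such a URL carries, here is a small parser for the common service:wbem form; the function name and example URL are inventions for this discussion, not product code, though ports 5988/5989 are the conventional HTTP/HTTPS CIM-XML ports.

```python
def parse_wbem_service_url(url):
    """Split an SLP service URL like service:wbem:https://host:5989 into
    (scheme, host, port). Purely illustrative; real SLP replies carry
    service attributes in addition to the URL."""
    prefix = "service:wbem:"
    if not url.startswith(prefix):
        raise ValueError("not a wbem service URL: " + url)
    scheme, _, rest = url[len(prefix):].partition("://")
    host, _, port = rest.partition(":")
    # 5989 is the conventional HTTPS CIM-XML port, 5988 the HTTP one.
    default = 5989 if scheme == "https" else 5988
    return scheme, host, int(port) if port else default

print(parse_wbem_service_url("service:wbem:https://cimom.example.com:5989"))
# → ('https', 'cimom.example.com', 5989)
```

Once the access point is known, the management application can contact the CIM agent there and enumerate the device's managed objects.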
Figure 1-12 IBM Director Console

Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts with, and provides some SAN management functionality when, IBM Tivoli SAN Manager is installed.

The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.

SAN Management
When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that are exploited are:
- The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
- The ability to retrieve and to modify the zoning configuration on the SAN
- The ability to register for event notification, to ensure Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when host LUN configurations change
Performance Manager function
The Performance Manager function provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the Performance Manager is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The Performance Manager enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
- Collect data from devices. The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
- Configure performance thresholds. You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event.
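The threshold mechanism described above — per-metric warning and critical levels checked against incoming samples, with a configurable action — can be sketched like this. The metric names and limit values here are invented for illustration; they are not the product's actual defaults.

```python
# Hypothetical per-metric limits; the product ships its own defaults.
THRESHOLDS = {
    "disk_utilization": {"warning": 50.0, "critical": 80.0},
    "io_rate": {"warning": 2000.0, "critical": 3000.0},
}

def check_sample(metric, value):
    """Return 'critical', 'warning', or None for one performance sample."""
    limits = THRESHOLDS.get(metric)
    if limits is None:
        return None  # metric not eligible for threshold checking
    if value >= limits["critical"]:
        return "critical"
    if value >= limits["warning"]:
        return "warning"
    return None

for metric, value in [("disk_utilization", 85.0), ("io_rate", 1500.0)]:
    level = check_sample(metric, value)
    if level:
        # Configured action: log the occurrence or trigger an event.
        print(f"{level} threshold exceeded for {metric}: {value}")
```

In the product the comparison runs as samples arrive during an active performance collection task, so a modified threshold takes effect on the next sample checked.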
The threshold settings can vary by individual device. The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to checking being performed by active performance collection tasks. Examples of threshold metrics include:
- Disk utilization value
- Average cache hold time
- Percent of sequential I/Os
- I/O rate
- NVS full value
- Virtual disk I/O rate
- Managed disk I/O rate

There is a user interface that supports threshold settings, enabling a user to:
- Modify a threshold property for a set of devices of like type.
- Modify a threshold property for a single device.
- Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices.
- Reset a threshold property to the IBM-recommended value (if defined) for a single device.
- Show a summary of threshold properties for all of the devices of like type.
- View performance data from the Performance Manager database.

Gauges
The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data.

The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range.
The date/time range can be changed after the initial gauge window is displayed. For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.

Database services for managing the collected performance data
The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potential volumes of data.

Database purge function
A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function, and it enables you to specify the data to purge, allowing important data to be maintained for trend purposes.
- You can specify to purge all of the sample data from all types of devices older than a specified number of days.
- You can specify to purge the data associated with a particular type of device.
- If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged.
- You can specify the number of days that data is to remain in the database before being purged. Sample data and, optionally, exception data older than the specified number of days will be purged.
A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.
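The purge policy just listed — retention in days, per-device-type scope, and optional exclusion of threshold-exceeding samples — amounts to a single parameterized delete. In this sketch sqlite3 stands in for DB2, and the table and column names are invented.

```python
import sqlite3
from datetime import datetime, timedelta

# sqlite3 stands in for DB2; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (device_type TEXT, ts TEXT, value REAL, exceeded INTEGER)")
now = datetime(2005, 9, 30)
conn.executemany("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 [("ESS", (now - timedelta(days=d)).isoformat(), 1.0, flag)
                  for d, flag in [(1, 0), (10, 0), (40, 0), (90, 1)]])

def purge(conn, device_type, older_than_days, keep_exceptions=True):
    """Delete samples past the retention window; optionally keep samples
    that exceeded at least one threshold (the 'exception data')."""
    cutoff = (now - timedelta(days=older_than_days)).isoformat()
    sql = "DELETE FROM samples WHERE device_type = ? AND ts < ?"
    if keep_exceptions:
        sql += " AND exceeded = 0"
    return conn.execute(sql, (device_type, cutoff)).rowcount

deleted = purge(conn, "ESS", 30)
print(deleted)  # → 1 (the 40-day-old sample; the 90-day-old exception sample is kept)
```

A real implementation would follow the delete with the table reorganization step the text mentions, to reclaim the freed space.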
Database information function
Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the database percent full. This function can be invoked from either the Web user interface or the CLI.

Volume Performance Advisor
The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables that are user-controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN.

Replication Manager function
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and Point-in-Time Copy (also known as FlashCopy).
Currently replication functions are provided for the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

1.4.2 Event services

At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from actions that might be taken.

An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. The notification of that change can be generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. The IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director Event Management encompasses the following concepts:
- Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
- Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
- Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
- Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.

The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly which action plans are invoked for each specific event.

Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized. Action plans can be created based on filter criteria for these events.
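The relationship between events, filters, actions, and action plans can be modeled in a few lines. This is a conceptual sketch of the filter-matching idea only, not IBM Director's implementation; the class, field, and action names are invented.

```python
# Conceptual model of Director-style event filtering; names invented.
class EventFilter:
    def __init__(self, severities=None, event_types=None):
        self.severities = severities      # None means "match any severity"
        self.event_types = event_types    # None means "match any type"

    def matches(self, event):
        if self.severities and event["severity"] not in self.severities:
            return False
        if self.event_types and event["type"] not in self.event_types:
            return False
        return True

def run_plans(plans, event):
    """A plan is (filter, action); apply every action whose filter matches."""
    return [action(event) for flt, action in plans if flt.matches(event)]

log_action = lambda e: f"logged {e['type']}"
page_action = lambda e: f"paged admin about {e['type']}"
plans = [(EventFilter(), log_action),                          # default: log everything
         (EventFilter(severities={"critical"}), page_action)]  # escalate critical events

event = {"type": "threshold.exceeded", "severity": "critical"}
print(run_plans(plans, event))
# → ['logged threshold.exceeded', 'paged admin about threshold.exceeded']
```

Real filters also support criteria such as day and time of occurrence and event category, but the matching principle is the same.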
The default action plan is to log all events in the event log. Productivity Center creates additional event families, and event types within those families, that will be listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans will be provided. An example is the action to indicate the amount of historical data to be kept.

1.5 Taking steps toward an On Demand environment

So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand. An On Demand operating environment must be:
- Flexible
- Self-managing
- Scalable
- Economical
- Resilient
- Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several next steps that you may take to move to the On Demand environment:
- Constant changes to the storage infrastructure (upgrading or changing hardware, for example) can be addressed by virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- The ultimate goal is to eliminate human errors by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, there will be results: improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.
Chapter 2. Key concepts

This chapter gives you an understanding of the basic concepts that you must know in order to use TotalStorage Productivity Center. These concepts include standards for storage management, Service Location Protocol (SLP), Common Information Model (CIM) agent, and Common Information Model Object Manager (CIMOM).

© Copyright IBM Corp. 2004, 2005. All rights reserved.
2.1 Standards organizations and standards

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 2-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 2-1 SAN standards bodies

Key standards for storage management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 of the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).

2.1.1 CIM/WBEM management model

CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as storage subsystems, Fibre Channel switches, and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.
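CIM's object-oriented flavor can be illustrated with a toy model of a class instance and the object path a WBEM client would use to address it. The class and property names below mimic the CIM schema style, but this is a teaching sketch, not a real WBEM client library, and the property values are invented.

```python
# Toy rendering of a CIM instance and its object path; not a WBEM client.
class CIMInstance:
    def __init__(self, classname, properties, keys):
        self.classname = classname
        self.properties = properties
        self.keys = keys  # which properties form the instance's identity

    def object_path(self, namespace="root/cimv2"):
        # A CIM object path names the namespace, class, and key bindings.
        keybindings = ",".join(f'{k}="{self.properties[k]}"' for k in self.keys)
        return f"{namespace}:{self.classname}.{keybindings}"

vol = CIMInstance("CIM_StorageVolume",
                  {"SystemName": "ESS-800-01", "DeviceID": "VOL-0042"},
                  keys=["SystemName", "DeviceID"])
print(vol.object_path())
# → root/cimv2:CIM_StorageVolume.SystemName="ESS-800-01",DeviceID="VOL-0042"
```

Because the model is vendor-independent, a management application addresses an IBM volume and a competitor's volume with the same class and path conventions; only the key values differ.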
  • 42. CIM/WBEM technology uses a powerful human and machine readable language called the managed object format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.2.2 Storage Networking Industry Association The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well established storage component vendors as well as emerging storage technology companies. The SNIA mission is to ensure that storage networks become efficient, complete, and trusted solutions across the IT community. The SNIA vision is to provide a point of cohesion for developers of storage and networking products in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market.2.2.1 The SNIA Shared Storage Model IBM is an active member of SNIA and fully supports SNIA’s goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 2-2. Figure 2-2 The SNIA Storage Model IBM is committed to deliver best-of-breed products in all aspects of the SNIA storage model, including: Chapter 2. Key concepts 27
  • 43. Block aggregation The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy. Block aggregation or Block level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as: – Space management through combining or splitting native storage into new, aggregated block storage – Striping through spreading the aggregated block storage across several native storage devices – Redundancy through point-in-time copy and both local and remote mirroring File aggregation or File level virtualization The file/record layer in the SNIA model is responsible for packing items such as files and databases into larger entities such as block-level volumes and storage devices. File aggregation or File level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as: – Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support – Enhance productivity by providing centralized and simplified management through policy-based storage management automation – Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions will adhere to open industry standards. 
For more information about SMI-S/CIM/WBEM, see the SNIA and DMTF Web sites: http://www.snia.org http://www.dmtf.org 2.2.2 SMI Specification SNIA has fully adopted and enhanced the CIM standard for storage management in its SMI Specification (SMI-S). SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMI-S is to standardize the management interfaces so that management applications can utilize these and provide cross-device management. This means that a newly introduced device can be immediately managed, as it will conform to the standards. SMI-S extends CIM/WBEM with the following features:
– A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S.
– A complete, unified, and rigidly specified object model: SMI-S defines “profiles” and “recipes” within the CIM that enable a management client to reliably utilize a component vendor’s implementation of the standard, such as the control of LUNs and zones in the context of a SAN. 28 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 44. – Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.
– Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems such that complex storage networking topologies can be successfully mapped and reliably controlled.
– An automated discovery system: SMI-S compliant products, when introduced in a SAN environment, will automatically announce their presence and capabilities to other constituents.
– Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.
The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform, and enabling them to run on different platforms. The SNIA will also provide interoperability tests, which will help vendors verify that their applications and devices conform to the standard. 2.2.3 Integrating existing devices into the CIM model As these standards are still evolving, we cannot expect that all devices will support the native CIM interface, and because of this, the SMI-S is introducing CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMI-S. An agent is used for one device, and an object manager for a set of devices. This type of operation is also called the proxy model and is shown in Figure 2-3. The CIM Agent or CIM Object Manager (CIMOM) translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage Enterprise Storage Server includes a CIMOM inside it. Figure 2-3 CIM Agent / Object Manager Chapter 2. Key concepts 29
  • 45. In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the “Embedded Model” in Figure 2-3 on page 29. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks. 2.2.4 CIM Agent implementation When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables TotalStorage Productivity Center for Data, TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-4 is an overview of the CIM agent. Figure 2-4 CIM agent overview The CIM agent includes a CIM Object Manager (CIMOM) which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. 
When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface.2.2.5 CIM Object Manager The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type, and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between catcher and sender use the language and models defined by the SMI-S standard.30 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 46. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions. Figure 2-5 shows the CIM enablement model. Figure 2-5 CIM enablement model 2.3 Common Information Model (CIM) The Common Information Model (CIM) Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the common information model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors. A CIM agent typically involves the following components:
– Agent code: An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device.
– CIM Object Manager (CIMOM): The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or device provider.
– Client application: A storage management program, like TotalStorage Productivity Center, that initiates CIM requests to the CIM agent for the device.
– Device: The storage server that processes and hosts the client application requests.
– Device provider: A device-specific handler that serves as a plug-in for the CIM. That is, the CIMOM uses the handler to interface with the device. Chapter 2. Key concepts 31
  • 47. – Service Location Protocol (SLP): A directory service that the client application calls to locate the CIMOM.
2.3.1 How the CIM Agent works The CIM Agent typically works in the following way (see Figure 2-6): (1) The client application locates the CIMOM by calling an SLP directory service. (2) When the CIMOM is first invoked, (3) it registers itself with SLP and supplies its location, IP address, port number, and the type of service it provides. (4) With this information, the client application starts to communicate directly with the CIMOM. The client application then (5) sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request. (6) It then directs the requests to the appropriate functional component of the CIMOM or to a device provider. (7) The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy (8)-(9)-(10) client application requests. Figure 2-6 CIM Agent work flow 2.4 Service Location Protocol (SLP) The Service Location Protocol (SLP) is an Internet Engineering Task Force (IETF) standard, documented in Requests for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services. 32 Managing Disk Subsystems using IBM TotalStorage Productivity Center
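Steps (4) and (5) of the CIM Agent flow above — contacting the CIMOM at its SLP-advertised service URL and posting a CIM-XML request over HTTP — can be sketched in Python. This is a hedged illustration, not TotalStorage Productivity Center code: the address in the service URL is a made-up example, and the payload shows a generic CIM-XML intrinsic method call (DMTF DSP0200) rather than any product-specific request.

```python
import http.client
from urllib.parse import urlparse

# A CIMOM's service URL as an SLP user agent might discover it
# (the host address here is an illustrative assumption).
service_url = "service:wbem:http://9.1.38.50:5988"
target = urlparse(service_url[len("service:wbem:"):])

# Minimal CIM-XML intrinsic method call: enumerate instances of
# CIM_ComputerSystem in the root/cimv2 namespace.
payload = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1001" PROTOCOLVERSION="1.0"><SIMPLEREQ>
  <IMETHODCALL NAME="EnumerateInstances">
   <LOCALNAMESPACEPATH><NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/></LOCALNAMESPACEPATH>
   <IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="CIM_ComputerSystem"/></IPARAMVALUE>
  </IMETHODCALL>
 </SIMPLEREQ></MESSAGE>
</CIM>"""

def enumerate_instances() -> bytes:
    """POST the CIM-XML message to the CIMOM and return its raw reply."""
    conn = http.client.HTTPConnection(target.hostname, target.port, timeout=10)
    conn.request("POST", "/cimom", body=payload.encode("utf-8"), headers={
        "Content-Type": 'application/xml; charset="utf-8"',
        "CIMOperation": "MethodCall",       # extension headers from the
        "CIMMethod": "EnumerateInstances",  # CIM-XML HTTP mapping
        "CIMObject": "root/cimv2",
    })
    return conn.getresponse().read()
```

The CIMOM's reply is another CIM-XML document (a SIMPLERSP element) that a real client would parse; authentication and HTTPS (port 5989) are omitted here for brevity.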
  • 48. SLP enables the discovery and selection of generic services, which could range in function from hardware services such as those for printers or fax machines, to software services such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end-user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user could specify to search for all available printers that support PostScript. Based on the given service type (printers), and the given attributes (PostScript), SLP searches the user's network for any matching services, and returns the discovered list to the user. 2.4.1 SLP architecture The Service Location Protocol (SLP) architecture includes three major components: a service agent, a user agent, and a directory agent. The service agent and user agent are required components in an SLP environment, whereas the SLP directory agent is optional. Following is a description of these components:
– Service agent (SA): A process working on behalf of one or more network services to broadcast the services.
– User agent (UA): A process working on behalf of the user to establish contact with some network service. The UA retrieves network service information from the service agents or directory agents.
– Directory agent (DA): A process that collects network service broadcasts. 
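The service-type search described above can be illustrated at the wire level. The following is a hedged sketch, under RFC 2608, of the SLPv2 Service Request a user agent multicasts to the SLP group address 239.255.255.253, port 427; a production UA also handles retransmission, the previous-responder list, and parsing of the unicast replies, all of which are omitted here. The `service:wbem` type is used as the example because it is the type CIM agents advertise.

```python
import socket
import struct

SLP_MCAST_GROUP, SLP_PORT = "239.255.255.253", 427

def srv_rqst(service_type: str, predicate: str = "", xid: int = 1) -> bytes:
    """Build an SLPv2 Service Request (RFC 2608, function-ID 1)."""
    lang = b"en"
    body = b""
    for field in (b"",                      # previous-responder list
                  service_type.encode(),    # for example, service:wbem
                  b"DEFAULT",               # scope list
                  predicate.encode(),       # LDAPv3 attribute filter
                  b""):                     # SLP SPI (security, unused here)
        body += struct.pack("!H", len(field)) + field
    length = 14 + len(lang) + len(body)     # fixed header + language tag + body
    msg = struct.pack("!BB", 2, 1)          # version 2, SrvRqst
    msg += length.to_bytes(3, "big")
    msg += struct.pack("!H", 0x2000)        # flags: multicast request
    msg += (0).to_bytes(3, "big")           # next-extension offset
    msg += struct.pack("!H", xid)
    msg += struct.pack("!H", len(lang)) + lang
    return msg + body

def discover(service_type: str, timeout: float = 3.0) -> None:
    """Multicast the request; SAs answer with unicast SrvRply messages."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(srv_rqst(service_type), (SLP_MCAST_GROUP, SLP_PORT))
    # A real UA would now loop on sock.recvfrom(), parse each SrvRply,
    # and repeat per the multicast convergence algorithm.
```

A usage call would be `discover("service:wbem")`; in practice the OpenSLP library or the `slptool` utility performs this exchange for you.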
Note: The SLP directory agent is completely different and separate from the IBM Director Agent, which occupies the lowest tier in the IBM Director architecture.2.4.2 SLP service agent The Service Location Protocol (SLP) service agent (SA) is a component of the SLP architecture that works on behalf of one or more network services to broadcast the availability of those services. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available. The SA can run in the same process or in a different process as the service itself. But in either case, the SA supports registration and de-registration requests for the service. The service registers itself with the SA during startup, and removes the registration for itself during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration will be active. A service is required to reregister itself periodically, before the life-span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if Chapter 2. Key concepts 33
  • 49. a service becomes inactive without removing the registration for itself, that old registration will be removed automatically when its life-span expires. The maximum life-span of a registration is 65,535 seconds (about 18 hours). 2.4.3 SLP user agent The Service Location Protocol (SLP) user agent (UA) is a process working on behalf of the user to establish contact with some network service. The UA retrieves service information from the service agents or directory agents. The UA is a component of SLP that is closely associated with a client application or a user who is searching for the location of one or more services on the network. You can use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locator (URL) and any service attributes. You can then use the service's URL to connect to the service. The SLP UA locates the registered services, based on a general description of the services that the user or client application has specified. This description usually consists of a service type, and any service attributes, which are matched against the service URLs registered in the SLP service agents. The SLP UA usually runs in the same process as the client application, although it is not necessary to do so. The SLP UA processes find requests by sending out multicast messages to the network and targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA is, therefore, able to discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA using a unicast reply message. The SLP UA follows the multicast convergence algorithm, and sends out repeated multicast messages until no new replies are received. 
The resulting set of discovered services, including their service URL and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using the service's URL (see Figure 2-7 on page 35). 34 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 50. Figure 2-7 Service Location Protocol user agent An SLP UA is not required to discover all matching services that exist on the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets, which could be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs are able to recognize truncated service replies and establish TCP connections to retrieve all of the information of the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which could cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs. 2.4.4 SLP directory agent The Service Location Protocol (SLP) directory agent (DA) is an optional component of SLP that collects network service broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP performance. The SLP DA can be thought of as an intermediate tier in the SLP architecture, placed between the user agents (UAs) and the service agents (SAs), such that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic on the network, and it protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment. Figure 2-8 on page 36 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs. Chapter 2. Key concepts 35
  • 51. Figure 2-8 SLP UA, SA and DA interaction When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request and specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery, and it is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts up. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service Uniform Resource Locators (URLs) and attributes. Figure 2-9 on page 37 shows the interactions of UAs and SAs with DAs, during active DA discovery. 36 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 52. Figure 2-9 Service Location Protocol DA functions The SLP DA functions very similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences, however, where DAs provide more functionality than SAs. One area, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery on DAs. One other difference is that when a DA first initializes, it sends out a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that might already be active on the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When the SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-10 on page 38 shows the interactions of DAs with SAs and UAs, during passive DA discovery. Chapter 2. Key concepts 37
  • 53. Figure 2-10 Service Location Protocol passive DA discovery 2.4.5 Why use an SLP DA? The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs must unicast to DAs for service and SAs must register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UA’s scopes reduce multicast. By eliminating multicast for normal UA requests, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. In networks without multicast routing enabled, you can configure SLP to use broadcast. However, broadcast is very inefficient, because it requires each host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets. 2.4.6 When to use DAs Use DAs in your enterprise if any of the following conditions are true:
– Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
– UA clients experience long delays or timeouts during multicast service requests.
– You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. 
– Your network does not have multicast enabled and consists of multiple subnets that must share services. 38 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 54. 2.4.7 SLP configuration recommendation Some configuration recommendations are provided for enabling TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration, SLP directory agent configuration, and environment configuration. Router configuration Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those that are associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation. SLP directory agent configuration Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no other services outside its own subnet. To allow TotalStorage Productivity Center to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center Discovery Preference panel as discussed in “Configuring SLP Directory Agent addresses” on page 41. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed. 
Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA. Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly. Environment configuration It might be advantageous to configure SLP DAs in the following environments: In environments where there are other non-TotalStorage Productivity Center SLP UAs that frequently perform discovery on the available services, an SLP DA should be configured. This ensures that the existing SAs are not overwhelmed by too many service requests. In environments where there are many SLP SAs, a DA helps decrease network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request. Chapter 2. Key concepts 39
  • 55. 2.4.8 Setting up the Service Location Protocol Directory Agent You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center resides. Perform the following steps to set up the SLP DAs:
1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center to discover.
2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)
3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Find the slp.conf file in the CIM Agent installation directory and perform the following steps to edit the file:
– Make a backup copy of this file and name it slp.conf.bak.
– Open the slp.conf file and scroll down until you find (or search for) the line ;net.slp.isDA = true. Remove the semi-colon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false. Save the file.
– Copy this file (or replace it if the file already exists) to the main Windows subdirectory for Windows machines (for example, c:\winnt), or to the /etc directory for UNIX machines.
4. Restart the daemon process and the CIMOM process for the CIM Agent. Refer to the CIM Agent documentation for your operating system and Chapter 4, “CIMOM installation and configuration” on page 119 for more details. Note: The CIMOM process might start automatically when you restart the SLP daemon.
5. You have now converted the SLP SA of the CIM Agent to run as an SLP DA. 
The CIMOM is not affected and will register itself with the DA instead of the SA. However, the DA will automatically discover all other services registered with other SLP SAs in that subnet. 6. Go to the TotalStorage Productivity Center Discovery Preference settings panel (Figure 2-11 on page 41), and enter the host names or IP addresses of each of the machines that are running the SLP DA that was set up in the prior steps. Note: Enter only a simple host name or IP address; do not enter protocol and port number. Result When a discovery task is started (either manually or scheduled), TotalStorage Productivity Center will discover all devices on the subnet on which TotalStorage Productivity Center resides, and it will discover all devices with affinity to the SLP DAs that were configured.40 Managing Disk Subsystems using IBM TotalStorage Productivity Center
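The slp.conf edit described in step 3 of this procedure can also be scripted. The following is a hedged Python sketch: the file location varies by CIM Agent installation, so the path passed to the function is an assumption, and copying the file to the Windows or /etc directory and restarting the daemon (steps 3 and 4) are still left to the administrator.

```python
import shutil
from pathlib import Path

def enable_da(conf_path: str) -> None:
    """Turn an SLP SA into a DA by activating net.slp.isDA in slp.conf.

    Mirrors step 3 of the procedure: keep a backup copy, strip the
    leading semicolon from the ;net.slp.isDA line, and force the
    property's value to true.
    """
    conf = Path(conf_path)
    shutil.copy(conf, conf.with_name("slp.conf.bak"))   # backup first
    lines = conf.read_text().splitlines()
    for i, line in enumerate(lines):
        # Match the property whether or not it is commented out.
        if line.lstrip("; ").startswith("net.slp.isDA"):
            lines[i] = "net.slp.isDA = true"
    conf.write_text("\n".join(lines) + "\n")

# Example call; the installation path is illustrative only:
# enable_da(r"C:\Program Files\IBM\cimagent\slp.conf")
```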
  • 56. 2.4.9 Configuring SLP Directory Agent addresses Perform this task to configure the addresses for the Service Location Protocol (SLP) Directory Agent (DA) for TotalStorage Productivity Center. TotalStorage Productivity Center uses the DA addresses during device discovery. When configured with DAs, the TotalStorage Productivity Center SLP User Agent (UA) sends service requests to each of the configured DA addresses in turn to discover the registered services for each. The UA also continues discovery of registered services by performing multicast service discovery. This additional action ensures that registered services are discovered when going from an environment without DAs to one with DAs. Note: If you have set up an SLP DA in the subnet that the TotalStorage Productivity Center server is in, you can register specific devices to be discovered and managed by TotalStorage Productivity Center that are outside that subnet. You do this by registering the CIM Agent to SLP. Refer to Chapter 4, “CIMOM installation and configuration” on page 119 for details. Perform the following steps to configure the addresses for the SLP directory agent:
– From the IBM Director menu bar, click Options. The Options menu is displayed.
– From the TotalStorage Productivity Center selections, click Discovery Preferences. The Discovery Preferences menu is displayed.
– Select the MDM SLP Configuration tab (see Figure 2-11). Figure 2-11 MDM SLP Configuration panel
– In the SLP Directory Agent Configuration section, type a valid Internet host name or an IP address (in dotted decimal format).
– Click Add. The host and scope information that you entered is displayed in the SLP Directory Agents Table.
– Click Change to change the host name or IP address for a selected item in the SLP Directory Agents Table. Chapter 2. Key concepts 41
– Click Remove to delete a selected item from the SLP Directory Agents table.
– Click OK to add or change the directory agent information.
– Click Cancel to cancel adding or changing the directory agent information.

2.5 Productivity Center for Disk and Replication architecture

Figure 2-12 provides an overview of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication architecture. All of the components of the TotalStorage Productivity Center are shown: Device Manager, TotalStorage Productivity Center for Disk, and TotalStorage Productivity Center for Replication. Keep in mind that TotalStorage Productivity Center for Replication and TotalStorage Productivity Center for Disk are separately orderable features of TotalStorage Productivity Center. The communication protocols and flow between supported devices, the TotalStorage Productivity Center server, and the console are shown.

[Figure 2-12 shows the console tier (the TotalStorage Productivity Center Console with the Device Manager, Performance Manager, and Replication Manager consoles on the IBM Director Console) connecting over the LAN (TCP/IP) to the TotalStorage Productivity Center server, where the Device Manager, Performance Manager, and Replication Manager co-servers run with the IBM Director Server on WebSphere Application Server, using SOAP and JDBC to IBM DB2 Workgroup Server. The server in turn communicates over TCP/IP with the CIMOM/SLP agents: the ESS ICAT and SVC ICAT proxies and the FAStT CIMOM, fronting the ESS, SVC, and FAStT devices.]

Figure 2-12 TotalStorage Productivity Center architecture overview
Chapter 3. TotalStorage Productivity Center suite installation

The components of the IBM TotalStorage Productivity Center can be installed individually using the component installers as shipped, or they can be installed using the Suite Installer shipped with the package. In this chapter we document the use of the Suite Installer. Hints and tips based on our experience are included.

© Copyright IBM Corp. 2004, 2005. All rights reserved. 43
3.1 Installing the IBM TotalStorage Productivity Center

IBM TotalStorage Productivity Center provides a suite installer that helps guide you through the installation process. You can also use the suite installer to install the components standalone. One advantage of the suite installer is that it will interrogate your system and install required prerequisites. The suite installer will install the following prerequisite products or components, in this order:
– DB2 (required by all the managers)
– IBM Director (required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication)
– Tivoli Agent Manager (required by Fabric Manager and Data Manager)
– WebSphere Application Server (required by all the managers except for TotalStorage Productivity Center for Data)
The suite installer will then guide you through the installation of the IBM TotalStorage Productivity Center components. You can select more than one installation option at a time, but in this book we focus on the Productivity Center for Disk and Productivity Center for Replication install. The types of installation tasks are:
– IBM TotalStorage Productivity Center Manager Installations
– IBM TotalStorage Productivity Center Agent Installations
– IBM TotalStorage Productivity Center GUI/Client Installations
– Language Pack Installations
– Uninstall IBM TotalStorage Productivity Center Products

Considerations
If you want the ESS, SAN Volume Controller, or FAStT storage subsystems to be managed using IBM TotalStorage Productivity Center for Disk, you must install the prerequisite I/O Subsystem Licensed Internal Code and CIM Agent for the devices. See Chapter 4, “CIMOM installation and configuration” on page 119 for more information. If you are installing the CIM agent for the ESS, you must install it on a separate machine from the Productivity Center for Disk and Productivity Center for Replication code.
Note that IBM TotalStorage Productivity Center does not support zLinux on S/390® and does not support Windows domains.

3.1.1 Configurations

The storage management components of IBM TotalStorage Productivity Center can be installed on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
– Windows 2000 Server with Service Pack 4
– Windows 2000 Advanced Server
– Windows 2003 Enterprise Server Edition

Note: Refer to the following Web sites for updated support summaries, including specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html
http://www.ibm.com/software/support/
If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend you install IBM Tivoli Provisioning Manager on a separate Windows machine.

3.1.2 Installation prerequisites

This section lists the minimum prerequisites for installing TotalStorage Productivity Center.

Hardware
– Dual Pentium® 4 or Xeon™ 2.4 GHz or faster processors
– 4 GB of DRAM
– Network connectivity
– Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
– 80 GB available disk space

Database
The installation of DB2 Version 8.2 is part of the suite installer and is required by all the managers.

3.1.3 TCP/IP ports used by TotalStorage Productivity Center

This section provides an overview of the TCP/IP ports used by TotalStorage Productivity Center.

Productivity Center for Disk and Productivity Center for Replication
The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication installation program will pre-configure the TCP/IP ports used by WebSphere®.

Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication
Port value   WebSphere port
2809         Bootstrap port
9080         HTTP Transport port
9443         HTTPS Transport port
9090         Administrative Console port
9043         Administrative Console Secure Server port
5559         JMS Server Direct Address port
5557         JMS Server Security port
5558         JMS Server Queued Address port
8980         SOAP Connector Address port
7873         DRS Client Address port

TCP/IP ports used by agent manager
The Agent Manager uses these TCP/IP ports.

Chapter 3. TotalStorage Productivity Center suite installation 45
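Before running the suite installer, it can be worth confirming that none of the WebSphere ports listed in Table 3-1 is already taken by another application. The following is a minimal sketch of such a pre-flight check (our own helper, not part of the product; the port list is copied from the table above):

```python
import socket

# WebSphere ports pre-configured by the Disk/Replication installer (Table 3-1)
WEBSPHERE_PORTS = [2809, 9080, 9443, 9090, 9043, 5559, 5557, 5558, 8980, 7873]

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the TCP port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def conflicting_ports(ports):
    """Return the subset of ports that are already in use."""
    return [p for p in ports if not port_is_free(p)]
```

Running `conflicting_ports(WEBSPHERE_PORTS)` on the target server before the install would list any port the installer's pre-configured values would collide with.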
Table 3-2 TCP/IP ports for agent manager
Port value   Usage
9511         Registering agents and resource managers
9512         Providing configuration updates; renewing and revoking certificates; querying the registry for agent information; requesting ID resets
9513         Requesting updates to the certificate revocation list; requesting agent manager information; downloading the truststore file
80           Agent recovery service

TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric
The Fabric Manager uses these default TCP/IP ports.

Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric
Port value   Usage
8080         NetView Remote Web console
9550         HTTP port
9551         Reserved
9552         Reserved
9553         Cloudscape™ server port
9554         NVDAEMON port
9555         NVREQUESTER port
9556         SNMPTrapPort, the port on which to get events forwarded from Tivoli NetView
9557         Reserved
9558         Reserved
9559         Tivoli NetView Pager daemon
9560         Tivoli NetView Object Database daemon
9561         Tivoli NetView Topology Manager daemon
9562         Tivoli NetView Topology Manager socket
9563         Tivoli General Topology Manager
9564         Tivoli NetView OVs_PMD request services
9565         Tivoli NetView OVs_PMD management services
9566         Tivoli NetView trapd socket
9567         Tivoli NetView PMD service
9568         Tivoli NetView General Topology map service
9569         Tivoli NetView Object Database event socket
Port value   Usage
9570         Tivoli NetView Object Collection facility socket
9571         Tivoli NetView Web server socket
9572         Tivoli NetView SnmpServer

Fabric Manager remote console TCP/IP default ports
Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console
Port value   Usage
9560         HTTP port
9561         Reserved
9562         Tomcat's Local Server port
9563         Tomcat's warp port
9564         NVDAEMON port
9565         NVREQUESTER port
9569         Tivoli NetView Pager daemon
9570         Tivoli NetView Object Database daemon
9571         Tivoli NetView Topology Manager daemon
9572         Tivoli NetView Topology Manager socket
9573         Tivoli General Topology Manager
9574         Tivoli NetView OVs_PMD request services
9575         Tivoli NetView OVs_PMD management services
9576         Tivoli NetView trapd socket
9577         Tivoli NetView PMD service
9578         Tivoli NetView General Topology map service
9579         Tivoli NetView Object Database event socket
9580         Tivoli NetView Object Collection facility socket
9581         Tivoli NetView Web server socket
9582         Tivoli NetView SnmpServer

Fabric agents TCP/IP ports
Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric agents
Port value   Usage
9510         Common agent
9514         Used to restart the agent
9515         Used to restart the agent
3.1.4 Default databases created during install

During the installation of IBM TotalStorage Productivity Center we recommend that you use DB2 as the preferred database type. Table 3-6 lists the default databases that the installer will create during the installation.

Table 3-6 Default DB2 databases
Application                                                              Default database name (DB2)
IBM Director                                                             No default (we created database DIRECTOR)
Tivoli Agent Manager                                                     IBMCDB
IBM TotalStorage Productivity Center for Disk and Replication Base       DMCOSERV
IBM TotalStorage Productivity Center for Disk                            PMDATA
IBM TotalStorage Productivity Center for Replication hardware subcomponent   ESSHWL
IBM TotalStorage Productivity Center for Replication element catalog     ELEMCAT
IBM TotalStorage Productivity Center for Replication, Replication Manager    REPMGR
IBM TotalStorage Productivity Center for Fabric                          ITSANMDB

3.2 Pre-installation check list

The following is a list of the tasks you need to complete in preparation for the install of the IBM TotalStorage Productivity Center. You should print the tables in Appendix B, “Worksheets” on page 505 to keep track of the information you will need during the install (for example, user names, ports, IP addresses, and locations of servers and managed devices).
1. Determine which elements of the TotalStorage Productivity Center you will be installing.
2. Uninstall Internet Information Services.
3. Grant the user account that will be used to install the TotalStorage Productivity Center the following privileges:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process-level token
– Log on as a service
4. Install and configure SNMP (Fabric requirement).
5. Identify any firewalls and obtain required authorization.
6.
Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.

3.2.1 User IDs and security

This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment during the installation, and also those that are later used to manage and work with TotalStorage Productivity Center. For some of the IDs, Table 3-8 on page 49 includes a pointer to further information that is available in the manuals.
Suite Installer user
We recommend you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights shown in Table 3-7.

Table 3-7 Requirements for the Suite Installer user
User rights/Policy                      Used for
Act as part of the operating system     DB2, Productivity Center for Disk, Fabric Manager
Create a token object                   DB2, Productivity Center for Disk
Increase quotas                         DB2, Productivity Center for Disk
Replace a process-level token           DB2, Productivity Center for Disk
Log on as a service                     DB2
Debug programs                          Productivity Center for Disk

Table 3-8 shows the user IDs used in our TotalStorage Productivity Center environment.

Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment
– Suite Installer: user ID Administrator; existing user; type Windows.
– DB2: user ID db2admin (a); new user, will be created; type Windows; usage: DB2 management and Windows Service Account.
– IBM Director: user ID Administrator (a) (see also below); existing user; type Windows; group DirAdmin or DirSuper; usage: Windows Service Account.
– Resource Manager: user ID manager (b); no, default user; type Tivoli Agent Manager; n/a - internal user; usage: used during the registration of a Resource Manager to the Agent Manager.
– Common Agent: user ID AgentMgr (b) (see also below); existing user; type Tivoli Agent Manager; n/a - internal user; usage: used to authenticate agents and lock the certificate key files.
– Common Agent: user ID itcauser (b); new user, will be created; type Windows; group Windows; usage: Windows Service Account.
– TotalStorage Productivity Center universal user: user ID TPCSUID (a); new user, will be created; type Windows; group DirAdmin; usage: this ID is used to accomplish connectivity with the managed devices, i.e., this ID has to be set up on the CIM agents.
– Tivoli NetView (c): type Windows; see “Fabric Manager User IDs” on page 51.
– IBM WebSphere (c): type Windows; see “Fabric Manager User IDs” on page 51.
– Host Authentication (c): type Windows; see “Fabric Manager User IDs” on page 51.
a. This account can have whatever name you like.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here; see “Fabric Manager User IDs” on page 51.

Granting privileges
Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM TotalStorage Productivity Center for Replication. It is recommended that this user ID be the superuser ID. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They might not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can, optionally, set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, select the following path and select the appropriate user:
– Click Start → Settings → Control Panel.
– Double-click Administrative Tools.
– Double-click Local Security Policy; the Local Security Settings window opens.
– Expand Local Policies.
– Double-click User Rights Assignments to see the policies in effect on your system.
For each policy added to the user, perform the following steps:
– Highlight the policy to be checked.
– Double-click the policy and look for the user’s name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are checked.
– If the user name does not appear in the list for the policy, you must add the policy to the user.
Perform the following steps to add the user to the list:
a) Click Add on the Local Security Policy Setting window.
b) In the Select Users or Groups window, highlight the user or group under the Name column.
c) Click Add to put the name in the lower window.
d) Click OK to add the policy to the user or group.
After these user rights are set (either by the installation program or manually), log off the system, and then log on again in order for the user rights to take effect. You can then restart the installation program to continue with the install of the IBM TotalStorage Productivity Center for Disk and Replication Base.

IBM Director
With Version 4.1, you no longer need to create “internal” user accounts. All user IDs must be operating system accounts and members of one of the following:
– DirAdmin or DirSuper groups (Windows), diradmin or dirsuper groups (Linux)
– Administrator or Domain Administrator groups (Windows), root (Linux)
In addition to the above there is a host authentication password that is used to allow managed hosts and remote consoles to communicate with IBM Director.

TotalStorage Productivity Center superuser ID
The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. Do not be confused by the name; it is really only a communication user ID.

Fabric Manager User IDs
During the installation of IBM TotalStorage Productivity Center for Fabric you can select whether you want to use individual passwords for the subcomponents such as DB2, IBM WebSphere, NetView, and for the Host Authentication. You can also choose to use the DB2 administrator user ID and password to make the configuration much simpler. Figure 3-97 on page 113 shows the window where you can choose the options.

3.2.2 Certificates and key files

Within a TotalStorage Productivity Center environment several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager.

Productivity Center for Disk and Replication certificates
The WebSphere Application Server that is part of Productivity Center for Disk and Productivity Center for Replication uses certificates for SSL communication. During the installation, key files can be generated as self-signed certificates, but you will have to enter a password for each file to lock it. The default file names are:
MDMServerKeyFile.jks
MDMServerTrustFile.jks
The default directory for these key files is:
C:\Program Files\IBM\mdm\dmkeys

Tivoli Agent Manager certificates
The Agent Manager comes with demonstration certificates that you can use, but you can also create new certificates during the installation of agent manager (see Figure 3-49 on page 83).
If you choose to create new files, the password that you entered on the panel shown in Figure 3-50 on page 84 as the Agent registration password will be used to lock the key file:
agentTrust.jks
The default directory for that key file on the agent manager is:
C:\Program Files\IBM\AgentManager\certs
There are more key files in that directory, but during the installation and first steps the agentTrust.jks file is the most important one. This is only important if you let the installer create your own keys.
3.3 Services and service accounts

The managers and components that belong to the TotalStorage Productivity Center are started as Windows services. Table 3-9 provides an overview of the most important services. Note that, to keep it simple, we did not include all the DB2 services in the table.

Table 3-9 Services and service accounts
– DB2: service account db2admin. The account needs to be part of Administrators and DB2ADMNS.
– IBM Director: service name IBM Director Server; service account Administrator. You need to modify the account to be part of one of the groups DirAdmin or DirSuper.
– Agent Manager: service name IBM WebSphere Application Server V5 - Tivoli Agent Manager; service account LocalSystem. You need to set this service to start automatically after the installation.
– Common Agent: service name IBM Tivoli Common Agent - C:\Program Files\tivoli\ep; service account itcauser.
– Productivity Center for Fabric: service name IBM WebSphere Application Server V5 - Fabric Manager; service account LocalSystem.
– Tivoli NetView: service name Tivoli NetView Service; service account NetView Service.

3.3.1 Starting and stopping the managers

To start, stop, or restart one of the managers or components, you simply use the Windows Control Panel. Table 3-10 is a list of the services.

Table 3-10 Services used for TotalStorage Productivity Center
– DB2: service account db2admin.
– IBM Director: service name IBM Director Server; service account Administrator.
– Agent Manager: service name IBM WebSphere Application Server V5 - Tivoli Agent Manager; service account LocalSystem.
– Common Agent: service name IBM Tivoli Common Agent - C:\Program Files\tivoli\ep; service account itcauser.
– Productivity Center for Fabric: service name IBM WebSphere Application Server V5 - Fabric Manager; service account LocalSystem.
– Tivoli NetView: service name Tivoli NetView Service; service account NetView Service.

3.3.2 Uninstall Internet Information Services

Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure:
– Click Start → Settings → Control Panel.
– Click Add/Remove Programs.
– Click Add/Remove Windows Components.
– Remove the tick from the Internet Information Services (IIS) check box.

3.3.3 SNMP install

Before installing the components of the TotalStorage Productivity Center you should install and configure Simple Network Management Protocol (SNMP):
– Click Start → Settings → Control Panel.
– Click Add/Remove Programs.
– Click Add/Remove Windows Components.
– Double-click Management and Monitoring Tools.
– Click Simple Network Management Protocol.
– Click OK.
Close the panels and accept the installation of the components; the Windows installation CD or installation files will be required.
Make sure that the SNMP service is configured. It can be configured as follows:
– Right-click My Computer.
– Click Manage.
– Click Services.
An alternative method follows:
– Click Start → Run...
– Type in MMC (Microsoft® Management Console) and click OK.
– Click Console → Add/Remove Snap-in...
– Click Add and add Services. Select the services and scroll down to SNMP Service as shown in Figure 3-1 on page 54.
– Double-click SNMP Service.
– Click the Traps panel tab.
– Make sure that the public community name is available; if not, add it.
– Make sure that on the Security tab Accept SNMP packets from any host is checked.
Figure 3-1 SNMP Security

After setting the public community name, restart the SNMP service.

3.4 IBM TotalStorage Productivity Center for Fabric

The primary focus of this book is the install and use of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. We have included the IBM TotalStorage Productivity Center for Fabric for completeness, since it is used with the Productivity Center for Disk. There are planning considerations and prerequisite tasks that need to be completed.

3.4.1 The computer name

IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow the procedure below:
– Right-click the My Computer icon on your desktop.
– Click Properties. The System Properties panel is displayed.
– Click the Network Identification tab.
– Click Properties. The Identification Changes panel is displayed.
– Verify that your computer name is entered correctly. This is the name by which the computer will be identified in the network. Also verify that the Full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name.
– Click More.
  • 70. The DNS Suffix and NetBIOS Computer Name panel is displayed. Verify that the Primary DNS suffix field displays a domain name. The fully qualified host name must match the HOSTS file name (including case–sensitive characters).3.4.2 Database considerations When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created (if you specified the DB2 database). The default database name is TSANMDB. If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before reinstalling the manager, you must use DB2 commands to back up the database. The default name for the IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database name for Cloudscape is TSANMDB. You cannot change the database name for Cloudscape. If you are installing the manager on more than one machine in a Windows domain, the managers on different machines might end up sharing the same DB2 database. To avoid this situation, you must either use different database names or different DB2 user names when installing the manager on different machines.3.4.3 Windows Terminal Services You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any IBM TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView will appear on the manager or remote console machine only. The dialogs will not appear in the Windows Terminal Services session.3.4.4 Tivoli NetView IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to Version 7.1.3. 
If you have a Tivoli NetView release below Version 7.1.1, IBM TotalStorage Productivity Center for Fabric will prompt you to uninstall Tivoli NetView before installing this product. If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop. – Web Console – Web Console Security – MIB Loader – MIB Browser – Netmon Seed Editor – Tivoli Event Console Adaptor Important: Also ensure that you do not have the Windows 2000 Terminal Services running. Go to the Services panel and check for Terminal Services. Chapter 3. TotalStorage Productivity Center suite installation 55
User IDs and password considerations
IBM TotalStorage Productivity Center for Fabric only supports local user IDs and groups. IBM TotalStorage Productivity Center for Fabric does not support domain user IDs and groups.

Cloudscape database
If you install IBM TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you will need the following user IDs and passwords:
– Agent manager name or IP address and password
– Common agent password to register with the agent manager
– Resource manager user ID and password to register with the agent manager
– WebSphere administrative user ID and password
– Host authentication password
– Tivoli NetView password

DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you will need the user IDs and passwords listed below:
– Agent manager name or IP address and password
– Common agent password to register with the agent manager
– Resource manager user ID and password to register with the agent manager
– DB2 administrator user ID and password
– DB2 user ID and password
– WebSphere administrative user ID and password
– Host authentication password
– Tivoli NetView password

Note: If you are running under Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the Act as part of the operating system user privilege.

WebSphere
To change the WebSphere user ID and password, follow this procedure:
– Open the file: <install_location>\apps\was\properties\soap.client.props
– Modify the following entries:
com.ibm.SOAP.loginUserid=<user_ID> (enter a value for user_ID)
com.ibm.SOAP.loginPassword=<password> (enter a value for password)
– Save the file.
– Run the following script:
ChangeWASAdminPass.bat <user_ID> <password> <install_dir>
where <user_ID> is the WebSphere user ID and <password> is the password.
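The soap.client.props edit above is a plain key=value substitution on a Java properties file. The steps can be sketched as follows (a sketch of the edit only, using the property names shown above; `set_soap_credentials` is our own helper, not an IBM tool, and does not replace running ChangeWASAdminPass.bat afterwards):

```python
def set_soap_credentials(props_text, user_id, password):
    """Rewrite the WebSphere SOAP client login entries in the body of a
    soap.client.props file, leaving all other lines untouched."""
    replacements = {
        "com.ibm.SOAP.loginUserid": user_id,
        "com.ibm.SOAP.loginPassword": password,
    }
    out = []
    for line in props_text.splitlines():
        key = line.split("=", 1)[0].strip()
        if key in replacements:
            out.append(f"{key}={replacements[key]}")
        else:
            out.append(line)  # preserve unrelated properties verbatim
    return "\n".join(out)
```

The function works on the file contents as a string, so it can be tested without touching the real properties file; writing the result back to soap.client.props would complete the first step of the procedure.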
<install_dir> is the directory where the manager is installed and is optional. For example, <install_dir> is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.

3.4.5 Personal firewall

If you have a software firewall on your system, you should disable the firewall while installing the Fabric Manager. The firewall causes the Tivoli NetView installation to fail. You can enable the firewall again after you install the Fabric Manager.
Security considerations
Setting up security by using the demonstration certificates or by generating new certificates was an option that you specified when you installed the agent manager, as shown in Figure 3-49 on page 83. If you used the demonstration certificates, carry on with the installation. If you generated new certificates, follow this procedure:
– Copy the manager CD image to your computer.
– Copy the agentTrust.jks file from the agent manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This will overwrite the existing agentTrust.jks file.
– You can write a new CD image with the new file, or keep this image on your computer and point the suite installer to the directory when requested.

3.4.6 Change the HOSTS file

When you install Service Pack 3 for Windows 2000 on your computers, you must follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by the address resolution protocol, which returns the short name (not the fully qualified host name). This problem can be avoided by changing the entries in the corresponding host tables on the DNS server and on the local computer. The fully qualified host name must be listed before the short name, as shown in Example 3-1. See “The computer name” on page 54 for details on determining the host name. To correct this problem you will have to edit the HOSTS file. The HOSTS file is in the following directory:
%SystemRoot%\system32\drivers\etc

Example 3-1 Sample HOSTS file
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

127.0.0.1         localhost
192.168.123.146   jason.groupa.mycompany.com   jason

Note: Host names are case-sensitive. This is a WebSphere limitation. Check your host name.
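The FQDN-before-short-name rule above is easy to verify programmatically. The following is a minimal sketch of such a check (`hosts_entry_ok` is our own hypothetical helper, not part of the product):

```python
def hosts_entry_ok(line):
    """Return True if a HOSTS line is a comment/blank, or lists the
    fully qualified host name (one containing a dot) before any short alias."""
    body = line.split("#", 1)[0].strip()  # drop trailing comments
    if not body:
        return True  # blank or comment-only lines are fine
    names = body.split()[1:]  # names follow the IP address
    if not names:
        return False
    first = names[0]
    # the first name after the IP must be fully qualified (or localhost)
    return first == "localhost" or "." in first
```

Running every line of the HOSTS file through this check would flag entries where the short name was (incorrectly) listed first.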
3.5 Installation process

Depending on which managers you plan to install, these are the prerequisite programs that are installed first. The suite installer will install these prerequisite programs in this order:
– DB2
– WebSphere Application Server
– IBM Director
– Tivoli Agent Manager
The suite installer then launches the installation wizard for each manager you have chosen to install. If you are running the Fabric Manager install under Windows 2000, the Fabric Manager installation requires that the user ID have the Act as part of the operating system and Log on as a service user rights.
Insert the IBM TotalStorage Productivity Center suite installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the IBM TotalStorage Productivity Center CD-ROM drive, and double-click setup.exe.

Note: It may take a few moments for the installer program to initialize. Be patient until the language selection panel in Figure 3-2 appears.

The language panel is displayed. Select a language from the drop-down list. This is the language that is used for installing this product. Click OK as shown in Figure 3-2.

Figure 3-2 Installer Wizard

The Welcome to the InstallShield Wizard for the IBM TotalStorage Productivity Center panel is displayed. Click Next as shown in Figure 3-3 on page 59.
Figure 3-3 Welcome to IBM TotalStorage Productivity Center panel

The Software License Agreement panel is displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement:
– Select the I accept the terms of the license agreement radio button.
– Click Next to continue as shown in Figure 3-4.

If you do not accept the terms of the license agreement, the installation program will end without installing IBM TotalStorage Productivity Center.

Figure 3-4 License agreement

Chapter 3. TotalStorage Productivity Center suite installation 59
The Select Type of Installation panel is displayed. Select Manager installations of Data, Disk, Fabric, and Replication and click Next to continue as shown in Figure 3-5.

Figure 3-5 IBM TotalStorage Productivity Center options panel

The Select the Components panel is displayed. Select the components you want to install. Click Next to continue as shown in Figure 3-6.

Figure 3-6 IBM TotalStorage Productivity Center components

WinMgmt is a Windows service that needs to be stopped before proceeding with the install. If the service is running you will see the panel in Figure 3-7 on page 61. Click Next to stop the service.

60 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-7 WinMgmt information window

The window in Figure 3-8 will open. Click Next once again to stop WinMgmt.

Note: You should stop this service prior to beginning the install of TotalStorage Productivity Center to prevent these windows from appearing.

Figure 3-8 Services information

The Prerequisite Software panel is displayed. The products will be installed in the order listed. Click Next to continue as shown in Figure 3-9 on page 62. In this example, the first prerequisites to be installed are DB2 and WebSphere.

Chapter 3. TotalStorage Productivity Center suite installation 61
  • 77. Note: The installer will interrogate the server to determine what prerequisites are installed on the server and list what remains to be installed. Figure 3-9 Prerequisite installation3.5.1 Prerequisite product install: DB2 and WebSphere The DB2 installation Information panel is displayed. The products will be installed in the order shown in Figure 3-10 on page 63. From the DB2 installation information panel click Next to continue. Note: If DB2 is already installed on the server the installer will skip the DB2 install.62 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-10 Products to be installed

The DB2 User ID and Password panel is displayed. Accept the default user name or enter a new user ID and password. Click Next to continue as shown in Figure 3-11.

Figure 3-11 DB2 User configuration

The Confirm Target Directories for DB2 panel is displayed. Accept the default directory or enter a target directory. Click Next to continue as shown in Figure 3-12 on page 64.

Chapter 3. TotalStorage Productivity Center suite installation 63
Figure 3-12 DB2 Target Directory

You will be prompted for the location of the DB2 installation image. Browse to the installation image or installer CD, select the required information, and click Install as shown in Figure 3-13.

Figure 3-13 Installation source

Note: If you use the DB2 CD for this step, the Welcome to DB2 panel is displayed. Click Exit to exit the DB2 installation wizard. The suite installer will guide you through the DB2 installation.

The Installing Prerequisites (DB2) panel is displayed with the word Installing on the right side of the panel. When the component is installed a green arrow appears next to the component name (see Figure 3-14 on page 65). Wait for all the prerequisite programs to install. Click Next.

Note: Depending on the speed of your machine, this can take from 30 to 40 minutes to install.

64 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-14 Installing Prerequisites window - DB2 installing

After DB2 has installed, a green check mark will appear next to the text DB2 Universal Database™ Enterprise Server Edition. The installer will start the install of WebSphere as shown in Figure 3-15.

Figure 3-15 Installing Prerequisites window - WebSphere installing

After WebSphere has installed, a green check mark will appear next to the text WebSphere Application Server. The installer will start the install of the WebSphere Fixpack as shown in Figure 3-16 on page 66.

Chapter 3. TotalStorage Productivity Center suite installation 65
Figure 3-16 Installing Prerequisites window - WebSphere Fixpack installing

After the WebSphere Fixpack has installed, a green check mark will appear next to it, as shown in Figure 3-17.

Figure 3-17 Installing Prerequisites window - WebSphere Fixpack installed

After DB2, WebSphere, and the WebSphere Fixpack are installed, the DB2 Server installation was successful window opens (see Figure 3-18 on page 67). Click Next to continue.

66 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 82. Figure 3-18 DB2 installation successful The WebSphere Application Server installation was successful window opens (see Figure 3-19). Click Next to continue. Figure 3-19 WebSphere Application Server installation was successful3.5.2 Installing IBM Director The suite installer will present you with the panel showing the remaining products to be installed. The next prerequisite product to be installed is the IBM Director (see Figure 3-20 on page 68). Chapter 3. TotalStorage Productivity Center suite installation 67
Figure 3-20 Installer prerequisite products panel

The location of the IBM Director install package panel is displayed. Enter the installation source or insert the CD-ROM and enter the CD drive location. Click Next as shown in Figure 3-21.

Figure 3-21 IBM Director Installation source

The next panel provides information about the IBM Director post-install reboot option. Note that you should choose the option to reboot later when prompted (see Figure 3-22 on page 69). Click Next to continue.

68 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-22 IBM Director information

The IBM Director Server - InstallShield Wizard panel is displayed indicating that the IBM Director installation wizard will be launched. Click Next to continue (see Figure 3-23).

Figure 3-23 IBM Director InstallShield Wizard

The License Agreement window opens next. Read the license agreement. Click the I accept the terms in the license agreement radio button as shown in Figure 3-24 on page 70. Click Next to continue.

Chapter 3. TotalStorage Productivity Center suite installation 69
  • 85. Figure 3-24 IBM Director licence agreement The next window is the advertisement for Enhance IBM Director with the new Server Plus Pack window (see Figure 3-25). Click Next to continue. Figure 3-25 IBM Director information The Feature and installation directory window opens (see Figure 3-26 on page 71). Accept the default settings and click Next to continue.70 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-26 IBM Director feature and installation directory window

The IBM Director service account information window opens (see Figure 3-27). Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, then type the local host name (this is the recommended setup). Type a user name and password for IBM Director. IBM Director will run under this user name and you will log on to the IBM Director console using this user name. Click Next to continue.

Figure 3-27 Account information

The Encryption settings window opens as shown in Figure 3-28 on page 72. Accept the default settings in the Encryption settings window. Click Next to continue.

Chapter 3. TotalStorage Productivity Center suite installation 71
  • 87. Figure 3-28 Encryption settings In the Software Distribution settings window, accept the default values and click Next as shown in Figure 3-29. Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director. Figure 3-29 Install target directory The Ready to Install the Program window opens (see Figure 3-30 on page 73). Click Install to continue.72 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 88. Figure 3-30 Installation readyThe Installing IBM Director server window reports the status of the installation as shown inFigure 3-31.Figure 3-31 Installation progressThe Network driver configuration window opens. Accept the default settings and click OK tocontinue. Chapter 3. TotalStorage Productivity Center suite installation 73
  • 89. Figure 3-32 Network driver configuration The secondary window closes and the installation wizard performs additional actions which are tracked in the status window. The Select the database to be configured window opens (see Figure 3-33). Select IBM DB2 Universal Database in the Select the database to be configured window. Click Next to continue. Figure 3-33 Data base selection The IBM Director DB2 Universal Database configuration window will open (see Figure 3-34). It might be behind the status window, and you must click it to bring it to the foreground.74 Managing Disk Subsystems using IBM TotalStorage Productivity Center
In the Database name field, type a new database name for the IBM Director database table or type an existing database name.

In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation. Click Next to continue.

Figure 3-34 Database selection configuration

Accept the default DB2 node name LOCAL - DB2 in the IBM Director DB2 Universal Database configuration secondary window as shown in Figure 3-35. Click OK to continue.

Figure 3-35 Database node name selection

The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close. Click Finish as shown in Figure 3-36 on page 76 when the InstallShield Wizard Completed window opens.

Chapter 3. TotalStorage Productivity Center suite installation 75
Figure 3-36 Completed installation

Important: Do not reboot the machine at the end of the IBM Director installation. The suite installer will reboot the machine. Click No as shown in Figure 3-37.

Figure 3-37 IBM Director reboot option

Click Next to reboot the machine as shown in Figure 3-38 on page 77.

Important: If the server does not reboot at this point, cancel the installer and reboot the server.

76 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-38 Install wizard completion

After rebooting the machine the installer will initialize. The Select the installation language to be used for this wizard window opens. Select the language and click OK to continue (see Figure 3-39).

Figure 3-39 IBM TotalStorage Productivity Center installation wizard language selection

The installation confirmation panel is displayed. Click Next as shown in Figure 3-40 on page 78.

3.5.3 Tivoli Agent Manager

The next product to be installed is the Tivoli Agent Manager (see Figure 3-40 on page 78). The Tivoli Agent Manager is required if you are installing the Productivity Center for Fabric or the Productivity Center for Data. It is not required for the Productivity Center for Disk or the Productivity Center for Replication. Click Next to continue.

Chapter 3. TotalStorage Productivity Center suite installation 77
  • 93. Figure 3-40 IBM TotalStorage Productivity Center installation information The Package Location panel is displayed (see Figure 3-41). Select the installation source or CD-ROM drive and click Next. Note: If you specify the path for the installation source you must specify the path at the win directory level. Figure 3-41 Tivoli Agent Manager installation source The Tivoli Agent Manager Installer window opens (see Figure 3-42 on page 79). Click Next to continue.78 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 94. Figure 3-42 Tivoli Agent Manager installer launch windowThe Install Shield wizard will start. Then you see the language installation option window inFigure 3-43. Select the required language and click OK.Figure 3-43 Tivoli Agent Manager installation wizardThe Software License Agreement window opens. Click I accept the terms of the licenseagreement to continue. Chapter 3. TotalStorage Productivity Center suite installation 79
  • 95. Figure 3-44 Tivoli Agent Manager License agreement The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-45. Figure 3-45 Tivoli Agent Manager prerequisite source directory80 Managing Disk Subsystems using IBM TotalStorage Productivity Center
The DB2 information panel is displayed (see Figure 3-46). If you do not want to accept the defaults, enter the:
– DB2 User Name
– DB2 Port

Enter the DB2 Password and click Next to continue.

Figure 3-46 DB2 User information

The WebSphere Application Server Information panel is displayed. This panel lets you specify the host name or IP address, and the cell and node names on which to install the agent manager. If you specify a host name, use the fully qualified host name. For example, specify x330f03.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all agent manager services.

Typically the cell and node name are both the same as the host name of the computer. If WebSphere was installed before you started the agent manager installation wizard, you can look up the cell and node name values in the setupCmdLine.bat file in the bin directory of the WebSphere installation root.

You can also specify the ports used by the agent manager:
– Registration (the default is 9511 for server-side SSL)
– Secure communications (the default is 9512 for client authentication, two-way SSL)
– Public communication (the default is 9513)

If you are using WebSphere network deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. Click Next as shown in Figure 3-47 on page 82.

Chapter 3. TotalStorage Productivity Center suite installation 81
  • 97. Figure 3-47 WebSphere Application Server information Figure 3-48 WebSphere Application Server information The Security Certificates panel is displayed in Figure 3-49 on page 83. Specify whether to create new certificates or to use the demonstration certificates. In a typical production82 Managing Disk Subsystems using IBM TotalStorage Productivity Center
environment, create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next to continue.

Figure 3-49 Tivoli Agent Manager security certificates

The security certificate settings panel is displayed. Specify the certificate authority name, security domain, and agent registration password. The agent registration password is the password used to register the agents. You must provide this password when you install the agents. This password also sets the agent manager key store and trust store files.

The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the agent manager. It is the name of the security domain defined by the agent manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple agent managers installed, this value must be different on each agent manager.

The default agent registration password is changeMe; click Next as shown in Figure 3-50 on page 84.

Chapter 3. TotalStorage Productivity Center suite installation 83
  • 99. Figure 3-50 Security certificate settings Preview Prerequisite Software Information panel is displayed. Click Next as shown in Figure 3-51. Figure 3-51 Prerequisite reuse information The Summary Information for Agent Manager panel is displayed. Click Next as shown in Figure 3-52 on page 85.84 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 100. Figure 3-52 Installation summaryThe Installation of Agent Manager Completed panel is displayed, Click Finish as shown inFigure 3-53.Figure 3-53 Completion summaryThe Installation of Agent Manager Successful panel is displayed. Click Next to continue. Chapter 3. TotalStorage Productivity Center suite installation 85
Important: There are three configuration tasks left to do:
– Start the Agent Manager service.
– Set the service to start automatically.
– Add a DNS entry for the Agent Recovery Service with the unqualified host name TivoliAgentRecovery and port 80.

Tip: The database created for the IBM Agent Manager is IBMCDB.

3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base

There are three separate installs:
– Install the IBM TotalStorage Productivity Center for Disk and Replication Base code
– Install the IBM TotalStorage Productivity Center for Disk
– Install the IBM TotalStorage Productivity Center for Replication

IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation as described in “User IDs and security” on page 48:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process level token
– Debug programs

Figure 3-54 IBM TotalStorage Productivity Center installation information

86 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 102. The Package Location for Disk and Replication Manager window (Figure 3-54 on page 86) isdisplayed. Enter the appropriate information and click Next to continue.Figure 3-55 Package location for Productivity Center Disk and ReplicationThe Information for Disk and Replication Manager panel is displayed. Click Next to continueas shown in Figure 3-56.Figure 3-56 Installer informationThe Launch Disk and Replication Manager Base panel is displayed indicating that the Diskand Replication Manager installation wizard will be launched. Click Next to continue asshown in Figure 3-57 on page 88. Chapter 3. TotalStorage Productivity Center suite installation 87
  • 103. Figure 3-57 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-58. Figure 3-58 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory88 Managing Disk Subsystems using IBM TotalStorage Productivity Center
The IBM WebSphere selection panel will be displayed. Click Next to continue as shown in Figure 3-59.

Figure 3-59 WebSphere Application Server information

If the installation user ID privileges were not set, an information panel stating that the privileges need to be set will be displayed. Click Yes to continue.

At this point the installation will terminate. Close the installer, log off and log back on, and restart the installer.

Select the Typical radio button. Click Next to continue as shown in Figure 3-60 on page 90.

Chapter 3. TotalStorage Productivity Center suite installation 89
Figure 3-60 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation

If the IBM Director Support Program and IBM Director Server services are still running, an information panel will be displayed stating that the services will be stopped. Click Next to stop the running services as shown in Figure 3-61.

Figure 3-61 Server checks

90 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 106. You must enter the name and password for the IBM TotalStorage Productivity Center for Diskand Replication Base super user ID in the IBM TotalStorage Productivity Center for Disk andReplication Base installation window. This user name must be defined to the operatingsystem. Click Next to continue as shown in Figure 3-62.Figure 3-62 IBM TotalStorage Productivity Center for Disk and Replication Base Superuser informationYou need to enter the user name and password for the IBM DB2 Universal Database Server,click Next to continue as shown in Figure 3-63 on page 92. Chapter 3. TotalStorage Productivity Center suite installation 91
  • 107. Figure 3-63 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, then you must enter the fully qualified name of the two server key files that were generated previously or that must be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation in the SSL Configuration window. The information you enter will be used later. Generate a self-signed certificate – Select this option if you want the installer to automatically generate these certificate files (used for this installation). Defer the generation of the certificate as a manual post-installation task – Select this option if you want to manually generate these certificate files after the installation, using WebSphere Application Server ikeyman utility. In this case the next step, Generate Self-Signed Certificate, is skipped. Fill in the Key file and Trust file password.92 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 108. Figure 3-64 Key and Trust file optionsIf you chose to have the installation program generate the certificate for you, the GenerateSelf-Signed Certificate window opens, after completing all the fields click Next as shown inFigure 3-65.Figure 3-65 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information Chapter 3. TotalStorage Productivity Center suite installation 93
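If you chose to defer certificate generation as a manual post-installation task, the text above names the WebSphere ikeyman GUI for that job. As a command-line sketch only, the JDK keytool utility can produce a comparable self-signed certificate in a JKS key store; the file name, alias, passwords, and distinguished name below are placeholders, not values from the product documentation, and using keytool in place of ikeyman is an assumption to verify against your WebSphere level.

```shell
# Sketch: generate a self-signed server certificate into a JKS key store.
# All names and passwords here are illustrative placeholders.
KEYFILE=serverKeys.jks

if command -v keytool >/dev/null 2>&1; then
  # -genkeypair creates a key pair plus a self-signed certificate in one step.
  keytool -genkeypair -alias tpcserver -keyalg RSA -validity 365 \
    -keystore "$KEYFILE" -storepass changeMe -keypass changeMe \
    -dname "CN=tpcserver.example.com, O=Example, C=US"
  echo "Generated $KEYFILE"
else
  echo "keytool not found; use the ikeyman utility instead"
fi
```

The key store and trust store passwords you choose here are the ones you must supply on the SSL Configuration panel.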
You are presented with the Create Local Database window. Enter the database name and click Next to continue as shown in Figure 3-66.

Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.

Figure 3-66 IBM TotalStorage Productivity Center for Disk and Replication Base Database name

The Preview window displays a summary of all of the choices that were made during the customizing phase of the installation. Click Install to complete the installation as shown in Figure 3-67 on page 95.

94 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information

The Finish window opens. You can view the log file for any possible error messages. The log file is located in (installed directory)\logs\dmlog.txt. The dmlog.txt file contains a trace of the installation actions. Click Finish to complete the installation.

The post-install tasks information opens in Notepad. You should read the information and complete any required tasks.

3.5.5 IBM TotalStorage Productivity Center for Disk

The next product to be installed is the Productivity Center for Disk as indicated in Figure 3-68 on page 96. Click Next to continue.

Chapter 3. TotalStorage Productivity Center suite installation 95
Figure 3-68 IBM TotalStorage Productivity Center installer information

The Package Location for IBM TotalStorage Productivity Center for Disk panel is displayed. Enter the appropriate information and click Next to continue as shown in Figure 3-69.

Figure 3-69 Productivity Center for Disk install package location

The Launch IBM TotalStorage Productivity Center for Disk panel is displayed indicating that the IBM TotalStorage Productivity Center for Disk installation wizard will be launched (see Figure 3-70 on page 97). Click Next to continue.

96 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 112. Figure 3-70 IBM TotalStorage Productivity Center for Disk installerThe Productivity Center for Disk Installer - Welcome panel is displayed (see Figure 3-71).Click Next to continue.Figure 3-71 IBM TotalStorage Productivity Center for Disk Installer WelcomeThe confirm target directories panel is displayed. Enter the directory path or accept thedefault directory (see Figure 3-72 on page 98) and click Next to continue. Chapter 3. TotalStorage Productivity Center suite installation 97
Figure 3-72 Productivity Center for Disk Installer - Destination Directory

The IBM TotalStorage Productivity Center for Disk Installer - Installation Type panel opens (see Figure 3-73). Select the Typical install radio button and click Next to continue.

Figure 3-73 Productivity Center for Disk Installation Type

The database configuration panel opens. Accept the database name or enter a new database name, then click Next to continue as shown in Figure 3-74 on page 99.

98 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-74 IBM TotalStorage Productivity Center for Disk database name

Review the information on the IBM TotalStorage Productivity Center for Disk preview panel and click Install as shown in Figure 3-75.

Figure 3-75 IBM TotalStorage Productivity Center for Disk installation preview

Chapter 3. TotalStorage Productivity Center suite installation 99
  • 115. The installer will create the required database (see Figure 3-76) and install the product. You will see a progress bar for the Productivity Center for Disk install status. Figure 3-76 Productivity Center for Disk DB2 database creation When the install is complete you will see the panel in Figure 3-77. You should review the post installation tasks. Click Finish to continue. Figure 3-77 Productivity Center for Disk Installer - Finish3.5.6 IBM TotalStorage Productivity Center for Replication The InstallShield will be displayed. Read the information and click Next to continue as shown in Figure 3-78 on page 101.100 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-78 IBM TotalStorage Productivity Center installation overview

The Package Location for Replication Manager panel is displayed. Enter the appropriate information and click Next to continue.

The Welcome window opens with suggestions about what documentation to review prior to installation. Click Next to continue as shown in Figure 3-79, or click Cancel to exit the installation.

Figure 3-79 IBM TotalStorage Productivity Center for Replication installation

Chapter 3. TotalStorage Productivity Center suite installation 101
  • 117. The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-80. Figure 3-80 IBM TotalStorage Productivity Center for Replication installation directory The next panel (see Figure 3-81) asks you to select the install type. Select the Typical radio button and click Next to continue. Figure 3-81 Productivity Center for Replication Install type selection102 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Enter a name for the new DB2 Hardware subcomponent database in the database name field or accept the default. We recommend you accept the default. Click Next to continue as shown in Figure 3-82.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 3-82 IBM TotalStorage Productivity Center for Replication hardware database name

Enter a name for the new Element Catalog subcomponent database in the database name field or accept the default. Click Next to continue as shown in Figure 3-83 on page 104.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Chapter 3. TotalStorage Productivity Center suite installation 103
  • 119. Figure 3-83 IBM TotalStorage Productivity Center for Replication element catalog database name Enter parameters for the new Replication Manager subcomponent database in the database name or accept the default, click Next to continue as shown in Figure 3-84 on page 105. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.104 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-84 IBM TotalStorage Productivity Center for Replication, Replication Manager database name

Select the required database tuning cycle in hours and click Next to continue as shown in Figure 3-85.

Figure 3-85 IBM TotalStorage Productivity Center for Replication database tuning cycle

Chapter 3. TotalStorage Productivity Center suite installation 105
Review the information on the IBM TotalStorage Productivity Center for Replication preview panel and click Install as shown in Figure 3-86.

Figure 3-86 IBM TotalStorage Productivity Center for Replication installation information

The Productivity Center for Replication Installer - Finish panel in Figure 3-87 will be displayed upon successful installation. Read the post-installation tasks. Click Finish to complete the installation.

Figure 3-87 Productivity Center for Replication installation successful

106 Managing Disk Subsystems using IBM TotalStorage Productivity Center
3.5.7 IBM TotalStorage Productivity Center for Fabric

We have included the installation for the Productivity Center for Fabric here. Refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for more information on using the Productivity Center for Fabric with the Productivity Center for Disk.

Prior to installing IBM TotalStorage Productivity Center for Fabric, there are prerequisite tasks that need to be completed. These tasks are described in detail in 3.4, “IBM TotalStorage Productivity Center for Fabric” on page 54. These tasks include:
“The computer name” on page 54
“SNMP install” on page 53
“Database considerations” on page 55
“Windows Terminal Services” on page 55
“User IDs and password considerations” on page 56
“Personal firewall” on page 56
“Tivoli NetView” on page 55
“Security Considerations” on page 57

Installing the manager

After the successful installation of the Productivity Center for Replication, the suite installer will begin the Productivity Center for Fabric install (see Figure 3-88). Click Next to continue.

Figure 3-88 IBM TotalStorage Productivity Center installation information

The InstallShield panel will be displayed. Read the information and click Next to continue. The Package Location for Productivity Center for Fabric Manager panel is displayed (see Figure 3-89 on page 108). Enter the appropriate information and click Next to continue.

Important: The package location at this point is very important. If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file.

Chapter 3. TotalStorage Productivity Center suite installation 107
  • 123. Figure 3-89 Productivity Center for Fabric install package location The language installation option panel is displayed, select the required language and click OK as shown in Figure 3-90. Figure 3-90 IBM TotalStorage Productivity Center for Fabric install wizard The Welcome panel is displayed. Click Next to continue as shown in Figure 3-91 on page 109.108 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 124. Figure 3-91 IBM TotalStorage Productivity Center for Fabric welcome information Select the type of installation you want to perform (see Figure 3-92 on page 110). In this case we are installing the IBM TotalStorage Productivity Center for Fabric code. You can also use the suite installer to perform a remote deployment of the Fabric agent. This operation can be performed only if you have previously installed the common agent on a machine. For example, you might have installed the Data agent on the machines and want to add the Fabric agent to the same machines. You must have installed the Fabric Manager before you can deploy the Fabric agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time. You can only select one option. Click Next to continue. Chapter 3. TotalStorage Productivity Center suite installation 109
  • 125. Figure 3-92 Fabric Manager installation type selection The Confirm Target Directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-93. Figure 3-93 IBM TotalStorage Productivity Center for Fabric installation directory The Port Number panel is displayed. It asks for a range of eight port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port number you specify is considered the primary port number, and it is the only one you need to enter. The primary port number and the next seven numbers will be reserved for use by IBM TotalStorage Productivity 110 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 126. Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric will use port numbers 9550–9557. Ensure that the port numbers you use are not used by other applications at the same time. To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt. We recommend you use the first command. – netstat -a – netstat -an The port numbers in use on the system are listed in the Local Address column of the output. This field has the format host:port. Enter the primary port number as shown in Figure 3-94 and click Next to continue. Figure 3-94 IBM TotalStorage Productivity Center for Fabric port number The Database choice panel is displayed. You can select DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server. DB2 is the recommended installation option. Click Next to continue as shown in Figure 3-95 on page 112. Chapter 3. TotalStorage Productivity Center suite installation 111
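The reserved-range rule described above (the primary port plus the next seven) can be checked before you start the installer. The following Python sketch is our own illustration, not part of the product: it tries to bind each TCP port in the block and reports whether the whole range is free, much like scanning the output of netstat -a by hand.

```python
import socket

def port_block_free(primary, count=8, host="127.0.0.1"):
    """Return True if `count` consecutive TCP ports starting at
    `primary` can all be bound locally, i.e. none is in use."""
    for port in range(primary, primary + count):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((host, port))
            except OSError:
                return False  # this port is already taken
    return True

# The installer reserves primary..primary+7, e.g. 9550-9557.
print(port_block_free(9550))
```

If the function reports False, pick a different primary port before continuing with the wizard.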
  • 127. Figure 3-95 IBM TotalStorage Productivity Center for Fabric database selection type The next panel allows you to select the WebSphere Application Server to use in the install. In this installation we used the Embedded WebSphere Application Server. Click Next to continue as shown in Figure 3-96. Figure 3-96 Productivity Center for Fabric WebSphere Application Server type selection The Single or Multiple User ID and Password panel (using DB2) is displayed (see Figure 3-97 on page 113). If you selected DB2 as your database, you will see this panel. It allows you to use the DB2 administrative user ID and password for the DB2 user and WebSphere user. You can also use the DB2 administrative password for the host authentication and NetView password. 112 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 128. For example, if you selected all the choices in the panel, you will use the DB2 administrative user ID and password for the DB2 and WebSphere user ID and password. You will also use the DB2 administrative password for the host authentication and NetView password. If you select a choice, you will not be prompted for the user ID or password for each item you select. Note: If you selected Cloudscape as your database, this panel is not displayed. Click Next to continue. Figure 3-97 IBM TotalStorage Productivity Center for Fabric user and password options The User ID and Password panel (using DB2) is displayed. If you selected DB2 as your database, you will see this panel. It allows you to use the DB2 administrative user ID and password for DB2. Enter the required user ID and password, then click Next to continue as shown in Figure 3-98 on page 114. Chapter 3. TotalStorage Productivity Center suite installation 113
  • 129. Figure 3-98 IBM TotalStorage Productivity Center for Fabric database user information Enter a name for the new database or accept the default, then click Next to continue as shown in Figure 3-99. Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications. Figure 3-99 IBM TotalStorage Productivity Center for Fabric database name Enter the parameters for the database drive, then click Next to continue as shown in Figure 3-100 on page 115. 114 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 130. Figure 3-100 IBM TotalStorage Productivity Center for Fabric database drive information The Agent Manager Information panel is displayed. You must provide the following information: – Agent manager name or IP address. This is the name or IP address of your agent manager. – Agent manager registration port. This is the port number of your agent manager. – Agent registration password (twice). This is the password used to register the common agent with the agent manager, as shown in Figure 3-50 on page 84. If the password was not set and the default was accepted, the password is changeMe. – Resource manager registration user ID. This is the user ID used to register the resource manager with the agent manager (default is manager). – Resource manager registration password (twice). This is the password used to register the resource manager with the agent manager (default is password). Fill in the information and click Next to continue as shown in Figure 3-101 on page 116. Chapter 3. TotalStorage Productivity Center suite installation 115
  • 131. Figure 3-101 IBM TotalStorage Productivity Center for Fabric agent manager information The IBM TotalStorage Productivity Center for Fabric Install panel is displayed. This panel provides information about the location and size of the Fabric Manager. Click Next to continue as shown in Figure 3-102. Figure 3-102 IBM TotalStorage Productivity Center for Fabric installation information The Status panel is displayed. The installation can take about 15–20 minutes to complete. When the installation has completed, the Successfully Installed panel is displayed. Click Next to continue as shown in Figure 3-103 on page 117. 116 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 132. Figure 3-103 IBM TotalStorage Productivity Center for Fabric installation status The install wizard Complete Installation panel is displayed. Do not restart your computer; click No, I will restart my computer later, then click Finish to complete the installation as shown in Figure 3-104. Figure 3-104 IBM TotalStorage Productivity Center for Fabric restart options The Install Status panel will be displayed, indicating that the Productivity Center for Fabric installation was successful. Click Next to continue as shown in Figure 3-105 on page 118. Chapter 3. TotalStorage Productivity Center suite installation 117
  • 133. Figure 3-105 IBM TotalStorage Productivity Center installation information118 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 134. 4 Chapter 4. CIMOM installation and configuration This chapter provides a step-by-step guide to configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) that are required to use the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication.© Copyright IBM Corp. 2004, 2005. All rights reserved. 119
  • 135. 4.1 Introduction After you have completed the installation of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, you will need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents. Note: For the remainder of this chapter, we refer to the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication simply as TotalStorage Productivity Center. The TotalStorage Productivity Center for Disk uses SLP as the method for CIM clients to locate managed objects. The CIM clients may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter we describe the steps for: Planning considerations for Service Location Protocol (SLP) SLP configuration recommendation General performance guidelines Planning considerations for CIMOM Installing and configuring the CIM agent for Enterprise Storage Server Verifying the connection to the ESS Setting up the Service Location Protocol Directory Agent (SLP DA) Installing and configuring the CIM agent for the DS 4000 family Configuring the CIM agent for SAN Volume Controller 4.2 Planning considerations for Service Location Protocol The Service Location Protocol (SLP) has three major components: the Service Agent (SA), the User Agent (UA), and the Directory Agent (DA). The SA and UA are required components; the DA is optional. You will have to decide whether to use an SLP DA in your environment, based on the considerations described below. 4.2.1 Considerations for using SLP DAs One reason to use a DA is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. 
By deploying one or more DAs, UAs send unicast service requests to the DAs, and SAs register with the DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. 120 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 136. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UA's scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You may consider using DAs in your enterprise if any of the following conditions are true: Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop. UA clients experience long delays or time-outs during multicast service requests. You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. Your network does not have multicast enabled and consists of multiple subnets that must share services. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request. 4.2.2 SLP configuration recommendation Some configuration recommendations are provided for enabling TotalStorage Productivity Center for Disk to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration and SLP directory agent configuration. Router configuration Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those that are associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation. 
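The DA deployment conditions listed in 4.2.1 amount to a simple checklist, which the following Python sketch encodes. The function and parameter names are ours, purely for illustration; the 1% traffic and 60-SA thresholds come directly from the text, and any one condition being true is enough to justify configuring a DA.

```python
def should_deploy_slp_da(multicast_traffic_pct=0.0,
                         ua_requests_time_out=False,
                         want_central_monitoring=False,
                         multicast_disabled_across_subnets=False,
                         service_agent_count=0):
    """Apply the rules of thumb from the text: deploy an SLP
    Directory Agent if any single condition holds."""
    return (multicast_traffic_pct > 1.0          # >1% of bandwidth is SLP multicast
            or ua_requests_time_out              # UAs see long delays or time-outs
            or want_central_monitoring           # centralized SLP monitoring wanted
            or multicast_disabled_across_subnets # subnets must share services anyway
            or service_agent_count > 60)         # more than 60 SAs per request

print(should_deploy_slp_da(service_agent_count=75))  # → True
```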
Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets it passes between subnets, which can result in the need to set the Multicast TTL higher to discover systems on the other subnets of the router. The Multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. - Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations). Chapter 4. CIMOM installation and configuration 121
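To experiment with the multicast reachability issues described in the attention box, you can prepare a probe socket whose TTL controls how many router hops an SLP multicast request may cross. This Python sketch is our own illustration, not a product tool; it uses the SLP multicast address and port from the text, and only configures the socket — building a real SLP service request message is out of scope here.

```python
import socket

SLP_MCAST_ADDR = "239.255.255.253"  # SLP multicast address (from the text)
SLP_PORT = 427                      # SLP port (from the text)

def make_slp_probe_socket(ttl=1):
    """Prepare a UDP socket for an SLP multicast probe. A larger TTL
    lets the discovery packet cross that many router hops, widening
    the set of subnets that can be discovered."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

probe = make_slp_probe_socket(ttl=4)  # reach subnets up to 4 hops away
# probe.sendto(slp_request_bytes, (SLP_MCAST_ADDR, SLP_PORT)) would send
# the actual service request once an SLP message has been built.
probe.close()
```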
  • 137. SLP directory agent configuration Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center for Disk. One DA is sufficient for each of such subnets. Each of these DAs can discover all services within its own subnet, but no other services outside its own subnet. To allow TotalStorage Productivity Center for Disk to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center for Disk Discovery Preference panel as discussed in “Configuring IBM Director for SLP discovery” on page 152. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center for Disk is installed. Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA. Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.4.3 General performance guidelines Here are some general performance considerations for configuring the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication environment. Do not overpopulate the SLP discovery panel with SLP agent hosts. 
Remember that TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that will receive information about SLP Service Agents and Directory Agents (DA) that reside in the same subnet as the TotalStorage Productivity Center for Disk installation. You should have no more than one DA per subnet. Misconfiguring the IBM Director discovery preferences may impact performance on auto discovery or on device presence checking. It may also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available. It should be considered mandatory to run the ESS CLI and ESS CIM agent software on another host of comparable size to the main TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication server. Attempting to run a full TotalStorage Productivity Center implementation (Device Manager, Performance Manager, Replication Manager, DB2, IBM Director and the WebSphere Application Server) on the same host as the ESS CIM agent will result in dramatically increased wait times for data retrieval. Based on our ITSO lab experience, we suggest having separate servers for TotalStorage Productivity Center for Disk along with TotalStorage Productivity Center for Replication, the ESS CIMOM, and the DS 4000 family CIMOM. Otherwise, you may have port conflicts, increased wait times for data retrieval, and resource contention. 122 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 138. 4.4 Planning considerations for CIMOM The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. Figure 4-1 on page 123 shows an overview of the CIM agent. Figure 4-1 CIM Agent Overview You may plan to install the CIM agent code on the same server that also has the device management interface, or you may install it on a separate server. Attention: At this time only a few devices come with an integrated CIM Agent; most devices need an external CIMOM so that CIM-enabled management applications (CIM clients) can communicate with the device. To ease installation, IBM provides ICAT (short for Integrated Configuration Agent Technology), a bundle that mainly includes the CIMOM, the device provider, and an SLP SA. 4.4.1 CIMOM configuration recommendations The following recommendations are based on our experience in the ITSO lab environment: The CIMOM agent code which you are planning to use must be supported by the installed version of TotalStorage Productivity Center for Disk. You may refer to the link below for the latest updates: http://www-1.ibm.com/servers/storage/support/software/tpcdisk/ You must have a CIMOM-supported firmware level on the storage devices. If you have an incorrect version of firmware, you may not be able to discover and manage the storage devices. The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. Hence it is recommended to have a dedicated server for the CIMOM agent, although you may configure the same CIMOM agent for multiple devices of the same type. You may also plan to locate this server within the same data center where the storage devices are located. This is in consideration of firewall port requirements. 
Typically, it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you need to open firewall ports only for TotalStorage Productivity Center for Disk communication with the CIMOM. Chapter 4. CIMOM installation and configuration 123
  • 139. Co-location of CIM agent instances of differing types on the same server is not recommended because of resource contention. It is strongly recommended to have separate, dedicated servers for the CIMOM agents and the TotalStorage Productivity Center server. This is due to resource contention, TCP/IP port requirements, and system services co-existence. 4.5 Installing CIM agent for ESS Before starting Multiple Device Manager discovery, you must first configure the Common Information Model Object Manager (CIMOM) for ESS. The ESS CIM Agent package is made up of the following parts (see Figure 4-2). Figure 4-2 ESS CIM Agent Package This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system. 4.5.1 ESS CLI install The following installation and configuration tasks are listed in the order in which they should be performed: Before you install the ESS CIM Agent you must install the IBM TotalStorage Enterprise Storage System Command Line Interface (ESS CLI). The ESS CIM Agent installation program checks your system for the existence of the ESS CLI and reports that it cannot continue if the ESS CLI is not installed, as shown in Figure 4-3 on page 125. 124 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 140. Figure 4-3 ESS CLI install requirement for ESS CIM Agent Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software. You must have a minimum ESS CLI level of 2.4.0.236. Perform the following steps to install the ESS CLI for Windows: Insert the CD for the ESS CLI in the CD-ROM drive, run the setup, and follow the instructions as shown in Figure 4-4 on page 126 through Figure 4-7 on page 127. Note: The ESS CLI installation wizard detects if you have an earlier level of the ESS CLI software installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI. Chapter 4. CIMOM installation and configuration 125
  • 141. Figure 4-4 InstallShield Wizard for ESS CLI Figure 4-5 Choose target system panel126 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 142. Figure 4-6 ESS CLI Setup Status panel Figure 4-7 ESS CLI installation complete panel Reboot your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that will not be in effect for the ESS CIM Agent, which runs as a service, until you reboot your system. Chapter 4. CIMOM installation and configuration 127
  • 143. Verify that the ESS CLI is installed: – Click Start –> Settings –> Control Panel. – Double-click the Add/Remove Programs icon. – Verify that there is an IBM ESS CLI entry. Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command: esscli -u itso -p itso13sj -s 9.43.226.43 list server Where: – 9.43.226.43 represents the IP address of the Enterprise Storage Server – itso represents the Enterprise Storage Server Specialist user name – itso13sj represents the Enterprise Storage Server Specialist password for the user name Figure 4-8 shows the response from the esscli command. Figure 4-8 ESS CLI verification 4.5.2 ESS CIM Agent install To install the ESS CIM Agent on your Windows system, perform the following steps: Log on to your system as the local administrator. Insert the CIM Agent for ESS CD into the CD-ROM drive. The Install Wizard launchpad should start automatically if you have autorun mode set on your system; you should see a launchpad similar to Figure 4-9 on page 129. You may review the Readme file from the launchpad menu. Subsequently, you can click Installation Wizard. The Installation Wizard starts executing the setup.exe program and shows the Welcome panel in Figure 4-10 on page 130. Note: The ESS CIM Agent program should start within 15–30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps: – Use a Command Prompt or Windows Explorer to change to the Windows directory on the CD. – If you are using a Command Prompt window, run setup.exe. – If you are using Windows Explorer, double-click the setup.exe file. 128 Managing Disk Subsystems using IBM TotalStorage Productivity Center
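The esscli verification step described earlier on this page can also be scripted when you have several Enterprise Storage Servers to check. The following Python sketch is our own wrapper, not an IBM-supplied tool; it assumes esscli is on the PATH and simply runs the documented command, reporting failure cleanly if the CLI is missing.

```python
import subprocess

def verify_ess_cli(ip, user, password):
    """Run the documented verification command:
    esscli -u <user> -p <password> -s <ip> list server
    Returns (ok, output); ok is True only if esscli exits with 0."""
    cmd = ["esscli", "-u", user, "-p", password, "-s", ip, "list", "server"]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
        return result.returncode == 0, result.stdout
    except FileNotFoundError:
        # esscli is not installed or not on the PATH
        return False, "esscli not found"

# Values from the ITSO example in the text:
ok, output = verify_ess_cli("9.43.226.43", "itso", "itso13sj")
```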
  • 144. Note: If you are using CIMOM code from the IBM download Web site and not from the distribution CD, you must ensure that you use a short Windows directory path name. Executing setup.exe from a longer path name may fail. An example of a short path name is C:\CIMOM\setup.exe. Figure 4-9 ESS CIMOM launchpad The Welcome window opens, suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 4-10 on page 130). Chapter 4. CIMOM installation and configuration 129
  • 145. Figure 4-10 ESS CIM Agent welcome window The License Agreement window opens. Read the license agreement information. Select “I accept the terms of the license agreement”, then click Next to accept the license agreement (see Figure 4-11 on page 131).130 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 146. Figure 4-11 ESS CIM Agent license agreement The Destination Directory window opens. Accept the default directory and click Next (see Figure 4-12 on page 132). Chapter 4. CIMOM installation and configuration 131
  • 147. Figure 4-12 ESS CIM Agent destination directory panel The Updating CIMOM Port window opens (see Figure 4-13 on page 133). Click Next to accept the default port if it is available in your environment. For our ITSO setup we used the default port 5989. Note: If the default port is the same as another port already in use, modify the default port and click Next. Use the following command to check which ports are in use: netstat -a 132 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 148. Figure 4-13 ESS CIM Agent port window The Installation Confirmation window opens (see Figure 4-14 on page 134). Click Install to confirm the installation location and file size. Chapter 4. CIMOM installation and configuration 133
  • 149. Figure 4-14 ESS CIM Agent installation confir