Front cover

IBM TotalStorage Productivity Center V2.3: Getting Started

Effectively use the IBM TotalStorage Productivity Center
Learn to install and customize the IBM TotalStorage Productivity Center
Understand the IBM TotalStorage Open Software Family

Mary Lovelace
Larry Mc Gimsey
Ivo Gomilsek
Mary Anne Marquez

ibm.com/redbooks
International Technical Support Organization

IBM TotalStorage Productivity Center V2.3: Getting Started

December 2005

SG24-6490-01
Note: Before using this information and the product it supports, read the information in “Notices” on page xiii.

Second Edition (December 2005)

This edition applies to Version 2, Release 3 of IBM TotalStorage Productivity Center (product numbers 5608-UC1, 5608-UC3, 5608-UC4, 5608-UC5).

© Copyright International Business Machines Corporation 2005. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
Trademarks

Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome

Part 1. IBM TotalStorage Productivity Center foundation

Chapter 1. IBM TotalStorage Productivity Center overview
  1.1 Introduction to IBM TotalStorage Productivity Center
    1.1.1 Standards organizations and standards
  1.2 IBM TotalStorage Open Software family
  1.3 IBM TotalStorage Productivity Center
    1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
    1.3.2 Fabric subject matter expert: Productivity Center for Fabric
    1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
    1.3.4 Replication subject matter expert: Productivity Center for Replication
  1.4 IBM TotalStorage Productivity Center
    1.4.1 Productivity Center for Disk and Productivity Center for Replication
    1.4.2 Event services
  1.5 Taking steps toward an On Demand environment

Chapter 2. Key concepts
  2.1 IBM TotalStorage Productivity Center architecture
    2.1.1 Architectural overview diagram
    2.1.2 Architectural layers
    2.1.3 Relationships between the managers and components
    2.1.4 Collecting data
  2.2 Standards used in IBM TotalStorage Productivity Center
    2.2.1 ANSI standards
    2.2.2 Web-Based Enterprise Management
    2.2.3 Storage Networking Industry Association
    2.2.4 Simple Network Management Protocol
    2.2.5 Fibre Alliance MIB
  2.3 Service Location Protocol (SLP) overview
    2.3.1 SLP architecture
    2.3.2 Common Information Model
  2.4 Component interaction
    2.4.1 CIMOM discovery with SLP
    2.4.2 How CIM Agent works
  2.5 Tivoli Common Agent Services
    2.5.1 Tivoli Agent Manager
    2.5.2 Common Agent

Part 2. Installing the IBM TotalStorage Productivity Center base product suite

Chapter 3. Installation planning and considerations
  3.1 Configuration
  3.2 Installation prerequisites
    3.2.1 TCP/IP ports used by TotalStorage Productivity Center
    3.2.2 Default databases created during the installation
  3.3 Our lab setup environment
  3.4 Pre-installation check list
  3.5 User IDs and security
    3.5.1 User IDs
    3.5.2 Increasing user security
    3.5.3 Certificates and key files
    3.5.4 Services and service accounts
  3.6 Starting and stopping the managers
  3.7 Windows Management Instrumentation
  3.8 World Wide Web Publishing
  3.9 Uninstalling Internet Information Services
  3.10 Installing SNMP
  3.11 IBM TotalStorage Productivity Center for Fabric
    3.11.1 The computer name
    3.11.2 Database considerations
    3.11.3 Windows Terminal Services
    3.11.4 Tivoli NetView
    3.11.5 Personal firewall
    3.11.6 Changing the HOSTS file
  3.12 IBM TotalStorage Productivity Center for Data
    3.12.1 Server recommendations
    3.12.2 Supported subsystems and databases
    3.12.3 Security considerations
    3.12.4 Creating the DB2 database

Chapter 4. Installing the IBM TotalStorage Productivity Center suite
  4.1 Installing the IBM TotalStorage Productivity Center
    4.1.1 Considerations
  4.2 Prerequisite Software Installation
    4.2.1 Best practices
    4.2.2 Installing prerequisite software
  4.3 Suite installation
    4.3.1 Best practices
    4.3.2 Installing the TotalStorage Productivity Center suite
    4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base
    4.3.4 IBM TotalStorage Productivity Center for Disk
    4.3.5 IBM TotalStorage Productivity Center for Replication
    4.3.6 IBM TotalStorage Productivity Center for Fabric
    4.3.7 IBM TotalStorage Productivity Center for Data

Chapter 5. CIMOM install and configuration
  5.1 Introduction
  5.2 Planning considerations for Service Location Protocol
    5.2.1 Considerations for using SLP DAs
    5.2.2 SLP configuration recommendation
  5.3 General performance guidelines
  5.4 Planning considerations for CIMOM
    5.4.1 CIMOM configuration recommendations
  5.5 Installing CIM agent for ESS
    5.5.1 ESS CLI Install
    5.5.2 DS CIM Agent install
    5.5.3 Post Installation tasks
  5.6 Configuring the DS CIM Agent for Windows
    5.6.1 Registering DS Devices
    5.6.2 Registering ESS Devices
    5.6.3 Register ESS server for Copy services
    5.6.4 Restart the CIMOM
    5.6.5 CIMOM user authentication
  5.7 Verifying connection to the ESS
    5.7.1 Problem determination
    5.7.2 Confirming the ESS CIMOM is available
    5.7.3 Setting up the Service Location Protocol Directory Agent
    5.7.4 Configuring TotalStorage Productivity Center for SLP discovery
    5.7.5 Registering the DS CIM Agent to SLP
    5.7.6 Verifying and managing CIMOM’s availability
  5.8 Installing CIM agent for IBM DS4000 family
    5.8.1 Verifying and Managing CIMOM availability
  5.9 Configuring CIMOM for SAN Volume Controller
    5.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
    5.9.2 Registering the SAN Volume Controller host in SLP
  5.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
    5.10.1 SLP registration and slptool
    5.10.2 Persistency of SLP registration
    5.10.3 Configuring slp.reg file

Part 3. Configuring the IBM TotalStorage Productivity Center

Chapter 6. Configuring IBM TotalStorage Productivity Center for Disk
  6.1 Productivity Center for Disk Discovery summary
  6.2 SLP DA definition
    6.2.1 Verifying and managing CIMOMs availability
  6.3 Disk and Replication Manager remote GUI
    6.3.1 Installing Remote Console for Performance Manager function
    6.3.2 Launching Remote Console for TotalStorage Productivity Center

Chapter 7. Configuring TotalStorage Productivity Center for Replication
  7.1 Installing a remote GUI and CLI

Chapter 8. Configuring IBM TotalStorage Productivity Center for Data
  8.1 Configuring the CIM Agents
    8.1.1 CIM and SLP interfaces within Data Manager
    8.1.2 Configuring CIM Agents
    8.1.3 Setting up a disk alias
  8.2 Setting up the Web GUI
    8.2.1 Using IBM HTTP Server
    8.2.2 Using Internet Information Server
    8.2.3 Configuring the URL in Fabric Manager
  8.3 Installing the Data Manager remote console
  8.4 Configuring Data Manager for Databases
  8.5 Alert Disposition

Chapter 9. Configuring IBM TotalStorage Productivity Center for Fabric
  9.1 TotalStorage Productivity Center component interaction
    9.1.1 IBM TotalStorage Productivity Center for Disk and Replication Base
    9.1.2 SNMP
    9.1.3 Tivoli Provisioning Manager
  9.2 Post-installation procedures
    9.2.1 Installing Productivity Center for Fabric – Agent
    9.2.2 Installing Productivity Center for Fabric – Remote Console
  9.3 Configuring IBM TotalStorage Productivity Center for Fabric
    9.3.1 Configuring SNMP
    9.3.2 Configuring the outband agents
    9.3.3 Checking inband agents
    9.3.4 Performing an initial poll and setting up the poll interval

Chapter 10. Deployment of agents
  10.1 Installing the agents
  10.2 Data Agent installation using the installer
  10.3 Deploying the agent

Part 4. Using the IBM TotalStorage Productivity Center

Chapter 11. Using TotalStorage Productivity Center for Disk
  11.1 Productivity Center common base: Introduction
  11.2 Launching TotalStorage Productivity Center
  11.3 Exploiting Productivity Center common base
    11.3.1 Launch Device Manager
  11.4 Performing volume inventory
  11.5 Changing the display name of a storage device
  11.6 Working with ESS
    11.6.1 ESS Volume inventory
    11.6.2 Assigning and unassigning ESS Volumes
    11.6.3 Creating new ESS volumes
    11.6.4 Launch device manager for an ESS device
  11.7 Working with DS8000
    11.7.1 DS8000 Volume inventory
    11.7.2 Assigning and unassigning DS8000 Volumes
    11.7.3 Creating new DS8000 volumes
    11.7.4 Launch device manager for a DS8000 device
  11.8 Working with SAN Volume Controller
    11.8.1 Working with SAN Volume Controller MDisks
    11.8.2 Creating new MDisks on supported storage devices
    11.8.3 Create and view SAN Volume Controller VDisks
  11.9 Working with DS4000 family or FAStT storage
    11.9.1 Working with DS4000 or FAStT volumes
    11.9.2 Creating DS4000 or FAStT volumes
    11.9.3 Assigning hosts to DS4000 and FAStT Volumes
    11.9.4 Unassigning hosts from DS4000 or FAStT volumes
    11.9.5 Volume properties
  11.10 Event Action Plan Builder
    11.10.1 Applying an Event Action Plan to a managed system or group
    11.10.2 Exporting and importing Event Action Plans

Chapter 12. Using TotalStorage Productivity Center Performance Manager
  12.1 Exploiting Performance Manager
    12.1.1 Performance Manager GUI
    12.1.2 Performance Manager data collection
    12.1.3 Using IBM Director Scheduler function
    12.1.4 Reviewing data collection task status
    12.1.5 Managing Performance Manager Database
    12.1.6 Performance Manager gauges
    12.1.7 ESS thresholds
    12.1.8 Data collection for SAN Volume Controller
    12.1.9 SAN Volume Controller thresholds
    12.1.10 Data collection for the DS6000 and DS8000
    12.1.11 DS6000 and DS8000 thresholds
  12.2 Exploiting gauges
    12.2.1 Before you begin
    12.2.2 Creating gauges: an example
    12.2.3 Zooming in on the specific time period
    12.2.4 Modify gauge to view array level metrics
    12.2.5 Modify gauge to review multiple metrics in same chart
  12.3 Performance Manager command line interface
    12.3.1 Performance Manager CLI commands
    12.3.2 Sample command outputs
  12.4 Volume Performance Advisor (VPA)
    12.4.1 VPA introduction
    12.4.2 The provisioning challenge
    12.4.3 Workload characterization and workload profiles
    12.4.4 Workload profile values
    12.4.5 How the Volume Performance Advisor makes decisions
    12.4.6 Enabling the Trace Logging for Director GUI Interface
    12.4.7 Getting started
    12.4.8 Creating and managing workload profiles

Chapter 13. Using TotalStorage Productivity Center for Data
  13.1 TotalStorage Productivity Center for Data overview
    13.1.1 Business purpose of TotalStorage Productivity Center for Data
    13.1.2 Components of TotalStorage Productivity Center for Data
    13.1.3 Security considerations
  13.2 Functions of TotalStorage Productivity Center for Data
    13.2.1 Basic menu displays
    13.2.2 Discover and monitor Agents, disks, filesystems, and databases
    13.2.3 Reporting
    13.2.4 Alerts
    13.2.5 Chargeback: Charging for storage usage
  13.3 OS Monitoring
    13.3.1 Navigation tree
    13.3.2 Groups
    13.3.3 Discovery
    13.3.4 Pings
    13.3.5 Probes
    13.3.6 Profiles
    13.3.7 Scans
  13.4 OS Alerts
    13.4.1 Alerting navigation tree
    13.4.2 Computer Alerts
    13.4.3 Filesystem Alerts
    13.4.4 Directory Alerts
    13.4.5 Alert logs
  13.5 Policy management
    13.5.1 Quotas
    13.5.2 Network Appliance Quotas
    13.5.3 Constraints
    13.5.4 Filesystem extension and LUN provisioning
    13.5.5 Scheduled Actions
  13.6 Database monitoring
    13.6.1 Groups
    13.6.2 Probes
    13.6.3 Profiles
    13.6.4 Scans
  13.7 Database Alerts
    13.7.1 Instance Alerts
    13.7.2 Database-Tablespace Alerts
    13.7.3 Table Alerts
    13.7.4 Alert log
  13.8 Databases policy management
    13.8.1 Network Quotas
    13.8.2 Instance Quota
    13.8.3 Database Quota
  13.9 Database administration samples
    13.9.1 Database up
    13.9.2 Database utilization
    13.9.3 Need for reorganization
  13.10 Data Manager reporting capabilities
    13.10.1 Major reporting categories
  13.11 Using the standard reporting functions
    13.11.1 Asset Reporting
    13.11.2 Storage Subsystems Reporting
    13.11.3 Availability Reporting
    13.11.4 Capacity Reporting
    13.11.5 Usage Reporting
    13.11.6 Usage Violation Reporting
    13.11.7 Backup Reporting
  13.12 TotalStorage Productivity Center for Data ESS Reporting
    13.12.1 ESS Reporting
  13.13 IBM Tivoli Storage Resource Manager top 10 reports
    13.13.1 ESS used and free storage
    13.13.2 ESS attached hosts report
    13.13.3 Computer Uptime Reporting
    13.13.4 Growth in storage used and number of files
    13.13.5 Incremental backup trends
    13.13.6 Database reports against DBMS size
    13.13.7 Database instance storage report
    13.13.8 Database reports size by instance and by computer
    13.13.9 Locate the LUN on which a database is allocated
    13.13.10 Finding important files on your systems
  13.14 Creating customized reports
    13.14.1 System Reports
    13.14.2 Reports owned by a specific username
    13.14.3 Batch Reports
  13.15 Setting up a schedule for daily reports
  13.16 Setting up a reports Web site
  13.17 Charging for storage usage

Chapter 14. Using TotalStorage Productivity Center for Fabric
  14.1 NetView navigation overview
    14.1.1 NetView interface
    14.1.2 Maps and submaps
    14.1.3 NetView window structure
    14.1.4 NetView Explorer
    14.1.5 NetView Navigation Tree
    14.1.6 Object selection and NetView properties
    14.1.7 Object symbols
    14.1.8 Object status
    14.1.9 Status propagation
    14.1.10 NetView and Productivity Center for Fabric integration
  14.2 Walk-through of Productivity Center for Fabric
    14.2.1 Device Centric view
    14.2.2 Host Centric view
    14.2.3 SAN view
    14.2.4 Launching element managers
    14.2.5 Explore view
  14.3 Topology views
    14.3.1 SAN view
    14.3.2 Device Centric View
    14.3.3 Host Centric View
    14.3.4 iSCSI discovery
    14.3.5 MDS 9000 discovery
  14.4 SAN menu options
    14.4.1 SAN Properties
  14.5 Application launch
    14.5.1 Native support
    14.5.2 NetView support for Web interfaces
    14.5.3 Launching TotalStorage Productivity Center for Data
    14.5.4 Other menu options
  14.6 Status cycles
  14.7 Practical cases
    14.7.1 Cisco MDS 9000 discovery
    14.7.2 Removing a connection on a device running an inband agent
    14.7.3 Removing a connection on a device not running an agent
    14.7.4 Powering off a switch
    14.7.5 Running discovery on a RNID-compatible device
    14.7.6 Outband agents only
    14.7.7 Inband agents only
    14.7.8 Disk devices discovery
    14.7.9 Well placed agent strategy
  14.8 NetView
    14.8.1 Reporting overview
    14.8.2 SNMP and MIBs
  14.9 NetView setup and configuration
    14.9.1 Advanced Menu
    14.9.2 Copy Brocade MIBs
    14.9.3 Loading MIBs
  14.10 Historical reporting
    14.10.1 Creating a Data Collection
    14.10.2 Database maintenance
    14.10.3 Troubleshooting the Data Collection daemon
    14.10.4 NetView Graph Utility
  14.11 Real-time reporting
    14.11.1 MIB Tool Builder
    14.11.2 Displaying real-time data
    14.11.3 SmartSets
    14.11.4 SmartSets and Data Collections
    14.11.5 Seed file
  14.12 Productivity Center for Fabric and iSCSI
  14.13 What is iSCSI?
  14.14 How does iSCSI work?
  14.15 Productivity Center for Fabric and iSCSI
    14.15.1 Functional description
    14.15.2 iSCSI discovery
  14.16 ED/FI - SAN Error Predictor
    14.16.1 Overview
    14.16.2 Error processing
    14.16.3 Configuration for ED/FI - SAN Error Predictor
    14.16.4 Using ED/FI
    14.16.5 Searching for the faulted device on the topology map
    14.16.6 Removing notifications

Chapter 15. Using TotalStorage Productivity Center for Replication
  15.1 TotalStorage Productivity Center for Replication overview
    15.1.1 Supported Copy Services
    15.1.2 Replication session
    15.1.3 Storage group
    15.1.4 Storage pools
    15.1.5 Relationship of group, pool, and session
    15.1.6 Copyset and sequence concepts
  15.2 Exploiting Productivity Center for replication
    15.2.1 Before you start
    15.2.2 Adding a replication device
    15.2.3 Creating a storage group
    15.2.4 Modifying a storage group
    15.2.5 Viewing storage group properties
    15.2.6 Deleting a storage group
    15.2.7 Creating a storage pool
    15.2.8 Modifying a storage pool
    15.2.9 Deleting a storage pool
    15.2.10 Viewing storage pool properties
    15.2.11 Creating storage paths
    15.2.12 Point-in-Time Copy - creating a session
    15.2.13 Creating a session - verifying source-target relationship
    15.2.14 Continuous Synchronous Remote Copy - creating a session
    15.2.15 Managing a Point-in-Time copy
    15.2.16 Managing a Continuous Synchronous Remote Copy
  15.3 Using Command Line Interface (CLI) for replication
    15.3.1 Session details
    15.3.2 Starting a session
    15.3.3 Suspending a session
    15.3.4 Terminating a session

Chapter 16. Hints, tips, and good-to-knows
  16.1 SLP configuration recommendation
    16.1.1 SLP registration and slptool
  16.2 Tivoli Common Agent Services
    16.2.1 Locations of configured user IDs
    16.2.2 Resource Manager registration
    16.2.3 Tivoli Agent Manager status
    16.2.4 Registered Fabric Agents
    16.2.5 Registered Data Agents
  16.3 Launchpad
    16.3.1 Launchpad installation
    16.3.2 Launchpad customization
  16.4 Remote consoles
  16.5 Verifying whether a port is in use
  16.6 Manually removing old CIMOM entries
  16.7 Collecting logs for support
    16.7.1 IBM Director logfiles
    16.7.2 Using Event Action Plans
    16.7.3 Following Discovery using Windows raswatch utility
    16.7.4 DB2 database checking
    16.7.5 IBM WebSphere tracing and logfile browsing
  16.8 SLP and CIM Agent problem determination
    16.8.1 Enabling SLP tracing
    16.8.2 Device registration
  16.9 Replication Manager problem determination
    16.9.1 Diagnosing an indications problem
    16.9.2 Restarting the replication environment
  16.10 Enabling trace logging
    16.10.1 Enabling WebSphere Application Server trace
  16.11 ESS user authentication problem
  16.12 SVC Data collection task failure

Chapter 17. Database management and reporting
  17.1 DB2 database overview
  17.2 Database purging in TotalStorage Productivity Center
    17.2.1 Performance manager database panel
  17.3 IBM DB2 tool suite
    17.3.1 Command Line Tools
    17.3.2 Development Tools
    17.3.3 General Administration Tools
    17.3.4 Monitoring Tools
  17.4 DB2 Command Center overview
    17.4.1 Command Center navigation example
  17.5 DB2 Command Center custom report example
    17.5.1 Extracting LUN data report
    17.5.2 Command Center report
  17.6 Exporting collected performance data to a file
    17.6.1 Control Center
    17.6.2 Data extraction tools, tips and reporting methods
  17.7 Database backup and recovery overview
  17.8 Backup example

Appendix A. Worksheets
  User IDs and passwords
    Server information
    User IDs and passwords for key files and installation
  Storage device information
    IBM TotalStorage Enterprise Storage Server
    IBM FAStT
    IBM SAN Volume Controller

Related publications
  IBM Redbooks
  Other publications
  Online resources
  How to get IBM Redbooks
  Help from IBM

Index
    • NoticesThis information was developed for products and services offered in the U.S.A.IBM may not offer the products, services, or features discussed in this document in other countries. Consultyour local IBM representative for information on the products and services currently available in your area. Anyreference to an IBM product, program, or service is not intended to state or imply that only that IBM product,program, or service may be used. Any functionally equivalent product, program, or service that does notinfringe any IBM intellectual property right may be used instead. However, it is the users responsibility toevaluate and verify the operation of any non-IBM product, program, or service.IBM may have patents or pending patent applications covering subject matter described in this document. Thefurnishing of this document does not give you any license to these patents. You can send license inquiries, inwriting, to:IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.The following paragraph does not apply to the United Kingdom or any other country where such provisions areinconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THISPUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer ofexpress or implied warranties in certain transactions, therefore, this statement may not apply to you.This information could include technical inaccuracies or typographical errors. Changes are periodically madeto the information herein; these changes will be incorporated in new editions of the publication. IBM may makeimprovements and/or changes in the product(s) and/or the program(s) described in this publication at any timewithout notice.Any references in this information to non-IBM Web sites are provided for convenience only and do not in anymanner serve as an endorsement of those Web sites. The materials at those Web sites are not part of thematerials for this IBM product and use of those Web sites is at your own risk.IBM may use or distribute any of the information you supply in any way it believes appropriate without incurringany obligation to you.Information concerning non-IBM products was obtained from the suppliers of those products, their publishedannouncements or other publicly available sources. IBM has not tested those products and cannot confirm theaccuracy of performance, compatibility or any other claims related to non-IBM products. Questions on thecapabilities of non-IBM products should be addressed to the suppliers of those products.This information contains examples of data and reports used in daily business operations. To illustrate themas completely as possible, the examples include the names of individuals, companies, brands, and products.All of these names are fictitious and any similarity to the names and addresses used by an actual businessenterprise is entirely coincidental.COPYRIGHT LICENSE:This information contains sample application programs in source language, which illustrates programmingtechniques on various operating platforms. You may copy, modify, and distribute these sample programs inany form without payment to IBM, for the purposes of developing, using, marketing or distributing applicationprograms conforming to the application programming interface for the operating platform for which the sampleprograms are written. 
These examples have not been thoroughly tested under all conditions. IBM, therefore,cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, anddistribute these sample programs in any form without payment to IBM for the purposes of developing, using,marketing, or distributing application programs conforming to IBMs application programming interfaces.© Copyright IBM Corp. 2005. All rights reserved. xiii
    • TrademarksThe following terms are trademarks of the International Business Machines Corporation in the United States,other countries, or both: AIX® iSeries™ Sequent® Cloudscape™ MVS™ ThinkPad® DB2® Netfinity® Tivoli Enterprise™ DB2 Universal Database™ NetView® Tivoli Enterprise Console® e-business on demand™ OS/390® Tivoli® Enterprise Storage Server® Predictive Failure Analysis® TotalStorage® Eserver® pSeries® WebSphere® Eserver® QMF™ xSeries® FlashCopy® Redbooks™ z/OS® IBM® Redbooks (logo) ™ zSeries® ibm.com® S/390® 1-2-3®The following terms are trademarks of other companies:Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,Inc. in the United States, other countries, or both.Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in theUnited States, other countries, or both.Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, othercountries, or both.UNIX is a registered trademark of The Open Group in the United States and other countries.Linux is a trademark of Linus Torvalds in the United States, other countries, or both.Other company, product, and service names may be trademarks or service marks of others.xiv IBM TotalStorage Productivity Center V2.3: Getting Started
    • Preface IBM® TotalStorage® Productivity Center is a suite of infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments. It can help reduce the effort of managing complex storage infrastructures, improve storage capacity utilization, and improve administration efficiency. IBM TotalStorage Productivity Center allows you to respond to on demand storage needs and brings together, in a single point, the management of storage devices, fabric, and data. This IBM Redbook is intended for administrators and users who are installing and using IBM TotalStorage Productivity Center V2.3. It provides an overview of the product components and functions. We describe the hardware and software environment required, provide a step-by-step installation procedure, and offer customization and usage hints and tips. This book is not a replacement for the existing IBM Redbooks™, or product manuals, that detail the implementation and configuration of the individual products that make up the IBM TotalStorage Productivity Center, or the products as they may have been called in previous versions. We refer to those books as appropriate throughout this book.The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), San Jose Center. Mary Lovelace is a Consulting IT Specialist at the ITSO in San Jose, California. She has more than 20 years of experience with IBM in large systems, storage and Storage Networking product education, system engineering and consultancy, and systems support. Larry Mc Gimsey is a consulting IT Architect working in Managed Storage Services delivery supporting worldwide SAN storage customers. He has over 30 years experience in IT. He joined IBM 6 years ago as a result of an outsourcing engagement. Most of his experience prior to joining IBM was in mainframe systems support. It included system programming, performance management, capacity planning, system automation and storage management. Since joining IBM, Larry has been working with large SAN environments. He currently works with Managed Storage Services offering and delivery teams to define the architecture used to deliver worldwide storage services. Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central and Eastern European Region in architecting, deploying, and supporting SAN/storage/DR solutions. His areas of expertise include SAN, storage, HA systems, xSeries® servers, network operating systems (Linux, MS Windows, OS/2®), and Lotus® Domino™ servers. He holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo has contributed to various other redbooks on Tivoli products, SAN, Linux/390, xSeries, and Linux. Mary Anne Marquez is the team lead for tape performance at IBM Tucson. She has extensive knowledge in setting up a TotalStorage Productivity Center environment for use with Copy Services and Performance Management, as well as debugging the various components of TotalStorage Productivity Center including WebSphere, ICAT, and the CCW interface for ESS. In addition to TPC, Mary Anne has experience with the native Copy Services tools on ESS model-800 and DS8000. She has authored several performance white papers.© Copyright IBM Corp. 2005. All rights reserved. xv
    • Thanks to the following people for their contributions to this project: Sangam Racherla Yvonne Lyon ITSO, San Jose Center Bob Haimowitz ITSO, Raleigh Center Diana Duan Tina Dunton Nancy Hobbs Paul Lee Thiha Than Miki Walter IBM San Jose Martine Wedlake IBM Beaverton Ryan Darris IBM Tucson Doug Dunham Tivoli Storage SWAT Team Mike Griese Technical Support Marketing Lead, Rochester Curtis Neal Scott Venuti Open System Demo Center, San Josexvi IBM TotalStorage Productivity Center V2.3: Getting Started
Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at: ibm.com/redbooks
Send your comments in an email to: redbook@us.ibm.com
Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE Building 80-E2, 650 Harry Road, San Jose, California 95120-6099

Preface xvii
    • xviii IBM TotalStorage Productivity Center V2.3: Getting Started
    • Part 1Part 1 IBM TotalStorage Productivity Center foundation In this part of the book we introduce the IBM TotalStorage Productivity Center: Chapter 1, “IBM TotalStorage Productivity Center overview” on page 3, contains an overview of the components of IBM TotalStorage Productivity Center. Chapter 2, “Key concepts” on page 27, provides information about the communication, protocols, and standards organization that is the foundation of understanding the IBM TotalStorage Productivity Center.© Copyright IBM Corp. 2005. All rights reserved. 1
    • 2 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 1 Chapter 1. IBM TotalStorage Productivity Center overview IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage open software family, designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), IBM TotalStorage DS4000, IBM TotalStorage DS6000, and IBM TotalStorage DS8000 series. TotalStorage Productivity Center is a solution for customers with storage management requirements, who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface. This chapter provides an overview of the entire IBM TotalStorage Open Software Family.© Copyright IBM Corp. 2005. All rights reserved. 3
    • 1.1 Introduction to IBM TotalStorage Productivity Center The IBM TotalStorage Productivity Center consists of software components which enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the recent standard issued by the Storage Networking Industry Association (SNIA). The standard addresses the interoperability of storage hardware and software within a SAN.1.1.1 Standards organizations and standards Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible. Figure 1-1 SAN management standards bodies Key standards for Storage Management are: Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 for the CIM schema. Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).4 IBM TotalStorage Productivity Center V2.3: Getting Started
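The CIM and SMI-S standards named above are what allow a single management application to query very different storage devices in a uniform way. Purely as an illustration (this is not part of TotalStorage Productivity Center, and the agent address, credentials, and namespace are hypothetical placeholders you would take from the CIM agent documentation), the following sketch uses the open source pywbem library to issue a standard CIM query against an SMI-S CIM agent.

```python
# Illustrative sketch only: a generic CIM/WBEM client query against an
# SMI-S CIM agent. The URL, user ID, password, and namespace below are
# hypothetical placeholders, not values shipped with Productivity Center.
import pywbem

conn = pywbem.WBEMConnection(
    "https://cimagent.example.com:5989",   # CIM agent URL (placeholder)
    creds=("cimuser", "password"),         # CIM agent user ID and password
    default_namespace="root/ibm",          # namespace is device specific
    no_verification=True)                  # skip certificate checks; lab use only

# Ask the agent which computer systems (storage devices) it manages.
# CIM_ComputerSystem is part of the standard CIM schema.
for path in conn.EnumerateInstanceNames("CIM_ComputerSystem"):
    print(path)
```

Because every SMI-S compliant device answers the same schema-level query, the same few lines work regardless of which vendor built the device, which is exactly the interoperability goal these standards pursue.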
    • 1.2 IBM TotalStorage Open Software family The IBM TotalStorage Open Software Family, is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management. The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is a complete range of IBM storage hardware and devices providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. The storage virtualization is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services. The next layer is composed of storage infrastructure management to help enterprises understand and proactively manage their storage infrastructure in the on demand world; hierarchical storage management to help control growth; archive management to manage cost of storing huge quantities of data; recovery management to ensure recoverability of data. The top layer is storage orchestration which automates work flows to help eliminate human error. Figure 1-2 Enabling customer to move toward On Demand Chapter 1. IBM TotalStorage Productivity Center overview 5
Previously we discussed the next steps or entry points into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3.

Figure 1-3 IBM TotalStorage Open Software Family

1.3 IBM TotalStorage Productivity Center

The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs.

The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager).

6 IBM TotalStorage Productivity Center V2.3: Getting Started
Taking a closer look at storage infrastructure management (see Figure 1-4), we focus on four subject matter experts to empower the storage administrators to effectively do their work:
Data subject matter expert
SAN Fabric subject matter expert
Disk subject matter expert
Replication subject matter expert

Figure 1-4 Centralized, automated storage infrastructure management

1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data

The Data subject matter expert has intimate knowledge of how storage is used, for example, whether the data is used by a file system or a database application. Figure 1-5 on page 8 shows the role of the Data subject matter expert, which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).

Chapter 1. IBM TotalStorage Productivity Center overview 7
    • Figure 1-5 Monitor and Configure the Storage Infrastructure Data area Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and more efficiently utilize their storage resources. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control. Features of the TotalStorage Productivity Center for Data are: Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used. File-system and file-level evaluation uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up and managed. Automated control through policies that are customizable with actions that can include centralized alerting, distributed responsibility and fully automated response. Predict future growth and future at-risk conditions with historical information. Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently, and more accurately predict future storage growth.8 IBM TotalStorage Productivity Center V2.3: Getting Started
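To make the file-level evaluation described above concrete, the short sketch below (plain Python, not TotalStorage Productivity Center for Data code) walks a file system and flags files that are both large and long unreferenced as archive candidates. The scan root, age, and size thresholds are arbitrary example values standing in for whatever policy an administrator would define.

```python
# Illustrative sketch only: identify "old and large" files as candidates
# for archiving. The path and thresholds are arbitrary example values.
import os
import time

SCAN_ROOT = "/data"              # hypothetical file system to evaluate
AGE_DAYS, MIN_MB = 365, 100      # example policy: >1 year old and >100 MB
cutoff = time.time() - AGE_DAYS * 86400

candidates = []
for dirpath, _dirs, files in os.walk(SCAN_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                        # skip files we cannot read
        # st_atime is the last access time; st_size is the size in bytes.
        if st.st_atime < cutoff and st.st_size > MIN_MB * 1024 * 1024:
            candidates.append((path, st.st_size))

reclaimable = sum(size for _path, size in candidates)
print(f"{len(candidates)} archive candidates, "
      f"{reclaimable / 2**30:.1f} GiB potentially reclaimable")
```

Scaled up across many agents and fed into a central repository and reporting engine, this is the kind of "categories of files that could be deleted or archived" insight the product automates.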
TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:
Storage from a host perspective: Manage all the host-attached storage, capacity and consumption attributed to file systems, users, directories, and files
Storage from an application perspective: Monitor and manage the storage activity inside different database entities including instance, tablespace, and table
Storage utilization, and provide chargeback information

Architecture
The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts. With TotalStorage Productivity Center for Data, you can:
Monitor virtually any host
Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network

For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

1.3.2 Fabric subject matter expert: Productivity Center for Fabric

The storage infrastructure management for Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool. The tool must have a single point of operation, and it must be able to perform all the tasks from the SAN. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center. The Fabric subject matter expert is the expert in the SAN. Its role is:
Discovery of fabric information
Provide the ability to specify fabric policies
– What HBAs to use for each host and for what purpose
– Objectives for zone configuration (for example, shielding host HBAs from one another and performance)
Automatically modify the zone configuration

TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in:
Improved Application Availability
– Predicting storage network failures before they happen, enabling preventative maintenance
– Accelerated problem isolation when failures do happen

Chapter 1. IBM TotalStorage Productivity Center overview 9
    • Optimized Storage Resource Utilization by reporting on storage network performance Enhanced Storage Personnel Productivity - Tivoli SAN Manager creates a single point of control, administration and security for the management of heterogeneous storage networks Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert. Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage and servers in a Storage Area Network. TotalStorage Productivity Center for Fabric can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric: Manages fabric devices (switches) through outband management. Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host). Monitors the network and collects events and traps Launches vendor-provided specific SAN element management applications from the TotalStorage Productivity Center for Fabric Console. Discovers and manages iSCSI devices. Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor) TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.10 IBM TotalStorage Productivity Center V2.3: Getting Started
TotalStorage Productivity Center for Fabric components
The major components of the TotalStorage Productivity Center for Fabric include:
A manager or server, running on a SAN managing server
Agents, running on one or more managed hosts
Management console, which is by default on the Manager system, plus optional additional remote consoles
Outband agents, consisting of vendor-supplied MIBs for SNMP

There are two additional components which are not included in the TotalStorage Productivity Center:
IBM Tivoli Enterprise™ Console (TEC), which is used to receive TotalStorage Productivity Center for Fabric generated events. Once forwarded to TEC, these can then be consolidated with events from other applications and acted on according to enterprise policy.
IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for the purpose of analysis in order to give management the ability to access and analyze information about its business.

The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent.

TotalStorage Productivity Center for Fabric Server
Performs initial discovery of the environment:
– Gathers and correlates data from agents on managed hosts
– Gathers data from SNMP (outband) agents
– Graphically displays SAN topology and attributes
Provides customized monitoring and reporting through NetView®
Reacts to operational events by changing its display
(Optionally) forwards events to Tivoli Enterprise Console® or SNMP managers

TotalStorage Productivity Center for Fabric Agent
Gathers information about:
SANs, by querying switches and devices for attribute and topology information
Host-level storage, such as file systems and LUNs
Event and other information detected by HBAs
Forwards topology and event information to the Manager

Discover SAN components and devices
TotalStorage Productivity Center for Fabric uses two methods to discover information about the SAN: outband discovery and inband discovery.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs which support SNMP.

Chapter 1. IBM TotalStorage Productivity Center overview 11
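As a purely illustrative aside (this is not Productivity Center code), the sketch below shows the kind of SNMP query that outband discovery relies on, using the open source pysnmp library to read two standard MIB-II objects from a switch. The switch address and the community string are hypothetical examples; a real fabric switch would additionally expose the vendor-supplied MIBs mentioned above.

```python
# Illustrative sketch only: an SNMP GET over the IP network, the same kind
# of query outband discovery uses. Switch name and community are examples.
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

error_indication, error_status, _error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                   # SNMP v2c community
    UdpTransportTarget(("fcswitch1.example.com", 161)),   # switch address, UDP 161
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())
```

Note that nothing here touches the Fibre Channel network: the request and the response both travel over IP, which is why outband monitoring can continue even if the Fibre Channel paths are unavailable.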
In outband discovery, all communications occur over the IP network:
TotalStorage Productivity Center for Fabric requests information over the IP network from a switch, using SNMP queries on the device.
The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used:
TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host.
That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network.
The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network.

TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.

TotalStorage Productivity Center for Fabric benefits
TotalStorage Productivity Center for Fabric discovers the SAN infrastructure and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can provide reports on faults on components (either individually or in groups, or "smartsets", of components). This will help them increase data availability for applications so the company can either be more efficient or maximize the opportunity to produce revenue. TotalStorage Productivity Center for Fabric helps the storage administrator:
Prevent faults in the SAN infrastructure through reporting and proactive maintenance
Identify and resolve problems in the storage infrastructure quickly when a problem occurs
Provide fault isolation of SAN links

For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848.

1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk

The Disk subject matter expert's job is to manage the disk systems. It will discover and classify all disk systems that exist and draw a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor, configure, and create disks and to do LUN masking of disks. It also does performance trending and performance threshold I/O analysis for both real disks and virtual disks, as well as automated status and problem alerting via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component). The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 13. The disk systems monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk.

12 IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 1-7 Monitor and configure the Storage Infrastructure Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
TotalStorage Productivity Center for Disk provides the following functions:
Collect data from devices
The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2® database tables.
Configure performance thresholds
You can use the Productivity Center for Disk to set performance thresholds for each device type. Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. A simple sketch of this kind of threshold evaluation follows.

Chapter 1. IBM TotalStorage Productivity Center overview 13
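The following minimal sketch shows the kind of threshold evaluation just described. It is not product code: the metric names and the warning and critical limits are invented examples, since the real thresholds and the IBM-recommended values depend on the device type.

```python
# Minimal sketch (not product code) of threshold evaluation: each sampled
# metric is compared against warning and critical limits, and an alert
# record is produced when a limit is exceeded.
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    warning: float
    critical: float

def evaluate(samples, thresholds):
    """samples: dict of metric name -> sampled value for one device."""
    alerts = []
    for t in thresholds:
        value = samples.get(t.metric)
        if value is None:
            continue                       # metric not collected this interval
        if value >= t.critical:
            alerts.append((t.metric, value, "critical"))
        elif value >= t.warning:
            alerts.append((t.metric, value, "warning"))
    return alerts

# Example limits only; real limits vary by device type and workload.
thresholds = [Threshold("disk_utilization_pct", 50.0, 80.0),
              Threshold("nvs_full_pct", 3.0, 10.0)]
print(evaluate({"disk_utilization_pct": 85.2, "nvs_full_pct": 1.0}, thresholds))
```

In the product, each alert of this kind is then tied to a configurable action, which is what the next paragraph describes.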
You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
Monitor performance metrics across storage subsystems from a single console
Receive timely alerts to enable event action based on customer policies
View performance data from the Productivity Center for Disk database
You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed.
You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be "started", which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition will be accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
Focus on storage optimization through identification of the best LUN
The Volume Performance Advisor is an automated tool to help the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It also uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables which are user controlled, such as the required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN.
For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk refer to Chapter 11, “Using TotalStorage Productivity Center for Disk” on page 375.

1.3.4 Replication subject matter expert: Productivity Center for Replication

The Replication subject matter expert's job is to provide a single point of control for all replication activities. This role is filled by the TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, the Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure the source and target volume relationships are set up.
Given a set of source volumes that represent an application, the Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application.14 IBM TotalStorage Productivity Center V2.3: Getting Started
Productivity Center for Replication will start up all replication pairs and monitor them to completion. If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, then resync them and resume the replication. The Productivity Center for Replication provides complete management of the replication process. A minimal sketch of this monitoring behavior appears at the end of this section.

The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication.

Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area

Functions
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy®). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and that Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Chapter 1. IBM TotalStorage Productivity Center overview 15
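The sketch below models, in a few lines of Python, the monitoring behavior described at the beginning of this section: pairs that fall out of sync are suspended, resynchronized, and resumed as a group. It is a conceptual illustration only; the state names, volume names, and functions are invented and do not correspond to the Productivity Center for Replication interfaces.

```python
# Conceptual sketch only: the pair states and helper functions here are
# invented for illustration and are not Productivity Center interfaces.
from enum import Enum

class PairState(Enum):
    COPYING = "copying"
    IN_SYNC = "in sync"
    SUSPENDED = "suspended"

class ReplicationPair:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = PairState.COPYING

def monitor_session(pairs, problem_resolved):
    """Suspend out-of-sync pairs, then resync and resume them as one unit."""
    failed = [p for p in pairs if p.state is not PairState.IN_SYNC]
    if not failed:
        return "session consistent"
    for p in failed:                       # freeze: stop mirroring the failed pairs
        p.state = PairState.SUSPENDED
    if problem_resolved():                 # for example, a link restored by an operator
        for p in failed:                   # resync and resume the whole group
            p.state = PairState.IN_SYNC
        return "session resynchronized"
    return "session suspended until the problem is resolved"

# Example: one pair of a two-pair session has fallen out of sync.
pairs = [ReplicationPair("VOL_001", "VOL_101"),
         ReplicationPair("VOL_002", "VOL_102")]
pairs[0].state = PairState.IN_SYNC
print(monitor_session(pairs, problem_resolved=lambda: True))
```

The point of treating the pairs as one unit is consistency: either the whole application's volume set advances together, or it is held suspended together, which is what the session and consistency group concepts provide.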
    • Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The User Interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are: Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently. Set up a Group for replication. Create, save, and name a replication task. Schedule a replication session with the user interface: – Create Session Wizard. – Select Source Group. – Select Copy Type. – Select Target Pool. – Save Session. Start a replication session A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions refer to Chapter 15, “Using TotalStorage Productivity Center for Replication” on page 827.1.4 IBM TotalStorage Productivity Center All the subject matter experts, for Data, Fabric, Disk, and Replication are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family. The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using existing storage management products — Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk and Productivity Center for Replication — from one physical place. The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 17.16 IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM’s e-business On Demand technology. In an On Demand environment we need the ability to provide IT resources on demand, when the resources are needed by an application to support the customer's business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area, more automation like the IBM TotalStorage Productivity Center launch pad provides.

Automation means workflow. Workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM is using the IBM Tivoli Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy the resource requests in the e-business on demand™ environment in the server arena. The IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager provide the provisioning in the e-business On Demand environment.

1.4.1 Productivity Center for Disk and Productivity Center for Replication

Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console (Figure 1-10 on page 18). This software solution is designed specifically for managing networked storage components based on the SMI-S, including:
IBM TotalStorage SAN Volume Controller
IBM TotalStorage Enterprise Storage Server (ESS)
IBM TotalStorage Fibre Array Storage Technology (FAStT)
IBM TotalStorage DS4000 series
SMI-S enabled devices

Chapter 1. IBM TotalStorage Productivity Center overview 17
Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and with SAN Management products from other vendors.

In a SAN environment, multiple devices work together to create a storage solution. Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 19 provides an overview of Productivity Center for Disk and Productivity Center for Replication.

18 IBM TotalStorage Productivity Center V2.3: Getting Started
Figure 1-11 Productivity Center overview (IBM TotalStorage Productivity Center: Performance Manager and Replication Manager layered on the Device Manager, IBM Director, WebSphere Application Server, and DB2)

The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:
Device Manager - Common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
Performance Manager - provided by Productivity Center for Disk
Replication Manager - provided by Productivity Center for Replication

Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset, configuration, and availability data from the supported devices; and providing a limited topology view of the storage usage relationships between those devices.

The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console, as shown in Figure 1-12 on page 20.

Chapter 1. IBM TotalStorage Productivity Center overview 19
    • Figure 1-12 IBM Director Console Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts and provides some SAN management functionality when IBM Tivoli SAN Manager is installed. The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device. SAN Management When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that will be exploited are: The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices The ability to retrieve and to modify the zoning configuration on the SAN The ability to register for event notification, to ensure that Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when hosts LUN configurations change20 IBM TotalStorage Productivity Center V2.3: Getting Started
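As noted above, the configuration data that the Device Manager works with is exposed through the CIM Agents associated with each device. Purely as an illustration (again with a hypothetical agent address, credentials, and namespace, and not Productivity Center code), the sketch below lists the storage volumes, that is, the LUNs, that a CIM agent reports, using the standard CIM_StorageVolume class.

```python
# Illustrative sketch only: listing LUNs through a device's CIM agent.
# The agent URL, credentials, and namespace are hypothetical placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://cimagent.example.com:5989",
    creds=("cimuser", "password"),
    default_namespace="root/ibm",
    no_verification=True)                  # lab use only

for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    # BlockSize and NumberOfBlocks are standard CIM_StorageExtent properties;
    # together they give the usable capacity of the volume.
    size_bytes = (vol["BlockSize"] or 0) * (vol["NumberOfBlocks"] or 0)
    print(vol["ElementName"], f"{size_bytes / 2**30:.1f} GiB")
```

Because the query is expressed against the SMI-S schema rather than a vendor API, the same approach applies to any SMI-S compliant subsystem; devices that are not SMI-S compliant are, as stated above, not supported.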
    • Performance Manager functionThe Performance Manager function provides the raw capabilities of initiating and schedulingperformance data collection on the supported devices, of storing the received performancestatistics into database tables for later use, and of analyzing the stored data and generatingreports for various metrics of the monitored devices. In conjunction with data collection, thePerformance Manager is responsible for managing and monitoring the performance of thesupported storage devices. This includes the ability to configure performance thresholds forthe devices based on performance metrics, the generation of alerts when these thresholdsare exceeded, the collection and maintenance of historical performance data, and thecreation of gauges, or performance reports, for the various metrics to display the collectedhistorical data to the end user. The Performance Manager enables you to performsophisticated performance analysis for the supported storage devices.Functions Collect data from devices The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, IBM TotalStorage DS6000 and IBM TotalStorage DS8000 series and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables. Configure performance thresholds You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.The eligible metrics for threshold checking are fixed for each storage device. If the thresholdmetrics are modified by the user, the modifications are accepted immediately and applied tochecking being performed by active performance collection tasks. Examples of thresholdmetrics include: Disk utilization value Average cache hold time Percent of sequential I/Os I/O rate NVS full value Virtual disk I/O rate Managed disk I/O rateThere is a user interface that supports threshold settings, enabling a user to: Modify a threshold property for a set of devices of like type. Modify a threshold property for a single device. – Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices. Chapter 1. IBM TotalStorage Productivity Center overview 21
    • – Reset a threshold property to the IBM-recommended value (if defined) for a single device. Show a summary of threshold properties for all of the devices of like type. View performance data from the Performance Manager database. Gauges The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data. The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed. For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed. Database services for managing the collected performance data The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potential volumes of data. Database purge function A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function, and it enables you to specify the data to purge, allowing important data to be maintained for trend purposes. You can specify to purge all of the sample data from all types of devices older than a specified number of days. You can specify to purge the data associated with a particular type of device. If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged. You can specify the number of days that data is to remain in the database before being purged. Sample data and, optionally, exception data older than the specified number of days will be purged. A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.22 IBM TotalStorage Productivity Center V2.3: Getting Started
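A sketch of the purge selection logic described above is shown below. It is not the product's implementation and not DB2 SQL; it simply illustrates, with invented sample records, how a retention period can be combined with the option to keep samples that exceeded at least one threshold.

```python
# Illustrative sketch only (not product code): selecting which performance
# samples a purge run would delete, given a retention period in days and
# the option to keep samples that exceeded at least one threshold.
from datetime import datetime, timedelta

def select_for_purge(samples, keep_days, keep_threshold_exceptions=True):
    cutoff = datetime.now() - timedelta(days=keep_days)
    purge = []
    for s in samples:
        if s["timestamp"] >= cutoff:
            continue                       # still inside the retention period
        if keep_threshold_exceptions and s["exceeded_threshold"]:
            continue                       # keep exception data for trending
        purge.append(s)
    return purge

# Invented sample records for the example.
samples = [
    {"device": "ESS_01", "timestamp": datetime.now() - timedelta(days=400),
     "exceeded_threshold": False},
    {"device": "ESS_01", "timestamp": datetime.now() - timedelta(days=400),
     "exceeded_threshold": True},
    {"device": "SVC_01", "timestamp": datetime.now() - timedelta(days=5),
     "exceeded_threshold": False},
]
print(f"{len(select_for_purge(samples, keep_days=365))} sample(s) would be purged")
```

The design point is the same one the product makes: routine samples age out to control database growth, while the data most useful for trend analysis and problem follow-up can be retained longer.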
    • Database information function Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the database % full. This function can be invoked from either the Web user interface or the CLI. Volume Performance Advisor The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It also uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables that are user-controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN. Replication Manager function Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy). Currently replication functions are provided for the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments. Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.1.4.2 Event services At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from actions that might be taken. Chapter 1. IBM TotalStorage Productivity Center overview 23
    • An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. The notification of that change can be generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. The IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director Event Management encompasses the following concepts: Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered. Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions. Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on. Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems. The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly what action plans are invoked for each specific event. Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized. Action plans can be created based on filter criteria for these events. The default action plan is to log all events in the event log. It creates additional event families, and event types within those families, that will be listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans will be provided. An example is the action to indicate the amount of historical data to be kept.1.5 Taking steps toward an On Demand environment So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand.24 IBM TotalStorage Productivity Center V2.3: Getting Started
An On Demand operating environment must be:
- Flexible
- Self-managing
- Scalable
- Economical
- Resilient
- Based on open standards

The move to an On Demand storage environment is evolutionary; it does not happen all at once. There are several steps that you can take to move toward the On Demand environment:
- Address constant changes to the storage infrastructure (for example, upgrading or changing hardware) with virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- The ultimate goal is to eliminate human error by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, the results are improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.
    • 2 Chapter 2. Key concepts There are certain industry standards and protocols that are the basis of the IBM TotalStorage Productivity Center. The understanding of these concepts is important for installing and customizing the IBM TotalStorage Productivity Center. In this chapter, we describe the standards on which the IBM TotalStorage Productivity Center is built, as well as the methods of communication used to discover and manage storage devices. We also discuss communication between the various components of the IBM TotalStorage Productivity Center. To help you understand these concepts, we provide diagrams to show the relationship and interaction of the various elements in the IBM TotalStorage Productivity Center environment.© Copyright IBM Corp. 2005. All rights reserved. 27
2.1 IBM TotalStorage Productivity Center architecture

This chapter provides an overview of the components and functions that are included in the IBM TotalStorage Productivity Center.

2.1.1 Architectural overview diagram

The architectural overview diagram in Figure 2-1 illustrates the governing ideas and building blocks of the product suite that makes up the IBM TotalStorage Productivity Center. It provides a logical overview of the main conceptual elements and relationships in the architecture: components, connections, users, and external systems.

Figure 2-1 IBM TotalStorage Productivity Center architecture overview diagram

IBM TotalStorage Productivity Center and Tivoli Provisioning Manager are presented as building blocks in the diagram. Neither product is a single application; each is a complex environment in itself. The diagram also shows the different methods used to collect information from multiple systems to give an administrator the necessary views of the environment, for example:
- Software clients (agents)
- Standard interfaces and protocols (for example, Simple Network Management Protocol (SNMP) and Common Information Model (CIM) Agent)
- Proprietary interfaces (for only a few devices)

In addition to the central data collection, Productivity Center provides a single point of control for a storage administrator, even though each manager still comes with its own interface. A program called the Launchpad is provided to start the individual applications from a central dashboard.
The Tivoli Provisioning Manager relies on Productivity Center to make provisioning possible.

2.1.2 Architectural layers

The IBM TotalStorage Productivity Center architecture can be broken into three layers, as shown in Figure 2-2. Layer one represents a high-level overview; there is only one IBM TotalStorage Productivity Center instance in the environment. Layers two and three drill down into the TotalStorage Productivity Center environment so you can see the managers and the prerequisite components.

Figure 2-2 Architectural layers

Layer two consists of the individual components that are part of the product suite:
- IBM TotalStorage Productivity Center for Disk
- IBM TotalStorage Productivity Center for Replication
- IBM TotalStorage Productivity Center for Fabric
- IBM TotalStorage Productivity Center for Data

Throughout this redbook, these products are referred to as managers or components.

Layer three includes all the prerequisite components, for example IBM DB2, IBM WebSphere, IBM Director, IBM Tivoli NetView, and Tivoli Common Agent Services. IBM TotalStorage Productivity Center for Fabric can be installed on a full version of WebSphere Application Server or on the embedded WebSphere Application Server, which is shipped with Productivity Center for Fabric. Installation on a full version of WebSphere Application Server is used when other components of TotalStorage Productivity Center are installed on the same logical server. IBM TotalStorage Productivity Center for Fabric can utilize an existing IBM Tivoli NetView installation or can be installed along with it.

Note: Each of the manager and prerequisite components can be drilled down even further, but in this book we go into this detail only where necessary. The only exception is Tivoli Common Agent Services, which is a new underlying service in the Tivoli product family.

Terms and definitions

When you look at the diagram in Figure 2-2, you see that each layer has a different name. The following sections explain each of these names as well as other terms commonly used in this book.
    • Product A product is something that is available to be ordered. The individual products that are included in IBM TotalStorage Productivity Center are introduced in Chapter 1, “IBM TotalStorage Productivity Center overview” on page 3. Components Products (licensed software packages) and prerequisite software applications are in general called components. Some of the components are internal, meaning that, from the installation and configuration point of view, they are somewhat transparent. External components have to be separately installed. We usually use the term components for the following applications: IBM Director (external, used by Disk and Replication Manager) IBM DB2 (external, used by all managers) IBM WebSphere Application Server (external, used by Disk and Replication Manager, used by Fabric Manager if installed on the same logical server) Embedded WebSphere Application Server (internal, used by Fabric Manager) Tivoli NetView (internal, used by Fabric Manager) Tivoli Common Agent Services (external, used by Data and Fabric Manager) Not all of the internal components are always shown in the diagrams and lists in this book. The term subcomponent is used to emphasize that a certain component (the subcomponent) belongs to or is used by another component. For example, a Resource Manager is a subcomponent of the Fabric or Data Manager. Managers The managers are the central components of the IBM TotalStorage Productivity Center environment. They may share some of the prerequisite components. For example, IBM DB2 and IBM WebSphere are used by different managers. In this book, we sometimes use the following terms: Disk Manager for Productivity Center for Disk Replication Manager for Productivity Center for Replication Data Manager for Productivity Center for Data Fabric Manager for Productivity Center for Fabric In addition, we use the term manager for the Agent Manager for Tivoli Agent Manager component, because the name of the component already includes that term. Agents The agents are not shown in the diagram in Figure 2-2 on page 29, but they have an important role in the IBM TotalStorage Productivity Center environment. There are two types of agents: Common Information Model (CIM) Agents and agents that belong to one of the managers: CIM Agents: Agents that offer a CIM interface for management applications, for example, for IBM TotalStorage DS8000 and DS6000 series storage systems, IBM TotalStorage Enterprise Storage Server (ESS), SAN (Storage Area Network) Volume Controller, and DS4000 Storage Systems formerly known as FAStT (Fibre Array Storage Technology) Storage Systems Agents that belong to one of the managers: – Data Agents: Agents to collect data for the Data Manager – Fabric Agents: Agents that are used by the Fabric Manager for inband SAN data discovery and collection30 IBM TotalStorage Productivity Center V2.3: Getting Started
    • In addition to these agents, the Service Location Protocol (SLP) also use the term agent for these components: User Agent Service Agent Directory Agent Elements We use the generic term element whenever we do not differentiate between components and managers.2.1.3 Relationships between the managers and components An IBM TotalStorage Productivity Center environment includes many elements and is complex. This section tries to explain how all the elements work together to form a center for storage administration. Figure 2-3 shows the communication between the elements and how they relate to each other. Each gray box in the diagram represents one machine. The dotted line within a machine separates two distinct managers of the IBM TotalStorage Productivity Center.Figure 2-3 Manager and component relationship diagram All these components can also run on one machine. In this case all managers and IBM Director will share the same DB2 installation and all managers and IBM Tivoli Agent Manager will share the same WebSphere installation. Chapter 2. Key concepts 31
    • 2.1.4 Collecting data Multiple methods are used within the different components to collect data from the devices in your environment. In this version of the product, the information is stored in different databases (see Table 3-6 on page 62) that are not shared between the individual components. Productivity Center for Disk and Productivity Center for Replication Productivity Center for Disk and Productivity Center for Replication use the Storage Management Initiative - Specification (SMI-S) standard (see “Storage Management Initiative - Specification” on page 35) to collect information about subsystems. For devices that are not CIM ready, this requires the installation of a proxy application (CIM Agent or CIM Object Manager (CIMOM)). It does not use its own agent such as the Data Manager and Fabric Manager. IBM TotalStorage Productivity Center for Fabric IBM TotalStorage Productivity Center for Fabric uses two methods to collect information: inband and outband discovery. You can use either method or you can use both at the same time to obtain the most complete picture of your environment. Using just one of the methods will give you incomplete information, but topology information will be available in both cases. Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP. Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process: The Agent sends commands through its Host Bus Adapters (HBA) and the Fibre Channel network to gather information about the switches. The switch returns the information through the Fibre Channel network and the HBA to the Agent. The Agent queries the endpoint devices using RNID and SCSI protocols. The Agent returns the information to the Manager over the IP network. The Manager then responds to the new information by updating the database and redrawing the topology map if necessary. Internet SCSI (iSCSI) Discovery is an Internet Protocol (IP)-based storage networking standard for linking data storage. It was developed by the Internet Engineering Task Force (IETF). iSCSI can be used to transmit data over LANs and WANs.32 IBM TotalStorage Productivity Center V2.3: Getting Started
    • The discovery paths are shown in parentheses in the diagram in Figure 2-4.Figure 2-4 Fabric Manager inband and outband discovery pathsIBM TotalStorage Productivity Center for DataWithin the IBM TotalStorage Productivity Center, the data manager is used to collectinformation about logical drives, file systems, individual files, database usage, and more.Agents are installed on the application servers and perform a regular scan to report back theinformation. To report on a subsystem level, a SMI-S interface is also built in. This informationis correlated with the data that is gathered from the agents to show the LUNs that a host isusing (an agent must be installed on that host).In contrast to Productivity Center for Disk and Productivity Center for Replication, the SMI-Sinterface in Productivity Center for Data is only used to retrieve information, but not toconfigure a device. Restriction: The SLP User Agent integrated into the Data Manager uses SLP Directory Agents and Service Agents to find services in the local subnet. To discover CIM Agents from remote networks, they have to be registered to either the Directory Agent or Service Agent, which is located in the local subnet unless routers are configured to also route multicast packets. You need to add each CIM Agent (that is not discovered) manually to the Data Manager; refer to “Configuring the CIM Agents” on page 290. Chapter 2. Key concepts 33
    • 2.2 Standards used in IBM TotalStorage Productivity Center This section presents an overview of the standards that are used within IBM TotalStorage Productivity Center by the different components. SLP and CIM are described in detail since they are new concepts to many people that work with IBM TotalStorage Productivity Center and are important to understand. Vendor specific tools are available to manage devices in the SAN, but these proprietary interfaces are not used within IBM TotalStorage Productivity Center. The only exception is the application programming interface (API) that Brocade has made available to manage their Fibre Channel switches. This API is used within IBM TotalStorage Productivity Center for Fabric.2.2.1 ANSI standards Several standards have been published for the inband management of storage devices, for example, SCSI Enclosure Services (SES). T11 committee Since the 1970s, the objective of the ANSI T11 committee is to define interface standards for high-performance and mass storage applications. Since that time, the committee has completed work on three projects: High-Performance Parallel Interface (HIPPI) Intelligent Peripheral Interface (IPI) Single-Byte Command Code Sets Connection (SBCON) Currently the group is working on Fibre Channel (FC) and Storage Network Management (SM) standards. Fibre Channel Generic Services The Fibre Channel Generic Services (FC-GS-3) Directory Service and the Management Service are being used within IBM TotalStorage Productivity Center for the SAN management. The availability and level of function depends on the implementation by the individual vendor. IBM TotalStorage Productivity Center for Fabric uses this standard.2.2.2 Web-Based Enterprise Management Web-Based Enterprise Management (WBEM) is an initiative of the Distributed Management Task Force (DTMF) with the objective to enable the management of complex IT environments. It defines a set of management and Internet standard technologies to unify the management of complex IT environments. The three main conceptual elements of the WBEM initiative are: Common Information Model (CIM) CIM is a formal object-oriented modeling language that is used to describe the management aspects of systems. See also “Common Information Model” on page 47. xmlCIM This is a grammar to describe CIM declarations and messages used by the CIM protocol. Hypertext Transfer Protocol (HTTP) HTTP is used as a way to enable communication between a management application and a device that both use CIM.34 IBM TotalStorage Productivity Center V2.3: Getting Started
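To make the xmlCIM and HTTP pieces concrete, the following sketch shows roughly what a CIM-XML intrinsic method call looks like on the wire. It is illustrative only: the CIMOM host, port, URL path, namespace, and credentials are placeholder assumptions (5988 is the conventional unencrypted CIM-over-HTTP port), the payload is a minimal EnumerateInstanceNames request, and a real management application such as Productivity Center uses a full WBEM client stack rather than hand-built XML. The exact headers and paths accepted vary from CIMOM to CIMOM.

```python
# Illustrative only: posts a minimal CIM-XML EnumerateInstanceNames request
# to a hypothetical CIMOM. Host, port, path, namespace, and credentials are
# placeholders, not values from the product documentation.
import base64
import http.client

CIM_XML = """<?xml version="1.0" encoding="utf-8"?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
 <MESSAGE ID="1001" PROTOCOLVERSION="1.0">
  <SIMPLEREQ>
   <IMETHODCALL NAME="EnumerateInstanceNames">
    <LOCALNAMESPACEPATH>
     <NAMESPACE NAME="root"/><NAMESPACE NAME="cimv2"/>
    </LOCALNAMESPACEPATH>
    <IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="CIM_ComputerSystem"/></IPARAMVALUE>
   </IMETHODCALL>
  </SIMPLEREQ>
 </MESSAGE>
</CIM>"""

conn = http.client.HTTPConnection("cimom.example.com", 5988)   # placeholder CIMOM
headers = {
    "Content-Type": 'application/xml; charset="utf-8"',
    "CIMOperation": "MethodCall",            # CIM-over-HTTP operation headers
    "CIMMethod": "EnumerateInstanceNames",
    "CIMObject": "root/cimv2",
    "Authorization": "Basic " + base64.b64encode(b"user:password").decode(),
}
conn.request("POST", "/cimom", body=CIM_XML.encode("utf-8"), headers=headers)
response = conn.getresponse()
print(response.status, response.read()[:200])   # the CIMOM answers with CIM-XML
```

A higher-level client library hides all of this plumbing; a sketch of that style appears later in this chapter with the CIM Agent work flow.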
    • The WBEM architecture defines the following elements: CIM Client The CIM Client is a management application similar to IBM TotalStorage Productivity Center that uses CIM to manage devices. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents. CIM Managed Object A CIM Managed Object is a hardware or software component that can be managed by a management application using CIM. CIM Agent The CIM Agent is embedded into a device or it can be installed on the server using the CIM provider as the translator of device’s proprietary commands to CIM calls, and interfaces with the management application (the CIM Client). The CIM Agent is linked to one device. CIM Provider A CIM Provider is the element that translates CIM calls to the device-specific commands. It is like a device driver. A CIM Provider is always closely linked to a CIM Object Manager or CIM Agent. CIM Object Manager A CIM Object Manager (CIMOM) is a part of the CIM Server that links the CIM Client to the CIM Provider. It enables a single CIM Agent to talk to multiple devices. CIM Server A CIM Server is the software that runs the CIMOM and the CIM provider for a set of devices. This approach is used when the devices do not have an embedded CIM Agent. This term is often not used. Instead people often use the term CIMOM when they really mean the CIM Server.2.2.3 Storage Networking Industry Association The Storage Networking Industry Association (SNIA) defines standards that are used within IBM TotalStorage Productivity Center. You can find more information on the Web at: http://www.snia.org Fibre Channel Common HBA API The Fibre Channel Common HBA API is used as a standard for inband storage management. It acts as a bridge between a SAN management application like Fabric Manager and the Fibre Channel Generic Services. IBM TotalStorage Productivity Center for Fabric Agent uses this standard. Storage Management Initiative - Specification SNIA has fully adopted and enhanced the CIM for Storage Management in its SMI-S. SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices including storage networks. Chapter 2. Key concepts 35
    • The idea behind SMI-S is to standardize the management interfaces so that management applications can use these and provide cross device management. This means that a newly introduced device can be immediately managed as it conforms to the standards. SMI-S extends CIM and WBEM with the following features: A single management transport Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S. A complete, unified, and rigidly specified object model SMI-S defines profiles and recipes within the CIM that enables a management client to reliably use a component vendor’s implementation of the standard, such as the control of LUNs and zones in the context of a SAN. Consistent use of durable names As a storage network configuration evolves and is re-configured, key long-lived resources, such as disk volumes, must be uniquely and consistently identified over time. Rigorously documented client implementation considerations SMI-S provides client developers with vital information for traversing CIM classes within a device or subsystem and between devices and subsystems such that complex storage networking topologies can be successfully mapped and reliably controlled. An automated discovery system SMI-S compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents using SLP (see 2.3.1, “SLP architecture” on page 38). Resource locking SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources through a lock manager. The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform, and enabling them to run on different platforms. The SNIA also provides interoperability tests which help vendors to test their applications and devices if they conform to the standard. Managers or components that use this standard include: IBM TotalStorage Productivity Center for Disk IBM TotalStorage Productivity Center for Replication IBM TotalStorage Productivity Center for Data2.2.4 Simple Network Management Protocol The SNMP is an Internet Engineering Task Force (IETF) protocol for monitoring and managing systems and devices in a network. Functions supported by the SNMP protocol are the request and retrieval of data, the setting or writing of data, and traps that signal the occurrence of events. SNMP is a method that enables a management application to query information from a managed device. The managed device has software running that sends and receives the SNMP information. This software module is usually called the SNMP agent.36 IBM TotalStorage Productivity Center V2.3: Getting Started
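As a concrete illustration of the polling model just described, the sketch below issues a single SNMPv2c GET for sysDescr against a switch. It assumes the third-party pysnmp package is installed; the host name and community string are placeholders and are not tied to any TotalStorage Productivity Center component.

```python
# Minimal SNMPv2c GET, assuming the third-party pysnmp package (pip install pysnmp).
# The switch host name and community string are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                  # SNMPv2c community
    UdpTransportTarget(("fcswitch1.example.com", 161)),  # managed device
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),     # sysDescr.0
))

if error_indication:
    print("poll failed:", error_indication)
elif error_status:
    print("SNMP error:", error_status.prettyPrint())
else:
    for var_bind in var_binds:
        # prints the OID and the value returned by the SNMP agent
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

An SNMP manager simply repeats this kind of request on an interval, which is exactly the polling behavior described in the next paragraphs.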
Device management

An SNMP manager can read information from an SNMP agent to monitor a device. Therefore, the device needs to be polled at regular intervals. The SNMP manager can also change the configuration of a device by setting certain values to corresponding variables. Managers or components that use these standards include IBM TotalStorage Productivity Center for Fabric.

Traps

A device can also be set up to send a notification to the SNMP manager (this is called a trap) to asynchronously inform the SNMP manager of a status change. Depending on the existing environment and organization, it is likely that your environment already has an SNMP management application in place. The managers or components that use this standard are:
- IBM TotalStorage Productivity Center for Fabric (sending and receiving of traps)
- IBM TotalStorage Productivity Center for Data, which can be set up to send traps, but does not receive traps
- IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication, whose events can be sent as SNMP traps by utilizing the IBM Director infrastructure

Management Information Base

SNMP uses a hierarchically structured Management Information Base (MIB) to define the meaning and the type of a particular value. An MIB defines managed objects that describe the behavior of the SNMP entity, which can be anything from an IP router to a storage subsystem. The information is organized in a tree structure.

Note: For more information about SNMP, refer to TCP/IP Tutorial and Technical Overview, GG24-3376.

IBM TotalStorage Productivity Center for Data MIB file

For users planning to use the IBM TotalStorage Productivity Center for Data SNMP trap alert notification capabilities, an SNMP MIB is included in the server installation. You can find the SNMP MIB in the file tivoli_install_directory/snmp/tivoliSRM.MIB. The MIB is provided for use by your SNMP management console software. Most SNMP management station products provide a program called an MIB compiler that can be used to import MIBs. This allows you to better view Productivity Center for Data generated SNMP traps from within your management console software. Refer to your management console software documentation for instructions on how to compile or import a third-party MIB.

2.2.5 Fibre Alliance MIB

The Fibre Alliance has defined an MIB for the management of storage devices and is presenting it to the IETF for standardization. The intention of putting together this MIB was to have one MIB that covers most (if not all) of the attributes of storage devices from multiple vendors. The idea was to have only one MIB that is loaded onto an SNMP manager, rather than one MIB file for each component. However, this requires that all devices comply with that standard MIB, which is not always the case.
    • Note: This MIB is not part of IBM TotalStorage Productivity Center. To learn more about Fibre Alliance and MIB, refer to the following Web sites: http://www.fibrealliance.org http://www.fibrealliance.org/fb/mib_intro.htm2.3 Service Location Protocol (SLP) overview The SLP is an IETF standard, documented in Request for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services. SLP enables the discovery and selection of generic services, which can range in function from hardware services such as those for printers or fax machines, to software services such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end-user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user can specify to search for all available printers that support PostScript, based on the given service type (printers), and the given attributes (PostScript). SLP searches the user’s network for any matching services and returns the discovered list to the user.2.3.1 SLP architecture The SLP architecture includes three major components, a Service Agent (SA), a User Agent (UA), and a Directory Agent (DA). The SA and UA are required components in an SLP environment, where the SLP DA is optional. The SMI-S specification introduces SLP as the method for the management applications (the CIM clients) to locate managed objects. In SLP, an SA is used to report to UAs that a service that has been registered with the SA is available. The following sections describe each of these components. Service Agent (SA) The SLP SA is a component of the SLP architecture that works on behalf of one or more network services to broadcast the availability of those services by using broadcasts. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available.38 IBM TotalStorage Productivity Center V2.3: Getting Started
    • The SA can run in the same process or in a different process as the service itself. In eithercase, the SA supports registration and de-registration requests for the service (as shown inthe right part of Figure 2-5). The service registers itself with the SA during startup, andremoves the registration for itself during shutdown. In addition, every service registration isassociated with a life-span value, which specifies the time that the registration will be active.In the left part of the diagram, you can see the interaction between a UA and the SA.Figure 2-5 SLP SA interactions (without SLP DA)A service is required to reregister itself periodically, before the life-span of its previousregistration expires. This ensures that expired registration entries are not kept. For instance, ifa service becomes inactive without removing the registration for itself, that old registration isremoved automatically when its life span expires. The maximum life span of a registration is65535 seconds (about 18 hours).User Agent (UA)The SLP UA is a process working on the behalf of the user to establish contact with somenetwork service. The UA retrieves (or queries for) service information from the ServiceAgents or Directory Agents.The UA is a component of SLP that is closely associated with a client application or a userwho is searching for the location of one or more services in the network. You can use the SLPUA by defining a service type that you want the SLP UA to locate. The SLP UA then retrievesa set of discovered services, including their service Uniform Resource Locator (URL) and anyservice attributes. You can then use the service’s URL to connect to the service.The SLP UA locates the registered services, based on a general description of the servicesthat the user or client application has specified. This description usually consists of a servicetype, and any service attributes, which are matched against the service URLs registered inthe SLP Service Agents.The SLP UA usually runs in the same process as the client application, although it is notnecessary to do so. The SLP UA processes find requests by sending out multicast messagesto the network and targeting all SLP SAs within the multicast range with a single UserDatagram Protocol (UDP) message. The SLP UA can, therefore, discover these SAs with aminimum of network overhead. When an SA receives a service request, it compares its ownregistered services with the requested service type and any service attributes, if specified,and returns matches to the UA using a unicast reply message. Chapter 2. Key concepts 39
    • The SLP UA follows the multicast convergence algorithm and sends repeated multicast messages until no new replies are received. The resulting set of discovered services, including their service URL and any service attributes, are returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using the service’s URL (see Figure 2-6). Figure 2-6 SLP UA interactions without SLP DA An SLP UA is not required to discover all matching services that exist in the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets. They can be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs can recognize truncated service replies and establish TCP connections to retrieve all of the information of the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range. This can cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs. Directory Agent The SLP DA is an optional component of SLP that collects and caches network service broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP performance. You can consider the SLP DA as an intermediate tier in the SLP architecture. It is placed between the UAs and the SAs so that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic in the network. It also protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment.40 IBM TotalStorage Productivity Center V2.3: Getting Started
    • Figure 2-7 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.Figure 2-7 SLP User Agent interactions with User Agent and Service AgentWhen SLP DAs are present, the behavior of both SAs and UAs changes significantly. Whenan SA is first initializing, it performs a DA discovery using a multicast service request. It alsospecifies the special, reserved service type service:directory-agent. This process is alsocalled active DA discovery. It is achieved through the same mechanism as any otherdiscovery using SLP.Similarly, in most cases, an SLP UA also performs active DA discovery using multicastingwhen it first starts. However, if the SLP UA is statically configured with one or more DAaddresses, it uses those addresses instead. If it is aware of one or more DAs, either throughstatic configuration or active discovery, it sends unicast service requests to those DAs insteadof multicasting to SAs. The DA replies with unicast service replies, providing the requestedservice URLs and attributes. Figure 2-8 shows the interactions of UAs and SAs with DAs,during active DA discovery.Figure 2-8 SLP Directory Agent discovery interactions Chapter 2. Key concepts 41
    • The SLP DA functions similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences, where DAs provide more functionality than SAs. One area, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA’s IP address. This allows SAs and UAs to perform active discovery on DAs. One other difference is that when a DA first initializes, it sends a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that may already be active in the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When the SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-9 shows the interactions of DAs with SAs and UAs, during passive DA discovery. Figure 2-9 Service Location Protocol passive DA discovery Why use an SLP DA? The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs must unicast to DAs for service and SAs must register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs scopes reduce multicast. By eliminating multicast for normal UA request, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. In networks without multicasting enabled, you can configure SLP to use broadcast. However, broadcast is inefficient, because it requires each host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets.42 IBM TotalStorage Productivity Center V2.3: Getting Started
    • When to use DAsUse DAs in your enterprise when any of the following conditions are true: Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop. UA clients experience long delays or timeouts during multicast service request. You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. Your network does not have multicast enabled and consists of multiple subnets that must share services.SLP communicationSLP uses three methods to send messages across an IP network: unicast, broadcast, ormulticast. Data can be sent to one single destination (unicast) or to multiple destinations thatare listening at the same time (multicast). The difference between a multicast and a broadcastis quite important. A broadcast addresses all stations in a network. Multicast messages areonly used by those stations that are members of a multicast group (that have joined amulticast group).UnicastThe most common communication method, unicast, requires that a sender of a messageidentifies one and only one target of that message. The target IP address is encoded withinthe message packet, and is used by the routers along the network path to route the packet tothe proper destination.If a sender wants to send the same message to multiple recipients, then multiple messagesmust be generated and placed in the network, one message per recipient. When there aremany potential recipients for a particular message, then this places an unnecessary strain onthe network resources, since the same data is duplicated many times, where the onlydifference is the target IP address encoded within the messages.BroadcastIn cases where the same message must be sent to many targets, broadcast is a much betterchoice than unicast, since it puts much less strain in the network. Broadcasting uses a specialIP address, 255.255.255.255, which indicates that the message packet is intended to be sentto all nodes in a network. As a result, the sender of a message needs to generate only asingle copy of that message, and can still transmit it to multiple recipients, that is to allmembers of the network.The routers multiplex the message packet, as it is sent along all possible routes in thenetwork to reach all possible destinations. This puts much less strain on the networkbandwidth, since only a single message stream enters the network, as opposed to onemessage stream per recipient. However, it puts much more strain on the individual nodes(and routers) in the network, since every node receives the message, even though most likelynot every node is interested in the message. This means that those members of the networkthat were not the intended recipients, who receive the message anyway, must receive theunwanted message and discard it. Due to this inefficiency, in most network configurations,routers are configured to not forward any broadcast traffic. This means that any broadcastmessages can only reach nodes on the same subnet as the sender.MulticastThe ability of the SLP to automatically discover services that are available in the network,without a lot of setup or configuration, depends in a large part on the use of IP multicasting. IPmulticasting is a broad subject in itself, and only a brief and simple overview is provided here. Chapter 2. Key concepts 43
    • Multicasting can be thought of as more sophisticated broadcast, which aims to solve some of the inefficiencies inherent in the broadcasting mechanism. With multicasting, again the sender of a message has to generate only a single copy of the message, saving network bandwidth. However unlike broadcasting, with multicasting, not every member of the network receives the message. Only those members who have explicitly expressed an interest in the particular multicast stream receive the message. Multicasting introduces a concept called a multicast group, where each multicast group is associated with a specific IP address. A particular network node (host) can join one or more multicast groups, which notifies the associated router or routers that there is an interest in receiving multicast streams for those groups. When the sender, who does not necessarily have to be part of the same group, sends messages to a particular multicast group, that message is routed appropriately to only those subnets, which contain members of that multicast group. This avoids flooding the entire network with the message, as is the case for broadcast traffic. Multicast addresses The Internet Assigned Numbers Authority (IANA), which controls the assignment of IP addresses, has assigned the old Class D IP address range to be used for IP multicasting. Of this entire range, which extends from 224.0.0.0 to 239.255.255.255, the 224.0.0.* addresses are reserved for router management and communication. Some of the 224.0.1.* addresses are reserved for particular standardized multicast applications. Each of the remaining addresses corresponds to a particular general purpose multicast group. The Service Location Protocol uses address 239.255.255.253 for all its multicast traffic. The port number for SLP is 427, for both unicast and multicast. Configuration recommendations Ideally, after IBM TotalStorage Productivity Center is installed, it would discover all storage devices that it can physically reach over the IP network. However in most situations, this is not the case. This is primarily due to the previously mentioned limitations of multicasting and the fact that the majority of routers have multicasting disabled by default. As a result, in most cases without any additional configuration, IBM TotalStorage Productivity Center discovers only those storage devices that reside in its own subnet, but no more. The following sections provide some configuration recommendations to enable TotalStorage Productivity Center to discover a larger set of storage devices. Router configuration The vast majority of the intelligence that allows multicasting to work is implemented in the router operating system software. As a result, it is necessary to properly configure the routers in the network to allow multicasting to work effectively. Unfortunately, there is a dizzying array of protocols and algorithms which can be used to configure particular routers to enable multicasting. These are the most common ones: Internet Group Management Protocol (IGMP) is used to register individual hosts in particular multicast groups, and to query group membership on particular subnets. Distance Vector Multicast Routing Protocol (DVMRP) is a set of routing algorithms that use a technique called Reverse Path Forwarding to decide how multicast packets are to be routed in the network. Protocol-Independent Multicast (PIM) comes in two varieties: dense mode (PIM-DM) and sparse mode (PIM-SM). 
They are optimized for networks where either a large percentage of nodes require multicast traffic (dense) or a small percentage require the traffic (sparse).
    • Multicast Open Shortest Path First (MOSPF) is an extension of OSPF, a “link-state” unicast routing protocol that attempts to find the shortest path between any two networks or subnets to provide the most optimal routing of packets.The routers of interest are all those which are associated with subnets that contain one ormore storage devices which are to be discovered and managed by TotalStorage ProductivityCenter. You can configure the routers in the network to enable multicasting in general, or atleast to allow multicasting for the SLP multicast address, 239.255.255.253, and port, 427.This is the most generic solution and permits discovery to work the way that it was intendedby the designers of SLP.To properly configure your routers for multicasting, refer to your router manufacturer’sreference and configuration documentation. Although older hardware may not supportmulticasting, all modern routers do. However, in most cases, multicast support is disabled bydefault, which means that multicast traffic is sent only among the nodes of a subnet but is notforwarded to other subnets. For SLP, this means that service discovery is limited to only thoseagents which reside in the same subnet.Firewall configurationIn the case where one or more firewalls are used between TotalStorage Productivity Centerand the storage devices that are to be managed, the firewalls need to be configured to passtraffic in both directions, as SLP communication is two way. This means that whenTotalStorage Productivity Center, for example, queries an SLP DA that is behind a firewall forthe registered services, the response will not use an already opened TCP/IP session but willestablish another connection in the direction from the SLP DA to the TotalStorageProductivity Center. For this reason, port 427 should be opened in both directions, otherwisethe response will not be received and TotalStorage Productivity Center will not recognizeservices offered by this SLP DA.SLP DA configurationIf router configuration is not feasible, another technique is to use SLP DAs to circumvent themulticast limitations. Since with statically configured DAs, all service requests are unicastinstead of multicast by the UA, it is possible to simply configure one DA for each subnet thatcontains storage devices which are to be discovered by TotalStorage Productivity Center.One DA is sufficient for each of such subnets, although more can be configured without harm,perhaps for reasons of fault tolerance. Each of these DAs can discover all services within itsown subnet, but no other services outside its own subnet. To allow Productivity Center todiscover all of the devices, you must statically configure it with the addresses of each of theseDAs. You accomplish this using the IBM Director GUI’s Discovery Preference panel. From theMDM SLP Configuration tab, you can enter a list of DA addresses.As described previously, Productivity Center unicasts service requests to each of thesestatically configured DAs, but also multicasts service requests on the local subnet on whichProductivity Center is installed. Figure 2-10 on page 46 displays a sample environment whereDAs have been used to bridge the multicast gap between subnets in this manner. Note: At this time, you cannot set up IBM TotalStorage Productivity Center for Data to use remote DAs such as Productivity Center for Disk and Productivity Center for Replication. 
You need to define all remote CIM Agents by creating a new entry in the CIMOM Login panel, or you can register the remote services with a DA that resides in the local subnet. Refer to “Configuring the CIM Agents” on page 290 for detailed information.
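As a quick check related to the router and firewall guidance above, the sketch below joins the SLP multicast group and reports where incoming SLP traffic originates, which helps confirm whether multicasts from other subnets actually reach a given host. It is a diagnostic sketch only, not part of the product; binding to port 427 may require administrator privileges and fails if a local SLP agent already owns the port.

```python
# Diagnostic sketch: listen on the SLP multicast group (239.255.255.253:427)
# and report the source of whatever SLP traffic arrives on this host.
# Needs sufficient privileges; fails if a local SLP agent already binds port 427.
import socket
import struct

SLP_GROUP, SLP_PORT = "239.255.255.253", 427

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SLP_PORT))

# Join the SLP multicast group on all interfaces.
membership = struct.pack("4sl", socket.inet_aton(SLP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

print("Waiting for SLP multicast traffic (Ctrl-C to stop) ...")
while True:
    data, (sender, port) = sock.recvfrom(1500)
    print(f"{len(data)} bytes of SLP traffic from {sender}:{port}")
```

If traffic only ever arrives from addresses in the local subnet, the routers are not forwarding the SLP multicast group and the DA-based configuration described next becomes the practical alternative.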
Figure 2-10 Recommended SLP configuration

You can easily configure an SLP DA by changing the configuration of the SLP SA included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA instead. The procedure to perform this configuration is explained in 6.2, “SLP DA definition” on page 248. Note that the change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function as normal, sending registration and de-registration commands to the DA directly.

SLP configuration with services outside the local subnet

An SLP DA and SA can also be configured to cache CIM service information from non-local subnets. Usually, CIM Agents or CIMOMs have a local SLP SA function. When there is a need to discover CIM services outside the local subnet and the network configuration does not permit the use of an SLP DA in each of them (for example, firewall rules do not allow two-way communication on port 427), remote services can be registered on the SLP DA in the local subnet. This configuration can be done by using slptool, which is part of the SLP installation packages. Such registration is not persistent across system restarts. To achieve persistent registration of services outside of the local subnet, these services need to be defined in the registration file used by the SLP DA at startup. Refer to 5.7.3, “Setting up the Service Location Protocol Directory Agent” on page 221 for information on setting up the slp.reg file.
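Because the steps above lean on slptool, the following sketch shows how the same registration and lookup could be scripted. It assumes an OpenSLP-style slptool on the PATH; the CIMOM service URL is a placeholder. As noted above, registrations made this way do not survive a restart, so persistent entries still belong in slp.reg.

```python
# Sketch around an OpenSLP-style slptool (assumed to be installed and on the PATH).
# The CIMOM service URL below is a placeholder, not a real system.
import subprocess

def register_remote_cimom(service_url: str) -> None:
    """Register a CIMOM from another subnet with the local SLP SA/DA.

    Not persistent across restarts; add the entry to slp.reg for that.
    """
    subprocess.run(["slptool", "register", service_url], check=True)

def find_wbem_services() -> list[str]:
    """Return the service URLs of all CIMOMs currently registered locally."""
    result = subprocess.run(["slptool", "findsrvs", "service:wbem"],
                            capture_output=True, text=True, check=True)
    # slptool prints one "<service-url>,<lifetime>" entry per line
    return [line.rsplit(",", 1)[0]
            for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    register_remote_cimom("service:wbem:https://cimom1.example.com:5989")
    for url in find_wbem_services():
        print(url)
```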
    • 2.3.2 Common Information Model The CIM Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the common information model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors. CIM uses schemas as a kind of class library to define objects and methods. The schemas can be categorized into three types: Core schema: Defines classes and relationships of objects Common schema: Defines common components of systems Extension schema: Entry point for vendors to implement their own schema The CIM/WBEM architecture defines the following elements: Agent code or CIM Agent An open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. The Agent is embedded into a device, which can be hardware or software. CIM Object Manager The common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or a device provider such as a CIM Agent. Client application or CIM Client A storage management program, such as TotalStorage Productivity Center, that initiates CIM requests to the CIM Agent for the device. A CIM Client can reside anywhere in the network, because it uses HTTP to talk to CIM Object Managers and Agents. Device or CIM Managed Object A Managed Object is a hardware or software component that can be managed by a management application by using CIM, for example, a IBM SAN Volume Controller. Device provider A device-specific handler that serves as a plug-in for the CIMOM. That is, the CIMOM uses the handler to interface with the device. Note: The terms CIM Agent and CIMOM are often used interchangeably. At this time, few devices come with an integrated CIM Agent. Most devices need a external CIMOM for CIM to enable management applications (CIM Clients) to talk to the device. For ease of installation, IBM provides an Integrated Configuration Agent Technology (ICAT), which is a bundle that includes the CIMOM, the device provider, and an SLP SA. Integrating legacy devices into the CIM model Since these standards are still evolving, we cannot expect that all devices will support the native CIM interface. Because of this, the SMI-S is introducing CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to device management models and protocols used by SMI-S. The agent is used for one device and an object manager for a set of devices. This type of operation is also called proxy model and is shown in Figure 2-11 on page 48. Chapter 2. Key concepts 47
The CIM Agent or CIMOM translates a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage ESS includes a CIMOM inside it. In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the “Embedded Model” in Figure 2-11.

When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.

Figure 2-11 CIM Agent and Object Manager overview

CIM Agent implementation

When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables IBM TotalStorage Productivity Center for Data, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively.

By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-12 on page 49 shows an overview of the CIM Agent.
    • Figure 2-12 CIM Agent overview The CIM Agent includes a CIMOM, which adapts various devices using a plug-in called a provider. The CIM Agent can work as a proxy or can be embedded in storage devices. When the CIM Agent is installed as a proxy, the IBM CIM Agent can be installed on the same server that supports the device user interface. CIM Object Manager The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between the catcher and sender use the language and models defined by the SMI-S standard. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions.2.4 Component interaction This section provides an overview of the interactions between the different components by using standardized management methods and protocols.2.4.1 CIMOM discovery with SLP The SMI-S specification introduces SLP as the method for the management applications (the CIM clients) to locate managed objects. SLP is explained in more detail in 2.3, “Service Location Protocol (SLP) overview” on page 38. Figure 2-13 on page 50 shows the interaction between CIMOMs and SLP components. Chapter 2. Key concepts 49
Figure 2-13 SMI-S extensions to WBEM/CIM

2.4.2 How CIM Agent works

The CIM Agent typically works as explained in the following sequence and as shown in Figure 2-14 on page 51:
1. The client application locates the CIMOM by calling an SLP directory service.
2. The CIMOM is invoked.
3. The CIMOM registers itself to the SLP and supplies its location, IP address, port number, and the type of service it provides.
4. With this information, the client application starts to communicate directly with the CIMOM.
5. The client application sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request.
6. The CIMOM directs the requests to the appropriate functional component of the CIMOM or to a device provider.
7. The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy client application requests.
8.–10. The client application requests are made.
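From the client application's point of view, most of this sequence is handled by a WBEM client library, which takes care of the CIM-XML encoding and the HTTP transport. The sketch below uses the open-source pywbem package purely as an illustration of steps 4 through 7; the CIMOM URL, credentials, and namespace are placeholders, and the namespace to use differs from CIMOM to CIMOM.

```python
# Illustrative CIM client using the open-source pywbem package (pip install pywbem).
# The CIMOM URL, credentials, and namespace are placeholders, not product defaults.
import pywbem

# Step 1 equivalent: in practice the URL would come from an SLP lookup (service:wbem).
conn = pywbem.WBEMConnection(
    "https://cimom1.example.com:5989",
    creds=("cimuser", "password"),
    default_namespace="root/cimv2",   # varies by CIMOM
    no_verification=True,             # lab shortcut; verify certificates in production
)

# Steps 5-7 from the client side: the library encodes the request as CIM-XML,
# the CIMOM and its device provider do the device-specific work, and the
# resulting instance paths come back over HTTP.
for path in conn.EnumerateInstanceNames("CIM_ComputerSystem"):
    print(path)
```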
    • Figure 2-14 CIM Agent work flow2.5 Tivoli Common Agent Services The Tivoli Common Agent Services is a new concept with the goal to provide a set of functions for the management of agents that will be common to all Tivoli products. At the time of this writing, IBM TotalStorage Productivity Center for Fabric and IBM TotalStorage Productivity Center for Data are the first applications that use this new concept. See Figure 2-15 on page 52 for an overview of the three elements in the Tivoli Common Agent Services infrastructure. In each of the planning and installation guides of the Productivity Center for Fabric and Productivity Center for Data, there is a chapter that provides information about the benefits, system requirements and sizing, security considerations, and the installation procedures. The Agent Manager is the central network element, that together with the distributed Common Agents, builds an infrastructure which is used by other applications to deploy and manage an agent environment. Each application uses a Resource Manager that is built into the application server (Productivity Center for Data or Productivity Center for Fabric) to integrate in this environment. Note: You can have multiple Resource Managers of the same type using a single Agent Manager. This may be necessary to scale the environment when, for example, one Data Manager cannot handle the load any more. The Agents will be managed by only one of the Data Managers as in this example. Chapter 2. Key concepts 51
    • Figure 2-15 Tivoli Common Agent Services
The Common Agent provides the platform for the application-specific agents. Depending on the tasks for which a subagent is used, the Common Agent is installed on the customers’ application servers, desktop PCs, or notebooks.
Note: In different documentation, Readme files, directory and file names, you also see the terms Common Endpoint, Endpoint, or simply EP. This always refers to the Common Agent, which is part of the Tivoli Common Agent Services.
The Common Agent communicates with the application-specific subagent, with the Agent Manager, and with the Resource Manager, but the actual system-level functions are invoked by the subagent. The information that the subagent collects is sent directly to the Resource Manager by using the application’s native protocol. This makes it possible to have down-level agents in the same environment as the new agents that are shipped with the IBM TotalStorage Productivity Center.
Certificates are used to validate whether a requester is allowed to establish communication. Demo keys are supplied so that you can quickly set up and configure a small environment; however, because every installation CD uses the same certificates, this is not secure. If you want to use Tivoli Common Agent Services in a production environment, we recommend that you use your own keys, which can be created during the Tivoli Agent Manager installation.
One of the most important certificates is stored in the agentTrust.jks file. The certificate can also be created during the installation of Tivoli Agent Manager. If you do not use the demo certificates, you need to have this file available during the installation of the Common Agent and the Resource Manager. This file is locked with a password (the agent registration password) to secure the access to the certificates. You can use the ikeyman utility in the java\jre subdirectory to verify your password.
52 IBM TotalStorage Productivity Center V2.3: Getting Started
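If you prefer a scripted check to the graphical ikeyman utility, the same password verification can be done with the JDK keytool command, as in the sketch below. The keystore location shown is the default certs directory mentioned later in this book, and the password is the demonstration default; treat both as values to adjust for your installation.

import subprocess

# Minimal sketch: verify the agent registration password by opening the
# agentTrust.jks keystore with the JDK "keytool" command.
# Path and password are placeholders; use the values from your installation.
KEYSTORE = r"C:\Program Files\IBM\AgentManager\certs\agentTrust.jks"
PASSWORD = "changeMe"   # demonstration default; not suitable for production

result = subprocess.run(
    ["keytool", "-list", "-keystore", KEYSTORE, "-storepass", PASSWORD],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Password accepted. Keystore entries:")
    print(result.stdout)
else:
    print("keytool could not open the keystore:")
    print(result.stderr or result.stdout)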
    • 2.5.1 Tivoli Agent Manager The Tivoli Agent Manager requires a database to store information in what is called the registry. Currently there are three options for installing the database: using IBM Cloudscape™ (provided on the installation CD), a local DB2 database, or a remote DB2 database. Since the registry does not contain much information, using the Cloudscape database is OK. In our setup described later in the book, we chose a local DB2 database, because the DB2 database was required for another component that was installed on the same machine. WebSphere Application Server is the second prerequisite for the Tivoli Agent Manager. This is installed if you use the Productivity Center Suite Installer or if you choose to use the Tivoli Agent Manager installer. We recommend that you do not install WebSphere Application Server manually. Three dedicated ports are used by the Agent Manager (9511-9513). Port 9511 is the most important port because you have to enter this port during the installation of a Resource Manager or Common Agent, if you choose to change the defaults. When the WebSphere Application Server is being installed, make sure that the Microsoft Internet Information Server (IIS) is not running, or even better that it is not installed. Port 80 is used by the Tivoli Agent Manager for the recovery of agents that can no longer communicate with the manager, because of lost passwords or certificates. This Agent Recovery Service is located by a DNS entry with the unqualified host name of TivoliAgentRecovery. Periodically, check the Agent Manager log for agents that are unable to communicate with the Agent Manager server. The recovery log is in the %WAS_INSTALL_ROOT%AgentManager logsSystemOut.log file. Use the information in the log file to determine why the agent could not register and then take corrective action. During the installation, you also have to specify the agent registration password and the Agent Registration Context Root. The password is stored in the AgentManager.properties file on the Tivoli Agent Manager. This password is also used to lock the agentTrust.jks certificate file. Important: A detailed description about how to change the password is available in the corresponding Resource Manager Planning and Installation Guide. Since this involves redistributing the agentTrust.jks files to all Common Agents, we encourage you to use your own certificates from the beginning. To control the access from the Resource Manager to the Common Agent, certificates are used to make sure that only an authorized Resource Manager can install and run code on a computer system. This certificate is stored in the agentTrust.jks and locked with the agent registration password.2.5.2 Common Agent As mentioned earlier, the Common Agent is used as a platform for application specific agents. These agents sometimes are called subagents. The subagents can be installed using two different methods: Using an application specific installer From a central location once the Common Agent is installed Chapter 2. Key concepts 53
    • When you install the software, the agent has to register with the Tivoli Agent Manager. During this procedure, you need to specify the registration port on the manager (by default 9511). Furthermore, you need to specify an agent registration password. This registration is performed by the Common Agent, which is installed automatically if not already installed. If the subagent is deployed from a central location, the port 9510 is by default used by the installer (running on the central machine), to communicate with the Common Agent to download and install the code. When this method is used, no password or certificate is required, because these were already provided during the Common Agent installation on the machine. If you choose to use your own certificate during the Tivoli Agent Manager installation, you need to supply it for the Common Agent installation.54 IBM TotalStorage Productivity Center V2.3: Getting Started
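Because both registration paths depend on a few well-known ports, a quick reachability test from the machine that will run a Common Agent or Resource Manager can save troubleshooting later. The sketch below simply attempts TCP connections to the default Agent Manager ports discussed in 2.5.1 and above; the Agent Manager host name is a placeholder for your own server.

import socket

# Minimal sketch: confirm that the default Agent Manager ports are reachable
# from a machine that will run a Common Agent or a Resource Manager.
# Replace AGENT_MANAGER_HOST with the host name of your Agent Manager.
AGENT_MANAGER_HOST = "agentmanager.example.com"
PORTS = {
    9511: "agent and resource manager registration",
    9512: "configuration updates and certificate renewal",
    9513: "Agent Manager queries and truststore download",
    80:   "agent recovery service",
}

for port, purpose in PORTS.items():
    try:
        with socket.create_connection((AGENT_MANAGER_HOST, port), timeout=5):
            print(f"Port {port} ({purpose}): reachable")
    except OSError as error:
        print(f"Port {port} ({purpose}): not reachable ({error})")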
    • Part 2Part 2 Installing the IBM TotalStorage Productivity Center base product suite In this part of the book we provide information to help you successfully install the prerequisite products that are required before you can install the IBM TotalStorage Productivity Center product suite. This includes installing: DB2 IBM Director WebSphere Application Server Tivoli Agent Manager IBM TotalStorage Productivity Center for Disk IBM TotalStorage Productivity Center for Replication© Copyright IBM Corp. 2005. All rights reserved. 55
    • 56 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 3 Chapter 3. Installation planning and considerations IBM TotalStorage Productivity Center is made up of several products which can be installed individually, as a complete suite, or any combination in between. By installing multiple products, a synergy is created which allows the products to interact with each other to provide a more complete solution to help you meet your business storage management objectives. This chapter contains information that you will need before beginning the installation. It also discusses the supported environments and pre-installation tasks.© Copyright IBM Corp. 2005. All rights reserved. 57
    • 3.1 Configuration You can install the storage management components of IBM TotalStorage Productivity Center on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are: Windows 2000 Server with Service Pack 4 Windows 2000 Advanced Server Windows 2003 Enterprise Edition Note: Refer to the following Web site for the updated support summaries, including specific software, hardware, and firmware levels supported: http://www.storage.ibm.com/software/index.html If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend that you install IBM Tivoli Provisioning Manager on a separate Windows machine.3.2 Installation prerequisites This section lists the minimum prerequisites for installing IBM TotalStorage Productivity Center. Hardware The following hardware is required: Dual Pentium® 4 or Intel® Xeon 2.4 GHz or faster processors 4 GB of DRAM Network connectivity Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional) 5 GB available disk space. Database You must comply with the following database requirements: The installation of DB2 Version 8.2 is part of the Prerequisite Software Installer and is required by all the managers. Other databases that are supported are: – For IBM TotalStorage Productivity Center for Fabric: • IBM Cloudscape 5.1.60 (provided on the CD) – For IBM TotalStorage Productivity Center for Data: • Microsoft SQL Server Version 7.0, 2000 • Oracle 8i, 9i, 9i V2 • Sybase SQL Server (Adaptive Server Enterprise) Version 12.5 or higher • IBM Cloudscape 5.1.60 (provided on the CD)58 IBM TotalStorage Productivity Center V2.3: Getting Started
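As a quick sanity check against the hardware minimums listed above, the following sketch verifies that the installation drive has at least 5 GB of free space. It uses only the Python standard library; the drive letter is an assumption that you may need to change.

import shutil

# Minimal sketch: check the 5 GB free-space minimum on the installation drive.
REQUIRED_GB = 5
DRIVE = "C:\\"   # assumed installation drive

free_gb = shutil.disk_usage(DRIVE).free / (1024 ** 3)
if free_gb >= REQUIRED_GB:
    print(f"OK: {free_gb:.1f} GB free on {DRIVE}")
else:
    print(f"Warning: only {free_gb:.1f} GB free on {DRIVE}; "
          f"at least {REQUIRED_GB} GB is required")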
    • 3.2.1 TCP/IP ports used by TotalStorage Productivity Center
This section provides an overview of the TCP/IP ports used by IBM TotalStorage Productivity Center.
TCP/IP ports used by Disk and Replication Manager
The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication Manager installation program preconfigures the TCP/IP ports used by WebSphere. Table 3-1 lists the values that correspond to the WebSphere ports.
Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication Base
Port value   WebSphere ports
427          SLP port
2809         Bootstrap port
9080         HTTP Transport port
9443         HTTPS Transport port
9090         Administrative Console port
9043         Administrative Console Secure Server port
5559         JMS Server Direct Address port
5557         JMS Server Security port
5558         JMS Server Queued Address port
8980         SOAP Connector Address port
7873         DRS Client Address port
TCP/IP ports used by Agent Manager
The Agent Manager uses the TCP/IP ports listed in Table 3-2.
Table 3-2 TCP/IP ports for Agent Manager
Port value   Usage
9511         Registering agents and resource managers
9512         Providing configuration updates; renewing and revoking certificates; querying the registry for agent information; requesting ID resets
9513         Requesting updates to the certificate revocation list; requesting Agent Manager information; downloading the truststore file
80           Agent recovery service
Chapter 3. Installation planning and considerations 59
    • TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric
The Fabric Manager uses the default TCP/IP ports listed in Table 3-3.
Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric
Port value   Usage
8080         NetView Remote Web console
9550         HTTP port
9551         Reserved
9552         Reserved
9553         Cloudscape server port
9554         NVDAEMON port
9555         NVREQUESTER port
9556         SNMPTrapPort port on which to get events forwarded from Tivoli NetView
9557         Reserved
9558         Reserved
9559         Tivoli NetView Pager daemon
9560         Tivoli NetView Object Database daemon
9561         Tivoli NetView Topology Manager daemon
9562         Tivoli NetView Topology Manager socket
9563         Tivoli General Topology Manager
9564         Tivoli NetView OVs_PMD request services
9565         Tivoli NetView OVs_PMD management services
9566         Tivoli NetView trapd socket
9567         Tivoli NetView PMD service
9568         Tivoli NetView General Topology map service
9569         Tivoli NetView Object Database event socket
9570         Tivoli NetView Object Collection facility socket
9571         Tivoli NetView Web Server socket
9572         Tivoli NetView SnmpServer
60 IBM TotalStorage Productivity Center V2.3: Getting Started
    • Fabric Manager remote console TCP/IP default ports
The Fabric Manager uses the ports in Table 3-4 for its remote console.
Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console
Port value   Usage
9560         HTTP port
9561         Reserved
9561         Reserved
9562         ASF Jakarta Tomcat’s Local Server port
9563         Tomcat’s warp port
9564         NVDAEMON port
9565         NVREQUESTER port
9569         Tivoli NetView Pager daemon
9570         Tivoli NetView Object Database daemon
9571         Tivoli NetView Topology Manager daemon
9572         Tivoli NetView Topology Manager socket
9573         Tivoli General Topology Manager
9574         Tivoli NetView OVs_PMD request services
9575         Tivoli NetView OVs_PMD management services
9576         Tivoli NetView trapd socket
9577         Tivoli NetView PMD service
9578         Tivoli NetView General Topology map service
9579         Tivoli NetView Object Database event socket
9580         Tivoli NetView Object Collection facility socket
9581         Tivoli NetView Web Server socket
9582         Tivoli NetView SnmpServer
Fabric Agents TCP/IP ports
The Fabric Agents use the TCP/IP ports listed in Table 3-5.
Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric Agents
Port value   Usage
9510         Common agent
9514         Used to restart the agent
9515         Used to restart the agent
Chapter 3. Installation planning and considerations 61
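Before starting the installation, it can be useful to confirm that none of the ports in Tables 3-1 through 3-5 is already taken by other software on the server. The sketch below tries to bind a small subset of those ports locally and reports the ones that are already in use; extend the list with any other ports from the tables that apply to the components you plan to install.

import socket

# Minimal sketch: detect local port conflicts for a subset of the default
# TotalStorage Productivity Center ports from Tables 3-1 through 3-5.
PORTS_TO_CHECK = [
    427,    # SLP
    9080,   # WebSphere HTTP transport
    9510,   # Common agent
    9511,   # Agent Manager registration
    9550,   # Fabric Manager HTTP
    80,     # agent recovery service
]

def port_is_free(port):
    # Return True if nothing on this host is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind(("", port))
            return True
        except OSError:
            return False

for port in PORTS_TO_CHECK:
    print(f"Port {port}: {'free' if port_is_free(port) else 'already in use'}")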
    • 3.2.2 Default databases created during the installation During the installation of IBM TotalStorage Productivity Center, we recommend that you use DB2 as the preferred database type. Table 3-6 lists all the default databases that the installer creates during the installation. Table 3-6 Default DB2 databases Application Default database name (DB2) IBM Director No default; we created database, IBMDIR Tivoli Agent Manager IBMCDB IBM TotalStorage Productivity Center for Disk DMCOSERV and Replication Base IBM TotalStorage Productivity Center for Disk PMDATA IBM TotalStorage Productivity Center for ESSHWL Replication hardware subcomponent IBM TotalStorage Productivity Center for ELEMCAT Replication element catalog IBM TotalStorage Productivity Center for REPMGR Replication replication manager IBM TotalStorage Productivity Center for SVCHWL Replication SVC hardware subcomponent IBM TotalStorage Productivity Center for Fabric ITSANM IBM TotalStorage Productivity Center for Data No default; we created Database, TPCDATA3.3 Our lab setup environment This section gives a brief overview of what our lab setup environment looked like and what we used to document the installation. Server hardware used We used four IBM Eserver xSeries servers with: 2 x 2.4 GHz CPU per system 4 GB Memory per system 73 GB HDD per system Windows 2000 with Service Pack 4 System 1 The name of our first system was Colorado. The following applications were installed on this system: DB2 IBM Director WebSphere Application Server WebSphere Application Server update Tivoli Agent Manager IBM TotalStorage Productivity Center for Disk and Replication Base IBM TotalStorage Productivity Center for Disk IBM TotalStorage Productivity Center for Replication62 IBM TotalStorage Productivity Center V2.3: Getting Started
    • IBM TotalStorage Productivity Center for Data IBM TotalStorage Productivity Center for FabricSystem 2The name of our second system was Gallium. The following applications were installed onthis server: Data AgentSystem 3The name of our third system was PQDISRV. The following applications were installed on thisserver: DB2 Application softwareSystems used for CIMOM serversWe used four xSeries servers for our Common Information Model Object Manager (CIMOM)servers. They consisted of: 2 GHz CPU per system 2 GB Memory per system 40 GB HDD per system Windows 2000 server with Service Pack 4CIMOM system 1Our first CIMOM server was named TPCMAN. On this server, we installed ESS CLI ESS CIMOM LSI Provider (FAStT CIMOM)CIMOM system 3Our third CIMOM system was named SVCCON. We installed the following applications onthis server: SAN Volume Controller (SVC) Console SVC CIMOMNetworkingWe used the following switches for networking: IBM Ethernet 10/100 24 Port switch 2109 F16 Fiber switchStorage devicesWe employed the following storage devices: IBM TotalStorage Enterprise Storage Server (ESS) 800 and F20 DS8000 DS6000 DS4000 IBM SVCFigure 3-1 on page 64 shows a diagram of our lab setup environment. Chapter 3. Installation planning and considerations 63
    • Figure 3-1 Lab setup environment (diagram: the ESS Management Console and the CIMOM servers SVCCON, MARYLAMB, and TPCMAN, together with the Colorado, Gallium, PQDISRV, and Faroe servers, connected through an Ethernet switch; the SVC cluster, the 2109-F16 Fibre Channel switch, and a FAStT 700 make up the SAN side.)

3.4 Pre-installation check list
You need to complete the following tasks in preparation for installing the IBM TotalStorage Productivity Center. Print the tables in Appendix A, “Worksheets” on page 991, to keep track of the information you will need during the installation, such as user names, ports, IP addresses, and locations of servers and managed devices.
1. Determine which elements of the TotalStorage Productivity Center you will install.
2. Uninstall Internet Information Services.
3. Grant the following privileges to the user account that will be used to install the TotalStorage Productivity Center:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process-level token
– Logon as a service
4. Install and configure Simple Network Management Protocol (SNMP) (Fabric requirement).
5. Identify any firewalls and obtain the required authorization.
6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.
64 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 3.5 User IDs and security This section discusses the user IDs that are used during the installation and those that are used to manage and work with TotalStorage Productivity Center. It also explains how you can increase the basic security of the different components.3.5.1 User IDs This section lists and explains the user IDs used in a IBM TotalStorage Productivity Center environment. For some of the IDs, refer to Table 3-8 for a link to additional information that is available in the manuals. Suite Installer user We recommend that you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights listed in Table 3-7. Table 3-7 Requirements for the Suite Installer user User rights/policy Used for Act as part of the operating system DB2 Productivity Center for Disk Fabric Manager Create a token object DB2 Productivity Center for Disk Increase quotas DB2 Productivity Center for Disk Replace a process-level token DB2 Productivity Center for Disk Log on as a service DB2 Debug programs Productivity Center for Disk Table 3-8 shows the user IDs that are used in a TotalStorage Productivity Center environment. It provides information about the Windows group to which the user ID must belong, whether it is a new user ID that is created during the installation, and when the user ID is used.Table 3-8 User IDs used in a IBM TotalStorage Productivity Center environment Element User ID New user Type Group or Usage groups Suite Installer Administrator No DB2 db2admina Yes, Windows DB2 management and Windows will be Service Account created IBM Director tpcadmina No Windows DirAdmin or Windows Service Account (see also “IBM DirSuper Director” on page 67) Chapter 3. Installation planning and considerations 65
    • Element User ID New user Type Group or Usage groups Resource Manager managerb No, Tivoli N/A, internal Used during the registration of a default Agent user Resource Manager to the Agent user Manager Manager Common Agent AgentMgrb No Tivoli N/A, internal Used to authenticate agents and (see also “Common Agent user lock the certificate key files Agent” on page 67) Manager Common Agent itcauserb Yes, Windows Windows Windows Service Account will be created TotalStorage tpccimoma Yes, will Windows DirAdmin This ID is used to accomplish Productivity Center be connectivity with the managed universal user created devices. For example, this ID has to be set up on the CIM Agents. Tivoli NetView c Windows See “Fabric Manager User IDs” on page 68 IBM WebSphere a Windows See “Fabric Manager User IDs” on page 68 Host Authentication a Windows See “Fabric Manager User IDs” on page 68 a. This account can have any name you choose. b. This account name cannot be changed during the installation. c. The DB2 administrator user ID and password are used here. See “Fabric Manager User IDs” on page 68. Granting privileges Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM TotalStorage Productivity Center for Replication. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They may not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged on user name, the program can optionally set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, follow these steps: 1. Click Start →Settings →Control Panel. 2. Double-click Administrative Tools. 3. Double-click Local Security Policy. 4. The Local Security Settings window opens. Expand Local Policies. Then double-click User Rights Assignments to see the policies in effect on your system. For each policy added to the user, perform the following steps: a. Highlight the policy to be selected. b. Double-click the policy and look for the user’s name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are selected.66 IBM TotalStorage Productivity Center V2.3: Getting Started
    • c. If the user name does not appear in the list for the policy, you must add the policy to the user. Perform the following steps to add the user to the list: i. In the Local Security Policy Setting window, click Add. ii. In the Select Users or Groups window, under the Name column, highlight the user of group. iii. Click Add to place the name in the lower window. iv. Click OK to add the policy to the user or group.5. After you set these user rights, either by using the installation program or manually, log off the system and then log on again for the user rights to take effect.6. Restart the installation program to continue with the IBM TotalStorage Productivity Center for Disk and Replication Base.TotalStorage Productivity Center communication userThe communication user account is used for authentication between several differentelements of the environment. For example, if WebSphere Application Server is installed withthe Suite Installer, its Administrator ID is the communication users.IBM DirectorWith Version 4.1, you no longer need to create an “internal” user account. All user IDs mustbe operating system accounts and members of one of the following groups: DirAdmin or DirSuper groups (Windows), diradmin, or dirsuper groups (Linux) Administrator or Domain Administrator groups (Windows), root (Linux)In addition, a host authentication password is used to allow managed hosts and remoteconsoles to communicate with IBM Director.Resource ManagerThe user ID and password (default is manager and password) for the Resource Manager isstored in the AgentManagerconfigAuthorization.xml file on the Agent Manager. Since this isused only during the initial registration of a new Resource Manager, there is no problem withchanging the values at any time. You can find a detailed procedure on how to change this inthe Installation and Planning Guides of the corresponding manager.You can have multiple Resource Manager user IDs if you want to separate the administratorsfor the different managers, for example for IBM TotalStorage Productivity Center for Data andIBM TotalStorage Productivity Center for Fabric.Common AgentEach time the Common Agent is started, this context and password are used to validate theregistration of the agent with the Tivoli Agent Manager. Furthermore the password is used tolock the certificate key files (agentTrust.jks).The default password is changeMe, but you should change the password when you install theTivoli Agent Manager. The Tivoli Agent Manager stores this password in theAgentManager.properties file.If you start with the defaults, but want to change the password later, all the agents have to bechanged. A procedure to change the password is available in the Installation and PlanningGuides of the corresponding managers (at this time Data or Fabric). Since the password isused to lock the certificate files, you must also apply this change to Resource Managers. Chapter 3. Installation planning and considerations 67
    • The Common Agent user ID AgentMgr is not a user ID, but rather the context in which the agent is registered at the Tivoli Agent Manager. There is no need to change this, so we recommend that you accept the default. TotalStorage Productivity Center universal user The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. This user ID communicates with CIMOMs during install and post install. It also communicates with WebSphere. Fabric Manager User IDs During the installation of IBM TotalStorage Productivity Center for Fabric, you can select if you want to use individual passwords for such subcomponents as DB2, IBM WebSphere, NetView and the Host Authentication. You can also choose to use the DB2 administrator’s user ID and password to make the configuration simpler. Figure 4-117 on page 164 shows the window where you can choose the options.3.5.2 Increasing user security The goal of increasing security is to have multiple roles available for the various tasks that can be performed. Each role is associated with a certain group. The users are only added to those groups that they need to be part of to fulfill their work. Not all components have the possibility to increase the security. Others methods require some degree of knowledge about the specific components to perform the configuration successfully. IBM TotalStorage Productivity Center for Data During the installation of Productivity Center for Data, you can enter the name of a Windows group. Every user within this group is allowed to manage Productivity Center for Data. Other users may only start the interface and look at it. You can add or change the name of that group later by editing the server.config file and restarting Productivity Center for Data. Productivity Center for Data does not support the following domain login formats for logging into its server component: (domain name)/(username) (username)@(domain) Because it does not support these formats, you must set up users in a domain account that can log into the server. Perform the following steps before you install Productivity Center for Data in your environment: 1. Create a Local Admin Group. 2. Create a Domain Global Group. 3. Add the Domain Global Group to the Local Admin Group. Productivity Center for Data looks up the SID for the domain user when the login occurs. You only need to specify a user name and password.68 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 3.5.3 Certificates and key files Within a TotalStorage Productivity Center environment, several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager. Productivity Center for Disk and Replication certificates The WebSphere Application Server that is part of Productivity Center for Disk and Replication uses certificates for Secure Sockets Layer (SSL) communication. During the installation, key files can be generated as self-signed certificates, but you must enter a password for each file to lock it. The default file names are: MDMServerKeyFile.jks MDServerTrusFile.jks The default directory for the key file on the Agent Manager is C:IBMmdmdmkeys. Tivoli Agent Manager certificates The Agent Manager comes with demonstration certificates that you can use. However, you can also create new certificates during the installation of Agent Manager (see Figure 4-26 on page 104). If you choose to create new files, the password that you enter on the panel, as shown in Figure 4-27 on page 105, as the Agent registration password is used to lock the agentTrust.jks key file. The default directory for that key file on the Agent Manager is C:Program FilesIBMAgentManagercerts. There are more key files in that directory, but during the installation and first steps, the agentTrust.jks file is the most important one. This is only important if you allow the installer to create your keys.3.5.4 Services and service accounts The managers and components that belong to the TotalStorage Productivity Center are started as Windows Services. Table 3-9 provides an overview of the most important services. To keep it simple, we did not include all the DB2 services in the table.Table 3-9 Services and service accounts Element Service name Service account Comment DB2 db2admin The account needs to be part of Administrators and DB2ADMNS. IBM Director IBM Director Server Administrator You need to modify the account to be part of one of the groups: DirAdmin or DirSuper. Agent Manager IBM WebSphere Application LocalSystem You need to set this service to start Server V5 — Tivoli Agent automatically, after the installation. Manager Common Agent IBM Tivoli Common Agent — itcauser C:Program Filestivoliep Productivity Center IBM TotalStorage Productivity TSRMsrv1 for Data Center for Data server Productivity Center IBM WebSphere Application LocalSystem for Fabric Server V5 — Fabric Manager Chapter 3. Installation planning and considerations 69
    • Element Service name Service account Comment Tivoli NetView Tivoli NetView Service NetView Service3.6 Starting and stopping the managers To start, stop or restart one of the managers or components, you use the Windows control panel. Table 3-10 shows a list of the services.Table 3-10 Services used for TotalStorage Productivity Center Element Service name Service account DB2 db2admin IBM Director IBM Director Server Administrator Agent Manager IBM WebSphere Application Server V5 - Tivoli Agent Manager LocalSystem Common Agent IBM Tivoli Common Agent — C:Program Filestivoliep itcauser Productivity Center for Data IBM TotalStorage Productivity Center for Data Server TSRMsrv1 Productivity Center for Fabric IBM WebSphere Application Server V5 - Fabric Manager LocalSystem Tivoli NetView Service Tivoli NetView Service NetView3.7 Windows Management Instrumentation Before beginning the Prerequisite Software installation, the Windows Management Instrumentation service must first be stopped and disabled. To disable the service, follow the steps below. 1. Go to Start → Settings → Control Panel → Administrative Tools → Services. 2. Scroll down and double-click the Windows Management Instrumentation service (see Figure 3-2 on page 71).70 IBM TotalStorage Productivity Center V2.3: Getting Started
    • Figure 3-2 Windows Management Instrumentation service3. In the Windows Management Instrumentation Properties window, go down to Service status and click the Stop button (Figure 3-3). Wait for the service to stop.Figure 3-3 Stopping Windows Management Instrumentation Chapter 3. Installation planning and considerations 71
    • 4. After the service is stopped, in the Windows Management Instrumentation Properties window, change the Startup type to Disabled (Figure 3-4) and click OK.
Figure 3-4 Disabled Windows Management Instrumentation
5. After disabling the service, it may start again. If so, go back and stop the service again. The service should now be stopped and disabled as shown in Figure 3-5.
Figure 3-5 Windows Management Instrumentation successfully disabled
Important: After the Prerequisite Software installation completes, you must enable the Windows Management Instrumentation service before installing the suite. To enable the service, change the Startup type from Disabled (see Figure 3-4) to Automatic.
72 IBM TotalStorage Productivity Center V2.3: Getting Started
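If you would rather script this preparation step than click through the Services panel, the sketch below disables and stops the service with the standard sc and net commands from an administrator command prompt. The short service name winmgmt is the name commonly used for Windows Management Instrumentation; treat it as an assumption to confirm on your own system, and remember to reverse the change after the prerequisite installation completes.

import subprocess

# Minimal sketch: stop and disable the Windows Management Instrumentation
# service from a script instead of the Services panel. "winmgmt" is the usual
# short service name; confirm it on your system before relying on it.
SERVICE = "winmgmt"

def run(command):
    print(">", " ".join(command))
    subprocess.run(command, check=False)   # review the command output yourself

run(["sc", "config", SERVICE, "start=", "disabled"])   # keep it from restarting
run(["net", "stop", SERVICE, "/y"])                    # /y also stops dependent services

# To re-enable the service after the prerequisite installation completes:
#   sc config winmgmt start= auto
#   net start winmgmt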
    • 3.8 World Wide Web Publishing
As with the Windows Management Instrumentation service, the World Wide Web Publishing service must also be stopped and disabled before starting the Prerequisite Software Installer. To stop the World Wide Web Publishing service, simply follow the same steps in 3.7, “Windows Management Instrumentation” on page 70. This service can remain disabled.
3.9 Uninstalling Internet Information Services
Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure.
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. In the Windows Components panel, deselect IIS.
3.10 Installing SNMP
Before you install the components of the TotalStorage Productivity Center, install and configure SNMP.
1. Click Start → Settings → Control Panel.
2. Click Add/Remove Programs.
3. In the Add or Remove Programs window, click Add/Remove Windows Components.
4. Double-click Management and Monitoring Tools.
5. In the Windows Components panel, select Simple Network Management Protocol and click OK.
6. Close the panels and accept the installation of the components.
7. The Windows installation CD or installation files are required. Make sure that the SNMP services are configured as explained in these steps:
a. Right-click My Computer and select Manage.
b. In the Computer Management window, click Services and Applications.
c. Double-click Services.
8. Scroll down to and double-click SNMP Service.
9. In the SNMP Service Properties window, follow these steps:
10. Click the Traps tab (see Figure 3-6 on page 74).
Chapter 3. Installation planning and considerations 73
    • d. Make sure that the public name is available. Figure 3-6 Traps tab in the SNMP Service Properties window e. Click the Security tab (see Figure 3-7). f. Select Accept SNMP packets from any host. g. Click OK. Figure 3-7 SNMP Security Properties window 11.After you set the public community name, restart the SNMP community service.74 IBM TotalStorage Productivity Center V2.3: Getting Started
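To double-check the community configuration without reopening the service properties, you can read the SNMP service parameters from the registry, as in the sketch below. The registry path shown (under SYSTEM\CurrentControlSet\Services\SNMP\Parameters) is the location commonly used by the Windows SNMP service, but treat it as an assumption to verify on your own system.

import winreg

# Minimal sketch: list the SNMP community names configured on this Windows
# server. The registry path is an assumption to verify on your own system.
COMMUNITIES_KEY = r"SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, COMMUNITIES_KEY) as key:
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:
            break   # no more values
        print(f"Community '{name}' (rights value {value})")
        index += 1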
    • 3.11 IBM TotalStorage Productivity Center for Fabric Prior to installing IBM TotalStorage Productivity Center for Fabric, there are planning considerations and prerequisite tasks that you need to complete.3.11.1 The computer name IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow this procedure. 1. Right-click the My Computer icon on your desktop and select Properties. 2. The System Properties window opens. a. Click the Network Identification tab. Click Properties. b. The Identification Changes panel opens. i. Verify that your computer name is entered correctly. This is the name that the computer is identified as in the network. ii. Verify that the full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name. iii. Click More. c. The DNS Suffix and NetBIOS Computer Name panel opens. Verify that the Primary DNS suffix field displays a domain name. Important: The fully qualified host name must match the HOSTS file name (including case-sensitive characters).3.11.2 Database considerations When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created if you specified the DB2 database. The default database name is TSANMDB. If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before you re-install the manager, you must use DB2 commands to back up the database. The default name for the IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database name for Cloudscape is also TSANMDB. You cannot change this database name. If you are installing the manager on more than one machine in a Windows domain, the managers on different machines may end up sharing the same DB2 database. To avoid this situation, you must either use different database names or different DB2 user names when installing the manager on different machines.3.11.3 Windows Terminal Services You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView appear on the manager or remote console machine only. The dialogs do not appear in the Windows Terminal Services session. Chapter 3. Installation planning and considerations 75
    • 3.11.4 Tivoli NetView IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to version 7.1.3. If you have a Tivoli NetView release earlier than Version 7.1.1, IBM TotalStorage Productivity Center for Fabric prompts you to uninstall Tivoli NetView before you install this product. If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop. Web Console Web Console Security MIB Loader MIB Browser Netmon Seed Editor Tivoli Event Console Adapter Important: Ensure that the Windows 2000 Terminal Services is not running. Go to the Services panel and check for Terminal Services. User IDs and password considerations TotalStorage Productivity Center for Fabric only supports local user IDs and groups. It does not support domain user IDs and groups. Cloudscape database If you install TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you need the following user IDs and passwords: Agent manager name or IP address and password Common agent password to register with the Agent Manager Resource manager user ID and password to register with the Agent Manager WebSphere administrative user ID and password host authentication password Tivoli NetView password only DB2 database If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you need the following user IDs and passwords: Agent manager name or IP address and password Common agent password to register with the Agent Manager Resource manager user ID and password to register with the Agent Manager DB2 administrator user ID and password DB2 user ID and password WebSphere administrative user ID and password Host authentication password only Tivoli NetView password only Note: If you are running Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must act as part of the operating system user. WebSphere To change the WebSphere user ID and password, follow this procedure: 1. Open the install_locationappswaspropertiessoap.client.props file.76 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 2. Modify the following entries:
– com.ibm.SOAP.loginUserid=user_ID (enter a value for user_ID)
– com.ibm.SOAP.loginPassword=password (enter a value for password)
3. Save the file.
4. Run the following script:
ChangeWASAdminPass.bat user_ID password install_dir
Here user_ID is the WebSphere user ID and password is the password. install_dir is the directory where the manager is installed and is optional. For example, install_dir is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.
3.11.5 Personal firewall
If you have a software firewall on your system, disable the firewall while installing the Fabric Manager. The firewall causes Tivoli NetView installation to fail. You can enable the firewall after you install the Fabric Manager.
Security considerations
You set up security by using certificates. There are demonstration certificates, or you can generate new certificates. This option is specified when you installed the Agent Manager. See Figure 4-26 on page 104. We recommend that you generate new certificates. If you used the demonstration certificates, continue with the installation. If you generated new certificates, follow this procedure:
1. Copy the manager CD image to your computer.
2. Copy the agentTrust.jks file from the Agent Manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This overwrites the existing agentTrust.jks file.
3. You can write a new CD image with the new file or keep this image on your computer and point the Suite Installer to the directory when requested.
3.11.6 Changing the HOSTS file
When you install Service Pack 3 for Windows 2000 on your computers, follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by the address resolution protocol, which returns the short name and not the fully qualified host name. You can avoid this problem by changing the entries in the corresponding host tables on the Domain Name System (DNS) server and on the local computer. The fully qualified host name must be listed before the short name as shown in Example 3-1. See 3.11.1, “The computer name” on page 75, for details about determining the host name.
To correct this problem, you have to edit the HOSTS file. The HOSTS file is in the %SystemRoot%\system32\drivers\etc directory.
Example 3-1 Sample HOSTS file
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
Chapter 3. Installation planning and considerations 77
    • # The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a # symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host
127.0.0.1       localhost
# 192.168.123.146   jason   jason.groupa.mycompany.com
192.168.123.146   jason.groupa.mycompany.com   jason
Note: Host names are case sensitive, which is a limitation within WebSphere. Check your host name.
3.12 IBM TotalStorage Productivity Center for Data
Prior to installing IBM TotalStorage Productivity Center for Data, there are planning considerations and prerequisite tasks that you need to complete.
3.12.1 Server recommendations
The IBM TotalStorage Productivity Center for Data server component acts as a traffic officer for directing information and handling requests from the agent and UI components installed within an environment. You need to install at least one server within your environment. We recommend that you do not manage more than 1000 agents with a single server. If you need to install more than 1000 agents, we suggest that you install an additional server for those agents to maintain optimal performance.
3.12.2 Supported subsystems and databases
This section contains the subsystems, file system formats, and databases that the TotalStorage Productivity Center for Data supports.
Storage subsystem support
Data Manager currently supports the monitoring and reporting of the following storage subsystems:
Hitachi Data Systems
HP StorageWorks
IBM FAStT 200, 600, 700, and 900 with an SMI-S 1.0 compliant CIM interface
SAN Volume Controller Console Version 1.1.0.2, 1.1.0.9, 1.2.0.5, 1.2.0.6 (1.3.2 Patch available), 1.2.1.x, 1.2.0.6, SAN Volume Controller CIMOM Version 1.1.0.1, 1.2.0.4, 1.2.0.5 (1.3.2 patch available), 1.2.0.5, 1.2.1.x
ESS ICAT 1.1.0.2, 1.2.0.15, 1.2.0.29, 1.2.x, 1.2.1.40 and later for ESS
78 IBM TotalStorage Productivity Center V2.3: Getting Started
    • File system support Data Manager supports the monitoring and reporting of the following file systems: FAT FAT32 NTFS4, NTFS5 EXT2, EXT3 AIX_JFS HP_HFS VXFS UFS TMPFS AIX_OLD NW_FAT NW_NSS NF WAFL FAKE AIX_JFS2 SANFS REISERFS Network File System support Data Manager currently supports the monitoring and reporting of the following Network File Systems (NFS): IBM TotalStorage SAN File System 1.0 (Version 1 Release 1), from AIX V5.1 (32-bit) and Windows 2000 Server/Advanced Server clients IBM TotalStorage SAN File System 2.1, 2.2 from AIX V5.1 (32-bit), Windows 2000 Server/Advanced Server, Red Hat Enterprise Linux 3.0 Advanced Server, and SUN Solaris 9 clients General Parallel File System (GPFS) v2.1, v2.2 RDBMS support Data Manager currently supports the monitoring of the following relational database management systems (RDBMS): Microsoft SQL Server 7.0, 2000 Oracle 8i, 9i, 9i V2, 10G Sybase SQL Server 11.0.9 and higher DB2 Universal Database™ (UDB) 7.1, 7.2, 8.1, 8.2 (64-bit UDB DB2 instances are supported)3.12.3 Security considerations This section describes the security issues that you must consider when installing Data Manager. Chapter 3. Installation planning and considerations 79
    • User levels There are two levels of users within IBM TotalStorage Productivity Center for Data: non-administrator users and administrator users. The level of users determine how they use IBM TotalStorage Productivity Center for Data. Non-administrator users – View the data collected by IBM TotalStorage Productivity Center for Data. – Create, generate, and save reports. IBM TotalStorage Productivity Center for Data administrators. These users can: – Create, modify, and schedule Pings, Probes, and Scans – Create, generate, and save reports – Perform administrative tasks and customize the IBM TotalStorage Productivity Center for Data environment – Create Groups, Profiles, Quotas, and Constraints – Set alerts Important: Security is set up by using the certificates. You can use the demonstration certificates or you can generate new certificates. It is recommended that you generate new certificates when you install the Agent Manager. Certificates If you generated new certificates, follow this procedure: 1. Copy the CD image to your computer. 2. Copy the agentTrust.jks file from the Agent Manager directory AgentManager/certs to the CommonAgentcerts directory of the manager CD image. This overwrites the existing agentTrust.jks file. You can write a new CD image with the new file or keep this image on your computer and point the Suite Installer to the directory when requested. Important: Before installing IBM TotalStorage Productivity Center for Data, define the group within your environment that will have administrator rights within Data Manager. This group must exist on the same machine where you are installing the Server component. During the installation, you are prompted to enter the name of this group.80 IBM TotalStorage Productivity Center V2.3: Getting Started
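The certificate copy in the procedure above is a plain file replacement, so it can also be scripted. The sketch below copies agentTrust.jks from the Agent Manager certs directory into the CommonAgent certs directory of a local copy of the CD image; both paths are examples only and must be adjusted to where the files actually reside in your environment.

import shutil
from pathlib import Path

# Minimal sketch: replace the demonstration agentTrust.jks in a local copy of
# the manager CD image with the certificate generated by the Agent Manager.
# Both paths are placeholders; adjust them for your environment.
SOURCE = Path(r"C:\Program Files\IBM\AgentManager\certs\agentTrust.jks")
CD_IMAGE_CERTS = Path(r"C:\images\DataManager\CommonAgent\certs")

target = CD_IMAGE_CERTS / SOURCE.name
shutil.copy2(SOURCE, target)   # overwrites the demonstration certificate
print(f"Copied {SOURCE} to {target}")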
    • 3.12.4 Creating the DB2 database Before you install the component, create the IBM TotalStorage Productivity Center for Data database. 1. From the start menu, select Start →Programs →IBM DB2 →General Administration Tools →Control Center. 2. This launches the DB2 Control Center. Create a database that is used for IBM TotalStorage Productivity Center for Data as shown in Figure 3-8. Select All Databases, right-click and select Create Databases →Standard. Figure 3-8 DB2 database creation Chapter 3. Installation planning and considerations 81
    • 3. In the window that opens (Figure 3-9), complete the required database name information. We used the database name of TPCDATA. Click Finish to complete the database creation. Figure 3-9 DB2 database information for creation82 IBM TotalStorage Productivity Center V2.3: Getting Started
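If you prefer the DB2 command line to the Control Center, the same database can be created with a single CREATE DATABASE command issued through a DB2 command window, as sketched below. The database name matches the TPCDATA example above; the db2cmd options shown are the ones commonly used to run DB2 commands from a script, and you should confirm them against your DB2 documentation.

import subprocess

# Minimal sketch: create the Data Manager database from the DB2 command line
# instead of the Control Center. Run on the DB2 server as a user with DB2
# administrative authority (for example, db2admin).
DATABASE = "TPCDATA"   # same name used in the Control Center example

# db2cmd /c /w /i starts the DB2 command environment, waits for completion,
# and runs in the current window so that the output stays visible.
subprocess.run(
    ["db2cmd", "/c", "/w", "/i", "db2", "create", "database", DATABASE],
    check=True,
)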
    • 4 Chapter 4. Installing the IBM TotalStorage Productivity Center suite Installation of the TotalStorage Productivity Center suite of products is done using the install wizards. The first, the Prerequisite Software Installer, installs all the products needed before one can install the TotalStorage Productivity Center suite. The second, the Suite Installer, installs the individual components or the entire suite of products. This chapter documents the use of the Prerequisite Software Installer and the Suite Installer. It also includes hints and tips based on our experience.© Copyright IBM Corp. 2005. All rights reserved. 83
    • 4.1 Installing the IBM TotalStorage Productivity Center IBM TotalStorage Productivity Center provides a Prerequisite Software Installer and Suite Installer that helps guide you through the installation process. You can also use the Suite Installer to install stand-alone components. The Prerequisite Software Installer installs the following products in this order: 1. DB2, which is required by all the managers 2. WebSphere Application Server, which is required by all the managers except for TotalStorage Productivity Center for Data 3. Tivoli Agent Manager, which is required by Fabric Manager and Data Manager The Suite Installer installs the following products or components in this order: 1. IBM Director, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication 2. Productivity Center for Disk and Replication Base, which is required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication 3. TotalStorage Productivity Center for Disk 4. TotalStorage Productivity Center for Replication 5. TotalStorage Productivity Center for Fabric - Manager 6. TotalStorage Productivity Center for Data - Manager In addition to the manager installations, the Suite Installer guides you through the installation of other IBM TotalStorage Productivity Center components. You can select more than one installation option at a time. This redbook separates the types of installations into several sections to help explain them. The additional types of installation tasks are: IBM TotalStorage Productivity Center Agent installations IBM TotalStorage Productivity Center GUI/Client installations Language Pack installations IBM TotalStorage Productivity Center product uninstallations4.1.1 Considerations You may want to use IBM TotalStorage Productivity Center for Disk to manage the IBM TotalStorage Enterprise Storage Server (ESS), DS8000, DS6000, Storage Area Network (SAN) Volume Controller (SVC), IBM TotalStorage Fibre Array Storage Technology (FAStT), or DS4000 storage subsystems. In this case, you must install the prerequisite input/output (I/O) Subsystem Licensed Internal Code (SLIC) and Common Information Model (CIM) Agent for the devices. See Chapter 6, “Configuring IBM TotalStorage Productivity Center for Disk” on page 247, for more information. If you are installing the CIM Agent for the ESS, or the DS8000 or DS6000 you must install it on a separate machine. TotalStorage Productivity Center 2.3 does not support Linux on zSeries or on S/390®. Nor does IBM TotalStorage Productivity Center support Windows domains.84 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 4.2 Prerequisite Software Installation
This section guides you step by step through the install process of the prerequisite software components.
4.2.1 Best practices
Before you begin installing the prerequisite software components, we recommend that you complete the following tasks:
1. Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center components, including the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, IBM TotalStorage Productivity Center for Data and IBM TotalStorage Productivity Center for Fabric. For details refer to “Granting privileges” on page 66.
2. Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the procedure in 3.9, “Uninstalling Internet Information Services” on page 73.
3. Install and configure Simple Network Management Protocol (SNMP) as described in 3.10, “Installing SNMP” on page 73.
4. Stop and disable the Windows Management Instrumentation (3.7, “Windows Management Instrumentation” on page 70) and World Wide Web Publishing (3.8, “World Wide Web Publishing” on page 73) services.
5. Create a database for Agent Manager installation. To create the database, see 3.12.4, “Creating the DB2 database” on page 81. The default database name for Agent Manager is IBMCDB.
4.2.2 Installing prerequisite software
Follow these steps to install the prerequisite software components:
1. Insert the IBM TotalStorage Productivity Center Prerequisite Software Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Double-click setup.exe.
Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-1).
2. The installer language window (Figure 4-1) opens. From the list, select a language. This is the language that is used to install this product. Click OK.
Figure 4-1 Prerequisite Software Installer language
Chapter 4. Installing the IBM TotalStorage Productivity Center suite 85
    • 3. The Prerequisite Software Installer wizard welcome pane in Figure 4-2 opens. Click Next. The Software License Agreement panel is then displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement select the I accept the terms in the license agreement radio button and click Next to continue. Figure 4-2 Prerequisite Software Installer wizard 4. The prerequisite operating system check panel in Figure 4-3 on page 87 opens. When it completes successfully click Next.86 IBM TotalStorage Productivity Center V2.3: Getting Started
    • Figure 4-3 Prerequisite Operating System check5. The Tivoli Common Directory location panel (Figure 4-4) opens and prompts for a location for the log files. Accept the default location or enter a different location. Click Next to continue.Figure 4-4 Tivoli Common Directory location Chapter 4. Installing the IBM TotalStorage Productivity Center suite 87
    • 6. The product selection panel (Figure 4-5) opens. To install the entire TotalStorage Productivity Center suite, check the boxes next to DB2, WebSphere, and Agent Manager. Figure 4-5 Product selection 7. The DB2 Universal Database panel (Figure 4-6) opens. Select Enterprise Server Edition and click Next to continue. Figure 4-6 DB2 Universal Database88 IBM TotalStorage Productivity Center V2.3: Getting Started
    • Note: After clicking Next (Figure 4-6), if you see the panel in Figure 4-7, you must first stop and disable the Windows Management Instrumentation service before continuing with the installation. See 3.7, “Windows Management Instrumentation” on page 70 for detailed instructions.
Figure 4-7 Windows Management Instrumentation service warning
Chapter 4. Installing the IBM TotalStorage Productivity Center suite 89
    • 8. The DB2 user name and password panel (Figure 4-8) opens. If the DB2 user name exists on the system, the correct password must be entered or the DB2 installation will fail. If the DB2 user name does not exist it will be created by the DB2 install. In our installation we accepted the default user name and entered a unique password. Click Next to continue. Figure 4-8 DB2 user configuration90 IBM TotalStorage Productivity Center V2.3: Getting Started
    • 9. The Target Directory Confirmation panel (Figure 4-9) opens. Accept the default target directories for DB2 installation or enter a different location. Click Next.Figure 4-9 Target Directory Confirmation10.The select the languages panel (Figure 4-10) opens. This installs the languages selected for DB2. Select your desired language(s). Click Next.Figure 4-10 Language selection Chapter 4. Installing the IBM TotalStorage Productivity Center suite 91
    • 11.The Preview Prerequisite Software Information panel (Figure 4-11) opens. Review the information and click Next. Figure 4-11 Preview Prerequisite Software Information 12.The WebSphere Application Server system prerequisites check panel (Figure 4-12) opens. When the check completes successfully click Next. Figure 4-12 WebSphere Application Server system prerequisites check92 IBM TotalStorage Productivity Center V2.3: Getting Started
13. The installation options panel (Figure 4-13) opens. Select the type of installation you wish to perform. The rest of this section guides you through an Unattended Installation. Unattended Installation copies all installation images to a central location called the install image depot; once the copies are complete, the component installations proceed with no further intervention needed. Attended Installation prompts you to enter the location of each install image as needed. Click Next to continue.
Figure 4-13 Installation options
14. The install image depot location panel opens (see Figure 4-14). Enter the location where all installation images are to be copied. Click Next.
Figure 4-14 Install image depot location
15. You are first prompted for the location of the DB2 installation image (see Figure 4-15). Browse to the installation image and select the path to the installation files, or insert the install CD, and click Copy.
Figure 4-15 DB2 installation source
16. After the DB2 installation image is copied to the install image depot, you are prompted for the location of the WebSphere installation image (see Figure 4-16). Browse to the installation image and select the path to the installation files, or insert the install CD, and click Copy.
Figure 4-16 WebSphere installation source
17. After the WebSphere installation image is copied, you are prompted for the location of the WebSphere Cumulative Fix 3 installation image (see Figure 4-17). Browse to the installation image and select the path to the installation files, or insert the install CD, and click Copy.
Figure 4-17 WebSphere fix 3 installation source
18. When an install image has been successfully copied to the install image depot, a green check mark appears to the right of the prerequisite. After all the prerequisite software images are successfully copied to the install image depot (Figure 4-18), click Next.
Figure 4-18 Installation images copied successfully
19. The installation of DB2, WebSphere, and the WebSphere fix pack begins. When a prerequisite is successfully installed, a green check mark appears to its left; if the installation of a prerequisite fails, a red X appears instead. In that case, exit the installer, check the logs to determine and correct the problem, and restart the installer. When the installation completes successfully (see Figure 4-19), click Next.
Figure 4-19 DB2 and WebSphere installation complete
20. The Agent Manager Registry Information panel opens. Select the type of database, specify the database name, and choose a local or remote database. The default DB2 database name is IBMCDB. For a local database connection, the DB2 database will be created if it does not exist; we recommend that you take the default database name for a local database. Click Next to continue (see Figure 4-20).
Attention: For a remote database connection, the database specified here must already exist. Refer to 3.12.4, “Creating the DB2 database” on page 81 for information about how to create a database in DB2.
Figure 4-20 Agent Manager Registry Information
21. The Database Connection Information panel in Figure 4-21 opens. Specify the location of the database software directory (for DB2, the default install location is C:\Program Files\IBM\SQLLIB), the database user name, and the password. You must also specify the database host name and port if you are using a remote database. Click Next to continue.
Figure 4-21 Agent Manager database connection information
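If you are not sure where the DB2 software directory is on the server, DB2 itself can report its installation location and level. The sketch below assumes you run it from a DB2 command window on the database server; the installation directory shown in the output (typically C:\Program Files\IBM\SQLLIB on Windows) is the value this panel asks for:

   REM Report the DB2 product level and the directory where DB2 is installed
   db2level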
Note: For a remote database connection, the database specified in Figure 4-20 on page 98 must exist. If the database does not exist, you will see the error message shown in Figure 4-22. Refer to 3.12.4, “Creating the DB2 database” on page 81 for information about how to create a database in DB2.
Figure 4-22 DB2 database error
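For the remote case, the registry database has to be created on the remote DB2 server before this point. The following rough sketch assumes the default database name IBMCDB, a remote DB2 server named db2srv01 listening on the default port 50000, and a node alias of your choosing; substitute your own host name, port, and credentials. Cataloging the database locally is not required by the installer (which asks for host and port directly), but it gives you an easy way to verify connectivity beforehand:

   REM On the remote DB2 server: create the registry database
   db2 create database IBMCDB

   REM On the Agent Manager server (optional connectivity check): catalog and connect
   db2 catalog tcpip node amnode remote db2srv01 server 50000
   db2 catalog database IBMCDB at node amnode
   db2 connect to IBMCDB user db2admin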
22. A panel opens prompting for a location to install Tivoli Agent Manager (see Figure 4-23). Accept the default location or enter a different location. Click Next to continue.
Figure 4-23 Tivoli Agent Manager installation directory
23. The WebSphere Application Server Information panel (Figure 4-24) opens. This panel lets you specify the host name or IP address, and the cell and node names, on which to install the Agent Manager. If you specify a host name, use the fully qualified host name, for example, HELIUM.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all Agent Manager services. We recommend that you use the fully qualified host name, not the IP address, of the Agent Manager server.
Typically the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the Agent Manager installation wizard, you can look up the cell and node name values in the %WAS_INSTALL_ROOT%\bin\setupCmdLine.bat file (a sketch follows Figure 4-24).
You can also specify the ports used by the Agent Manager. We recommend that you accept the defaults:
– Registration Port: The default is 9511 for server-side Secure Sockets Layer (SSL).
– Secure Port: The default is 9512 for client authentication, two-way SSL.
– Public Port: The default is 9513.
If you are using WebSphere Network Deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation.
After filling in the required information in the WebSphere Application Server Information panel, click Next.
Figure 4-24 WebSphere Application Server Information
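One quick way to pull the cell and node names out of setupCmdLine.bat is to search the file for them from a command prompt. The sketch below assumes the common default WebSphere 5.x installation path, C:\Program Files\WebSphere\AppServer; adjust it to wherever the prerequisite installer placed WebSphere on your server:

   REM List the lines of setupCmdLine.bat that define the cell and node names
   findstr /i "cell node" "C:\Program Files\WebSphere\AppServer\bin\setupCmdLine.bat"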
Note: If an IP address is entered in the WebSphere Application Server Information panel shown in Figure 4-24, the next panel (see Figure 4-25) explains why a host name is recommended. Click Back to use a host name or click Next to use the IP address.
Figure 4-25 Agent Manager IP address warning
24. The Security Certificates panel (Figure 4-26) opens. Specify whether to create new certificates or to use the demonstration certificates. In a typical production environment, you would create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next.
Figure 4-26 Security Certificates
25. The Security Certificate Settings panel (see Figure 4-27) opens. Specify the certificate authority name, security domain, and agent registration password.
The agent registration password is used to register the agents, and you must provide it when you install the agents. This password is also used for the Agent Manager key store and trust store files. Record this password; it will be needed again later in the installation process.
The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the Agent Manager. It is the name of the security domain defined by the Agent Manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment: if you have multiple Agent Managers installed, this value must be different on each Agent Manager.
The default agent registration password is changeMe and it is case sensitive. Click Next to continue.
Figure 4-27 Security Certificate Settings
26. The User input summary panel for the Agent Manager (see Figure 4-28) opens. Review the information and click Next.
Figure 4-28 User input summary
27. The summary information panel for the Agent Manager (see Figure 4-29) opens. Click Next.
Figure 4-29 Agent Manager installation summary
28. A panel indicates the status of the Agent Manager install process: the IBMCDB database is created and tables are added to it. Once the installation of the Agent Manager completes, the Summary of Installation and Configuration Results panel (see Figure 4-30) opens. Click Next to continue.
Figure 4-30 Summary of Installation and Configuration Results
29. The next panel (Figure 4-31) informs you when the Agent Manager service has started successfully. Click Finish.
Figure 4-31 Agent Manager service started
30. The next panel (Figure 4-32) indicates that the installation of the prerequisite software is complete. Click Finish to exit the prerequisite installer.
Figure 4-32 Prerequisite software installation complete

4.3 Suite installation
This section guides you step by step through the installation of the TotalStorage Productivity Center components you select. The Suite Installer launches the installation wizard for each manager you chose to install.

4.3.1 Best practices
Before you begin installing the suite of products, complete the following tasks:
1. If you are running the Fabric Manager installation under Windows 2000, the installation requires the user ID to have the following user rights: Act as part of the operating system, and Log on as a service. To grant these user rights, see “Granting privileges” under 3.5.1, “User IDs” on page 65.
2. Enable Windows Management Instrumentation (see 3.7 on page 70).
3. Install SNMP (see 3.10, “Installing SNMP” on page 73).
4. Create the database for the TotalStorage Productivity Center for Data installation (see 3.12.4, “Creating the DB2 database” on page 81).

4.3.2 Installing the TotalStorage Productivity Center suite
Follow these steps for successful installation:
1. Insert the IBM TotalStorage Productivity Center Suite Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the IBM TotalStorage Productivity Center CD-ROM drive, and double-click setup.exe.
Note: It may take a few moments for the installer program to initialize. Be patient. Eventually, you see the language selection panel (Figure 4-33).
2. The installer language window (see Figure 4-33) opens. From the list, select a language. This is the language used to install this product. Click OK.
Figure 4-33 Installer Wizard
3. The Welcome to the InstallShield Wizard for the IBM TotalStorage Productivity Center panel (see Figure 4-34) opens. Click Next.
Figure 4-34 Welcome to IBM TotalStorage Productivity Center panel
4. The Software License Agreement panel (Figure 4-35 on page 112) opens. Read the terms of the license agreement. If you agree with the terms, select the I accept the terms of the license agreement radio button and then click Next. If you do not accept the terms of the license agreement, the installation program ends without installing the IBM TotalStorage Productivity Center components.
Figure 4-35 License agreement
5. The next panel enables you to select the type of installation (Figure 4-36). Select Manager installations of Data, Disk, Fabric, and Replication and then click Next.
Figure 4-36 IBM TotalStorage Productivity Center options panel
6. In the next panel (see Figure 4-37), select the components that you want to install. Click Next to continue.
Figure 4-37 IBM TotalStorage Productivity Center components
7. The suite installer installs the IBM Director first (see Figure 4-38). Click Next.
Figure 4-38 IBM Director prerequisite install
8. The IBM Director installation is now ready to begin (see Figure 4-39). Click Next.
Figure 4-39 Begin IBM Director installation
9. The package location for IBM Director panel (see Figure 4-40) opens. Enter the appropriate information and click Next.
Note: Make sure the Windows Management Instrumentation service is disabled (see 3.7 on page 70 for detailed instructions). If it is enabled, a window appears prompting you to disable the service after you click Next to continue.
Figure 4-40 IBM Director package location
10. The next panel (see Figure 4-41) provides information about the IBM Director post-installation reboot option. When prompted, choose the option to reboot later. Click Next.
Figure 4-41 IBM Director information
11. The IBM Director Server - InstallShield Wizard panel (Figure 4-42) opens. It indicates that the IBM Director installation wizard will launch. Click Next.
Figure 4-42 IBM Director InstallShield Wizard
12. The License Agreement window opens (Figure 4-43). Read the license agreement, select the I accept the terms in the license agreement radio button, and then click Next.
Figure 4-43 IBM Director license agreement
13. The next window (Figure 4-44) displays an advertisement to enhance IBM Director with the new Server Plus Pack. Click Next.
Figure 4-44 IBM Director new Server Plus Pack window
14. The Feature and installation directory window (Figure 4-45) opens. Accept the default settings and click Next.
Figure 4-45 IBM Director feature and installation directory window
15. The IBM Director service account information window (see Figure 4-46) opens.
a. Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, type the local host name (the recommended setup).
b. Type a user name and password for IBM Director. IBM Director will run under this user name, and you will log on to the IBM Director console using this user name. In our installation, we used the user ID we created to install the TotalStorage Productivity Center. This user must be a member of the Administrators group (a quick way to verify this is shown after Figure 4-47).
c. Click Next to continue.
Figure 4-46 Account information
16. The Encryption settings window (Figure 4-47) opens. Accept the default settings and click Next.
Figure 4-47 Encryption settings
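To confirm that the account you plan to give IBM Director really is in the local Administrators group, you can list the group membership from a command prompt and add the account if it is missing. This is a sketch only; tpcadmin is a hypothetical user name standing in for the ID you created for the TotalStorage Productivity Center installation:

   REM List the members of the local Administrators group
   net localgroup Administrators
   REM If your installation ID is not listed, add it (tpcadmin is a placeholder name)
   net localgroup Administrators tpcadmin /add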
17. In the Software Distribution settings window (Figure 4-48), accept the default values and click Next.
Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director.
Figure 4-48 Installation target directory
18. The Ready to Install the Program window (Figure 4-49) opens. Click Install.
Figure 4-49 Installation ready
19. The Installing IBM Director server window (Figure 4-50) reports the status of the installation.
Figure 4-50 Installation progress
20. The Network driver configuration window (Figure 4-51) opens. Accept the default settings and click OK.
Figure 4-51 Network driver configuration
The secondary window closes, and the installation wizard performs additional actions, which are tracked in the status window.
21. The Select the database to be configured window (Figure 4-52) opens. Select IBM DB2 Universal Database and click Next.
Figure 4-52 Database selection
22. The IBM Director DB2 Universal Database configuration window (Figure 4-53) opens. It may be behind the status window; if so, click the window to bring it to the foreground.
a. In the Database name field, type a new database name for the IBM Director database table or type an existing database name.
b. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation.
c. Click Next to continue.
Figure 4-53 Database selection configuration
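If you want to see which database names are already cataloged in DB2 before typing a new or existing name here, DB2 can list them for you. A minimal sketch, run from a DB2 command window on the TotalStorage Productivity Center server:

   REM List the databases already cataloged in this DB2 instance
   db2 list database directory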
23. In the IBM Director DB2 Universal Database configuration secondary window (Figure 4-54), accept the default DB2 node name LOCAL - DB2. Click OK.
Figure 4-54 Database node name selection
24. The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close.
25. When the InstallShield Wizard Completed window (Figure 4-55) opens, click Finish.
Figure 4-55 Completed installation
Important: Do not reboot the machine at the end of the IBM Director installation. The Suite Installer reboots the machine.
26. When you see the IBM Director Server Installer Information window (Figure 4-56), click No.
Figure 4-56 IBM Director reboot option
Important: Are you installing IBM TotalStorage Productivity Center for Data? If so, have you created the database for IBM TotalStorage Productivity Center for Data, or are you using an existing database? If you are installing IBM TotalStorage Productivity Center for Disk, you must have created the administrative superuser ID and group and set the privileges.
27. The Install Status panel (see Figure 4-57) opens after a successful installation. Click Next.
Figure 4-57 IBM Director Install Status successful
28. In the machine reboot window (see Figure 4-58), click Next to reboot the machine.
Important: If the server does not reboot at this point, cancel the installer and reboot the server.
Figure 4-58 Install wizard completion
4.3.3 IBM TotalStorage Productivity Center for Disk and Replication Base
There are three separate installations to perform:
– Install the IBM TotalStorage Productivity Center for Disk and Replication Base code.
– Install the IBM TotalStorage Productivity Center for Disk.
– Install the IBM TotalStorage Productivity Center for Replication.
IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation, as described in 3.5.1, “User IDs” on page 65 (a sketch for reviewing these rights follows Figure 4-59):
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process level token
– Debug programs
After rebooting the machine, the installer initializes to continue the suite install. A window opens prompting you to select the installation language to be used for this wizard (Figure 4-59). Select the language and click OK.
Figure 4-59 Selecting the language for the IBM TotalStorage Productivity Center installation wizard
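Before launching the wizard, you can review which of these user rights the logged-on installation ID currently holds. The whoami /priv command shown below is built into Windows Server 2003 and available in the Resource Kit for Windows 2000, so treat it as an optional convenience rather than part of the documented procedure; the internal privilege names in the comments correspond to the rights listed above:

   REM Display the privileges held by the current logon token
   REM   Act as part of the operating system  -> SeTcbPrivilege
   REM   Create a token object                -> SeCreateTokenPrivilege
   REM   Increase quotas                      -> SeIncreaseQuotaPrivilege
   REM   Replace a process level token        -> SeAssignPrimaryTokenPrivilege
   REM   Debug programs                       -> SeDebugPrivilege
   whoami /priv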
1. The next panel enables you to select the type of installation (Figure 4-60). Select Manager installations of Data, Disk, Fabric, and Replication and click Next.
Figure 4-60 IBM TotalStorage Productivity Center options panel
2. The next window (Figure 4-61) opens, allowing you to select which components to install. Select the components you wish to install (all components in this case) and click Next.
Figure 4-61 TotalStorage Productivity Center components
3. The installer checks that all prerequisite software is installed on your system (see Figure 4-62). Click Next.
Figure 4-62 Prerequisite software check
4. Figure 4-63 shows the installer window about to begin the installation of Productivity Center for Disk and Replication Base. The window also displays the products that are yet to be installed. Click Next to begin the installation.
Figure 4-63 IBM TotalStorage Productivity Center installation information
5. The Package Location for Disk and Replication Manager window (Figure 4-64) opens. Enter the appropriate information and click Next.
Figure 4-64 Package location for Productivity Center Disk and Replication
6. The Information for Disk and Replication Base Manager panel (see Figure 4-65) opens. Click Next.
Figure 4-65 Installer information
7. The Welcome panel (see Figure 4-66) opens. It indicates that the Disk and Replication Base Manager installation wizard will be launched. Click Next.
Figure 4-66 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information
8. In the Destination Directory panel (Figure 4-67), you confirm the target directories. Enter the directory path or accept the default directory and click Next.
Figure 4-67 IBM TotalStorage Productivity Center for Disk and Replication Base installation directory
9. In the IBM WebSphere Instance Selection panel (see Figure 4-68), click Next.
Figure 4-68 WebSphere Application Server information
10. If the installation user ID privileges were not set, you see an information panel stating that you need to set the privileges (see Figure 4-69). Click Yes.
Figure 4-69 Verifying the effective privileges
11. The required user privileges are set, and an informational window opens (see Figure 4-70). Click OK.
Figure 4-70 Message indicating the enablement of the required privileges
12. At this point, the installation terminates. You must close the installer, log off of Windows, log back on, and then restart the installer.
13. In the Installation Type panel (Figure 4-71), select Typical and click Next.
Figure 4-71 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation
14. If the IBM Director Support Program and IBM Director Server services are still running, the Servers Check panel (see Figure 4-72) opens and prompts you to stop them. Click Next to stop the services.
Figure 4-72 Server checks
15. In the User Name Input 1 of 2 panel (Figure 4-73), enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base superuser ID. This user name must be defined to the operating system. In our environment, we used tpccimom as our superuser. After entering the required information, click Next to continue.
Figure 4-73 IBM TotalStorage Productivity Center for Disk and Replication Base superuser information
16. If the specified superuser ID is not defined to the operating system, a window appears asking if you would like to create it (see Figure 4-74). Click Yes to continue.
Figure 4-74 Create new local user account
17. In the User Name Input 2 of 2 panel (Figure 4-75), enter the user name and password for the IBM DB2 Universal Database server. This is the user ID that was specified when DB2 was installed (see Figure 4-8 on page 90). Click Next to continue.
Figure 4-75 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information
18. The SSL Configuration panel (Figure 4-76) opens. If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, you must enter the fully qualified names of the two server key files that were generated previously, or that will be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information that you enter here is used later.
a. Choose either of the following options:
• Generate a self-signed certificate: Select this option if you want the installer to automatically generate these certificate files. We generate the certificates in our installation.
• Defer the generation of the certificate as a manual post-installation task: Select this option if you want to manually generate these certificate files after the installation, using the WebSphere Application Server ikeyman utility (a command-line sketch follows Figure 4-77).
b. Enter the Key file and Trust file passwords. The passwords must be a minimum of six characters in length and cannot contain spaces. You should record the passwords in the worksheets provided in Appendix A, “Worksheets” on page 991.
c. Click Next.
Figure 4-76 Key and Trust file options
The Generate Self-Signed Certificate window opens (see Figure 4-77). Complete all the required fields and click Next to continue.
Figure 4-77 IBM TotalStorage Productivity Center for Disk and Replication Base certificate information
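If you chose to defer certificate generation in Figure 4-76, the documented path is the WebSphere ikeyman GUI. Purely as an illustrative alternative, the Java keytool shipped with WebSphere can also produce a self-signed key store and matching trust store; the aliases, file names, passwords, and distinguished name below are placeholders, and the key and trust file names you create must match what you later configure for the server:

   REM Create a self-signed certificate in a new key store (all values shown are placeholders)
   keytool -genkey -alias tpcserver -keyalg RSA -validity 365 -dname "CN=tpcserver.yourdomain.com, O=YourOrg, C=US" -keystore serverKeys.jks -storepass keypasswd -keypass keypasswd
   REM Export the certificate and import it into a trust store
   keytool -export -alias tpcserver -keystore serverKeys.jks -storepass keypasswd -file tpcserver.cer
   keytool -import -alias tpcserver -file tpcserver.cer -keystore serverTrust.jks -storepass trustpasswd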
19. Next, you see the Create Local Database window (Figure 4-78). Accept the default database name of DMCOSERV, or optionally enter a database name. Click Next to continue.
Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.
Figure 4-78 IBM TotalStorage Productivity Center for Disk and Replication Base database name
20. The Preview window (Figure 4-79) displays a summary of all of the choices that you made during the customization phase of the installation. Click Install to complete the installation.
Figure 4-79 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information
21. The DB2 database is created, the keys are generated, and the Productivity Center for Disk and Replication Base is installed. The Finish window opens. You can view the log file f