Managing Disk Subsystems using IBM TotalStorage Productivity Center (SG24-7097)
Front cover

Managing Disk Subsystems using IBM TotalStorage Productivity Center

Install and customize Productivity Center for Disk
Install and customize Productivity Center for Replication
Use Productivity Center to manage your storage

Mary Lovelace
Jason Bamford
Dariusz Ferenc
Madhav Vaze

ibm.com/redbooks
International Technical Support Organization

Managing Disk Subsystems using IBM TotalStorage Productivity Center

September 2005

SG24-7097-01
Note: Before using this information and the product it supports, read the information in "Notices" on page ix.

Second Edition (September 2005)

This edition applies to Version 2 Release 1 of IBM TotalStorage Productivity Center (product numbers 5608-TC1, 5608-TC4, 5608-TC5).

© Copyright International Business Machines Corporation 2004, 2005. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
Trademarks

Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome

Chapter 1. IBM TotalStorage Productivity Center overview
1.1 Introduction to IBM TotalStorage Productivity Center
  1.1.1 Standards organizations and standards
1.2 IBM TotalStorage Open Software family
1.3 IBM TotalStorage Productivity Center
  1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
  1.3.2 Fabric subject matter expert: Productivity Center for Fabric
  1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
  1.3.4 Replication subject matter expert: Productivity Center for Replication
1.4 IBM TotalStorage Productivity Center
  1.4.1 Productivity Center for Disk and Productivity Center for Replication
  1.4.2 Event services
1.5 Taking steps toward an On Demand environment

Chapter 2. Key concepts
2.1 Standards organizations and standards
  2.1.1 CIM/WEB management model
2.2 Storage Networking Industry Association
  2.2.1 The SNIA Shared Storage Model
  2.2.2 SMI Specification
  2.2.3 Integrating existing devices into the CIM model
  2.2.4 CIM Agent implementation
  2.2.5 CIM Object Manager
2.3 Common Information Model (CIM)
  2.3.1 How the CIM Agent works
2.4 Service Location Protocol (SLP)
  2.4.1 SLP architecture
  2.4.2 SLP service agent
  2.4.3 SLP user agent
  2.4.4 SLP directory agent
  2.4.5 Why use an SLP DA?
  2.4.6 When to use DAs
  2.4.7 SLP configuration recommendation
  2.4.8 Setting up the Service Location Protocol Directory Agent
  2.4.9 Configuring SLP Directory Agent addresses
2.5 Productivity Center for Disk and Replication architecture

Chapter 3. TotalStorage Productivity Center suite installation
3.1 Installing the IBM TotalStorage Productivity Center
  3.1.1 Configurations
  3.1.2 Installation prerequisites
  3.1.3 TCP/IP ports used by TotalStorage Productivity Center
  3.1.4 Default databases created during install
3.2 Pre-installation check list
  3.2.1 User IDs and security
  3.2.2 Certificates and key files
3.3 Services and service accounts
  3.3.1 Starting and stopping the managers
  3.3.2 Uninstall Internet Information Services
  3.3.3 SNMP install
3.4 IBM TotalStorage Productivity Center for Fabric
  3.4.1 The computer name
  3.4.2 Database considerations
  3.4.3 Windows Terminal Services
  3.4.4 Tivoli NetView
  3.4.5 Personal firewall
  3.4.6 Change the HOSTS file
3.5 Installation process
  3.5.1 Prerequisite product install: DB2 and WebSphere
  3.5.2 Installing IBM Director
  3.5.3 Tivoli Agent Manager
  3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
  3.5.5 IBM TotalStorage Productivity Center for Disk
  3.5.6 IBM TotalStorage Productivity Center for Replication
  3.5.7 IBM TotalStorage Productivity Center for Fabric

Chapter 4. CIMOM installation and configuration
4.1 Introduction
4.2 Planning considerations for Service Location Protocol
  4.2.1 Considerations for using SLP DAs
  4.2.2 SLP configuration recommendation
4.3 General performance guidelines
4.4 Planning considerations for CIMOM
  4.4.1 CIMOM configuration recommendations
4.5 Installing CIM agent for ESS
  4.5.1 ESS CLI install
  4.5.2 ESS CIM Agent install
  4.5.3 Post Installation tasks
4.6 Configuring the ESS CIM Agent for Windows
  4.6.1 Registering ESS Devices
  4.6.2 Register ESS server for Copy services
  4.6.3 Restart the CIMOM
  4.6.4 CIMOM User Authentication
4.7 Verifying connection to the ESS
  4.7.1 Problem determination
  4.7.2 Confirming the ESS CIMOM is available
  4.7.3 Setting up the Service Location Protocol Directory Agent
  4.7.4 Configuring IBM Director for SLP discovery
  4.7.5 Registering the ESS CIM Agent to SLP
  4.7.6 Verifying and managing CIMOMs availability
4.8 Installing CIM agent for IBM DS4000 family
  4.8.1 Verifying and Managing CIMOM availability
4.9 Configuring CIMOM for SAN Volume Controller
  4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
  4.9.2 Registering the SAN Volume Controller host in SLP
4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
  4.10.1 SLP registration and slptool
  4.10.2 Persistency of SLP registration
  4.10.3 Configuring slp.reg file

Chapter 5. TotalStorage Productivity Center common base use
5.1 Productivity Center common base: Introduction
5.2 Launching TotalStorage Productivity Center
5.3 Exploiting Productivity Center common base
  5.3.1 Configure MDM
  5.3.2 Launch Device Manager
  5.3.3 Discovering new storage devices
  5.3.4 Manage CIMOMs
  5.3.5 Manually removing old CIMOM entries
5.4 Performing volume inventory
5.5 Working with ESS
  5.5.1 Changing the display name of an ESS
  5.5.2 ESS Volume inventory
  5.5.3 Assigning and unassigning ESS volumes
  5.5.4 Creating new ESS volumes
  5.5.5 Launch device manager for an ESS device
5.6 Working with SAN Volume Controller
  5.6.1 Changing the display name of a SAN Volume Controller
  5.6.2 Working with SAN Volume Controller mdisks
  5.6.3 Creating new Mdisks on supported storage devices
  5.6.4 Create and view SAN Volume Controller Vdisks
5.7 Working with DS4000 family or FAStT storage
  5.7.1 Changing the display name of a DS4000 or FAStT
  5.7.2 Working with DS4000 or FAStT volumes
  5.7.3 Creating DS4000 or FAStT volumes
  5.7.4 Assigning hosts to DS4000 and FAStT volumes
  5.7.5 Unassigning hosts from DS4000 or FAStT volumes
5.8 Event Action Plan Builder
  5.8.1 Applying an Event Action Plan to a managed system or group
  5.8.2 Exporting and importing Event Action Plans

Chapter 6. TotalStorage Productivity Center for Disk use
6.1 Performance Manager GUI
6.2 Exploiting Performance Manager
  6.2.1 Performance Manager data collection
  6.2.2 Using IBM Director Scheduler function
  6.2.3 Reviewing Data collection task status
  6.2.4 Managing Performance Manager Database
  6.2.5 Performance Manager gauges
  6.2.6 ESS thresholds
  6.2.7 Data collection for SAN Volume Controller
  6.2.8 SAN Volume Controller thresholds
6.3 Exploiting gauges
  6.3.1 Before you begin
  6.3.2 Creating gauges example
  6.3.3 Zooming in on the specific time period
  6.3.4 Modify gauge to view array level metrics
  6.3.5 Modify gauge to review multiple metrics in same chart
6.4 Performance Manager command line interface
  6.4.1 Performance Manager CLI commands
  6.4.2 Sample command outputs
6.5 Volume Performance Advisor (VPA)
  6.5.1 VPA introduction
  6.5.2 The provisioning challenge
  6.5.3 Workload characterization and workload profiles
  6.5.4 Workload profile values
  6.5.5 How the Volume Performance Advisor makes decisions
  6.5.6 Enabling the Trace Logging for Director GUI Interface
6.6 Getting started
  6.6.1 Workload profiles
  6.6.2 Using VPA with predefined Workload profile
  6.6.3 Launching VPA tool
  6.6.4 ESS User Validation
  6.6.5 Configuring VPA settings for the ESS diskspace request
  6.6.6 Choosing Workload Profile
  6.6.7 Choosing candidate locations
  6.6.8 Verify settings for VPA
  6.6.9 Approve recommendations
  6.6.10 VPA loopback after Implement Recommendations selected
6.7 Creating and managing Workload Profiles
  6.7.1 Choosing Workload Profiles
6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager
  6.8.1 Installing IBM Director Console
  6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console
  6.8.3 Installing Remote Console for Performance Manager function
  6.8.4 Launching Remote Console for TotalStorage Productivity Center

Chapter 7. TotalStorage Productivity Center for Fabric use
7.1 TotalStorage Productivity Center for Fabric overview
  7.1.1 Zoning overview
  7.1.2 Supported switches for zoning
  7.1.3 Deployment
  7.1.4 Enabling zone control
  7.1.5 TotalStorage Productivity Center for Disk eFix
  7.1.6 Installing the eFix
7.2 Installing Fabric remote console
7.3 TotalStorage Productivity Center for Disk integration
7.4 Launching TotalStorage Productivity Center for Fabric

Chapter 8. TotalStorage Productivity Center for Replication use
8.1 TotalStorage Productivity Center for Replication overview
  8.1.1 Supported Copy Services
  8.1.2 Replication session
  8.1.3 Storage group
  8.1.4 Storage pools
  8.1.5 Relationship of group, pool, and session
  8.1.6 Copyset and sequence concepts
8.2 Exploiting TotalStorage Productivity Center for Replication
  8.2.1 Before you start
  8.2.2 Creating a storage group
  8.2.3 Modifying a storage group
  8.2.4 Viewing storage group properties
  8.2.5 Deleting a storage group
  8.2.6 Creating a storage pool
  8.2.7 Modifying a storage pool
  8.2.8 Deleting a storage pool
  8.2.9 Viewing storage pool properties
  8.2.10 Storage paths
  8.2.11 Point-in-Time Copy: Creating a session
  8.2.12 Creating a session: Verifying source-target relationship
  8.2.13 Continuous Synchronous Remote Copy: Creating a session
  8.2.14 Managing a Point-in-Time copy
  8.2.15 Managing a Continuous Synchronous Remote Copy
8.3 Using Command Line Interface (CLI) for replication
  8.3.1 Session details
  8.3.2 Starting a session
  8.3.3 Suspending a session
  8.3.4 Terminating a session

Chapter 9. Problem determination
9.1 Troubleshooting tips: Host configuration
  9.1.1 IBM Director logfiles
  9.1.2 Using Event Action Plans
  9.1.3 Restricting discovery scope in TotalStorage Productivity Center
  9.1.4 Following discovery using Windows raswatch utility
  9.1.5 DB2 database checking
  9.1.6 IBM WebSphere tracing and logfile browsing
  9.1.7 SLP and CIM Agent problem determination
  9.1.8 Enabling SLP tracing
  9.1.9 ESS registration
  9.1.10 Viewing Event entries
9.2 Replication Manager problem determination
  9.2.1 Diagnosing an indications problem
  9.2.2 Restarting the replication environment
9.3 Enabling trace logging
  9.3.1 Enabling WebSphere Application Server trace
9.4 Enabling trace logging
  9.4.1 ESS user authentication problem
  9.4.2 SVC Data collection task failure due to previous running task

Chapter 10. Database management and reporting
10.1 DB2 database overview
10.2 Database purging in TotalStorage Productivity Center
  10.2.1 Performance Manager database panel
10.3 IBM DB2 tool suite
  10.3.1 Command Line Tools
  10.3.2 Development Tools
  10.3.3 General Administration Tools
  10.3.4 Monitoring Tools
10.4 DB2 Command Center overview
  10.4.1 Command Center navigation example
10.5 DB2 Command Center custom report example
  10.5.1 Extracting LUN data report
  10.5.2 Command Center report
10.6 Exporting collected performance data to a file
  10.6.1 Control Center
  10.6.2 Data extraction tools, tips and reporting methods
10.7 Database backup and recovery overview
10.8 Backup example

Appendix A. TotalStorage Productivity Center DB2 table formats
A.1 Performance Manager tables
  A.1.1 VPVPD table
  A.1.2 VPCFG table
  A.1.3 VPVOL table
  A.1.4 VPCCH table

Appendix B. Worksheets
B.1 User IDs and passwords
  B.1.1 Server information
  B.1.2 User IDs and passwords to lock the key files
B.2 Storage device information
  B.2.1 IBM Enterprise Storage Server
  B.2.2 IBM FAStT
  B.2.3 IBM SAN Volume Controller

Appendix C. Event management
C.1 Event management introduction
  C.1.1 Understanding events and event actions
  C.1.2 Understanding event filters
  C.1.3 Event Actions
  C.1.4 Event Data Substitution
  C.1.5 Updating Event Plans, Filters, and Actions

Related publications
  IBM Redbooks
  Other Publications
  Online resources
  How to get IBM Redbooks
  Help from IBM
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529viii Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 10. Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

© Copyright IBM Corp. 2004, 2005. All rights reserved. ix
  • 11. Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Eserver®, DB2®, OS/390®, e-business on demand™, Enterprise Storage Server®, QMF™, iSeries™, ESCON®, Redbooks™, z/OS®, FlashCopy®, Redbooks (logo)™, AIX®, Informix®, S/390®, Cloudscape™, Intelligent Miner™, Tivoli Enterprise™, Cube Views™, IBM®, Tivoli Enterprise Console®, CICS®, Lotus®, Tivoli®, DataJoiner®, MVS™, TotalStorage®, DB2 Universal Database™, NetView®, WebSphere®

The following terms are trademarks of other companies:
Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.
Excel, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
EJB, Java, JDBC, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
  • 12. Preface IBM® TotalStorage® Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server®, and FAStT. TotalStorage Productivity Center includes the IBM Tivoli® Bonus Pack for SAN Management, bringing together device management with fabric management, to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator has the ability to configure storage devices, manage the devices, and view the Storage Area Network from a single point. This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities. This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation and configuration of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and how to use them. It is intended for anyone wanting to learn about TotalStorage Productivity Center and how it complements an on demand environment and for those planning to install and use the product.The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center. Mary Lovelace is a Consulting IT Specialist in the International Technical Support Organization. She has more than 20 years of experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and systems support. Jason Bamford is a Certified IT Specialist in the IBM Software Business, United Kingdom. 
He has 21 years of customer experience in finance, commercial, and public sector accounts, deploying mid-range systems in AIX®, Windows®, and other UNIX® variants. An IBM employee for the past eight years, Jason specializes in IBM software storage products and is a subject matter expert in the UK for Tivoli Storage Manager.

Dariusz Ferenc is a Technical Support Specialist with the Storage Systems Group at IBM Poland. He has been with IBM for four years and has nearly 10 years of experience in storage systems. He works in Technical Support in the CEMA region and is an IBM Certified Specialist in various storage products. His responsibilities include providing technical support and designing storage solutions. Darek holds a degree in Computer Science from the Poznan University of Technology, Poland.

Madhav Vaze is an Accredited Senior IT Specialist and ITS Storage Engagement Lead in Singapore, specializing in storage solutions for Open Systems. Madhav has 19 years of experience in the IT services industry and five years of experience in IBM storage hardware and software. He has acquired the Brocade BFCP and SNIA professional certifications.

© Copyright IBM Corp. 2004, 2005. All rights reserved. xi
  • 13. The team: Dariusz, Jason, Mary, Madhav Thanks to the following people for their contributions to this project: Sangam Racherla International Technical Support Organization, San Jose Center Bob Haimowitz ITSO Raleigh Center Diana Duan Michael Liu Richard Kirchofer Paul Lee Thiha Than Bill Warren Martine Wedlake IBM San Jose, California Mike Griese Technical Support Marketing Lead Scott Drummond Program Director Storage Networking Curtis Neal Scott Venuti Open Systems Demo Center, San Jose Russ Smith Storage Software Project Management Jeff Ottman Systems Group TotalStorage Education Architect Doug Dunham Tivoli Swat Teamxii Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 14. Ramani Routray, Almaden Research Center

The original authors of this book are: Ivan Aliprandi, William Andrews, John A. Cooper, Daniel Demer, Werner Eggli, Tom Smythe, Peter Zerbini

Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at: ibm.com/redbooks
Send your comments in an Internet note to: redbook@us.ibm.com
Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE, Building 80-E2, 650 Harry Road, San Jose, California 95120-6099

Preface xiii
  • 16. 1 Chapter 1. IBM TotalStorage Productivity Center overview IBM TotalStorage Productivity Center is software, part of the IBM TotalStorage open software family, designed to provide a single point of control for managing both IBM and non-IBM networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage Fibre Array Storage Technology (FAStT), and IBM TotalStorage DS4000 series. TotalStorage Productivity Center is a solution for customers with storage management requirements, who want to reduce the complexities and costs of storage management, including management of SAN-based storage, while consolidating control within a consistent graphical user interface. While the focus of this book is the IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication components of the IBM TotalStorage Productivity Center, this chapter provides an overview of the entire IBM TotalStorage Open Software Family.© Copyright IBM Corp. 2004, 2005. All rights reserved. 1
  • 17. 1.1 Introduction to IBM TotalStorage Productivity Center
The IBM TotalStorage Productivity Center consists of software components which enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. The TotalStorage Productivity Center is based on the recent standard issued by the Storage Networking Industry Association (SNIA). The standard addresses the interoperability of storage hardware and software within a SAN.

1.1.1 Standards organizations and standards
Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN management standards bodies

Key standards for Storage Management are:
Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 for the CIM schema.
Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).
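To make the CIM/SMI-S model above concrete, the sketch below enumerates storage volumes from a CIM agent. This is a hypothetical illustration, not product code: in practice `conn` would be a live WBEM connection (for example, a `pywbem.WBEMConnection`), but here it is any object with an `EnumerateInstances` method, so the logic can be shown without a real SMI-S agent. The namespace `root/ibm` is an assumption.

```python
# Sketch: enumerating storage volumes from an SMI-S (CIM) agent.
# `conn` is dependency-injected so no real CIM agent is needed here;
# CIM_StorageVolume models capacity as NumberOfBlocks * BlockSize.

def list_storage_volumes(conn, namespace="root/ibm"):
    """Return (name, capacity-in-bytes) pairs for each CIM_StorageVolume."""
    volumes = conn.EnumerateInstances("CIM_StorageVolume", namespace=namespace)
    report = []
    for vol in volumes:
        capacity = vol["NumberOfBlocks"] * vol["BlockSize"]
        report.append((vol["ElementName"], capacity))
    return report
```

Because the connection is injected, the same function works against any SMI-S provider the management server can reach, which is the point of the standard: one client, many vendors' devices.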
  • 18. 1.2 IBM TotalStorage Open Software family
The IBM TotalStorage Open Software Family is designed to provide a full range of capabilities, including storage infrastructure management, Hierarchical Storage Management (HSM), archive management, and recovery management. The On Demand storage environment is shown in Figure 1-2. The hardware infrastructure is a complete range of IBM storage hardware and devices providing flexibility in choice of service quality and cost structure. On top of the hardware infrastructure is the virtualization layer. The storage virtualization layer is infrastructure software designed to pool storage assets, enabling optimized use of storage assets across the enterprise and the ability to modify the storage infrastructure with minimal or no disruption to application services. The next layer is composed of storage infrastructure management, to help enterprises understand and proactively manage their storage infrastructure in the on demand world; hierarchical storage management, to help control growth; archive management, to manage the cost of storing huge quantities of data; and recovery management, to ensure the recoverability of data. The top layer is storage orchestration, which automates workflows to help eliminate human error.

Figure 1-2 Enabling customers to move toward On Demand

Previously we discussed the next steps, or entry points, into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3 on page 4.
  • 19. Figure 1-3 IBM TotalStorage open software family

1.3 IBM TotalStorage Productivity Center
The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to On Demand storage needs. The IBM TotalStorage Productivity Center offering is a powerful set of tools designed to help simplify the management of complex storage network environments. The IBM TotalStorage Productivity Center consists of TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, TotalStorage Productivity Center for Data (formerly Tivoli Storage Resource Manager), and TotalStorage Productivity Center for Fabric (formerly Tivoli SAN Manager). Taking a closer look at storage infrastructure management (see Figure 1-4 on page 5), we focus on four subject matter experts who empower storage administrators to do their work effectively:
Data subject matter expert
SAN Fabric subject matter expert
Disk subject matter expert
Replication subject matter expert
  • 20. Figure 1-4 Centralized, automated storage infrastructure management

1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
The Data subject matter expert has intimate knowledge of how storage is used, for example whether the data is used by a file system or a database application. Figure 1-5 on page 6 shows the role of the Data subject matter expert, which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).
  • 21. Figure 1-5 Monitor and Configure the Storage Infrastructure Data area

Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and to utilize their storage resources more efficiently. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control.

Features of the TotalStorage Productivity Center for Data are:
Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used
File-system and file-level evaluation that uncovers categories of files which, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up, and managed
Automated control through customizable policies, with actions that can include centralized alerting, distributed responsibility, and fully automated response
Prediction of future growth and future at-risk conditions using historical information

Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently and more accurately predict future storage growth.
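The file-level evaluation described above — identify classes of data and report the space each class consumes — can be sketched as a small classifier. This is an illustrative sketch only; the class-to-extension mapping and the file names are made up, not the product's actual classification rules.

```python
# Sketch of file-level classification of the kind Productivity Center for
# Data performs: group files into classes and total the space each class
# consumes. The class-to-extension mapping is purely illustrative.

import os

FILE_CLASSES = {
    "temporary": {".tmp", ".bak"},
    "media":     {".mp3", ".avi"},
}

def classify(path):
    """Map a file path to a data class by its extension."""
    ext = os.path.splitext(path)[1].lower()
    for cls, exts in FILE_CLASSES.items():
        if ext in exts:
            return cls
    return "other"

def space_by_class(files):
    """files: iterable of (path, size_in_bytes); returns bytes per class."""
    totals = {}
    for path, size in files:
        cls = classify(path)
        totals[cls] = totals.get(cls, 0) + size
    return totals
```

A report like this is what lets an administrator see, for example, how much space temporary files would free if deleted or archived.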
  • 22. TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:
Storage from a host perspective: manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files
Storage from an application perspective: monitor and manage the storage activity inside different database entities, including instance, tablespace, and table
Storage utilization, providing chargeback information

Architecture
The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts. With TotalStorage Productivity Center for Data, you can:
Monitor virtually any host
Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network
For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.

1.3.2 Fabric subject matter expert: Productivity Center for Fabric
Storage infrastructure management for the Fabric covers the Storage Area Network (SAN). To handle and manage SAN events you need a comprehensive tool. The tool must have a single point of operation and must be able to perform all the tasks from the SAN. This role is filled by the TotalStorage Productivity Center for Fabric (formerly the IBM Tivoli SAN Manager), which is a part of the IBM TotalStorage Productivity Center. The Fabric subject matter expert is the expert in the SAN.
Its role is to:
Discover fabric information
Provide the ability to specify fabric policies
– Which HBAs to use for each host, and for what purpose
– Objectives for zone configuration (for example, shielding host HBAs from one another, and performance)
Automatically modify the zone configuration

TotalStorage Productivity Center for Fabric provides real-time visual monitoring of SANs, including heterogeneous switch support, and is a central point of control for SAN configuration (including zoning). It automates the management of heterogeneous storage area networks, resulting in:
Improved application availability
– Predicting storage network failures before they happen, enabling preventive maintenance
– Accelerated problem isolation when failures do happen
  • 23. Optimized storage resource utilization, by reporting on storage network performance
Enhanced storage personnel productivity: Tivoli SAN Manager creates a single point of control, administration, and security for the management of heterogeneous storage networks

Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert.

Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area

TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. It can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric:
Manages fabric devices (switches) through outband management
Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host)
Monitors the network and collects events and traps
Launches vendor-specific SAN element management applications from the TotalStorage Productivity Center for Fabric Console
Discovers and manages iSCSI devices
Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor)

TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.
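One of the zoning policy objectives mentioned earlier — shielding host HBAs from one another — is conventionally met with single-initiator zoning: one zone per host HBA, containing only that HBA plus the storage ports it needs. The sketch below generates such a configuration; it is an illustration of the technique, not the product's zoning engine, and the WWPN values in the usage are invented.

```python
# Sketch: generating a zone configuration that shields host HBAs from one
# another (single-initiator zoning). Each zone holds exactly one initiator
# WWPN plus the storage target ports, so initiators never see each other.

def single_initiator_zones(host_hbas, storage_ports):
    """host_hbas: {host_name: [hba_wwpn, ...]}.
    Returns {zone_name: set of member WWPNs}."""
    zones = {}
    for host, hbas in host_hbas.items():
        for i, wwpn in enumerate(hbas):
            zone_name = f"z_{host}_{i}"
            zones[zone_name] = {wwpn} | set(storage_ports)
    return zones
```

Automating this step is exactly the kind of "automatically modify the zone configuration" task the Fabric subject matter expert performs, since hand-editing zones is a common source of SAN outages.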
  • 24. TotalStorage Productivity Center for Fabric components
The major components of the TotalStorage Productivity Center for Fabric include:
A manager or server, running on a SAN managing server
Agents, running on one or more managed hosts
A management console, which is by default on the Manager system, plus optional additional remote consoles
Outband agents, consisting of vendor-supplied MIBs for SNMP

There are two additional components which are not included in the TotalStorage Productivity Center:
IBM Tivoli Enterprise™ Console (TEC), which is used to receive events generated by TotalStorage Productivity Center for Fabric. Once forwarded to TEC, these can be consolidated with events from other applications and acted on according to enterprise policy.
IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by the TotalStorage Productivity Center for Fabric. The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for analysis, in order to give management the ability to access and analyze information about its business.

The TotalStorage Productivity Center for Fabric functions are distributed across the Manager and the Agent.

TotalStorage Productivity Center for Fabric Server
Performs initial discovery of the environment:
– Gathers and correlates data from agents on managed hosts
– Gathers data from SNMP (outband) agents
– Graphically displays SAN topology and attributes
Provides customized monitoring and reporting through NetView®
Reacts to operational events by changing its display
(Optionally) forwards events to Tivoli Enterprise Console® or SNMP managers

TotalStorage Productivity Center for Fabric Agent
Gathers information about:
SANs, by querying switches and devices for attribute and topology information
Host-level storage, such as file systems and LUNs
Events and other information detected by HBAs
Forwards topology and event information to the Manager

Discover SAN components and devices
TotalStorage
Productivity Center for Fabric uses two methods to discover information about the SAN: outband discovery and inband discovery.

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs that support SNMP.
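The outband path just described — ask a switch for its attributes over IP via SNMP, and cope when the device does not answer — can be sketched as below. This is an assumption-laden illustration: the `snmp_get` callable is injected (in practice it might wrap a library such as pysnmp), and the OID names are symbolic placeholders, not real MIB object names.

```python
# Sketch of outband discovery: query a fabric switch over the IP network
# and degrade gracefully when it is unreachable. `snmp_get` is injected so
# no real SNMP stack is needed; OID names here are symbolic placeholders.

def discover_switch(address, snmp_get):
    """Return basic attributes of one switch, or None if unreachable."""
    try:
        name  = snmp_get(address, "sysName")
        ports = snmp_get(address, "fcPortCount")
    except OSError:          # timeout / unreachable over the IP network
        return None
    return {"address": address, "name": name, "ports": ports}
```

Returning None rather than raising lets the manager keep polling the rest of the fabric when one device is down, which mirrors the product's behavior of continuing monitoring over whichever path remains available.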
  • 25. In outband discovery, all communications occur over the IP network: TotalStorage Productivity Center for Fabric requests information over the IP network from a switch, using SNMP queries on the device. The device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used: TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host. That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network. The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network.

TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.

TotalStorage Productivity Center for Fabric benefits
TotalStorage Productivity Center for Fabric discovers the SAN infrastructure and monitors the status of all the discovered components. Through Tivoli NetView, the administrator can provide reports on faults on components (either individually or in groups, or "smartsets", of components). This helps increase data availability for applications, so the company can either be more efficient or maximize the opportunity to produce revenue.
TotalStorage Productivity Center for Fabric helps the storage administrator:
Prevent faults in the SAN infrastructure through reporting and proactive maintenance
Identify and resolve problems in the storage infrastructure quickly when a problem occurs
Provide fault isolation of SAN links

For more information about the TotalStorage Productivity Center for Fabric, refer to IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848.

1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
The Disk subject matter expert manages the disk systems. It discovers and classifies all disk systems that exist and draws a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor and configure disk systems, create disks, and perform LUN masking. It also provides performance trending and performance threshold I/O analysis for both real disks and virtual disks, as well as automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component). The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 11. The disk systems' monitoring and configuration needs must be covered by a comprehensive management tool such as the TotalStorage Productivity Center for Disk.
  • 26. Figure 1-7 Monitor and configure the Storage Infrastructure Disk area

The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
TotalStorage Productivity Center for Disk provides the following functions:
Collect data from devices
The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2® database tables.
Configure performance thresholds
You can use the Productivity Center for Disk to set performance thresholds for each device type.
Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs.
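The collection model just described pairs a device group with a start time, a stop time, and a sampling frequency. As a minimal sketch of that idea (the class and field names here are hypothetical; real tasks are defined through the product GUI and stored in DB2):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Toy model of a performance collection task: one task per group of devices
# of the same type, with a start time, stop time, and sampling frequency.
@dataclass
class PerformanceCollectionTask:
    device_type: str              # e.g. "ESS" or "SVC"
    devices: list
    start: datetime
    stop: datetime
    sample_interval: timedelta    # the sampling frequency
    samples: list = field(default_factory=list)

    def due_sample_times(self):
        """Yield every timestamp at which a sample should be taken."""
        t = self.start
        while t <= self.stop:
            yield t
            t += self.sample_interval

task = PerformanceCollectionTask(
    device_type="ESS",
    devices=["ESS-2105-001"],
    start=datetime(2005, 9, 1, 8, 0),
    stop=datetime(2005, 9, 1, 9, 0),
    sample_interval=timedelta(minutes=15),
)
print(len(list(task.due_sample_times())))  # 5 samples: 8:00, 8:15, 8:30, 8:45, 9:00
```

In the product each sample row lands in a DB2 table; this sketch only shows the scheduling arithmetic.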
You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
- Monitor performance metrics across storage subsystems from a single console.
- Receive timely alerts to enable event action based on customer policies.
- View performance data from the Productivity Center for Disk database. You can view performance data in both graphical and tabular forms.

Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be "started", which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. Once started, a gauge is displayed in its own window and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
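The threshold behaviour described above, with a warning and a critical level per metric and a configured action (log or trigger an event), can be sketched as follows. The metric names and level values here are purely illustrative, not the IBM-recommended values:

```python
# Illustrative thresholds only -- not the IBM-recommended values.
THRESHOLDS = {
    "disk_utilization_pct": {"warning": 50.0, "critical": 80.0},
    "nvs_full_pct": {"warning": 3.0, "critical": 10.0},
}

def check_sample(metric: str, value: float, action):
    """Compare one sample against its thresholds; fire `action` if exceeded."""
    levels = THRESHOLDS.get(metric)
    if levels is None:
        return None                       # metric not eligible for checking
    if value >= levels["critical"]:
        action(metric, value, "critical")
        return "critical"
    if value >= levels["warning"]:
        action(metric, value, "warning")
        return "warning"
    return None

events = []
check_sample("disk_utilization_pct", 85.0, lambda m, v, s: events.append((m, v, s)))
print(events)  # [('disk_utilization_pct', 85.0, 'critical')]
```

The `action` callback stands in for the product's configurable response: logging the occurrence or triggering an event.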
- Focus on storage optimization through identification of the best LUN. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several user-controlled variables, such as the required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN.

For detailed information about how to use the functions of TotalStorage Productivity Center for Disk refer to Chapter 6, “TotalStorage Productivity Center for Disk use” on page 227.

1.3.4 Replication subject matter expert: Productivity Center for Replication

The Replication subject matter expert’s job is to provide a single point of control for all replication activities. This role is filled by TotalStorage Productivity Center for Replication. Given a set of source volumes to be replicated, Productivity Center for Replication will find the appropriate targets, perform all the configuration actions required, and ensure the source and target volume relationships are set up. Given a set of source volumes that represent an application, Productivity Center for Replication will group these in a consistency group, give that consistency group a name, and allow you to start replication on the application. Productivity Center for Replication will start up all replication pairs and monitor them to completion.
If any of the replication pairs fail, meaning the application is out of sync, the Productivity Center for Replication will suspend them until the problem is resolved, resync
them and resume the replication. The Productivity Center for Replication provides complete management of the replication process.

The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like TotalStorage Productivity Center for Replication.

Figure 1-8 Monitor and configure the Storage Infrastructure Replication area

Functions

Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy) and the Point-in-Time Copy (also known as FlashCopy®). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS.

Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:
- Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
- Set up a group for replication.
- Create, save, and name a replication task.
- Schedule a replication session with the user interface:
  – Create Session Wizard
  – Select Source Group
  – Select Copy Type
  – Select Target Pool
  – Save Session
- Start a replication session.

A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions refer to Chapter 8, “TotalStorage Productivity Center for Replication use” on page 355.

1.4 IBM TotalStorage Productivity Center

All the subject matter experts, for Data, Fabric, Disk, and Replication, are components of the IBM TotalStorage Productivity Center. The IBM TotalStorage Productivity Center is the first offering to be delivered as part of the IBM TotalStorage Open Software Family. It is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center allows you to manage your storage infrastructure using existing storage management products — Productivity Center for Data, Productivity Center for Fabric, Productivity Center for Disk, and Productivity Center for Replication — from one physical place. The IBM TotalStorage Productivity Center components can be launched from the IBM TotalStorage Productivity Center launch pad as shown in Figure 1-9 on page 15.
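The replication session life cycle described in 1.3.4 — start all pairs together, suspend (freeze) the group on any pair failure to keep it consistent, then resync and resume — can be sketched as a toy state machine. All names here are illustrative; the product drives this through its GUI and CLI:

```python
# Toy state machine for a replication session: pairs are started together
# and, on any failure, the whole group is suspended to stay consistent.
class ReplicationSession:
    def __init__(self, name, source_group, copy_type, target_pool):
        self.name = name
        # Pair each source volume with a target drawn from the target pool.
        self.pairs = [(src, target_pool.pop(0)) for src in source_group]
        self.copy_type = copy_type          # "PPRC" or "FlashCopy"
        self.state = "defined"

    def start(self):
        self.state = "running"              # start all replication pairs

    def on_pair_failure(self):
        self.state = "suspended"            # freeze: application is out of sync

    def resync(self):
        if self.state == "suspended":
            self.state = "running"          # resume once the problem is resolved

session = ReplicationSession("payroll", ["vol1", "vol2"], "PPRC", ["tgt1", "tgt2"])
session.start()
session.on_pair_failure()
session.resync()
print(session.state)  # running
```

The point of the sketch is the group-level behaviour: a single pair failure suspends the whole consistency group, never an individual pair in isolation.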
Figure 1-9 IBM TotalStorage Productivity Center Launch Pad

The IBM TotalStorage Productivity Center establishes the foundation for IBM’s e-business On Demand technology. In an On Demand environment we need the function to provide IT resources On Demand, when the resources are needed by an application to support the customer’s business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area, more automation like the IBM TotalStorage Productivity Center launch pad provides. Automation means workflow, and workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM uses the IBM Tivoli Intelligent Orchestrator and the IBM Tivoli Provisioning Manager to satisfy the resource requests in the e-business on demand™ environment in the server arena.

1.4.1 Productivity Center for Disk and Productivity Center for Replication

Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console. This software solution is designed specifically for managing networked storage components based on SMI-S, including:
- IBM TotalStorage SAN Volume Controller
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Fibre Array Storage Technology (FAStT)
- IBM TotalStorage DS4000 series
- SMI-S enabled devices
Figure 1-10 Managing multiple devices

Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and with SAN Management products from other vendors.

In a SAN environment, multiple devices work together to create a storage solution. Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 17 provides an overview of Productivity Center for Disk and Productivity Center for Replication.
Figure 1-11 Productivity Center overview (the figure shows the stack: Performance Manager and Replication Manager on top of the Device Manager, IBM Director, IBM TotalStorage Productivity Center for Fabric, WebSphere Application Server, and DB2)

Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:
- Device Manager: common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
- Performance Manager: provided by Productivity Center for Disk
- Replication Manager: provided by Productivity Center for Replication

Device Manager

The Device Manager is responsible for the discovery of supported devices; for collecting asset, configuration, and availability data from the supported devices; and for providing a limited topology view of the storage usage relationships between those devices.

The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices and creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console as shown in Figure 1-12 on page 18.
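SLP discovery works by multicasting a Service Request for a given service type (CIM servers typically register as `service:wbem`). As an illustration of what goes over the wire, and not of the product's own implementation, the sketch below builds a minimal SLPv2 Service Request message as defined by RFC 2608:

```python
import struct

def build_srvrqst(service_type: str, scope: str = "DEFAULT", xid: int = 1) -> bytes:
    """Build a minimal SLPv2 Service Request (RFC 2608) for multicast discovery."""
    lang = b"en"
    body = b""
    body += struct.pack("!H", 0)                 # empty previous-responder list
    st = service_type.encode()
    body += struct.pack("!H", len(st)) + st      # service type, e.g. service:wbem
    sc = scope.encode()
    body += struct.pack("!H", len(sc)) + sc      # scope list
    body += struct.pack("!H", 0)                 # empty predicate
    body += struct.pack("!H", 0)                 # empty SLP SPI
    total = 14 + len(lang) + len(body)           # header is 14 bytes + language tag
    header = struct.pack("!BB", 2, 1)            # version 2, function 1 = SrvRqst
    header += total.to_bytes(3, "big")           # 3-byte message length
    header += struct.pack("!H", 0x2000)          # flags: multicast request
    header += (0).to_bytes(3, "big")             # next-extension offset
    header += struct.pack("!H", xid)             # transaction ID
    header += struct.pack("!H", len(lang)) + lang
    return header + body

pkt = build_srvrqst("service:wbem")
print(len(pkt))
```

A discovery client would send this datagram to the SLP multicast address 239.255.255.253 on UDP port 427 and collect the Service Reply messages, each carrying the URL of a CIM server.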
Figure 1-12 IBM Director Console

Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts with, and provides some SAN management functionality through, IBM Tivoli SAN Manager when it is installed.

The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.

SAN Management

When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The SAN Manager functions that are exploited are:
- The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
- The ability to retrieve and to modify the zoning configuration on the SAN
- The ability to register for event notification, to ensure Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when host LUN configurations change
Performance Manager function

The Performance Manager function provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the Performance Manager is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The Performance Manager enables you to perform sophisticated performance analysis for the supported storage devices.

Functions
- Collect data from devices. The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
- Configure performance thresholds. You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event.
The threshold settings can vary by individual device. The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to checking being performed by active performance collection tasks. Examples of threshold metrics include:
- Disk utilization value
- Average cache hold time
- Percent of sequential I/Os
- I/O rate
- NVS full value
- Virtual disk I/O rate
- Managed disk I/O rate

There is a user interface that supports threshold settings, enabling a user to:
- Modify a threshold property for a set of devices of like type.
- Modify a threshold property for a single device.
- Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices.
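The threshold-settings operations listed above (modify for all devices of like type, modify per device, reset to the recommended value, show a summary) can be sketched with a toy data model. The class, metric name, and recommended value below are all illustrative; the real settings live in the Productivity Center database:

```python
# Hypothetical stand-in for an IBM-recommended default value.
RECOMMENDED = {"disk_utilization_pct": 80.0}

class ThresholdSettings:
    """Toy model of per-device threshold properties for one device type."""
    def __init__(self, devices):
        self.values = {d: dict(RECOMMENDED) for d in devices}

    def set_for_all(self, metric, value):
        """Modify a threshold property for all devices of like type."""
        for d in self.values:
            self.values[d][metric] = value

    def set_for_device(self, device, metric, value):
        """Modify a threshold property for a single device."""
        self.values[device][metric] = value

    def reset(self, device, metric):
        """Reset a threshold property to the recommended value."""
        self.values[device][metric] = RECOMMENDED[metric]

    def summary(self, metric):
        """Show a summary of one threshold property across all devices."""
        return {d: v[metric] for d, v in self.values.items()}

s = ThresholdSettings(["ess1", "ess2"])
s.set_for_device("ess1", "disk_utilization_pct", 70.0)
print(s.summary("disk_utilization_pct"))  # {'ess1': 70.0, 'ess2': 80.0}
```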
- Reset a threshold property to the IBM-recommended value (if defined) for a single device.
- Show a summary of threshold properties for all of the devices of like type.
- View performance data from the Performance Manager database.

Gauges

The Performance Manager supports a performance-type gauge, which presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task; the maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data.

The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges; data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range.
The date/time range can be changed after the initial gauge window is displayed. For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.

Database services for managing the collected performance data

The performance data collected from the supported devices is stored in a DB2 database. Database services are provided that enable you to manage the potential volumes of data.

Database purge function

A database purge function deletes older performance data samples and, optionally, the associated exception data. Flexibility is built into the purge function: it enables you to specify the data to purge, allowing important data to be maintained for trend purposes. You can specify to purge all of the sample data from all types of devices older than a specified number of days, or to purge only the data associated with a particular type of device. If threshold checking was enabled at the time of data collection, you can exclude data that exceeded at least one threshold value from being purged. You can specify the number of days that data is to remain in the database before being purged; sample data and, optionally, exception data older than the specified number of days will be purged. A reorganization function is performed on the database tables after the sample data is deleted from the respective database tables.
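The purge rules above (age in days, optional device-type filter, optional exclusion of samples that exceeded a threshold) can be expressed as the kind of SQL a purge job might issue. The table and column names below are hypothetical, not the actual Productivity Center schema:

```python
from typing import Optional

def build_purge_sql(days: int, device_type: Optional[str] = None,
                    keep_exceptions: bool = True) -> str:
    """Compose a DELETE for the purge rules; DB2-style date arithmetic."""
    where = [f"SAMPLE_TIME < CURRENT TIMESTAMP - {days} DAYS"]
    if device_type:
        where.append(f"DEVICE_TYPE = '{device_type}'")
    if keep_exceptions:
        # Keep samples that exceeded at least one threshold value.
        where.append("THRESHOLD_EXCEEDED = 0")
    return "DELETE FROM PERF_SAMPLES WHERE " + " AND ".join(where)

print(build_purge_sql(90, device_type="ESS"))
```

After such a delete the product also reorganizes the affected tables, which this sketch does not show.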
Database information function

Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the database percent full. This function can be invoked from either the Web user interface or the CLI.

Volume Performance Advisor

The advanced performance analysis provided by Productivity Center for Disk is intended to address the challenge of allocating more storage in a storage system so that the users of the newly allocated storage achieve the best possible performance. The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated (that is, the best placement from a performance perspective). It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several variables that are user-controlled, such as required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function so that, for example, when a new LUN is added to the ESS, the Device Manager can seamlessly select the best possible LUN.

Replication Manager function

Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Productivity Center for Replication administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy).
Currently replication functions are provided for the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensure that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Multiple pairs are handled as a consistent unit, and Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments.

Productivity Center for Replication is controlled by applying predefined policies to Groups and Pools, which are groupings of LUNs that are managed by the Replication Manager. It provides the ability to copy a Group to a Pool, in which case it creates valid mappings for source and target volumes and optionally presents them to the user for verification that the mapping is acceptable. In this case, it manages Pool membership by removing target volumes from the pool when they are used, and by returning them to the pool only if the target is specified as being discarded when it is deleted.

1.4.2 Event services

At the heart of any systems management solution is the ability to alert the system administrator in the event of a system problem. IBM Director provides a method of alerting called Event Action Plans, which enables the definition of event triggers independently from actions that might be taken. An event is an occurrence of a predefined condition relating to a specific managed object that identifies a change in a system process or a device. The notification of that change can be
generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. IBM Director includes sophisticated event-handling support: Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director event management encompasses the following concepts:
- Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
- Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
- Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
- Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.

The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly which action plans are invoked for each specific event.

Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized, and action plans can be created based on filter criteria for these events.
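The concepts above — filters that decide whether an incoming event is acted on, and plans that bind filters to actions — can be modelled in a few lines. This is a toy illustration of the pattern, not IBM Director's actual event API:

```python
# Toy Event Action Plan: filters decide whether an event is acted on,
# and a plan binds one or more filters to one or more actions.
def severity_filter(min_severity):
    """Return a filter that passes events at or above a severity level."""
    order = ["info", "warning", "critical"]
    return lambda event: order.index(event["severity"]) >= order.index(min_severity)

class EventActionPlan:
    def __init__(self, filters, actions):
        self.filters, self.actions = filters, actions

    def handle(self, event):
        # All filters must pass before any action fires.
        if all(f(event) for f in self.filters):
            for act in self.actions:
                act(event)

log = []
plan = EventActionPlan([severity_filter("warning")], [log.append])
plan.handle({"source": "ESS-001", "severity": "critical"})   # acted on
plan.handle({"source": "ESS-001", "severity": "info"})       # filtered out
print(len(log))  # 1
```

In the product, the equivalent of `log.append` would be a configured action such as writing the event log entry or sending a notification.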
The default action plan is to log all events in the event log. Productivity Center creates additional event families, and event types within those families, that will be listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans will be provided; an example is the action to indicate the amount of historical data to be kept.

1.5 Taking steps toward an On Demand environment

So what is an On Demand operating environment? It is not a specific set of hardware and software. Rather, it is an environment that supports the needs of the business, allowing it to become and remain responsive, variable, focused, and resilient. An On Demand operating environment unlocks the value within the IT infrastructure to be applied to solving business problems. It is an integrated platform, based on open standards, to enable rapid deployment and integration of business applications and processes. Combined with an environment that allows true virtualization and automation of the infrastructure, it enables delivery of IT capability On Demand. An On Demand operating environment must be:
- Flexible
- Self-managing
- Scalable
- Economical
- Resilient
- Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several next steps that you can take to move to the On Demand environment:
- Address constant changes to the storage infrastructure (upgrading or changing hardware, for example) with virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- Work toward the ultimate goal of eliminating human errors by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take toward an On Demand environment, the results will be improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.
Chapter 2. Key concepts

This chapter gives you an understanding of the basic concepts that you must know in order to use TotalStorage Productivity Center. These concepts include standards for storage management, the Service Location Protocol (SLP), the Common Information Model (CIM) agent, and the Common Information Model Object Manager (CIMOM).
2.1 Standards organizations and standards

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 2-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 2-1 SAN standards bodies

Key standards for storage management are:
- The Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 of the CIM schema.
- The Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).

2.1.1 CIM/WBEM management model

CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as storage subsystems, Fibre Channel switches, and NAS devices. IBM’s intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.
  • 42. CIM/WBEM technology uses a powerful human- and machine-readable language called the Managed Object Format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications. 2.2 Storage Networking Industry Association The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well-established storage component vendors as well as emerging storage technology companies. The SNIA mission is to ensure that storage networks become efficient, complete, and trusted solutions across the IT community. The SNIA vision is to provide a point of cohesion for developers of storage and networking products, in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market. 2.2.1 The SNIA Shared Storage Model IBM is an active member of SNIA and fully supports SNIA’s goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 2-2. Figure 2-2 The SNIA Storage Model IBM is committed to deliver best-of-breed products in all aspects of the SNIA storage model, including:
  • 43. Block aggregation
The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy. Block aggregation, or block-level virtualization, is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
– Space management through combining or splitting native storage into new, aggregated block storage
– Striping through spreading the aggregated block storage across several native storage devices
– Redundancy through point-in-time copy and both local and remote mirroring
File aggregation or file-level virtualization
The file/record layer in the SNIA model is responsible for packing items such as files and databases into larger entities such as block-level volumes and storage devices. File aggregation, or file-level virtualization, is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
– Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
– Enhance productivity by providing centralized and simplified management through policy-based storage management automation
– Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers
In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions will adhere to open industry standards.
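To make the MOF language mentioned in 2.1.1 concrete, the fragment below sketches what a storage-volume class definition looks like in MOF. It is a simplified, illustrative example only, not an excerpt from the actual CIM schema; the class name and the reduced set of properties are hypothetical.

```mof
// Simplified, hypothetical MOF fragment -- not from the real CIM schema
[Description("A logical storage volume presented by a subsystem")]
class Example_StorageVolume : CIM_StorageVolume
{
    [Key, Description("Opaque, durable identifier for the volume")]
    string DeviceID;

    [Description("Size of a block in bytes")]
    uint64 BlockSize;

    [Description("Total number of blocks on the volume")]
    uint64 NumberOfBlocks;
};
```

A MOF compiler can turn such a declaration into data type definitions and interface stubs, which is how management applications stay vendor-independent.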
For more information about SMI-S/CIM/WBEM, see the SNIA and DMTF Web sites: http://www.snia.org http://www.dmtf.org 2.2.2 SMI Specification SNIA has fully adopted and enhanced the CIM standard for storage management in its SMI Specification (SMI-S). SMI-S was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMI-S is to standardize the management interfaces so that management applications can utilize these and provide cross-device management. This means that a newly introduced device can be immediately managed, as it will conform to the standards. SMI-S extends CIM/WBEM with the following features: A single management transport: within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMI-S. A complete, unified, and rigidly specified object model: SMI-S defines “profiles” and “recipes” within the CIM that enable a management client to reliably utilize a component vendor’s implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
  • 44. Consistent use of durable names: as a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time. Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems such that complex storage networking topologies can be successfully mapped and reliably controlled. An automated discovery system: SMI-S compliant products, when introduced in a SAN environment, will automatically announce their presence and capabilities to other constituents. Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager. The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform, and enabling them to run on different platforms. The SNIA will also provide interoperability tests, which will help vendors test whether their applications and devices conform to the standard. 2.2.3 Integrating existing devices into the CIM model As these standards are still evolving, we cannot expect that all devices will support the native CIM interface; because of this, SMI-S introduces CIM Agents and CIM Object Managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMI-S. An agent is used for one device, and an object manager for a set of devices. This type of operation is also called the proxy model and is shown in Figure 2-3. The CIM Agent or CIM Object Manager (CIMOM) will translate a proprietary management interface to the CIM interface. The CIM Agent for the IBM TotalStorage Enterprise Storage Server includes a CIMOM inside it. Figure 2-3 CIM Agent / Object Manager
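The proxy model described above can be sketched in a few lines of code. The Python sketch below is purely illustrative (the class and method names are invented and are not part of any real CIM toolkit): a single object-manager process dispatches standard requests to pluggable, device-specific providers.

```python
class Provider:
    """Device-specific plug-in: translates a standard CIM-style
    request into calls on a proprietary device interface."""
    def enumerate_instances(self, class_name):
        raise NotImplementedError

class EssProvider(Provider):
    def enumerate_instances(self, class_name):
        # A real provider would call the ESS proprietary API here;
        # stubbed out for illustration.
        return [{"class": class_name, "device": "ESS"}]

class SvcProvider(Provider):
    def enumerate_instances(self, class_name):
        return [{"class": class_name, "device": "SVC"}]

class ObjectManager:
    """One CIMOM-like process serving several device types via providers."""
    def __init__(self):
        self._providers = []

    def register(self, provider):
        self._providers.append(provider)

    def enumerate_instances(self, class_name):
        # Fan the standard request out to every registered provider.
        results = []
        for provider in self._providers:
            results.extend(provider.enumerate_instances(class_name))
        return results
```

Registering one provider per device type is what lets a single object-manager installation answer a standard request for more than one kind of subsystem.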
  • 45. In the future, more and more devices will be natively CIM compliant, and will therefore have a built-in agent as shown in the “Embedded Model” in Figure 2-3 on page 29. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage networking technology faster and build larger, more powerful networks. 2.2.4 CIM Agent implementation When a CIM Agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. This interface enables TotalStorage Productivity Center for Data, TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, IBM Director, and vendor tools to manage the SAN infrastructure more effectively. By implementing a standard interface over all devices, an open environment is created in which tools from a variety of vendors can work together. This reduces the cost of developing integrated management applications, installing and configuring management applications, and managing the SAN infrastructure. Figure 2-4 is an overview of the CIM agent. Figure 2-4 CIM agent overview The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices.
When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. 2.2.5 CIM Object Manager The SNIA SMI-S standard designates that either a proxy or an embedded agent may be used to implement CIM. In each case, the CIM objects are supported by a CIM Object Manager. External applications communicate with CIM through HTTP to exchange XML messages that are used to configure and manage the device. In a proxy configuration, the CIMOM runs outside of the device and can manage multiple devices. In this case, a provider component is installed into the CIMOM to enable the CIMOM to manage specific devices such as the ESS or SAN Volume Controller. The providers adapt the CIMOM to work with different devices and subsystems. In this way, a single CIMOM installation can be used to access more than one device type, and more than one device of each type on a subsystem. The CIMOM acts as a catcher for requests that are sent from storage management applications. The interactions between catcher and sender use the language and models defined by the SMI-S standard.
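As a sketch of what such an XML-over-HTTP exchange looks like on the wire, the following Python function builds a minimal CIM-XML EnumerateInstances request of the kind a client POSTs to a CIMOM (commonly on port 5988 for HTTP or 5989 for HTTPS). The element and header names follow the DMTF CIM-XML mapping; the helper function itself and the target class are illustrative, and no request is actually sent.

```python
def build_enumerate_instances_request(namespace, class_name, message_id="1001"):
    """Return (headers, body) for a minimal CIM-XML EnumerateInstances call."""
    # A namespace like "root/cimv2" becomes a chain of NAMESPACE elements.
    ns_path = "".join(
        '<NAMESPACE NAME="%s"/>' % part for part in namespace.split("/")
    )
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<CIM CIMVERSION="2.0" DTDVERSION="2.0">'
        '<MESSAGE ID="%s" PROTOCOLVERSION="1.0"><SIMPLEREQ>'
        '<IMETHODCALL NAME="EnumerateInstances">'
        '<LOCALNAMESPACEPATH>%s</LOCALNAMESPACEPATH>'
        '<IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="%s"/></IPARAMVALUE>'
        '</IMETHODCALL></SIMPLEREQ></MESSAGE></CIM>'
    ) % (message_id, ns_path, class_name)
    headers = {
        "Content-Type": 'application/xml; charset="utf-8"',
        "CIMOperation": "MethodCall",       # marks the POST as a CIM operation
        "CIMMethod": "EnumerateInstances",  # intrinsic method being invoked
        "CIMObject": namespace,             # target namespace of the call
    }
    return headers, body
```

The CIMOM answers with a matching CIM-XML SIMPLERSP document listing the instances, which is what lets any vendor's client talk to any compliant agent.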
  • 46. This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions. Figure 2-5 shows the CIM enablement model. Figure 2-5 CIM enablement model 2.3 Common Information Model (CIM) The Common Information Model (CIM) Agent provides a means by which a device can be managed by common building blocks rather than proprietary software. If a device is CIM-compliant, software that is also CIM-compliant can manage the device. Vendor applications can benefit from adopting the Common Information Model because they can manage CIM-compliant devices in a common way, rather than using device-specific programming interfaces. Using CIM, you can perform tasks in a consistent manner across devices and vendors. A CIM agent typically involves the following components: Agent code: an open-systems standard that interprets CIM requests and responses as they transfer between the client application and the device. CIM Object Manager (CIMOM): the common conceptual framework for data management that receives, validates, and authenticates the CIM requests from the client application. It then directs the requests to the appropriate component or device provider. Client application: a storage management program, like TotalStorage Productivity Center, that initiates CIM requests to the CIM agent for the device. Device: the storage server that processes and hosts the client application requests. Device provider: a device-specific handler that serves as a plug-in for the CIM. That is, the CIMOM uses the handler to interface with the device.
  • 47. Service Location Protocol (SLP): a directory service that the client application calls to locate the CIMOM. 2.3.1 How the CIM Agent works The CIM Agent typically works in the following way (see Figure 2-6): (1) The client application locates the CIMOM by calling an SLP directory service. (2) When the CIMOM is first invoked, (3) it registers itself with the SLP and supplies its location, IP address, port number, and the type of service it provides. (4) With this information, the client application starts to communicate directly with the CIMOM. The client application then (5) sends CIM requests to the CIMOM. As requests arrive, the CIMOM validates and authenticates each request. (6) It then directs the requests to the appropriate functional component of the CIMOM or to a device provider. (7) The provider makes calls to a device-unique programming interface on behalf of the CIMOM to satisfy the client application requests (steps 8 through 10 return the results). Figure 2-6 CIM Agent work flow 2.4 Service Location Protocol (SLP) The Service Location Protocol (SLP) is an Internet Engineering Task Force (IETF) standard, documented in Requests for Comments (RFCs) 2165, 2608, 2609, 2610, and 2614. SLP provides a scalable framework for the discovery and selection of network services.
  • 48. SLP enables the discovery and selection of generic services, which can range in function from hardware services, such as those for printers or fax machines, to software services, such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user could search for all available printers that support PostScript. Based on the given service type (printers) and the given attributes (PostScript), SLP searches the user’s network for any matching services and returns the discovered list to the user. 2.4.1 SLP architecture The Service Location Protocol (SLP) architecture includes three major components: a service agent, a user agent, and a directory agent. The service agent and user agent are required components in an SLP environment, whereas the SLP directory agent is optional. Following is a description of these components: Service agent (SA): a process working on behalf of one or more network services to broadcast the services. User agent (UA): a process working on behalf of the user to establish contact with some network service. The UA retrieves network service information from the service agents or directory agents. Directory agent (DA): a process that collects network service broadcasts.
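To make the protocol concrete, the sketch below builds an SLPv2 Service Request (SrvRqst) message as defined in RFC 2608. It only constructs the packet; sending it to the SLP multicast group (239.255.255.253, port 427) is omitted. The service type string is an example, and the flag handling is simplified.

```python
import struct

def _u24(n):
    # SLPv2 uses 3-byte (24-bit) length fields in its header
    return n.to_bytes(3, "big")

def build_service_request(service_type, scopes="DEFAULT", xid=1, lang="en"):
    """Build an SLPv2 SrvRqst packet (RFC 2608) -- simplified sketch."""
    lang_b = lang.encode("ascii")
    # SrvRqst body: <PRList> <service-type> <scope-list> <predicate> <SLP SPI>,
    # each encoded as a 2-byte length followed by the string itself.
    body = b""
    for field in (b"", service_type.encode("ascii"),
                  scopes.encode("ascii"), b"", b""):
        body += struct.pack(">H", len(field)) + field
    header_len = 14 + len(lang_b)
    flags = 0x2000  # REQUEST MCAST flag, set for multicast requests
    header = (bytes([2, 1])                   # version 2, function 1 = SrvRqst
              + _u24(header_len + len(body))  # total packet length
              + struct.pack(">H", flags)
              + _u24(0)                       # next-extension offset
              + struct.pack(">H", xid)        # transaction ID
              + struct.pack(">H", len(lang_b)) + lang_b)
    return header + body
```

A UA would send this datagram to the multicast group and collect unicast SrvRply messages from the SAs or DAs that match the requested service type.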
Note: The SLP directory agent is completely different and separate from the IBM Director Agent, which occupies the lowest tier in the IBM Director architecture. 2.4.2 SLP service agent The Service Location Protocol (SLP) service agent (SA) is a component of the SLP architecture that works on behalf of one or more network services to broadcast the availability of those services. The SA replies to external service requests using IP unicasts to provide the requested information about the registered services, if it is available. The SA can run in the same process or in a different process as the service itself. But in either case, the SA supports registration and de-registration requests for the service. The service registers itself with the SA during startup, and removes the registration for itself during shutdown. In addition, every service registration is associated with a life-span value, which specifies the time that the registration will be active. A service is required to reregister itself periodically, before the life-span of its previous registration expires. This ensures that expired registration entries are not kept. For instance, if
  • 49. a service becomes inactive without removing the registration for itself, that old registration will be removed automatically when its life-span expires. The maximum life-span of a registration is 65,535 seconds (about 18 hours). 2.4.3 SLP user agent The Service Location Protocol (SLP) user agent (UA) is a process working on behalf of the user to establish contact with some network service. The UA retrieves service information from the service agents or directory agents. The UA is a component of SLP that is closely associated with a client application or a user who is searching for the location of one or more services on the network. You can use the SLP UA by defining a service type that you want the SLP UA to locate. The SLP UA then retrieves a set of discovered services, including their service Uniform Resource Locator (URL) and any service attributes. You can then use the service’s URL to connect to the service. The SLP UA locates the registered services based on a general description of the services that the user or client application has specified. This description usually consists of a service type and any service attributes, which are matched against the service URLs registered in the SLP service agents. The SLP UA usually runs in the same process as the client application, although this is not required. The SLP UA processes find requests by sending out multicast messages to the network, targeting all SLP SAs within the multicast range with a single User Datagram Protocol (UDP) message. The SLP UA is, therefore, able to discover these SAs with a minimum of network overhead. When an SA receives a service request, it compares its own registered services with the requested service type and any service attributes, if specified, and returns matches to the UA using a unicast reply message. The SLP UA follows the multicast convergence algorithm, and sends out repeated multicast messages until no new replies are received.
The resulting set of discovered services, including their service URL and any service attributes, is returned to the client application or user. The client application or user is then responsible for contacting the individual services, as needed, using the service’s URL (see Figure 2-7 on page 35).
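The multicast convergence algorithm just described can be sketched as a simple loop: repeat the request, carrying the list of previous responders so that they stay silent, and stop when a round produces no new replies. The send_round callback stands in for the actual multicast I/O and is hypothetical.

```python
def converge(send_round, max_rounds=4):
    """Collect service URLs via repeated multicast rounds.

    send_round(prev_responders) models one multicast request; it returns
    a dict {responder: [service_urls]} from SAs not in prev_responders.
    """
    responders, services = set(), []
    for _ in range(max_rounds):
        replies = send_round(responders)
        if not replies:          # no new responders: convergence reached
            break
        for sa, urls in replies.items():
            responders.add(sa)
            services.extend(urls)
    return services
```

In the real protocol the previous-responder list travels inside the SrvRqst itself (the PRList field), which is what keeps already-heard SAs from answering again.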
  • 50. Figure 2-7 Service Location Protocol user agent An SLP UA is not required to discover all matching services that exist on the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets, which could be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs are able to recognize truncated service replies and establish TCP connections to retrieve all of the information of the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which could cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs. 2.4.4 SLP directory agent The Service Location Protocol (SLP) directory agent (DA) is an optional component of SLP that collects network service broadcasts. The DA is primarily used to simplify SLP administration and to improve SLP performance. The SLP DA can be thought of as an intermediate tier in the SLP architecture, placed between the user agents (UAs) and the service agents (SAs), such that both UAs and SAs communicate only with the DA instead of with each other. This eliminates a large portion of the multicast request or reply traffic on the network, and it protects the SAs from being overwhelmed by too many service requests if there are many UAs in the environment. Figure 2-8 on page 36 shows the interactions of the SLP UAs and SAs in an environment with SLP DAs.
  • 51. Figure 2-8 SLP UA, SA and DA interaction When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request and specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery, and it is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts up. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service Uniform Resource Locators (URLs) and attributes. Figure 2-9 on page 37 shows the interactions of UAs and SAs with DAs, during active DA discovery.
  • 52. Figure 2-9 Service Location Protocol DA functions The SLP DA functions very similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences, however, where DAs provide more functionality than SAs. One area, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA’s IP address. This allows SAs and UAs to perform active discovery on DAs. One other difference is that when a DA first initializes, it sends out a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that might already be active on the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When an SA finds a new DA through passive DA discovery, it sends registration requests for all of its currently registered services to that new DA. Figure 2-10 on page 38 shows the interactions of DAs with SAs and UAs, during passive DA discovery.
  • 53. Figure 2-10 Service Location Protocol passive DA discovery 2.4.5 Why use an SLP DA? The primary reason to use DAs is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades. By deploying one or more DAs, UAs send unicast service requests to DAs, and SAs register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery. SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UA’s scopes reduce multicast. By eliminating multicast for normal UA requests, delays and timeouts are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. In networks without multicast routing enabled, you can configure SLP to use broadcast. However, broadcast is very inefficient, because it requires each host to process the message. Broadcast also does not normally propagate across routers. As a result, in a network without multicast, DAs can be deployed on multihomed hosts to bridge SLP advertisements between the subnets. 2.4.6 When to use DAs Use DAs in your enterprise if any of the following conditions are true: Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop. UA clients experience long delays or timeouts during multicast service requests. You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts.
Your network does not have multicast enabled and consists of multiple subnets that must share services.
  • 54. 2.4.7 SLP configuration recommendation Some configuration recommendations are provided for enabling TotalStorage Productivity Center to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration, SLP directory agent configuration, and environment configuration. Router configuration Configure the routers in the network to enable general multicasting, or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation. SLP directory agent configuration Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center. One DA is sufficient for each such subnet. Each of these DAs can discover all services within its own subnet, but no other services outside its own subnet. To allow TotalStorage Productivity Center to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center Discovery Preference panel, as discussed in “Configuring SLP Directory Agent addresses” on page 41. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center is installed.
Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA. Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly. Environment configuration It might be advantageous to configure SLP DAs in the following environments: In environments where there are other non-TotalStorage Productivity Center SLP UAs that frequently perform discovery on the available services, an SLP DA should be configured. This ensures that the existing SAs are not overwhelmed by too many service requests. In environments where there are many SLP SAs, a DA helps decrease network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.
  • 55. 2.4.8 Setting up the Service Location Protocol Directory Agent You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center resides. Perform the following steps to set up the SLP DAs: 1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center to discover. 2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.) 3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Find the slp.conf file in the CIM Agent installation directory and perform the following steps to edit the file: – Make a backup copy of this file and name it slp.conf.bak. – Open the slp.conf file and scroll down until you find (or search for) the line ;net.slp.isDA = true. Remove the semicolon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false. Save the file. – Copy this file (or replace it if the file already exists) to the main Windows subdirectory for Windows machines (for example, C:\winnt), or to the /etc directory for UNIX machines. 4. Restart the daemon process and the CIMOM process for the CIM Agent. Refer to the CIM Agent documentation for your operating system and Chapter 4, “CIMOM installation and configuration” on page 119 for more details. Note: The CIMOM process might start automatically when you restart the SLP daemon. 5. You have now converted the SLP SA of the CIM Agent to run as an SLP DA.
The CIMOM is not affected and will register itself with the DA instead of the SA. In addition, the DA will automatically discover all other services registered with other SLP SAs in that subnet. 6. Go to the TotalStorage Productivity Center Discovery Preference settings panel (Figure 2-11 on page 41), and enter the host names or IP addresses of each of the machines that are running the SLP DA that was set up in the prior steps. Note: Enter only a simple host name or IP address; do not enter protocol and port number. Result: When a discovery task is started (either manually or scheduled), TotalStorage Productivity Center will discover all devices on the subnet on which TotalStorage Productivity Center resides, and it will discover all devices with affinity to the SLP DAs that were configured.
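The slp.conf change in step 3 amounts to uncommenting a single property. Assuming the file uses the usual semicolon comment convention, the relevant line looks like this before and after the edit:

```
; Before the edit (the property is commented out, so the process runs as an SA):
;net.slp.isDA = true

; After the edit (the same process now runs as a DA):
net.slp.isDA = true
```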
  • 56. 2.4.9 Configuring SLP Directory Agent addresses Perform this task to configure the addresses of the Service Location Protocol (SLP) Directory Agent (DA) for TotalStorage Productivity Center. TotalStorage Productivity Center uses the DA addresses during device discovery. When configured with DAs, the TotalStorage Productivity Center SLP User Agent (UA) sends service requests to each of the configured DA addresses in turn to discover the registered services for each. The UA also continues discovery of registered services by performing multicast service discovery. This additional action ensures that registered services are discovered when going from an environment without DAs to one with DAs. Note: If you have set up an SLP DA in the subnet that the TotalStorage Productivity Center server is in, you can register specific devices outside that subnet to be discovered and managed by TotalStorage Productivity Center. You do this by registering the CIM Agent to SLP. Refer to Chapter 4, “CIMOM installation and configuration” on page 119 for details. Perform the following steps to configure the addresses for the SLP directory agent: From the IBM Director menu bar, click Options. The Options menu is displayed. From the TotalStorage Productivity Center selections, click Discovery Preferences. The Discovery Preferences panel is displayed. Select the MDM SLP Configuration tab (see Figure 2-11). Figure 2-11 MDM SLP Configuration panel In the SLP Directory Agent Configuration section, type a valid Internet host name or an IP address (in dotted decimal format). Click Add. The host and scope information that you entered is displayed in the SLP Directory Agents Table. Click Change to change the host name or IP address for a selected item in the SLP Directory Agents Table.
– Click Remove to delete a selected item from the SLP Directory Agents Table.
– Click OK to add or change the directory agent information.
– Click Cancel to cancel adding or changing the directory agent information.

2.5 Productivity Center for Disk and Replication architecture

Figure 2-12 provides an overview of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication architecture. All of the components of TotalStorage Productivity Center are shown: Device Manager, TotalStorage Productivity Center for Disk, and TotalStorage Productivity Center for Replication. Keep in mind that TotalStorage Productivity Center for Replication and TotalStorage Productivity Center for Disk are separately orderable features of TotalStorage Productivity Center. The communication protocols and flow between supported devices, the TotalStorage Productivity Center server, and the console are also shown.

Figure 2-12 TotalStorage Productivity Center architecture overview (the TotalStorage Productivity Center console with the Device Manager, Replication Manager, and Performance Manager consoles on the IBM Director console; the TotalStorage Productivity Center WebSphere server with the Device Manager, Performance Manager, and Replication Manager co-servers, IBM Director Server, and IBM DB2 Workgroup Server; and the managed ESS, SVC, and FAStT devices reached over TCP/IP through their CIMOM/SLP agents and ICAT proxies)
Chapter 3. TotalStorage Productivity Center suite installation

The components of the IBM TotalStorage Productivity Center can be installed individually using the component install as shipped, or they can be installed using the Suite Installer shipped with the package. In this chapter we document the use of the Suite Installer. Hints and tips based on our experience are included.
3.1 Installing the IBM TotalStorage Productivity Center

IBM TotalStorage Productivity Center provides a suite installer that helps guide you through the installation process. You can also use the suite installer to install the components standalone. One advantage of the suite installer is that it will interrogate your system and install required prerequisites. The suite installer will install the following prerequisite products or components in this order:
– DB2 (required by all the managers)
– IBM Director (required by TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication)
– Tivoli Agent Manager (required by Fabric Manager and Data Manager)
– WebSphere Application Server (required by all the managers except TotalStorage Productivity Center for Data)

The suite installer will then guide you through the installation of the IBM TotalStorage Productivity Center components. You can select more than one installation option at a time, but in this book we focus on the Productivity Center for Disk and Productivity Center for Replication install. The types of installation tasks are:
– IBM TotalStorage Productivity Center Manager Installations
– IBM TotalStorage Productivity Center Agent Installations
– IBM TotalStorage Productivity Center GUI/Client Installations
– Language Pack Installations
– Uninstall IBM TotalStorage Productivity Center Products

Considerations
If you want the ESS, SAN Volume Controller, or FAStT storage subsystems to be managed using IBM TotalStorage Productivity Center for Disk, you must install the prerequisite I/O Subsystem Licensed Internal Code and CIM Agent for the devices. See Chapter 4, “CIMOM installation and configuration” on page 119 for more information. If you are installing the CIM agent for the ESS, you must install it on a separate machine from the Productivity Center for Disk and Productivity Center for Replication code.
Note that IBM TotalStorage Productivity Center does not support zLinux on S/390® and does not support Windows domains.

3.1.1 Configurations

The storage management components of IBM TotalStorage Productivity Center can be installed on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
– Windows 2000 Server with Service Pack 4
– Windows 2000 Advanced Server
– Windows 2003 Enterprise Server Edition

Note: Refer to the following Web sites for updated support summaries, including specific software, hardware, and firmware levels supported:
http://www.storage.ibm.com/software/index.html
http://www.ibm.com/software/support/
If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend you install IBM Tivoli Provisioning Manager on a separate Windows machine.

3.1.2 Installation prerequisites

This section lists the minimum prerequisites for installing TotalStorage Productivity Center.

Hardware
– Dual Pentium® 4 or Xeon™ 2.4 GHz or faster processors
– 4 GB of DRAM
– Network connectivity
– Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
– 80 GB available disk space

Database
The installation of DB2 Version 8.2 is part of the suite installer and is required by all the managers.

3.1.3 TCP/IP ports used by TotalStorage Productivity Center

This section provides an overview of the TCP/IP ports used by TotalStorage Productivity Center.

Productivity Center for Disk and Productivity Center for Replication
The IBM TotalStorage Productivity Center for Disk and IBM TotalStorage Productivity Center for Replication installation program will pre-configure the TCP/IP ports used by WebSphere®.

Table 3-1 TCP/IP ports for IBM TotalStorage Productivity Center for Disk and Replication
Port value | WebSphere port
2809 | Bootstrap port
9080 | HTTP Transport port
9443 | HTTPS Transport port
9090 | Administrative Console port
9043 | Administrative Console Secure Server port
5559 | JMS Server Direct Address port
5557 | JMS Server Security port
5558 | JMS Server Queued Address port
8980 | SOAP Connector Address port
7873 | DRS Client Address port

TCP/IP ports used by agent manager
The Agent Manager uses these TCP/IP ports.
Table 3-2 TCP/IP ports for agent manager
Port value | Usage
9511 | Registering agents and resource managers
9512 | Providing configuration updates; renewing and revoking certificates; querying the registry for agent information; requesting ID resets
9513 | Requesting updates to the certificate revocation list; requesting agent manager information; downloading the truststore file
80 | Agent recovery service

TCP/IP ports used by IBM TotalStorage Productivity Center for Fabric
The Fabric Manager uses these default TCP/IP ports.

Table 3-3 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric
Port value | Usage
8080 | NetView Remote Web console
9550 | HTTP port
9551 | Reserved
9552 | Reserved
9553 | Cloudscape™ server port
9554 | NVDAEMON port
9555 | NVREQUESTER port
9556 | SNMPTrapPort (port on which to get events forwarded from Tivoli NetView)
9557 | Reserved
9558 | Reserved
9559 | Tivoli NetView Pager daemon
9560 | Tivoli NetView Object Database daemon
9561 | Tivoli NetView Topology Manager daemon
9562 | Tivoli NetView Topology Manager socket
9563 | Tivoli General Topology Manager
9564 | Tivoli NetView OVs_PMD request services
9565 | Tivoli NetView OVs_PMD management services
9566 | Tivoli NetView trapd socket
9567 | Tivoli NetView PMD service
9568 | Tivoli NetView General Topology map service
9569 | Tivoli NetView Object Database event socket
Port value | Usage
9570 | Tivoli NetView Object Collection facility socket
9571 | Tivoli NetView Web server socket
9572 | Tivoli NetView SnmpServer

Fabric Manager remote console TCP/IP default ports

Table 3-4 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric remote console
Port value | Usage
9560 | HTTP port
9561 | Reserved
9561 | Reserved
9562 | Tomcat’s Local Server port
9563 | Tomcat’s warp port
9564 | NVDAEMON port
9565 | NVREQUESTER port
9569 | Tivoli NetView Pager daemon
9570 | Tivoli NetView Object Database daemon
9571 | Tivoli NetView Topology Manager daemon
9572 | Tivoli NetView Topology Manager socket
9573 | Tivoli General Topology Manager
9574 | Tivoli NetView OVs_PMD request services
9575 | Tivoli NetView OVs_PMD management services
9576 | Tivoli NetView trapd socket
9577 | Tivoli NetView PMD service
9578 | Tivoli NetView General Topology map service
9579 | Tivoli NetView Object Database event socket
9580 | Tivoli NetView Object Collection facility socket
9581 | Tivoli NetView Web server socket
9582 | Tivoli NetView SnmpServer

Fabric agents TCP/IP ports

Table 3-5 TCP/IP ports for IBM TotalStorage Productivity Center for Fabric agents
Port value | Usage
9510 | Common agent
9514 | Used to restart the agent
9515 | Used to restart the agent
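Before installing, it can be useful to confirm that none of the ports in the tables above are already claimed by another application. The following is a hedged sketch of such a pre-install check, not a tool shipped with the product; the port list is taken from Table 3-1.

```python
import socket

# Default WebSphere ports pre-configured by the Disk/Replication
# installer (Table 3-1); any of these already in use would conflict.
WEBSPHERE_PORTS = [2809, 9080, 9443, 9090, 9043, 5559, 5557, 5558, 8980, 7873]

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        # connect_ex returns 0 only when a listener accepts the connection
        return s.connect_ex((host, port)) != 0

conflicts = [p for p in WEBSPHERE_PORTS if not port_is_free(p)]
print("port conflicts:", conflicts or "none")
```

The same scan can be pointed at the agent manager and Fabric Manager port ranges from Tables 3-2 through 3-5 by substituting the port list.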
3.1.4 Default databases created during install

During the installation of IBM TotalStorage Productivity Center we recommend that you use DB2 as the preferred database type. Table 3-6 lists the default databases that the installer will create during the installation.

Table 3-6 Default DB2 databases
Application | Default database name (DB2)
IBM Director | No default (we created database DIRECTOR)
Tivoli Agent Manager | IBMCDB
IBM TotalStorage Productivity Center for Disk and Replication Base | DMCOSERV
IBM TotalStorage Productivity Center for Disk | PMDATA
IBM TotalStorage Productivity Center for Replication hardware subcomponent | ESSHWL
IBM TotalStorage Productivity Center for Replication element catalog | ELEMCAT
IBM TotalStorage Productivity Center for Replication, Replication Manager | REPMGR
IBM TotalStorage Productivity Center for Fabric | ITSANMDB

3.2 Pre-installation check list

The following is a list of the tasks you need to complete in preparation for the install of the IBM TotalStorage Productivity Center. You should print the tables in Appendix B, “Worksheets” on page 505 to keep track of the information you will need during the install (for example, user names, ports, IP addresses, and locations of servers and managed devices).
1. Determine which elements of the TotalStorage Productivity Center you will be installing.
2. Uninstall Internet Information Services.
3. Grant the user account that will be used to install the TotalStorage Productivity Center the following privileges:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process-level token
– Log on as a service
4. Install and configure SNMP (Fabric requirement).
5. Identify any firewalls and obtain required authorization.
6.
Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers.

3.2.1 User IDs and security

This section lists and explains the user IDs used in an IBM TotalStorage Productivity Center environment during the installation, and also those that are later used to manage and work with TotalStorage Productivity Center. For some of the IDs, Table 3-8 on page 49 includes a link to further information that is available in the manuals.
Suite Installer user
We recommend you use the Windows Administrator or a dedicated user for the installation of TotalStorage Productivity Center. That user ID should have the user rights shown in Table 3-7.

Table 3-7 Requirements for the Suite Installer user
User rights/Policy | Used for
Act as part of the operating system | DB2, Productivity Center for Disk, Fabric Manager
Create a token object | DB2, Productivity Center for Disk
Increase quotas | DB2, Productivity Center for Disk
Replace a process-level token | DB2, Productivity Center for Disk
Log on as a service | DB2
Debug programs | Productivity Center for Disk

Table 3-8 shows the user IDs used in our TotalStorage Productivity Center environment.

Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment
Element | User ID | New user | Type | Group(s) | Usage
Suite Installer | Administrator | no | | |
DB2 | db2admin (a) | yes, will be created | Windows | | DB2 management and Windows Service Account
IBM Director (see also below) | Administrator (a) | no | Windows | DirAdmin or DirSuper | Windows Service Account
Resource Manager | manager (b) | no, default user | Tivoli Agent Manager user | n/a - internal | used during the registration of a Resource Manager to the Agent Manager
Common Agent (see also below) | AgentMgr (b) | no | Tivoli Agent Manager user | n/a - internal | used to authenticate agents and lock the certificate key files
Common Agent | itcauser (b) | yes, will be created | Windows | Windows | Windows Service Account
TotalStorage Productivity Center universal user | TPCSUID (a) | yes, will be created | Windows | DirAdmin | This ID is used to accomplish connectivity with the managed devices, i.e., this ID has to be set up on the CIM agents
Tivoli NetView (c) | | | Windows | | see “Fabric Manager User IDs” on page 51
IBM WebSphere (c) | | | Windows | | see “Fabric Manager User IDs” on page 51
Host Authentication (c) | | | Windows | | see “Fabric Manager User IDs” on page 51

a. This account can have whatever name you like.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here; see “Fabric Manager User IDs” on page 51.

Granting privileges
Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and the IBM TotalStorage Productivity Center for Replication. It is recommended that this user ID be the superuser ID. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They might not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can, optionally, set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, select the following path and select the appropriate user:
– Click Start → Settings → Control Panel.
– Double-click Administrative Tools.
– Double-click Local Security Policy; the Local Security Settings window opens.
– Expand Local Policies.
– Double-click User Rights Assignments to see the policies in effect on your system.
For each policy added to the user, perform the following steps:
– Highlight the policy to be checked.
– Double-click the policy and look for the user’s name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting. Ensure that the Local Policy Setting and the Effective Policy Setting options are checked.
– If the user name does not appear in the list for the policy, you must add the policy to the user.
Perform the following steps to add the user to the list:
a) Click Add on the Local Security Policy Setting window.
b) In the Select Users or Groups window, highlight the user or group under the Name column.
c) Click Add to put the name in the lower window.
d) Click OK to add the policy to the user or group.

After these user rights are set (either by the installation program or manually), log off the system, and then log on again in order for the user rights to take effect. You can then restart the installation program to continue with the install of the IBM TotalStorage Productivity Center for Disk and Replication Base.

IBM Director
With Version 4.1, you no longer need to create “internal” user accounts. All user IDs must be operating system accounts and members of one of the following:
– DirAdmin or DirSuper groups (Windows), diradmin or dirsuper groups (Linux)
– Administrator or Domain Administrator groups (Windows), root (Linux)
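The user rights listed under “Granting privileges” above correspond to well-known Windows privilege constants, which `whoami /priv` reports for the logged-on user. The sketch below is a hedged illustration of checking such a listing; the helper name and sample output are ours, not from a real system. (Log on as a service maps to the logon right SeServiceLogonRight, which `whoami /priv` does not list, so it is checked separately through the Local Security Policy.)

```python
# Friendly policy names from Table 3-7 mapped to their Windows
# privilege constants.
REQUIRED_RIGHTS = {
    "Act as part of the operating system": "SeTcbPrivilege",
    "Create a token object": "SeCreateTokenPrivilege",
    "Increase quotas": "SeIncreaseQuotaPrivilege",
    "Replace a process-level token": "SeAssignPrimaryTokenPrivilege",
    "Debug programs": "SeDebugPrivilege",
}

def missing_rights(whoami_priv_output: str) -> list:
    """Return the friendly names of required rights absent from a
    captured `whoami /priv` listing."""
    held = set(whoami_priv_output.split())
    return [name for name, const in REQUIRED_RIGHTS.items() if const not in held]

# Illustrative (not real) output with only two of the rights present:
sample = "SeTcbPrivilege Enabled\nSeDebugPrivilege Disabled\n"
print(missing_rights(sample))
```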
In addition to the above, there is a host authentication password that is used to allow managed hosts and remote consoles to communicate with IBM Director.

TotalStorage Productivity Center superuser ID
The account used to accomplish connectivity with managed devices has to be part of the DirAdmin (Windows) or diradmin (Linux) group. Do not be confused by the name; it is really only a communication user ID.

Fabric Manager User IDs
During the installation of IBM TotalStorage Productivity Center for Fabric you can select whether you want to use individual passwords for the subcomponents such as DB2, IBM WebSphere, and NetView, and for the Host Authentication. You can also choose to use the DB2 administrator user ID and password to make the configuration much simpler. Figure 3-97 on page 113 shows the window where you can choose the options.

3.2.2 Certificates and key files

Within a TotalStorage Productivity Center environment, several applications use certificates to ensure security: Productivity Center for Disk, Productivity Center for Replication, and Tivoli Agent Manager.

Productivity Center for Disk and Replication certificates
The WebSphere Application Server that is part of Productivity Center for Disk and Productivity Center for Replication uses certificates for SSL communication. During the installation, key files can be generated as self-signed certificates, but you will have to enter a password for each file to lock it. The default file names are:
MDMServerKeyFile.jks
MDMServerTrustFile.jks
The default directory for these key files is:
C:\Program Files\IBM\mdm\dmkeys

Tivoli Agent Manager certificates
The Agent Manager comes with demonstration certificates that you can use, but you can also create new certificates during the installation of the agent manager (see Figure 3-49 on page 83).
If you choose to create new files, the password that you entered on the panel shown in Figure 3-50 on page 84 as the Agent registration password will be used to lock the key file:
agentTrust.jks
The default directory for that key file on the agent manager is:
C:\Program Files\IBM\AgentManager\certs
There are more key files in that directory, but during the installation and first steps the agentTrust.jks file is the most important one. This matters only if you let the installer create your own keys.
3.3 Services and service accounts

The managers and components that belong to the TotalStorage Productivity Center are started as Windows services. Table 3-9 provides an overview of the most important services. Note that we did not include all the DB2 services in the table, to keep it simple.

Table 3-9 Services and service accounts
Element | Service name | Service account | Comment
DB2 | | db2admin | The account needs to be part of: Administrators and DB2ADMNS
IBM Director | IBM Director Server | Administrator | You need to modify the account to be part of one of the groups: DirAdmin or DirSuper
Agent Manager | IBM WebSphere Application Server V5 - Tivoli Agent Manager | LocalSystem | You need to set this service to start automatically, after the installation
Common Agent | IBM Tivoli Common Agent - C:\Program Files\tivoli\ep | itcauser |
Productivity Center for Fabric | IBM WebSphere Application Server V5 - Fabric Manager | LocalSystem |
Tivoli NetView | Tivoli NetView Service | NetView Service |

3.3.1 Starting and stopping the managers

To start, stop, or restart one of the managers or components, you simply use the Windows Control Panel. Table 3-10 is a list of the services.

Table 3-10 Services used for TotalStorage Productivity Center
Element | Service name | Service account
DB2 | | db2admin
IBM Director | IBM Director Server | Administrator
Agent Manager | IBM WebSphere Application Server V5 - Tivoli Agent Manager | LocalSystem
Common Agent | IBM Tivoli Common Agent - C:\Program Files\tivoli\ep | itcauser
Productivity Center for Fabric | IBM WebSphere Application Server V5 - Fabric Manager | LocalSystem
Tivoli NetView | Tivoli NetView Service | NetView Service

3.3.2 Uninstall Internet Information Services

Make sure Internet Information Services (IIS) is not installed on the server. If it is installed, uninstall it using the following procedure:
– Click Start → Settings → Control Panel
– Click Add/Remove Programs
– Click Add/Remove Windows Components
– Clear the tick box for Internet Information Services (IIS)

3.3.3 SNMP install

Before installing the components of the TotalStorage Productivity Center, you should install and configure Simple Network Management Protocol (SNMP):
– Click Start → Settings → Control Panel
– Click Add/Remove Programs
– Click Add/Remove Windows Components
– Double-click Management and Monitoring Tools
– Click Simple Network Management Protocol
– Click OK
Close the panels and accept the installation of the components; the Windows installation CD or installation files will be required.

Make sure that the SNMP service is configured. This can be done as follows:
– Right-click My Computer
– Click Manage
– Click Services
An alternative method follows:
– Click Start → Run...
– Type in MMC (Microsoft® Management Console) and click OK.
– Click Console → Add/Remove Snap-in...
– Click Add and add Services. Select the services and scroll down to SNMP Service as shown in Figure 3-1 on page 54.
– Double-click SNMP Service.
– Click the Traps panel tab.
– Make sure that the public community name is available; if not, add it.
– Make sure that on the Security tab, Accept SNMP packets from any host is checked.
Figure 3-1 SNMP Security

After setting the public community name, restart the SNMP service.

3.4 IBM TotalStorage Productivity Center for Fabric

The primary focus of this book is the install and use of the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. We have included the IBM TotalStorage Productivity Center for Fabric for completeness, since it is used with the Productivity Center for Disk. There are planning considerations and prerequisite tasks that need to be completed.

3.4.1 The computer name

IBM TotalStorage Productivity Center for Fabric requires fully qualified host names for the manager, managed hosts, and the remote console. To verify your computer name on Windows, follow the procedure below:
– Right-click the My Computer icon on your desktop.
– Click Properties. The System Properties panel is displayed.
– Click the Network Identification tab.
– Click Properties. The Identification Changes panel is displayed.
– Verify that your computer name is entered correctly. This is the name that the computer will be identified as in the network. Also verify that the Full computer name is a fully qualified host name. For example, user1.sanjose.ibm.com is a fully qualified host name.
– Click More.
– The DNS Suffix and NetBIOS Computer Name panel is displayed. Verify that the Primary DNS suffix field displays a domain name.
The fully qualified host name must match the HOSTS file name (including case-sensitive characters).

3.4.2 Database considerations

When you install IBM TotalStorage Productivity Center for Fabric, a DB2 database is automatically created (if you specified the DB2 database). The default database name is TSANMDB. If you installed IBM TotalStorage Productivity Center for Fabric previously, are using a DB2 database, and want to save the information in the database before reinstalling the manager, you must use DB2 commands to back up the database. The default name for the IBM TotalStorage Productivity Center for Fabric DB2 database is TSANMDB. The database name for Cloudscape is TSANMDB; you cannot change the database name for Cloudscape.

If you are installing the manager on more than one machine in a Windows domain, the managers on different machines might end up sharing the same DB2 database. To avoid this situation, you must either use different database names or different DB2 user names when installing the manager on different machines.

3.4.3 Windows Terminal Services

You cannot use the Windows Terminal Services to access a machine that is running the IBM TotalStorage Productivity Center for Fabric console (either the manager or remote console machine). Any IBM TotalStorage Productivity Center for Fabric dialogs launched from the SAN menu in Tivoli NetView will appear on the manager or remote console machine only. The dialogs will not appear in the Windows Terminal Services session.

3.4.4 Tivoli NetView

IBM TotalStorage Productivity Center for Fabric also installs Tivoli NetView 7.1.3. If you already have Tivoli NetView 7.1.1 installed, IBM TotalStorage Productivity Center for Fabric upgrades it to Version 7.1.3.
If you have a Tivoli NetView release below Version 7.1.1, IBM TotalStorage Productivity Center for Fabric will prompt you to uninstall Tivoli NetView before installing this product. If you have Tivoli NetView 7.1.3 installed, ensure that the following applications are stopped. You can check for Tivoli NetView by opening the Tivoli NetView console icon on your desktop.
– Web Console
– Web Console Security
– MIB Loader
– MIB Browser
– Netmon Seed Editor
– Tivoli Event Console Adaptor

Important: Also ensure that you do not have the Windows 2000 Terminal Services running. Go to the Services panel and check for Terminal Services.
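The fully qualified host name requirement from 3.4.1 can also be checked programmatically before installing the Fabric Manager. This is a hedged sketch under our own naming; it simply tests whether a name carries a domain suffix, which is the property the examples above (such as user1.sanjose.ibm.com) illustrate.

```python
import socket

def is_fully_qualified(name: str) -> bool:
    """A fully qualified host name has at least one dot separating the
    host label from a domain suffix, e.g. user1.sanjose.ibm.com."""
    labels = name.rstrip(".").split(".")
    return len(labels) >= 2 and all(labels)

# On the manager machine, the locally resolved name can be checked:
fqdn = socket.getfqdn()
print(fqdn, "->", "fully qualified" if is_fully_qualified(fqdn) else "NOT fully qualified")
```

A failing check usually means the Primary DNS suffix described in 3.4.1 is missing.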
User IDs and password considerations
IBM TotalStorage Productivity Center for Fabric only supports local user IDs and groups. IBM TotalStorage Productivity Center for Fabric does not support domain user IDs and groups.

Cloudscape database
If you install IBM TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you will need the following user IDs and passwords:
– Agent manager name or IP address and password
– Common agent password to register with the agent manager
– Resource manager user ID and password to register with the agent manager
– WebSphere administrative user ID and password
– Host authentication password
– Tivoli NetView password

DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you will need the user IDs and passwords listed below:
– Agent manager name or IP address and password
– Common agent password to register with the agent manager
– Resource manager user ID and password to register with the agent manager
– DB2 administrator user ID and password
– DB2 user ID and password
– WebSphere administrative user ID and password
– Host authentication password
– Tivoli NetView password

Note: If you are running under Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the Act as part of the operating system user privilege.

WebSphere
To change the WebSphere user ID and password, follow this procedure:
– Open the file: <install_location>\apps\was\properties\soap.client.props
– Modify the following entries:
com.ibm.SOAP.loginUserid=<user_ID> (enter a value for user_ID)
com.ibm.SOAP.loginPassword=<password> (enter a value for password)
– Save the file.
– Run the following script:
ChangeWASAdminPass.bat <user_ID> <password> <install_dir>
where <user_ID> is the WebSphere user ID and <password> is the password.
<install_dir> is the directory where the manager is installed and is optional. For example, <install_dir> is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.

3.4.5 Personal firewall

If you have a software firewall on your system, you should disable the firewall while installing the Fabric Manager. The firewall causes the Tivoli NetView installation to fail. You can enable the firewall after you install the Fabric Manager.
Security considerations
Setting up security by using the demonstration certificates or by generating new certificates was an option that you specified when you installed the agent manager, as shown in Figure 3-49 on page 83. If you used the demonstration certificates, carry on with the installation. If you generated new certificates, follow this procedure:
– Copy the manager CD image to your computer.
– Copy the agentTrust.jks file from the agent manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This will overwrite the existing agentTrust.jks file.
– You can write a new CD image with the new file, or keep this image on your computer and point the suite installer to the directory when requested.

3.4.6 Change the HOSTS file

When you install Service Pack 3 for Windows 2000 on your computers, you must follow these steps to avoid addressing problems with IBM TotalStorage Productivity Center for Fabric. The problem is caused by the address resolution protocol, which returns the short name (not the fully qualified host name). This problem can be avoided by changing the entries in the corresponding host tables on the DNS server and on the local computer. The fully qualified host name must be listed before the short name, as shown in Example 3-1. See “The computer name” on page 54 for details on determining the host name. To correct this problem you will have to edit the HOSTS file. The HOSTS file is in the following directory:
%SystemRoot%\system32\drivers\etc

Example 3-1 Sample HOSTS file
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a # symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

127.0.0.1       localhost
#
192.168.123.146 jason.groupa.mycompany.com jason
192.168.123.146 jason jason.groupa.mycompany.com

Note: Host names are case-sensitive. This is a WebSphere limitation. Check your host name.
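The ordering rule above (the fully qualified name must precede the short name on a HOSTS entry) can be verified with a small sketch. The helper name and the sample content are illustrative, assuming the entry layout shown in Example 3-1.

```python
def bad_hosts_lines(hosts_text: str) -> list:
    """Return HOSTS lines where a short name precedes a fully qualified
    name for the same address, which triggers the short-name resolution
    problem described above."""
    bad = []
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()  # drop trailing comments
        if len(fields) < 3:
            continue  # address plus fewer than two names: nothing to order
        names = fields[1:]
        # the first name after the address should be the dotted (FQDN) one
        if "." not in names[0] and any("." in n for n in names[1:]):
            bad.append(line)
    return bad

sample = """\
127.0.0.1 localhost
192.168.123.146 jason.groupa.mycompany.com jason
192.168.123.146 jason jason.groupa.mycompany.com
"""
print(bad_hosts_lines(sample))
```

Running this against `%SystemRoot%\system32\drivers\etc\hosts` before installing the Fabric Manager flags entries that would need reordering.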
3.5 Installation process

Depending on which managers you plan to install, these are the prerequisite programs that are installed first. The suite installer will install these prerequisite programs in this order:
– DB2
– WebSphere Application Server
– IBM Director
– Tivoli Agent Manager
The suite installer then launches the installation wizard for each manager you have chosen to install. If you are running the Fabric Manager install under Windows 2000, the Fabric Manager installation requires that the user ID have the Act as part of the operating system and Log on as a service user rights.

Insert the IBM TotalStorage Productivity Center suite installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the IBM TotalStorage Productivity Center CD-ROM drive, and double-click setup.exe.

Note: It may take a few moments for the installer program to initialize. Be patient until the language selection panel in Figure 3-2 appears.

The language panel is displayed. Select a language from the drop-down list. This is the language that is used for installing this product. Click OK as shown in Figure 3-2.

Figure 3-2 Installer Wizard

The Welcome to the InstallShield Wizard for the IBM TotalStorage Productivity Center panel is displayed. Click Next as shown in Figure 3-3 on page 59.
  • 74. Figure 3-3 Welcome to IBM TotalStorage Productivity Center panel
The Software License Agreement panel is displayed. Read the terms of the license agreement. If you agree with the terms:
– Select the I accept the terms of the license agreement radio button.
– Click Next to continue as shown in Figure 3-4.
If you do not accept the terms of the license agreement, the installation program will end without installing IBM TotalStorage Productivity Center.

Figure 3-4 License agreement
  • 75. The Select Type of Installation panel is displayed. Select Manager installations of Data, Disk, Fabric, and Replication and click Next to continue as shown in Figure 3-5.

Figure 3-5 IBM TotalStorage Productivity Center options panel

The Select the Components panel is displayed. Select the components you want to install. Click Next to continue as shown in Figure 3-6.

Figure 3-6 IBM TotalStorage Productivity Center components

WinMgmt is a Windows service that needs to be stopped before proceeding with the install. If the service is running you will see the panel in Figure 3-7 on page 61. Click Next to stop the service.
  • 76. Figure 3-7 WinMgmt information window
The window in Figure 3-8 will open. Click Next once again to stop WinMgmt.

Note: You should stop this service prior to beginning the install of TotalStorage Productivity Center to prevent these windows from appearing.

Figure 3-8 Services information

The Prerequisite Software panel is displayed. The products will be installed in the order listed. Click Next to continue as shown in Figure 3-9 on page 62. In this example, the first prerequisites to be installed are DB2 and WebSphere.
  • 77. Note: The installer will interrogate the server to determine what prerequisites are installed on the server and list what remains to be installed.

Figure 3-9 Prerequisite installation

3.5.1 Prerequisite product install: DB2 and WebSphere
The DB2 Installation Information panel is displayed. The products will be installed in the order shown in Figure 3-10 on page 63. From the DB2 installation information panel click Next to continue.

Note: If DB2 is already installed on the server the installer will skip the DB2 install.
  • 78. Figure 3-10 Products to be installed
The DB2 User ID and Password panel is displayed. Accept the default user name or enter a new user ID and password. Click Next to continue as shown in Figure 3-11.

Figure 3-11 DB2 User configuration

The Confirm Target Directories for DB2 panel is displayed. Accept the default directory or enter a target directory. Click Next to continue as shown in Figure 3-12 on page 64.
  • 79. Figure 3-12 DB2 Target Directory
You will be prompted for the location of the DB2 installation image. Browse to the installation image or installer CD, select the required information, and click Install as shown in Figure 3-13.

Figure 3-13 Installation source

Note: If you use the DB2 CD for this step, the Welcome to DB2 panel is displayed. Click Exit to exit the DB2 installation wizard. The suite installer will guide you through the DB2 installation.

The Installing Prerequisites (DB2) panel is displayed with the word Installing on the right side of the panel. When the component is installed a green arrow appears next to the component name (see Figure 3-14 on page 65). Wait for all the prerequisite programs to install. Click Next.

Note: Depending on the speed of your machine, this can take 30–40 minutes.
  • 80. Figure 3-14 Installing Prerequisites window - DB2 installing
After DB2 has installed, a green check mark will appear next to the text DB2 Universal Database™ Enterprise Server Edition. The installer will start the install of WebSphere as shown in Figure 3-15.

Figure 3-15 Installing Prerequisites window - WebSphere installing

After WebSphere has installed, a green check mark will appear next to the text WebSphere Application Server. The installer will start the install of the WebSphere Fixpack as shown in Figure 3-16 on page 66.
  • 81. Figure 3-16 Installing Prerequisites window - WebSphere Fixpack installing
After the WebSphere Fixpack has installed, a green check mark will appear next to it as shown in Figure 3-17.

Figure 3-17 Installing Prerequisites window - WebSphere Fixpack installed

After DB2, WebSphere, and the WebSphere Fixpack are installed, the DB2 Server installation was successful window opens (see Figure 3-18 on page 67). Click Next to continue.
  • 82. Figure 3-18 DB2 installation successful
The WebSphere Application Server installation was successful window opens (see Figure 3-19). Click Next to continue.

Figure 3-19 WebSphere Application Server installation was successful

3.5.2 Installing IBM Director
The suite installer will present you with the panel showing the remaining products to be installed. The next prerequisite product to be installed is IBM Director (see Figure 3-20 on page 68).
  • 83. Figure 3-20 Installer prerequisite products panel
The location of the IBM Director install package panel is displayed. Enter the installation source, or insert the CD-ROM and enter the CD drive location. Click Next as shown in Figure 3-21.

Figure 3-21 IBM Director Installation source

The next panel provides information about the IBM Director post-install reboot option. Note that you should choose the option to reboot later when prompted (see Figure 3-22 on page 69). Click Next to continue.
  • 84. Figure 3-22 IBM Director information
The IBM Director Server - InstallShield Wizard panel is displayed indicating that the IBM Director installation wizard will be launched. Click Next to continue (see Figure 3-23).

Figure 3-23 IBM Director InstallShield Wizard

The License Agreement window opens next. Read the license agreement. Select the I accept the terms in the license agreement radio button as shown in Figure 3-24 on page 70. Click Next to continue.
  • 85. Figure 3-24 IBM Director license agreement
The next window advertises the Enhance IBM Director with the new Server Plus Pack option (see Figure 3-25). Click Next to continue.

Figure 3-25 IBM Director information

The Feature and installation directory window opens (see Figure 3-26 on page 71). Accept the default settings and click Next to continue.
  • 86. Figure 3-26 IBM Director feature and installation directory window
The IBM Director service account information window opens (see Figure 3-27). Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, type the local host name (this is the recommended setup). Type a user name and password for IBM Director. IBM Director will run under this user name, and you will log on to the IBM Director console using this user name. Click Next to continue.

Figure 3-27 Account information

The Encryption settings window opens as shown in Figure 3-28 on page 72. Accept the default settings in the Encryption settings window. Click Next to continue.
  • 87. Figure 3-28 Encryption settings
In the Software Distribution settings window, accept the default values and click Next as shown in Figure 3-29.

Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director.

Figure 3-29 Install target directory

The Ready to Install the Program window opens (see Figure 3-30 on page 73). Click Install to continue.
  • 88. Figure 3-30 Installation ready
The Installing IBM Director server window reports the status of the installation as shown in Figure 3-31.

Figure 3-31 Installation progress

The Network driver configuration window opens. Accept the default settings and click OK to continue.
  • 89. Figure 3-32 Network driver configuration
The secondary window closes and the installation wizard performs additional actions, which are tracked in the status window. The Select the database to be configured window opens (see Figure 3-33). Select IBM DB2 Universal Database in the Select the database to be configured window. Click Next to continue.

Figure 3-33 Database selection

The IBM Director DB2 Universal Database configuration window will open (see Figure 3-34). It might be behind the status window, and you must click it to bring it to the foreground.
  • 90. In the Database name field, type a new database name for the IBM Director database table or type an existing database name. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation. Click Next to continue.

Figure 3-34 Database selection configuration

Accept the default DB2 node name LOCAL - DB2 in the IBM Director DB2 Universal Database configuration secondary window as shown in Figure 3-35. Click OK to continue.

Figure 3-35 Database node name selection

The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close. Click Finish as shown in Figure 3-36 on page 76 when the InstallShield Wizard Completed window opens.
  • 91. Figure 3-36 Completed installation
Important: Do not reboot the machine at the end of the IBM Director installation. The suite installer will reboot the machine. Click No as shown in Figure 3-37.

Figure 3-37 IBM Director reboot option

Click Next to reboot the machine as shown in Figure 3-38 on page 77.

Important: If the server does not reboot at this point, cancel the installer and reboot the server.
  • 92. Figure 3-38 Install wizard completion
After rebooting the machine the installer will initialize. The Select the installation language to be used for this wizard window opens. Select the language and click OK to continue (see Figure 3-39).

Figure 3-39 IBM TotalStorage Productivity Center installation wizard language selection

The installation confirmation panel is displayed. Click Next as shown in Figure 3-40 on page 78.

3.5.3 Tivoli Agent Manager
The next product to be installed is the Tivoli Agent Manager (see Figure 3-40 on page 78). The Tivoli Agent Manager is required if you are installing the Productivity Center for Fabric or the Productivity Center for Data. It is not required for the Productivity Center for Disk or the Productivity Center for Replication. Click Next to continue.
  • 93. Figure 3-40 IBM TotalStorage Productivity Center installation information
The Package Location panel is displayed (see Figure 3-41). Select the installation source or CD-ROM drive and click Next.

Note: If you specify the path for the installation source you must specify the path at the win directory level.

Figure 3-41 Tivoli Agent Manager installation source

The Tivoli Agent Manager Installer window opens (see Figure 3-42 on page 79). Click Next to continue.
  • 94. Figure 3-42 Tivoli Agent Manager installer launch window
The InstallShield wizard will start. Then you see the language installation option window in Figure 3-43. Select the required language and click OK.

Figure 3-43 Tivoli Agent Manager installation wizard

The Software License Agreement window opens. Click I accept the terms of the license agreement to continue.
  • 95. Figure 3-44 Tivoli Agent Manager License agreement
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-45.

Figure 3-45 Tivoli Agent Manager prerequisite source directory
  • 96. The DB2 information panel is displayed (see Figure 3-46). If you do not want to accept the defaults, enter the:
– DB2 User Name
– DB2 Port
Enter the DB2 Password and click Next to continue.

Figure 3-46 DB2 User information

The WebSphere Application Server Information panel is displayed. This panel lets you specify the host name or IP address, and the cell and node names on which to install the agent manager. If you specify a host name, use the fully qualified host name, for example, x330f03.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all agent manager services.

Typically the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the agent manager installation wizard, you can look up the cell and node name values in the %WebSphere Application Server_INSTALL_ROOT%\bin\SetupCmdLine.bat file.

You can also specify the ports used by the agent manager:
– Registration (the default is 9511 for server–side SSL)
– Secure communications (the default is 9512 for client authentication, two–way SSL)
– Public communication (the default is 9513)

If you are using WebSphere network deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. Click Next as shown in Figure 3-47 on page 82.
  • 97. Figure 3-47 WebSphere Application Server information

Figure 3-48 WebSphere Application Server information

The Security Certificates panel is displayed in Figure 3-49 on page 83. Specify whether to create new certificates or to use the demonstration certificates. In a typical production
  • 98. environment, create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next to continue.

Figure 3-49 Tivoli Agent Manager security certificates

The security certificate settings panel is displayed. Specify the certificate authority name, security domain, and agent registration password. The agent registration password is the password used to register the agents. You must provide this password when you install the agents. This password is also used for the agent manager key store and trust store files.

The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the agent manager. It is the name of the security domain defined by the agent manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple agent managers installed, this value must be different on each agent manager.

The default agent registration password is changeMe. Click Next as shown in Figure 3-50 on page 84.
  • 99. Figure 3-50 Security certificate settings
The Preview Prerequisite Software Information panel is displayed. Click Next as shown in Figure 3-51.

Figure 3-51 Prerequisite reuse information

The Summary Information for Agent Manager panel is displayed. Click Next as shown in Figure 3-52 on page 85.
  • 100. Figure 3-52 Installation summary
The Installation of Agent Manager Completed panel is displayed. Click Finish as shown in Figure 3-53.

Figure 3-53 Completion summary

The Installation of Agent Manager Successful panel is displayed. Click Next to continue.
  • 101. Important: There are three configuration tasks left to do:
– Start the Agent Manager Service.
– Set the service to start automatically.
– Add a DNS entry for the Agent Recovery Service with the unqualified host name TivoliAgentRecovery and port 80.

Tip: The database created for the IBM Agent Manager is IBMCDB.

3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
There are three separate installs:
– Install the IBM TotalStorage Productivity Center for Disk and Replication Base code
– Install the IBM TotalStorage Productivity Center for Disk
– Install the IBM TotalStorage Productivity Center for Replication

IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where the IBM TotalStorage Productivity Center for Disk and Replication Base will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation as described in “User IDs and security” on page 48:
– Act as part of the operating system
– Create a token object
– Increase quotas
– Replace a process level token
– Debug programs

Figure 3-54 IBM TotalStorage Productivity Center installation information
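If a DNS entry cannot be added right away, the same name resolution can be sketched with a local hosts entry. This is a hedged illustration only: the address 192.168.123.146 is a placeholder for your agent manager's IP, hosts.sample stands in for the real %SystemRoot%\system32\drivers\etc\HOSTS file, and a hosts file maps only the name TivoliAgentRecovery (the port 80 part of the task still belongs to the service configuration).

```shell
# Sketch: map the unqualified recovery-service name to the agent manager
# host. 192.168.123.146 is a placeholder; on a live Windows system you
# would append to %SystemRoot%\system32\drivers\etc\HOSTS, not hosts.sample.
echo '192.168.123.146 TivoliAgentRecovery' >> hosts.sample

# Confirm the entry is present:
grep 'TivoliAgentRecovery' hosts.sample
```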
  • 102. The Package Location for Disk and Replication Manager window (Figure 3-54 on page 86) is displayed. Enter the appropriate information and click Next to continue.

Figure 3-55 Package location for Productivity Center Disk and Replication

The Information for Disk and Replication Manager panel is displayed. Click Next to continue as shown in Figure 3-56.

Figure 3-56 Installer information

The Launch Disk and Replication Manager Base panel is displayed indicating that the Disk and Replication Manager installation wizard will be launched. Click Next to continue as shown in Figure 3-57 on page 88.
  • 103. Figure 3-57 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-58.

Figure 3-58 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory
  • 104. The IBM WebSphere selection panel is displayed. Click Next to continue as shown in Figure 3-59.

Figure 3-59 WebSphere Application Server information

If the installation user ID privileges were not set, an information panel stating that the privileges need to be set is displayed. Click Yes to continue. At this point the installation will terminate; close the installer, log off and log back on, and restart the installer.

Select the Typical radio button. Click Next to continue as shown in Figure 3-60 on page 90.
  • 105. Figure 3-60 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation
If the IBM Director Support Program and IBM Director Server services are still running, an information panel is displayed stating that the services will be stopped. Click Next to stop the running services as shown in Figure 3-61.

Figure 3-61 Server checks
  • 106. You must enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base super user ID in the IBM TotalStorage Productivity Center for Disk and Replication Base installation window. This user name must be defined to the operating system. Click Next to continue as shown in Figure 3-62.

Figure 3-62 IBM TotalStorage Productivity Center for Disk and Replication Base Superuser information

Enter the user name and password for the IBM DB2 Universal Database Server, and click Next to continue as shown in Figure 3-63 on page 92.
  • 107. Figure 3-63 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information
If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, then in the SSL Configuration window you must enter the fully qualified names of the two server key files that were generated previously, or that must be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information you enter will be used later.

– Generate a self-signed certificate: Select this option if you want the installer to automatically generate these certificate files (used for this installation).
– Defer the generation of the certificate as a manual post-installation task: Select this option if you want to manually generate these certificate files after the installation, using the WebSphere Application Server ikeyman utility. In this case the next step, Generate Self-Signed Certificate, is skipped.

Fill in the Key file and Trust file passwords.
  • 108. Figure 3-64 Key and Trust file options
If you chose to have the installation program generate the certificate for you, the Generate Self-Signed Certificate window opens. After completing all the fields, click Next as shown in Figure 3-65.

Figure 3-65 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information
  • 109. You are presented with the Create Local Database window. Enter the database name and click Next to continue as shown in Figure 3-66.

Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.

Figure 3-66 IBM TotalStorage Productivity Center for Disk and Replication Base Database name

The Preview window displays a summary of all of the choices that were made during the customizing phase of the installation. Click Install to complete the installation as shown in Figure 3-67 on page 95.
  • 110. Figure 3-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information
The Finish window opens. You can view the log file for any possible error messages. The log file is located in <installed directory>\logs\dmlog.txt. The dmlog.txt file contains a trace of the installation actions. Click Finish to complete the installation. The post-install tasks information opens in Notepad. You should read the information and complete any required tasks.

3.5.5 IBM TotalStorage Productivity Center for Disk
The next product to be installed is the Productivity Center for Disk as indicated in Figure 3-68 on page 96. Click Next to continue.
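A quick way to scan the installation log for problems is sketched below. The file name dmlog.sample.txt and its contents are made up for illustration; on the Windows server itself the equivalent check would be findstr /i error against the real dmlog.txt in the installation's logs directory.

```shell
# Sketch: scan an install log for error lines. dmlog.sample.txt and its
# two INFO lines are fabricated stand-ins for the real dmlog.txt trace.
printf 'INFO: creating database\nINFO: database created\n' > dmlog.sample.txt

# Print any line containing "error" (case-insensitive); otherwise report clean.
if grep -i 'error' dmlog.sample.txt; then
  echo "errors found - review the log"
else
  echo "no errors found"
fi
# → no errors found
```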
  • 111. Figure 3-68 IBM TotalStorage Productivity Center installer information
The Package Location for IBM TotalStorage Productivity Center for Disk panel is displayed. Enter the appropriate information and click Next to continue as shown in Figure 3-69.

Figure 3-69 Productivity Center for Disk install package location

The Launch IBM TotalStorage Productivity Center for Disk panel is displayed indicating that the IBM TotalStorage Productivity Center for Disk installation wizard will be launched (see Figure 3-70 on page 97). Click Next to continue.
  • 112. Figure 3-70 IBM TotalStorage Productivity Center for Disk installer
The Productivity Center for Disk Installer - Welcome panel is displayed (see Figure 3-71). Click Next to continue.

Figure 3-71 IBM TotalStorage Productivity Center for Disk Installer Welcome

The confirm target directories panel is displayed. Enter the directory path or accept the default directory (see Figure 3-72 on page 98) and click Next to continue.
  • 113. Figure 3-72 Productivity Center for Disk Installer - Destination Directory
The IBM TotalStorage Productivity Center for Disk Installer - Installation Type panel opens (see Figure 3-73). Select the Typical install radio button and click Next to continue.

Figure 3-73 Productivity Center for Disk Installation Type

The database configuration panel opens. Accept the database name or enter a new database name, and click Next to continue as shown in Figure 3-74 on page 99.
  • 114. Figure 3-74 IBM TotalStorage Productivity Center for Disk database name
Review the information in the IBM TotalStorage Productivity Center for Disk preview panel and click Install as shown in Figure 3-75.

Figure 3-75 IBM TotalStorage Productivity Center for Disk installation preview
  • 115. The installer will create the required database (see Figure 3-76) and install the product. You will see a progress bar for the Productivity Center for Disk install status.

Figure 3-76 Productivity Center for Disk DB2 database creation

When the install is complete you will see the panel in Figure 3-77. You should review the post installation tasks. Click Finish to continue.

Figure 3-77 Productivity Center for Disk Installer - Finish

3.5.6 IBM TotalStorage Productivity Center for Replication
The InstallShield panel will be displayed. Read the information and click Next to continue as shown in Figure 3-78 on page 101.
  • 116. Figure 3-78 IBM TotalStorage Productivity Center installation overview
The Package Location for Replication Manager panel is displayed. Enter the appropriate information and click Next to continue.

The Welcome window opens with suggestions about what documentation to review prior to installation. Click Next to continue as shown in Figure 3-79, or click Cancel to exit the installation.

Figure 3-79 IBM TotalStorage Productivity Center for Replication installation
  • 117. The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-80.

Figure 3-80 IBM TotalStorage Productivity Center for Replication installation directory

The next panel (see Figure 3-81) asks you to select the install type. Select the Typical radio button and click Next to continue.

Figure 3-81 Productivity Center for Replication Install type selection
  • 118. Enter parameters for the new DB2 Hardware subcomponent database in the database name field, or accept the default. We recommend you accept the default. Click Next to continue as shown in Figure 3-82.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.

Figure 3-82 IBM TotalStorage Productivity Center for Replication hardware database name

Enter parameters for the new Element Catalog subcomponent database in the database name field, or accept the default. Click Next to continue as shown in Figure 3-83 on page 104.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.
  • 119. Figure 3-83 IBM TotalStorage Productivity Center for Replication element catalog database name
Enter parameters for the new Replication Manager subcomponent database in the database name field, or accept the default. Click Next to continue as shown in Figure 3-84 on page 105.

Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.
  • 120. Figure 3-84 IBM TotalStorage Productivity Center for Replication, Replication Manager database name
Select the required database tuning cycle in hours, and click Next to continue as shown in Figure 3-85.

Figure 3-85 IBM TotalStorage Productivity Center for Replication database tuning cycle
  • 121. Review the information in the IBM TotalStorage Productivity Center for Replication preview panel and click Install as shown in Figure 3-86.

Figure 3-86 IBM TotalStorage Productivity Center for Replication installation information

The Productivity Center for Replication Installer - Finish panel in Figure 3-87 will be displayed upon successful installation. Read the post installation tasks. Click Finish to complete the installation.

Figure 3-87 Productivity Center for Replication installation successful
  • 122. 3.5.7 IBM TotalStorage Productivity Center for Fabric
We have included the installation for the Productivity Center for Fabric here. Refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for more information on using the Productivity Center for Fabric with the Productivity Center for Disk.

Prior to installing IBM TotalStorage Productivity Center for Fabric, there are prerequisite tasks that need to be completed. These tasks are described in detail in 3.4, “IBM TotalStorage Productivity Center for Fabric” on page 54. They include:
– “The computer name” on page 54
– “SNMP install” on page 53
– “Database considerations” on page 55
– “Windows Terminal Services” on page 55
– “User IDs and password considerations” on page 56
– “Personal firewall” on page 56
– “Tivoli NetView” on page 55
– “Security Considerations” on page 57

Installing the manager
After the successful installation of the Productivity Center for Replication, the suite installer will begin the Productivity Center for Fabric install (see Figure 3-88). Click Next to continue.

Figure 3-88 IBM TotalStorage Productivity Center installation information

The InstallShield panel will be displayed. Read the information and click Next to continue. The Package Location for Productivity Center for Fabric Manager panel is displayed (see Figure 3-89 on page 108). Enter the appropriate information and click Next to continue.

Important: The package location at this point is very important. If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file.
  • 123. Figure 3-89 Productivity Center for Fabric install package location
The language installation option panel is displayed. Select the required language and click OK as shown in Figure 3-90.

Figure 3-90 IBM TotalStorage Productivity Center for Fabric install wizard

The Welcome panel is displayed. Click Next to continue as shown in Figure 3-91 on page 109.
Figure 3-91 IBM TotalStorage Productivity Center for Fabric welcome information Select the type of installation you want to perform (see Figure 3-92 on page 110). In this case we are installing the IBM TotalStorage Productivity Center for Fabric code. You can also use the suite installer to perform a remote deployment of the Fabric agent. This operation can be performed only if you have previously installed the common agent on a machine. For example, you might have installed the Data agent on the machines and want to add the Fabric agent to the same machines. You must have installed the Fabric Manager before you can deploy the Fabric agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time. You can only select one option. Click Next to continue. Chapter 3. TotalStorage Productivity Center suite installation 109
  • 125. Figure 3-92 Fabric Manager installation type selection The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-93. Figure 3-93 IBM TotalStorage Productivity Center for Fabric installation directory The Port Number panel is displayed. This is a range of eight port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port number you specify is considered the primary port number. You only need to enter the primary port number. The primary port number and the next 7 numbers will be reserved for use by IBM TotalStorage Productivity110 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric will use port numbers 9550–9557. Ensure that the port numbers you use are not used by other applications at the same time. To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt. We recommend you use the first command. – netstat -a – netstat -an The port numbers in use on the system are listed in the Local Address column of the output. This field has the format host:port. Enter the primary port number as shown in Figure 3-94 and click Next to continue. Figure 3-94 IBM TotalStorage Productivity Center for Fabric port number The Database choice panel is displayed. You can select DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server. DB2 is the recommended installation option. Click Next to continue as shown in Figure 3-95 on page 112. Chapter 3. TotalStorage Productivity Center suite installation 111
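Because the installer silently reserves the seven ports above the primary one, it can help to confirm that the whole block is free before running it. The following is a minimal sketch (our own illustration, not part of the product) that attempts to bind each port in the range and reports the ones already taken:

```python
import socket

def ports_free(primary_port, count=8, host="127.0.0.1"):
    """Return the ports in [primary_port, primary_port + count) that are already in use."""
    busy = []
    for port in range(primary_port, primary_port + count):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                # A successful bind means nothing is listening on this port.
                s.bind((host, port))
            except OSError:
                busy.append(port)
    return busy

# Example: check the default Fabric range 9550-9557.
# An empty list means all eight ports are free to use.
print(ports_free(9550))
```

This is the programmatic equivalent of scanning the Local Address column of `netstat -a` by hand; if the list is non-empty, pick a different primary port.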
Figure 3-95 IBM TotalStorage Productivity Center for Fabric database selection type The next panel allows you to select the WebSphere Application Server to use in the install. In this installation we used Embedded WebSphere Application Server. Click Next to continue as shown in Figure 3-96. Figure 3-96 Productivity Center for Fabric WebSphere Application Server type selection The Single or Multiple User ID and Password panel (using DB2) is displayed (see Figure 3-97 on page 113). If you selected DB2 as your database, you will see this panel. This panel allows you to use the DB2 administrative user ID and password for the DB2 user and WebSphere user. You can also use the DB2 administrative password for the host authentication and NetView password. 112 Managing Disk Subsystems using IBM TotalStorage Productivity Center
For example, if you selected all the choices in the panel, you will use the DB2 administrative user ID and password for the DB2 and WebSphere user ID and password. You will also use the DB2 administrative password for the host authentication and NetView password. If you select a choice, you will not be prompted for the user ID or password for each item you select. Note: If you selected Cloudscape as your database, this panel is not displayed. Click Next to continue. Figure 3-97 IBM TotalStorage Productivity Center for Fabric user and password options The User ID and Password panel (using DB2) is displayed. If you selected DB2 as your database, you will see this panel. This panel allows you to use the DB2 administrative user ID and password for DB2. Enter the required user ID and password, and click Next to continue as shown in Figure 3-98 on page 114. Chapter 3. TotalStorage Productivity Center suite installation 113
Figure 3-98 IBM TotalStorage Productivity Center for Fabric database user information Enter a name for the new database or accept the default, then click Next to continue as shown in Figure 3-99. Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications. Figure 3-99 IBM TotalStorage Productivity Center for Fabric database name Enter the drive for the database, then click Next to continue as shown in Figure 3-100 on page 115. 114 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-100 IBM TotalStorage Productivity Center for Fabric database drive information The Agent Manager Information panel is displayed. You must provide the following information: – Agent manager name or IP address. This is the name or IP address of your agent manager. – Agent manager registration port. This is the port number of your agent manager. – Agent registration password (twice). This is the password used to register the common agent with the agent manager, as shown in Figure 3-50 on page 84. If the password was not set and the default was accepted, the password is changeMe. – Resource manager registration user ID. This is the user ID used to register the resource manager with the agent manager (default is manager). – Resource manager registration password (twice). This is the password used to register the resource manager with the agent manager (default is password). Fill in the information and click Next to continue as shown in Figure 3-101 on page 116. Chapter 3. TotalStorage Productivity Center suite installation 115
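The fields on the Agent Manager Information panel can be sanity-checked before you click Next. The sketch below is purely illustrative (the field names, the sample host, and the sample port are our own assumptions, not product code); it only mirrors the panel's rules: a host is required, the port must be valid, and the twice-entered password must match:

```python
from dataclasses import dataclass

@dataclass
class AgentManagerInfo:
    host: str                      # agent manager name or IP address (sample value below is hypothetical)
    registration_port: int         # agent manager registration port
    agent_password: str            # agent registration password (default "changeMe" per the text)
    agent_password_confirm: str    # the password is entered twice on the panel
    rm_user: str = "manager"       # resource manager registration user ID (default from the text)
    rm_password: str = "password"  # resource manager registration password (default from the text)

    def validate(self):
        errors = []
        if not self.host:
            errors.append("agent manager host is required")
        if not (0 < self.registration_port < 65536):
            errors.append("registration port must be 1-65535")
        if self.agent_password != self.agent_password_confirm:
            errors.append("agent registration passwords do not match")
        return errors

info = AgentManagerInfo("tpcserver.example.com", 9511, "changeMe", "changeMe")
print(info.validate())   # [] -> all fields consistent
```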
Figure 3-101 IBM TotalStorage Productivity Center for Fabric agent manager information The IBM TotalStorage Productivity Center for Fabric Install panel is displayed. This panel provides information about the location and size of the Fabric Manager. Click Next to continue as shown in Figure 3-102. Figure 3-102 IBM TotalStorage Productivity Center for Fabric installation information The Status panel is displayed. The installation can take about 15–20 minutes to complete. When the installation has completed, the Successfully Installed panel is displayed. Click Next to continue as shown in Figure 3-103 on page 117. 116 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 3-103 IBM TotalStorage Productivity Center for Fabric installation status The install wizard Complete Installation panel is displayed. Do not restart your computer yet: select No, I will restart my computer later and click Finish to complete the installation as shown in Figure 3-104. Figure 3-104 IBM TotalStorage Productivity Center for Fabric restart options The Install Status panel will be displayed, indicating the Productivity Center for Fabric installation was successful. Click Next to continue as shown in Figure 3-105 on page 118. Chapter 3. TotalStorage Productivity Center suite installation 117
  • 133. Figure 3-105 IBM TotalStorage Productivity Center installation information118 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 134. 4 Chapter 4. CIMOM installation and configuration This chapter provides a step-by-step guide to configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) that are required to use the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication.© Copyright IBM Corp. 2004, 2005. All rights reserved. 119
4.1 Introduction After you have completed the installation of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, you will need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents. Note: For the remainder of this chapter, we refer to the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication simply as TotalStorage Productivity Center. The TotalStorage Productivity Center for Disk uses SLP as the method for CIM clients to locate managed objects. The CIM clients may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter we describe the steps for: Planning considerations for Service Location Protocol (SLP) SLP configuration recommendation General performance guidelines Planning considerations for CIMOM Installing and configuring CIM agent for Enterprise Storage Server Verifying connection to ESS Setting up Service Location Protocol Directory Agent (SLP DA) Installing and configuring CIM agent for DS 4000 Family Configuring CIM agent for SAN Volume Controller 4.2 Planning considerations for Service Location Protocol The Service Location Protocol (SLP) has three major components: the Service Agent (SA), the User Agent (UA), and the Directory Agent (DA). The SA and UA are required components; the DA is optional. You will have to decide whether to use an SLP DA in your environment, based on the considerations described below. 4.2.1 Considerations for using SLP DAs The main reason to use a DA is to reduce the amount of multicast traffic involved in service discovery. In a large network with many UAs and SAs, the amount of multicast traffic involved in service discovery can become so large that network performance degrades.
By deploying one or more DAs, UAs must unicast to DAs for service and SAs must register with DAs using unicast. The only SLP-registered multicast in a network with DAs is for active and passive DA discovery.120 Managing Disk Subsystems using IBM TotalStorage Productivity Center
SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You may consider using DAs in your enterprise if any of the following conditions are true: Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop. UA clients experience long delays or time-outs during multicast service requests. You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load. Your network does not have multicast enabled and consists of multiple subnets that must share services. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request. 4.2.2 SLP configuration recommendation Some configuration recommendations are provided for enabling TotalStorage Productivity Center for Disk to discover a larger set of storage devices. These recommendations cover some of the more common SLP configuration problems. This topic discusses router configuration and SLP directory agent configuration. Router configuration Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those that are associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation.
Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets. Routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets it passes between subnets, which can result in the need to set the Multicast TTL higher to discover systems on the other subnets of the router. The Multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. - Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations). Chapter 4. CIMOM installation and configuration 121
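The Multicast TTL described above is an ordinary socket option on the discovery sender's side. As a rough illustration (our own sketch, not product code), this is how a UDP socket aimed at the SLP multicast address would have its TTL raised so that discovery packets can cross more subnet hops:

```python
import socket
import struct

# The SLP multicast address and port from the router-configuration text.
SLP_MCAST_ADDR = ("239.255.255.253", 427)

def make_slp_probe_socket(ttl=2):
    """UDP socket whose multicast TTL controls how many router hops discovery packets survive."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The TTL is decremented at each router; a larger value lets discovery
    # reach more subnets (provided the routers forward multicast at all).
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("B", ttl))
    return s

s = make_slp_probe_socket(ttl=4)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL))  # 4
```

A real probe would then be sent with `s.sendto(payload, SLP_MCAST_ADDR)`; the point here is only that raising the TTL, as the Attention box describes, is a per-socket setting, while the routers still decide whether multicast is forwarded.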
  • 137. SLP directory agent configuration Configure the SLP directory agents (DAs) to circumvent the multicast limitations. With statically configured DAs, all service requests are unicast by the user agent. Therefore, it is possible to configure one DA for each subnet that contains storage devices that are to be discovered by TotalStorage Productivity Center for Disk. One DA is sufficient for each of such subnets. Each of these DAs can discover all services within its own subnet, but no other services outside its own subnet. To allow TotalStorage Productivity Center for Disk to discover all of the devices, it needs to be statically configured with the addresses of each of these DAs. This can be accomplished using the TotalStorage Productivity Center for Disk Discovery Preference panel as discussed in “Configuring IBM Director for SLP discovery” on page 152. You can use this panel to enter a list of DA addresses. TotalStorage Productivity Center for Disk sends unicast service requests to each of these statically configured DAs, and sends multicast service requests on the local subnet on which TotalStorage Productivity Center for Disk is installed. Configure an SLP DA by changing the configuration of the SLP service agent (SA) that is included as part of an existing CIM Agent installation. This causes the program that normally runs as an SLP SA to run as an SLP DA. Note: The change from SA to DA does not affect the CIMOM service of the subject CIM Agent, which continues to function normally, sending registration and deregistration commands to the DA directly.4.3 General performance guidelines Here are some general performance considerations for configuring the TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication environment. Do not overpopulate the SLP discovery panel with SLP agent hosts. 
Remember that TotalStorage Productivity Center for Disk includes a built-in SLP User Agent (UA) that will receive information about SLP Service Agents and Directory Agents (DAs) that reside in the same subnet as the TotalStorage Productivity Center for Disk installation. You should have no more than one DA per subnet. Misconfiguring the IBM Director discovery preferences may impact performance on auto discovery or on device presence checking. It may also result in application time-outs, as attempts are made to resolve and communicate with hosts that are not available. It should be considered mandatory to run the ESS CLI and ESS CIM agent software on another host of comparable size to the main TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication server. Attempting to run a full TotalStorage Productivity Center implementation (Device Manager, Performance Manager, Replication Manager, DB2, IBM Director, and the WebSphere Application Server) on the same host as the ESS CIM agent will result in dramatically increased wait times for data retrieval. Based on our ITSO lab experience, we suggest separate servers for TotalStorage Productivity Center for Disk along with TotalStorage Productivity Center for Replication, the ESS CIMOM, and the DS 4000 family CIMOM. Otherwise, you may have port conflicts, increased wait times for data retrieval, and resource contention. 122 Managing Disk Subsystems using IBM TotalStorage Productivity Center
4.4 Planning considerations for CIMOM The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface. Figure 4-1 on page 123 shows an overview of the CIM agent. Figure 4-1 CIM Agent Overview You may plan to install the CIM agent code on the same server that also has the device management interface, or you may install it on a separate server. Attention: At this time only a few devices come with an integrated CIM agent; most devices need an external CIMOM so that CIM-enabled management applications (CIM clients) can communicate with the device. For ease of installation, IBM provides an ICAT (short for Integrated Configuration Agent Technology), which is a bundle that mainly includes the CIMOM, the device provider, and an SLP SA. 4.4.1 CIMOM configuration recommendations The following recommendations are based on our experience in the ITSO lab environment: The CIMOM agent code which you are planning to use must be supported by the installed version of TotalStorage Productivity Center for Disk. You may refer to the link below for the latest updates: http://www-1.ibm.com/servers/storage/support/software/tpcdisk/ You must have a CIMOM-supported firmware level on the storage devices. If you have an incorrect version of firmware, you may not be able to discover and manage the storage devices. The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection. Hence it is recommended to have a dedicated server for the CIMOM agent. You may, however, configure the same CIMOM agent for multiple devices of the same type. You may also plan to locate this server within the same data center where the storage devices are located. This is in consideration of firewall port requirements.
Typically, it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, then you need to open the firewall ports only for TotalStorage Productivity Center for Disk communication with the CIMOM. Chapter 4. CIMOM installation and configuration 123
Co-location of CIM agent instances of differing types on the same server is not recommended because of resource contention. It is strongly recommended to have separate, dedicated servers for the CIMOM agents and the TotalStorage Productivity Center server. This is due to resource contention, TCP/IP port requirements, and system services co-existence. 4.5 Installing CIM agent for ESS Before starting Multiple Device Manager discovery, you must first configure the Common Information Model Object Manager (CIMOM) for ESS. The ESS CIM Agent package is made up of the following parts (see Figure 4-2). Figure 4-2 ESS CIM Agent Package This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system. 4.5.1 ESS CLI install The following installation and configuration tasks are listed in the order in which they should be performed: Before you install the ESS CIM Agent you must install the IBM TotalStorage Enterprise Storage System Command Line Interface (ESS CLI). The ESS CIM Agent installation program checks your system for the existence of the ESS CLI and reports that it cannot continue if the ESS CLI is not installed, as shown in Figure 4-3 on page 125. 124 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 4-3 ESS CLI install requirement for ESS CIM Agent Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software. You must have a minimum ESS CLI level of 2.4.0.236. Perform the following steps to install the ESS CLI for Windows: Insert the CD for the ESS CLI in the CD-ROM drive, run the setup, and follow the instructions as shown in Figure 4-4 on page 126 through Figure 4-7 on page 127. Note: The ESS CLI installation wizard detects if you have an earlier level of the ESS CLI software installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI. Chapter 4. CIMOM installation and configuration 125
  • 141. Figure 4-4 InstallShield Wizard for ESS CLI Figure 4-5 Choose target system panel126 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 4-6 ESS CLI Setup Status panel Figure 4-7 ESS CLI installation complete panel Reboot your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that will not be in effect for the ESS CIM Agent, which runs as a service, until you reboot your system. Chapter 4. CIMOM installation and configuration 127
Verify that the ESS CLI is installed: – Click Start –> Settings –> Control Panel. – Double-click the Add/Remove Programs icon. – Verify that there is an IBM ESS CLI entry. Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command: esscli -u itso -p itso13sj -s 9.43.226.43 list server Where: – 9.43.226.43 represents the IP address of the Enterprise Storage Server – itso represents the Enterprise Storage Server Specialist user name – itso13sj represents the Enterprise Storage Server Specialist password for the user name Figure 4-8 shows the response from the esscli command. Figure 4-8 ESS CLI verification 4.5.2 ESS CIM Agent install To install the ESS CIM Agent on your Windows system, perform the following steps: Log on to your system as the local administrator. Insert the CIM Agent for ESS CD into the CD-ROM drive. The Install Wizard launchpad should start automatically if you have autorun mode set on your system. You should see a launchpad similar to Figure 4-9 on page 129. You may review the Readme file from the launchpad menu. Then click Installation Wizard. The Installation Wizard starts executing the setup.exe program and shows the Welcome panel in Figure 4-10 on page 130. Note: The ESS CIM Agent program should start within 15 - 30 seconds if you have autorun mode set on your system. If the installer window does not open, perform the following steps: – Use a Command Prompt or Windows Explorer to change to the Windows directory on the CD. – If you are using a Command Prompt window, run setup.exe. – If you are using Windows Explorer, double-click the setup.exe file. 128 Managing Disk Subsystems using IBM TotalStorage Productivity Center
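When there are several ESSs to verify, the ESS CLI check described earlier in this section can be scripted. The sketch below is our own illustration: it assumes `esscli` is on the PATH and, by default, only builds the argument list for the `list server` verification command without executing it:

```python
import subprocess

def esscli_list_server(ip, user, password, run=False):
    """Build (and optionally run) the ESS CLI verification command from 4.5.1."""
    cmd = ["esscli", "-u", user, "-p", password, "-s", ip, "list", "server"]
    if run:
        # Requires the ESS CLI to be installed and the ESS to be reachable.
        return subprocess.run(cmd, capture_output=True, text=True)
    return cmd

# Reproduces the documented example command.
print(" ".join(esscli_list_server("9.43.226.43", "itso", "itso13sj")))
# esscli -u itso -p itso13sj -s 9.43.226.43 list server
```

Looping this over each ESS with `run=True` and checking the return codes gives a quick connectivity report before proceeding to the CIM Agent install.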
Note: If you are using CIMOM code from the IBM download Web site and not from the distribution CD, then you must ensure that you use a short Windows directory pathname. Executing setup.exe from a longer pathname may fail. An example of a short pathname is C:\CIMOM\setup.exe. Figure 4-9 ESS CIMOM launchpad The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 4-10 on page 130). Chapter 4. CIMOM installation and configuration 129
  • 145. Figure 4-10 ESS CIM Agent welcome window The License Agreement window opens. Read the license agreement information. Select “I accept the terms of the license agreement”, then click Next to accept the license agreement (see Figure 4-11 on page 131).130 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 146. Figure 4-11 ESS CIM Agent license agreement The Destination Directory window opens. Accept the default directory and click Next (see Figure 4-12 on page 132). Chapter 4. CIMOM installation and configuration 131
Figure 4-12 ESS CIM Agent destination directory panel The Updating CIMOM Port window opens (see Figure 4-13 on page 133). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup we used the default port 5989. Note: If the default port is the same as another port already in use, modify the default port and click Next. Use the following command to check which ports are in use: netstat -a 132 Managing Disk Subsystems using IBM TotalStorage Productivity Center
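The `netstat` check in the note above can also be automated. As a rough sketch (our own, not part of the product), the function below extracts the port numbers from the Local Address column of Windows-style `netstat -an` output, so you can test whether the default CIMOM port 5989 is already taken:

```python
def ports_in_use(netstat_output):
    """Extract the set of local port numbers from Windows-style `netstat -an` output.

    The Local Address column has the form host:port (e.g. 0.0.0.0:5989).
    """
    ports = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0].upper() in ("TCP", "UDP"):
            port = fields[1].rsplit(":", 1)[-1]
            if port.isdigit():
                ports.add(int(port))
    return ports

# Sample output (abbreviated, host addresses are hypothetical):
sample = """\
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  TCP    9.43.226.50:5989       0.0.0.0:0              LISTENING
  UDP    0.0.0.0:427            *:*
"""
print(5989 in ports_in_use(sample))   # True -> choose a different CIMOM port
```

In practice you would feed it the real output, for example `ports_in_use(subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout)`.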
  • 148. Figure 4-13 ESS CIM Agent port window The Installation Confirmation window opens (see Figure 4-14 on page 134). Click Install to confirm the installation location and file size. Chapter 4. CIMOM installation and configuration 133
  • 149. Figure 4-14 ESS CIM Agent installation confirmation The Installation Progress window opens (see Figure 4-15 on page 135) indicating how much of the installation has completed.134 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 4-15 ESS CIM Agent installation progress When the Installation Progress window closes, the Finish window opens (see Figure 4-16 on page 136). Check the View post installation tasks check box if you want to continue with the post installation tasks when the wizard closes. We recommend you review the post installation tasks. Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the ESS CIM Agent for Windows is installed. Chapter 4. CIMOM installation and configuration 135
  • 151. Figure 4-16 ESS CIM Agent install complete- starting services Click Finish to exit the installation wizard (see Figure 4-17 on page 137).136 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 152. Figure 4-17 ESS CIM Agent install successful4.5.3 Post Installation tasks Continue with the following post installation tasks for the ESS CIM Agent. Verify the installation of the SLP Verify that the Service Location Protocol is started. Select Start → Settings → Control Panel. Double-click the Administrative Tools icon. Double-click the Services icon. Find Service Location Protocol in the Services window list. For this component, the Status column should be marked Started as shown in Figure 4-18 on page 138. Chapter 4. CIMOM installation and configuration 137
  • 153. Figure 4-18 Verify Service Location Protocol started If SLP is not started, right-click the SLP and select Start from the pop-up menu. Wait for the Status column to be changed to Started. Verify the installation of the ESS CIM Agent Verify that the CIMOM service is started. If you closed the Services window, select Start → Settings → Control Panel. Double-click the Administrative Tools icon. Double-click the Services icon. Find the IBM CIM Object Manager - ESS in the Services window list. For this component, the Status column should be marked Started and the Startup Type column should be marked Automatic, as shown in Figure 4-19 on page 139.138 Managing Disk Subsystems using IBM TotalStorage Productivity Center
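Instead of checking the Services window by hand, the same state information can be read from the command line. The sketch below is our own illustration: it assumes the Windows `sc query` command and an illustrative service name, and the only part exercised here is the parser for the STATE line of its output:

```python
import subprocess

def parse_state(sc_output):
    """Pull the state keyword (for example RUNNING) out of `sc query` output."""
    for line in sc_output.splitlines():
        if "STATE" in line:
            # e.g. "        STATE              : 4  RUNNING"
            return line.split()[-1]
    return "UNKNOWN"

def service_state(name):
    """Query a Windows service's state via `sc query` (Windows-only)."""
    out = subprocess.run(["sc", "query", name], capture_output=True, text=True).stdout
    return parse_state(out)

# Abbreviated sample output; the service name "slpd" is a hypothetical
# short name for the Service Location Protocol service.
sample = """\
SERVICE_NAME: slpd
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
"""
print(parse_state(sample))   # RUNNING
```

On the CIMOM server itself you would call `service_state(...)` with the actual short names of the Service Location Protocol and IBM CIM Object Manager - ESS services and confirm both report RUNNING.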
Figure 4-19 ESS CIM Object Manager started confirmation If the IBM CIM Object Manager is not started, right-click the IBM CIM Object Manager - ESS and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has been successfully installed on your Windows system. Next, perform the configuration tasks. 4.6 Configuring the ESS CIM Agent for Windows This task configures the ESS CIM Agent after it has been successfully installed. 4.6.1 Registering ESS Devices Perform the following steps to configure the ESS CIM Agent: Configure the ESS CIM Agent with the information for each Enterprise Storage Server the ESS CIM Agent is to access. – Start → Programs → IBM TotalStorage CIM Agent for ESS → Enable ESS Communication as shown in Figure 4-20 on page 140. Chapter 4. CIMOM installation and configuration 139
Figure 4-20 Configuring the ESS CIM Agent – Type the command addess <ip> <user> <password> for each ESS (as shown in Figure 4-21 on page 141): • Where <ip> represents the IP address of the cluster of the Enterprise Storage Server • <user> represents the Enterprise Storage Server Specialist user name 140 Managing Disk Subsystems using IBM TotalStorage Productivity Center
• <password> represents the Enterprise Storage Server Specialist password for the user name Important: The ESS CIM agent relies on ESS CLI connectivity from the ESS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. It is recommended to verify this by launching the ESS Specialist browser from the ESS CIMOM server. You may log on to both ESS clusters for each ESS and make sure you are authenticated with the correct ESS passwords and IP addresses. If the ESSs are on a different subnet than the ESS CIMOM server and behind a firewall, then you must authenticate through the firewall first before registering the ESS with the CIMOM. If you have a bi-directional firewall between the ESS devices and the CIMOM server, then you must verify the connection using the rsTestConnection command of the ESS CLI code. If the ESS CLI connection is not successful, you must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Once you are satisfied that you are able to authenticate and receive the ESS CLI heartbeat from all the ESSs successfully, you may proceed to enter the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, since it retries the authentication. Figure 4-21 The addess command example 4.6.2 Register ESS server for Copy services Type the following command for each ESS server that is configured for Copy Services: addesserver <ip> <user> <password> • Where <ip> represents the IP address of the Enterprise Storage Server • <user> represents the Enterprise Storage Server Specialist user name • <password> represents the Enterprise Storage Server Specialist password for the user name Repeat the previous step for each additional ESS device that you want to configure. Chapter 4. CIMOM installation and configuration 141
Close the setdevice interactive session by typing exit. Once you have defined all the ESS servers, you must stop and restart the CIMOM to make the CIMOM initialize the information for the ESS servers. Note: Because the CIMOM collects and caches the information from the defined ESS servers at startup time, the starting of the CIMOM might take a longer period of time the next time you start it. Attention: If the user name and password entered are incorrect, or the ESS CIM agent does not connect to the ESS, this will cause an error and the ESS CIM Agent will not start and stop correctly. Use the following command to remove the ESS entry that is causing the problem, and reboot the server: – rmess <ip> Whenever you add or remove an ESS from the CIMOM registration, you must restart the CIMOM to pick up the updated ESS device list. 4.6.3 Restart the CIMOM Perform the following steps to use the Windows Start Menu facility to stop and restart the CIMOM. This is required so that the CIMOM can register new devices or unregister deleted devices. – Stop the CIMOM by selecting Start → Programs → IBM TotalStorage CIM Agent for ESS → Stop CIMOM service. A Command Prompt window opens to track the stoppage of the CIMOM (as shown in Figure 4-22). If the CIMOM has stopped successfully, the following message is displayed: Figure 4-22 Stop ESS CIM Agent – Restart the CIMOM by selecting Start → Programs → IBM TotalStorage CIM Agent for ESS → Start CIMOM service. A Command Prompt window opens to track the progress of the starting of the CIMOM. If the CIMOM has started successfully, the message shown in Figure 4-23 on page 143 is displayed: 142 Managing Disk Subsystems using IBM TotalStorage Productivity Center
Figure 4-23 Restart ESS CIM Agent

Note: The restarting of the CIMOM may take a while because it is connecting to the defined ESS servers and caching that information for future use.

4.6.4 CIMOM User Authentication

Use the setuser interactive tool to configure the CIMOM for the users who will have the authority to use the CIMOM. The user is the TotalStorage Productivity Center for Disk and Replication superuser.

Important: A TotalStorage Productivity Center for Disk and Replication superuser ID and password must be created and be the same for all CIMOMs that TotalStorage Productivity Center for Disk is to discover. This user ID should be less than or equal to eight characters.

Upon installation of the CIM Agent for ESS, the provided default user name is "superuser" with a default password of "passw0rd". The first time you use the setuser tool, you must use this user name and password combination. Once you have defined other user names, you can start the setuser command by specifying other defined CIMOM user names.

Note: The users which you configure to have authority to use the CIMOM are uniquely defined to the CIMOM software and have no required relationship to operating system user names, the ESS Specialist user names, or the ESS Copy Services user names.

– Open a Command Prompt window and change directory to the ESS CIM Agent directory, for example "C:\Program Files\IBM\cimagent".
– Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM.
– Type the command adduser cimuser cimpass in the setuser interactive session to define new users.
• Where cimuser represents the new user name to access the ESS CIM Agent CIMOM
• cimpass represents the password for the new user name to access the ESS CIM Agent CIMOM

Close the setuser interactive session by typing exit. For our ITSO lab setup we used TPCSUID as the superuser and ITSOSJ as the password.

4.7 Verifying connection to the ESS

During this task the ESS CIM Agent software connectivity to the Enterprise Storage Server (ESS) is verified. The connection to the ESS is through the ESS CLI software. If the network connectivity fails, or if the user name and password that you set in the configuration task are incorrect, the ESS CIM Agent cannot connect successfully to the ESS.

The installation, verification, and configuration of the ESS CIM Agent must be completed before you verify the connection to the ESS.

Verify that you have network connectivity to the ESS from the system where the ESS CIM Agent is installed. Issue a ping command to the ESS and check that you can see reply statistics from the ESS IP address.

Verify that the SLP is active by selecting Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon. You should see a panel similar to Figure 4-18 on page 138. Ensure that the Status is Started.

Verify that the CIMOM is active by selecting Start → Settings → Control Panel → Administrative Tools → Services. In the Services panel, select the IBM CIM Object Manager service and verify that the Status is shown as Started, as shown in Figure 4-24.

Figure 4-24 Verify ESS CIMOM has started
Verify that the CIMOM has a dependency on SLP; this is automatically configured when you installed the CIM agent software. Verify this by selecting Start → Settings → Control Panel. Double-click the Administrative Tools icon, then double-click the Services icon and select properties on Service Location Protocol, as shown in Figure 4-25.

Figure 4-25 SLP properties panel

Click Properties and select the Dependencies tab as shown in Figure 4-26 on page 146. You must ensure that IBM CIM Object Manager has a dependency on Service Location Protocol (this should be the case by default).
Figure 4-26 SLP dependency on CIMOM

Verify CIMOM registration with SLP by selecting Start → Programs → TotalStorage CIM Agent for ESS → Check CIMOM Registration. A window opens displaying the wbem services as shown in Figure 4-27. These services have either registered themselves with SLP, or you have explicitly registered them with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be correctly listed here. It may take some time for a CIM Agent to register with SLP.

Figure 4-27 Verify CIM Agent registration with SLP

Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the ESS CIMOM will attempt to contact each ESS registered to it. Therefore, the startup may take some time, especially if it is not able to connect and authenticate to any of the registered ESSs.

Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center for Disk superuser name and password in order for TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM. The
verifyconfig command checks the registration for the ESS CIM Agent and checks that it can connect to the ESSs. At the ITSO lab we had configured two ESSs (as shown in Figure 4-28).

Figure 4-28 The verifyconfig command

4.7.1 Problem determination

You might run into some errors. If that is the case, check the cimom.log file, which is located in the C:\Program Files\IBM\cimagent directory. Verify that you have entries with your current install timestamp, as shown in Figure 4-29. The entries of specific interest are:

CMMOM050OI Registered service service:wbem:https://x.x.x.x:5989 with SLP SA
CMMOM0409I Server waiting for connections

The first entry indicates that the CIMOM has successfully registered with SLP using the port number specified at ESS CIM agent install time; the second indicates that it has started successfully and is waiting for connections.

Figure 4-29 CIMOM Log Success
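Checking cimom.log for the two startup-success entries above can be scripted. The following is a minimal sketch, not part of the product: the message substrings are copied verbatim from the log entries shown in the text, and the helper name is hypothetical.

```python
# Sketch: scan cimom.log content for the two startup-success indicators
# described above (SLP registration and "waiting for connections").

SLP_REGISTERED = "Registered service service:wbem"   # from the CMMOM050OI entry
WAITING = "Server waiting for connections"           # from the CMMOM0409I entry

def cimom_started_ok(log_text):
    """Return True only if both success messages appear in the log text."""
    lines = log_text.splitlines()
    registered = any(SLP_REGISTERED in line for line in lines)
    waiting = any(WAITING in line for line in lines)
    return registered and waiting

sample = (
    "CMMOM050OI Registered service service:wbem:https://x.x.x.x:5989 with SLP SA\n"
    "CMMOM0409I Server waiting for connections\n"
)
print(cimom_started_ok(sample))  # True
```

In practice you would read the real log with `open(r"C:\Program Files\IBM\cimagent\cimom.log").read()` and pass its contents to the helper.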
If you still have problems, refer to the IBM TotalStorage Enterprise Storage Server Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the doc directory at the root of the CIM Agent CD. Figure 4-30 shows the location of the install guide in the doc directory of the CD.

Figure 4-30 ESS Application Programming Interface Reference guide

4.7.2 Confirming the ESS CIMOM is available

Before you proceed, you need to be sure that the ESS CIMOM is listening for incoming connections. To do this, run a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on the configured port (as indicated by a black screen with the cursor at the top left) tells you that the ESS CIMOM is active. You selected this port during ESS CIMOM code installation. If the telnet connection fails, you will see a panel like the one shown in Figure 4-31. In that case, you have to investigate the problem until you get a blank screen for the telnet port.

Figure 4-31 Example of failed telnet connection

Another method to verify that the ESS CIMOM is up and running is to use the CIM Browser interface. For Windows machines, change the working directory to C:\Program Files\IBM\cimagent and run startcimbrowser. The WBEM browser in Figure 4-32 on page 149 will appear. The default user name is superuser and the default password is passw0rd. If you have already changed it using the setuser command, the new userid and
password must be provided. This should be set to the TotalStorage Productivity Center for Disk userid and password.

Figure 4-32 WBEM Browser

When login is successful, you should see a panel like the one in Figure 4-33.

Figure 4-33 CIMOM Browser window
4.7.3 Setting up the Service Location Protocol Directory Agent

You can use the following procedure to set up the Service Location Protocol (SLP) Directory Agent (DA) so that TotalStorage Productivity Center for Disk can discover devices that reside in subnets other than the one in which TotalStorage Productivity Center for Disk resides. Perform the following steps to set up the SLP DAs:

1. Identify the various subnets that contain devices that you want TotalStorage Productivity Center for Disk to discover.

2. Each device is associated with a CIM Agent. There might be multiple CIM Agents for each of the identified subnets. Pick one of the CIM Agents for each of the identified subnets. (It is possible to pick more than one CIM Agent per subnet, but it is not necessary for discovery purposes.)

3. Each of the identified CIM Agents contains an SLP service agent (SA), which runs as a daemon process. Each of these SAs is configured using a configuration file named slp.conf. Perform the following steps to edit the file:
– For example, if you have the ESS CIM agent installed in the default install directory path, go to the C:\Program Files\IBM\cimagent\slp directory.
– Look for the file named slp.conf.
– Make a backup copy of this file and name it slp.conf.bak.
– Open the slp.conf file and scroll down until you find (or search for) the line ;net.slp.isDA = true. Remove the semi-colon (;) at the beginning of the line. Ensure that this property is set to true (= true) rather than false. Save the file.
– Copy this file (or replace it if the file already exists) to the main Windows subdirectory for Windows machines (for example C:\winnt), or to the /etc directory for UNIX machines.

4. It is recommended that you reboot the SLP server at this stage. Alternatively, you may choose to restart the SLP and CIMOM services. You can do this from your Windows desktop: Start Menu → Settings → Control Panel → Administrative tools → Services.
In the Services GUI, locate Service Location Protocol, right-click and select stop. A pop-up panel will request to stop the IBM CIM Object Manager service; click Yes. You may start the SLP daemon again after it has stopped successfully. Alternatively, you may choose to restart the CIMOM using the command line as shown in Figure 4-34.

Figure 4-34 Stop and Start CIMOM using command line
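The slp.conf edit in step 3 amounts to uncommenting one property. A minimal sketch of that edit as a script follows; the function name is hypothetical, and the commented-out usage assumes the default install path named in the text.

```python
# Sketch: enable the SLP Directory Agent by uncommenting the
# ";net.slp.isDA = true" line in slp.conf text, as described in step 3.

def enable_slp_da(conf_text):
    """Return slp.conf text with the isDA property uncommented and set true."""
    out = []
    for line in conf_text.splitlines():
        if line.strip().startswith(";net.slp.isDA"):
            line = "net.slp.isDA = true"
        out.append(line)
    return "\n".join(out)

# Usage (commented out so the sketch stays side-effect free):
# path = r"C:\Program Files\IBM\cimagent\slp\slp.conf"
# text = open(path).read()
# open(path + ".bak", "w").write(text)       # keep the recommended backup
# open(path, "w").write(enable_slp_da(text)) # then copy to C:\winnt or /etc
```

After writing the file, you would still copy it to the main Windows directory (or /etc on UNIX) and restart the SLP and CIMOM services, as the procedure describes.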
Note: The CIMOM process might not start automatically when you restart the SLP daemon. After you execute the stopcimom and startcimom commands shown below, you should get a response that it has stopped and started successfully. CIMOM startup takes considerable time if you have configured many ESSs. To ensure that it has started and is listening, you may verify the cimom.log file as shown in Figure 4-29 on page 147. You should see the message "CMMOMxxxx server waiting for connections...".

Creating slp.reg file

Important: To avoid having to manually register the CIMOM outside the subnet every time that the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration is C:\winnt. Slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received.

slp.reg file example

The following is a slp.reg file sample.

Example 4-1 slp.reg file

##############################################################################
# OpenSLP static registration file
#
# Format and contents conform to specification in IETF RFC 2614, see also
# http://www.openslp.org/doc/html/UsersGuide/SlpReg.html
##############################################################################
#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20
service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20
#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

4.7.4 Configuring IBM Director for SLP discovery

You have now converted the SLP SA of the CIM Agent to run as an SLP DA. The CIMOM is not affected and will register itself with the DA instead of the SA. However, the DA will automatically discover all other services registered with other SLP SAs in that subnet.

Attention: You will need to register the IP address of the server running the SLP DA daemon with IBM Director to facilitate MDM SLP discovery. You can do this using the IBM Director console interface of TotalStorage Productivity Center for Disk.
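Each active (non-comment) entry in the Example 4-1 slp.reg file begins with a service URL line of the form service:wbem:https://host:port,lang,lifetime. Listing which CIMOMs are actually registered (as opposed to commented out) can be scripted; a sketch, with a hypothetical helper name:

```python
# Sketch: extract the registered (non-commented) service URLs from
# slp.reg content, in the Example 4-1 format.

def registered_services(slp_reg_text):
    services = []
    for line in slp_reg_text.splitlines():
        line = line.strip()
        if line.startswith("service:"):
            # strip the trailing ",<lang>,<lifetime>" registration attributes
            services.append(line.split(",")[0])
    return services

sample = """\
# Register Service - SVC CIMOMS
service:wbem:https://9.43.226.237:5989,en,65535
description=SVC CIMOM Open Systems Lab, Cottle Road
#service:wbem:https://9.42.164.175:5989,en,65535
"""
print(registered_services(sample))  # ['service:wbem:https://9.43.226.237:5989']
```

Lines beginning with # (including commented-out service entries) are skipped, matching how slpd treats them.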
At this stage, it is assumed that you have already completed the installation of TotalStorage Productivity Center for Disk on a separate and dedicated server. You may proceed to that server to perform the following steps and launch the IBM Director console.

Go to the IBM Director Console → Options → Discovery Preference → MDM SLP Configuration settings panel, and enter the host names or IP addresses of each of the machines that are running the SLP DA that was set up in the prior steps. As shown in Figure 4-35 on page 153, enter the IP address of the SLP DA server and click Add → OK.
Figure 4-35 IBM Director Discovery Preference Panel

4.7.5 Registering the ESS CIM Agent to SLP

You need to manually register the ESS CIM agent to the SLP DA only when the following conditions are both true:

• There is no ESS CIM Agent in the TotalStorage Productivity Center for Disk server subnet.
• The SLP DA used by Multiple Device Manager is also not running an ESS CIM Agent.

Tip: If either of the preceding conditions is false, you do not need to perform the following steps.

To register the ESS CIM Agent, issue the following commands on the SLP DA server:

C:\>cd C:\Program Files\IBM\cimagent\slp
slptool register service:wbem:https://ipaddress:port

Where ipaddress is the ESS CIM Agent IP address. For our ITSO setup, we used the IP address of our ESS CIMOM server, 9.1.38.48, and the default port number 5989. Issue a verifyconfig command as shown in Figure 4-28 on page 147 to confirm that SLP is aware of the registration.
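The manual registration step boils down to composing one slptool command per out-of-subnet CIM Agent. A sketch of a command builder follows; the helper name and the sample address are illustrative assumptions, not part of the product.

```python
# Sketch: compose the slptool registration command for a CIM Agent,
# following the "service:wbem:<scheme>://<ip>:<port>" form used above.

def slptool_register_command(ip, port=5989, secure=True):
    """Return the slptool command line registering a wbem service URL."""
    scheme = "https" if secure else "http"
    return "slptool register service:wbem:{0}://{1}:{2}".format(scheme, ip, port)

print(slptool_register_command("9.1.38.48"))
# slptool register service:wbem:https://9.1.38.48:5989
```

Port 5989 with https corresponds to a secure CIMOM; a nonsecure CIMOM would use http on port 5988 instead.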
Attention: Whenever you update the SLP configuration as shown above, you may have to stop and start the slpd daemon. This enables SLP to register and listen on the newly configured ports. Also, whenever you restart the SLP daemon, ensure that the IBM ESS CIMOM agent has also restarted. Otherwise, you may issue the startcimom.bat command, as shown in previous steps. Another alternative is to reboot the CIMOM server. Note that ESS CIMOM startup takes a longer time.

4.7.6 Verifying and managing CIMOMs availability

You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered to the SLP DA. Launch the IBM Director Console and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the tasks panel as shown in Figure 4-36. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO ESS CIMOM server connection status is indicated in the first line, with IP address 9.1.38.48, port 5996 and status Success.

Figure 4-36 Manage CIMOM panel

Note: The panel shows the connection status of all the connections attempted earlier, either successful or failed. It is possible to delete failed connections and clean up this panel manually.

In order to verify and re-confirm the connection, you may select the respective connection status and click Properties. Figure 4-37 on page 155 shows the properties panel. You may verify the user name and password information. The namespace, user name and password are picked up automatically, hence they are not required to be entered manually. This is the same user name / password you configured in earlier steps with the setuser command. This user name is used by TotalStorage Productivity Center for Disk to log on to the CIMOM. If you have problems getting a successful connection, you may manually enter the namespace as /root/ibm and your CIMOM user name / password.
Figure 4-37 CIMOM Properties panel

You can click the Test Connection button. It should show a panel similar to Figure 4-38, indicating that the connection is successful.

Figure 4-38 Test Connection for CIMOM

At this point TotalStorage Productivity Center for Disk has registered the ESS CIMOM and is ready for device discovery.

4.8 Installing CIM agent for IBM DS4000 family

The latest code for the IBM DS4000 family is available at the IBM support Web site. You need to download the correct and supported level of CIMOM code for TotalStorage Productivity Center for Disk Version 2.1. You can navigate from the following IBM support Web site for TotalStorage Productivity Center for Disk to acquire the correct CIMOM code:

http://www-1.ibm.com/servers/storage/support/software/tpcdisk/

You may have to traverse through multiple links to get to the download files. At the time of writing this book, we accessed the Web page as shown in Figure 4-39 on page 156.
Figure 4-39 IBM support matrix Web page

While scrolling down the same Web page, we found the link for the DS4000 CIMOM code shown in Figure 4-40 on page 157. This link leads to the Engenio provider Web site. The current supported code level is 1.0.59, as indicated on the Web page.
Figure 4-40 Web download link for DS Family CIMOM code

From the Web site select the operating system used for the server on which the IBM DS family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on the server you will be installing the DS4000 CIM Agent on (see Figure 4-41 on page 158).
Figure 4-41 DS CIMOM Install

Launch the setup.exe file to begin the DS4000 family CIM agent installation. The InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 4-42). Click Next to continue.

Figure 4-42 LSI SMI-S Provider window
The LSI License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 4-43 on page 159).

Figure 4-43 LSI License Agreement

The LSI System Info window opens. The minimum requirements are listed along with the install system's free disk space and memory attributes, as shown in Figure 4-44. If the install system fails the minimum requirements evaluation, a notification window will appear and the installation will fail. Click Next to continue.

Figure 4-44 System Info window
The Choose Destination Location window appears. Click Browse to choose another location, or click Next to begin the installation of the FAStT CIM agent (see Figure 4-45 on page 160).

Figure 4-45 Choose a destination

The InstallShield Wizard will prepare and copy the files into the destination directory. See Figure 4-46.

Figure 4-46 Install Preparation window
The README will appear after the files have been installed. Read through it to become familiar with the most current information (see Figure 4-47 on page 161). Click Next when ready to continue.

Figure 4-47 README file

In the Enter IPs and/or Hostnames window, enter the IP addresses and hostnames of the FAStT devices this FAStT CIM agent will manage, as shown in Figure 4-48.

Figure 4-48 FAStT device list
Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time until all the FAStT devices have been entered, and click Next (see Figure 4-49 on page 162).

Figure 4-49 Enter hostname or IP address

Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same subnet. This may cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the FAStT devices.

If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button, which will open the Windows Explorer. Locate and select the file, and then click Open to import the file contents. When all the FAStT device hostnames and IP addresses have been entered, click Next to start the SMI-S Provider Service (see Figure 4-50).

Figure 4-50 Provider Service starting

When the Service has started, the installation of the FAStT CIM agent is complete (see Figure 4-51 on page 163).
Figure 4-51 Installation complete

Arrayhosts File

The installer will create a file called:

%installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt

as shown in Figure 4-52. In this file the IP addresses of installed DS4000 units can be reviewed, added or edited.

Figure 4-52 Arrayhosts file

Verifying LSI Provider Service availability

You can verify from the Windows Services panel that the LSI Provider service has started, as shown in Figure 4-53 on page 164. If you change the contents of the arrayhosts file to add or delete DS4000 devices, you will need to restart the LSI Provider service using the Windows Services panel.
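Since the LSI provider only picks up arrayhosts.txt changes on a service restart, it is worth keeping the file free of duplicate entries before restarting. A sketch of such a helper follows; the one-address-per-line layout is an assumption about the file format, and the helper name is hypothetical.

```python
# Sketch: add a DS4000 address to arrayhosts.txt content without creating
# duplicates. Assumes one hostname or IP address per line.

def add_array_host(contents, new_host):
    """Return updated arrayhosts.txt content with new_host appended once."""
    hosts = [h.strip() for h in contents.splitlines() if h.strip()]
    if new_host not in hosts:
        hosts.append(new_host)
    return "\n".join(hosts) + "\n"

print(add_array_host("9.1.38.79\n", "9.1.38.80"))
```

After rewriting the file you would still restart the LSI Provider service from the Windows Services panel, as described above.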
Figure 4-53 LSI Provider Service

Registering DS4000 CIM agent

The DS4000 CIM Agent needs to be registered with an SLP DA if the CIM Agent is in a different subnet than that of the IBM TotalStorage Productivity Center for Disk and Replication Base environment. The registration is not currently provided automatically by the CIM Agent.

You register the DS4000 CIM Agent with the SLP DA from a command prompt using the slptool command. An example of the slptool command is shown below. You must change the IP address to reflect the IP address of the workstation or server where you installed the DS4000 family CIM Agent. The IP address of our FAStT CIM Agent is 9.1.38.79 and the port is 5988. You need to execute this command on your SLP DA server. In our ITSO lab, we used the SLP DA on the ESS CIMOM server. Go to the directory C:\Program Files\IBM\cimagent\slp and run:

slptool register service:wbem:http://9.1.38.79:5988

Important: You cannot have the FAStT management password set if you are using IBM TotalStorage Productivity Center.

At this point you may run the following command on the SLP DA server to verify that the DS4000 family CIM agent is registered with the SLP DA:

slptool findsrvs wbem

The response from this command will show the available services, which you may verify.

4.8.1 Verifying and Managing CIMOM availability

You may now verify that TotalStorage Productivity Center for Disk can authenticate and discover the CIMOM agent services which are registered with the SLP DA. Proceed to your TotalStorage Productivity Center for Disk server. Launch the IBM Director Console and select TotalStorage Productivity Center for Disk → Manage CIMOMs in the tasks panel as shown in Figure 4-54 on page 165. The panel shows the status of the connection to the respective CIMOM servers. Our ITSO DS4000 CIMOM server
connection status is indicated in the first line, with IP address 9.1.38.79, port 5988 and status Success.

Figure 4-54 Manage CIMOM Panel

Note: The panel shows the connection status of all the connections attempted earlier, either successful or failed. It is possible to delete failed connections and clean up this panel manually.

In order to verify and re-confirm the connection, you may select the respective connection status and click Properties. Figure 4-55 shows the properties panel. You may verify the user name and password information. The namespace, user name and password are picked up automatically, hence they are not required to be entered manually. If you have problems getting a successful connection, you may manually enter the namespace as /root/lsissi and your CIMOM user name / password.

Figure 4-55 DS CIMOM Properties Panel

You can click the Test Connection button. It should show a panel similar to Figure 4-56 on page 166, indicating that the connection is successful.
Figure 4-56 Test Connection for CIMOM

At this point TotalStorage Productivity Center for Disk has registered the DS4000 CIMOM and is ready for device discovery.

4.9 Configuring CIMOM for SAN Volume Controller

The CIM Agent for SAN Volume Controller is part of the SAN Volume Controller Console and provides TotalStorage Productivity Center for Disk with access to SAN Volume Controller clusters. You must customize the CIM Agents in your enterprise to accept the TotalStorage Productivity Center for Disk user name and password. Figure 4-57 explains the communication between TotalStorage Productivity Center for Disk and the SAN Volume Controller environment.

Figure 4-57 TotalStorage Productivity Center for Disk and SVC communication

For additional details on how to configure the SAN Volume Controller Console, refer to the redbook IBM TotalStorage Introducing the SAN Volume Controller, SG24-6423.

To discover and manage the SAN Volume Controller we need to ensure that our TotalStorage Productivity Center for Disk superuser name and password (the account we
specify in the TotalStorage Productivity Center for Disk configuration panel as shown in Figure 4-58) matches an account defined on the SAN Volume Controller console; in our case we implemented the user name TPCSUID and password ITSOSJ. You may want to adopt a similar nomenclature and set up the user name and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center for Disk.

Figure 4-58 Configure MDM Panel

4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account

As stated previously, you should implement a unique userid to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps:

1. Login to the SAN Volume Controller console with a superuser account.
2. Click Users under My Work on the left side of the panel (see Figure 4-59 on page 168).
Figure 4-59 SAN Volume Controller console

3. Select Add a user in the drop-down under the Users panel and click Go (see Figure 4-60).

Figure 4-60 SAN Volume Controller console Add a user
4. An introduction window opens; click Next (see Figure 4-61).

Figure 4-61 SAN Volume Controller Add a user wizard

5. Enter the User Name and Password and click Next (see Figure 4-62 on page 170).
Figure 4-62 SAN Volume Controller Console Define users panel

6. Select your candidate cluster and move it to the right under Administrator Clusters (see Figure 4-63). Click Next to continue.

Figure 4-63 SAN Volume Controller console Assign administrator roles

7. Click Next after you Assign service roles (see Figure 4-64 on page 171).
Figure 4-64 SAN Volume Controller Console Assign user roles

8. Click Finish after you Verify user roles (see Figure 4-65 on page 172).
Figure 4-65 SAN Volume Controller Console Verify user roles

9. After you click Finish, the Viewing users panel opens (see Figure 4-66).

Figure 4-66 SAN Volume Controller Console Viewing Users
Confirming the SAN Volume Controller CIMOM is available

Before you proceed, you need to be sure that the CIMOM on the SAN Volume Controller is listening for incoming connections. To do this, issue a telnet command from the server where TotalStorage Productivity Center for Disk resides. A successful telnet on port 5989 (as indicated by a black screen with the cursor at the top left) tells you that the SAN Volume Controller console CIMOM is active. If the telnet connection fails, you will see a panel like the one in Figure 4-67.

Figure 4-67 Example of failed telnet connection

4.9.2 Registering the SAN Volume Controller host in SLP

The next step to detecting a SAN Volume Controller is to manually register the SAN Volume Controller console to the SLP DA.

Tip: If your SAN Volume Controller console resides in the same subnet as the TotalStorage Productivity Center for Disk server, SLP registration will be automatic, so you do not need to perform the following step.

To register the SAN Volume Controller Console, issue the following command on the SLP DA server:

slptool register service:wbem:https://ipaddress:5989

Where ipaddress is the SAN Volume Controller console IP address. Run a verifyconfig command to confirm that SLP is aware of the SAN Volume Controller console registration.

4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary

TotalStorage Productivity Center for Disk discovers both IBM storage devices that comply with the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location Protocol (SLP). The TotalStorage Productivity Center for Disk server software performs SLP discovery on the network. The User Agent looks for all registered services with a service type of service:wbem. TotalStorage Productivity Center for Disk performs the following discovery tasks:
  • 189. Locates individual storage devices Retrieves vital characteristics for those storage devices Populates the TotalStorage Productivity Center for Disk internal databases with the discovered information TotalStorage Productivity Center for Disk can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services have been discovered through SLP, TotalStorage Productivity Center for Disk contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM. TotalStorage Productivity Center for Disk gathers the vital characteristics of each of these devices. For TotalStorage Productivity Center for Disk to successfully communicate with the CIMOMs, the following conditions must be met: A common user name and password must be configured for all the CIM Agent instances that are associated with storage devices that are discoverable by TotalStorage Productivity Center for Disk (use adduser as described in 4.6.4, “CIMOM User Authentication” on page 143). That same user name and password must also be configured for TotalStorage Productivity Center for Disk using the Configure MDM task in the TotalStorage Productivity Center for Disk interface. If a CIMOM is not configured with the matching user name and password, it will be impossible to determine which devices the CIMOM supports. As a result, no devices for that CIMOM will appear in the IBM Director Group Content pane. The CIMOM service must be accessible through the IP network. The TCP/IP network configuration on the host where TotalStorage Productivity Center for Disk is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by TotalStorage Productivity Center for Disk. It is important to verify that the CIMOM is up and running. 
To do that, use the following command from the TotalStorage Productivity Center for Disk server: telnet CIMip port Where CIMip is the IP address where the CIM Agent runs and port is the port used for the communication (5989 for a secure connection, 5988 for a non-secure connection). 4.10.1 SLP registration and slptool TotalStorage Productivity Center for Disk uses Service Location Protocol (SLP) discovery, which requires that all of the CIMOMs that TotalStorage Productivity Center for Disk discovers are registered using the Service Location Protocol (SLP). SLP can only discover CIMOMs that are registered in its IP subnet. For CIMOMs outside of the IP subnet, you need to use an SLP DA and register the CIMOM using slptool. Ensure that the CIM_InteropSchemaNamespace and Namespace attributes are specified. For example, type the following command: slptool register service:wbem:https://myhost.com:port Where myhost.com is the name of the server hosting the CIMOM, and port is the port number of the service, such as 5989.
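The slptool invocation follows a fixed pattern, so the service URL and command line can be built consistently. A small helper sketch (these functions are illustrative, not part of any IBM or OpenSLP tooling):

```python
def wbem_service_url(host, port=5989, secure=True):
    """Build the service:wbem URL that slptool expects for a CIMOM
    (https for the secure port 5989, http for the non-secure port 5988)."""
    scheme = "https" if secure else "http"
    return "service:wbem:{0}://{1}:{2}".format(scheme, host, port)

def slptool_register_cmd(host, port=5989, secure=True):
    """Return the slptool command line used to register the CIMOM
    with the SLP DA, as shown in the text."""
    return "slptool register " + wbem_service_url(host, port, secure)

# e.g. slptool_register_cmd("myhost.com")
```

Running the resulting command on the SLP DA server performs the registration described above.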
  • 190. 4.10.2 Persistency of SLP registration Although it is acceptable to register services manually into SLP, it is possible for SLP users to statically register existing services (applications that were not compiled to use the SLP library) using a configuration file that SLP reads at startup, called slp.reg. All of the registrations are maintained by slpd and will remain registered as long as slpd is alive. The Service Location Protocol (SLP) registration is lost if the server where SLP resides is rebooted or when the Service Location Protocol (SLP) service is stopped. A manual Service Location Protocol (SLP) registration is needed for all the CIMOMs outside the subnet where the SLP DA resides. Important: To avoid having to manually register the CIMOMs outside the subnet every time the Service Location Protocol (SLP) is restarted, create a file named slp.reg. The default location for the registration file is the “C:\winnt” directory on Windows machines, or the “/etc” directory on UNIX machines. slpd reads the slp.reg file on startup and re-reads it whenever the SIGHUP signal is received. 4.10.3 Configuring the slp.reg file Here is an example of the slp.reg file: ############################################################################# # # OpenSLP static registration file # # Format and contents conform to specification in IETF RFC 2614, see also # http://www.openslp.org/doc/html/UsersGuide/SlpReg.html # ############################################################################# #---------------------------------------------------------------------------- # Register Service - SVC CIMOMS #---------------------------------------------------------------------------- service:wbem:https://9.43.226.237:5989,en,65535 # use default scopes: scopes=test1,test2 description=SVC CIMOM Open Systems Lab, Cottle Road authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini creation_date=04/02/20 service:wbem:https://9.11.209.188:5989,en,65535 # use default scopes: scopes=test1,test2 description=SVC
CIMOM Tucson L2 Lab authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini creation_date=04/02/20 #service:wbem:https://9.42.164.175:5989,en,65535 # use default scopes: scopes=test1,test2 #description=SVC CIMOM Raleigh SAN Central #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20 #---------------------------------------------------------------------------- Chapter 4. CIMOM installation and configuration 175
  • 191. # Register Service - SANFS CIMOMS #---------------------------------------------------------------------------- #service:wbem:https://9.82.24.66:5989,en,65535 #Additional parameters for setting the appropriate namespace values #CIM_InteropSchemaNamespace=root/cimv2 #Namespace=root/cimv2 # use default scopes: scopes=test1,test2 #description=SANFS CIMOM Gaithersburg ATS Lab #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20 #service:wbem:https://9.11.209.148:5989,en,65535 #Additional parameters for setting the appropriate namespace values #CIM_InteropSchemaNamespace=root/cimv2 #Namespace=root/cimv2 # use default scopes: scopes=test1,test2 #description=SANFS CIMOM Tucson L2 Lab #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20 #---------------------------------------------------------------------------- # Register Service - FAStT CIMOM #---------------------------------------------------------------------------- #service:wbem:https://9.1.39.65:5989,en,65535 #CIM_InteropSchemaNamespace=root/lsissi #ProtocolVersion=0 #Namespace=root/lsissi # use default scopes: scopes=test1,test2 #description=FAStT700 CIMOM ITSO Lab, Almaden #authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini #creation_date=04/02/20176 Managing Disk Subsystems using IBM TotalStorage Productivity Center
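When many CIMOMs must be kept registered, the slp.reg stanzas shown above can be generated rather than typed. A minimal sketch that renders one stanza in that format (the attribute names description, authors, and creation_date come from the example; the helper itself is hypothetical):

```python
def slp_reg_entry(url, description, lang="en", lifetime=65535, extra=None):
    """Render one static-registration stanza in slp.reg format:
    a service-url,language,lifetime line followed by attribute lines."""
    lines = ["{0},{1},{2}".format(url, lang, lifetime)]
    lines.append("description=" + description)
    for key, value in (extra or {}).items():
        lines.append("{0}={1}".format(key, value))
    return "\n".join(lines) + "\n"

entry = slp_reg_entry(
    "service:wbem:https://9.43.226.237:5989",
    "SVC CIMOM Open Systems Lab, Cottle Road",
    extra={"creation_date": "04/02/20"},
)
```

Concatenating such stanzas (separated by blank lines) into slp.reg and sending slpd a SIGHUP, or restarting it, makes the registrations persistent as described in 4.10.2.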
  • 192. 5 Chapter 5. TotalStorage Productivity Center common base use This chapter provides information about the functions of the Productivity Center common base. The components of Productivity Center common base include: Configuring MDM Launch and log on to TotalStorage Productivity Center Launching device managers Performing device discovery Performing device inventory collection Working with ESS Working with SAN Volume Controller Working with IBM DS4000 family (formerly FAStT) Event management
  • 193. 5.1 Productivity Center common base: Introduction Before using Productivity Center common base features, you need to perform some configuration steps. This will permit you to detect storage devices to be managed. Version 2.1 of Productivity Center common base permits you to discover and manage: ESS 2105-F20, 2105-800, 2105-750 SAN Volume Controller (SVC) DS4000 family (formerly FAStT product range) Provided you have discovered a supported IBM storage device, Productivity Center common base storage management functions will be available for drag-and-drop operations. Alternatively, right-click the discovered device to display a drop-down with all available functions specific to it. We will review the available operations that can be performed in the sections that follow. Note: Not all functions of TotalStorage Productivity Center are applicable to all device types. For example, it cannot display the virtual disks on a DS4000 because the virtual disk concept is only applicable to the SAN Volume Controller. The sections that follow cover the functions available for each of the supported device types. 5.2 Launching TotalStorage Productivity Center Productivity Center common base, along with TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, is accessed via the TotalStorage Productivity Center Launchpad (Figure 5-1) icon on your desktop. Select Manage Disk Performance and Replication to start the IBM Director console interface. Figure 5-1 TotalStorage Productivity Center launchpad Alternatively, access IBM Director from Windows Start → Programs → IBM Director → IBM Director Console. Log on to IBM Director using the superuser ID and password defined at installation. Please note that passwords are case sensitive. Login values are: IBM Director Server: Hostname of the machine where IBM Director is installed User ID: The username to log on with. This is the superuser ID. 
Enter it in the form <hostname>\<username>
  • 194. Password: The case sensitive superuser ID password Figure 5-2 shows the IBM Director Login panel you will see after launching IBM Director. Figure 5-2 IBM Director Log on5.3 Exploiting Productivity Center common base The Productivity Center common base module adds the Multiple Device Manager submenu task on the right-hand Tasks pane of the IBM Director Console as shown in Figure 5-3 on page 180. Note: The Multiple Device Manager product has been rebranded to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You will still see the name Multiple Device Manager in some panels and messages. Productivity Center common base will install the following sub-components into the Multiple Device Manager menu: Configure MDM Launch Device Manager Launch Tivoli SAN Manager (now called TotalStorage Productivity Center for Fabric) Manage CIMOMs Manage Storage Units (menu) – Inventory Status – Managed Disks – Virtual Disks – Volumes Chapter 5. TotalStorage Productivity Center common base use 179
  • 195. Note: The Manage Performance and Manage Replication tasks that you see in Figure 5-3 on page 180 become visible when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication is installed. Although this chapter covers Productivity Center common base, you will have installed either TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both. Figure 5-3 IBM Director Console with Productivity Center common base 5.3.1 Configure MDM Multiple Device Manager (MDM) is now known as Productivity Center common base; however, this version of the code still shows the previous MDM name. It should not be necessary to alter any values here unless passwords need to change. This menu option (Figure 5-4 on page 181) allows you to perform the following actions: Provide a Productivity Center common base superuser account name and password. The username field will be populated with the value provided at installation. There is no reason to change this value. Provide information about the DB2 host. Again, this value will be populated with the information available when the installation was performed and there should be no reason to modify the value in this field. Provide location and password information for TotalStorage Productivity Center for Fabric. This version of the software carries the previous name for Fabric, Tivoli SAN Manager (TSANM).
  • 196. See Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for more details on using and configuring TotalStorage Productivity Center for Fabric. Figure 5-4 Configure MDM 5.3.2 Launch Device Manager The Launch Device Manager task may be dragged onto an available storage device. For ESS this will open the ESS Specialist window for the chosen device. For SAN Volume Controller it will launch a browser session to that device. For DS4000 or FAStT devices this function is not available. 5.3.3 Discovering new storage devices Assuming that you have followed the steps outlined in Chapter 4, “CIMOM installation and configuration” on page 119, the following conditions should be met in order to discover devices defined to your Productivity Center common base host: All CIM agents are running and are registered with the SLP server. The SLP agent host is defined in the IBM Director options (Figure 5-5 on page 182) if it resides in a different subnet from that of the TotalStorage Productivity Center server (Options → Discovery Preferences → MDM SLP Configuration tab). Note: If the Productivity Center common base host server resides in the same subnet as the CIMOM, then it is not a requirement that the SLP DA host IP address be specified in the Discovery Preferences (Figure 5-5). Refer to Chapter 2, “Key concepts” on page 25 for details. 1. Discovery will happen automatically based on preferences that are defined in the Options → Discovery Preferences → MDM SLP Configuration tab. The default values for Auto discovery interval and Presence check interval are set to 0 (see Figure 5-5 on page 182). These values should be set to more suitable values, for example, 1 hour for Auto discovery interval and 15 minutes for Presence check interval. The values you
  • 197. specify will have a performance impact on the CIMOMs and Productivity Center common base servers, so do not set these values too low. Figure 5-5 Discovery Preferences MDM SLP Configuration 2. Turn off automatic inventory on discovery Important: Because of the time and CIMOM resources needed to perform inventory on storage devices, it is undesirable and unnecessary to perform this each time Productivity Center common base performs a device discovery. Turn off automatic inventory by selecting Options → Server Preferences as shown in Figure 5-6 on page 183.
  • 198. Figure 5-6 Selecting Server Preferences Now uncheck the Collect On Discovery tick box as shown in Figure 5-7; all other options can remain unchanged. Select OK when done. Figure 5-7 Server Preferences 3. You can click the Discover All Systems icon in the top left corner of the IBM Director Console to initiate an immediate discovery task (see Figure 5-8 on page 184).
  • 199. Figure 5-8 Discover All Systems icon 4. You can also use the IBM Director Scheduler to create a scheduled job for new device discovery. – Either click the scheduler icon in the IBM Director tool bar or use the menu, Tasks → Scheduler (see Figure 5-9 on page 185).184 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 200. Figure 5-9 Tasks Scheduler option for Discovery – In the Scheduler, click File → New Job (see Figure 5-10). Figure 5-10 Task Scheduler Discovery job – Establish parameters for the new job under the Date/Time tab. Include the date and time to perform the job, and whether the job is to be repeated (see Figure 5-11 on page 186).
  • 201. Figure 5-11 Discover job parameters – From the Task tab (see Figure 5-12), select Discover MDM storage devices/SAN Elements, then click Select. Figure 5-12 Discover job selection task – Click File → Save as, or use the Save as icon. – Provide a descriptive job name in the Save Job panel (see Figure 5-13 on page 187) and click OK.186 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 202. Figure 5-13 Discover task job name 5.3.4 Manage CIMOMs The Manage CIMOMs menu option as seen in Figure 5-6 on page 183 lets you view the CIMOMs that have been discovered by Productivity Center common base. It should not normally be necessary to alter any information in the panel. The connection status of each CIMOM is displayed. A success state means that Productivity Center common base is able to connect to the CIMOM using the Namespace, User name, and Password defined to it. It does not mean that the CIMOM can access a storage device. Figure 5-14 Discovered CIMOMs list To view or change the details of a CIMOM or perform a connection test, select the CIMOM as seen in Figure 5-14 and then click the Properties button on the right of the panel. Figure 5-15 on page 188 shows the properties for a DS4000 or FAStT CIMOM.
  • 203. Figure 5-15 CIMOM details for a DS4000 or FAStT CIMOM Important: Namespace must be set to root/lsissi for DS4000 and FAStT CIMOMs. It should be discovered automatically but if your connection fails, please verify. Also DS4000 and FAStT CIMOMs do not need a User name or Password set. Entering them has no effect on the success of a Test Connection. Figure 5-16 CIMOM details for a SAN Volume Controller Figure 5-16 shows the CIMOM properties for a SAN Volume Controller. Important: Namespace must be set to root/ibm for SAN Volume Controller CIMOMs. It should be discovered automatically but if you experience connection failures, please verify it has been set correctly. For more detailed information about configuring CIMOMs, refer to Chapter 4, “CIMOM installation and configuration” on page 119.
  • 204. Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not automatically updated, and entries with a Failure status will be seen as in Figure 5-14. These invalid entries can slow down discovery performance as TotalStorage Productivity Center tries to contact them each time it performs a discovery. You cannot delete CIMOM entries directly from the Productivity Center common base interface. Delete them using the DB2 Control Center tool as described in 5.3.5, “Manually removing old CIMOM entries” on page 189. 5.3.5 Manually removing old CIMOM entries It may be necessary from time to time to remove CIMOM entries from Productivity Center common base. This can happen if you move a CIMOM to another server in your environment, change the CIMOM’s IP address, and so on. Productivity Center common base does not allow direct removal of a CIMOM entry using the Director interface. To delete a CIMOM, remove the data rows manually from DB2 using the process that follows. Process overview: Delete any non-existent storage devices from TotalStorage Productivity Center that are associated with the CIMOM entry to be removed. Launch DB2 Control Center. Navigate to the DMCOSERV database. Locate the DMCIMOM table. Delete the data rows relating to the old CIMOM(s). Commit changes to the DMCIMOM table. Locate the BASEENTITY table. Filter rows where DISCRIM_BASEENTITY = DMCIMOM. Delete the data rows relating to the old CIMOM(s). Commit changes to the BASEENTITY table. Locate the DMREFERENCE table. Delete the data rows relating to the old CIMOM(s). Commit changes to the DMREFERENCE table. The following figures illustrate the process. Before deleting non-existent CIMOM entries through the DB2 tables, first delete any storage devices that are associated with them in TotalStorage Productivity Center. Right-click the selected device and choose Delete as shown in Figure 5-17 on page 190.
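The delete-then-commit sequence in the process overview can be sketched against stand-in tables. The following simulation uses SQLite in place of DB2; the table names DMCIMOM, BASEENTITY, and DMREFERENCE and the DISCRIM_BASEENTITY filter come from the procedure above, while the schemas and the IPADDRESS column are invented purely for illustration (the real DMCOSERV tables have different columns):

```python
import sqlite3

# Stand-in schemas; illustrative only, not the real DMCOSERV layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DMCIMOM (ID INTEGER, IPADDRESS TEXT);
CREATE TABLE BASEENTITY (ID INTEGER, DISCRIM_BASEENTITY TEXT, IPADDRESS TEXT);
CREATE TABLE DMREFERENCE (ID INTEGER, IPADDRESS TEXT);
INSERT INTO DMCIMOM VALUES (1, '9.1.1.1'), (2, '9.2.2.2');
INSERT INTO BASEENTITY VALUES (1, 'DMCIMOM', '9.1.1.1'), (2, 'DMCIMOM', '9.2.2.2');
INSERT INTO DMREFERENCE VALUES (1, '9.1.1.1');
""")

stale_ip = "9.1.1.1"  # the CIMOM that no longer exists
# Same order as the manual procedure: DMCIMOM first, then the BASEENTITY
# rows where DISCRIM_BASEENTITY = 'DMCIMOM', then DMREFERENCE.
for sql in (
    "DELETE FROM DMCIMOM WHERE IPADDRESS = ?",
    "DELETE FROM BASEENTITY WHERE DISCRIM_BASEENTITY = 'DMCIMOM' AND IPADDRESS = ?",
    "DELETE FROM DMREFERENCE WHERE IPADDRESS = ?",
):
    conn.execute(sql, (stale_ip,))
conn.commit()  # equivalent to pressing Commit for each table in Control Center
```

As in the Control Center procedure, nothing takes effect until the commit; rolling back the connection before the commit would undo all three deletes.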
  • 205. Figure 5-17 Delete invalid device from TotalStorage Productivity Center Launch DB2 Control Center (Figure 5-18 on page 191). This is a general administration tool for managing DB2 databases and tables. Attention: DB2 Control Center is a database administration tool. It gives you direct and complete access to the data stored in all the TotalStorage Productivity Center databases. Altering data through this tool can cause damage to the TotalStorage Productivity Center environment. Be careful not to alter data unnecessarily using this tool.
  • 206. Figure 5-18 Launching DB2 Control Center Launch the DB2 Control Center as seen in Figure 5-18. Figure 5-19 Navigate to the DMCOSERV database Navigate down the structure in the left-hand panel to open up the DMCOSERV database, then click the Tables option. A list of tables for this database will appear in the upper right-hand panel as seen in Figure 5-19. Locate the DMCIMOM table as shown and double-click to open a new window (Figure 5-20 on page 192) showing the data rows.
  • 207. Figure 5-20 Deleting rows from the DMCIMOM table in DB2 Identify the CIMOM rows to be deleted by their IP address as shown in Figure 5-20. Click once on the row to be deleted to select it. Click the Delete Row button to remove it from the table. When you have made your changes, you must click the Commit button for the table changes to become effective. Now click Close to finish with this table. If you make any mistakes before you have pressed the Commit button, you can click the Roll Back button to undo the changes. Now locate the BASEENTITY table from the Control Center panel as seen in Figure 5-19 on page 191. Open it with a double-click. This table contains many rows of data. Filter the data to show only entries that relate to CIMOMs. Click the Filter button to open the filter panel as seen in Figure 5-22 on page 193. Figure 5-21 BASEENTITY table
  • 208. Figure 5-22 Filtering the BASEENTITY table Enter DMCIMOM in the values field as shown in Figure 5-22 and click OK. The table data is now filtered to show only CIMOM entries as seen in Figure 5-23. Figure 5-23 BASEENTITY table filtered to CIMOMs Use a single click to select the entries by IP address that relate to the non-existent CIMOMs. Click Delete Row to remove them. Click Commit to make the changes effective, then Close. You can use Roll Back to undo any mistakes before a Commit.
  • 209. Figure 5-24 DMREFERENCE table Now locate the DMREFERENCE table from the Control Center panel as seen in Figure 5-19 on page 191. Open it with a double-click. Note: The DMREFERENCE table may contain more than one entry for each of the non-existent CIMOM(s). It may not contain any rows at all for the CIMOM(s). Delete all relevant rows for the non-existent CIMOM(s) if they exist. If there are no rows in this table for the CIMOM(s) you are deleting, they are not linked to any devices, and this is OK. 5.4 Performing volume inventory This function is used to collect the detailed volume information from a discovered device and place it into the Productivity Center common base databases. You need to do this at least once before Productivity Center common base can start to work with a device. When the Productivity Center common base functions are subsequently used to create or remove LUNs, the volume inventory is automatically kept up to date, so it is not necessary to repeatedly run inventory collection from the storage devices. Version 2.1 of Productivity Center common base does not currently contain the full feature set of all functions for the supported storage devices. This will make it necessary to use the storage device’s own management tools for some tasks. For instance, you can create new Vdisks with Productivity Center common base on a SAN Volume Controller, but you cannot delete them. You will need to use the SAN Volume Controller’s own management tools to do this. For these types of changes to be reflected in Productivity Center common base, an inventory collection will be necessary to re-synchronize the storage device and the Productivity Center common base inventory. Attention: The use of volume inventory is common to ALL supported storage devices and must be performed before disk management functions are available.
  • 210. Figure 5-25 Launch Perform Inventory Collection To start inventory collection, right-click the chosen device and select Perform Inventory Collection as shown in Figure 5-25. A new panel will appear (Figure 5-26) as a progress indication that the inventory process is running. At this stage Productivity Center common base is talking to the relevant CIMOM to collect volume information from the storage device. After a short while the information panel will indicate that the collection has been successful. You can now close this window. Figure 5-26 Inventory collection in progress Attention: When the panel in Figure 5-26 indicates that the collection has been successfully completed, it does not necessarily mean that the volume information has been fully processed by Productivity Center common base at this point. To track the detailed processing status, launch the Inventory Status task seen in Figure 5-27.
  • 211. Figure 5-27 Launch Inventory Status To see the processing status of an inventory collection launch the Inventory Status task as seen in Figure 5-27. Figure 5-28 Inventory Status The example Inventory Status panel seen in Figure 5-28 shows the progress of the processing for a SAN Volume Controller. Use the refresh button in the bottom left of the panel to update it with the latest progress. You can also launch the Inventory Status panel before starting an inventory collection to watch the process end to end. In our test lab the inventory process time for an SVC took around 2 minutes end to end.196 Managing Disk Subsystems using IBM TotalStorage Productivity Center
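The refresh-until-complete pattern described for the Inventory Status panel can be expressed as a generic polling helper. A minimal sketch; check_status stands for any hypothetical callable that returns the current processing status string (the real status is only visible in the Inventory Status panel):

```python
import time

def wait_for_inventory(check_status, poll_interval=5.0, timeout=300.0):
    """Poll check_status() until it reports 'Complete' or the timeout
    expires, mirroring repeated presses of the panel's Refresh button.
    Returns the last status seen."""
    deadline = time.monotonic() + timeout
    status = check_status()
    while status != "Complete" and time.monotonic() < deadline:
        time.sleep(poll_interval)
        status = check_status()
    return status
```

With the roughly two-minute end-to-end time observed for an SVC in the test lab, a 5-second interval and 5-minute timeout are reasonable starting values.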
  • 212. 5.5 Working with ESS This section covers the Productivity Center common base functions that are available when managing ESS devices. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 5-29. Tasks access: You will see in the right-hand task panel that there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are applicable to all supported devices. Right-click access: To access all functions available for a specific device, simply right-click it to see a drop-down menu of options for that device. Figure 5-29 shows the drop-down menu for an ESS. Figure 5-29 also shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter only covers the Productivity Center common base functions, you would always have TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both installed. Figure 5-29 Accessing Productivity Center common base functions
  • 213. 5.5.1 Changing the display name of an ESS You can change the display name of a discovered ESS device to something more meaningful to your organization. Right-click the chosen ESS (Figure 5-30) and select the Rename option. Figure 5-30 Changing the display name of an ESS Enter a more meaningful device name as in Figure 5-31 and click OK. Figure 5-31 Entering a user defined subsystem name 5.5.2 ESS Volume inventory To view the status of the volumes available within a given ESS device, perform one of the following: Right-click the ESS device and select Volumes as in Figure 5-32 on page 199. On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query. Tip: Before volumes can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to view volumes for an ESS that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection, see 5.4, “Performing volume inventory” on page 194.
  • 214. Figure 5-32 Working with ESS volumes In either case, in the bottom left corner, the status will change from Ready to Starting Task and it will remain this way until the volume inventory appears. Figure 5-33 shows the Volumes panel that will appear for the selected ESS device. Figure 5-33 ESS volume inventory panel
  • 215. 5.5.3 Assigning and unassigning ESS volumes From the ESS volume inventory panel (Figure 5-33 on page 199) you can modify existing volume assignments by either assigning a volume to new host port(s) or by unassigning a host from an existing volume-to-host-port(s) mapping. To assign a volume to a host port, you can click the Assign host button on the right side of the volume inventory panel (Figure 5-33 on page 199). You will be presented with a panel like the one in Figure 5-34. Select from the list of available host port worldwide port names (WWPNs): either a single host port WWPN, or more than one by holding down the control <Ctrl> key and selecting multiple host ports. When the desired host ports have been selected for volume assignment, click OK. Figure 5-34 Assigning ESS LUNs When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as in Figure 5-35 on page 201. When the volume has been successfully assigned to the selected host port, the Assign host ports panel will disappear and the ESS Volumes panel will be displayed once again, now reflecting the additional host port mapping in the Number of host ports column at the far right side of the panel. Note: If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is installed, refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.
  • 216. Figure 5-35 Tivoli SAN Manager warning 5.5.4 Creating new ESS volumes To create new ESS volumes, select the Create button from the Volumes panel as seen in Figure 5-33 on page 199. The Create volume panel will appear (Figure 5-36). Figure 5-36 ESS create volume Use the drop-down fields to select the Storage type and choose from the Available arrays on the ESS. Then enter the number of volumes you want to create in the Volume quantity field, along with the Requested size. Finally, select the host ports you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the control key <Ctrl> while clicking on hosts. On clicking OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning the new volumes to the host(s). If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is not installed, you will see a message panel as seen in Figure 5-37 on page 202. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for complete details of its operation.
  • 217. Figure 5-37 Tivoli SAN Manager warning 5.5.5 Launch device manager for an ESS device This option allows you to link directly to the ESS Specialist of the chosen device: Right-click the ESS storage resource, and select Launch Device Manager. On the right-hand side under the Tasks column, drag Managed Storage Units → Launch Device Manager onto the storage device you want to query. Figure 5-38 ESS Specialist launched by Productivity Center common base
  • 218. 5.6 Working with SAN Volume Controller This section covers the Productivity Center common base functions that are available when managing SAN Volume Controller subsystems. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 5-39 on page 204. Tasks access: You will see in the right-hand task panel that there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices. Right-click access: To access all functions available for a specific device, right-click it to see a drop-down menu of options for that device. Figure 5-39 on page 204 shows the drop-down menu for a SAN Volume Controller. Note: Overall, the SAN Volume Controller functionality offered in Productivity Center common base is fairly limited in Version 2.1 compared to that of the native SAN Volume Controller Web-based GUI. There is the ability to add existing unmanaged LUNs to existing Mdisk groups, but there are no tools to remove Mdisks from a group or to create or delete Mdisk groups. The functions available for Vdisks are similarly limited. Productivity Center common base can create new Vdisks in a given Mdisk group, but there is little other control over the placement of these volumes. It is not possible to remove Vdisks or reassign them to other hosts using Productivity Center common base. Chapter 5. TotalStorage Productivity Center common base use 203
  • 219. 5.6.1 Changing the display name of a SAN Volume Controller You can change the display name of a discovered SAN Volume Controller to something more meaningful in your organization. Right-click the chosen device (Figure 5-39) and select the Rename option. Figure 5-39 Changing the display name of an SVC Figure 5-40 Enter a user defined SAN Volume Controller name Enter a meaningful name for the device and click OK as in Figure 5-40.5.6.2 Working with SAN Volume Controller mdisks To view the properties of SAN Volume Controller managed disks (Mdisk) as shown in Figure 5-41 on page 205 perform one of the following: Right-click the SVC storage resource, and select Managed Disks. On the right-hand side under the Tasks column, drag Managed Storage Units → Managed Disks onto the storage device you want to query.204 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 220. Tip: Before SAN Volume Controller managed disk (Mdisk) properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Managed Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. Refer to 5.4, “Performing volume inventory” on page 194 for details. Figure 5-41 The mdisk properties panel for SAN Volume Controller Figure 5-41 shows candidate (unmanaged) Mdisks that are available for inclusion in an existing mdisk group. To add one or more unmanaged disks to an existing mdisk group: Select the Mdisk group from the pull-down. Select one mdisk from the list of candidate mdisks, or use the <Ctrl> key to select multiple disks. Click the OK button at the bottom of the window and the selected Mdisk(s) will be added to the Mdisk group. Chapter 5. TotalStorage Productivity Center common base use 205
  • 221. 5.6.3 Creating new Mdisks on supported storage devices Attention: The Create button as seen in Figure 5-41 is not for creating new Mdisk groups. It is for creating new Mdisks on storage devices serving the SAN Volume Controller. It is not possible to create new Mdisk groups using Version 2.1 of Productivity Center common base. Select the Mdisk group from the pull-down (Figure 5-41 on page 205). Select the Create button. A new panel opens to create the storage volume (Figure 5-42). Select a device accessible to the SVC (devices not marked by an asterisk). Devices marked with an asterisk are not acting as storage to the selected SAN Volume Controller. Figure 5-42 shows an ESS with an asterisk next to it. This is because of the setup of the test environment. Make sure the device you select does not have an asterisk next to it. Specify the number of Mdisks in the Volume quantity field and their size in the Requested volume size field. Select the Defined SVC ports that should be assigned to these new Mdisks. Note: If TotalStorage Productivity Center for Fabric is installed and configured, extra panels will appear to create appropriate zoning for this operation. See Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for details. Click OK to start a process that will create new volumes on the selected storage device and then add them to the SAN Volume Controller's Mdisk group. Figure 5-42 Create volumes to be added as Mdisks Productivity Center common base will now request the specified storage amount from the specified backend storage device. 206 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 222. 5.6.4 Create and view SAN Volume Controller Vdisks To create or view the properties of SAN Volume Controller virtual disks (Vdisks) as shown in Figure 5-43, perform one of the following: Right-click the SVC storage resource, and select Virtual Disks. On the right-hand side under the Tasks column, drag Managed Storage Units → Virtual Disks onto the storage device you want to query. In Version 2.1 of Productivity Center common base it is not possible to delete Vdisks. It is also not possible to assign or reassign Vdisks to a host after the creation process. Keep this in mind when working with storage using Productivity Center common base on a SAN Volume Controller. These tasks can still be performed using the native SAN Volume Controller Web-based GUI. Tip: Before SAN Volume Controller virtual disk (Vdisk) properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. If you try to use the Virtual Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. To perform an inventory collection see section 5.4, “Performing volume inventory” on page 194. Figure 5-43 Launch Virtual Disks Viewing vdisks Figure 5-44 on page 208 shows the Vdisk inventory and volume attributes for the selected SAN Volume Controller. Chapter 5. TotalStorage Productivity Center common base use 207
  • 223. Figure 5-44 The vdisk properties panel Creating a vdisk To create a new Vdisk use the Create button as shown in Figure 5-44. You need to provide a suitable Vdisk name and select the Mdisk group from which you want to create the Vdisk. Specify the number of Vdisks to be created and the size in megabytes or gigabytes that each Vdisk should be. Figure 5-45 on page 209 shows example input in these fields. 208 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 224. Figure 5-45 SAN Volume Controller vdisk creation The Host ports section of the Vdisk properties panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide Vdisk access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-46. If TotalStorage Productivity Center for Fabric is installed refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for details on how to configure and use it. Figure 5-46 Tivoli SAN Manager warning 5.7 Working with DS4000 family or FAStT storage This section covers the Productivity Center common base functions that are available when managing DS4000 and FAStT type subsystems. There are two ways to access Productivity Center functions for a given device, and these can be seen in Figure 5-47 on page 210. Chapter 5. TotalStorage Productivity Center common base use 209
  • 225. Tasks access: You will see in the right-hand task panel that there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices. Right-click access: To access all functions available for the selected device, right-click it to see a drop-down menu of options for it; Figure 5-47. Figure 5-47 shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. Although this chapter covers only the Productivity Center common base functions, you would typically also have TotalStorage Productivity Center for Disk and/or TotalStorage Productivity Center for Replication installed. 5.7.1 Changing the display name of a DS4000 or FAStT You can change the display name of a discovered DS4000 or FAStT subsystem to something more meaningful to your organization. Right-click the selected DS4000 or FAStT and click the Rename option; Figure 5-47. Figure 5-47 Changing the display name of a DS4000 or FAStT Figure 5-48 Entering a user defined display name for DS4000 or FAStT 210 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 226. Enter a meaningful name for the device and click OK as in Figure 5-48 on page 210. 5.7.2 Working with DS4000 or FAStT volumes To view the status of the volumes available within a selected DS4000 or FAStT device, perform one of the following: Right-click the DS4000 or FAStT storage resource, and select Volumes. On the right-hand side under the Tasks column, drag Managed Storage Units → Volumes onto the storage device you want to query. In either case, in the bottom left corner, the status will change from Ready to Starting Task and it will remain this way until the volume inventory is completed (see Figure 5-50 on page 212). Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. Refer to 5.4, “Performing volume inventory” on page 194 for details. Figure 5-49 Working with DS4000 and FAStT volumes Chapter 5. TotalStorage Productivity Center common base use 211
  • 227. Figure 5-50 DS4000 and FAStT volumes panel Figure 5-50 shows the volume inventory for the selected device. From this panel you can Create and Delete volumes or assign and unassign volumes to hosts. 5.7.3 Creating DS4000 or FAStT volumes To create new storage volumes on a DS4000 or FAStT select the Create button from the right side of the Volumes panel (Figure 5-50). You will be presented with the Create volume panel as in Figure 5-51 below. Figure 5-51 DS4000 or FAStT create volumes Select the desired Storage Type and array from Available arrays using the drop-downs. Then enter the Volume quantity and Requested volume size of the new volumes. Finally, select the host ports you want to assign to the new volumes from the Defined host ports scroll box, holding the <Ctrl> key to select multiple ports. The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-52 on page 213. If TotalStorage Productivity Center for Fabric is installed refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for details on how to configure and use it. 212 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 228. Figure 5-52 Tivoli SAN Manager warning If TotalStorage Productivity Center for Fabric is not installed click OK to continue. 5.7.4 Assigning hosts to DS4000 and FAStT volumes Use this feature to assign hosts to an existing DS4000 or FAStT volume. To assign a DS4000 or FAStT volume to a host port, first select a volume by clicking on it in the volumes panel (Figure 5-50 on page 212). Now click the Assign host button on the right side of the Volumes panel. You will be presented with a panel as in Figure 5-53. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN, or more than one by holding down the control <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected for host assignment, click OK. Figure 5-53 Assign host ports to DS4000 or FAStT The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions to provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-54 on page 214. If TotalStorage Productivity Center for Fabric is installed refer to Chapter 7, “TotalStorage Productivity Center for Fabric use” on page 331 for details on how to configure and use it. Chapter 5. TotalStorage Productivity Center common base use 213
  • 229. Figure 5-54 Tivoli SAN Manager warning If TotalStorage Productivity Center for Fabric is not installed click OK to continue. 5.7.5 Unassigning hosts from DS4000 or FAStT volumes To unassign a DS4000 or FAStT volume from a host port, first select a volume by clicking on it in the volumes panel (Figure 5-50 on page 212). Now click the Unassign host button on the right side of the Volumes panel. You will be presented with a panel as in Figure 5-55. From the list of available host port worldwide port names (WWPNs), select either a single host port WWPN, or more than one by holding down the control <Ctrl> key while selecting multiple host ports. When the desired host ports have been selected, click OK. Note: If the Unassign host button is grayed out when you have selected a volume, this means that there are no current host assignments for that volume. If you believe this is incorrect, it could be that the Productivity Center common base inventory is out of step with this device's configuration. This can arise when an administrator makes changes to the device outside of the Productivity Center common base interface. To correct this problem, perform an inventory for the DS4000 or FAStT and repeat. Refer to 5.4, “Performing volume inventory” on page 194. Figure 5-55 Unassign host ports from DS4000 or FAStT TotalStorage Productivity Center for Fabric is not called to perform zoning cleanup in Version 2.1. This functionality is planned for a future release. 214 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 230. 5.8 Event Action Plan Builder The IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Understanding Event Action Plans An Event Action Plan associates one or more event filters with one or more actions. For example, an Event Action Plan can be created to send a page to the network administrator's pager if an event with a severity level of critical or fatal is received by the IBM Director Server. You can include as many event filter and action pairs as needed in a single Event Action Plan. An Event Action Plan is activated only when you apply it to a managed system or group. If an event targets a system to which the plan is applied and that event meets the filtering criteria defined in the plan, the associated actions are performed. Multiple event filters can be associated with the same action, and a single event filter can be associated with multiple actions. The action templates you can use to define actions are listed in the Actions pane of the Event Action Plan Builder window (see Figure 5-56). Figure 5-56 Action templates Creating an Event Action Plan Event Action Plans are created in the Event Action Plan Builder window. To open this window from the Director Console, click the Event Action Plan Builder icon on the toolbar. The Event Action Plan Builder window is displayed (see Figure 5-57 on page 216). Chapter 5. TotalStorage Productivity Center common base use 215
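The relationships just described (several filters per plan, several actions per filter, and a plan that is inert until applied to a system or group) can be sketched as a small model. This is purely illustrative; the class and method names below are ours and are not the IBM Director API.

```python
# Illustrative model of an Event Action Plan: event filters paired with
# actions, applied to managed systems. All names here are hypothetical,
# not part of IBM Director.
class EventActionPlan:
    def __init__(self, name):
        self.name = name
        self.pairs = []            # list of (filter_fn, actions)
        self.applied_to = set()    # managed systems or groups

    def add_pair(self, filter_fn, actions):
        # One filter can drive several actions; calling add_pair again
        # lets the same action be reused under another filter.
        self.pairs.append((filter_fn, list(actions)))

    def apply_to(self, system):
        # A plan is activated only when applied to a system or group.
        self.applied_to.add(system)

    def handle(self, system, event):
        """Return the actions triggered by an event from a system."""
        if system not in self.applied_to:
            return []              # plan not active for this system
        triggered = []
        for filter_fn, actions in self.pairs:
            if filter_fn(event):
                triggered.extend(actions)
        return triggered

# A severity filter like the pager example in the text:
is_severe = lambda ev: ev["severity"] in ("critical", "fatal")

plan = EventActionPlan("Page on severe events")
plan.add_pair(is_severe, ["send page to administrator"])
plan.apply_to("server1")

print(plan.handle("server1", {"severity": "fatal"}))    # ['send page to administrator']
print(plan.handle("server2", {"severity": "fatal"}))    # [] - plan not applied there
print(plan.handle("server1", {"severity": "warning"}))  # [] - filter not matched
```

The sketch makes the activation rule concrete: an event from a system the plan is not applied to triggers nothing, even if a filter would otherwise match.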
  • 231. Figure 5-57 Event Action Plan Builder Here are the tasks to create an Event Action Plan. 1. To begin do one of the following: – Right-click Event Actions Plan in the Event Action Plans pane to access the context menu, and then select New. – Select File → New → Event Action Plan from the menu bar. – Double-click the Event Action Plan folder in the Event Action Plans pane (see Figure 5-58). Figure 5-58 Create Event Action Plan 2. Enter the name you want to assign to the plan and click OK to save the new plan. The new plan entry with the name you assigned is displayed in the Event Action Plans pane. The plan is also added to the Event Action Plans task as a child entry in the Director Console (see Figure 5-59 on page 217). Now that you have defined an event action plan, you can assign one or more filters and actions to the plan.216 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 232. Figure 5-59 New Event Action Plan Note: You can create a plan without having defined any filters or actions. The order in which you build a filter, action, and Event Action Plan does not matter.3. Assign at least one filter to the Event Action Plan using one of the following methods: – Drag the event filter from the Event Filters pane to the Event Action Plan in the Event Action Plans pane. – Highlight the Event Action Plan, then right-click the event filter to display the context menu and select Add to Event Action Plan. – Highlight the event filter, then right-click the Event Action Plan to display the context menu and select Add Event Filter (see Figure 5-60 on page 218). Chapter 5. TotalStorage Productivity Center common base use 217
  • 233. Figure 5-60 Add events to the action plan The filter is now displayed as a child entry under the plan (see Figure 5-61). Figure 5-61 Events added to action plan 4. Assign at least one action to at least one filter in the Event Action Plan using one of the following methods: – Drag the action from the Actions pane to the target event filter under the desired Event Action Plan in the Event Action Plans pane. – Highlight the target filter, then right-click the desired action to display the context menu and select Add to Event Action Plan. – Highlight the desired action, then right-click the target filter to display the context menu and select Add Action. The action is now displayed as a child entry under the filter (see Figure 5-62 on page 219).218 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 234. Figure 5-62 Action as child of Display Events Action Plan 5. Repeat the previous two steps for as many filter and action pairings as you want to add to the plan. You can assign multiple actions to a single filter and multiple filters to a single plan. Note: The plan you have just created is not active because it has not been applied to a managed system or a group. In the next section we explain how to apply an Event Action Plan to a managed system or group. For information about editing or deleting a plan, refer to Appendix C, “Event management” on page 511.5.8.1 Applying an Event Action Plan to a managed system or group An Event Action Plan is activated only when it is applied to a managed system or group. To activate a plan: Drag the plan from the Tasks pane of the Director Console to a managed system in the Group Contents pane or to a group in the Groups pane. Drag the system or group to the plan. Select the plan, right-click the system or group, and select Add Event Action Plan (see Figure 5-63 on page 220). Chapter 5. TotalStorage Productivity Center common base use 219
  • 235. Figure 5-63 Notification of Event Action Plan added to group/system(s) Repeat this step for all associations you want to make. You can activate the same Event Action Plan for multiple systems (see Figure 5-64). Figure 5-64 Director with Event Action Plan - Display Events Once applied, the plan is activated and displayed as a child entry of the managed system or group to which it is applied when the Associations - Event Action Plans item is checked. Message Browser When an event occurs, the Message Browser (see Figure 5-65 on page 221) pops up on the server console.220 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 236. Figure 5-65 Message Browser If the message has not yet been viewed, the Status for that message will be blank. When viewed, a checked envelope icon will appear under the Status column next to the message. To see greater detail on a particular message, select the message in the left pane and click the Event Details button (see Figure 5-66). Figure 5-66 Event Details window 5.8.2 Exporting and importing Event Action Plans With the Event Action Plan Builder, you can import and export action plans to files. This enables you to move action plans quickly from one IBM Director Server to another, or to import action plans that others have provided. Export Event Action Plans can be exported to three types of files: Archive: Backs up the selected action plan to a file that can be imported into any IBM Director Server. Chapter 5. TotalStorage Productivity Center common base use 221
  • 237. HTML: Creates a detailed listing of the selected action plans, including their filters and actions, in HTML file format. XML: Creates a detailed listing of the selected action plans, including their filters and actions, in XML file format. To export an Event Action Plan, do the following: 1. Open the Event Action Plan Builder. 2. Select an Event Action Plan from those available under the Event Action Plan folder. 3. Select File → Export, then click the type of file you want to export to (see Figure 5-67). If this Event Action Plan will be imported by an IBM Director Server, then select Archive. Figure 5-67 Archiving an Event Action Plan 4. Name the archive and set a location to save it in the Select Archive File for Export window as shown in Figure 5-68 on page 223. 222 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 238. Figure 5-68 Select destination and file name Tip: When you export an action plan, regardless of the type, the file is created on a local drive on the IBM Director Server. If an IBM Director Console is used to access the IBM Director Server, then the file can be saved to either the Server or the Console by selecting Server or Local from the Destinations pull-down. It cannot be saved to a network drive. Use the File Transfer task if you want to copy the file elsewhere. Import Event Action Plans can be imported from a file. The file must be an Archive export of an action plan from another IBM Director Server. The steps to import an Event Action Plan are as follows: 1. Transfer the archive file to be imported to a drive on the IBM Director Server. 2. Open the Event Action Plan Builder from the main Console window. 3. Click File → Import → Archive (see Figure 5-69 on page 224). Chapter 5. TotalStorage Productivity Center common base use 223
  • 239. Figure 5-69 Importing an Event Action Plan 4. From the Select File for Import window (see Figure 5-70), select the archive file and location. The file must be located on the IBM Director Server. If using the Console, you must transfer the file to the IBM Director Server before it can be imported. Figure 5-70 Select file for import 5. Click OK to begin the import process. The Import Action Plan window opens, displaying the action plan to import (see Figure 5-71 on page 225). If the action plan had been assigned previously to systems or groups, you will be given the option to preserve associations during the import. Select Import to complete the import process.224 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 240. Figure 5-71 Verifying import of Event Action Plan Chapter 5. TotalStorage Productivity Center common base use 225
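To give a feel for what an XML-style listing of a plan's filters and actions could look like, the following sketch serializes a plan description to XML. The element and attribute names here are invented for illustration only and do not match IBM Director's actual export schema.

```python
import xml.etree.ElementTree as ET

def plan_to_xml(name, filter_actions):
    """Serialize a plan as XML. filter_actions maps a filter name to a
    list of action names. Element names are hypothetical; IBM Director's
    real export format differs."""
    root = ET.Element("eventActionPlan", {"name": name})
    for filt, actions in filter_actions.items():
        f_el = ET.SubElement(root, "eventFilter", {"name": filt})
        for action in actions:
            # Each filter may carry several actions, mirroring the
            # filter/action pairing built in the Event Action Plan Builder.
            ET.SubElement(f_el, "action", {"name": action})
    return ET.tostring(root, encoding="unicode")

xml_text = plan_to_xml("Display Events",
                       {"Critical Events": ["Send page", "Log to file"]})
print(xml_text)
```

The XML and HTML exports are listings for documentation; only the Archive format can be re-imported by another IBM Director Server, as the procedure above notes.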
  • 242. 6 Chapter 6. TotalStorage Productivity Center for Disk use This chapter provides a step-by-step guide to configuring and using the Performance Manager functions provided by the TotalStorage Productivity Center for Disk.© Copyright IBM Corp. 2004, 2005. All rights reserved. 227
  • 243. 6.1 Performance Manager GUI The Performance Manager graphical user interface can be launched from the IBM Director Console interface. After logging on to IBM Director, you will see a window as in Figure 6-1. In the rightmost Tasks pane, you will see the Manage Performance launch menu. It is highlighted and expanded in the figure shown. Figure 6-1 IBM Director Console with Performance Manager 6.2 Exploiting Performance Manager You can use the Performance Manager component of TotalStorage Productivity Center for Disk to manage and monitor the performance of the storage devices that TotalStorage Productivity Center for Disk supports. Performance Manager provides the following functions: Collecting data from devices Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS) and IBM TotalStorage SAN Volume Controller in the first release. Configuring performance thresholds 228 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 244. You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria allows Performance Manager to notify you when a certain threshold has been crossed, thus enabling you to take action before a critical event occurs. Viewing performance data You can view performance data from the Performance Manager database using the gauge application programming interfaces (APIs). These gauges present performance data in graphical and tabular forms. Using Volume Performance Advisor (VPA) The Volume Performance Advisor is an automated tool that helps you select the best possible placement of a new LUN from a performance perspective. This function is integrated with Device Manager so that, when the VPA has recommended locations for requested LUNs, the LUNs can be allocated and assigned to the appropriate host without going back to Device Manager. Managing Workload Profile You can use Performance Manager to select a predefined workload profile or to create a new workload profile that is based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server. The installation of the Performance Manager component onto an existing TotalStorage Productivity Center for Disk server provides a new ‘Manage Performance’ task tree (Figure 6-2) on the right-hand side of the TotalStorage Productivity Center for Disk host. This task tree includes: Figure 6-2 New Performance Manager tasks 6.2.1 Performance Manager data collection To collect performance data for the Enterprise Storage Server (ESS), Performance Manager invokes the ESS Specialist server, setting a particular performance data collection frequency and duration of collection. Specialist collects the performance statistics from an ESS, establishes a connection, and sends the collected performance data to Performance Manager.
Performance Manager then processes the performance data and saves it in Performance Manager database tables. From this section you can create data collection tasks for the supported, discovered IBM storage devices. There are two ways to use the Data Collection task to begin gathering device performance data. 1. Drag and drop the data collection task option from the right-hand side of the Multiple Device Manager application, onto the Storage Device you want to create the new task for. Chapter 6. TotalStorage Productivity Center for Disk use 229
  • 245. 2. Or, right-click a storage device in the center column, and select the Performance Data Collection Panel menu option as shown in Figure 6-3. Figure 6-3 ESS tasks panel Either operation results in a new window named Create Performance Data Collection Task (Figure 6-4). In this window you will specify: A task name A brief description of the task The sample frequency in minutes The duration of data collection task (in hours) Figure 6-4 Create Performance Data Collection Task for ESS230 Managing Disk Subsystems using IBM TotalStorage Productivity Center
In our example, we are setting up a data collection task on an ESS with Device ID 2105.16603. We have created a task named Cottle_ESS with a sample frequency of 5 minutes and a duration of 1 hour. It is possible to add more ESSs to the same data collection task by clicking the Add button on the right-hand side. You can click individual devices, or select multiples by making use of the Ctrl key. See Figure 6-5 for an example of this panel. In our example, we created a task for the ESS with device ID 2105.22513. Figure 6-5 Adding multiple devices to a single task Once we have established the scope of our data collection task and have clicked the OK button, we see our new data collection task available in the right-hand task column (see Figure 6-6 on page 232). We have created the task Cottle_ESS in the example. Tip: When providing a description for a new data collection task, you may elect to provide information about the duration and frequency of the task. Chapter 6. TotalStorage Productivity Center for Disk use 231
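Given the sample frequency and duration fields on the Create Performance Data Collection Task panel, the number of samples a task will gather per device is simple arithmetic. A minimal sketch (the function name is ours, not part of the product):

```python
def expected_samples(duration_hours, frequency_minutes):
    """Samples collected per device for a task running duration_hours
    at one sample every frequency_minutes."""
    return (duration_hours * 60) // frequency_minutes

# The Cottle_ESS task above: 1 hour duration, 5-minute sample frequency.
print(expected_samples(1, 5))   # 12 samples per device
```

This kind of estimate is useful when sizing the Performance Manager database: more devices, higher frequencies, and longer durations all multiply the number of rows collected.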
  • 247. Figure 6-6 A new data collection task In order to schedule it, right-click the selected task (see Figure 6-7 on page 233).232 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 248. Figure 6-7 Scheduling new data collection task You will see another window as shown in Figure 6-8. Figure 6-8 Scheduling task You have the option to use the job scheduling facility of TotalStorage Productivity Center for Disk, or to execute the task immediately. If you elect Execute Now, you will see a panel similar to the one in Figure 6-9 on page 234, providing information about the task name and status, including the time it was initialized. Chapter 6. TotalStorage Productivity Center for Disk use 233
  • 249. Figure 6-9 Task progress panel If you would rather schedule the task to occur at a future time, or specify additional parameters for the job schedule, you will walk through the panel in Figure 6-10. You may provide a description for the scheduled job. In our example, we created a job named 24March Cottle ESS. Figure 6-10 New scheduled job panel 234 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 250. 6.2.2 Using IBM Director Scheduler function You may specify additional scheduled job parameters by using the Advanced button. You will see the panel in Figure 6-11. You can also launch this panel from IBM Director Console → Tasks → Scheduler → File → New Job. You can also set up the repeat frequency of the task. Figure 6-11 New scheduled job, advanced tab Once you are finished customizing the job options, you may save it using either the File → Save as menu, or by clicking the diskette icon in the top left corner of the advanced panel. When you save with advanced job options, you may provide a descriptive name for the job as shown in Figure 6-12 on page 236. Chapter 6. TotalStorage Productivity Center for Disk use 235
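The repeat frequency set in the Scheduler determines when the job fires again. As a generic illustration of fixed-interval repetition (not the IBM Director Scheduler API), the first few run times of a repeating job can be computed like this:

```python
from datetime import datetime, timedelta

def next_runs(start, repeat_minutes, count):
    """First `count` run times for a job repeating at a fixed interval.
    Generic illustration only; not IBM Director's scheduling logic."""
    return [start + timedelta(minutes=repeat_minutes * i) for i in range(count)]

# A hypothetical job starting 09:00 on 24 March 2005, repeating hourly.
runs = next_runs(datetime(2005, 3, 24, 9, 0), 60, 3)
print([r.strftime("%H:%M") for r in runs])  # ['09:00', '10:00', '11:00']
```

When pairing a repeat frequency with a data collection task, it makes sense to keep the repeat interval at least as long as the task's collection duration so runs do not overlap.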
  • 251. Figure 6-12 Save job panel with advanced options You should receive a confirmation that your job has been saved as shown in Figure 6-13. Figure 6-13 scheduled job is saved6.2.3 Reviewing Data collection task status You can review the task status using Task Status under the rightmost column Tasks. See Figure 6-14 on page 237.236 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 252. Figure 6-14 Task Status Upon double-clicking Task Status, the panel shown in Figure 6-15 on page 238 is launched. Chapter 6. TotalStorage Productivity Center for Disk use 237
  • 253. Figure 6-15 Task Status Panel To review the task status, click the task shown under the Task name column. For example, we selected the task FCA18P, which was aborted, as shown in Figure 6-16 on page 239. The details are then shown, with Device ID, Device status and Error Message ID in the Device status box. You can click the entry in the Device status box, and the error message is then shown in the Error message box. 238 Managing Disk Subsystems using IBM TotalStorage Productivity Center
  • 254. Figure 6-16 Task status details 6.2.4 Managing the Performance Manager database The collected performance data is stored in a backend DB2 database. This database needs to be maintained so that it holds only relevant data. You may decide the frequency for purging old data based on your organization's requirements. The performance database panel is launched by clicking Performance Database, as shown in Figure 6-17 on page 240. It shows the Performance Database Properties panel shown in Figure 6-18 on page 241.
  • 255. Figure 6-17 Launch Performance Manager database You can use the performance database panel to specify properties for a performance database purge task. The sizing function on this panel shows used space and free space in the database. You can choose to purge performance data based on the age of the data, the type of the data, and the storage devices associated with the data.
  • 256. Figure 6-18 Properties of Performance database The Performance database properties panel shows the following:
Database name: The name of the database.
Database location: The file system on which the database resides.
Total file system capacity: The total capacity available to the file system, in gigabytes.
Space currently used on file system: Shown in gigabytes and as a percentage.
Performance Manager database full: The amount of space used by Performance Manager. The percentage shown is the percentage of available space (total space minus currently used space) used by the Performance Manager database. The following values are used in the formula that derives the percentage of disk space full in the Performance Manager database:
a = total capacity of the file system
b = total allocated space for the Performance Manager database on the file system
c = the portion of the allocated space that is used by the Performance Manager database
  • 257. Any fractional percentage is rounded up to the next largest integer; for example, 5.1% is rounded up and displayed as 6%.
Space status advisor: The Space status advisor monitors the amount of space used by the Performance Manager database and advises you whether you should purge data. The advisor levels are:
Low: You do not need to purge data now.
High: You should purge data soon.
Critical: You need to purge data now.
The disk space thresholds for these categories are: low if utilization < 0.8, high if 0.8 <= utilization < 0.9, and critical otherwise; that is, the delimiters between low, high, and critical are 80% and 90% full.
Purge database options: Groups the database purge information.
Name: Type a name for the performance database purge task. A name can be from 1 to 250 characters long.
Description (optional): Type a description for the performance database purge task. A description can be from 1 to 250 characters long.
Device type: Select one or more storage device types for the performance database purge. Options are SVC, ESS, or All. (The default is All.)
Purge performance data older than: Select the maximum age for data to be retained when the purge task is run. You can specify this value in days (1-365) or years (1-10). For example, if you select the Days button and a value of 10, the purge task will purge all data older than 10 days when it runs. (If the most recent data is more than 10 days old, all performance data would be purged.) The defaults are 365 days or 10 years.
Purge data containing threshold exception information: Deselecting this option preserves performance data that contains information about threshold exceptions; this information is required to display exception gauges. The option is selected by default.
Save as task button: When you click Save as task, the information you specified is saved and the panel closes.
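The round-up rule and the space status advisor levels described above can be sketched as a small helper. This is an illustrative sketch only; the function names are assumptions, not part of the product:

```python
import math

# Hypothetical helper, not part of the product. It mirrors the rules the
# panel describes: utilization is rounded up to the next whole percent,
# and the advisor reports low (<80%), high (80-89%), or critical (>=90%).

def displayed_percent(utilization):
    """Round a fractional utilization (0.0-1.0) up to a whole percent."""
    return math.ceil(utilization * 100)

def space_status(utilization):
    """Classify utilization the way the space status advisor does."""
    if utilization < 0.8:
        return "low"       # you do not need to purge data now
    elif utilization < 0.9:
        return "high"      # you should purge data soon
    else:
        return "critical"  # you need to purge data now
```

With these rules, a database at 5.1% utilization is displayed as 6% full, and the advisor stays at low until the 80% watermark is crossed.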
The newly created task is saved to the IBM Director Task pane under Performance Manager Database. Once it is saved, the task can be scheduled using the IBM Director Scheduler function. 6.2.5 Performance Manager gauges Once data collection is complete, you may use the gauges task to retrieve information about a variety of storage device metrics. Gauges are used to drill down to the level of detail necessary to isolate performance issues on the storage device. To view information collected by the Performance Manager, a gauge must be created, or a custom script must be written to access the DB2 tables and fields directly.
  • 258. Creating a gauge Open the IBM Director Console and do one of the following: Right-click the storage device in the center pane and select Gauges (see Figure 6-19). Figure 6-19 Right-click gauge opening Clicking Gauges on the panel shown produces the Job Status window shown in Figure 6-21 on page 244. It is also possible to launch gauge creation by expanding Multiple Device Manager - Manage Performance in the rightmost column, then dragging the Gauges item onto the desired storage device and dropping it to open the gauges for that device (see Figure 6-20 on page 244).
  • 259. Figure 6-20 Drag-and-drop gauge opening This produces the Job status window (see Figure 6-21) while the Performance gauges window opens. Figure 6-21 Opening Performance gauges job status The Performance gauges window will be empty until a gauge is created. We have created three gauges (see Figure 6-22). Figure 6-22 Performance gauges Clicking the Create button on the left brings up the Job status window while the Create performance gauge window opens.
  • 260. The Create performance gauge window changes values depending on whether the cluster, array, or volume item is selected in the left pane. Clicking the cluster item in the left pane produces the window seen in Figure 6-23. Figure 6-23 Create performance gauge - Performance Under the Type pull-down, select Performance or Exception. Performance Cluster performance gauges provide details on the average cache holding time in seconds, as well as the percentage of I/O requests that were delayed due to NVS memory shortages. Two cluster performance gauges are required per ESS to view the available historical data for each cluster. Additional gauges can be created to view live performance data.
Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, allowing either an overall or a detailed view of the data.
Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server. If test were used as a gauge name, it could not be used for another gauge, even for a different storage device, because it would not be unique in the database. Example names: 28019P_C1H would represent the ESS serial number (28019), the performance gauge type (P), the cluster (C1), and historical (H), while 28019E would
  • 261. represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays would build on that nomenclature to group the gauges by ESS in the Gauges window.
Description: Use this space to enter a detailed description of the gauge; it will appear on the gauge and in the Gauges window.
Metric(s): Click the metric(s) that will be displayed by default when the gauge is opened for viewing. Metrics with the same value under the Units column in the Metrics table can be selected together using either Shift-click or Ctrl-click. The metrics in this field can be changed on a historical gauge after the gauge has been opened for viewing; in other words, a separate historical gauge for each metric or group of metrics is not necessary. However, these metrics cannot be changed for live gauges: a new gauge is required for each metric or group of metrics desired.
Component: Select a single device from the Component table. This field cannot be changed when the gauge is opened for viewing.
Data points: Selecting this radio button enables the gauge to display the most recent data obtained from performance collectors currently running against the storage device. One most-recent-data gauge is required per cluster and per metric to view live collection data. The Device pull-down displays text informing you whether a performance collection task is running against this device. You can select the number of data points to display the last x data points from the date of the last collection, which may be the currently running collection or the most recent one.
Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the defaults, and the gauge will display any relevant data for the updated time period.
Display gauge: Checking this box will display the newly created gauge after you click OK. If it is left unchecked, the gauge will be saved without being displayed. Click the OK button when you are ready to save the performance gauge (see Figure 6-24 on page 247). In the example shown in Figure 6-24 on page 247, we created a gauge named 22513C1H with the description average cache holding time. We selected 11 March 2005 as both the starting and ending date, which corresponds to our data collection task schedule.
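The gauge-name rules stated earlier (no white space or special characters, at most 100 characters, unique on the Performance Manager server) could be checked with a sketch like the following. The function, and the letters-digits-underscore interpretation of "no special characters" (underscores appear in the example name 28019P_C1H), are assumptions for illustration, not the product's actual validation:

```python
import re

# Hypothetical validator sketch; the product enforces these rules itself.
def is_valid_gauge_name(name, existing_names):
    """Check a candidate gauge name against the documented constraints."""
    # 1 to 100 characters
    if not 1 <= len(name) <= 100:
        return False
    # no white space or special characters (assumed: letters, digits, underscore)
    if not re.fullmatch(r"[A-Za-z0-9_]+", name):
        return False
    # must be unique server-wide, regardless of which storage device it is for
    return name not in existing_names
```

For example, the name test would be rejected for a second gauge even if a different storage device were selected, because uniqueness is checked across the whole server.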
  • 262. Figure 6-24 Ready to save performance gauge The gauge appears after you click the OK button with the Display gauge box checked, or when the Display button is clicked after selecting the appropriate gauge in the Performance gauges window (see Figure 6-26 on page 248). If you decide to save the gauge without displaying it, you will see the panel shown in Figure 6-25. Figure 6-25 Saved performance gauges
  • 263. Figure 6-26 Cluster performance gauge - upper The top of the gauge contains the following labels:
Graph Name: The name of the gauge.
Description: The description of the gauge.
Device: The storage device selected for the gauge.
Component level: Cluster, Array, or Volume.
Component ID: The ID number of the component (cluster, array, or volume).
Threshold: The thresholds that were applied to the metrics.
Time of last data collection: Date and time of the last data collection.
The center of the gauge contains the only fields that may be altered, in the Display Properties section. The metrics may be selected either individually or in groups, as long as the data types are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent with percent). Click the Apply button to force a Performance Gauge section update with the new y-axis data. The Start Date, End Date, Start Time, and End Time fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force a Performance Gauge section update with the new x-axis data. For example, we applied the Total I/O Rate metric to the saved gauge, and the resulting graph is shown in Figure 6-27 on page 249. The Performance Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 6-27 on page 249).
  • 264. Figure 6-27 Cluster performance gauge with applied I/O rate metric Click the Refresh button in the Performance Gauge section to update the graph with the original metrics and date/time criteria. The date and time of the last refresh appear to the right of the Refresh button. The displayed date and time update first, followed by the contents of the graph, which can take up to several minutes. Finally, the data used to generate the graph is displayed at the bottom of the window (see Figure 6-28 on page 250). Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 6-32 on page 253). The sort reads the data from left to right, so the results may not be as expected. The gauges for the array and volume components function in the same manner as the cluster gauge created above.
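The caveat that the sort reads the data from left to right suggests a plain character-by-character text sort, which orders numeric values lexicographically rather than numerically. The snippet below (an assumption about the behavior, not the product's documented implementation) shows why the results may not be as expected:

```python
# Values from a gauge's data column, sorted as text (character by character,
# left to right) versus sorted by their numeric value.
readings = ["9.5", "10.2", "101.0", "2.7"]

text_sort = sorted(readings)                 # character-by-character comparison
numeric_sort = sorted(readings, key=float)   # compares the actual values

print(text_sort)     # ['10.2', '101.0', '2.7', '9.5']
print(numeric_sort)  # ['2.7', '9.5', '10.2', '101.0']
```

In a text sort, "101.0" sorts before "2.7" because the first character '1' is less than '2', even though 101.0 is the larger value.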
  • 265. Figure 6-28 Create performance gauge - lower Exception Exception gauges display data only for those active thresholds that were crossed during the reporting period. One exception gauge displays threshold exceptions for the entire storage device based on the thresholds active at the time of collection. To create an exception gauge, select Exception from the Type pull-down menu (see Figure 6-29 on page 251).
  • 266. Figure 6-29 Create performance gauge - Exception By default, Cluster will be highlighted in the left pane, and the metrics and component sections will not be available.
Device: Select the storage device and time period from which to build the gauge. The time period can be changed for this device within the gauge window, allowing either an overall or a detailed view of the data.
Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server.
Description: Use this space to enter a detailed description of the gauge; it will appear on the gauge and in the Gauges window.
Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the defaults, and the gauge will display any relevant data for the updated time period.
Display gauge: Checking this box will display the newly created gauge after you click OK. If it is left unchecked, the gauge will be saved without being displayed.
Click the OK button when you are ready to save the gauge. We created an exception gauge as shown in Figure 6-30 on page 252.
  • 267. Figure 6-30 Ready to save exception gauge The top of the gauge contains the following labels:
Graph Name: The name of the gauge.
Description: The description of the gauge.
Device: The storage device selected for the gauge.
Threshold: The thresholds that were applied to the metrics.
Time of last data collection: Date and time of the last data collection.
The center of the gauge contains the only fields that may be altered, in the Display Properties section. The Start Date and End Date fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force an Exceptions Gauge section update with the new x-axis data. The Exceptions Gauge section of the gauge graphically displays the information over the time period selected by the gauge and the options in the Display Properties section (see Figure 6-31 on page 253).
  • 268. Figure 6-31 Exceptions gauge - upper Click the Refresh button in the Exceptions Gauge section to update the graph with the original date criteria. The date and time of the last refresh appear to the right of the Refresh button. The displayed date and time update first, followed by the contents of the graph, which can take up to several minutes. Finally, the data used to generate the graph is displayed at the bottom of the window. Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 6-32). Figure 6-32 Data sort options
  • 269. Display Gauges To display previously created gauges, either right-click the storage device and select Gauges (see Figure 6-19 on page 243), or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244), to open the Performance gauges window (see Figure 6-33). Figure 6-33 Performance gauges window Select one of the gauges and then click Display. Gauge Properties The Properties button allows the following fields and choices to be modified: Performance, Description, Metrics, Component, Data points, and Date range (date and time ranges). You can change the data displayed in the gauge from Data points, with an active data collection, to Date range (see Figure 6-34 on page 255). Selecting Date range allows you to choose the Start date and End date using the performance data stored in the DB2 database.
  • 270. Figure 6-34 Performance gauge properties Exception You can change the Type property of the gauge definition from Performance to Exception. For a gauge type of Exception, you can only choose to view data for a Date range (see Figure 6-35 on page 256).
  • 271. Figure 6-35 Exception gauge properties Delete a gauge To delete a previously created gauge, either right-click the storage device and select Gauges (see Figure 6-19 on page 243), or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244), to open the Performance gauges window (see Figure 6-33 on page 254). Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation (see Figure 6-36). Figure 6-36 Confirm gauge removal To confirm, click Yes and the gauge will be removed. The gauge name may then be reused, if desired.
  • 272. 6.2.6 ESS thresholds Thresholds determine the watermarks for warning and error indicators for an assortment of storage metrics, including Disk Utilization, Cache Holding Time, NVS Cache Full, and Total I/O Requests. Thresholds are accessed either by: 1. Right-clicking a storage device in the center panel of TotalStorage Productivity Center for Disk and selecting the Thresholds menu option (Figure 6-37), or 2. Dragging and dropping the Thresholds task from the right Tasks panel in Multiple Device Manager onto the desired storage device, to display or modify the thresholds for that device. Figure 6-37 Opening the thresholds panel Upon opening the thresholds submenu, you will see the display in Figure 6-38 on page 258, which shows the default thresholds in place for the ESS.
  • 273. Figure 6-38 Performance Thresholds main panel On the right-hand side there are buttons for Enable, Disable, Copy Threshold Properties, Filters, and Properties. If the selected threshold is already enabled, the Enable button appears greyed out, as in our case. If we attempt to disable a threshold that is currently enabled by clicking the Disable button, the message shown in Figure 6-39 is displayed. Figure 6-39 Disabling threshold warning panel You may elect to continue and disable the selected threshold, or cancel the operation by clicking Don’t disable threshold. The Copy Threshold Properties button allows you to copy existing thresholds to other devices of a similar type (ESS, in our case). The window in Figure 6-40 on page 259 is displayed.
  • 274. Figure 6-40 Copying thresholds panel Note: As shown in Figure 6-40, the copying thresholds panel is aware that we have registered both clusters of our model 800 ESS on our ESS CIM agent host, as indicated by the semicolon-delimited IP address field for the device ID “2105.22219”. The Filters window is another available thresholds option. From this panel you can enable, disable, and modify existing filter values against selected thresholds, as shown in Figure 6-41. Figure 6-41 Threshold filters panel Finally, you can open the properties panel for a selected threshold, shown in Figure 6-42 on page 260. You have options to acknowledge the values at their current settings, modify the warning or error levels, or select the alert level (none, warning only, and warning or error are the available options).
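A threshold of this kind pairs a warning watermark with an error watermark, and the alert level setting (none, warning only, or warning or error) controls which crossings actually raise an alert. A minimal sketch of that evaluation follows; the function name, the argument shapes, and the treatment of an error crossing under "warning only" are assumptions for illustration, not the product's API:

```python
# Hypothetical sketch: evaluate one metric sample against a threshold
# with warning and error watermarks, honoring the configured alert level.

def evaluate_threshold(value, warning_level, error_level, alert_level="warning or error"):
    """Return the alert to raise for this sample, or None."""
    if value >= error_level:
        crossed = "error"
    elif value >= warning_level:
        crossed = "warning"
    else:
        return None  # below both watermarks: nothing to report

    if alert_level == "none":
        return None  # crossings are recorded but never alerted
    if alert_level == "warning only" and crossed == "error":
        # assumption: an error crossing still implies the warning
        # watermark was passed, so a warning-only alert is raised
        return "warning"
    return crossed
```

For example, with a warning watermark of 70 and an error watermark of 90, a sample of 75 would raise a warning and a sample of 95 would raise an error under the default alert level.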
  • 275. Figure 6-42 Threshold properties panel 6.2.7 Data collection for SAN Volume Controller Performance Manager uses the integrated configuration assistant tool (ICAT) interface of a SAN Volume Controller (SVC) to start and stop performance statistics collection on an