Tivoli Storage Productivity Center V4.2 Release Guide (SG24-7894)

    • Front cover

      Draft Document for Review February 17, 2011 2:17 am    SG24-7894-00

      IBM Tivoli Storage Productivity Center V4.2 Release Guide

      Learn the new features and functions in Tivoli Storage Productivity Center V4.2

      Understand Storage Resource Agents and their function

      Plan to migrate, implement and customize this release

      Mary Lovelace, Alejandro Berardinelli, H. Antonio Vazquez Brust, Harsha Gunatilaka, Hector Hugo Ibarra, Danijel Paulin, Markus Standau

      ibm.com/redbooks
    • International Technical Support Organization

      Tivoli Storage Productivity Center V4.2 Release Update

      December 2010

      SG24-7894-00
    • Note: Before using this information and the product it supports, read the information in "Notices" on page xi.

      First Edition (December 2010)

      This edition applies to Version 4, Release 2 of IBM Tivoli Storage Productivity Center (product numbers 5608-WB1, 5608-WB2, 5608-WB3, 5608-WC3, 5608-WC4, 5608-E14).

      This document created or updated on February 17, 2011.

      © Copyright International Business Machines Corporation 2010. All rights reserved.
      Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
    • Contents

      Notices
        Trademarks
      Preface
        The team who wrote this book
        Now you can become a published author, too!
        Comments welcome
        Stay connected to IBM Redbooks

      Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview
        1.1 Introduction to IBM Tivoli Storage Productivity Center
          1.1.1 Function in Tivoli Storage Productivity Center
        1.2 Architecture
          1.2.1 Architecture overview
          1.2.2 Data server
          1.2.3 Device server
          1.2.4 Tivoli Integrated Portal
          1.2.5 Tivoli Storage Productivity Center for Replication
          1.2.6 DB2 database
          1.2.7 Agents
          1.2.8 Interfaces
          1.2.9 Integration with other applications
        1.3 Tivoli Storage Productivity Center family
          1.3.1 TPC for Data
          1.3.2 TPC for Disk
          1.3.3 TPC for Disk Midrange Edition
          1.3.4 TPC Basic Edition
          1.3.5 Tivoli Storage Productivity Center Standard Edition
          1.3.6 TPC for Replication
          1.3.7 SSPC 1.5
          1.3.8 IBM System Director Storage Productivity Center
        1.4 New function since version 4.1
          1.4.1 What is new for Tivoli Storage Productivity Center
          1.4.2 What is new for IBM Tivoli Storage Productivity Center for Replication
        1.5 Contents of the book

      Chapter 2. Tivoli Storage Productivity Center install on Windows
        2.1 Tivoli Storage Productivity Center installation
          2.1.1 Installation overview
          2.1.2 Product code media layout and components
        2.2 Preinstallation steps for Windows
          2.2.1 Verifying system hardware and software prerequisites
          2.2.2 Verifying primary domain name systems
          2.2.3 Activating NetBIOS settings
          2.2.4 User IDs and passwords to be used and defined
        2.3 Installing TPC prerequisites
          2.3.1 DB2 installation
        2.4 Installing Tivoli Storage Productivity Center components
          2.4.1 Creating the Database Schema
          2.4.2 Installing Tivoli Storage Productivity Center components
          2.4.3 Agent installation
          2.4.4 Disabling TPC or TPC for Replication
        2.5 Applying a new build

      Chapter 3. Tivoli Storage Productivity Center install on Linux
        3.1 Tivoli Storage Productivity Center installation on Linux
          3.1.1 Installation overview
          3.1.2 Product code media layout and components
        3.2 Preinstallation steps for Linux
          3.2.1 Verifying system hardware and software prerequisites
          3.2.2 Prerequisite component for Tivoli Storage Productivity Center V4.2
        3.3 Installing the TPC prerequisite for Linux
          3.3.1 DB2 installation: GUI install
        3.4 Installing Tivoli Storage Productivity Center components
          3.4.1 Creating the database schema
          3.4.2 Installing TPC Servers, GUI and CLI

      Chapter 4. Tivoli Storage Productivity Center install on AIX
        4.1 Tivoli Storage Productivity Center installation on AIX
          4.1.1 Installation overview
          4.1.2 Product code media layout and components
        4.2 Preinstallation steps for AIX
          4.2.1 Verifying system hardware prerequisites
          4.2.2 Verifying system software prerequisites
          4.2.3 Prerequisite component for Tivoli Storage Productivity Center V4.2
        4.3 Installing the prerequisites for AIX
          4.3.1 DB2 installation: Command line
        4.4 Installing Tivoli Storage Productivity Center components
          4.4.1 Creating the database schema
          4.4.2 Installing Tivoli Storage Productivity Center components

      Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level
        5.1 Migration considerations
          5.1.1 Prerequisites
          5.1.2 Database considerations
          5.1.3 TPC-R considerations
        5.2 Credentials migration tool
        5.3 Agent Manager, Data and Fabric agents consideration
        5.4 Migration scenarios
          5.4.1 Migration from version 3.x
          5.4.2 Migration from version 4.1
        5.5 Upgrading Storage Resource Agent
        5.6 Upgrading TPC-R in high availability environment

      Chapter 6. Agent migration and upgrade
        6.1 CAS and SRA history
        6.2 Prerequisites
        6.3 Scenarios to migrate from CAS to SRA
          6.3.1 Installation wizard
          6.3.2 TPC user interface
          6.3.3 Command line interface
        6.4 CIMOM to NAPI

      Chapter 7. Native API
        7.1 NAPI and other changes
          7.1.1 Changed panels and/or tasks
        7.2 Behind the scenes - The External Process Manager
        7.3 Solution Design for device access
          7.3.1 Planning for NAPI and NAPI Discovery
          7.3.2 Planning for CIMOM Discovery
          7.3.3 Planning for Configure Devices Wizard
          7.3.4 Planning for Monitoring Groups
        7.4 Using Configure Devices wizard
          7.4.1 Add/configure an IBM DS8000
          7.4.2 Add/configure an IBM SAN Volume Controller
          7.4.3 Add and configure an IBM XIV
          7.4.4 Add and configure a CIMOM
        7.5 Adding Fabrics and Switches
          7.5.1 Add/configure a Brocade Fabric/Switch
          7.5.2 Add/configure a McData Fabric/Switch
          7.5.3 Add/configure a Cisco Fabric or Switch
          7.5.4 Add/configure a Qlogic Fabric/Switch
          7.5.5 Add/configure a Brocade/McData Fabric/Switch
        7.6 Other enhancements and changes

      Chapter 8. Storage Resource Agent
        8.1 Overview
        8.2 SRA Requirements
          8.2.1 User Requirements
          8.2.2 Platform Dependencies
          8.2.3 Communication Requirements and Types
        8.3 SRA installation methods
          8.3.1 Local graphical installer
        8.4 SRA deployment from TPC GUI
        8.5 Local/CLI install of SRA
          8.5.1 Steps to install the Storage Resource Agents through CLI
        8.6 Database Monitoring with SRA
          8.6.1 Register the database
          8.6.2 Set up probes and scans
          8.6.3 Database capacity reports
          8.6.4 Database usage reports
        8.7 NetApp/NSeries Monitoring
          8.7.1 Overview of NAS support
          8.7.2 Configure TPC to manage IBM N Series or NetApp Filer through Windows Storage Resource Agent
          8.7.3 Configure TPC to manage IBM N Series or NetApp Filer through UNIX Storage Resource Agent
          8.7.4 Retrieving and Displaying Data about NAS Filer
        8.8 VMware Support
        8.9 VMware Virtual Machine Reporting
        8.10 Batch Reporting
        8.11 SRA Fabric Functionality
          8.11.1 HBA Library Requirements
          8.11.2 SRA Fabric Enhancements
          8.11.3 Fabric Agent Assignment
        8.12 Agent resource utilization
        8.13 HBA Information Reports
        8.14 Collecting SRA Support Data
        8.15 Clustering support

      Chapter 9. Tivoli Storage Productivity Center for Disk Midrange Edition
        9.1 Overview
        9.2 Supported devices and firmware levels
        9.3 Licensing methodology
        9.4 Key benefits

      Chapter 10. Tivoli Storage Productivity Center for Replication
        10.1 What's new
        10.2 Open HyperSwap replication
          10.2.1 Description
          10.2.2 Prerequisites
          10.2.3 How to set up an Open HyperSwap session
          10.2.4 How to perform Open HyperSwap
          10.2.5 TPC-R high availability with Open HyperSwap
        10.3 Copy set soft removal of hardware relationship
        10.4 Log packages download from TPC-R GUI
        10.5 Path Manager
        10.6 SVC enhancements
          10.6.1 SAN Volume Controller space-efficient volumes
          10.6.2 SAN Volume Controller incremental FlashCopy
        10.7 DS8000 enhancements
          10.7.1 DS8000 extent space efficient volumes
          10.7.2 Global Mirror session enhancements
          10.7.3 Multiple Global Mirror sessions

      Chapter 11. XIV Support
        11.1 Supported Firmware Levels
        11.2 Adding an XIV to TPC
        11.3 XIV Performance Counters
        11.4 XIV storage provisioning

      Chapter 12. SAN Planner
        12.1 Purpose of SAN Planner
        12.2 What's New in TPC 4.2
        12.3 Pre-requisites for using SAN Planner
        12.4 Supported storage subsystems in SAN Planner
        12.5 Storage Resource Groups
          12.5.1 Storage Resource Group Monitoring and Alerting
        12.6 Creating a Space-Only SAN Planner Recommendation
        12.7 Creating a DR Planner Recommendation
        12.8 SAN Planner with SAN Volume Controller

      Chapter 13. Job Management Panel
        13.1 Background
        13.2 Job Management Dictionary
          13.2.1 Schedule
          13.2.2 Run
          13.2.3 Job
        13.3 Job Management changes
          13.3.1 Default Jobs
          13.3.2 CLI and Event Driven Jobs
        13.4 Job Management Panel explained
          13.4.1 Entities details
          13.4.2 Schedules details
          13.4.3 Runs and Jobs details
        13.5 Walk through examples
          13.5.1 List Schedule log files
          13.5.2 View and act on recommendation

      Chapter 14. Fabric enhancements
        14.1 FCoE support
        14.2 Additional switch models supported
          14.2.1 Brocade 3016 / 5410 / 5470 / 5480 / 7800 / M5440
          14.2.2 Brocade Silk Worm 7800 (IBM 2005-R06)
          14.2.3 Brocade DCX-4S Backbone
          14.2.4 Brocade 8000
          14.2.5 Cisco Nexus 5000 Series
        14.3 Additional HBA and CNA models supported
        14.4 Integration with Brocade Data Center Fabric Manager
          14.4.1 Supported functions
          14.4.2 Adding a DCFM server into Tivoli Storage Productivity Center

      Chapter 15. IBM Total Productivity Center database considerations
        15.1 Database tuning
          15.1.1 Setting DB2 variables
          15.1.2 Tune the database manager
          15.1.3 Change DB2 active logs directory
        15.2 TPC repository database sizing
          15.2.1 Storage subsystem performance data sizing
        15.3 Repository calculation templates
          15.3.1 Worksheet - Sizing SVC performance collection

      Chapter 16. IBM Total Productivity Center database backup on Windows
        16.1 Before you start
        16.2 Scripts provided
        16.3 Database backup
        16.4 Database backup method considerations
          16.4.1 Offline backup advantages and disadvantages
        16.5 Common backup setup steps
        16.6 Offline backup to filesystem setup steps
        16.7 Offline backup to Tivoli Storage Manager setup steps
          16.7.1 Add new variables to Windows
          16.7.2 Configure Tivoli Storage Manager option file and password
          16.7.3 Reboot the TPC server
          16.7.4 Create an offline backup to Tivoli Storage Manager script
        16.8 Online backup to Tivoli Storage Manager setup steps
          16.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
          16.8.2 Create online backup script for Tivoli Storage Manager
        16.9 Online backup to a filesystem setup steps
          16.9.1 Set up DB2 archive logging to a filesystem
          16.9.2 Create online backup script to filesystem
        16.10 Performing offline database backups
          16.10.1 Performing an offline backup to a filesystem
          16.10.2 Performing an offline backup to Tivoli Storage Manager
        16.11 Performing online database backup
          16.11.1 Performing an online database backup to Tivoli Storage Manager
          16.11.2 Performing an online backup to a filesystem
        16.12 Other backup considerations
        16.13 Managing database backup versions
          16.13.1 Managing backup versions for a filesystem
          16.13.2 Managing archive log files on a filesystem
          16.13.3 Managing backup versions that you store in Tivoli Storage Manager
        16.14 Verifying a backup file
        16.15 Restoring the TPC database
          16.15.1 Restoring from offline backups
          16.15.2 Restoring from online backups
          16.15.3 Potential agent issues after the restore
        16.16 Backup scheduling and automation
          16.16.1 Frequency of full TPCDB backups
          16.16.2 TPCDB backup automation

      Chapter 17. IBM Total Productivity Center database backup on AIX
        17.1 Before you start
        17.2 Scripts provided
        17.3 Database backup
        17.4 Database backup method considerations
          17.4.1 Offline backup advantages and disadvantages
        17.5 Common backup setup steps
        17.6 Offline backup to filesystem setup steps
        17.7 Offline backup to Tivoli Storage Manager setup steps
          17.7.1 Add new variables to AIX
          17.7.2 Configure Tivoli Storage Manager option file and password
          17.7.3 Restart DB2
          17.7.4 Create an offline backup to Tivoli Storage Manager script
        17.8 Online backup to Tivoli Storage Manager setup steps
          17.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
          17.8.2 Create an online backup script for Tivoli Storage Manager
        17.9 Online backup to a filesystem setup steps
          17.9.1 Set up DB2 archive logging to a filesystem
          17.9.2 Create online backup script to filesystem
        17.10 Performing offline database backups
          17.10.1 Performing an offline backup to a filesystem
          17.10.2 Performing an offline backup to Tivoli Storage Manager
        17.11 Performing online database backup
          17.11.1 Performing an online database backup to Tivoli Storage Manager
          17.11.2 Performing an online backup to a filesystem
        17.12 Other backup considerations
        17.13 Managing database backup versions
          17.13.1 Managing backup versions for a filesystem
          17.13.2 Managing archive log files on a filesystem
          17.13.3 Managing backup versions that you store in Tivoli Storage Manager
        17.14 Verifying a backup file
        17.15 Restoring the TPC database
          17.15.1 Restoring from offline backups
          17.15.2 Restoring from online backups
          17.15.3 Potential agent issues after the restore
        17.16 Backup scheduling and automation
          17.16.1 Frequency of full TPCDB backups
          17.16.2 TPCDB backup automation

      Chapter 18. IBM Total Productivity Center database backup on Linux
        18.1 Before you start
        18.2 Scripts provided
        18.3 Database backup
        18.4 Database backup method considerations
          18.4.1 Offline backup advantages and disadvantages
        18.5 Common backup setup steps
        18.6 Offline backup to filesystem setup steps
        18.7 Offline backup to Tivoli Storage Manager setup steps
          18.7.1 Add new variables to Linux
          18.7.2 Configure Tivoli Storage Manager option file and password
          18.7.3 Restart DB2
          18.7.4 Create an offline backup to Tivoli Storage Manager script
        18.8 Online backup to Tivoli Storage Manager setup steps
          18.8.1 DB2 parameter changes for archive logging to Tivoli Storage Manager
          18.8.2 Create an online backup script for Tivoli Storage Manager
        18.9 Online backup to a filesystem setup steps
          18.9.1 Set up DB2 archive logging to a filesystem
          18.9.2 Create online backup script to filesystem
        18.10 Performing offline database backups
          18.10.1 Performing an offline backup to a filesystem
          18.10.2 Performing an offline backup to Tivoli Storage Manager
        18.11 Performing online database backup
          18.11.1 Performing an online database backup to Tivoli Storage Manager
          18.11.2 Performing an online backup to a filesystem
        18.12 Other backup considerations
        18.13 Managing database backup versions
          18.13.1 Managing backup versions for a filesystem
          18.13.2 Managing archive log files on a filesystem
          18.13.3 Managing backup versions that you store in Tivoli Storage Manager
        18.14 Verifying a backup file
        18.15 Restoring the TPC database
          18.15.1 Restoring from offline backups
          18.15.2 Restoring from online backups
          18.15.3 Potential agent issues after the restore
        18.16 Backup scheduling and automation
          18.16.1 Frequency of full TPCDB backups
          18.16.2 TPCDB backup automation

      Chapter 19. Useful information
        19.1 User defined properties for Fabrics and Switches
        19.2 IBM Software Support Lifecycle
        19.3 IBM Support Assistant
        19.4 Certificate errors in Windows Internet Explorer
          19.4.1 Step 1: Address mismatch
          19.4.2 Step 2: New certificate
        19.5 Tivoli Storage Productivity Center support matrix
        19.6 DB2 hints
          19.6.1 SQL5005C System Error
          19.6.2 User ID to stop and start DB2

      Appendix A. DB2 table space considerations
        Selecting an SMS or DMS table space
          Advantages of an SMS table space
          Advantages of a DMS table space

      Appendix B. Worksheets
        User IDs and passwords
          Server information
          User IDs and passwords for key files and installation
          LDAP information
        Storage device information
          IBM System Storage Enterprise Storage Server/DS6000/DS8000
          IBM DS4000
          IBM SAN Volume Controller

      Appendix C. Configuring X11 forwarding
        Preparing the display export
          Preparation of the AIX server
          Preparation of the Windows workstation
        Launching an Xming X Window session
        VNC Server

      Related publications
        IBM Redbooks publications
        Other publications
        Online resources
        How to get Redbooks publications
        Help from IBM

      Index
    • Notices

      This information was developed for products and services offered in the U.S.A.

      IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

      IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

      IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

      The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

      This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

      Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

      IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

      Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

      This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

      COPYRIGHT LICENSE:

      This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

      © Copyright IBM Corp. 2010. All rights reserved.
    • Trademarks

      IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

      The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

      AIX 5L™, AIX®, AS/400®, DB2 Universal Database™, DB2®, DS4000®, DS6000™, DS8000®, Enterprise Storage Server®, FlashCopy®, HACMP™, HyperSwap®, IBM®, NetView®, Passport Advantage®, Power Systems™, pSeries®, RDN®, Redbooks®, Redbooks (logo)®, System p®, System Storage DS®, System Storage®, System x®, System z®, Tivoli Enterprise Console®, Tivoli®, TotalStorage®, WebSphere®, XIV®, z/OS®, zSeries®

      The following terms are trademarks of other companies:

      Data ONTAP, FilerView, NetApp, Network Appliance, and the Network Appliance logo are trademarks or registered trademarks of Network Appliance, Inc. in the U.S. and other countries.

      Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

      Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

      Intel, Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

      UNIX is a registered trademark of The Open Group in the United States and other countries.

      Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

      Other company, product, or service names may be trademarks or service marks of others.
    • Preface

      IBM® Tivoli® Storage Productivity Center V4.2 is a feature-rich storage management software suite. The integrated suite provides detailed monitoring, reporting, and management within a single console.

      This IBM Redbooks® publication is intended for storage administrators and users who are installing and exploiting the features and functions in IBM Tivoli Storage Productivity Center V4.2. The information in the book can be used to plan for, install, and customize the components of Tivoli Storage Productivity Center in your storage infrastructure.

      There are important functional enhancements in this release:

      - Storage Resource Agent: now supports file level and database level storage resource management (SRM) reporting for a broad set of platforms.
      - IBM XIV® Storage System support updated: adds discovery, provisioning, and performance management support.
      - Storage Area Network (SAN) configuration planning: now includes best practice provisioning of replication relationships and supports basic provisioning of non-IBM storage systems.
      - Open HyperSwap® for the IBM AIX® environment: delivers application failover (no disruption of application I/O) across a synchronous mirror distance.

      Step-by-step procedures are provided to help perform tasks such as migrating to Storage Resource Agents, utilizing Native APIs, understanding and using SAN configuration planning functions, and maintaining your DB2® database repository.

      The team who wrote this book

      This book was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

      Mary Lovelace is a Consulting IT Specialist at the International Technical Support Organization. She has experience with IBM in large systems, storage and storage networking product education, system engineering and consultancy, and system support. She has written Redbooks about Tivoli Storage Productivity Center, Tivoli Storage Manager, Scale Out Network Attached Storage, and z/OS® storage products.

      Alejandro Berardinelli has been an IT Specialist with IBM Uruguay since 2005. His primary focus today is on Tivoli® Storage Manager and Tivoli Storage Productivity Center deployments and support for multiple platforms, including AIX®, Linux®, Windows®, and z/OS. He also performs IBM storage implementations involving IBM DS8000, DS5000, tape subsystems, and Brocade and Cisco switches. He has provided storage support for several customers in South America. Alejandro has a Computer Engineer degree from UDELAR.

      H. Antonio Vazquez Brust is an Argentina-based IT Architect who works as a Technical Solution Architect for IBM Complex Engagement Services. His job roles prior to joining CES include AIX, Linux®, AS/400®, Windows®, networking, and SAN administration. Antonio has a diploma in Computer Sciences and has worked for IBM for six years.

      Harsha Gunatilaka is a Software Engineer for Tivoli Storage Software in Tucson, AZ. He is currently part of the IBM Tivoli Storage Productivity Center development and test team. He is an IBM Certified Deployment Professional on Tivoli Storage Productivity Center and has experience with a wide array of IBM storage products and software. He holds a degree in Management Information Systems from the University of Arizona.

      Hector Ibarra is an Infrastructure IT Architect specialized in cloud computing and storage solutions, based in Argentina and currently working at the IBM Argentina Delivery Center (DCA). Hector was designated ITA Leader for the VMware Center of Competence in 2006. He specializes in virtualization technologies and has assisted several global IBM clients in deploying virtualized infrastructures across the world. Since 2009 he has been working as the leader of the DCA Strategy and Architecture Services department, from where major projects are driven.

      Danijel Paulin is a Systems Architect in IBM Croatia, working for the Systems Architect team in the CEE region. He has 13 years of experience in IT. Before joining IBM Croatia in 2003, he worked for a financial company in Croatia and was responsible for IBM mainframe and storage administration. His areas of expertise include IBM high-end disk and tape storage subsystems and the architecture and design of various HA/DR/BC solutions for mainframe and open systems.

      Markus Standau is an IT Specialist from Germany who joined IBM in 1996. He worked for IBM Global Services for eight years in the field of storage services and is currently working as a Field Technical Sales Support Specialist. He studied at the University of Cooperative Education in Stuttgart, Germany, and is a graduate engineer in Information Technology. His areas of expertise include Backup and Recovery, Disaster Recovery, and OnDemand Storage Services, as well as IBM Tivoli Storage Productivity Center since Version 2.1.

      Thanks to the following people for their contributions to this project:

      Don Brennan, Rich Conway, Bob Haimowitz
      International Technical Support Organization

      Marcelo Ricardo Die
      IBM Argentina

      Randy Blea, Diana Duan, William Olsen, Wayne Sun

      Now you can become a published author, too!

      Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

      Find out more about the residency program, browse the residency index, and apply online at:
      ibm.com/redbooks/residencies.html
    • Draft Document for Review February 17, 2011 2:17 am 7894pref.fmComments welcome Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks Send your comments in an email to: redbooks@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html Preface xv
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
1
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview
In this chapter, we introduce IBM Tivoli Storage Productivity Center, providing a high-level technical overview of the product, its architecture, and base components.
We discuss the following topics:
- Introduction to IBM Tivoli Storage Productivity Center
- Function
- Architecture
- Product family
- New function since V4.1
- Contents of this book
© Copyright IBM Corp. 2010. All rights reserved. 1
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
1.1 Introduction to IBM Tivoli Storage Productivity Center
The IBM Tivoli Storage Productivity Center suite of storage infrastructure management tools can help customers improve time to value, as well as reduce the complexity of managing their storage environments by centralizing, simplifying, and optimizing storage tasks associated with storage systems, storage networks, replication services, and capacity management.
1.1.1 Function in Tivoli Storage Productivity Center
Tivoli Storage Productivity Center includes:
- Storage resource management (SRM):
  – Reporting of volumes and file systems on a server level
  – Reporting on NAS and NetWare file systems
  – Reporting of databases capacity and usage
  – Constraint and quota reporting
- Storage subsystem management:
  – Volume allocation and assignment (provisioning)
  – Asset reporting
  – Performance reporting
  – DS8000® element management
- Fabric management:
  – Zoning
  – Asset reporting
  – Performance reporting
- Replication management
- Alerting
In addition to these basic functions, Tivoli Storage Productivity Center includes more advanced functions that provide you with a set of analytics functions such as:
- Topology Viewer
- Data path explorer
- Configuration history
- Storage optimizer
- SAN planner
- Configuration analytics
Consider that the type of license you have will determine the function that is available to you. Table 1-1 on page 3 provides a summary of the functions provided in Tivoli Storage Productivity Center.
2 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
Table 1-1 Tivoli Storage Productivity Center summary of functions
Data Management (host centric):
- Discovery of storage resources
- Monitoring of storage resources
- Filesystem extension
- Application centric: monitoring of relational databases (DB2, Oracle, SQL Server, Sybase)
- Chargeback
Disk Management (for storage subsystems):
- Discovery of storage resources
- Monitoring of storage resources
- Configuration (for example, creating volumes)
- Performance management
Fabric Management (for fabrics):
- Discovery of storage resources
- Monitoring of storage resources
- Configuration (for example, zoning)
- Performance management
Tape Management (for tape libraries):
- Discovery of storage resources
- Monitoring of storage resources
- Enterprise-wide reporting
1.2 Architecture
The IBM Tivoli Storage Productivity Center consists of several key components. In this section we show how these components are related and we describe them briefly. We also describe the different interfaces that you can use to access Tivoli Storage Productivity Center and, finally, its integration with other products.
1.2.1 Architecture overview
Figure 1-1 illustrates an architectural overview for IBM Tivoli Storage Productivity Center.
Figure 1-1 Tivoli Storage Productivity Center Version 4.2 - Architecture Overview
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 3
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
1.2.2 Data server
This component is the control point for product scheduling functions, configuration, event information, reporting, and graphical user interface (GUI) support. It coordinates communication with and data collection from agents that scan file systems and databases to gather storage demographics and populate the database with results. Automated actions can be defined to perform file system extension, data deletion, and Tivoli Storage Manager backup or archiving, or event reporting when defined thresholds are encountered. The Data server is the primary contact point for GUI user interface functions. It also includes functions that schedule data collection and discovery for the Device server.
1.2.3 Device server
This component discovers, gathers information from, analyzes performance of, and controls storage subsystems and SAN fabrics. It coordinates communication with and data collection from agents that scan SAN fabrics and storage devices.
1.2.4 Tivoli Integrated Portal
IBM Tivoli Storage Productivity Center V4 is integrated with IBM Tivoli Integrated Portal (TIP). This integration provides functions such as single sign-on and the use of Tivoli Common Reporting.
Single sign-on: Enables you to access Tivoli Storage Productivity Center and then Tivoli Storage Productivity Center for Replication using a single user ID and password.
Tivoli Common Reporting: Tivoli Common Reporting (TCR) is a component provided by TIP. It is one possible option to implement customized reporting solutions using SQL database access, providing output in HTML, PDF, or Microsoft® Excel. Note that Tivoli Common Reporting is intended to provide a platform to reproduce custom reports in an easy way, or for reports that are to be run repeatedly, typically on a daily, weekly, or monthly basis. It does not provide any online report creation or report customization features.
1.2.5 Tivoli Storage Productivity Center for Replication
Starting with TPC V4.1, the IBM Tivoli Storage Productivity Center for Replication product is integrated into TPC. Currently, the integration is limited to basic functions such as Launch in Context links in the TPC GUI, crosschecks when a volume is deleted with TPC, and mapping of user roles.
1.2.6 DB2 database
A single database instance serves as the repository for all Tivoli Storage Productivity Center components. This repository is where all of your storage information and usage statistics are stored. All agent and user interface access to the central repository is done through a series of calls and requests made to the server. All database access is done using the server component to maximize performance and to eliminate the need to install database connectivity software on your agent and UI machines.
4 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
1.2.7 Agents
Outside of the server, there are several interfaces that are used to gather information about the environment. The most important sources of information are the TPC agents (Storage Resource agent, Data agent, and Fabric agent) as well as SMI-S enabled storage devices that use a CIMOM agent (either embedded or as a proxy agent). Storage Resource agents, CIM agents, and Out of Band fabric agents gather host, application, storage system, and SAN fabric information and send that information to the Data Server or Device server.
Note that Data agents and Fabric agents are supported in Tivoli Storage Productivity Center V4.2. However, no new functions were added to those agents for this release. For optimal results when using Tivoli Storage Productivity Center, migrate the Data agents and Fabric agents to Storage Resource agents.
1.2.8 Interfaces
As TPC gathers information from your storage (servers, subsystems, and switches) across your enterprise, it accumulates a repository of knowledge about your storage assets and how they are used. You can use the reports provided in the user interface to view and analyze that repository of information from various perspectives to gain insight into the use of storage across your enterprise. The user interfaces (UI) enable users to request information and then generate and display reports based on that information. Certain user interfaces can also be used for configuration of TPC or storage provisioning for supported devices. The following interfaces are available for TPC:
TPC GUI: This is the central point of TPC administration. Here you have the choice of configuring TPC after installation, defining jobs to gather information, initiating provisioning functions, viewing reports, and working with the advanced analytics functions.
Java™ Web Start GUI: When you use Java Web Start, the regular TPC GUI will be downloaded to your workstation and started automatically, so you do not have to install the GUI separately. The main reason for using Java Web Start is that it can be integrated into other products (for example, TIP). By using Launch in Context from those products, you will be guided directly to the selected panel. The Launch in Context URLs can also be assembled manually and be used as bookmarks.
TPCTOOL: This is a command line (CLI) based program which interacts with the TPC Device server. Most frequently it is used to extract performance data from the TPC repository database in order to create graphs and charts with multiple metrics, with various unit types, and for multiple entities (for example, subsystems, volumes, controllers, arrays) using charting software. Commands are entered as lines of text (that is, sequences of typed characters) and output is received as text. Furthermore, the tool provides query, management, and reporting capabilities, but you cannot initiate discoveries, probes, and performance collection from the tool.
Database access: Starting with TPC V4, the TPC database provides views that provide access to the data stored in the repository, which allows you to create customized reports. The views and the required functions are grouped together into a database schema called TPCREPORT. For this, you need to have sufficient knowledge about SQL. To access the views, DB2 supports various interfaces, for example, JDBC and ODBC. Brief sketches of the TPCTOOL and database access methods follow.
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 5
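The following commands sketch these two access methods; treat them as illustrations rather than exact syntax for your level. The tpctool options shown (-user, -pwd, and -url, where 9550 is the default Device server port) are the tool's standard connection parameters, but the available subcommands and metrics vary by release. Likewise, TPCDB is the default name of the repository database, and TPCREPORT.STORAGESUBSYSTEM is used here only as a representative view name; check the published view list for your release before building reports on it.
Listing the storage subsystems known to the Device server with TPCTOOL:
tpctool lsdev -user tpcadmin -pwd mypassword -url localhost:9550 -subsys
Querying a repository view through the DB2 CLI:
db2 connect to TPCDB
db2 "SELECT * FROM TPCREPORT.STORAGESUBSYSTEM"
db2 connect reset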
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
1.2.9 Integration with other applications
In this section, we describe Tivoli Storage Productivity Center integration with IBM Tivoli Storage Manager for backup or archival of files, and with IBM Tivoli Enterprise Console® (TEC) or any other SNMP manager for alert notification.
Integration with Tivoli Storage Manager
Use the Archive/Backup function available within the Reporting facility to define IBM Tivoli Storage Manager archive or backup jobs to run against the files that you select from reports. This function enables you to select a specific file or group of files from Data Manager reports that you want to archive or back up using Tivoli Storage Manager. Some of these reports include largest files, most obsolete files, duplicate files, and constraint violations.
The results of the IBM Tivoli Storage Manager backup-archive commands are viewable through the graphical user interface. In the case of constraints configured to archive-backup violating files, the results are included in the agent scan job logs (scans are responsible for enforcing constraints). In the case of file report driven archive-backup operations, a new type of job (archive-backup job) is created. The results of the backup operations in this case are found in archive-backup job logs.
SNMP
For users planning to use the Simple Network Management Protocol (SNMP) trap alert notification capabilities of Tivoli Storage Productivity Center, SNMP Management Information Base (SNMP MIB) files are included on the installation media. The MIB is provided for use by your SNMP management console software (for example, IBM Tivoli NetView® or HP OpenView). This will allow you to better view TPC-generated SNMP traps from within your management console software.
Integration with Tivoli Enterprise Console Netcool/OMNIbus
Tivoli Storage Productivity Center can use the Event Integration Facility (EIF) to send messages to the IBM Tivoli Enterprise Console (TEC) or the follow-on product Netcool/OMNIbus. This can allow one of the two central monitoring applications to consider Tivoli Storage Productivity Center alerts in causal analysis for problems. TEC/OMNIbus is added as a destination for alerts, in addition to SNMP Trap and Windows Event Log.
1.3 Tivoli Storage Productivity Center family
In this section we describe the Tivoli Storage Productivity Center components.
1.3.1 TPC for Data
Tivoli Storage Productivity Center for Data provides over 400 enterprise-wide reports, monitoring and alerts, policy-based action, and file-system capacity automation in a heterogeneous environment. Tivoli Storage Productivity Center for Data is designed to help improve capacity utilization of file systems and databases and add intelligence to data protection and retention practices.
6 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
1.3.2 TPC for Disk
Tivoli Storage Productivity Center for Disk is designed to provide storage device configuration and management from a single console. It includes performance capabilities to help monitor and manage performance, and measure service levels by storing received performance statistics into database tables for later use. Policy-based automation enables event action based on business policies. It sets performance thresholds for the devices based on selected performance metrics, generating alerts when those thresholds are exceeded. Tivoli Storage Productivity Center for Disk helps simplify the complexity of managing multiple SAN-attached storage devices.
1.3.3 TPC for Disk Midrange Edition
Tivoli Storage Productivity Center for Disk Midrange Edition is designed to help reduce the complexity of managing storage devices by allowing administrators to configure, manage, and monitor performance of their entire storage infrastructure from a single console. Tivoli Storage Productivity Center for Disk Midrange Edition provides the same features and functions as Tivoli Storage Productivity Center for Disk, but is limited to managing IBM System Storage® DS3000, DS4000®, DS5000, and FAStT devices. It provides performance management, monitoring, and reporting for these devices.
1.3.4 TPC Basic Edition
IBM Tivoli Storage Productivity Center Basic Edition is focused on providing basic device management services for IBM System Storage DS3000, DS4000, DS5000, DS6000™, DS8000, XIV, IBM SAN Volume Controller, and heterogeneous storage environments. This tool provides storage administrators a simple way to conduct device management for multiple storage arrays and SAN fabric components from a single integrated console. IBM Tivoli Storage Productivity Center Basic Edition also does discovery and asset management of tape libraries, specifically IBM 3494 and 3584 Tape Libraries.
1.3.5 Tivoli Storage Productivity Center Standard Edition
IBM Tivoli Storage Productivity Center Standard Edition is one of the industry's most comprehensive storage resource management solutions, combining the consolidated benefits of the four previous components as one bundle. In addition to the benefits and features of Data, Disk, Disk Midrange Edition, and Basic Edition, IBM Tivoli Storage Productivity Center Standard Edition includes advanced analytics for storage provisioning, change management, and performance optimization capabilities. It also offers additional management, control, and performance reporting for the Fibre Channel SAN infrastructure.
1.3.6 TPC for Replication
The IBM Tivoli Storage Productivity Center for Replication helps to manage the advanced copy services provided by the IBM Enterprise Storage Server® (ESS) Model 800, IBM System Storage DS8000, IBM DS6000, and IBM System Storage SAN Volume Controller (SVC). Details of Tivoli Storage Productivity Center for Replication V4.2 are found in Chapter 10, "Tivoli Storage Productivity Center for Replication" on page 335.
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 7
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
1.3.7 SSPC 1.5
The IBM System Storage Productivity Center (SSPC) is a hardware appliance that consolidates IBM storage administration and configuration utilities into a single console. The new features, functions, and enhancements that are included in IBM System Storage Productivity Center (SSPC) Version 1.5 are described in this section.
Machine type MC5
SSPC machine type and model number 2805-MC5 is available with an Intel® Quad-core Xeon processor that runs at 2.53 GHz.
Tivoli Storage Productivity Center 4.2.1
IBM Tivoli Storage Productivity Center Basic Edition 4.2.1 and IBM Tivoli Storage Productivity Center for Replication Basic Edition 4.2.1 are preinstalled on the System Storage Productivity Center server.
Microsoft Windows Server 2008 R2 Standard operating system for 64-bit processors
Tivoli Storage Productivity Center 4.2.1 uses the Windows Server 2008 R2 Standard operating system for 64-bit processors.
Optional preinstalled host bus adapter card
To provide additional storage on the SSPC server, SSPC V1.5 offers an optional host bus adapter (HBA) so that you can move the Tivoli Storage Productivity Center database from the SSPC server to an internal redundant hard disk drive or to an IBM System Storage DS8000. You can connect the DS8000 storage system to the SSPC server directly or through a storage area network (SAN). You can order the HBA to be preinstalled on the SSPC server or as a Feature Code 3570 for your IBM service representative to install. After you configure multipath input and output (I/O) and zone the SAN into a partition for the SSPC and the DS8000 storage system, you can move the Tivoli Storage Productivity Center database to the storage system. The procedure includes prerequisites.
Optional redundant hard disk drives
Order a second pair of hard disk drives as Feature Code 5190 if you want to move the Tivoli Storage Productivity Center database from the SSPC server and make additional storage available. Contact your IBM service representative to install the redundant hard disk drives.
Documentation about international power requirements
Power-cord options are available for attaching the SSPC to a power-distribution unit (PDU) for a rack or to a country- or region-specific wall outlet. Options are provided in tables listing power cords and receptacles.
DB2 9.7
Tivoli Storage Productivity Center supports IBM DB2 9.7.
System Storage DS8000 Release 6.0
You can monitor IBM System Storage DS8000 6.0 or earlier releases with SSPC.
SAN Volume Controller Console Release 6.1
SSPC supports IBM System Storage SAN Volume Controller 6.1, but the software is no longer preinstalled on the SSPC server. Instead, you can start the console from the web browser on the SSPC desktop. When you configure SAN Volume Controller to Tivoli Storage Productivity Center, you must supply a private Secure Shell (SSH) key file in OpenSSH format or PuTTY (.ppk) format that is not protected by a password. In previous versions of SSPC, the PuTTY utility was preinstalled on the SSPC server so that you could convert a PuTTY .ppk key that was protected by a password to OpenSSH key format. The PuTTY utility is no longer preinstalled on the SSPC server. Instead, you can download and install the PuTTYgen utility from a website; a sketch of the conversion follows.
8 Tivoli Storage Productivity Center V4.2 Release Update
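The command-line version of the PuTTY key generator (available with the PuTTY suite on most platforms) can perform this conversion in one step. The file names below are examples only; the -O private-openssh option writes the key in OpenSSH format. Remove any passphrase when prompted, because Tivoli Storage Productivity Center requires a key that is not password protected:
puttygen svckey.ppk -O private-openssh -o svckey.openssh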
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
With SSPC 1.5, single sign-on authentication is extended to SAN Volume Controller. When you use single sign-on authentication, you can enter one user ID and password to access multiple applications.
IBM Storwize V7000
Storwize V7000 is a hardware and software solution that provides online storage optimization through real-time data compression. This solution helps to reduce costs without performance degradation. The Storwize V7000 hardware consists of a set of drive enclosures. Control enclosures contain disk drives and two node canisters. The two nodes within the canisters make an I/O group that is attached to the SAN fabric. A single pair of nodes is responsible for serving I/O on a given volume. Because a volume is served by two nodes, there is no loss of availability if one node fails or is taken offline.
Storwize V7000 can be used as a traditional RAID storage system where the internal drives are configured into arrays, and volumes are created from those arrays. Storwize V7000 can also be used to virtualize other storage systems. Storwize V7000 supports both regular and solid-state drives (SSDs). A Storwize V7000 system without any internal drives can be used as a storage virtualization solution.
Tivoli Storage Productivity Center provides basic support for this product, including discovery, probes, performance monitoring, Storage Optimizer, and SAN Planner. Similar to other systems, Storwize V7000 is displayed as an entity within data sources, reports, data collection schedules, the topology viewer, and so on. Tivoli Storage Productivity Center for Replication also supports Storwize V7000. Like SAN Volume Controller, single sign-on authentication is available to Storwize V7000.
System Storage DS® Storage Manager Release 10.70
The IBM System Storage DS Storage Manager user interface is available for you to optionally install on the SSPC server or on a remote server. The DS Storage Manager 10.70 can manage the DS3000, DS4000, and DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM agent, you can start the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of Tivoli Storage Productivity Center.
IBM Java Release 1.6
IBM Java 1.6 is preinstalled and can be used with DS Storage Manager 10.70. You do not need to download Java from Sun Microsystems.
DS CIM agent management commands
The DS CIM agent management commands (DSCIMCLI) for Release 6.0 are preinstalled on the SSPC server.
Optional media to recover image for 2805-MC5
Order Feature Code 9010 if you did not back up your SSPC 2805-MC5 image and you need to recover it. The optional media feature includes a recovery CD for the Microsoft Windows Server 2008 R2 Standard operating system and two recovery DVDs for the SSPC 1.5 image.
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 9
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
1.3.8 IBM System Director Storage Productivity Center
The IBM ISD Storage Productivity Center is designed to be used only as an embedded version of TPC, without a GUI, under a consuming application. At the time of writing, it is used only under IBM Systems Director version 6.2.1, using Tivoli Storage Productivity Center V4.2.1.
1.4 New function since version 4.1
In this section we describe the new functions provided in Tivoli Storage Productivity Center V4.2, highlighting the changes since Tivoli Storage Productivity Center V4.1.
1.4.1 What is new for Tivoli Storage Productivity Center
In this section we provide a summary of the new functions in Tivoli Storage Productivity Center V4.2 since V4.1.
Native storage system interfaces provided for DS8000, SVC and XIV
To improve the management capabilities and performance of data collection for the DS8000, SAN Volume Controller (SVC), and XIV storage systems, native storage system interfaces are provided. Now TPC communicates with these storage systems through the ESSNI interface for the DS8000, SSH for the SVC, and the XCLI for the XIV. These interfaces replace the CIM agent (SMI-S agent) implementation.
SAN Volume Controller
When you add the SAN Volume Controller to Tivoli Storage Productivity Center, you need to supply a private SSH key. Tivoli Storage Productivity Center requires OpenSSH key or PuTTY (.ppk) key format. More information can be found in "IBM SAN Volume Controller" on page 225.
Configure Devices wizard
Use the Configure Devices wizard to set up storage devices for monitoring by IBM Tivoli Storage Productivity Center. The wizard guides you through the steps for adding a device as a data source, running a discovery, including devices in groups, specifying alerts, and setting up data collection schedules. The wizard supports configuration of storage subsystems, fabrics and switches, computers, and tape libraries.
Job Management panel
The Job Management panel in the user interface lets you view and manage the schedules, runs, and jobs related to the storage entities that are monitored by Tivoli Storage Productivity Center.
Storage Resource agents
The Storage Resource agents now perform the functions of the Data agents and Fabric agents (Out-of-band Fabric agents are still supported and their function has not changed). Before you migrate an existing Data agent or Fabric agent to a Storage Resource agent or deploy a new Storage Resource agent, make sure that the product functions you want to use on the monitored devices are available for those agents.
10 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm Data agents, Fabric agents, and Agent Manager The Data agents, Fabric agents, and Agent Manager are supported in the current release but no new functions were added to these components. The legacy Data agents and Fabric agents are supported at version 3.3.x and 4.1.x, and can communicate with the Tivoli Storage Productivity Center 4.2 server. You can also select to migrate the Data agents and Fabric agents to Storage Resource agents. The Tivoli Storage Productivity Center V4.2 installation program does not support installation of the Data agent or Fabric agent. If you want to install the legacy Data agent or Fabric agent, you must have a previous Tivoli Storage Productivity Center installation program that supports installing the Data agent or Fabric agent. If you are installing DB2 9.7 and want to use the Agent Manager, you must install a new release of Agent Manager 1.4.2 or later. Agent Manager 1.3 does not support DB2 9.7. Most of the information about Tivoli Common Agent Services has been removed from the Tivoli Storage Productivity Center version 4.2 documentation. However, this information is still available in the Information Center for Tivoli Storage Productivity Center version 4.1.1. SAN Planner Tivoli Storage Productivity Center provides a new SAN Planner wizard which has been enhanced to support the following functions. SAN Volume Controller with provisioning and workload profiles The SAN Planner recommendations are limited to SAN Volume Controller front-end operations only. The support includes the creation and provisioning of VDisks with the recommended I/O group and preferred node for each VDisk. The SAN Planner does not support back-end operations such as the creation of new MDisks or the creation or expansion of MDisk groups. Space-only planning for all storage subsystems All storage subsystems supported by Tivoli Storage Productivity Center can be used for space-based planning. Resiliency profile for TPC for Replication supported devices The SAN Planner has a new profile called the resiliency profile for resilient resources. The resiliency profile is created internally when you select different options in the SAN Planner wizard. Resilient resource planning is available only for devices which are supported by Tivoli Storage Productivity Center for Replication. The supported devices are DS8000, DS6000, ESS, and SAN Volume Controller. Space-efficient volumes The SAN Planner now has an option to provision space-efficient volumes on supported storage subsystems. These storage subsystems are: SVC (v4.3 or later), XIV (v10.1 or later), and DS8000 (v4.3 or later). Encrypted volumes Tivoli Storage Productivity Center supports encrypted volumes for the DS8000. The SAN Planner has been enhanced to allow input from the user for encrypted volumes as needed. The SAN Planner currently supports encrypted volumes for the DS8000 and SAN Volume Controller (if the DS8000 is used as a backend device). Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 11
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
Candidate storage resource group
For the SAN Planner, the candidate storage resource group is a container of storage resources. When you provide a candidate storage resource group for input, the SAN Planner uses the storage subsystems, pools, and volumes from that storage resource group for provisioning new storage.
Tivoli Storage Productivity Center for Disk Midrange Edition
Tivoli Storage Productivity Center for Disk Midrange Edition provides storage device configuration, performance monitoring, and management of Storage Area Network attached devices from a single console. This product provides basic disk functions and performance monitoring capabilities for IBM System Storage DS3000, IBM System Storage DS4000, and IBM System Storage DS5000.
Operating system support
Tivoli Storage Productivity Center supports these new operating systems for the Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication servers:
- Red Hat Enterprise Linux Server and Advanced Platform 5 for x86-64
- Windows 2008 Enterprise Edition R2
- Windows 2008 Standard Edition R2
Tivoli Storage Productivity Center supports these new operating systems for the Storage Resource agents:
- HP-UX 11i v3 with Itanium®
- Red Hat Enterprise Linux Advanced Platform version 5.4
- Windows 2008 R2 (Standard Edition, Data Center Edition, and Enterprise Edition)
- Sun Solaris 9 and 10 (SPARC architecture)
- Windows 2008 Standard Edition
- Windows 2008 SP2 (Standard Edition, Data Center Edition, and Enterprise Edition)
The Tivoli Storage Productivity Center GUI can be installed on Windows 7. You must install the GUI using Java Web Start with Java 6. Java 6 provides additional features to work with the enhanced security of Windows 7. The CLI installation on Windows 7 is not supported.
New switches supported in toleration mode only
Tivoli Storage Productivity Center displays both Fibre Channel over Ethernet and FC ports in the switch port lists. It now supports the following switches: Brocade 8000, Brocade DCX-4S Backbone, and Cisco Nexus 5000. Note that not all functions are supported; for example, it does not support the Converged Enhanced Ethernet (CEE) or Fibre Channel over Ethernet (FCoE) connectivity functions.
Brocade Data Center Fabric Manager
Tivoli Storage Productivity Center supports the new embedded SMI Agent in the Data Center Fabric Manager (DCFM) 10.4.0 or later (the separate non-embedded SMI Agent is still supported). This DCFM manages both the McDATA and Brocade switches. The DCFM manages multiple fabrics within and across data centers. When you configure DCFM, you set up one switch to be the "master switch" which interconnects to all the other switches in the fabric. The embedded SMI Agent supports the SMI-S 1.2 standards.
12 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
IBM Tivoli Storage Productivity Center Monitoring Agent
The IBM Tivoli Storage Productivity Center Monitoring Agent can be used by the IBM Tivoli Enterprise Monitoring Server to monitor systems in your enterprise. This agent is an optional program you can install and use in your enterprise.
XIV Storage System
Tivoli Storage Productivity Center supports performance monitoring and provisioning for XIV storage systems through the native interface.
1.4.2 What is new for IBM Tivoli Storage Productivity Center for Replication
Tivoli Storage Productivity Center for Replication 4.2 adds the following new features, functions, and enhancements since Tivoli Storage Productivity Center V4.1. More details on Tivoli Storage Productivity Center for Replication can be found in Chapter 10, "Tivoli Storage Productivity Center for Replication" on page 335.
Open HyperSwap replication
Open HyperSwap replication is a special Metro Mirror replication method designed to automatically failover I/O from the primary logical devices to the secondary logical devices in the event of a primary disk storage system failure. This function can be done with minimal disruption to the applications that are using the logical devices. Open HyperSwap replication applies to both planned and unplanned replication swaps. When a session has Open HyperSwap enabled, an I/O error on the primary site automatically causes the I/O to switch to the secondary site without any user interaction and with minimal application impact. In addition, while Open HyperSwap is enabled, the Metro Mirror session supports disaster recovery. If a write is successful on the primary site but is unable to get replicated on the secondary site, IBM Tivoli Storage Productivity Center for Replication suspends all replication for the session, thus ensuring that a consistent copy of the data exists on the secondary site. If the system fails, this data might not be the latest data, but the data will be consistent and allow the user to manually switch host servers to the secondary site.
Soft removal of hardware relationships
When you remove a copy set from IBM Tivoli Storage Productivity Center for Replication, you can choose to keep the hardware relationships on the storage systems. This is useful when you want to migrate from one session type to another or when resolving problems.
Download log packages from the graphical user interface
To aid in speedy diagnostics of IBM Tivoli Storage Productivity Center for Replication anomalies, you can download a log package to the local system from the graphical user interface. You no longer need to log into the IBM Tivoli Storage Productivity Center for Replication server to collect the log package.
Global Mirror and Metro Mirror Path Manager
Provides peer-to-peer remote copy (PPRC) path support. The Path Manager allows you to:
- Specify what ports to use when establishing the PPRC paths, and keep that information persistent for use when the path is terminated because of a peer-to-peer suspend operation.
- Specify port pairings in a simple CSV file format to establish PPRC data paths (a sketch of such a file follows this list). The specified port pairings are used whenever new paths are required to be established.
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 13
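Conceptually, each line of the CSV file associates a source and target logical subsystem (LSS) pair with one or more source port:target port pairs. The line below is purely illustrative: the storage system serial numbers, LSS numbers, and port IDs are invented for this example, and the exact column syntax is defined in the Tivoli Storage Productivity Center for Replication documentation for your release, which you should consult before creating the file:
2107.04131:00,2107.21356:01,0x0030:0x0100;0x0031:0x0101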
• 7894Overview.fm Draft Document for Review February 17, 2011 2:17 am
Additional details available for Global Mirror sessions
For Global Mirror sessions, the Session Details panel now includes the Global Mirror Info tab. This tab displays information about the Global Mirror session, including information about the Global Mirror master, consistency groups that have been formed, and data exposure time.
SAN Volume Controller session enhancements
IBM Tivoli Storage Productivity Center for Replication supports the following:
- SAN Volume Controller space-efficient volumes in all IBM Tivoli Storage Productivity Center for Replication SAN Volume Controller sessions. SAN Volume Controller space-efficient volumes are intended to be used as FlashCopy® targets.
- SAN Volume Controller incremental FlashCopy in the IBM Tivoli Storage Productivity Center for Replication FlashCopy, Metro Mirror with practice, and Global Mirror with practice sessions.
DS8000 session enhancements
IBM Tivoli Storage Productivity Center for Replication supports the following:
- DS8000 extent space-efficient volumes on all IBM Tivoli Storage Productivity Center for Replication DS8000 sessions. IBM Tivoli Storage Productivity Center for Replication displays whether a volume is extent space efficient or not. There are some restrictions on whether a space-efficient volume can be placed in a copy set. These restrictions are based on the DS8000 microcode.
- Multiple Global Mirror sessions in a storage system, which allow you to create multiple sessions and individually manage (start, suspend, recover, and so on) data assigned to different hosts or applications.
DB2 no longer supported as the datastore for operational data
With version 4.2, IBM Tivoli Storage Productivity Center for Replication no longer supports DB2 as the datastore for its operational data. It now uses an embedded repository for its operational data. The IBM Tivoli Storage Productivity Center for Replication 4.2 installation program automatically migrates any data in an existing and operational IBM Tivoli Storage Productivity Center for Replication DB2 database to the embedded repository as part of upgrading to IBM Tivoli Storage Productivity Center for Replication 4.2 from an earlier version. New IBM Tivoli Storage Productivity Center for Replication 4.2 installations use the embedded repository by default.
1.5 Contents of the book
The contents of this book focus on the installation of, and migration to, the functions provided in Tivoli Storage Productivity Center V4.2. A hands-on scenario approach is taken whenever possible. The following is a list of the topics documented in the remainder of this book:
- Tivoli Storage Productivity Center V4.2 introduction and overview
- Install of Tivoli Storage Productivity Center base code
- Tivoli Storage Productivity Center installation on Linux
- Tivoli Storage Productivity Center installation and upgrade on AIX
- Migrate Tivoli Storage Productivity Center base code to current level
- Agent migration and upgrade
- Native API
- Storage Resource Agent
- System Storage Productivity Center
- Tivoli Storage Productivity Center for Disk Midrange Edition
14 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Overview.fm
- Tivoli Storage Productivity Center for Replication
- XIV Support
- SAN Planner
- Job Management Panel
- Fabric Enhancements
- VIO Server environment
- Native Reporting provided in the Tivoli Storage Productivity Center
- Tivoli Storage Productivity Center database backup on Windows
- Tivoli Storage Productivity Center database backup on AIX
- Tivoli Storage Productivity Center database backup on Linux
Chapter 1. Tivoli Storage Productivity Center V4.2 introduction and overview 15
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
2
Chapter 2. Tivoli Storage Productivity Center install on Windows
In this chapter, we show the step-by-step installation of Tivoli Storage Productivity Center V4.2 on the Windows platform. Of the available installation paths, Typical and Custom, we describe the Custom installation in our environment. We also show TPC for Replication install considerations.
The installation documented in this book is based on an environment where clean servers are available for the install. A summary of the major changes in V4.2 is also provided.
© Copyright IBM Corp. 2010. All rights reserved. 17
• 7894Install.fm Draft Document for Review February 17, 2011 2:17 am
2.1 Tivoli Storage Productivity Center installation
Tivoli Storage Productivity Center provides an installation wizard that guides you through the installation of the Tivoli Storage Productivity Center servers and agents. The installation in this chapter is not related to any of the different licenses that are available. All editions use the same code base and as such all the panels look the same. The prerequisite components have to be installed prior to invoking the installation wizard.
2.1.1 Installation overview
In order to get Tivoli Storage Productivity Center V4.2 to work, follow these steps:
1. Check that the system meets the prerequisites. See 2.2, "Preinstallation steps for Windows" on page 20.
2. Install the prerequisite components. See 2.3, "Installing TPC prerequisites" on page 24.
3. Install Tivoli Storage Productivity Center components. See 2.4, "Installing Tivoli Storage Productivity Center components" on page 34.
4. Install Tivoli Storage Productivity Center agents. See 2.4.3, "Agent installation" on page 61.
You should understand that there is a difference between an agent installation and a deployment. We use the term installation if the agent is locally installed with a GUI or CLI installer, but we say deployed when the Tivoli Storage Productivity Center server is running and pushes the agent onto a server, without being locally logged in on that system.
You can install all the Tivoli Storage Productivity Center components using Typical installation or Custom installation.
Typical installation
The Typical installation allows you to install all the components of the Tivoli Storage Productivity Center on the local server in one step, but you still can decide which components to install:
- Server: Data Server, Device Server, Replication Manager, and TIP
- Clients: Tivoli Storage Productivity Center GUI
- Storage Resource Agent
The drawback of using the Typical installation is that the defaults for the components are used. Our recommendation is not to use the Typical installation, because the control of the installation process is much better when you use the Custom installation method.
Custom installation
The Custom installation allows you to install parts of Tivoli Storage Productivity Center separately and provides options to change default settings, like user IDs and directories. This is the installation method that we recommend. When you install Tivoli Storage Productivity Center, you have these installable components:
- Database Schema
- Data Server and Device Server
- Graphical User Interface (GUI)
18 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
- Command Line Interface (CLI)
- Storage Resource Agent
Tivoli Storage Productivity Center for Replication install considerations
At about 75% of the installation, the installer will launch the Tivoli Storage Productivity Center for Replication installation wizard to give you the option to also change some installation parameters. You basically have to step through it and press Finish to start the Tivoli Storage Productivity Center for Replication installation procedure. Once this is done, you have to click Finish to return to the Tivoli Storage Productivity Center installer to complete the last few steps of the installation.
Installation timing
The approximate time to install Tivoli Storage Productivity Center, including Tivoli Integrated Portal, is about 60 minutes. The approximate time to install Tivoli Storage Productivity Center for Replication is about 20 minutes.
2.1.2 Product code media layout and components
In this section, we describe the contents of the product media at the time of writing. The media content will differ depending on whether you are using the Web images or the physical media shipped with the Tivoli Storage Productivity Center V4.2 package.
Passport Advantage and Web media content
The Web media consists of a disk image and an SRA zip file. The disk image is broken up into four parts:
- Disk1 part 1 - contains these Tivoli Storage Productivity Center components:
  – Database Schema
  – Data Server
  – Device Server
  – GUI
  – CLI
  – Storage Resource agent
- Disk1 part 2 - contains these Tivoli Storage Productivity Center components:
  – IBM Tivoli Integrated Portal
  – IBM Tivoli Storage Productivity Center for Replication
- Disk1 part 3 - contains:
  – IBM Tivoli Integrated Portal Fixpack
Note: Part 1, part 2, and part 3 are required for every TPC installation and need to be downloaded and extracted to a single directory.
- Disk1 part 4 - contains an optional component:
  – IBM Tivoli Storage Productivity Center Monitoring Agent for IBM Tivoli Monitoring
Note: On Windows, ensure that the directory name where the installation images reside has no spaces or special characters; either will cause the Tivoli Storage Productivity Center installation to fail. For example, this happens if you have a directory name such as: C:\tpc 42 standard edition\disk1
Chapter 2. Tivoli Storage Productivity Center install on Windows 19
• 7894Install.fm Draft Document for Review February 17, 2011 2:17 am
The SRA zip file contains the Tivoli Storage Productivity Center Storage Resource Agents (SRAs). It does not come with a GUI installer. In order to understand how this installation method works see <<< SRA chapter >>>
Tivoli Storage Productivity Center Storage Resource Agent contains the local agent installation components:
- Storage Resource agent
- Installation scripts for the Virtual I/O server
The content of this disk is:
- Directory: readme
- Directory: sra
- File: version.txt
In addition to the images mentioned above, there are these images available:
- Tivoli Storage Productivity Center Storage National Language Support
- IBM Tivoli Storage Productivity Center for Replication Two Site Business Continuity License, which is available for Windows, Linux and AIX
- IBM Tivoli Storage Productivity Center for Replication Three Site Business Continuity License, which is available for Windows, Linux and AIX
Physical media
The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk1 part 1 and Disk1 part 2 content described in "Passport Advantage and Web media content" on page 19. The physical media CD is the same as the Web Disk2 media.
2.2 Preinstallation steps for Windows
Certain prerequisite components need to be installed before proceeding with the Tivoli Storage Productivity Center V4.2 installation. For Tivoli Storage Productivity Center V4.2, DB2 UDB Enterprise Server Edition is the only prerequisite component. The following list shows the supported levels of DB2:
- IBM DB2 UDB Enterprise Server Edition
  • v9.1 Fix Pack 2 or later
  • v9.5 Fix Pack 3a or later
  • v9.7 without any Fix Pack
Attention: For DB2 9.7, use the version of DB2 shipped with Tivoli Storage Productivity Center. Do not use DB2 9.7 with fix pack 1 or fix pack 2. This causes issues with Tivoli Storage Productivity Center. (A quick way to check the installed DB2 level is shown in the note that follows.)
Starting from Tivoli Storage Productivity Center V4.1, the installation of Tivoli Agent Manager is optional. You are required to install it only if you need to use Data agents on platforms that are not supported with Storage Resource Agents. Agent Manager 1.3.2 (any subversion) supports DB2 9.1. For DB2 9.5 support, you need to use Agent Manager version 1.3.2.30, which is shipped with Tivoli Storage Productivity Center 4.1.1. If you are planning to use DB2 9.7, you must install Agent Manager 1.4.x or later.
20 Tivoli Storage Productivity Center V4.2 Release Update
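Note: After DB2 is installed, or before upgrading an existing copy, you can confirm the exact DB2 version and fix pack level from a DB2 command window with the db2level command. The output below is only a sketch of what an unpatched DB2 9.7 installation might report; the informational tokens on your system will differ:
db2level
DB21085I  Instance "DB2" uses "64" bits and DB2 code release "SQL09070" with level identifier "08010107".
Informational tokens are "DB2 v9.7.0.441", "s090521", "NT3295", and Fix Pack "0".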
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
Order of prerequisite component installation
The order to follow when installing the prerequisite components is:
1. DB2 UDB
2. Optionally, Tivoli Agent Manager; this can also be installed later, when TPC is already running.
2.2.1 Verifying system hardware and software prerequisites
For the hardware and software prerequisites, refer to the Tivoli Storage Productivity Center support site:
http://www-947.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Productivity_Center_Standard_Edition
2.2.2 Verifying primary domain name systems
Before you start the installation, we recommend that you verify whether a primary domain name system (DNS) suffix is set. This can require a computer restart.
To verify the primary DNS name, follow these steps:
1. Right-click My Computer on your desktop.
2. Click Properties. The System Properties panel is displayed as shown in Figure 2-1.
3. Click the Computer Name tab. On the panel that is displayed, click Change.
Figure 2-1 System Properties
Chapter 2. Tivoli Storage Productivity Center install on Windows 21
• 7894Install.fm Draft Document for Review February 17, 2011 2:17 am
4. Enter the host name in the Computer name field. Click More to continue (see Figure 2-2).
Figure 2-2 Computer name
5. In the next panel, verify that the Primary DNS suffix field displays the correct domain name. Click OK (see Figure 2-3).
Figure 2-3 DNS domain name
6. If you made any changes, you must restart your computer for the changes to take effect (see Figure 2-4).
Figure 2-4 You must restart the computer for changes to take effect
22 Tivoli Storage Productivity Center V4.2 Release Update
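Tip: The same values can be checked from a Windows command prompt. The ipconfig /all command is available on all supported Windows levels; the Host Name and Primary Dns Suffix fields near the top of its output should match the computer name and domain name verified in the previous steps. The host name and suffix shown here are example values only:
C:\> ipconfig /all
Windows IP Configuration
   Host Name . . . . . . . . . . . . : colorado
   Primary Dns Suffix  . . . . . . . : itso.ibm.com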
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
2.2.3 Activating NetBIOS settings
If NetBIOS is not enabled on Microsoft Windows 2003, then a GUID is not generated. You must verify and activate the NetBIOS settings.
On your Tivoli Storage Productivity Center server, go to Start → Control Panel → Network Connections. Select your Local Area Connection. From the Local Area Connection Properties panel, double-click Internet Protocol (TCP/IP). The next panel is the Internet Protocol (TCP/IP) Properties. Click Advanced as shown in Figure 2-5.
Figure 2-5 TCP/IP properties
Chapter 2. Tivoli Storage Productivity Center install on Windows 23
• 7894Install.fm Draft Document for Review February 17, 2011 2:17 am
On the WINS tab, select Enable NetBIOS over TCP/IP and click OK (see Figure 2-6).
Figure 2-6 Advanced TCP/IP properties
2.2.4 User IDs and passwords to be used and defined
For considerations and information on the user IDs and passwords that you need to define or set up during the Tivoli Storage Productivity Center installation, see the planning chapter in the Tivoli Storage Productivity Center Installation and Configuration Guide, SC27-2337. We have added a useful table in Appendix B, "Worksheets" on page 643 that can be used to track the user IDs created and storage subsystem information.
Note: It is a good practice to use the worksheets in Appendix B, "Worksheets" on page 643 to record the user IDs and passwords used during the installation of Tivoli Storage Productivity Center.
2.3 Installing TPC prerequisites
In this section, we show how to install the Tivoli Storage Productivity Center prerequisites on Windows. We perform a typical installation of DB2 Enterprise Server Edition Version 9.7.
Before beginning the installation, it is important that you log on to your system as a local administrator with Administrator authority.
24 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
2.3.1 DB2 installation
To begin the installation of DB2, follow these steps:
1. Insert the IBM DB2 Installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer, go to the DB2 installation image path, and double-click setup.exe.
Note: Only the user ID that has installed the DB2 product has the privilege to issue the db2start and db2stop commands.
You will see the Welcome panel, as shown in Figure 2-7. Select Install a Product to proceed with the installation.
Figure 2-7 DB2 Setup Welcome panel
Chapter 2. Tivoli Storage Productivity Center install on Windows 25
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am 2. The next panel allows you to select the DB2 product to be installed. Select the DB2 Enterprise Server Edition Version 9.7 by clicking Install New to proceed as shown in Figure 2-8. Figure 2-8 Select product 3. The DB2 Setup wizard panel is displayed, as shown in Figure 2-9. Click Next to proceed. Figure 2-9 Setup wizard 4. The next panel displays the license agreement; click I accept the terms in the license agreement (Figure 2-10).26 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
Figure 2-10 License agreement
5. To select the installation type, accept the default of Typical and click Next to continue (see Figure 2-11).
Figure 2-11 Typical installation
6. Select Install DB2 Enterprise Server Edition on this computer and save my settings in a response file (see Figure 2-12). Specify the path and the file name for the response file in the Response file name field. The response file will be generated at the end of the installation flow, and it can be used to perform additional silent installations of DB2 using the same parameters specified during this installation (a sketch of such a silent installation follows Figure 2-12). Click Next to continue.
Figure 2-12 Installation action
Chapter 2. Tivoli Storage Productivity Center install on Windows 27
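Tip: The response file saved in this step can later drive an unattended DB2 installation on another server. The command below is a sketch only: it assumes the response file was saved as C:\db2ese.rsp, and it uses the /u (response file) and /l (log file) options of the DB2 setup program. Check the DB2 documentation for your level for the complete setup syntax:
setup /u C:\db2ese.rsp /l C:\db2setup.log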
• 7894Install.fm Draft Document for Review February 17, 2011 2:17 am
7. The panel shown in Figure 2-13 shows the default values for the drive and directory to be used as the installation folder. You can change these or accept the defaults, then click Next to continue. In our installation, we accept to install on the C: drive.
Figure 2-13 Installation folder
8. The next panel requires user information for the DB2 Administration Server; it can be a Windows domain user. If it is a local user, select None - use local user account for the Domain field. The user name field is prefilled with a default user name. You can change it or leave the default, and type the password of the DB2 user account that you want to create (see Figure 2-14). Leave the check-box Use the same user name and password for the remaining DB2 services checked and click Next to continue. DB2 creates a user with the following administrative rights:
– Act as a part of an operating system.
– Create a token object.
28 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm – Increase quotas. – Replace a process-level token. – Log on as a service. Figure 2-14 User Information 9. In the Configure DB2 instances panel, accept the default and click Next to continue (see Figure 2-15). Figure 2-15 Configure DB2 instances Chapter 2. Tivoli Storage Productivity Center install on Windows 29
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am 10.The next panel allows you to specify options to prepare the DB2 tools catalog. Accept the defaults, as shown in Figure 2-16. Verify that Prepare the DB2 tools catalog on this computer is not selected. Click Next to continue. Figure 2-16 Prepare db2 tools catalog 11.The next panel, shown in Figure 2-17, allows you to set the DB2 server to send notifications when the database needs attention. Ensure that the check-box Set up your DB2 server to send notification is unchecked and then click Next to continue. Figure 2-17 Health Monitor30 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm 12.Accept the defaults for the DB2 administrators group and DB2 users group in the Enable operating system security for DB2 objects panel shown in Figure 2-18 and click Next to proceed. Figure 2-18 Enable operating system security for DB2 objects 13.Figure 2-19 shows the summary panel about what is going to be installed, based on your input. Review the settings and click Finish to continue. Figure 2-19 Summary panel Chapter 2. Tivoli Storage Productivity Center install on Windows 31
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am The DB2 installation proceeds and you see a progress panel similar to the one shown in Figure 2-20. Figure 2-20 DB2 Enterprise Server Edition installation progress 14.When the setup completes, click Next, as shown in Figure 2-21. Figure 2-21 DB2 setup summary panel32 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Install.fm
15. The next panel allows you to install additional products. In our installation, we clicked Finish on the panel shown in Figure 2-22 to exit the DB2 setup wizard.
Figure 2-22 DB2 setup final panel
16. Click Exit on the DB2 Setup Launchpad (Figure 2-23) to complete the installation.
Figure 2-23 DB2 Setup Launchpad
Verifying the installation
Follow these steps to verify the DB2 installation:
1. Launch a DB2 Command window: Start → IBM DB2 → DB2COPY1 (Default) → Command Line Tools → Command Window (see Figure 2-24).
Chapter 2. Tivoli Storage Productivity Center install on Windows 33
Figure 2-24 DB2 Command Window

2. Create the SAMPLE database by entering the db2sampl command, as shown in Figure 2-25.

Figure 2-25 Create the SAMPLE database

3. Enter the following DB2 commands. Connect to the SAMPLE database, issue a simple SQL query, and reset the database connection:
db2 connect to sample
db2 "select * from staff where dept = 20"
db2 connect reset
The result of these commands is shown in Figure 2-26.

Figure 2-26 DB2 commands results

2.4 Installing Tivoli Storage Productivity Center components

Now that all the prerequisites have been installed, we can install the Tivoli Storage Productivity Center components, keeping in mind that with Tivoli Storage Productivity Center V4.2 both Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication are installed.
We split the installation into two separate stages: we install the Database Schema first and, after that, we install the Data Server and the Device Server. The reason is that if you install all the components in one step and any part of the installation fails (for example, because of space or passwords), the installation suspends and rolls back, uninstalling all the previously installed components. Alternatively, you can install the schema and the remaining Tivoli Storage Productivity Center components in a single step.

2.4.1 Creating the Database Schema

Before starting the installation, verify that a supported version of DB2 Enterprise Server Edition has been installed and started.

Important: Log on to your system as a local administrator with database authority.

Follow these steps:
1. If Windows autorun is enabled, the installation program starts automatically. If it does not, open Windows Explorer and go to the Tivoli Storage Productivity Center CD-ROM drive or directory. Double-click setup.exe.
2. Choose your language and click OK (see Figure 2-27).

Figure 2-27 Language selection panel
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am 3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 2-28). Figure 2-28 License panel 4. Figure 2-29 shows how to select typical or custom installation. You have the following options: – Typical installation: This selection allows you to install all of the components on the same computer by selecting Servers, Agents, and Clients. – Custom installation: This selection allows you to install the database schema, the TPC server, CLI, GUI and Storage Resource Agent separately. – Installation licenses: This selection installs the Tivoli Storage Productivity Center licenses. The Tivoli Storage Productivity Center license is on the CD. You only need to run this option when you add a license to a Tivoli Storage Productivity Center package that has already been installed on your system. For example, if you have installed Tivoli Storage Productivity Center for Data package, the license will be installed automatically when you install the product. If you decide to later enable Tivoli Storage Productivity Center for Disk, run the installer and select Installation licenses. This option will allow you to install the license key from the CD. You do not have to install the Tivoli Storage Productivity Center for Disk product. In this chapter, we document Custom Installation. Select also the directory where you want to install Tivoli Storage Productivity Center. A default install directory is suggested; you can accept it or change it and then click Next to continue.36 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-29 Custom installation 5. In the Custom installation, you can select all the components in the panel shown in Figure 2-30. By default, all components are checked. Because in our scenario, we show the installation in stages, we only select the option to Create database schema, and click Next to proceed (see Figure 2-30). Figure 2-30 Custom installation component selection Chapter 2. Tivoli Storage Productivity Center install on Windows 37
6. To start the database creation, you must specify a DB2 user ID and password. We suggest that you use the same DB2 user ID that you created when you installed DB2. Click Next, as shown in Figure 2-31.

Figure 2-31 DB2 user and password

Note: It is important that the user entered here is part of the DB2ADMNS group, because only members of that group are allowed to perform the actions required to create a new database and install the schema into it. If the user is not part of the DB2ADMNS group, it is likely that the installation will fail at about 7% (see Figure 2-32 on page 39).
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-32 DB2 error during schema installation 7. Enter your DB2 user ID and password again. This ID doesn’t have to be the same ID as the first one. Make sure that you have the option Create local database selected. By default, a database named TPCDB is created. Click Database creation details... to continue (see Figure 2-33). . Figure 2-33 DB2 user and create local database Chapter 2. Tivoli Storage Productivity Center install on Windows 39
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am The panel in Figure 2-34 allows you to change the default space assigned to the database. Review the defaults and make any changes. In our installation we accepted the defaults. For better performance, we recommend that you: – Allocate TEMP DB on a separate physical disk from the Tivoli Storage Productivity Center components. – Create larger Key and Big Databases. Select System managed (SMS) and click OK and then Next to proceed (Figure 2-34). To understand the advantage of an SMS database versus a DMS database or the Automatic Storage, refer to the section entitled, “Selecting an SMS or DMS table space” in Appendix A, “DB2 table space considerations” on page 639. Figure 2-34 DB schema space Note: The Tivoli Storage Productivity Center schema name cannot be longer than eight characters. 8. You will see the Tivoli Storage Productivity Center installation information that you selected as shown in Figure 2-35. Click Install to continue.40 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-35 Tivoli Storage Productivity Center installation information Figure 2-36 is the Database Schema installation progress panel. Wait for the installation to complete. Figure 2-36 installing database schema 9. Upon completion, the Successfully Installed panel is displayed. Click Finish to continue (Figure 2-37). Chapter 2. Tivoli Storage Productivity Center install on Windows 41
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am Figure 2-37 Installation summary information Verifying the installation To check the installation, choose Start  All Programs  IBM DB2  General Administration Tools  Control Center, to start the DB2 Control Center. Under All Databases, verify that you have at least a database named TPCDB, as shown in Figure 2-38. Figure 2-38 Verifying DB2 installation Attention: Do not edit or modify anything in the DB2 Control Center. This can cause serious damage to your table space. Simply use the DB2 Control Center to browse your configuration.42 Tivoli Storage Productivity Center V4.2 Release Update
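Given the caution above, a read-only alternative is to verify the database from a DB2 Command Window instead of the Control Center. This is a minimal check, assuming the default database name TPCDB chosen during the schema installation:

db2 list database directory

The output should include an entry with Database name = TPCDB; because this command only reads the database catalog, it cannot modify your table spaces.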
Log files
Check for errors and Java exceptions in the log files at the following locations:
<InstallLocation>\TPC.log
<InstallLocation>\log\dbSchema\install
For Windows, the default InstallLocation is C:\Program Files\IBM\TPC. A successful installation is indicated by the success message at the end of the log files.

2.4.2 Installing Tivoli Storage Productivity Center components

In this step we perform a custom installation to install the following components:
Data Server
Device Server
GUI
CLI

Note: We do not recommend installing the Storage Resource Agent (SRA) at this time. Installing an SRA through the installer requires you to also uninstall it through the installer, so in most cases using the TPC GUI to deploy agents (instead of installing them) is the more flexible approach.

During this process, two additional components are also installed: the Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication.

Preinstallation tasks
To install the Data Server and Device Server components, you must log on to the Windows system with a user ID that has the following rights, which any user that is part of the DB2ADMNS group has automatically:
Log on as a service.
Act as part of the operating system.
Adjust memory quotas for a process.
Create a token object.
Debug programs.
Replace a process-level token.
Be certain that the following tasks are completed:
The Database Schema must be installed successfully to start the Data Server installation.
The Data Server must be successfully installed prior to installing the GUI.
The Device Server must be successfully installed prior to installing the CLI.

Local Database Validation error
After a successful DB2 9.7 installation on 64-bit Windows 2003 and Windows 2008 servers, the database instance will not be recognized by the Tivoli Storage Productivity Center installer without a reboot of the server. During a Tivoli Storage Productivity Center V4.2 install or upgrade, a pop-up window containing the following message may be seen: "Local database validation: No valid local database found on the system for installation of the Data Server, Device server or creation of the database schema." This requirement is documented in the Tivoli Storage Productivity Center V4.2 InfoCenter.
https://www-304.ibm.com/support/docview.wss?uid=swg21452614
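Because of the reboot requirement described above, it is worth confirming that the DB2 instance is up before you launch the installer. A quick check from a DB2 Command Window, assuming the default DB2COPY1 instance:

db2start

If the database manager is already running, the command returns SQL1026N ("The database manager is already active"), which is harmless; any other error should be resolved before you proceed with the Tivoli Storage Productivity Center installation.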
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am Custom installation To perform a custom installation, follow these steps: 1. Start the Tivoli Storage Productivity Center installer. 2. Choose the language to be used for installation. 3. Accept the terms of the License Agreement. 4. Select the Custom Installation. 5. Select the components you want to install. In our scenario, we select the Servers, GUI, CLI as shown in Figure 2-39. Notice that the field, Create database schema, is grayed out. Click Next to continue. Note: Because we selected to install also the Data agents and Fabric agents, the Register with the agent manager check box is selected and grayed out. Figure 2-39 Installation selection 6. If you are running the installation on a system with at least 4 GB but less than 8 GB of RAM, you will get the warning message shown in Figure 2-40. Click the OK button to dismiss it and proceed with the installation. Figure 2-40 Memory warning panel 7. In the Database administrator information, the DB2 user ID and password are filled in because we used them to create the Database Schema. See Figure 2-41. Click Next.44 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-41 DB2 User ID and password The user ID is saved to the install/uninstall configuration files, so if the password has changed from the time you first installed a TPC component a wrong password might be populated into this panel. 8. We want to use the database TPCDB we created in the previous section on the same machine. So we select Use local database and we click Next to continue (Figure 2-42). Figure 2-42 Use local database selection TPC can also run having the DB schema installed on another server. In this case you have to install the TPC schema on that server following the procedure documented in the previous section. Then, installing the other TPC components, you have to select the Chapter 2. Tivoli Storage Productivity Center install on Windows 45
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am Use remote database option and specify the host name of the server running the DB2 Manager. The other fields must be prefilled as shown in Figure 2-43. Verify their values and click Next. Note: If you have the TPC schema already installed locally, the option of using a remote database is disabled. You have to uninstall the local copy and rerun the installation program to enable the remote database option. Figure 2-43 Remote database selection If you selected to use a remote database, a warning message shown in Figure 2-44 is presented, reminding you to ensure that the remote DB2 instance is running before proceeding.46 Tivoli Storage Productivity Center V4.2 Release Update
Figure 2-44 Ensure that DB2 is running on the remote system

9. In the panel in Figure 2-45, enter the following information:
– Data Server Name: Enter the fully qualified host name of the Data Server.
– Data Server Port: Enter the Data Server port. The default is 9549.
– Device Server Name: Enter the fully qualified host name of the Device Server.
– Device Server Port: Enter the Device Server port. The default is 9550.
– TPC Superuser: Enter the name of an OS group that will be granted the superuser role within TPC.
Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value that you enter for the LDAP TPC Administrator group overrides the value that you entered here for the TPC superuser.
– Host Authentication Password: This is the password used for internal communication between TPC components, such as between the Data Server and the Device Server.
Note: This password can be changed by right-clicking Administrative Services -> Services -> Device Server -> Device Server and selecting change password.
– Data Server Account Password:
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am For Windows only. TPC installer will create an ID called TSRMsrv1 with the password you specified here to run the Data Server service. The display name for the Data Server in Windows Services panel is: IBM Tivoli Storage Productivity Center - Data Server – WebSphere® Application Server admin ID and Password: This is the user ID and password required by the Device Server to communicate with the embedded WebSphere. You can use the same user as the user that was entered on the panel in Figure 2-42 on page 45. Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value entered for the LDAP TPC Administrator group overrides the value you entered here for the WebSphere Application Server admin ID and password. If you click the Security roles... button: The Advanced security roles mapping panel is displayed. You can assign a Windows OS group to a role group for each TPC role that you want to make an association with, so you can have separate authority IDs to do various TPC operations. The operating group must exist before you can associate a TPC role with a group. You do not have to assign security roles at installation time, you can assign these roles after you have installed TPC. If you click the NAS discovery... button: The NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time, you can configure it after you installed TPC. Click Next to continue (Figure 2-45). Figure 2-45 Component information for installation 10.The following panel lets us select an existing Tivoli Integrated Portal to use or install a new one. Because we are installing a new instance, we have to specify the installation48 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm directory and the port number. See Figure 2-46. TIP will use 10 port numbers starting from the one specified in the Port field (called Base Port). The 10 ports will be: – base port – base port+1 – base port+2 – base port+3 – base port+5 – base port+6 – base port+8 – base port+10 – base port+12 – base port+13 The TIP administrator ID and password are pre-filled with the WebSphere admin ID and password specified during the Device Server installation (see Figure 2-45 on page 48). Figure 2-46 Tivoli Integrated Portal panel 11.The next panel, shown in Figure 2-47, allows you to choose the authentication method that TPC will use to authenticate the users: – If you want to authenticate the users against the operating system, select this option and click Next. – If you want to use an LDAP or Active Directory, you need to have an LDAP server already installed and configured. If you decide to use this option, select the LDAP/Active directory radio button, click Next, and additional panels are displayed. Chapter 2. Tivoli Storage Productivity Center install on Windows 49
Figure 2-47 Authentication type panel

a. If you selected the LDAP/Active Directory option, the panel shown in Figure 2-48 is displayed. Insert the LDAP Server host name and change the LDAP Port Number if it does not correspond to the proposed default value. You need to fill in the Bind Distinguished Name and the Bind Password only if anonymous binds are disabled on your LDAP server. Then click Next to continue.

Figure 2-48 LDAP Server panels

b. In the panel shown in Figure 2-49, you are required to insert the LDAP RDN® for users and groups and the attributes that must be used to search the directory. When you click
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Next, the TPC installation makes an attempt to connect to the LDAP server to validate the provided parameters. If the validation is successful, you are prompted with the next panel; otherwise an error message is shown explaining the problem encountered. Figure 2-49 LDAP RDN details c. In the panel shown in Figure 2-50, you are requested to specify the LDAP user ID and password corresponding to the TPC Administrator and the LDAP group that will be mapped to the TPC Administrator group. Also in this panel, after filling in the fields and clicking Next, the installation program will connect to the LDAP server to verify the correctness of the provided values. If the validation is successful, the next installation panel is shown. Figure 2-50 LDAP user and group for TPC administration Chapter 2. Tivoli Storage Productivity Center install on Windows 51
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am Warning: Due to the WebSphere Application Server APAR PK77578, the LDAP TPC Administrator user name value must not contain a space in it. 12.The Summary information panel is displayed. Review the information, then click Install to continue (see Figure 2-51). Figure 2-51 Summary of installation The installation starts. You might see several messages related to Data Server installation similar to Figure 2-52. Figure 2-52 Installing Data Server52 Tivoli Storage Productivity Center V4.2 Release Update
You might also see several messages about the Device Server installation, as shown in Figure 2-53, and after that, messages related to the TIP installation, similar to Figure 2-54.

Figure 2-53 Installing Device Server

Figure 2-54 Installing TIP

Note: The installation of the Tivoli Integrated Portal (TIP) can be a time-consuming process, requiring more time than the other TPC components. The TIP installation is finished when the progress bar has reached 74%.
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am TPC for Replication installation 13.Upon completion of the TIP installation, the TPC for Replication installation is launched. The TPC installation is temporarily suspended, and the panel in Figure 2-55 remains in the background while the TPC for Replication installation starts (see Figure 2-56.) Figure 2-55 Installation panel launching the TPC for Replication a. The Welcome panel is displayed. See Figure 2-56. Click Next to proceed. Figure 2-56 TPC for replication Welcome panel54 Tivoli Storage Productivity Center V4.2 Release Update
Warning: If you do not plan to use TPC for Replication, do not interrupt the installation by clicking the Cancel button. Doing so interrupts the installation process and triggers a complete rollback of the TPC installation. Complete the installation and then disable TPC for Replication.

b. The installation wizard checks the system prerequisites to verify that the operating system is supported and the appropriate fix packs are installed (see Figure 2-57).

Figure 2-57 System prerequisites check running

c. If the system passes the prerequisites check, the panel shown in Figure 2-58 is displayed. Click the Next button.

Figure 2-58 System prerequisites check passed
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am d. The license agreement panel is shown. Accept it and click Next as shown in Figure 2-59. Figure 2-59 License Agreement Panel e. On the panel shown in Figure 2-60, you can select the directory where TPC for Replication will be installed. A default location is displayed. You can accept it or change it based on your requirements. We decided to install TPC for Replication into the E: drive. When done, click Next to continue. Figure 2-60 Destination Directory panel56 Tivoli Storage Productivity Center V4.2 Release Update
f. In the panel shown in Figure 2-61, you can specify the TPC for Replication user ID and password. This ID is usually the system administrator user ID. If you are using Local OS authentication and you want to enable the Single Sign-On feature for this user ID, you have to provide the same credentials provided for the WebSphere Application Server Administrator (see step 9 on page 47).

Figure 2-61 TPC-R user ID and password

Note: If you want to use another user ID, you need to create it before beginning the installation and ensure that it has administrator rights.

g. The Default ports panel is displayed. Ensure that the selected ports are available on the server and then click Next. See Figure 2-62.

Figure 2-62 TPC-R Ports panel
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am h. Review the settings shown in Figure 2-63 and click Install to start the installation. Figure 2-63 TPC-R Settings panel i. The installation of TPC for Replication starts. Several messages about the installation process are shown, such as the one in Figure 2-64. Figure 2-64 TPC-R installation running j. After the completion of the TPC for Replication installation, a summary panel is shown reporting also the URL where the Web browser can be pointed to access the TPC-R Web-User Interface. By clicking the Finish button, this panel is closed and the installation flow goes back to the TPC installation panels (see Figure 2-65).58 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-65 TPC-R Summary panel Note: Tivoli Storage Productivity Center for Replication is installed with no license. You must install the Two Site or Three Site Business Continuity (BC) license. 14.After the creation of the TPC uninstaller, you see the summary information panel (Figure 2-66). Read and verify the information and click Finish to complete the installation. Figure 2-66 Component installation completion panel Chapter 2. Tivoli Storage Productivity Center install on Windows 59
Verifying the installation
At the end of the installation, we can use the Windows Services panel to verify that the TPC services (see Figure 2-67) have been installed.

Figure 2-67 Windows service

The following services are related to TPC:
IBM Tivoli Storage Productivity Center - Data Server
IBM WebSphere Application Server v6.1 - Device Server
IBM WebSphere Application Server v6.1 - CSM, which is the service related to TPC for Replication
Moreover, another process, shown in Figure 2-68, is present in the Services list; it represents the Tivoli Integrated Portal.

Figure 2-68 TIP Process
All of these services must be present and started.

Log files for Data Server
Check the logs for any errors or Java exceptions. On Windows, the default InstallLocation is C:\Program Files\IBM\TPC. The log files for the Data Server are:
<InstallLocation>\TPC.log
<InstallLocation>\log\data\install
<InstallLocation>\log\install
<InstallLocation>\data\log

Log files for Device Server
Check the log files for any errors. The log files for the Device Server are:
<InstallLocation>\TPC.log
<InstallLocation>\log\device\install
<InstallLocation>\device\log

Log files for GUI
Check the log files for any errors. The log files for the GUI are:
<InstallLocation>\TPC.log
<InstallLocation>\log\gui\install
<InstallLocation>\gui\log

Log files for CLI
Check the log files for any errors. The log files for the CLI are:
<InstallLocation>\TPC.log
<InstallLocation>\log\cli\install

2.4.3 Agent installation

In this section, we present how to locally install Tivoli Storage Productivity Center agents.

Data agent or Fabric agent install
The Tivoli Storage Productivity Center V4.2 installation program does not support installation of the Data agent or Fabric agent. If you want to install the legacy Data agent or Fabric agent, you must have a previous Tivoli Storage Productivity Center installation program that supports installing these agents.

Storage Resource agent installation
You typically install the Storage Resource agent using the Tivoli Storage Productivity Center GUI. However, it is also possible to install it locally on a server through a command line. Refer to “SRA installation methods” on page 258 for additional information and examples. Depending on whether the agent runs as a daemon or non-daemon (on-demand) service, and on the communication protocol that must be used, other parameters might be required.

The images of the Storage Resource agent are located on both TPC image disks under <DiskImage>/data/sra/windows. We navigate to the <DiskImage>/data/sra/windows/bin directory. In our environment the communication is between two Windows machines, so the default communication protocol
used is Windows (SMB). We have also decided to run the agent as a non-daemon service. As a result, the command that we are issuing requires a minimum set of parameters and looks similar to this:
Agent -install -serverPort <serverport> -serverIP <serverIP> -installLoc <installLocation> -userID <userID> -password <password>
The meanings and the values of these parameters are specified in Table 2-1.

Table 2-1 Storage Resource agent install parameters
serverPort – The port of the TPC Data Server. The default value is 9549. Value used: 9549
serverIP – IP address or fully qualified DNS name of the server. Value used: colorado.itso.ibm.com
installLoc – Location where the agent will be installed (see note a). Value used: c:\tpc\sra
userID – The user ID defined on the agent system. This is the user ID that the server can use to connect to the agent system. Value used: Administrator
password – Password for the specified user ID. Value used: itso13sj
a. Make sure that when you specify a directory to install the Storage Resource agent into, you do not specify an ending backslash (\). For example, do not specify C:\agent1\ because this will cause the installation to fail.

Figure 2-69 shows a successful installation of the Storage Resource agent.

Figure 2-69 Successful Storage Resource agent installation
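For example, substituting the values from Table 2-1 (the install location is as reconstructed in the table above), the complete command issued from the <DiskImage>/data/sra/windows/bin directory would look like this:

Agent -install -serverPort 9549 -serverIP colorado.itso.ibm.com -installLoc c:\tpc\sra -userID Administrator -password itso13sj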
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm To verify that the installation completed correctly from the TPC GUI, log on to the TPC GUI and go to Administrative Services  Data Sources  Data/Storage Resource Agents. The installed agent is now present in the list as shown in Figure 2-70. Figure 2-70 Agents in TPC GUI Note: For the agent installed on server maryl.itso.ibm.com, the Agent Type column is Storage Resource and the Last Communication Type is Windows.2.4.4 Disabling TPC or TPC for Replication If you have installed TPC V4.2 on a machine with more than 4 GB of RAM but less than 8 GB we strongly suggest that you run only TPC or TPC for Replication on that machine. In this case you must disable one of the two products. Also, if you have a powerful server but you plan to use only one of the two products, you can disable the other with the procedure we document here. Disabling TPC for Replication To disable the TPC for Replication server, go to Start  Settings  Control Panel  Administrative Tools  Services. Right-click the following service: IBM WebSphere Application Server V6.1 - CSM Then select Properties, as shown in Figure 2-71. Figure 2-71 TPC for Replication service Chapter 2. Tivoli Storage Productivity Center install on Windows 63
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am On the panel shown in Figure 2-72, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel. Figure 2-72 Service properties panel Disabling TPC To disable the TPC, go to Start  Settings  Control Panel  Administrative Tools  Services. Right-click the following service: IBM WebSphere Application Server V6.1 - DeviceServer Then select Properties, as shown in Figure 2-73. Figure 2-73 Services panel On the panel shown in Figure 2-74, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel.64 Tivoli Storage Productivity Center V4.2 Release Update
Figure 2-74 Service properties panel

Repeat the same procedure for the following services:
IBM Tivoli Storage Productivity Center - Data Server
IBM Tivoli Storage Resource agent - <directory>, if a Storage Resource agent is installed. <directory> is where the Storage Resource agent is installed. The default is <TPC_install_directory>\agent.
Optionally, you can also disable the following two services:
Tivoli Integrated Portal - TIPProfile_Port_<xxxxx>, where <xxxxx> indicates the port specified during installation. The default port is 16310.
IBM ADE Service (Tivoli Integrated Portal registry).

Note: Stop Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services and you are not using LDAP.
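The same stop-and-disable sequence can also be scripted from an elevated command prompt instead of the Services panel. This is a sketch only: net stop accepts the service display names shown above, whereas sc config requires the internal service key name, which can differ from the display name (check the service Properties panel for it first):

net stop "IBM WebSphere Application Server V6.1 - CSM"
sc config "<service key name>" start= disabled

Note that sc requires the space after "start=". Repeat the pair of commands for each of the services listed above that you want to disable.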
2.5 Applying a new build

In this section we cover the steps needed to apply a new build to Tivoli Storage Productivity Center on a Windows platform. Several of the steps are the same on other platforms.

1. Check the version installed. To check the currently installed Tivoli Storage Productivity Center version, navigate to Help → About (Figure 2-75). This opens the window shown in Figure 2-76, where you can see that the installed version is 4.2.1.152.

Figure 2-75 Tivoli Storage Productivity Center Help menu

Figure 2-76 TPC version installed

2. Prepare your environment. To upgrade Tivoli Storage Productivity Center, you need to make sure that all the GUIs are closed. There is no need to stop any Tivoli Storage Productivity Center service.

3. Run the installer. Double-click setup.exe to start the installation program. The language selection window for the installation pops up (Figure 2-77 on page 67). Click OK to continue.
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-77 Installer, language selection. 4. On Figure 2-78 accept the license by clicking the radio button and click Next to continue. Figure 2-78 License terms 5. Select Custom installation as the installation type (Figure 2-79 on page 68). Click Next to continue. Chapter 2. Tivoli Storage Productivity Center install on Windows 67
    • 7894Install.fm Draft Document for Review February 17, 2011 2:17 am Figure 2-79 Installation type 6. On the component selection window (Figure 2-80) all options are grayed out so you cannot change them. Click Next to continue. Figure 2-80 Component selection 7. Provide Database administrator information (Figure 2-81 on page 69). The fields should be filled in automatically. Click Next to continue.68 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Install.fm Figure 2-81 Database administrator information 8. Provide database schema information on Figure 2-82. The fields should be autocompleted. Click Next to continue. Figure 2-82 Database schema information Chapter 2. Tivoli Storage Productivity Center install on Windows 69
9. Review the information provided and click Install to proceed with the upgrade (Figure 2-83). During the installation process, several windows show the installation progress.

Figure 2-83 Summary

10.The installation wizard might not always shut down the Device Server service; in that case the installation fails with the message “Cannot upgrade component Device Server”, as shown in Figure 2-84. To get past this error, you need to kill the process that is holding files under C:\Program Files\IBM\TPC\device\apps\was. We used the Process Explorer utility for this purpose, as shown next.

Figure 2-84 Error during installation

Process Explorer can be downloaded for free from the Microsoft website at http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx. It is an executable file (no installation required) that provides much more information about Windows processes than the built-in Windows Task Manager. The following steps show how it works, focusing on the issue we had with the Device Server process not being shut down. It is not our intention to show all the functions of Process Explorer.
a. Once you have downloaded the utility, double-click its icon; it opens a window like the one shown in Figure 2-85. If your operating system is 64-bit, a new executable file called procexp64.exe is generated in the same path where the original one resides.

Figure 2-85 Process Explorer main window

b. Next, we need to identify the process that is causing the installation to fail, so we need to see the image path of each running process. Click View → Select Columns (Figure 2-86) and then, on the Process Image tab, select the Image Path check box, as shown in Figure 2-87. This adds a new column to the Process Explorer main window where you can see the full image path of each running process. Click OK to finish.

Figure 2-86 Select Columns from the View menu
Figure 2-87 Select Columns window

c. Back in the Process Explorer window, scroll right to view the image paths (Figure 2-88) and look for an entry like the one highlighted in the red box.

Figure 2-88 Path column

d. Finally, the highlighted process is the one running from the path reported by the Tivoli Storage Productivity Center installation wizard. Kill the process either by pressing Del on your keyboard or by right-clicking the process and selecting Kill Process. You should then be able to continue with the installation by clicking Next on the Tivoli Storage Productivity Center installation wizard.
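If you prefer not to download a separate utility, the Windows built-in tools can usually locate and end the same process. This is a hedged sketch, assuming the offending process is running from under the path shown in the error message; adjust the path filter to match your installation directory, and run the commands interactively from an elevated command prompt (WQL requires the backslashes in the path to be doubled):

wmic process where "ExecutablePath like '%TPC\\device\\apps\\was%'" get ProcessId,ExecutablePath
taskkill /F /PID <PID reported by the previous command>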
11.After the installation completes, click Finish to exit the install wizard (Figure 2-89).

Figure 2-89 Installation completed window
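After the upgrade, you can confirm that the TPC services came back up. A quick sketch, using only built-in commands, lists the started services whose display names match either word (findstr treats the quoted space-separated words as alternatives):

net start | findstr /i "Tivoli WebSphere"

All the services listed in “Verifying the installation” earlier in this chapter (Data Server, Device Server, CSM, and the Tivoli Integrated Portal profile) should appear in the output.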
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 3 Chapter 3. Tivoli Storage Productivity Center install on Linux In this chapter, we show the step-by-step installation of Tivoli Storage Productivity Center V4.2 on the Red Hat Linux platform. Of the available installation paths, Typical and Custom, we describe the Custom installation in our environment.© Copyright IBM Corp. 2010. All rights reserved. 75
3.1 Tivoli Storage Productivity Center installation on Linux

This chapter describes how to install IBM Tivoli Storage Productivity Center Standard Edition V4.2 and IBM Tivoli Storage Productivity Center for Replication V4.2 on 64-bit Red Hat Enterprise Linux 5 using the graphical interface. The prerequisite component (DB2) is installed prior to invoking the installation program. This section also provides information about the preparation work required before installing the Tivoli Storage Productivity Center family.

3.1.1 Installation overview

In order to successfully install Tivoli Storage Productivity Center V4.2, you need to follow these steps:
Check that the system meets the prerequisites. Refer to 3.2, “Preinstallation steps for Linux” on page 78.
Install and configure all required prerequisite components. Refer to 3.3, “Installing the TPC prerequisite for Linux”.
Install the Tivoli Storage Productivity Center database schema. Refer to 3.4.1, “Creating the database schema” on page 97.
Install the Tivoli Storage Productivity Center server components. Refer to 3.4.2, “Installing TPC Servers, GUI and CLI” on page 103.
You can install Tivoli Storage Productivity Center family components using either Typical installation or Custom installation.

Typical installation
The Typical installation allows you to install all the components of the Tivoli Storage Productivity Center on the local server in one step. Our recommendation is not to use the Typical installation, because the Custom installation method gives you much better control of the installation process.

Custom installation
The Custom installation allows you to install each component of the Tivoli Storage Productivity Center separately and deploy remote Fabric and/or Data agents on various computers. Additional panels are presented, allowing you to control the installation sequence of the components and specify additional TPC parameters. This is the installation method that we recommend.

Note: Tivoli Storage Productivity Center for Replication is no longer a stand-alone application. Tivoli Storage Productivity Center Version 4.2 now installs Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication Version 4.2 during the server components installation process.

When you install Tivoli Storage Productivity Center using custom installation, you have the following installable components:
Database schema
Tivoli Storage Productivity Center Servers
Graphical User Interface (GUI)
Command Line Interface (CLI)
Data agent
Fabric agent

After Tivoli Storage Productivity Center Standard Edition is installed, the installation program starts the Tivoli Storage Productivity Center for Replication installation wizard. The approximate time to install Tivoli Storage Productivity Center, including Tivoli Integrated Portal, is about 60 minutes. The approximate time to install Tivoli Storage Productivity Center for Replication is about 20 minutes.

3.1.2 Product code media layout and components

This section outlines the contents of the product media at the time of writing. The media content differs depending on whether you are using the Web images or the physical media shipped with the TPC V4.2 package.

Passport Advantage and Web media content
The Web media consists of a disk image and an SRA package.
Disk 1 contains all Tivoli Storage Productivity Center components:
– Database Schema
– Data Server
– Device Server
– GUI
– CLI
– Local Data agent
– Local Fabric agent
– Storage Resource agent
– Remote Data agent
– Remote Fabric agent
– Tivoli Integrated Portal
– Tivoli Storage Productivity Center for Replication

Note: Disk 1 has four parts. All parts must be downloaded and extracted into the same directory.

The SRA package contains local and remote agent installation images:
– Local Data agent
– Local Fabric agent
– Storage Resource agents
– Remote Data agent
– Remote Fabric agent
– Installation scripts for the Virtual I/O server

Physical media
The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk 1 parts described in “Passport Advantage and Web media content” on page 77. The physical media CD contains the SRA package.
3.2 Preinstallation steps for Linux

Before deploying Tivoli Storage Productivity Center on Linux, you need to analyze your environment to ensure that the system requirements have been met and that you have all the prerequisite components installed and configured.

3.2.1 Verifying system hardware and software prerequisites
For a detailed description of the system hardware and software prerequisites, and the latest platform support information, see the Web site at:
http://www-01.ibm.com/support/docview.wss?uid=swg27019380

3.2.2 Prerequisite component for Tivoli Storage Productivity Center V4.2
For Tivoli Storage Productivity Center V4.2, DB2 UDB Enterprise Server Edition is the only prerequisite component. The following list shows the supported levels of DB2:
IBM DB2 UDB Enterprise Server Edition
– v9.1 Fix Pack 2 or later
– v9.5 Fix Pack 3a or later
– v9.7

Note: For DB2 9.7, use the version of DB2 shipped with Tivoli Storage Productivity Center. Do not use DB2 9.7 with fix pack 1 or fix pack 2 (this causes issues with Tivoli Storage Productivity Center).

3.3 Installing the TPC prerequisite for Linux

This section describes how to install the TPC prerequisites on Linux. We perform a typical installation of DB2 v9.7 64-bit on Red Hat Enterprise Linux 5. Ensure that you have verified that your system meets all the minimum system requirements for installing the prerequisites, including adequate free disk space. Before beginning the installation, it is important that you log on to your system as a local system user with root authority.

Attention: This section deals with a clean installation of TPC. If you need to migrate your current TPC environment to Version 4.2, refer to the IBM Tivoli Storage Productivity Center Installation and Configuration Guide, SC27-2337, Chapter 4: “Upgrading and migrating the IBM Tivoli Storage Productivity Center family”. The guide is available at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/topic/com.ibm.tpc_V42.doc/fqz0_installguide_v42.pdf

3.3.1 DB2 installation: GUI install
This topic describes how to install DB2 v9.7 64-bit on Linux using the GUI installation program.
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm Note: You must have the X11 graphical capability installed before installing DB2 using the GUI. Refer to Appendix C, “Configuring X11 forwarding” on page 651. To install DB2, log on as a user with root authority, and then use the following procedures. Accessing the installation media using the CD Follow these steps: 1. Create a mount point or choose an existing mount point. To create a mount point called /cdrom, we enter the following command: mkdir /cdrom 2. Insert the DB2 CD into the CD-ROM drive. Mount the CD-ROM file system at the desired mount point. Run the following command to achieve this: mount -o ro /dev/cdrom /cdrom 3. Change to the directory where the CD-ROM is mounted: cd /cdrom Accessing the installation media using a downloaded image Follow these steps: 1. Create a temporary directory (for example, db2temp) to hold the DB2 installer tar file and untarred files. These files require from 2 GB to 3 GB of hard drive space. mkdir /db2temp 2. Copy or download the DB2 installer into db2temp. 3. Change to the directory where you have stored the image, for example: cd /db2temp 4. Un-tar (extract) the DB2 installer file, following the instructions supplied at the repository from which you downloaded the image, which might involve running the tar or gunzip commands, or a combination of both. For example: tar -xvzf v9.7_linuxx64_ese.tar.gz 5. Change to the installation directory, which you extracted from the image. For example: cd /db2temp/ese Beginning the installation Follow these steps: 1. Run the following command in order to verify that all necessary prerequisite packages are installed on the system: ./db2prereqcheck Chapter 3. Tivoli Storage Productivity Center install on Linux 79
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am If during the prerequisite check you receive an error message such as the one shown in Figure 3-1, you might need to install additional packages to satisfy DB2 dependencies before proceeding with the installation. Figure 3-1 Error message indicating missing DB2 prerequisite packages Refer to the following URL for additional information about DB2 installation requirements for your specific platform: http://www.ibm.com/software/data/db2/udb/sysreqs.html 2. Run the following command to execute the graphical installer: ./db2setup This will open the DB2 Setup Launchpad, as shown in Figure 3-2. Figure 3-2 DB2 Setup Launchpad80 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 3. Select Install a Product from the left-hand panel, then select DB2 Enterprise Server Edition Version 9.7 and click the Install New button in order to proceed with the installation, as shown in Figure 3-3. Figure 3-3 Click Install New to start the installation Chapter 3. Tivoli Storage Productivity Center install on Linux 81
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 4. The DB2 Setup wizard panel is displayed, as shown in Figure 3-4. Click Next to proceed. Figure 3-4 DB2 Setup welcome message82 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 5. The next panel displays the software license agreement. Click Read non-IBM terms to display additional license information and, if you agree with all terms, click Accept and Next to continue (see Figure 3-5). Figure 3-5 Software License Agreement Chapter 3. Tivoli Storage Productivity Center install on Linux 83
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 6. You will be prompted to select the installation type. Accept the default of Typical and click Next to continue (see Figure 3-6). Figure 3-6 Select Typical installation type84 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 7. On the next panel, accept the default: Install DB2 Enterprise Server Edition on this computer and save my settings in a response file. Even though not required, we recommend that you generate such a response file because it will greatly ease tasks such as documenting your work. Specify a valid path and the file name for the response file in the Response file name field. Click Next when you are ready, as shown in Figure 3-7. Figure 3-7 Select both installation and response file creation Chapter 3. Tivoli Storage Productivity Center install on Linux 85
8. The panel shown in Figure 3-8 shows the default directory to be used as the installation folder. You can change the directory or accept the defaults. Make sure the installation folder has sufficient free space available, then click Next to continue.

Figure 3-8 Select installation directory

9. After you click Next, if your system is an IBM System x® or System p®, you might see a panel titled Install the IBM Tivoli System Automation for Multiplatforms Base Component (SA MP Base Component). This component is not required by Tivoli Storage Productivity Center, so choose Do not install SA MP Base Component and click Next. You are then prompted for user information for the DB2 Administration Server (DAS); create a new user and specify its password, as shown in Figure 3-9.

Figure 3-9 Create new user for DB2 Administration Server (DAS)
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 10.You will be prompted whether you want to set up a DB2 instance. Accept the default to Create a DB2 instance and click Next to continue (see Figure 3-10). Figure 3-10 Set up a DB2 instance Chapter 3. Tivoli Storage Productivity Center install on Linux 87
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 11.In the panel shown in Figure 3-11, accept the default to create a Single partition instance and click Next to continue. Figure 3-11 Choose to create a single partition instance88 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 12.The next panel, shown in Figure 3-12, will prompt you for user information for the DB2 instance owner. This user must have a minimal set of system privileges. Accept the default to create a New user and specify a password. Click Next when you are ready. Note: The TPC database repository will be stored in the home directory of the DB2 instance owner specified here. Make sure to place the user’s home directory on a file system that has sufficient free space available; /home is usually not large enough for database repositories! In general, choose the file system with the most available free space on your system to hold database repositories. If you are uncertain about the available file systems and their size, use the df -h command to get an overview. Figure 3-12 Create new user for DB2 instance owner Chapter 3. Tivoli Storage Productivity Center install on Linux 89
13.The last user you have to specify is the DB2 fenced user, which is used to execute User Defined Functions (UDFs) and stored procedures. This user must have minimal system privileges as well. We recommend that you create a New user, as shown in Figure 3-13. Specify a new password and click Next to continue.

Figure 3-13 Create new user for DB2 fenced user
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 14.The next panel prompts you to prepare the DB2 tools catalog. Because this component is not required by Tivoli Storage Productivity Center, click Do not prepare the DB2 tools catalog as shown in Figure 3-14. Click Next when you are ready. Figure 3-14 Choose not to prepare the DB2 tools catalog Chapter 3. Tivoli Storage Productivity Center install on Linux 91
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 15.The panel shown in Figure 3-15 allows you to specify a Notification SMTP (e-mail) server. You can optionally specify an existing server or click Do not set up your DB2 server to send notifications at this time—a notification server can always be specified after the installation is finished. Make a choice and click Next to continue. Tip: Configuring DB2 to send e-mail notifications on errors and warning conditions can help resolve those conditions more quickly, thus improving overall stability and resiliency of the solution. This is an important factor in preventing unplanned outages! Figure 3-15 Optionally specify a notification server 16.Figure 3-16 shows the summary panel about what is going to be installed. Review all settings and, if you agree with them, click Finish to begin copying files.92 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm Figure 3-16 Installation summary 17.You will see a progress panel as the installer copies the required files. Wait for the installation to complete. 18.When the installation was successful, you will see a panel such as Figure 3-17. Click Finish to close the panel. Figure 3-17 Setup Complete Chapter 3. Tivoli Storage Productivity Center install on Linux 93
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 19.After you have completed installing DB2, you need to edit the file /etc/group and add the root account to the db2iadm1 group. The db2iadm1 group line in /etc/group looks like this: db2iadm1:x:102:root Verifying that DB2 is installed correctly The general steps to verify that DB2 has been installed properly are as follows: Create the SAMPLE database. Connect to the SAMPLE database. Run a query against the SAMPLE database. Drop the SAMPLE database. You can verify that DB2 has been installed properly using the following procedure: 1. Log on as a user with root authority. Note: After adding the root account to the db2iadm1 group as outlined in the previous section, you need to log out and log back in to allow the system to pick up this change. Before proceeding, check that root is a member of this group by issuing the id command. Make sure that the output line contains the db2iadm1 group—it looks similar to Figure 3-18. Figure 3-18 Verify that root is member of db2iadm1 group 2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user’s home directory: . /home/db2inst1/sqllib/db2profile Note: There is a space between . and /home. 3. After setting the DB2 environment variables, you can verify the installed version of DB2 by issuing the db2level command. The output will indicate which DB2 instance is currently being used, which code release is installed, and whether the selected DB2 instance is 32-bit or 64-bit, as shown in Figure 3-19. Figure 3-19 Verify DB2 version and level94 Tivoli Storage Productivity Center V4.2 Release Update
Important: Especially note whether the selected DB2 instance is 32-bit or 64-bit because this will greatly affect future installation steps!

4. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20, that means DB2 was already running when you issued the command. Otherwise it will be started now.

Figure 3-20 Verify that DB2 is running

5. Enter the db2sampl command to create the SAMPLE database. The results look similar to Figure 3-21.

Figure 3-21 Create sample database

Note: This process can take several minutes to complete.

6. Enter the following commands to connect to the SAMPLE database, retrieve a list of all the employees that work in Department 20, and reset the database connection:
db2 connect to sample
db2 "select * from staff where dept = 20"
db2 connect reset

7. If all steps completed successfully, you can remove the SAMPLE database. Enter the following command to do so:
db2 drop database sample
The results look similar to Figure 3-22.
Figure 3-22 Verify DB2 installation

You have now successfully completed the DB2 installation.

3.4 Installing Tivoli Storage Productivity Center components

Now that the prerequisites have been installed, we can install the Tivoli Storage Productivity Center components. Before you begin the installation, confirm the following requirements:

Confirm that the correct version of DB2 is installed on your system.
User IDs that will be required during the installation have been documented for reference.
If you are considering the use of LDAP, ensure that you have all the correct information. Refer to the IBM Tivoli Storage Productivity Center Installation and Configuration Guide, SC27-2337, Chapter 2, "Installing the IBM Tivoli Storage Productivity Center family" → "Preparing for installation".

Tip: We recommend that you install the Database Schema first and then install Data Server, Device Server, Tivoli Storage Productivity Center for Replication, and Tivoli Integrated Portal in a separate step. If you install all the components in one step and any part of the installation fails for any reason (for example, space or passwords), the installation suspends and rolls back, uninstalling all the previously installed components.

Accessing the installation media using the CD

Follow these steps:

1. Create a mount point or choose an existing mount point. To create a mount point called /cdrom, we enter the following command:

mkdir /cdrom

2. Insert the Tivoli Storage Productivity Center Disk 1 CD into the CD-ROM drive. Mount the CD-ROM file system at the desired mount point. Run the following command to achieve this:

96 Tivoli Storage Productivity Center V4.2 Release Update
mount -o ro /dev/cdrom /cdrom

3. Change to the installation directory where the CD-ROM is mounted, for example:

cd /cdrom

Accessing the installation media using a downloaded image

Follow these steps:

1. Create a temporary directory (for example, tpctemp) to hold the Tivoli Storage Productivity Center installer tar files and untarred files. These files require 3 to 4 GB of hard drive space.

mkdir /tpctemp

2. Copy or download the Tivoli Storage Productivity Center installer into tpctemp.

3. Change to the directory where you have stored the image, for example:

cd /tpctemp

4. Un-tar (extract) the Tivoli Storage Productivity Center installer files, following the instructions supplied at the repository from which you downloaded the image; this might involve running the tar or gunzip commands, or a combination of both. For example:

tar -xvf TPC_4.2.1.108_SE_linux_ix86_disk1_part1.tar
tar -xvf TPC_4.2.1.108_linux_ix86_disk1_part2.tar
tar -xvf TPC_4.2.1.108_linux_ix86_disk1_part3.tar
tar -xvf TPC_4.2.1.108_linux_ix86_disk1_part4.tar

Note: Be sure to extract all parts of Disk 1 into the same directory!

3.4.1 Creating the database schema

This topic provides information about how to create the database schema for use with Tivoli Storage Productivity Center.

Note: If you are using a remote database for TPC, you must install the database schema on that computer first, after you have installed DB2. Afterwards, you need to install the TPC server components on the other machine and choose to use a remote database connection.

Next we explain how to install the TPC database schema.

Note: You must have the X11 graphical capability installed before installing Tivoli Storage Productivity Center using the GUI. Refer to Appendix C, "Configuring X11 forwarding" on page 651.

To install the TPC database schema, follow these steps:

1. Log on as a user with root authority.

Chapter 3. Tivoli Storage Productivity Center install on Linux 97
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Note: After adding the root account to the db2iadm1 group as outlined in 3.3.1, “DB2 installation: GUI install” on page 78, you need to log out and log back in to allow the system to pick up this change. Before proceeding, check that root is a member of this group by issuing the id command. Make sure that the output line contains the db2iadm1 group. 2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user’s home directory: . /home/db2inst1/sqllib/db2profile Note: There is a space between . and /home. 3. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20 on page 95, that means DB2 was already running when you issued the command. Otherwise it will be started now. 4. Change to the directory where you have extracted the Tivoli Storage Productivity Center Disk 1 software package, then launch the graphical installer by issuing the command: ./setup.sh 5. Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (see Figure 3-23). Choose a language and click OK to continue. Figure 3-23 Select language98 Tivoli Storage Productivity Center V4.2 Release Update
6. The International Program License Agreement is displayed. Read the license text and, if you agree with it, click I accept the terms of the license agreement as shown in Figure 3-24. Click Next when you are ready to proceed with the installation.

Figure 3-24 License Agreement

7. The Installation Types panel is displayed, as seen in Figure 3-25. Click Custom Installation. In addition, you can change the TPC Installation Location to suit your requirements or accept the defaults. Make sure that the installation folder has sufficient free space available, then click Next to continue.

Figure 3-25 Choose Custom Installation

Chapter 3. Tivoli Storage Productivity Center install on Linux 99
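A quick way to confirm the free space before continuing is the df command; a simple sketch assuming the default installation location under /opt/IBM/TPC:

df -h /opt     # check available space on the file system that will hold TPC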
8. The panel shown in Figure 3-26 prompts you to select one or more components to install. Remove all check marks except for Create database schema for now. Click Next to continue with the installation.

Figure 3-26 Select Create database schema component

9. The Database administrator information panel is displayed. Specify a user ID with administrative database authority as Database administrator, such as db2inst1, as seen in Figure 3-27. Specify the corresponding password and click Next to continue.

Figure 3-27 Database credentials

100 Tivoli Storage Productivity Center V4.2 Release Update
10. The panel shown in Figure 3-28 is displayed. Enter the administrative user ID (in our case, db2inst1) and the corresponding password again as the DB user ID. Make sure that you select Create local database.

Figure 3-28 Choose Create local database

You can click Database creation details in order to verify additional details, as shown in Figure 3-29. Do not change the default values unless you are a knowledgeable DB2 administrator. Click Next to proceed with the installation.

Figure 3-29 Database schema creation

Note: The TPC schema name cannot be longer than eight characters.

Chapter 3. Tivoli Storage Productivity Center install on Linux 101
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 11.Figure 3-30 shows the summary information panel. Review the information that you have provided for the database schema installation. If you are in agreement that all data entered is correct, you can proceed by clicking Install. Figure 3-30 Summary information 12.The progress panel is displayed. Wait for the installation to finish; the results panel looks like Figure 3-31. Figure 3-31 Installation results 13.Click Finish to exit the graphical installer.102 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm Verifying the database schema installation To verify the installation, check that you have the database named TPCDB. Do this by following these steps: 1. Source the DB2 profile: . /home/db2inst1/sqllib/db2profile 2. Verify creation of the TPCDB database by issuing the following command: db2 list db directory The command lists all databases that exist, as seen in Figure 3-32. Figure 3-32 Verify database creation3.4.2 Installing TPC Servers, GUI and CLI After you have completed creating the database schema, you are ready to install the following Tivoli Storage Productivity Center components: Data Server Device Server GUI CLI Note: In addition to the components just mentioned, two additional components will be installed by default, namely Tivoli Integrated Portal as well as Tivoli Storage Productivity Center for Replication. Next we describe how to complete the installation process. Chapter 3. Tivoli Storage Productivity Center install on Linux 103
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Note: You must have the X11 graphical capability installed before installing Tivoli Storage Productivity Center using the GUI. Refer to Appendix C, “Configuring X11 forwarding” on page 651. Follow these steps to complete the installation process: 1. Log on as a user with root authority. 2. In order to set the environment variables for the database instance, you need to source the instance profile (db2profile) found in the instance user’s home directory: . /home/db2inst1/sqllib/db2profile Note: There is a space between . and /home. 3. Make sure that DB2 was started and is currently running by issuing the db2start command. If this gives you an error as shown in Figure 3-20 on page 95, that means DB2 was already running when you issued the command. Otherwise it will be started now. 4. Change to the directory where you have extracted the Tivoli Storage Productivity Center Disk 1 software package, then launch the graphical installer by issuing the following command: ./setup.sh 5. Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (see Figure 3-33). Choose a language and click OK to continue. Figure 3-33 Select language 6. The International Program License Agreement is displayed. Read the license text and, if you agree with it, click I accept the terms of the license agreement as shown in Figure 3-34. Click Next when you are ready to proceed with the installation.104 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm Figure 3-34 License Agreement 7. The Installation Types panel is displayed, as seen in Figure 3-35. Click Custom Installation, then click Next to continue. Figure 3-35 Choose Custom Installation Chapter 3. Tivoli Storage Productivity Center install on Linux 105
8. The next panel prompts you to select one or more components to install. Remove all check marks except for Tivoli Storage Productivity Center Servers, GUI and CLI. Note that the database schema is greyed out, because it was already installed previously; see Figure 3-36. Click Next to continue with the installation.

Figure 3-36 Select Servers, GUI, and CLI

Note: We do not recommend installing the Storage Resource Agent (SRA) at this time. Any SRA installed with the installer must also be uninstalled with the installer, so in most cases deploying agents later from the TPC GUI (instead of using the installer) is the more flexible approach. Because we do not plan to install the SRA at this time, there is no need to Register with the agent manager; we perform this step subsequently.

9. If you are running the TPC installation on a system with at least 4 GB but less than the recommended 8 GB of RAM, a warning message is displayed as seen in Figure 3-37. To ignore this message and continue with the installation, click OK.

Figure 3-37 Memory size warning

106 Tivoli Storage Productivity Center V4.2 Release Update
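To check the amount of physical memory before launching the installer, you can query the kernel; a minimal sketch for Linux:

grep MemTotal /proc/meminfo   # total RAM in KB
free -g                       # totals reported in GB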
Note: 8 GB of RAM is the minimum memory requirement to run both Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication. If you have less than 8 GB of RAM, you should run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of the system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation. Refer to "Disabling TPC or TPC for Replication" on page 63.

10. The Database administrator information panel is displayed, as seen in Figure 3-38. The database administrator user and password are automatically filled in, because we used them previously to create the database schema. Click Next to continue.

Figure 3-38 Database credentials

11. The database schema information panel is displayed, as shown in Figure 3-39. Because we already installed the database schema previously, nothing can be changed here. Click Next to continue with the installation.

Note that if you want to use a remote database on another machine, you need to install the TPC schema component on that machine first, following the procedure documented in the previous section. Afterwards, install the TPC server components, select the Use remote database option, and specify the host name of the server running the DB2 Manager.

Chapter 3. Tivoli Storage Productivity Center install on Linux 107
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 3-39 Local database preselected 12.If you selected to use a remote database, a warning message is presented to ensure that the remote DB2 instance is running before proceeding; see Figure 3-40. Figure 3-40 Ensure that DB2 is running on the remote system 13.The panel shown in Figure 3-41 requires the following inputs: – Data Server Name: Enter the fully qualified host name of the Data Server. – Data Server Port: Enter the Data Server port. The default is 9549. – Device Server Name: Enter the fully qualified host name of the Device Server. – Device Server Port: Enter the Device Server port. The default is 9550. – TPC Superuser: Enter an operating system group name to associate with the TPC superuser role. This group must exist in your operating system before you install Tivoli Storage Productivity Center . Membership in this group provides full access to the Tivoli Storage Productivity Center product. You can assign a user ID to this group on your operating system and log on to the TPC GUI using this user ID.108 Tivoli Storage Productivity Center V4.2 Release Update
If you click the Security roles... button, the Advanced security roles mapping panel is displayed. You can assign an operating system group to each TPC role that you want to make an association with, so you can have separate authority IDs for various TPC operations. The operating system group must exist before you can associate a TPC role with it. Except for the superuser role, you do not have to assign security roles at installation time; you can assign these roles after you have installed TPC.

Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the values you enter for the LDAP TPC Administrator groups override the values you entered here for the TPC superuser.

You can record information used in the component installation, such as user IDs, passwords, and storage subsystems, in the worksheets in Appendix B, "Worksheets" on page 643.

– Host Authentication Password: This is the password used by the Fabric agents to communicate with the Device Server. This password must be specified when you install the Fabric agent.

– Data Server Account Password: This is not required for Linux installations; it is only required for Windows.

– WebSphere Application Server Admin ID and Password: This is the user ID and password required by the Device Server to communicate with the embedded WebSphere. You can use any existing user ID here, such as the dasusr1 ID created upon DB2 installation. The WebSphere Application Server admin ID does not need to have any operating system privileges. This user is also used as the local Tivoli Integrated Portal (TIP) administrator ID.

If you click the NAS discovery... button, the NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time; you can configure it after you have installed TPC.

Important: Ensure that you record all passwords that are used during the installation of Tivoli Storage Productivity Center.

Chapter 3. Tivoli Storage Productivity Center install on Linux 109
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 3-41 TPC Server and Agent information When you are ready, click Next to continue. 14.The Tivoli Integrated Portal panel is displayed, as seen in Figure 3-42. You can select to install a new version of TIP or use an already existing install on the local machine. TIP will use 10 port numbers starting from the one specified in the Port field (referred to as the Base Port). The 10 ports will be: – base port – base port+1 – base port+2 – base port+3 – base port+5 – base port+6 – base port+8 – base port+10 – base port+12 – base port+13 The TIP administrator ID and password are pre-filled with the WebSphere Application Server admin ID and password specified in the previous step (Device Server installation). Click Next to continue.110 Tivoli Storage Productivity Center V4.2 Release Update
Figure 3-42 Tivoli Integrated Portal

Important: TPC Version 4.2 only supports a TIP instance that is used exclusively by TPC and TPC-R, with no other application sharing that TIP instance. Support for multiple applications sharing a TIP instance is expected in a future release.

15. The authentication selection panel is displayed (Figure 3-43). This panel refers to the authentication method that will be used by TPC to authenticate users.

Figure 3-43 Authentication selection

Chapter 3. Tivoli Storage Productivity Center install on Linux 111
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am 16.If you already have a valid Tivoli Integrated Portal instance on the system and it uses either OS-based or LDAP-based authentication, then TPC will use that existing authentication method. Otherwise, select the authentication method to use: – OS Authentication: This uses the operating system of the TPC server for user authentication. – LDAP/Active Directory: If you select LDAP or Microsoft Active Directory for authentication, you must have LDAP or Active Directory installed already. Choose either of the two options and click Next to proceed. 17.If you decide to use LDAP/Active Directory authentication, additional panels are displayed to configure this authentication method. Refer to “Installing TPC Servers, GUI and CLI” on page 103 for additional details. 18.The summary information panel is displayed. Review the information, then click Install to continue as illustrated in Figure 3-44. Figure 3-44 Summary information112 Tivoli Storage Productivity Center V4.2 Release Update
19. You will see a progress window as Tivoli Storage Productivity Center is installed. Wait for the installation to complete.

20. After the TPC Data Server, Device Server, GUI, and CLI installations are complete, the Installing TIP panel is displayed (see Figure 3-45). Wait for the TIP installation to finish as well.

Figure 3-45 Tivoli Integrated Portal installation

After the TIP installation has completed, the TPC for Replication installation is launched in a separate window. The TPC installation is temporarily suspended in the background and the TPC for Replication panel is displayed, as seen in Figure 3-46.

Figure 3-46 TPC for Replication installation

Chapter 3. Tivoli Storage Productivity Center install on Linux 113
Installing TPC for Replication

Follow these steps:

1. The Welcome panel is displayed, as seen in Figure 3-46. Click Next.

Important: If you are not planning to use TPC for Replication and you attempt to cancel or bypass the installation, it will interrupt the installation process, which will invoke a complete TPC installation rollback.

2. The system prerequisites check panel is displayed, as seen in Figure 3-47. At this stage, the wizard checks that the operating system meets all prerequisite requirements and that the necessary fix packs are installed.

Figure 3-47 System check

3. If the system passes the check, as seen in Figure 3-48, you can continue by clicking Next.

Figure 3-48 System check complete

114 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm 4. Read the license agreement text displayed on the next panel (Figure 3-49) and, if you agree with it, select I accept the terms of the license agreement prior to clicking Next. Figure 3-49 License Agreement 5. Specify the Directory Name where you want to install TPC for Replication. You can either choose a directory by changing the location or by accepting the default directory. See Figure 3-50 for an example. Make sure that the installation folder has sufficient free space available, then click Next to continue with the installation. Figure 3-50 Directory Name 6. The TPC-R Administrator user panel is displayed, as illustrated in Figure 3-51. Enter the user ID and password that will be used as TPC-R administrator. This user must already exist in the operating system and have administrator rights, such as the root account. When you are done, click Next to continue. Chapter 3. Tivoli Storage Productivity Center install on Linux 115
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 3-51 TPC-R User ID and Password Note: If you prefer to use another user, you are required to create it beforehand and ensure that it has administrator rights. 7. The default WebSphere Application Server ports panel is displayed, as shown in Figure 3-52. Accept the defaults and click Next to continue. Figure 3-52 Default ports 8. The Installation Summary panel is displayed (Figure 3-53). Review the settings and make necessary changes if needed by clicking Back. Otherwise, click Install.116 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894LinuxInstall.fm Figure 3-53 TPC-R Installation Summary 9. The TPC for Replication installation progress panel is displayed, as seen in Figure 3-54. Wait for the installation to finish. Figure 3-54 TPC-R progress panel 10.The TPC-R Installation Result panel is displayed, as shown in Figure 3-55. Notice the URL to connect to TPC-R. Click Finish. Note: Tivoli Storage Productivity Center for Replication is installed with FlashCopy as the only licensed service. You must install the Two Site or Three Site Business Continuity (BC) license in order to use synchronous Metro Mirror and asynchronous Global Mirror capabilities. Chapter 3. Tivoli Storage Productivity Center install on Linux 117
    • 7894LinuxInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 3-55 TPC-R Installation result 11.After the TPC-R installation has completed, the TPC installer will continue creating the uninstaller as seen in Figure 3-56. Wait for the installation to complete. Figure 3-56 Creating uninstaller 12.After the installation has finished, you see the Summary Information panel (Figure 3-57). Read and verify the information and click Finish to complete the installation.118 Tivoli Storage Productivity Center V4.2 Release Update
Figure 3-57 Summary Information

Verifying the TPC Server installation

At the end of the installation, it is a good idea to make sure that all the components have been installed successfully and that Tivoli Storage Productivity Center is in good working order. To verify this on Linux, we launch the Tivoli Storage Productivity Center GUI and then confirm in TPC that all services are started and running. Follow these steps:

1. If you installed TPC to the default location, the following command launches the TPC GUI on Linux:

/opt/IBM/TPC/gui/TPCD.sh

2. Log on to TPC using a user ID that is mapped to the TPC superuser role. If you used the defaults during installation, the root user is mapped to this role.

3. From the Navigation Tree, expand Administrative Services → Service → Data Server and Device Server. All nodes within these branches are marked green, as illustrated in Figure 3-58.

Chapter 3. Tivoli Storage Productivity Center install on Linux 119
Figure 3-58 Data and Device Server services

You have now successfully completed the Tivoli Storage Productivity Center server installation.

120 Tivoli Storage Productivity Center V4.2 Release Update
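As an additional command-line check, you can confirm that the Data Server and Device Server are listening on their ports; a sketch assuming the default ports 9549 and 9550 were kept during installation (run as root so that -p can show the owning process):

netstat -tlnp | grep -E ':9549|:9550'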
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 4 Chapter 4. Tivoli Storage Productivity Center install on AIX In this chapter, we show the step-by-step installation of the Tivoli Storage Productivity Center V4.2 on the AIX platform.© Copyright IBM Corp. 2010. All rights reserved. 121
4.1 Tivoli Storage Productivity Center installation on AIX

In this chapter, we describe how to install Tivoli Storage Productivity Center Standard Edition V4.2 and Tivoli Storage Productivity Center for Replication V4.2. The prerequisite component (DB2) is installed prior to invoking the installation program. Furthermore, in this section we provide information about the preparation work required before installing the Tivoli Storage Productivity Center family.

4.1.1 Installation overview

In order to get Tivoli Storage Productivity Center V4.2 to work, you need to follow certain steps as indicated in the following section references:

Check that the system meets the prerequisites. See 4.2, "Preinstallation steps for AIX" on page 123.
Install and configure all required prerequisite components. See 4.3, "Installing the prerequisites for AIX" on page 124.
Install Tivoli Storage Productivity Center components. See 4.4, "Installing Tivoli Storage Productivity Center components" on page 130.

You can install Tivoli Storage Productivity Center using either Typical installation or Custom installation. Custom installation allows you to see what components are being installed and where they are being installed, and gives you the ability to customize your environment: you can install individual components, supply passwords for the various user IDs, and change the default installation directories if required. In our case, we install Tivoli Storage Productivity Center using the Custom installation option.

Note: Starting with Tivoli Storage Productivity Center V4.1, TPC for Replication is no longer a stand-alone application. Tivoli Storage Productivity Center V4.2 installs Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication V4.2.

When you install Tivoli Storage Productivity Center, you have these installable components:

Database schema
Data Server and Device Server
Graphical User Interface (GUI)
Command Line Interface (CLI)

After Tivoli Storage Productivity Center is installed, the installation program will start the Tivoli Storage Productivity Center for Replication installation wizard.

4.1.2 Product code media layout and components

In this section, we describe the contents of the product media at the time of writing. The media content will differ depending on whether you are using the Web images or the physical media shipped with the Tivoli Storage Productivity Center V4.2 package.

Passport Advantage and Web media content

The Web media consists of a disk image and an SRA package:

122 Tivoli Storage Productivity Center V4.2 Release Update
Disk 1 contains all Tivoli Storage Productivity Center components:
– Database Schema
– Data Server
– Device Server
– GUI
– CLI
– Local Data agent
– Local Fabric agent
– Storage Resource agent
– Remote Data agent
– Remote Fabric agent
– Tivoli Integrated Portal
– Tivoli Storage Productivity Center for Replication

Note: Disk 1 has four parts. All parts must be downloaded and extracted into the same directory.

The SRA package contains local and remote agent installation images:
– Local Data agent
– Local Fabric agent
– Storage Resource agents
– Remote Data agent
– Remote Fabric agent
– Installation scripts for the Virtual I/O server

Physical media

The physical media shipped with the TPC V4.2 product consists of a DVD and a CD. The DVD contains the Disk 1 parts described in "Passport Advantage and Web media content". The physical media CD contains the SRA package.

4.2 Preinstallation steps for AIX

Before deploying Tivoli Storage Productivity Center V4.2 on AIX, you need to analyze your environment to ensure that the system requirements have been met and that you have all the prerequisite components installed and configured. Certain prerequisite components need to be installed before proceeding with the Tivoli Storage Productivity Center V4.2 SE installation. See 4.2.3, "Prerequisite component for Tivoli Storage Productivity Center V4.2" on page 124.

4.2.1 Verifying system hardware prerequisites

For the hardware prerequisites, see the Web site at:
http://www-01.ibm.com/support/docview.wss?uid=swg27019380

The Tivoli Storage Productivity Center server requires 8 GB of RAM. If you have at least 4 GB but less than 8 GB of RAM, you can still install Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication; however, you will get a warning message during installation.

Chapter 4. Tivoli Storage Productivity Center install on AIX 123
If you have less than 8 GB of RAM, you should run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of the system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation.

For installations on AIX, you need a total of 6 GB of free disk space:
2.25 GB for the /tmp directory
3 GB for the /opt directory
250 MB in the /home directory
10 KB of free space in the /etc directory
200 MB in the /usr directory
50 MB in the /var directory

4.2.2 Verifying system software prerequisites

For the software prerequisites, see the Web site at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp?topic=/com.ibm.tpc_V41.doc/fqz0_r_sw_requirements.html

4.2.3 Prerequisite component for Tivoli Storage Productivity Center V4.2

For Tivoli Storage Productivity Center V4.2, DB2 UDB Enterprise Server Edition is the only prerequisite component. The following list shows the supported levels of DB2:

IBM DB2 UDB Enterprise Server Edition
– v9.1 Fix Pack 2 or later
– v9.5 Fix Pack 3a or later
– v9.7

Note: For DB2 9.7, use the version of DB2 shipped with Tivoli Storage Productivity Center. Do not use DB2 9.7 with Fix Pack 1 or Fix Pack 2 (this causes issues with Tivoli Storage Productivity Center).

4.3 Installing the prerequisites for AIX

In this section, we describe how to install the Tivoli Storage Productivity Center prerequisites on AIX. We perform a new installation of DB2 v9.7 64-bit for AIX. Ensure that you have verified that your system meets all the minimum system requirements for installing the prerequisites, including adequate free disk space. Before beginning the installation, it is important that you log on to your system as a user with root authority.

Attention: In this section, we are dealing with a clean installation of Tivoli Storage Productivity Center V4.2. If you need to migrate or upgrade your current Tivoli Storage Productivity Center environment to Tivoli Storage Productivity Center V4.2, follow the migration and upgrade sections found later in this chapter.

124 Tivoli Storage Productivity Center V4.2 Release Update
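You can confirm the free space requirements listed in 4.2.1 before you begin; a simple sketch (on AIX, the -g flag reports sizes in GB):

df -g /tmp /opt /home /etc /usr /var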
4.3.1 DB2 installation: Command line

This topic describes how to install DB2 v9.7 64-bit on AIX using the command line. To install DB2, first log on as a user with root authority, then use the following procedures.

Steps to access the installation media using the CD

1. Create a mount point or choose an existing mount point. To create a mount point called /cdrom, we enter the following command:

mkdir /cdrom

2. Insert the DB2 CD into the CD-ROM drive. Mount the CD-ROM file system at the desired mount point. On AIX, you can use the crfs command to add an entry to /etc/filesystems for the mount point. Run the following commands to achieve this:

/usr/sbin/crfs -v cdrfs -p ro -dcd0 -m/cdrom -Ano
mount /cdrom

The crfs command only has to be run once for a given mount point; after that, you can use mount and umount for each CD or DVD you put in the drive.

3. Change to the directory where the CD-ROM is mounted:

cd /cdrom

Steps to access the installation media using a downloaded image

1. Create a temporary directory (for example, db2temp) to hold the DB2 installer tar file and untarred files. These files require 2 GB to 3 GB of hard drive space:

mkdir /db2temp

2. Copy or download the DB2 installer into db2temp.

3. Change to the directory where you have stored the image, for example:

cd /db2temp

4. Un-tar (extract) the DB2 installer file, following the instructions supplied at the repository from which you downloaded the image; this might involve running the tar or gunzip commands, or a combination of both. For example:

tar -xvf v9.7_aix64_ese.tar

5. Change to the installation directory, which you extracted from the image. For example:

cd /db2temp/ese

Beginning the installation

1. At the command prompt on the host, execute the command line installer:

./db2_install

2. The installer is started, requesting you either to select the default installation directory or, optionally, to choose another directory, as shown in Figure 4-1. We choose No.

Chapter 4. Tivoli Storage Productivity Center install on AIX 125
Figure 4-1 Select directory

3. Select the product to install: ESE (DB2 Enterprise Server Edition), as shown in Figure 4-2.

Figure 4-2 Select product

4. Figure 4-3 shows the DB2 installation being initiated; the installer informs you of the estimated time to perform all tasks.

Figure 4-3 DB2 ESE installation progress

5. The Installation Summary is eventually displayed and indicates a successful installation.

126 Tivoli Storage Productivity Center V4.2 Release Update
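To double-check where the DB2 code landed, you can list the installed DB2 copies; a sketch assuming the db2ls utility was linked into /usr/local/bin (the default location):

/usr/local/bin/db2ls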
Configuring the DB2 environment

This section provides considerations for the users and groups required for the DB2 environment.

1. Create users and groups for use with DB2; from the host command line, type:

mkgroup id=999 db2iadm1
mkgroup id=998 db2fadm1
mkgroup id=997 dasadm1
mkuser id=1004 pgrp=db2iadm1 groups=db2iadm1 home=/home/db2inst1 db2inst1
mkuser id=1003 pgrp=db2fadm1 groups=db2fadm1 home=/home/db2fenc1 db2fenc1
mkuser id=1002 pgrp=dasadm1 groups=dasadm1 home=/home/dasusr1 dasusr1

2. Verify the owner of the directories. Do this by typing ls -ld against the directories, as seen in Figure 4-4; the directory owners are displayed as defined in step 1.

Figure 4-4 Verify directory owners

3. Set the DB2 user passwords:

passwd db2inst1

You are required to enter the password twice for verification; this is the password that you want to use for the DB2 instance owner.

passwd db2fenc1

You are required to enter the password twice for verification; this is the password that you want to use for the fenced user.

passwd dasusr1

You are required to enter the password twice for verification; this is the password that you want to use for the DB2 administration server (DAS) user.

4. Add authentication attributes to the users:

pwdadm -f NOCHECK db2inst1
pwdadm -f NOCHECK db2fenc1
pwdadm -f NOCHECK dasusr1

5. Change group db2iadm1 to include the root user:

chgroup users=db2inst1,root db2iadm1

6. Create a DB2 Administration Server (DAS):

/opt/IBM/db2/V9.7/instance/dascrt -u dasusr1

Chapter 4. Tivoli Storage Productivity Center install on AIX 127
As shown in Figure 4-5, you should see a message indicating that the program completed successfully.

Figure 4-5 Create DAS server

7. Create a DB2 instance:

/opt/IBM/db2/V9.7/instance/db2icrt -a server -u db2fenc1 db2inst1

As shown in Figure 4-6, you should see a message indicating that the program completed successfully.

Figure 4-6 Create DB2 instance

Source the instance profile:

. /home/db2inst1/sqllib/db2profile

Note: There is a space between . and /home.

8. Optional: Change the default location for database repositories. By default, this location is /home/db2inst1.

Note: /home is usually not large enough for database repositories. Choose a file system with enough free space to contain the IBM Tivoli Storage Productivity Center repository. In our case, we use the default repository location.

To change the default location, complete the following steps:

a. Type db2 update dbm cfg using DFTDBPATH <new repository path> IMMEDIATE, where <new repository path> represents the new location for the repository.
b. Type chown -R db2inst1:db2iadm1 <new repository path> to assign ownership to db2inst1 and permission to anyone in db2iadm1.

9. Configure DB2 communication:

a. Edit /etc/services and verify or add the following line at the end of the file:

db2c_db2inst1 50000/tcp

b. Type db2 update dbm cfg using svcename db2c_db2inst1
c. Type db2set DB2COMM=tcpip

128 Tivoli Storage Productivity Center V4.2 Release Update
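To verify that the communication settings from step 9 took effect, you can inspect the service entry and the instance configuration; a short sketch that assumes the instance profile has been sourced as shown above:

grep db2c_db2inst1 /etc/services      # service name to port mapping
db2 get dbm cfg | grep SVCENAME       # service name known to the instance
db2set -all | grep DB2COMM            # TCP/IP communication enabled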
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 10.Add the DB2 license: a. Type cd /opt/IBM/db2/V9.7/adm b. Type ./db2licm -a <DB2 installer location>/db2/ese/disk1/db2/license/db2ese_o.lic Here, <DB2 installer location> represents the directory where the DB2 installer is located. 11.Restart DB2, as shown in Figure 4-7. a. Type db2stop force b. Type db2 terminate c. Type db2start Figure 4-7 Restart DB2 Verifying that DB2 is installed correctly The general steps to verify that DB2 has been installed properly are as follows: 1. Create the SAMPLE database. 2. Connect to the SAMPLE database. 3. Run a query against the SAMPLE database. 4. Drop the SAMPLE database. To verify that DB2 has been installed successfully, complete the following steps: 1. Change to the instance owner user ID by using the su command. For example, if your instance owner user ID is db2inst1, type the following command at the host command prompt: su - db2inst1 2. Start the database manager by entering the db2start command. 3. Enter the db2sampl command to create the SAMPLE database. 4. Enter the following DB2 commands from a DB2 command window to connect to the SAMPLE database, retrieve a list of all the employees that work in Department 20, and reset the database connection. You can see the results of step 3 and step 4 in Figure 4-8: db2 connect to sample db2 “select * from staff where dept=20” db2 connect reset Chapter 4. Tivoli Storage Productivity Center install on AIX 129
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 4-8 Verify DB2 5. If all steps completed successfully, you can remove the SAMPLE database. Enter the command db2 drop database sample to drop the SAMPLE database.4.4 Installing Tivoli Storage Productivity Center components Now that the prerequisite has been installed, we can install the Tivoli Storage Productivity Center components. Before you begin the installation, consider the following requirements: Confirm that the correct version of DB2 is installed on your system. User IDs that will be required during the installation have been documented for reference. If you are planning to use LDAP, ensure that you have all the correct information. Make sure that DB2 is up and running. We will split the installation into two parts. First, we install the Database Schema. Second, we install the remaining components, including Data Server, Device Server, Tivoli Integrated Portal, and Tivoli Storage Productivity Center for Replication.130 Tivoli Storage Productivity Center V4.2 Release Update
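One quick way to confirm that the DB2 instance is up before launching the installer is to look for the engine process; a minimal sketch (db2sysc is the DB2 system controller process, and the bracket trick keeps grep from matching itself):

ps -ef | grep '[d]b2sysc'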
Accessing the installation media using the CD

Follow these steps:

1. Create a mount point or choose an existing mount point. To create a mount point called /cdrom, we enter the following command:

mkdir /cdrom

2. Insert the CD into the CD-ROM drive. Mount the CD-ROM file system at the desired mount point. On AIX, you can use the crfs command to add an entry to /etc/filesystems for the mount point. Run the following commands to achieve this:

/usr/sbin/crfs -v cdrfs -p ro -dcd0 -m/cdrom -Ano
mount /cdrom

The crfs command only has to be run once for a given mount point; after that, you can use mount and umount for each CD or DVD that you put in the drive.

3. Change to the directory where the CD-ROM is mounted:

cd /cdrom

Accessing the installation media using a downloaded image

Follow these steps:

1. Create a temporary directory (for example, temp) to hold the Tivoli Storage Productivity Center installer tar files and untarred files. These files require 3 GB to 4 GB of hard drive space.

mkdir /temp

2. Copy or download the Tivoli Storage Productivity Center installer into the temp directory.

3. Change to the directory where you have stored the image, for example:

cd /temp

4. Un-tar (extract) the Tivoli Storage Productivity Center installer files, following the instructions supplied at the repository from which you downloaded the image; this might involve running the tar or gunzip commands, or a combination of both. For example:

tar -xvf TPC_4.2.1.108_SE_aix_disk1_part1.tar

Be sure to untar all parts of Disk 1 into the same directory.

5. Change to the installation directory, which you extracted from the image. For example:

cd /temp

Preparing the display

If you are installing from a remote terminal session, you must set up an X-Windows display or a Virtual Network Computing (VNC) Viewer connection prior to beginning the installation process.

If you decide to use an X-Windows server, you first need to start your local X-Windows server application. Examples are Hummingbird Exceed, Cygwin, and Xming. Refer to Appendix C, "Configuring X11 forwarding" on page 651 for more information.

If you decide to use VNC Viewer, you first need to start the VNC Server on the AIX server, set up a connection password, and then start the local VNC Viewer.

Chapter 4. Tivoli Storage Productivity Center install on AIX 131
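For example, to prepare a VNC display on the AIX server, the commands look roughly like the following sketch; it assumes a VNC server package is already installed on the host, and the display number :1 is arbitrary:

vncserver :1          # prompts you to set a connection password on first use
export DISPLAY=:1.0   # run this in the shell that will launch setup.sh

Then connect your local VNC Viewer to <aix-host>:1.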
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am4.4.1 Creating the database schema This topic provides information about how to create the database schema for use with Tivoli Storage Productivity Center. Note: If you are using a remote database for Tivoli Storage Productivity Center, you must install the database schema on that computer after you have installed DB2. The DB2 database schema name for Tivoli Storage Productivity Center cannot be longer that eight characters. 1. Log on to the system with root authority. 2. Set up your shell environment to point to the instance where the database repository will be installed, to do this, source the db2profile script for the desired instance. In our case the DB2 instance is db2inst1, so we issue the following command: . /home/db2inst1/sqllib/db2profile Note: There is a space between . and /home. 3. Change to the directory where you have extracted the Tivoli Storage Productivity Center software package, then launch the following command: ./setup.sh 4. Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (Figure 4-9); click OK to continue. Figure 4-9 Select language132 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 5. The International Program Licence Agreement is displayed. Click I accept the terms of the licence agreement, and then click Next, as seen in Figure 4-10. Figure 4-10 Licence Agreement The Installation Types panel is displayed (Figure 4-11). Click Custom installation. In addition, you can change the TPC Installation Location to suit your requirements; we choose the default location, which is /opt/IBM/TPC. After you have completed this panel, click Next to continue. Figure 4-11 Custom Installation selection Chapter 4. Tivoli Storage Productivity Center install on AIX 133
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am 6. The panel, Select one or more components to install on the local or remote computer, is displayed. Remove all check marks except for Create database schema as specified during the DB2 install. See Figure 4-12. Click Next to continue. Figure 4-12 Select “Create database schema” component 7. The Database administrator information panel is displayed. Enter the user ID and password for the DB2 instance owner as shown in Figure 4-13. Click Next to continue. Figure 4-13 Database credentials 8. The new database schema information panel is displayed: a. Enter the DB user ID and password and select Create local database as seen in Figure 4-14.134 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm b. If you click Database creation details, you will see the Database schema creation information panel (Figure 4-15). Do not change the default values unless you are a knowledgeable DB2 administrator. Click Next to continue. Refer to Appendix A, “DB2 table space considerations” on page 639 for the differences between SMS and DMS table spaces. Figure 4-14 Database schema information Figure 4-15 Database schema creation Chapter 4. Tivoli Storage Productivity Center install on AIX 135
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am 9. The summary information panel is displayed (Figure 4-16). Click Install to begin the database schema installation. Figure 4-16 Summary information 10.The progress panel is displayed, as seen in Figure 4-17. Figure 4-17 Progress panel136 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 11.When the installation is complete, the installation results panel is displayed as shown in Figure 4-18. Figure 4-18 Installation results 12.Click Finish to exit the installer. Verifying the installation To check the installation, verify that you have the database named TPCDB. Do this by following these steps: 1. Source the db2 profile: . /home/db2inst1/sqllib/db2profile 2. Verify the creation of the TPCDB database: db2 list db directory 3. The command lists all existing databases, as seen in Figure 4-19. Figure 4-19 Verify database creation Chapter 4. Tivoli Storage Productivity Center install on AIX 137
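As a shortcut, you can filter the directory listing for the TPC entry; a small sketch (on AIX, grep -p prints the whole paragraph containing the match, that is, the complete database stanza):

db2 list db directory | grep -p TPCDB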
4.4.2 Installing Tivoli Storage Productivity Center components

After you have completed creating the database schema, you are ready to install the following Tivoli Storage Productivity Center components:

Data Server
Device Server
GUI
CLI

Note: In addition to the components just mentioned, two additional components will be installed by default, namely Tivoli Integrated Portal as well as Tivoli Storage Productivity Center for Replication.

Follow these steps to complete the installation process:

1. Make sure that you are logged in with the root account.

2. Source the DB2 instance profile:

. /home/db2inst1/sqllib/db2profile

3. Change to the directory where you have extracted the Tivoli Storage Productivity Center software package, then launch the following command:

./setup.sh

4. The Tivoli Storage Productivity Center installer is launched, prompting you to select an installation language (Figure 4-20). Click OK to continue.

Figure 4-20 Select language

5. The International Program Licence Agreement is displayed. Click I accept the terms of the licence agreement, and then click Next, as seen in Figure 4-21.

138 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm Figure 4-21 Licence agreement 6. The installation types panel is displayed (Figure 4-22). Click Custom installation and then click Next to continue. Figure 4-22 Custom installation 7. The panel, Select one or more components to install, is displayed. Select these choices: – Tivoli Storage Productivity Center Servers – GUI – CLI – Data agent (optional) – Fabric agent (optional) Chapter 4. Tivoli Storage Productivity Center install on AIX 139
After you have made the required selections as shown in Figure 4-23, click Next.

Figure 4-23 Select components

Note: We do not recommend installing the Storage Resource Agent (SRA) at this time. Any SRA installed with the installer must also be uninstalled with the installer, so in most cases deploying agents later from the Tivoli Storage Productivity Center GUI (instead of using the installer) is the more flexible approach.

8. If you are running the Tivoli Storage Productivity Center installation on a system with at least 4 GB but less than the recommended 8 GB of RAM, a warning message is displayed as seen in Figure 4-24. To ignore this message and continue with the installation, click OK.

Figure 4-24 Memory size warning

Note: If you attempt to install Tivoli Storage Productivity Center V4.2 on a system with less than 4 GB of RAM, an error message is displayed and the installation fails.

9. The Database administrator information panel is displayed (Figure 4-25). The DB2 user ID and password are automatically filled in, because we used them to create the database schema. Click Next.

140 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm Figure 4-25 Database administrator info 10.The database schema panel is displayed (Figure 4-26). You have the option to select a local database or alternatively a remote database to be used by the Data Server and Device Server. We select the Use local database, because this is the database schema installed in the previous steps. Click Next. Figure 4-26 Use local database 11.The next panel, shown in Figure 4-27, requires the following inputs: – Data Server Name: Enter the fully-qualified host name of the Data Server. Chapter 4. Tivoli Storage Productivity Center install on AIX 141
– Data Server Port: Enter the Data Server port. The default is 9549.

– Device Server Name: Enter the fully-qualified host name of the Device Server.

– Device Server Port: Enter the Device Server port. The default is 9550.

– TPC Superuser: Enter an operating system group name to associate with the TPC superuser role. This group must exist in your operating system before you install Tivoli Storage Productivity Center. Membership in this group provides full access to the Tivoli Storage Productivity Center product. You can assign a user ID to this group on your operating system and start the Tivoli Storage Productivity Center GUI using this user ID.

Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value you enter for the LDAP TPC Administrator group overrides the value you entered here for the TPC superuser.

– Host authentication password: This is the password used by the Fabric agent to communicate with the Device Server. This password must be specified when you install the Fabric agent.

– Data Server Account Password: This is not required for AIX installations; it is only required for Windows.

– WebSphere Application Server Admin ID and Password: This is the WebSphere administrator user ID and password required by the Device Server to communicate with the embedded WebSphere. In our case, we use the db2inst1 user; you can also use the TPC Superuser here. This user is also used as the local Tivoli Integrated Portal administrator ID.

Note: If you select LDAP authentication later in the Tivoli Storage Productivity Center installation, then the value you enter for the LDAP TPC Administrator group overrides the value you entered here for the WebSphere Application Server admin ID and password.

Important: Ensure that you record all passwords that are used during the installation of TPC.

If you click the Security roles... button, the Advanced security roles mapping panel is displayed. You can assign an operating system group to each TPC role that you want to make an association with; this gives you the flexibility to set up separate authority IDs to perform various TPC operations. The operating system group must exist before you can associate a TPC role with it. You do not have to assign security roles at installation time; you can assign these roles after you have installed TPC.

If you click the NAS discovery... button, the NAS discovery information panel is displayed. You can enter the NAS filer login default user name and password and the SNMP communities to be used for NAS discovery. You do not have to assign the NAS discovery information at installation time; you can configure it after you have installed Tivoli Storage Productivity Center.

142 Tivoli Storage Productivity Center V4.2 Release Update
Click Next to continue.

Figure 4-27 Tivoli Storage Productivity Center Server and Agent information

12. The Tivoli Integrated Portal (TIP) panel is displayed (see Figure 4-28). You can select to install the TIP program or use an existing TIP installation.

Important: TIP must be installed on the same server as the TPC server. It is important to note that you are limited to one Tivoli Storage Productivity Center instance per TIP.

TIP will use 10 port numbers starting from the one specified in the Port field (referred to as the Base Port). The 10 ports will be:
– base port
– base port+1
– base port+2
– base port+3
– base port+5
– base port+6
– base port+8
– base port+10
– base port+12
– base port+13

The TIP administrator ID and password are pre-filled with the WebSphere Application Server admin ID and password specified in step 11 (Device Server installation). We have chosen to install the TIP program rather than use an existing TIP. You have to specify the installation directory as well as the base port to be used; we accept the defaults. Click Next to continue.

Chapter 4. Tivoli Storage Productivity Center install on AIX 143
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 4-28 TIP panel 13.The authentication selection panel is displayed (Figure 4-29). This panel refers to the authentication method that will be used by Tivoli Storage Productivity Center to authenticate the users. Figure 4-29 Authentication panel If you have a valid Tivoli Integrated Portal instance on the system and it uses either OS-based or LDAP-based authentication, then Tivoli Storage Productivity Center will use that existing authentication method. Otherwise, select the authentication method to use: – OS Authentication:144 Tivoli Storage Productivity Center V4.2 Release Update
This uses the operating system for user authentication.

– LDAP/Active Directory: If you select LDAP or Microsoft Active Directory for authentication, you must have an LDAP or Active Directory already installed and set up.

In our case, we choose OS Authentication, then click Next to continue.

14. The summary information panel is displayed (Figure 4-30). Review the information; at this stage, it is a good idea to check that you have sufficient space in the required file systems. Click Install to continue.

Note: Remember that the Replication Server is included in the installation of Tivoli Storage Productivity Center V4.2 by default, as mentioned before.

Figure 4-30 Summary information

15. You will see the installing panel indicating the various stages within the installation process, as shown in the following example. The installation starts with the Data Server installation, as seen in Figure 4-31. The installer proceeds to each subsequent component after the previous component has installed successfully.

Note: If the installer fails to install a specific component, the process will stop and the installer will uninstall all components.

Chapter 4. Tivoli Storage Productivity Center install on AIX 145
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am Figure 4-31 Data Server install The Installing Device Server panel is shown in Figure 4-32. You can see various messages during the Device Server installation process, and when complete, the installer will briefly display the installing panel for the GUI, CLI, and Agents (if selected). When done, the installing TIP panel is displayed as seen in Figure 4-33. Figure 4-32 Device Server installer146 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm Figure 4-33 Tivoli Integrated Portal installing Important: During the installation of Tivoli Integrated Portal on AIX systems, the progress bar incorrectly indicates that the Tivoli Integrated Portal installation is 100% complete even though it is not yet complete. Continue to wait until the installation is complete. The installation of Tivoli Integrated Portal can be a time consuming exercise, so be patient. TPC for Replication installation After the TIP installation has completed, the TPC for Replication installation is launched. The TPC installation is temporarily suspended in the background, and the TPC for Replication panel is displayed as seen in Figure 4-34. Figure 4-34 TPC for Replication installation is launched Chapter 4. Tivoli Storage Productivity Center install on AIX 147
• 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am To install TPC for Replication, we follow steps a through j: a. The Welcome panel is displayed as seen in Figure 4-34; choose Next to continue. IMPORTANT: If you are not planning to use TPC for Replication and you attempt to cancel or bypass the installation, it will result in an interruption in the installation process, which will invoke a complete Tivoli Storage Productivity Center installation rollback. b. The System prerequisites check panel is displayed (Figure 4-35). At this stage the wizard checks that the operating system meets all prerequisite requirements and has the necessary fix packs installed. Figure 4-35 System check c. If the system passes the check, as seen in Figure 4-36, click Next to continue. Figure 4-36 System check complete148 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm d. Accept the License Agreement as seen in Figure 4-37. Click Next to continue. Figure 4-37 License agreement e. Select the Directory Name where you want to install TPC for Replication. You can choose a directory either by changing the location or by accepting the default directory, as we have done in Figure 4-38. Click Next to continue. Figure 4-38 Directory Name Chapter 4. Tivoli Storage Productivity Center install on AIX 149
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am f. The TPC Administrator user panel is displayed (Figure 4-39). You are required to enter the user ID and password that will be used; this ID is usually the operating system administrator user ID. We choose the root user ID. Note: If you prefer to use another user ID, you are required to create it beforehand and ensure that it has administrator/system rights. Figure 4-39 TPC-R User ID and Password g. The Default WebSphere Application Server ports panel is displayed (Figure 4-40). Accept the defaults. Click Next to continue. Figure 4-40 Default ports150 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm h. The settings panel is displayed (Figure 4-41). Review the settings and make the necessary changes if needed by clicking Back. Otherwise, click Install to continue. Figure 4-41 Summary display i. The TPC for Replication installation progress panel is displayed (see Figure 4-42). Figure 4-42 TPC-R progress panel Chapter 4. Tivoli Storage Productivity Center install on AIX 151
    • 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am j. The TPC-R installation result panel is displayed (see Figure 4-43). Notice that the URL to connect to TPC-R is displayed. Click Finish to continue. Figure 4-43 Installation results Note: Tivoli Storage Productivity Center for Replication is installed with FlashCopy as the only licensed service. You must install the Two Site or Three Site Business Continuity (BC) license in order to use synchronous Metro Mirror and asynchronous Global Mirror capabilities. 16.After the TPC-R installation has completed, the Tivoli Storage Productivity Center Installer will resume as seen in Figure 4-44. Figure 4-44 Tivoli Storage Productivity Center installation continues152 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 17.The Tivoli Storage Productivity Center installation results panel is displayed (see Figure 4-45). Click Finish to continue. Figure 4-45 Tivoli Storage Productivity Center installation results Chapter 4. Tivoli Storage Productivity Center install on AIX 153
• 7894AIXInstall.fm Draft Document for Review February 17, 2011 2:17 am Verifying the installation At the end of the installation, it is a good idea to make sure that all the components have been installed successfully and that Tivoli Storage Productivity Center is in good working order. To test this on AIX, we have chosen to launch the Tivoli Integrated Portal and then launch the Tivoli Storage Productivity Center user interface. In TPC, we confirm that all servers are started and running, using the following steps: 1. We launch the TIP portal using the URL specific to our environment (https://tpc_server_name:16316/ibm/console/logon.jsp). We log in using the root account as shown in Figure 4-46. Figure 4-46 TIP Login 2. Start the Tivoli Storage Productivity Center user interface (see Figure 4-47). Figure 4-47 Tivoli Storage Productivity Center user interface154 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894AIXInstall.fm 3. Verify that all services are started (Figure 4-48); the service nodes should be displayed in green. Figure 4-48 Data and Device services Chapter 4. Tivoli Storage Productivity Center install on AIX 155
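If you prefer to double-check from the AIX command line, the following is a minimal sketch. The Data Server subsystem name (TSRMsrv1) is the one used elsewhere in this book, but the trailing number can vary on your system, and the process name patterns are assumptions to adjust for your installation:
  # Check the Data Server subsystem status
  lssrc -s TSRMsrv1
  # Confirm the Device Server (embedded WebSphere) java process is running
  ps -ef | grep -i deviceServer | grep -v grep
  # Confirm the DB2 instance engine process is running
  ps -ef | grep db2sysc | grep -v grep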
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 5 Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level In this chapter we describe the migration of the TPC base code to the TPC 4.2 level. The chapter also covers special considerations that you have to be aware of during the upgrade. This chapter includes the following topics:
– migration scenarios
– prerequisites for the upgrade to 4.2
– database considerations
– TPC-R considerations
– Agent Manager, Data and Fabric agent considerations
– SRA migration
– upgrading TPC-R in a high availability relationship
© Copyright IBM Corp. 2010. All rights reserved. 157
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am5.1 Migration considerations If you are planning to upgrade your existing Tivoli Storage Productivity Center to Tivoli Storage Productivity Center version 4.2, you have to plan the migration steps and consider the prerequisites before you do the installation. To upgrade Tivoli Storage Productivity Center, you use the same installation program that is used to install the product. When you upgrade IBM Tivoli Storage Productivity Center, you are upgrading all installed components including the database schema. If a component fails to upgrade, then just that component is not upgraded. You can migrate previous TotalStorage Productivity Center 3.3.2 or later releases and TotalStorage Productivity Center for Replication version 3.3.2 to Tivoli Storage Productivity Center version 4.2. Note: The best practice is to migrate from 3.3.2 to 4.1, and then upgrade from 4.1 to 4.2. Because Tivoli Storage Productivity Center for Replication is no longer a stand-alone application (as of TPC version 4.1), when you upgrade from Tivoli Storage Productivity Center for Replication version 3.3.2, Tivoli Storage Productivity Center version 4.2 installs Tivoli Integrated Portal and Tivoli Storage Productivity Center for Replication version 4.2.5.1.1 Prerequisites Before starting the upgrade, you have to ensure that your system meets the hardware and software requirements of Tivoli Storage Productivity Center version 4.2. You can check these at the following location: http://www.ibm.com/support/entry/portal/Planning/Software/Tivoli/Tivoli_Storage_Productivity_Center_Standard_Edition5.1.2 Database considerations If you are planning to upgrade Tivoli Storage Productivity Center, you must consider the database repository, because Tivoli Storage Productivity Center 4.2 supports only DB2 as the database repository. The following DB2 versions are supported with Tivoli Storage Productivity Center 4.2: – DB2 Enterprise Server Edition version 9.1 with fix pack 2 or later – DB2 Enterprise Server Edition version 9.5 with fix pack 3a or later – DB2 Enterprise Server Edition version 9.7 Important: Do not use DB2 v9.7 FP1 or FP2 with TPC; these fix packs cause problems with TPC (for example, database crashes). APAR IC69087 was opened for DB2 to provide a fix in a future DB2 v9.7 fix pack. You can check the database repository support at the following location: http://www.ibm.com/support/docview.wss?uid=swg27019380 If you have DB2 version 8.2, DB2 version 9.1, or DB2 version 9.5 installed, we recommend migrating and upgrading to DB2 version 9.7. These are the general steps to follow:158 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 1. Stop the IBM Tivoli Storage Productivity Center services and Agent Manager (if you have Agent Manager installed). 2. Pre-check the database for migration. 3. Back up the database. 4. Install DB2 9.7. 5. Migrate the DB2 instance. 6. Migrate the database. 7. Verify the migration. 8. Start the IBM Tivoli Storage Productivity Center services and Agent Manager (if you have Agent Manager installed). For more information about the upgrade to DB2 version 9.7, see the TPC 4.2 Installation and Configuration Guide, SC27-2337-02. For more information about the upgrade to DB2 version 9.7, see also Upgrade to DB2 Version 9.7 in the IBM DB2 Information Center at the following location: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.qb.upgrade.doc/doc/c0023662.html Note: If the Tivoli Storage Productivity Center database is on a remote system from the server, you must also upgrade the remote database.5.1.3 TPC-R considerations You do not need to uninstall the previous version of Tivoli Storage Productivity Center for Replication to upgrade to version 4.2. Tivoli Storage Productivity Center 4.2 can be installed on an existing version 3.x or 4.x installation if you have met the hardware and software requirements needed to support Tivoli Storage Productivity Center. With version 4.2, Tivoli Storage Productivity Center for Replication no longer supports DB2 as the datastore for its operational data. Tivoli Storage Productivity Center for Replication uses an embedded repository (Derby database) for its operational data. The Tivoli Storage Productivity Center for Replication 4.2 installation program automatically migrates any Tivoli Storage Productivity Center for Replication operational data in an existing Tivoli Storage Productivity Center for Replication DB2 database to the Tivoli Storage Productivity Center for Replication embedded repository as part of upgrading to Tivoli Storage Productivity Center for Replication 4.2 from an earlier version. If you do not use or do not plan to use Tivoli Storage Productivity Center for Replication, do not interrupt the upgrade installation by clicking the Cancel button in the InstallShield Wizard for Tivoli Storage Productivity Center for Replication. Click the Next button and finish the installation of Tivoli Storage Productivity Center for Replication. Note: If you already have TPC-R disabled, you do not have to start it before the TPC upgrade. The InstallShield Wizard will start the TPC-R service and continue with the installation. If you do not plan to use Tivoli Storage Productivity Center for Replication, you can disable it after the upgrade. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 159
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Disabling Tivoli Storage Productivity Center for Replication To disable Tivoli Storage Productivity Center for Replication, follow these steps. On Windows: 1. To disable the TPC for Replication server, go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the following service: IBM WebSphere Application Server V6.1 - CSM 2. Select Properties, as shown in Figure 5-1. Figure 5-1 TPC-R Server service properties160 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 3. On the panel shown in Figure 5-2, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel. Figure 5-2 Disabling TPC-R Server On Linux and AIX: 1. To stop the Tivoli Storage Productivity Center for Replication Server on Linux and AIX, issue the following command from the command prompt, as shown in Figure 5-3: /opt/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> 2. Here, <username> is the user ID and <password> is the password created during installation. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 161
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-3 Stop TPC-R Server 3. To disable the Tivoli Storage Productivity Center for Replication Server from starting on system reboot, you must edit /etc/inittab and comment out the line that starts up Tivoli Storage Productivity Center for Replication, as shown in Figure 5-4. Figure 5-4 Edit /etc/inittab If you plan to use only Tivoli Storage Productivity Center for Replication, you can disable Tivoli Storage Productivity Center after the upgrade.162 Tivoli Storage Productivity Center V4.2 Release Update
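A minimal command sketch of this edit follows; the inittab entry label varies by installation, so we locate the line by its path, and the backup file name is our own choice:
  # Find the TPC-R autostart entry by searching for the replication path
  grep -n "replication" /etc/inittab
  # Save a copy of the file before editing it
  cp /etc/inittab /etc/inittab.bak
Then open /etc/inittab in an editor and comment out the matching line (on AIX, prefix the line with a colon; on Linux, with a number sign). On AIX you can alternatively remove the entry with rmitab <entry_label> once you have identified its label.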
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Disabling Tivoli Storage Productivity Center To disable the Tivoli Storage Productivity Center Server, follow these steps: On Windows: 1. To disable TPC, go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the following service: IBM WebSphere Application Server V6.1 - DeviceServer 2. Select Properties, as shown in Figure 5-5. Figure 5-5 Service properties Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 163
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am 3. On the panel shown in Figure 5-6, select Disabled under the Startup type menu and click the Stop button in the Service Status section. When the service has been stopped, click OK to close this panel. Figure 5-6 Disable service 4. Repeat the same procedure for the following services: – IBM Tivoli Storage Productivity Center - Data Server – IBM Tivoli Common Agent - <directory> (<directory> is where the Common Agent is installed. The default is <TPC_install_directory>\ca) – IBM Tivoli Storage Resource agent - <directory> (<directory> is where the Storage Resource agent is installed. The default is <TPC_install_directory>\agent) – Tivoli Integrated Portal - TIPProfile_Port_<xxxxx> (<xxxxx> indicates the port specified during installation. The default port is 16310.) – IBM ADE Service (Tivoli Integrated Portal registry) Note: Stop Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services. On Linux: 1. To stop the Tivoli Storage Productivity Center services as seen in Figure 5-7, run these commands in the command prompt window: Data Server: /<usr or opt>/IBM/TPC/data/server/tpcdsrv1 stop Device Server: /<usr or opt>/IBM/TPC/device/bin/linux/stopTPCF.sh164 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 2. Depending on whether or not you have a Data agent or Storage Resource agent installed, issue these commands accordingly: Common Agent: /<usr or opt>/IBM/TPC/ca/endpoint.sh stop Storage Resource agent: /<usr or opt>/IBM/TPC/agent/bin/agent.sh stop Figure 5-7 Stop TPC services on Linux On AIX: 1. To stop the Tivoli Storage Productivity Center services as seen in Figure 5-7, run these commands in the command prompt window: Data Server: stopsrc -s TSRMsrv1 Device Server: /<usr or opt>/IBM/TPC/device/bin/aix/stopTPCF.sh 2. Depending on whether or not you have a Data agent or Storage Resource agent installed, issue these commands accordingly: Common Agent: /<usr or opt>/IBM/TPC/ca/endpoint.sh stop Storage Resource agent: /<usr or opt>/IBM/TPC/agent/bin/agent.sh stop 3. To disable the Tivoli Storage Productivity Center Server from starting on system reboot, you must edit /etc/inittab and comment out the line that starts up Tivoli Storage Productivity Center, as shown in Figure 5-8. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 165
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-8 Disable TPC Stop Tivoli Integrated Portal on AIX and Linux: 1. To stop Tivoli Integrated Portal, run this command in a command prompt window as shown in Figure 5-9: <install_directory>/tip/profiles/TIPProfile/bin/stopServer.sh server1 -username <tipadmin> -password <password> Here, <tipadmin> is the administrator user ID and <password> is the administrator password. Wait for the server to complete the operation. 2. To stop the IBM ADE Service, run this command in a command prompt window: Source the environment: . /var/ibm/common/acsi/setenv.sh Run this command: /usr/ibm/common/acsi/bin/acsisrv.sh stop Note: Stop Tivoli Integrated Portal and IBM ADE Service only if no other applications are using these services.166 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-9 Stopping TIP5.2 Credentials migration tool With Tivoli Storage Productivity Center version 4.2, new native storage system interfaces are provided for DS8000, SAN Volume Controller, and XIV storage systems. The native interfaces replace the CIM agent (SMI-S agent) implementation for these storage systems. When you upgrade Tivoli Storage Productivity Center to version 4.2, you must migrate the existing storage system credentials to the native interfaces. If you are upgrading Tivoli Storage Productivity Center, a storage subsystem credentials migration tool is provided to help you migrate the existing storage system credentials to the native interfaces. The migration tool can migrate the existing storage system credentials automatically after you confirm the migration of the device credentials by clicking the Update button in the tool. The native interfaces are supported for the following release levels: – DS8000: release 2.4.2 or later – SAN Volume Controller: version 4.2 or later – XIV: version 10.1 or later If Tivoli Storage Productivity Center can access the DS8000 using the existing credentials, you can continue to use the existing credentials. For XIV and SAN Volume Controller storage systems, you must manually update the credentials. You can migrate your storage system credentials in one of the following ways: – Use the migration tool before the upgrade, from the Tivoli Storage Productivity Center installation packages. – Run the migration tool during the Tivoli Storage Productivity Center upgrade procedure: run the Tivoli Storage Productivity Center upgrade installation program and specify that you want to use the migration tool. – Use the Tivoli Storage Productivity Center GUI after the upgrade to migrate the credentials. Note: We recommend using the migration tool before you start the upgrade to Tivoli Storage Productivity Center 4.2, in order to check and prepare all your monitored and managed devices so that they are ready immediately after the upgrade. The migration tool stores the credentials in the database repository. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 167
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Note: If a device is in the probe definition before the upgrade to TPC 4.2, it will not show up in the Configuration Device wizard, because the Configuration Device wizard is only for configuring devices that are unconfigured for monitoring. In this case, we recommend running the migration tool before the upgrade. Run the credentials migration tool before the upgrade You can migrate your storage system credentials before you start the upgrade of Tivoli Storage Productivity Center. Run the migration tool from the UserMigrationTool directory by starting MigrateUserInfo.bat (for Windows) or MigrateUserInfo.sh (for UNIX® or Linux). This opens the User Credentials Migration Tool window, which shows a table of the subsystems that can be updated (Figure 5-10). Figure 5-10 User Credentials Migration Tool window If you run the credentials migration tool after you have already upgraded your Tivoli Storage Productivity Center to version 4.2, you get the following error (Figure 5-11). In this case, you have to run the credentials migration tool from the Tivoli Storage Productivity Center GUI. Figure 5-11 User Credentials Migration Tool error168 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Note: On Windows, a new DLL, msvcr90.dll, is required to run the tool. If it is not installed, the migration tool will not start. If that happens, start the TPC 4.2 installer, choose the language, and accept the license terms. At that point, the required DLL is installed. You can then go back and launch the stand-alone migration tool. Run the credentials migration tool during the upgrade You can migrate your storage system credentials during the upgrade of Tivoli Storage Productivity Center. After you start the TPC installer program and specify the database repository information in the installer window, the window shown in Figure 5-12 opens, where you specify that you want to run the Storage Subsystem Credential Migration Tool. Figure 5-12 Credential Migration tool selection within TPC GUI installer If you select that you want to run the tool, the User Credentials Migration Tool window opens after you click the Install button on the summary window. The window shows a table of the subsystems that can be updated (Figure 5-10). Run the credentials migration tool after the upgrade You can migrate your storage system credentials after you have successfully upgraded your Tivoli Storage Productivity Center to version 4.2. When you start and log in to the Tivoli Storage Productivity Center GUI, the window shown in Figure 5-13 opens. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 169
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-13 Tivoli Storage Productivity Center GUI Welcome screen When you click the Update Subsystems button, the panel for changing credentials opens (Figure 5-14). If you close the Welcome screen, you can also start the panel from the Navigation Tree under Configuration → Update Storage Subsystem Credentials. Figure 5-14 Update Storage Subsystem Credentials panel170 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Note: The storage system credential migration applies to all DS8000 systems, XIV systems, and SAN Volume Controller systems. If you have run a CIMOM discovery job for a storage system but have not run a probe job for that system, and you then upgrade Tivoli Storage Productivity Center, the IP address does not display in the GUI. You must manually enter the IP address for that storage system.5.3 Agent Manager, Data and Fabric agents consideration Because the Storage Resource agents now perform the functions of the Data agents and Fabric agents, you no longer need to install and maintain the Agent Manager, Data agents, and Fabric agents. They are no longer part of Tivoli Storage Productivity Center and are not shipped with Tivoli Storage Productivity Center Version 4.2. Tivoli Storage Productivity Center Version 4.2 supports existing Data and Fabric agents and the Agent Manager, but without any new functions and with some limitations. When you migrate Tivoli Storage Productivity Center to version 4.2, Data and Fabric agents can be migrated using a migration function that has been developed to assist you with migrating to the new Storage Resource agents. The migration function was designed so that if the migration fails, the Data and Fabric agents are restored and restarted. You can either continue to use the legacy agents or retry the migration. If you choose not to migrate the legacy agents as part of the server upgrade, the graphical installer can be launched at a later time to migrate the legacy agents. Data and Fabric agent migration and upgrade is described in detail in Chapter 4. The Tivoli Storage Productivity Center 4.2 installation program does not support installation of the legacy Data agent or Fabric agent. If you want to install the legacy Data agent or Fabric agent, you must have a previous Tivoli Storage Productivity Center installation program that supports installing the Data agent or Fabric agent. If you are planning to use existing legacy Data and Fabric agents, Tivoli Storage Productivity Center must be registered to the Agent Manager. When you migrate the Tivoli Storage Productivity Center base code to version 4.2, the existing Agent Manager remains registered. If you are planning to migrate Data and Fabric agents to the new Storage Resource agents, the Agent Manager is no longer required and can be uninstalled. (Reference to TPC 4.1.1. Uninstalling the Agent Manager) Note: Any version of Agent Manager 1.3.2 supports DB2 9.1. For DB2 9.5 support, you need to use Agent Manager version 1.3.2.30, which is shipped with Tivoli Storage Productivity Center 4.1.1. If you are planning to use DB2 9.7, you must install Agent Manager 1.4.x or later. Agent Manager 1.3.x does not support DB2 v9.7. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 171
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Important: When running with a Tivoli Storage Productivity Center 4.2 server and a Data agent version 3.3.x or 4.1.x, you see the following limitations: When you are using a Tivoli Storage Productivity Center 4.2 server and a Data agent lower than version 4.1.0, you get error messages in the logs for the storage subsystem performance and switch performance reports (GEN0324E and GEN0008E) if there is data. These error messages do not affect the reports. The report job ends with a warning message. The job status is correct, and the job log reflects the results of the report. The performance constraint violation reports will not be able to run with a Tivoli Storage Productivity Center 4.2 server and a Data agent version 4.1.0 or lower. The Data agents have been removed from the agent list. You can migrate the Data agent to a Storage Resource agent to get a performance constraint violation report. You cannot create a batch report for Rollup Reports by clicking IBM Tivoli Storage Productivity Center → Reporting → Rollup Reports → Asset → Computers → By Computer. The Data agents have been removed from the agent list. You can migrate the Data agent to a Storage Resource agent to get a batch report for Rollup Reports. If you have Tivoli Storage Productivity Center 4.1.1 (or earlier) agents installed, and you want to continue to use them, Table 5-1 shows the valid upgrade scenarios. Table 5-1 Agent upgrade scenarios (using the Tivoli Storage Productivity Center V4.1.1 agent installation program for a local install on a non-TPC server)
– Data agent or Fabric agent or both installed (version 4.1.1 or earlier) on the local machine: If the Data agent or Fabric agent is down level, the agent will be upgraded to the latest V4.1.1 level. If the Data agent or Fabric agent is at the latest V4.1.1 level, you see a message that the agent is already installed.
– Storage Resource agent installed on the local machine: If the Storage Resource agent is at the latest V4.1.1 level, the Storage Resource agent is left as is. If the Storage Resource agent is not at the latest V4.1.1 level, the agent is migrated to a Data agent or Fabric agent.
– No agent installed: The Data agent or Fabric agent is installed.
5.4 Migration scenarios Depending on your existing installation, the upgrade scenarios shown in Table 5-2 are possible.172 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Table 5-2 Agent migration scenarios (TPC 4.1.1 or earlier versions to TPC 4.2)
– Data agent or Fabric agent or both are installed: You have a choice: leave the Data agent or Fabric agent at the down-level version, or migrate the Data agent or Fabric agent to a Storage Resource agent.
– Storage Resource Agent is installed: You have a choice to upgrade or not upgrade the Storage Resource agent to 4.2.
– No agent installed: The default Storage Resource agent is installed.
Note: You cannot use the Tivoli Storage Productivity Center V4.1.1 (or earlier) installation program on a Tivoli Storage Productivity Center V4.2 system. Note: You can use the Tivoli Storage Productivity Center V4.2 installation program to install a local Storage Resource agent on a system that does not have the Tivoli Storage Productivity Center server installed. You can also use the Tivoli Storage Productivity Center GUI to deploy the Storage Resource agents (from the server system).5.4.1 Migration from version 3.x You can upgrade previous TotalStorage Productivity Center version 3.3.2 or later releases and TotalStorage Productivity Center for Replication version 3.3.2 to Tivoli Storage Productivity Center version 4.2. The best practice is to upgrade in two steps:
Step 1: • upgrade DB2 to version 9.1 • upgrade from TPC version 3.3.2 to TPC version 4.1
Step 2: • upgrade from TPC version 4.1 to TPC version 4.2 • upgrade DB2 to version 9.7
Note: The upgrade from TPC version 3.3.2 to TPC version 4.1 is described in detail in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, Chapter 3.5.4.2 Migration from version 4.1 You can directly upgrade previous Tivoli Storage Productivity Center version 4.1 and Tivoli Storage Productivity Center for Replication version 4.1 to Tivoli Storage Productivity Center version 4.2. You have to check whether your existing database repository is supported with Tivoli Storage Productivity Center version 4.2. In this section, we show you how to upgrade the Tivoli Storage Productivity Center components when you have installed Data and Fabric agents. The chapter <MARY Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 173
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am CROSSREFERENCE> shows you how to upgrade the agents after TPC is successfully upgraded to version 4.2. Before proceeding with the upgrade, there are various steps that must be performed. Preparing for migration of TPC components Follow these steps: 1. Exit all instances of the Tivoli Storage Productivity Center GUI. 2. If you are upgrading Tivoli Storage Productivity Center on a Windows server, make sure that you have exclusive access to the server you are installing TPC V4.2 on. If you are accessing the server remotely, make sure that there are no other remote connections to the server. Multiple remote connections, such as Windows Remote Desktop Connections, will cause the upgrade to fail and can render the server unrecoverable. To log off other remote users on Windows, follow these steps: a. Go to Start → Settings → Control Panel → Administrative Tools → Terminal Services Manager. b. On the Users tab, right-click the users who should not be logged on to the server and select Logoff from the pop-up menu (see Figure 5-15). Figure 5-15 Terminal Services Manager c. Close the Terminal Services Manager window. 3. Stop all the TPC services. To stop the services on Windows: Go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the service and select Stop. The following services have to be stopped: – IBM WebSphere Application Server V6 - Device Server – IBM TotalStorage Productivity Center - Data Server – IBM Tivoli Common Agent <directory> where <directory> is where the Common Agent is installed. The default is <TPC_install_dir>/ca. – IBM WebSphere Application Server v6.1 - CSM if you also have TPC for Replication174 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm To stop the services on Linux: – Device server: /<TPC_install_directory>/device/bin/linux/stopTPCF.sh – Data server: /<TPC_install_directory>/data/server/tpcdsrv1 stop – Common agent: /<common_agent_install_directory>/ca/endpoint.sh stop – Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh stop – IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user) To stop the services on AIX: – Device server: /<TPC_install_directory>/device/bin/aix/stopTPCF.sh – Data server: stopsrc -s TSRMsrv1 – Common agent: /<common_agent_install_directory>/ca/endpoint.sh stop – Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh stop – IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/stopServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user) 4. Back up your current TPC 4.1 server and databases (TPCDB and IBMCDB). IBMCDB is the Agent Manager database and TPCDB is the Tivoli Storage Productivity Center database. This is important in case of an upgrade failure: a. Back up your TPC database using the DB2 backup process (reference to 4.1 redbook and 4.2 install guide) b. For Tivoli Storage Productivity Center and Tivoli Integrated Portal single signon authentication configuration, back up the WebSphere configuration files. The configuration files are located in the following directories: TIP_installation_directory/profiles/TIPProfile/bin TPC_installation_directory/device/apps/was/profiles/deviceServer/bin The backup file is named: WebSphereConfig_yyyy_mm_dd.zip Where yyyy is the year, mm is the month, and dd is the day. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 175
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Run the following commands on UNIX or Linux systems: /IBM/Tivoli/tip/profiles/TIPProfile/bin/backupConfig.sh /IBM/TPC/device/apps/was/profiles/deviceServer/bin/backupConfig.sh Run the following commands on Windows systems: IBM\Tivoli\tip\profiles\TIPProfile\bin\backupConfig.bat IBM\TPC\device\apps\was\profiles\deviceServer\bin\backupConfig.bat c. Back up the following registries: InstallShield registries Back up the following registries: AIX: /usr/lib/objrepos/InstallShield/Universal/IBM-TPC/ UNIX: /root/InstallShield/Universal/IBM-TPC Windows: C:\Program Files\Common Files\InstallShield\Universal\IBM-TPC SRM legacy registry Back up the following registries: AIX: subsystem TSRMsrv# where # can be any number UNIX: /etc/Tivoli/TSRM Windows registry Back up the Windows registry. Common agent registry (if you have Data agents and Fabric agents installed) Back up the following registries: AIX or UNIX: /usr/tivoli/ep*, /opt/tivoli/ep* Windows: C:\Program Files\Tivoli\ep* d. Back up the Tivoli GUID setting. Go to C:\Program Files\Tivoli\guid (for Windows) or /opt/tivoli/guid (for AIX or UNIX). Run the following command: tivguid -show >tpc_tivguid.txt e. Back up the Agent Manager files and directories if you have Agent Manager installed. AM_installation_directory/AppServer/agentmanager/config/cells/ AgentManagerCell/security.xml AM_installation_directory/AppServer/agentmanager/installedApps/ AgentManager.ear/AgentManager.war/WEB-INF/classes/resources/ AgentManager.properties AM_installation_directory/os.guid AM_installation_directory/certs f. Back up Tivoli Storage Productivity Center server files and directories. TPC_installation_directory/config TPC_installation_directory/data/config TPC_installation_directory/device/config g. Back up the Data agent and Fabric agent files and directories (if you have the Data agent and Fabric agent installed). TPC_installation_directory/config TPC_installation_directory/ca/cert TPC_installation_directory/ca/config TPC_installation_directory/ca/*.sys TPC_installation_directory/ca/subagents/TPC/Data/config176 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm TPC_installation_directory/ca/subagents/TPC/Fabric/config h. Back up any interim fixes or work-around code provided by Tivoli Storage Productivity Center support. 5. Restart all TPC services. To start the services on Windows: Go to Start → Settings → Control Panel → Administrative Tools → Services. Right-click the service and select Start. The following services need to be restarted: – IBM WebSphere Application Server V6 - Device Server – IBM TotalStorage Productivity Center - Data Server – IBM Tivoli Common Agent <directory> where <directory> is where the Common Agent is installed. The default is <TPC_install_dir>/ca. – IBM WebSphere Application Server v6.1 - CSM if you also have TPC for Replication To start the services on Linux: – Device server: /<TPC_install_directory>/device/bin/linux/startTPCF.sh – Data server: /<TPC_install_directory>/data/server/tpcdsrv1 start – Common agent: /<common_agent_install_directory>/ca/endpoint.sh start – Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh start – IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/startServer.sh server1 -username <username> -password <password> (where <username> represents the ID of the TPC superuser and <password> represents the password for that user) To start the services on AIX: – Device server: /<TPC_install_directory>/device/bin/aix/startTPCF.sh – Data server: startsrc -s TSRMsrv1 – Common agent: /<common_agent_install_directory>/ca/endpoint.sh start – Storage Resource agent: /<SRA_install_directory>/agent/bin/agent.sh start – IBM WebSphere Application Server V6.1 - CSM: /<usr or opt>/IBM/replication/eWAS/profiles/CSM/bin/startServer.sh server1 -username <username> -password <password> Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 177
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am (where <username> represents the ID of the TPC superuser and <password> represents the password for that user) Note: If possible, reboot the Tivoli Storage Productivity Center servers. This action will stop any remaining Tivoli Storage Productivity Center Java processes that might not stop in a timely manner. It is important for the Tivoli Storage Productivity Center Device server to stop and restart cleanly. If this does not occur, a server reboot might be indicated. 6. Stop all Tivoli Storage Productivity Center jobs: – Stop all jobs including performance monitor jobs, system and fabric probe jobs, scan jobs, and other probe jobs. Migration of TPC components After the Tivoli Storage Productivity Center server and services are started, you can start the migration of Tivoli Storage Productivity Center. We use the same installation program used for installing the product. Depending on the components already installed on the system, various panels are displayed. To perform the upgrade on a Windows machine, execute the following procedure: 1. Double-click the setup.exe file located in the directory where you extracted the installation images. 2. Choose the language to be used for the installation and click OK (see Figure 5-16). Figure 5-16 Language selection panel 3. The License Agreement panel is displayed. Read the terms and select I accept the terms of the license agreement. Then click Next to continue (see Figure 5-17 on page 179).178 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-17 License panel 4. Figure 5-18 shows how to select typical or custom installation. You have the following options: – Typical installation: This selection allows you to upgrade all of the components on the same computer. You have the options to select Servers, Clients, and Storage Resource Agent. – Custom installation: This selection allows you to select the components that you can upgrade. – Installation licenses: This selection installs the Tivoli Storage Productivity Center licenses. The Tivoli Storage Productivity Center license is on the DVD. You only need to run this option when you add a license to a Tivoli Storage Productivity Center package that has already been installed on your system. Note that the installation directory field is automatically filled with the TPC installation directory on the current machine and grayed out. In our case, a previous version of TPC is already installed in the C:\Program Files\IBM\TPC directory. Select Custom Installation and click Next to continue. Note: We recommend selecting the custom installation, which allows you to install each component of Tivoli Storage Productivity Center separately. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 179
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-18 TPC custom installation 5. The panel with the TPC components is shown. The components already installed on the system are discovered, selected for upgrade, and grayed out. The current version of each component is displayed next to it. In our case, we have TPC version 4.1.1.55 installed on our system without local Data agents or Fabric agents. Figure 5-19 shows the corresponding panel. Click Next to proceed with the installation. Note: The Storage Resource Agent will be upgraded using the TPC user interface after the TPC upgrade.180 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-19 TPC components panel 6. If you are running the upgrade on a system with at least 4 GB but less than 8 GB of RAM, you will get the warning message shown in Figure 5-20. You can close the message panel by clicking OK. Note: 8 GB of RAM is the minimum memory requirement to run both Tivoli Storage Productivity Center and Tivoli Storage Productivity Center for Replication. If you have less than 8 GB of RAM, you have to run only Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication because of the system load. To do that, you must disable Tivoli Storage Productivity Center or Tivoli Storage Productivity Center for Replication after installation. Figure 5-20 TPC Memory warning message Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 181
    • 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am 7. The DB2 user ID and password panel is shown as in Figure 5-21. The information in these fields is propagated. Click Next to proceed. Figure 5-21 TPC DB2 Administrator panel 8. The Database Schema panel is shown as in Figure 5-22. All the information in this panel is already propagated. Verify it and click Next to continue.182 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-22 TPC Database Schema panel 9. In the TPC servers panel shown in Figure 5-23, verify that the fields are filled with the correct information. Also, the password fields are filled with propagated information. Click Next when you are done. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 183
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-23 TPC Servers panel 10.If Tivoli Storage Productivity Center detects that you have a DS8000, XIV, or SAN Volume Controller storage system, the "Storage Subsystem Credential Migration Tool" panel is displayed. Figure 5-24 shows the “Storage Subsystem Credential Migration Tool” panel, which helps you migrate the existing storage system credentials to the native interfaces. If you want to run the migration tool after the upgrade, uncheck the box for Run Storage Subsystem Credential Migration Tool. Otherwise, check the box for this option. Click Next.184 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-24 Storage Subsystem Credential Migration Tool 11.If the validation is successful, the summary panel shown in Figure 5-25 is presented. Review its content and click Install to start the upgrade. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 185
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-25 Summary panel Note: During the upgrade you will not see the Tivoli Integrated Portal installation panels. 12.The upgrade starts by deploying the Storage Subsystem Credential Migration Tool (Figure 5-26).186 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-26 Deploying Storage Subsystem Credential Migration Tool 13.The Storage Subsystem Credential Migration Tool panel in Figure 5-27 will open, where you can select the subsystems with credentials that can be updated automatically. Select the subsystems that you want to update and click Update. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 187
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-27 Credentials Migration tool panel Note: During this upgrade we update only the DS8000 subsystem. Updating SVC credentials is described in detail in Chapter xx 14.After you click Update, the subsystem is updated and removed from the table list. When you have updated the selected subsystems, click the Finish button, and click Yes to confirm closing the Storage Subsystem Credential Migration Tool panel (Figure 5-28). Figure 5-28 Confirmation to close Credentials Migration tool panel 15.Multiple panels such as the ones in Figure 5-29 and Figure 5-30 are now shown.188 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-29 Installing DB schema Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 189
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-30 Installing Device Server Note: When you are upgrading the system, you might see several windows prompting you with the text Replace Existing File. Reply Yes to All to these prompts. Sometimes this dialog window is hidden behind the main installation panel. Make sure you check behind the main installation panel to see if there are any hidden dialog panels. 16.During the TPC upgrade, the TPC for Replication upgrade program is launched. The TPC installation is temporarily suspended and remains in the background while the TPC for Replication installation starts and a Welcome panel is displayed (see Figure 5-31). If TPC for Replication is already installed on your system, it will be upgraded; if it is not present, it will be installed. In our system we have a previous version of TPC for Replication already installed, so the following panels show a TPC for Replication upgrade. If this is the first time that TPC for Replication is installed on the system, the installation process and panels will be the same as those shown in Chapter xxx190 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-31 TPC for Replication Welcome panel 17.The installation wizard checks on the system prerequisites to verify that the operating system is supported and the appropriate fix packs are installed. If the system passes the prerequisites check, the panel shown in Figure 5-32 is displayed. Click Next to continue. Figure 5-32 System prerequisites check 18.The license agreement panel is shown. Accept it and click Next as shown in Figure 5-33. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 191
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-33 License Agreement Panel 19.In the panel shown in Figure 5-34, you can select the directory where TPC for Replication will be installed. The directory where TPC for Replication is currently installed is proposed as the default location. You can accept it or change it based on your requirements. Click Next to continue. Figure 5-34 TPC-R Installation directory 20.The upgrade program checks for currently running TPC for Replication instances. If a running instance is found, the message shown in Figure 5-35 is presented. Clicking Yes continues the TPC for Replication installation; the TPC for Replication service restarts during the upgrade. Figure 5-35 Restart TPC-R server during the upgrade192 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 21.Review the settings shown in Figure 5-36 and click Install to start the upgrade. Figure 5-36 TPC-R Summary panel 22.The installation of TPC for Replication starts. Several messages about the installation process are shown, such as the following in Figure 5-37, Figure 5-38 and Figure 5-39. Figure 5-37 Stopping TPC-R server Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 193
    • 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-38 Installing TPC-R Figure 5-39 Starting TPC-R194 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm 23.After the TPC for Replication upgrade completes, a summary panel is shown that also reports the URL at which a Web browser can be pointed to access the TPC-R Web UI (see Figure 5-40). Clicking the Finish button closes this panel, and the installation flow goes back to the TPC installation panels. Figure 5-40 TPC-R Summary panel 24.The TPC installation continues its flow, creating the uninstaller for TPC, and completes with the summary information panel (Figure 5-41). Click Finish to complete the upgrade. Figure 5-41 TPC upgrade summary panel Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 195
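As a quick post-upgrade check on a Windows server, you can list the started services from a command prompt. This is only a convenience sketch; the service display names are those used earlier in this chapter, and the authoritative check remains the GUI verification described earlier:
  rem List started services whose display names mention Tivoli or WebSphere
  net start | findstr /I "Tivoli WebSphere"
The Data Server, Device Server, and (if installed) TPC for Replication CSM services should appear in the output.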
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Note: If you are upgrading TPC on AIX and if you see the panel indicating that the product has been 100% installed and receive the following message: /opt/IBM/TPC/service/service.sh exist on this system and is newer than the file being installed. Do you want to replace this file? Click Yes to All.5.5 Upgrading Storage Resource Agent You can upgrade the Storage Resource Agent from version 4.1 to version 4.2 using one of the following methods: – the Tivoli Storage Productivity Center installation wizard – the Tivoli Storage Productivity Center user interface – the Storage Resource Agent command line interface When planning the upgrade of the Storage Resource Agent, you have to consider which agents can be migrated, which platforms and functions are unsupported, and what the limitations are. The Storage Resource Agent is described in detail in Chapter 8, “Storage Resource Agent” on page 255. In this section we show you how to upgrade the Storage Resource Agent using the methods mentioned above. Note: We recommend using the Tivoli Storage Productivity Center user interface to upgrade the Storage Resource Agent, as this is the most common and typical method. It provides you with more details on the installation and any failures. If you use the TPC installation wizard to do the upgrade, you also have to use the wizard if you want to uninstall the agent. Installation wizard When upgrading the Tivoli Storage Productivity Center server using the installation wizard, you can select to upgrade the Storage Resource Agent. If you choose not to upgrade the agent as part of the server upgrade, you can launch the graphical installer at a later time to upgrade the agent. After you start the Tivoli Storage Productivity Center installation wizard, you can choose “Typical Installation” or “Custom Installation”. In this section we document the Custom Installation. When you select Custom Installation in Figure 5-42, the panel in Figure 5-43 opens, where you select Storage Resource Agent. Clicking Next opens the Storage Resource Agent panel (Figure 5-44), where you can enter the same options that are provided for a Storage Resource Agent installation. For details see Chapter XX or the Install Guide.196 Tivoli Storage Productivity Center V4.2 Release Update
    • Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-42 TPC install wizard - Custom Installation Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 197
    • 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-43 TPC install wizard - selecting Storage Resource Agent198 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-44 Storage Resource Agent information With the successful upgrade of Tivoli Storage Productivity Center, the Storage Resource Agent is also successfully upgraded. TPC user interface To upgrade the Storage Resource Agent using the Tivoli Storage Productivity Center user interface, follow these steps: 1. In the navigation tree, expand Administrative Services → Data Sources. Left-click Data/Storage Resource Agents (Figure 5-45) Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 199
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Figure 5-45 Selecting Data/Storage Resource Agent 2. In the content panel, select one or more agents that you want to upgrade and click Upgrade Agents (Figure 5-46). As you can see in Figure 5-46, the status of the Storage Resource Agents that have to be upgraded is “Need to upgrade agent software”. If you have enabled the automatic upgrade action, the Storage Resource Agent is upgraded automatically after you upgrade the TPC server. Figure 5-46 Upgrade Agents 3. The Create Storage Resource Agent Upgrade panel is displayed. Use this panel to select the computer and schedule an upgrade of the Storage Resource Agent (Figure 5-47). Figure 5-47 Create Storage Resource Agents upgrade 4. Run the upgrade job. The status of the Storage Resource Agent changes to “Upgrading agent software” (Figure 5-48); check the job status in Job Management (Figure 5-49).200 Tivoli Storage Productivity Center V4.2 Release Update
• Draft Document for Review February 17, 2011 2:17 am 7894Migrate.fm Figure 5-48 Upgrading Storage Resource Agent Figure 5-49 Job Management showing SRA upgrade 5. After the successful upgrade, the Storage Resource Agent status is “Up” (Figure 5-50) Figure 5-50 Storage Resource Agent is upgraded Note: The Storage Resource Agent upgrade job delivers the software upgrade packages to the agent computer. The job log displays the status of the delivery. The actual status of the upgrade is found in the agent log on the agent computer. If the agent log indicates that the upgrade failed and the state of the Storage Resource agent remains in the "Upgrading agent software" status, try restarting the agent and running the upgrade job again. Chapter 5. Migrate Tivoli Storage Productivity Center base code to current level 201
• 7894Migrate.fm Draft Document for Review February 17, 2011 2:17 am Command line interface You can upgrade the Storage Resource agent manually using the command line interface. To upgrade the Storage Resource agent, follow these steps: 1. Go to the location of the installation program (using the Storage Resource Agent image) and go to the bin directory: SRA_image_location/data/sra/operating_system_type/bin 2. From the bin directory, run the agent upgrade command: Agent -upgrade -installLoc agent_install_directory If the agent runs as a daemon service, you must also enter the -commType Daemon parameter. Enclose the agent install directory name in quotation marks. If the upgrade fails, you can check the return codes in the Tivoli Storage Productivity Center 4.2 Information Center using the following link: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp?topic=/com.ibm.tpc_V42.doc/fqz0_r_return_codes_used_by_strg_resource_agent.html5.6 Upgrading TPC-R in a high availability environment With Tivoli Storage Productivity Center for Replication version 4.2 or later, an embedded database is used for the database repository. DB2 is no longer supported as the database repository for Tivoli Storage Productivity Center for Replication. As a part of the upgrade process, an embedded repository is created and any data from an existing DB2 Tivoli Storage Productivity Center for Replication database is copied to the new database repository. This process is automatic and does not require any input or action. If you are running Tivoli Storage Productivity Center for Replication in a high availability environment, you have to upgrade both the active and the standby TPC-R servers. Whether you are upgrading TPC-R from DB2 to the embedded repository or you already have TPC-R with the embedded repository, these are the general procedures in a high availability environment (a command-line sketch of the takeover step appears at the end of this section): 1. Issue the takeover command to the standby server. 2. Upgrade the standby server. Note: Be aware that this action makes both the Tivoli Storage Productivity Center for Replication servers active. 3. Wait for the standby server to complete installation and start up. 4. Upgrade the active server. Note: While upgrading the active server, avoid making any configuration changes to the sessions. 5. If no changes have been made to the configuration while the active server is being upgraded, issue the takeover command and reestablish the High Availability function from the active server to the standby server. If configuration changes were made to the standby server, synchronize the High Availability function from the standby server to the active202 Tivoli Storage Productivity Center V4.2 Release Update
5.6 Upgrading TPC-R in a high-availability environment

With Tivoli Storage Productivity Center for Replication version 4.2 or later, an embedded database is used for the database repository. DB2 is no longer supported as the database repository for Tivoli Storage Productivity Center for Replication. As part of the upgrade process, an embedded repository is created and any data from an existing DB2 Tivoli Storage Productivity Center for Replication database is copied to the new database repository. This process is automatic and does not require any input or action.

If you are running Tivoli Storage Productivity Center for Replication in a high-availability environment, you have to upgrade both the active and the standby TPC-R servers. Whether you are upgrading TPC-R from DB2 to the embedded repository, or you already have TPC-R with the embedded repository, these are the general procedures in a high-availability environment:

1. Issue the takeover command to the standby server.

2. Upgrade the standby server.

Note: Be aware that this action makes both of the Tivoli Storage Productivity Center for Replication servers active.

3. Wait for the standby server to complete installation and start up.

4. Upgrade the active server.

Note: While upgrading the active server, avoid making any configuration changes to the sessions.

5. If no changes have been made to the configuration while the active server is being upgraded, issue the takeover command and reestablish the High Availability function from the active server to the standby server. If configuration changes were made to the standby server, synchronize the High Availability function from the standby server to the active server. Then perform a takeover operation and reestablish the High Availability function from the active server to the standby server.

During the initial synchronization, the current information in the database is saved and held until the synchronization is complete. If an error occurs during this process, the server database is restored to the state it was in before the synchronization process began. If an error during the synchronization process leaves the servers in a disconnected or inconsistent state, you can reconnect them to return to a synchronized state.
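In steps 1 and 5, the takeover can be issued from the TPC-R command-line interface on the server that should become active. The following is only a sketch; we assume the csmcli shell and its lshaservers and hatakeover commands, so verify the command names against the csmcli help for your TPC-R version:

   # From a csmcli session on the standby server that should take over
   csmcli> lshaservers        # display the high-availability server roles and status
   csmcli> hatakeover         # make this server the active server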
6

Chapter 6. Agent migration and upgrade

In this chapter we review the history of the Common Agent Services (CAS) agents and the Storage Resource Agent (SRA), and describe different scenarios for upgrading from CAS to SRA. We also cover CIMOM to NAPI migration recommendations.

© Copyright IBM Corp. 2010. All rights reserved.
6.1 CAS and SRA history

Before introducing Storage Resource Agents, Tivoli Storage Productivity Center used Tivoli Common Agent Services for software distribution and wanted-state management. To take advantage of some Tivoli Storage Productivity Center software management features, the Common agent had to be installed on all managed endpoints. The Common agent environment consisted of the Common Agent, the Agent Manager, and the Resource Manager. The agents were programs that automatically performed a service, such as data collection. Tivoli Storage Productivity Center used Common Information Model (CIM) agents, Data agents, Fabric agents, and out-of-band fabric agents to gather the data.

The Storage Resource Agent was introduced in Tivoli Storage Productivity Center 4.1 as a lightweight agent to collect host disk and file system information. With Tivoli Storage Productivity Center 4.2 it now provides full host monitoring functionality, including disk, file system, database, directory, and file information for a broad set of platforms. Storage Resource Agents do not require the Agent Manager and can be easily deployed to other systems using the Tivoli Storage Productivity Center GUI on the server system. Storage Resource Agents now perform the functions of the Data agents and Fabric agents. Tivoli Storage Productivity Center 4.2 uses Storage Resource Agents, CIM agents, and out-of-band fabric agents to gather host, application, storage system, and SAN fabric information and send that information to the Data server or Device server.

Note: Agent Manager can be used with Tivoli Storage Productivity Center version 4.2 to allow communication with legacy Data agents and Fabric agents that are present in the environment. However, no new functions were added to those agents for the 4.2 release. The Tivoli Storage Productivity Center 4.2 installation program does not support installation of the Data agent or Fabric agent. If you want to install the legacy Data agent or Fabric agent, you must have a previous Tivoli Storage Productivity Center installation program that supports installing the Data agent or Fabric agent.

For optimal results when using Tivoli Storage Productivity Center, we recommend migrating the Data agents and Fabric agents to Storage Resource Agents. Tivoli Storage Productivity Center provides the following benefits when you migrate an existing Data agent or Fabric agent to a Storage Resource Agent:

- Storage Resource Agents require fewer resources on a host computer than a Data agent or Fabric agent that is based on Common Agent Services. Additionally, you can deploy a Storage Resource Agent as a daemon or non-daemon service.
- Reduced complexity when deploying agents. You can deploy Storage Resource Agents directly from the Tivoli Storage Productivity Center user interface, and they do not require you to install the Agent Manager. For Data agents and Fabric agents, you must use the Tivoli Storage Productivity Center installation program and ensure that the Agent Manager is registered with the Data server and Device server.
- Improved interface when deploying, upgrading, and administering agents. You can manage Storage Resource Agents using the nodes in the Administrative Services section of the navigation tree. You can deploy and upgrade Storage Resource Agents on schedules that you define.
6.2 Prerequisites

When planning the migration of Data agents and Fabric agents, you have to consider which agents can be migrated, which platforms and functions are unsupported, and what the limitations are. You can check the following link for the agent support:

http://www-01.ibm.com/support/docview.wss?uid=swg27019380#Agents

6.3 Scenarios to migrate from CAS to SRA

In this section we describe different scenarios for migrating Data agents and Fabric agents to Storage Resource Agents. The migration process installs a Storage Resource Agent on a target host and then uninstalls the existing agent. You can migrate Data agents and Fabric agents to Storage Resource Agents by using one of the following methods:

- the Tivoli Storage Productivity Center installation wizard (for the server system only)
- the Tivoli Storage Productivity Center user interface
- the command line interface

When you upgrade Tivoli Storage Productivity Center using the installation wizard, legacy agents are migrated as part of the Tivoli Storage Productivity Center upgrade. You can also use the wizard if you already have a Tivoli Storage Productivity Center 4.2 server installed.

When you have a Tivoli Storage Productivity Center 4.2 server installed and are installing a Storage Resource Agent, these are the valid upgrade scenarios. Table 6-1 shows what happens when TPC 4.2 is installed on the server and you use the Tivoli Storage Productivity Center V4.2 installation program to install a Storage Resource Agent on a local machine:

Table 6-1 Agent upgrade scenarios on an existing installation
- Data agent or Fabric agent or both installed (version 4.1.1 or earlier): You can elect to migrate the Data agent or Fabric agent to a Storage Resource Agent.
- Storage Resource Agent is installed: The Storage Resource Agent is upgraded to 4.2.
- No agent installed: The default Storage Resource Agent is installed.

When you upgrade a Tivoli Storage Productivity Center agent using the user interface, these are the valid upgrade scenarios:

Table 6-2 Agent upgrade via the user interface (agent on the local computer, upgraded using the Tivoli Storage Productivity Center 4.2 user interface)
- Data agent or Fabric agent or both: Not supported. You can migrate a Data agent or Fabric agent to a Storage Resource Agent.
- Storage Resource Agent V4.1: The Storage Resource Agent is upgraded to the latest 4.2 level.
- Storage Resource Agent V4.2: The Storage Resource Agent is upgraded to the latest 4.2 level (use the force option).

When you upgrade a Tivoli Storage Productivity Center agent using the command line interface, these are the valid upgrade scenarios:

Table 6-3 Agent upgrade via the CLI (agent on the local computer, upgraded using the Tivoli Storage Productivity Center 4.2 command line)
- Data agent or Fabric agent or both: Not supported. Migrate the Data agent or Fabric agent to a Storage Resource Agent.
- Storage Resource Agent 4.1: The Storage Resource Agent is upgraded to the latest 4.2 level (the commType cannot be changed).
- Storage Resource Agent V4.2: The Storage Resource Agent is upgraded to the latest 4.2 level (the force option must be used).

6.3.1 Installation wizard

You can migrate the Data agent or Fabric agent to the Storage Resource Agent when you upgrade Tivoli Storage Productivity Center. To migrate the agents using the installation wizard, you only need to select Storage Resource Agent; the upgrade procedure handles the upgrade of the agents.

Note: When you upgrade agents to SRA using the local graphical installer, they need to be uninstalled with the local graphical uninstaller. If you attempt to uninstall an SRA that was installed using the local graphical installer from the TPC GUI, the request will be denied.

6.3.2 TPC user interface

You can migrate the Data agent or Fabric agent to a Storage Resource Agent using the Tivoli Storage Productivity Center user interface. To schedule a migration job for the Data agent and Fabric agent through the user interface, follow these steps:

1. In the navigation tree, expand Administrative Services → Data Sources. Left-click Data/Storage Resource Agents. On the right side you will see the agents and their state. In our example, the state indicates that the agent must be migrated (Figure 6-1).
Figure 6-1 Selecting Data/Storage Resource agent

2. Select the agent that you want to migrate and click the Migrate button. The Create Data/Fabric Agent Migration panel is displayed (Figure 6-2).
Figure 6-2 Migrate Data/Fabric Agent

3. The Computer Selection tab allows you to select machines that have Data agents, Fabric agents, or both (Figure 6-3). Select the computer and schedule a migration job in the When to Run tab.
Figure 6-3 Computer selection

Note: When a computer has both a Data agent and a Fabric agent, the migration job always migrates both agents; there is no option to migrate one and not the other. When both the Data agent and the Fabric agent are being migrated, the migration will migrate both to a single SRA. If the computer has only one agent, after migration the SRA will still be capable of performing both Data and Fabric functions. The concept of a well-placed Fabric agent has been removed in this release. See the fabric chapter of this book for details.

4. In the Options tab you can select how the Storage Resource Agent runs after the migration. In our example we select to run the agent as a daemon service (Figure 6-4).
Figure 6-4 Storage Resource Agent Runtime operations

5. When you click the Save button, the panel in Figure 6-5 appears. The job will not be saved until the verification is complete and the Proceed button is clicked.
Figure 6-5 Agent input verification

6. After the verification is completed, you can click the Proceed button and save the migration job (Figure 6-6).

Figure 6-6 Save migration job

7. Click OK and go to the Job Management panel to check the status of the migration job (Figure 6-7).
Figure 6-7 Migration job running

Note: Each migration job creates one job log, regardless of how many computers are selected. When multiple computers are being migrated, the migrations are performed simultaneously in a maximum of 10 threads. The progress of each computer can be tracked by host name. If the migration completes with warnings, the migration succeeded but there is some minor issue.

8. In our example the migration job completed with warnings because the migration process was not able to clean up some of the old files on the remote machine. This is a common issue on Windows (Figure 6-8).

Figure 6-8 Migration job warnings

9. Click the View Log File(s) button to view the details (Figure 6-9).
Figure 6-9 View Log File

10. In our example the Common Agent log files were not deleted. To finish the migration, any files under the TPC_install_dir\TPC\ca directory can be deleted manually.

11. After the successful migration, the Storage Resource Agent status is "Up" (Figure 6-10).

Figure 6-10 Storage Resource Agent status

6.3.3 Command line interface

You can migrate Data agents and Fabric agents to Storage Resource Agents using the command line interface. To migrate the Data agent or Fabric agent to the Storage Resource Agent, follow these steps:

1. Go to the location of the installation program and change to the bin directory:

   SRA_image_location/data/sra/operating_system_type/bin

2. From the bin directory, run the migrate command.

   For a daemon-based service, use this command:

   Agent -migrate -commType Daemon -serverPort 9549 -debug max

   For a non-daemon service, use this command:

   Agent -migrate -serverPort 9549 -userid myuserid -certFile mycertfile -passphrase mypassphrase -debug max

If the migration fails, you can check the return codes in the Tivoli Storage Productivity Center 4.2 Information Center using the following link:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp?topic=/com.ibm.tpc_V42.doc/fqz0_r_return_codes_used_by_strg_resource_agent.html
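As an illustration, a daemon-based migration on a Linux host might look like the following. The image path and platform directory are placeholders, and any non-zero exit status can be looked up in the return codes listed at the link above:

   cd /SRA_image_location/data/sra/linux/bin
   # Migrate the legacy Data/Fabric agents to a daemon-based Storage Resource Agent
   ./Agent -migrate -commType Daemon -serverPort 9549 -debug max
   # Check the exit status of the migration
   echo $?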
6.4 CIMOM to NAPI

When you upgrade to Tivoli Storage Productivity Center 4.2, a migration is required to switch to the Native API, because for XIV, SVC, and DS8000, Tivoli Storage Productivity Center 4.2 will only use the Native API. The migration can be done before or during the installation of Tivoli Storage Productivity Center 4.2, or even later on, but you will not be able to use such a device until you have completed the migration.

Migration

You have three options for migrating the CIMOM user credentials and access information to NAPI:

- You can provide the authentication information while still running an earlier Tivoli Storage Productivity Center version, before upgrading to Tivoli Storage Productivity Center 4.2, by running the stand-alone credential migration tool. The information will be stored in the database for later use.
- During the upgrade, the installer checks whether you have provided user authentication information for Native API devices. If not, the installer provides an option to launch the stand-alone credential migration tool.
- After upgrading to Tivoli Storage Productivity Center 4.2, you can use the Administrative Services → Data Sources → Storage Subsystems panel to provide the new authentication information. The Configure Devices wizard will usually not work, because typically the Native API devices are already part of a probe job.

The credentials migration tool is described in detail in Chapter 5, "Migrate Tivoli Storage Productivity Center base code to current level" on page 157.

If you migrate a NAPI device either prior to or as part of the upgrade to TPC 4.2, any embedded DS8000 CIMOMs, SVC CIMOMs, and XIV CIMOMs will be automatically deleted from TPC. Note, however:

- Proxy DS CIMOMs will NOT be automatically deleted, even if TPC knows of no other devices configured on that CIMOM.
- If the NAPI device is down at the time of the TPC Data server startup, its CIMOM will not be deleted.

If you are upgrading from TPC 4.1.1 to TPC 4.2 and you want to migrate an existing TPC 4.1.1 XIV CIMOM, note that:

- Previous historical data will be retained (true for all NAPI devices), but capability data will not be updated.
- After the upgrade, a reprobe of the subsystem is necessary to enable new 4.2 capabilities (for example, creating and deleting XIV volumes).
7

Chapter 7. Native API

In version 4.2, Tivoli Storage Productivity Center provides a new access method to gather information from devices. This method is called the Native API (NAPI) and is at this time only available for a limited number of disk storage subsystems. While this chapter focuses on the Native API method, we also explain some other new or changed parts of TPC to provide a full picture of device configuration and handling within TPC V4.2.

© Copyright IBM Corp. 2010. All rights reserved.
7.1 NAPI and other changes

The Device Server has been reshaped in some ways. While the most obvious change is the introduction of the Native API, here is a full list of new or modified items:

- XIV support has been enhanced in these areas:
  – added performance management
  – added provisioning capability
  – added alerts
- enhanced discovery:
  – supported subsystems will be discovered for NAPI even though they have CIM agents running
  – changes in the CIMOM discovery
- credential migration tool for switching from CIMOM to NAPI
- external processes spawned for NAPI-based tasks, controlled by the new External Process Manager
- changes to the Navigation Tree (see Figure 7-1 on page 218 for a quick overview):
  – added "Storage Subsystems" as a new entry under Administrative Services → Data Sources
  – "Out of Band Fabric" relabeled to "Switch and Subsystems (IP Scan)"
- new Configure Devices wizard (sometimes also called the device configuration wizard, but that is not the official name)

Figure 7-1 on page 218 shows the TPC Navigation Tree enhancements between V4.1 and V4.2.

Figure 7-1 Navigation Tree changes between TPC V4.1 and TPC V4.2 - overview
In the rest of this chapter we provide more information about the new or changed panels and tasks, and quickly explain the NAPI interface.

7.1.1 Changed panels and/or tasks

This is a short summary of what you should understand in terms of discovering, adding, and configuring devices in TPC V4.2; see Table 7-1.

Table 7-1 Changed tasks and their functions
- CIMOM Discovery: The CIMOM discovery has been enhanced to filter out subsystems that are accessed via NAPI.
- NAPI Discovery: This function has been combined with the existing IP scan, and the Navigation Tree item has been renamed to "Switch and Subsystem (IP Scan)".
- Storage Subsystems: This panel lists all disk storage subsystems, no matter which method is used to access the device. The advantage is that on this panel you can easily see which IP address is used to talk to a device, whereas the table on the Disk Manager → Storage Subsystems panel does not show that level of detail.
- Configure Devices: This is a new wizard that guides you through configuring devices with TPC. Usually you will use it when new devices have been discovered, or when you manually add devices. You can still do all the steps of the wizard manually, but it is a convenient help to do everything in a guided way. The Configure Devices dialog can be started from some panels by clicking the add device button (for example, Add Storage Subsystem on the Disk Manager → Storage Subsystems panel), and you can also start the wizard by clicking the wrench icon in the icon bar.

7.2 Behind the scenes - The External Process Manager

With the introduction of the Native API another architectural change has been introduced: the External Process Manager (EPM). This process manager is the link between TPC and the devices accessed via NAPI. It is called the External Process Manager because the jobs for the NAPI devices are now started as external processes in the operating system and no longer run as threads within the Device Server process. The advantage of this change is increased scalability and reliability.

Figure 7-2 shows the high-level architecture of the EPM. You can see that the EPM starts external processes for each kind of device and each type of job.
Figure 7-2 External Process Manager

Probes

With the implementation of the EPM there have been additional changes in the way TPC performs probes for multiple devices. For every device type there is a process running in the operating system. Each of those processes collects the information for one device at a time. As a result, the work runs in parallel for different device types, but sequentially for devices of the same type. See Figure 7-3 on page 220.

Figure 7-3 Running a probe for multiple devices

Performance Data
For the user, not much has changed with the introduction of the EPM. The minimum interval at which TPC collects performance data is still 5 minutes for NAPI-attached devices, so every interval you can expect to see one or more processes started to collect the performance data and insert it into the TPC database. New is the ability to collect XIV performance data, but that change was not caused by the introduction of the EPM.

In terms of stability, there has been a change in TPC V4.2 that allows TPC to fail over to a redundant path (for example, a secondary HMC) if TPC was not able to collect data for some intervals. There is no parameter to control the retry, and no alerts will be sent, but this greatly enhances the overall stability of performance data collections. In TPC V4.2.1 the failover mechanism will also be added for CIM-based performance data collections.

Note: Because of this change it is now safer to let a performance data collection job run continuously. Even so, we still recommend stopping and restarting it, because currently you will not receive any alerts if something really failed within a continuously running job. Since the change in TPC 4.1.1 that allows you to specify that a job runs for 24 instead of 23 hours, there is little advantage in letting a job run continuously. By setting a job to 24 hours and restarting it daily, you do not lose any performance data, and you still have the chance to receive alerts about a failing job (at least once a day). In the figure below, the blue boxes show how TPC changed your entries when you tried to specify a run time of 24 hours with daily restart.
Events

For events there have been some changes with the introduction of the Native API and the EPM:

- There is no concept of CIM indications for the Native API.
- A DS8000 will asynchronously send events to TPC.
- For SVC and XIV, TPC polls every minute to get new events from the subsystems, so every minute you will see processes in your operating system being started and stopped.

The goal of TPC V4.2 is to retain the same level of device support as previous levels of TPC, so no new alerts have been added to TPC.

TPC device monitoring capabilities

In general, you have to understand that the whole approach of TPC is not to replace any element manager software or SNMP manager, but to abstract the devices to a higher level. The SMI specification serves this purpose not only for getting data from a device and managing a device; it also introduced the concept of CIM indications for health and fault management. CIM indications are a way to let a CIM client application like TPC know that something has happened on the device. The client can subscribe to receive indications for the events; for this it has to supply an address (an indication listener) to which the indications will be sent. (This is one reason why TPC should only be installed on servers with one NIC, or has to be configured accordingly as described in the TPC Install Guide, Chapter 3 → Configuring IP addressing → Configuring IBM Tivoli Storage Productivity Center with multiple IP addresses.)

In addition to CIM indications, a CIM client can also poll the device for its health and operational status. The underlying idea of those two properties is that even when, for example, a power supply of a device fails (the health status would show this), the device can still operate without any impact: all volumes are online and the performance is not impacted. Because SMI-S is meant to be a general and not a device-specific specification, the health and operational status is translated and categorized by a CIM agent from the device-internal event into a generic value. The beauty of this is that it enables a CIM client like TPC to display a status for a device without the need to load some kind of device description file, as is required for SNMP management applications, which must load a MIB file.

The bottom line of all this is that TPC helps you focus from a business or SLA point of view. On the other hand, TPC will not be the tool you choose when you really need to be informed about each individual component that might have failed in a device.

7.3 Solution Design for device access

This section is meant as a planning aid for solution design, so you should read it before you start implementing or configuring these functions. The outline follows this structure:

- When to use a function
- Considerations
- Requirements (Implementation)
7.3.1 Planning for NAPI and NAPI Discovery

The Native API (NAPI) is a new way for Tivoli Storage Productivity Center to communicate with devices. The Native API does not replace the CIM, SNMP, or in-band fabric interfaces. Although it is an addition to the ways TPC can get information, you cannot choose which interface you would like to use, because for the devices that support NAPI it is the only interface TPC V4.2 uses. NAPI support is currently available for:

- IBM DS8000
- IBM SAN Volume Controller
- IBM XIV

Note: The DS8000 support is limited to 2107 devices only, and does NOT include the family of like products such as the DS6000 or ESS.

The name Native API already explains the difference from the other protocols: the Native API uses the proprietary communication language of a device, and not a reduced set of standardized queries and commands.

When to use NAPI

For the devices listed above, Tivoli Storage Productivity Center V4.2 will only use the Native API. When you upgrade to Tivoli Storage Productivity Center V4.2, an update/migration is required to switch to the NAPI. This can be done before or during the installation, or even later on, but you will not be able to use such a device until you complete the migration. For that reason the Supported Storage Products Matrix (see <<< best practises>>>) does not list any provider versions or interop namespace for the devices listed above.

In addition to this new interface, the Device Server has been modified in some parts, so that together with the NAPI the scalability and the reliability have been enhanced. TPC is still not trying to replace the element management tools for those devices, but at the same time customers have asked for better integration of IBM devices. To give an example: for the DS8000 it was not possible to specify the LSS when provisioning volumes, and this is now possible with TPC 4.2. The SMI standard will never include this level of detail, simply because the intention of SMI-S is to abstract from the actual hardware devices.

Considerations for NAPI

At this point we document some general considerations, and later on drill down to specifics for the supported devices:

- To discover NAPI devices automatically you need to use the "Switch and Storage Subsystem (IP Scan)" job, which was previously called "Out of Band Discovery". Unlike the CIMOM discovery, the discovery of NAPI devices does not return any status information about a device, so there is no real need to run the discovery on a scheduled basis. On the other hand, the discovery of NAPI devices does not send any errors when a new device has been found but no credentials have been provided, so it is not as annoying to leave it switched on.
- If new credentials are not available, monitoring of NAPI devices will not work after upgrading to TPC 4.2.
- Probe performance is generally the same as, if not better than, before.
The discovery of new NAPI devices is part of the Switch and Subsystem (IP Scan) job. The job existed in earlier versions of TPC, but functions have been added so that it now identifies subsystems that are accessed via the NAPI method. As with the CIMOM discovery, we generally do not recommend using the discovery. By default TPC has no subnets configured to be scanned. If you do want to use it, be aware that you need to add the address range that you want to scan.

The scan of IP ranges for subsystems and switches can be separated in such a way that TPC either looks for switches, for storage subsystems, both, or none. This setting is applied to all IP address ranges that you specify. Figure 7-4 on page 224 shows the Administrative Services → Discovery → Switch and Subsystem (IP Scan) panel where you can configure these settings. For example, in our environment we specified the range 9.11.98.190 to 9.11.98.210.

Figure 7-4 Choose what kind of device TPC will search for

If you want to get notified, you need to change the alert options for the IP Scan job, for example by entering your e-mail address on the Alerts tab.

IBM DS8000

In this section we discuss how the DS8000 interacts with the NAPI.

Access method used: ESS Network Interface (ESSNI)

Failover:
For the communication with a DS8000, TPC uses the ESSNI client. This is basically the same library that is included in any DS8000 CLI. Since this component has built-in capabilities to fail over from one HMC to another HMC, it is a good idea to specify the secondary HMC IP address if your DS8000 has one. The failover might still cause some errors in a TPC job, but the next command that is sent to the device should use the redundant connection.

Network:

There are no special network considerations. TPC needs to be able to talk to the HMC, just as before when the embedded CIMOM was used.

TPC is currently not able to provide specific messages for the vast majority of ESSNI error codes. You can still look up the errors in the DS8000 Information Center; doing this often provides useful information – for example, that the user ID is wrong or that the password has expired – which will not be in any TPC logs:

http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp

Example:

2010-08-05 16:58:09.296 HWNEP0003E A DS8000 ESSNI command failed. The error code is CMUN02021E

This is the generic TPC error whose "user action" directs the TPC user to look up the ESSNI code in the DS8000 Information Center. Doing so reveals that this error code means "Unable to create logical volume: the volume number already exists."

IBM SAN Volume Controller

In this section we discuss how the SVC interacts with the Native API.

Access method used: Secure Shell (SSH)

Failover:

In SVC, one of the nodes in the cluster carries out the role of the config node. This node manages the access through CIM, SSH, and a lot of other tasks, and it has a specific IP address. The SVC cluster takes care that there is always one node running as the config node, so from a TPC perspective the failover happens on the device itself.

Network:

Now that TPC accesses an SVC cluster directly, you need to make sure that TPC can talk to the cluster. In earlier versions, when TPC used a CIMOM, TPC only needed to talk to the CIMOM; if the CIMOM was running on the master console (SVC version 4), it could use a different physical network to communicate with the SVC cluster.

The number of open SSH sessions that an SVC can have at one time is limited to 10. This limit of 10 sessions includes only external CLI access; the GUI and the embedded CIMOM in SVC version 5 and later do not count against this limit. Sometimes you also read about 15 sessions: this is the number of new connections that can be opened per second. This number is bigger than the number of concurrent sessions because non-interactive (that is, script-driven) sessions may last less than a second per connection.

TPC handling of SSH keys for SVC:

- A default SSH key (tpc_svc.pem) is shipped with TPC. While it is convenient to use just this key, it would compromise security, so we do not recommend using it in an environment other than one used for testing or demonstrations.
- TPC will accept SSH keys in OpenSSH format or in PuTTY (.ppk) format. PuTTY keys will be automatically converted into OpenSSH format.
- You can only use passphrases with OpenSSH keys.
- If the key is in PuTTY format and you do require a passphrase, you need to convert the key into the OpenSSH format manually (see the example commands after Table 7-2).
- The way SVC works with SSH keys is:
  – The public key is stored on the SVC cluster.
  – The user or client application uses the private key.

If no keys that you want to use have been uploaded, you have three options:

- use the default key that comes with TPC (not recommended)
- use a PuTTY key that you have generated and saved without a passphrase
- use an OpenSSH key that you have generated, with or without a passphrase

Background information and general considerations for different SVC versions: in Table 7-2 we explain some general differences between SVC versions. Later on, in <<< reference to implementation>>, we explain additional things to watch out for when you are adding an SVC to TPC.

Table 7-2 Special considerations for different SVC versions

Concept of SSH key:
– SVC 4: SSH keys are associated with an authority level/role; there are no individual users within SVC to associate an SSH key with. Note: You can still upload multiple keys and let each user use a different key. This enables you to revoke access for a particular user without any implications for other users.
– SVC 5+: SSH keys are associated with a user ID. A user ID is always associated with a group, and therefore with an authority level. Because each key is associated with a user ID, you cannot use one key pair for more than one user ID.

Authority level:
– SVC 4: The SSH key needs to have the Administrator access level.
– SVC 5+: SVC version 5 introduced a real user and group concept. The user needs to be part of the SVC Administrator group. If the user has only monitoring access rights, you can still run probes, but you cannot run performance monitor jobs or perform any kind of volume provisioning.

User ID / password:
– SVC 4: n/a
– SVC 5+: A user ID can optionally be assigned a password. This password is only used when a user wants to log in through the SVC GUI or through the CIM interface; it is not used for SSH access.

SSH key upload with TPC:
– SVC 4: performed via an internal API; an administrator user ID is required.
– SVC 5+: performed via CIM; an administrator user ID is required.
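If you need to generate an OpenSSH key pair for TPC, or convert an existing PuTTY key manually, commands similar to the following can be used. This is only a sketch: the file names and the key comment are examples, and the puttygen conversion assumes that the PuTTY command-line tools are installed:

   # Generate an OpenSSH RSA key pair (tpc_svc_key and tpc_svc_key.pub);
   # you are prompted for an optional passphrase
   ssh-keygen -t rsa -b 2048 -C "tpcserver1" -f tpc_svc_key
   # Convert a PuTTY private key (.ppk) into OpenSSH format
   puttygen mykey.ppk -O private-openssh -o mykey.pem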
As described in Table 7-2, the association of SSH keys is different for SVC version 4 and SVC version 5+. Figure 7-5 on page 227 shows the logical concept of the difference.

Figure 7-5 SSH key associations

Note: Although SVC version 5+ now uses user IDs, you still need to start an SSH session to the SVC with the user string admin, but provide the key file of the user ID that you want to use for login; SVC will look through the list of key files to see if a matching public key can be found. Using the svcinfo lsuser command you can see which user ID is associated with the SSH session that you have open. Unfortunately, we could not find a command that would list all the stored keys and the corresponding user IDs.

Recommendations:

- SSH key names: give the SSH key files meaningful names, because it is otherwise hard to later find out which user is using a certain key pair. For example, assign the user name as the file name of the key.
- We recommend that each TPC server has its own pair of SSH keys when working with an SVC. These keys can be used for accessing multiple SVCs, but the association should always be as shown in Figure 7-6.

Figure 7-6 Multiple TPC servers accessing the same SVCs

IBM XIV

In this section we discuss how the XIV interacts with the Native API.

Access method used: the Native API uses the XML-formatted version of the XIV command-line interface, called XCLI.

Failover:

In TPC 4.2.1, failover support has been added for XIV devices. TPC does not need to be provided with all the possible interface modules; instead, TPC queries the XIV during setup for the IP addresses of the other interface modules.
Network:

You should use the address of only one of the interface modules to add an XIV to TPC, rather than adding more than one IP address by starting the Configure Devices wizard again.

Requirements for NAPI

Firmware versions:
- DS8K: firmware version 2.4.2 and above
- SVC: V4.2 and above (for TPC-R, 4.3 and above)
- XIV: V10.1 and above

IP ports used for the native communication:
- DS8K: 1750
- SVC 4: 443
- SVC 5: 5989 (to upload SSH keys) and 443 for normal operations
- XIV: 7778

Requirements for NAPI Discovery

In order to actually configure the discovery, you need to add the range of IP addresses that TPC should scan, just as in previous versions of TPC for Out of Band Fabric discoveries. You can configure this on the lower part of the Administrative Services → Discovery → Switch and Subsystem (IP Scan) panel, shown in Figure 7-4 on page 224, where you can see where to add the IP address ranges. If you decide not to use the autodiscovery function, you do not have to do anything. Since there is no way to change the job name, we found it useful to change the job description, so that we were able to sort the list of jobs in the new Job Management panel.

Migration

Within TPC's philosophy, the term migration is used when the architecture changes, for example going from CIMOM to NAPI. In contrast, TPC refers to upgrades when just the version of a component changes but the architecture stays the same, for example going from SRA version 4.1 to SRA version 4.2.

You have three options for when to migrate the user credentials and access information (an example invocation of the migration tool follows this list):

- You can provide the authentication information while running an earlier TPC version, before upgrading to TPC V4.2, by running the stand-alone tool migrateuserinfo.bat/.sh. The information will be stored in the database for later use.
- During the upgrade, the installer checks whether you have provided user authentication information for NAPI devices. If not, the installer provides an option to launch the stand-alone tool.
- After upgrading to TPC V4.2, you can use the Administrative Services → Data Sources → Storage Subsystems panel to provide the new authentication information. The Configure Devices wizard will usually not work, because typically the NAPI devices are already part of a probe job.
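For the first option, the stand-alone tool is run on the TPC server before the upgrade. The following sketch assumes a Linux server and a hypothetical tool location; the actual location in your installation may differ, so check Chapter 5 for details:

   # Run the credential migration tool on the TPC server (location assumed)
   cd /opt/IBM/TPC
   ./migrateuserinfo.sh       # on Windows, use migrateuserinfo.bat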
Considerations:

- If you migrate a NAPI device either prior to or as part of the upgrade to TPC V4.2, any embedded DS8K CIMOMs, SVC CIMOMs, and XIV CIMOMs will be automatically deleted from TPC.
  – Proxy DS CIMOMs will NOT be automatically deleted, even if TPC knows of no other devices configured on that CIMOM.
  – If the NAPI device is down at the time of the TPC Data server startup, its CIMOM will not be deleted.
- If you are upgrading from TPC V4.1.1 to TPC V4.2 and you want to migrate an existing TPC 4.1.1 XIV CIMOM, note that:
  – Previous historical data will be retained (true for all NAPI devices), but capability data will not be updated.
  – After the upgrade, a reprobe of the subsystem is necessary to enable new 4.2 capabilities (for example, creating or deleting XIV volumes).

7.3.2 Planning for CIMOM Discovery

When to use CIMOM Discovery

In most environments we found that using the CIMOM discovery does not have a tremendous advantage, simply because most CIMOMs have security turned on, which means TPC will not be able to get a list of devices from the CIMOM. As a result, the CIMOM discovery will fail, leaving you a failed job log entry and potentially sending you an e-mail about an obvious error. Because most of the time you will know when you get a new device in your environment, and especially since TPC will not actually configure the device for you, we find little use for the automatic discovery of new CIM agents.

Considerations for CIMOM Discovery

The CIMOM discovery of new CIMOMs has been changed in some ways to accommodate the implementation of NAPI. A CIMOM discovery can get limited information from a CIMOM even without authenticating, but in most cases this is not enough. Here is a short list of general CIMOM discovery considerations:

- CIMOM discovery is a process that serves three purposes:
  – find new CIM agents
  – contact a known CIM agent to find new devices
  – get basic status information from devices managed by TPC through CIM
- Finding new CIM agents usually results in a failed CIMOM discovery, simply because no credentials are available to log into the CIM agent.
- There is a setting that allows TPC to look for new devices in the subnet that TPC is installed in, as shown in Figure 7-7.
Figure 7-7 Disable scan of local subnet

- TPC will discover CIMOMs that are not within the local subnet of the server by using SLP, which needs to be configured on the TPC side (provide the SLP DA IP address) and on the SLP DA side (configure the list of devices available in that subnet).
- Because the CIMOM discovery will often fail for obvious reasons (CIMOMs that do not have credentials defined at the TPC server), we recommend not using the capability to look for new CIMOMs. You can do this simply by not specifying SLP DAs and by not letting TPC look for new CIMOMs in its local subnet, as shown in Figure 7-7 on page 230. This way TPC will still look for new devices at already configured CIMOMs, and in addition you will still get status information.

When you read the following list, keep in mind that you could have a proxy CIMOM that has multiple devices of different types attached, but TPC no longer supports using CIM agents for all of those devices, because some are now accessed via the Native API:

- The discovery will filter out CIMOMs of the devices that are only supported through the new interface, so the embedded CIMOMs for DS8000 and XIV will be ignored, as well as any SVC CIM agent.
- If the discovery finds a DS Open API CIMOM, it will be added to the Administrative Services → Data Sources → CIMOM Agents list. The reason for this is that at this stage of a discovery TPC does not yet know which devices are attached to the newly discovered CIMOM. For this to happen, you need to add the credentials for the CIMOM and run the discovery again. Once TPC can get a list of devices from the CIMOM, DS8000 devices will be filtered out, and the remaining (DS6000 and ESS) devices will be added as managed devices. If at this point there are only DS8000 devices attached to the CIMOM, no managed devices will be added; based on user feedback, the decision was taken that the CIMOM will also not be removed, since it can be used for other subsystems.

Requirements for CIMOM Discovery

To use CIMOM discovery there are no real requirements. The job is defined and activated by default, so there is little or nothing