IBM Tivoli Storage Area Network Manager: A Practical Introduction (SG24-6848)

Transcript

  • 1. Front cover
    IBM Tivoli Storage Area Network Manager: A Practical Introduction
    Discover, display and monitor your SAN topology, including zones
    Historical and real-time monitoring
    ED/FI for SAN Error prediction
    Charlotte Brooks, Michel Baus, Michael Benanti, Ivo Gomilsek, Urs Moser
    ibm.com/redbooks
  • 2. International Technical Support Organization
    IBM Tivoli Storage Area Network Manager: A Practical Introduction
    September 2003, SG24-6848-01
  • 3. Note: Before using this information and the product it supports, read the information in "Notices" on page xxi.
    Second Edition (September 2003)
    This edition applies to IBM Tivoli Storage Area Network Manager (product number 5698-SRS) and IBM Tivoli Bonus Pack for SAN Management (product number 5698-SRE).
    © Copyright International Business Machines Corporation 2002, 2003. All rights reserved.
    Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 4. Contents Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...... . . . . . . xxiii The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...... . . . . . . xxiii Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...... . . . . . . . xxv Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...... . . . . . . . xxv Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii September 2003, Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxviiPart 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Chapter 1. Introduction to Storage Area Network management. . . . . . . . . . . . . . . . . . . 3 1.1 Why do we need SAN management? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1.1 Storage management issues today . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1.2 Current generation of SAN management: spreadsheets and paper . . . . . . . . . . . . 7 1.2 New tools for SAN management are needed . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . 10 1.2.1 Storage management components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.2.2 Standards and SAN management tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.2.3 Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 1.2.4 Outband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.2.5 Inband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.2.6 Why you might use both inband and outband discovery. . . . . . . . . . . . . . . . . . . . 17 1.2.7 Formal standards for outband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 1.2.8 Formal standards for inband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 1.2.9 The future of SAN management standards. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.2.10 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Chapter 2. Introduction to IBM Tivoli Storage Area Network Manager . . . . . . . . . . . . 27 2.1 Highlights: What’s new in Version 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.1.1 Discovery of iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.1.2 Event Detection and Fault Isolation (ED/FI - SAN Error Predictor). . . . . . . . . . . . 28 2.1.3 IBM Tivoli Enterprise Data Warehouse (TEDW) . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.1.4 IBM Tivoli SAN Manager on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.1.5 Embedded WebSphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.1.6 Operating system support . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 29 2.1.7 Other changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.2 IBM Tivoli SAN Manager overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.2.1 Business purpose of IBM Tivoli SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.2.2 Components of IBM Tivoli SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.2.3 Supported devices for Tivoli SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.3 Major functions of IBM Tivoli SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.3.1 Discover SAN components and devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 2.3.2 Deciding how many Agents will be needed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 2.3.3 How is SAN topology information displayed? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35© Copyright IBM Corp. 2002, 2003. All rights reserved. iii
  • 5. 2.3.4 How is iSCSI topology information displayed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2.4 SAN management functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2.4.1 Discover and display SAN components and devices . . . . . . . . . . . . . . . . . . . . . . 37 2.4.2 Log events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 2.4.3 Highlight faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 2.4.4 Provide various reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2.4.5 Launch vendor management applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 2.4.6 Displays ED/FI events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.4.7 Tivoli Enterprise Data Warehouse (TEDW) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51Part 2. Design considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Chapter 3. Deployment architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.2 Fibre Channel standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.2.1 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.2.2 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.3 Hardware overview . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 3.3.1 Host Bus Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.3.2 Cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.4 Topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 3.4.1 Point-to-point. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 3.4.2 Arbitrated loop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 3.4.3 Switched fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 3.5 IBM Tivoli SAN Manager components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.5.1 DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.5.2 IBM Tivoli SAN Manager Console (NetView) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.5.3 Tivoli SAN Manager Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.5.4 Tivoli SAN Manager Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.5.5 SAN physical view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 3.6 Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 3.6.1 Inband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 3.6.2 Outband management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 3.7 Deployment considerations . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 3.7.1 Tivoli SAN Manager Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 3.7.2 iSCSI management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.7.3 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 3.7.4 Tivoli SAN Manager Agent (Managed Host) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 3.8 Deployment scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.8.1 Example 1: Outband only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.8.2 Example 2: Inband only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 3.8.3 Example 3: Inband and outband . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 3.8.4 Additional considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 3.9 High Availability for Tivoli SAN Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 3.9.1 Standalone server failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 3.9.2 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92Part 3. Installation and basic operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Chapter 4. Installation and setup. . . . . . . . . . . . . . . . . . . . . . . ....... ...... ....... 95 4.1 Supported operating system platforms . . . . . . . . . . . . . . . . . ....... ...... ....... 96 4.2 IBM Tivoli SAN Manager Windows Server installation . . . . . ....... ...... ....... 96 4.2.1 Lab environment . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . ....... ...... ....... 96iv IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 6. 4.2.2 Preinstallation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 4.2.3 DB2 installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 4.2.4 Upgrading DB2 with Fix Pack 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 4.2.5 Install the SNMP service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 4.2.6 Checking for the SNMP community name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 4.2.7 IBM Tivoli SAN Manager Server install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 4.2.8 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104.3 IBM Tivoli SAN Manager Server AIX installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.3.1 Lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.3.2 Installation summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.3.3 Starting and stopping the AIX manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 4.3.4 Checking the log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1124.4 IBM Tivoli SAN Manager Agent installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.4.1 Lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.4.2 Preinstallation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.4.3 IBM Tivoli SAN Manager Agent install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 4.4.4 Configure the Agent service to start automatically . . . . . . . . . . . . . . . . . . . . . . 
. 1174.5 IBM Tivoli SAN Manager Remote Console installation . . . . . . . . . . . . . . . . . . . . . . . . 119 4.5.1 Lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 4.5.2 Preinstallation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 4.5.3 Installing the Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 4.5.4 Check if the service started automatically. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1254.6 IBM Tivoli SAN Manager configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 4.6.1 Configuring SNMP trap forwarding on devices . . . . . . . . . . . . . . . . . . . . . . . . . . 126 4.6.2 Configuring the outband agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 4.6.3 Checking inband agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 4.6.4 Performing initial poll and setting up the poll interval . . . . . . . . . . . . . . . . . . . . . 1324.7 Tivoli SAN Manager upgrade to Version 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 4.7.1 Upgrading the Windows manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 4.7.2 Upgrading the remote console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 4.7.3 Upgrading the agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354.8 Tivoli SAN Manager uninstall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.8.1 Tivoli SAN Manager Server Windows uninstall. . . . . . . . . . . . . . . . . . . . . . . . . . 135 4.8.2 Tivoli SAN Manager Server AIX uninstall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 4.8.3 Tivoli SAN Manager Agent uninstall . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 4.8.4 Tivoli SAN Manager Remote Console uninstall . . . . . . . . . . . . . . . . . . . . . . . . . 137 4.8.5 Uninstalling the Tivoli GUID package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384.9 Silent install of IBM Tivoli Storage Area Network Manager. . . . . . . . . . . . . . . . . . . . . 139 4.9.1 Silent installation high level steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 4.9.2 Installing the manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 4.9.3 Installing the agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 4.9.4 How to install the remote console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 4.9.5 Silently uninstalling IBM Tivoli Storage Area Network Manager . . . . . . . . . . . . . 1454.10 Changing passwords. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146Chapter 5. Topology management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1495.1 NetView navigation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 5.1.1 NetView interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 5.1.2 Maps and submaps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 5.1.3 NetView window structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 5.1.4 NetView Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 5.1.5 NetView Navigation Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 5.1.6 Object selection and NetView properties . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . 153 Contents v
  • 7. 5.1.7 Object symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 5.1.8 Object status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 5.1.9 Status propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 5.1.10 NetView and IBM Tivoli SAN Manager integration . . . . . . . . . . . . . . . . . . . . . . 157 5.2 Lab 1 environment description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 5.3 Topology views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 5.3.1 SAN view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 5.3.2 Device Centric View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 5.3.3 Host Centric View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 5.3.4 iSCSI discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 5.3.5 MDS 9000 discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 5.4 SAN menu options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 5.4.1 SAN Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 5.5 Application launch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 5.5.1 Native support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 5.5.2 NetView support for Web interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
175 5.5.3 Non-Web applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 5.5.4 Launching IBM Tivoli Storage Resource Manager . . . . . . . . . . . . . . . . . . . . . . . 179 5.5.5 Other menu options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 5.6 Status cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 5.7 Practical cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 5.7.1 Cisco MDS 9000 discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182 5.7.2 Removing a connection on a device running an inband agent . . . . . . . . . . . . . . 184 5.7.3 Removing a connection on a device not running an agent . . . . . . . . . . . . . . . . . 187 5.7.4 Powering off a switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 5.7.5 Running discovery on a RNID-compatible device. . . . . . . . . . . . . . . . . . . . . . . . 193 5.7.6 Outband agents only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 5.7.7 Inband agents only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 5.7.8 Disk devices discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200 5.7.9 Well placed agent strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202 5.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204Part 4. Advanced operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Chapter 6. NetView Data Collection, reporting, and SmartSets . . . . . . . . . . . 
. . . . . . 207 6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 6.1.1 SNMP and MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 6.2 NetView setup and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210 6.2.1 Advanced Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 6.2.2 Copy Brocade MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 6.2.3 Loading MIBs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 6.3 Historical reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215 6.3.1 Creating a Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216 6.3.2 Database maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 6.3.3 Troubleshooting the Data Collection daemon . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 6.3.4 NetView Graph Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 6.4 Real-time reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 6.4.1 MIB Tool Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 6.4.2 Displaying real-time data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 6.4.3 SmartSets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 6.4.4 SmartSets and Data Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 6.4.5 Seed file . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 Chapter 7. Tivoli SAN Manager and iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253vi IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 8. 7.1 What is iSCSI? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 7.2 How does iSCSI work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 7.3 IBM Tivoli SAN Manager and iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 7.3.1 Functional description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 7.3.2 iSCSI discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 7.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257 Chapter 8. SNMP Event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8.2 Introduction to Tivoli NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8.2.1 Setting up the MIB file in Tivoli NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8.3 Introduction to IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 8.3.1 Event forwarding from IBM Tivoli SAN Manager to IBM Director . . . . . . . . . . . . 263 Chapter 9. ED/FI - SAN Error Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 9.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 9.2 Error processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 9.3 Configuration for ED/FI - SAN Error Predictor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 9.4 Using ED/FI . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 9.4.1 Searching for the faulted device on the topology map . . . . . . . . . . . . . . . . . . . . 276 9.4.2 Removing notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278Part 5. Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 Chapter 10. Protecting the IBM Tivoli SAN Manager environment . . . . . . . . . . . . . . . 283 10.1 IBM Tivoli SAN Manager environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 10.1.1 IBM Tivoli NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 10.1.2 Embedded IBM WebSphere Application Server . . . . . . . . . . . . . . . . . . . . . . . . 284 10.1.3 IBM Tivoli SAN Manager Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 10.1.4 IBM Tivoli SAN Manager Agents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 10.2 IBM Tivoli Storage Manager integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 10.2.1 IBM Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 10.2.2 Setup for backing up IBM Tivoli SAN Manager Server . . . . . . . . . . . . . . . . . . . 286 10.2.3 Tivoli Storage Manager server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 286 10.2.4 Tivoli Storage Manager client configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 10.2.5 Additional considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 10.3 Backup procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 10.3.1 Agent files . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 10.3.2 Server files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 10.3.3 ITSANMDB Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296 10.4 Restore procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 10.4.1 Restore Agent files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 10.4.2 IBM Tivoli SAN Manager Server files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 10.4.3 ITSANMDB database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 10.5 Disaster recovery procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309 10.5.1 Windows 2000 restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 10.5.2 ITSANMDB database restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 10.6 Database maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 Chapter 11. Logging and tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 11.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 11.2 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 11.2.1 Server logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 11.2.2 Manager service commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 Contents vii
11.2.3 Service Manager . . . . . 321
11.2.4 Agent logs . . . . . 322
11.2.5 Remote Console logging . . . . . 323
11.2.6 Additional logging for NetView . . . . . 323
11.2.7 ED/FI - SAN Error Predictor . . . . . 325
11.3 Tracing . . . . . 326
11.4 SAN Manager Service Tool . . . . . 328
11.4.1 Exporting (snapshot) . . . . . 328
11.4.2 Importing (restore) . . . . . 328
Part 6. Tivoli Systems Management Integration . . . . . 331
Chapter 12. Tivoli SAN Manager and TEC . . . . . 333
12.1 Introduction to Tivoli Enterprise Console . . . . . 334
12.2 Lab environment . . . . . 335
12.3 Configuring the Rule Base . . . . . 336
12.4 Configuring TEC Event Console . . . . . 340
12.5 Event format . . . . . 347
12.6 Configuring Tivoli SAN Manager event forwarding . . . . . 348
12.6.1 Set the event destination . . . . . 348
12.6.2 Configure NetView-TEC adapter . . . . . 349
12.7 Example . . . . . 352
12.8 Sample TEC rule . . . . . 354
Chapter 13. IBM Tivoli SAN Manager and Configuration Manager . . . . . 357
13.1 Introduction to IBM Tivoli Configuration Manager . . . . . 358
13.2 Inventory to determine who has which version . . . . . 358
13.2.1 Create an inventory profile in Tivoli Framework . . . . . 359
13.3 Software distribution . . . . . 370
13.3.1 Build software package with Software Package Editor . . . . . 370
13.3.2 Create software distribution profile in Tivoli Framework . . . . . 379
Chapter 14. Integration with Tivoli Enterprise Data Warehouse . . . . . 387
14.1 Introduction to IBM Tivoli Enterprise Data Warehouse . . . . . 388
14.2 IBM Tivoli SAN Manager Data Warehouse Pack . . . . . 389
Chapter 15. Tivoli SAN Manager and Tivoli Monitoring . . . . . 391
15.1 Introduction to IBM Tivoli Monitoring . . . . . 392
15.2 IBM Tivoli Monitoring for IBM Tivoli SAN Manager . . . . . 392
15.3 Daemons to monitor and restart actions . . . . . 393
Appendix A. Advanced Topology and Sensor Event Scanners . . . . . 401
Advanced Topology Scanner . . . . . 402
Sensor Event Scanner . . . . . 404
Appendix B. IBM Tivoli SAN Manager backup scripts . . . . . 407
Tivoli Storage Manager configuration . . . . . 408
DB2 configuration . . . . . 408
Stopping the applications . . . . . 408
Stopping WebSphere Tivoli SAN Manager application . . . . . 409
Stopping Tivoli SAN Manager environment . . . . . 409
Starting the applications . . . . . 409
These scripts start up the Tivoli SAN Manager environment in an orderly way . . . . . 409
Starting WebSphere Tivoli SAN Manager application . . . . . 409
Start of IBM Tivoli SAN Manager environment . . . . . 409
DB2 ITSANMDB backups . . . . . 410
Offline backup script . . . . . 410
Online backup script . . . . . 411
Appendix C. Additional material . . . . . 413
Locating the Web material . . . . . 413
Using the Web material . . . . . 413
System requirements for downloading the Web material . . . . . 413
How to use the Web material . . . . . 414
Abbreviations and acronyms . . . . . 415
Related publications . . . . . 417
IBM Redbooks . . . . . 417
Other resources . . . . . 417
Referenced Web sites . . . . . 417
How to get IBM Redbooks . . . . . 418
Help from IBM . . . . . 418
Index . . . . . 419
Figures
The team - Urs, Mike, Michel, Ivo, Charlotte . . . . . xxiv
1-1 Storage management issues today . . . . . 4
1-2 Infrastructure growth issues . . . . . 5
1-3 Manual storage management issues . . . . . 6
1-4 Current methods of compiling information about storage networks . . . . . 7
1-5 Large SAN environment to be managed . . . . . 9
1-6 Storage management architecture for a suite of solutions . . . . . 11
1-7 Storage networking standards organizations and their standards . . . . . 13
1-8 Standards for Interoperability . . . . . 14
1-9 SAN Manager — Outband management path over the IP network . . . . . 16
1-10 SAN Manager — Inband management path . . . . . 17
1-11 Inband management services . . . . . 19
1-12 The future of standards in SAN management . . . . . 20
1-13 SMIS Architecture . . . . . 21
1-14 SMIS Architecture in relation to SNIA storage model . . . . . 22
1-15 CIM/WBEM management model . . . . . 23
1-16 CIM Agent & CIM Object Manager . . . . . 24
1-17 SAN management summary . . . . . 25
2-1 IBM Tivoli SAN Manager V1.2 — New functions and features . . . . . 28
2-2 IBM Tivoli SAN Manager operating environment . . . . . 30
2-3 IBM Tivoli SAN Manager functions . . . . . 31
2-4 Functions of IBM Tivoli SAN Manager and Agents . . . . . 32
2-5 IBM Tivoli SAN Manager — inband and outband discovery paths . . . . . 33
2-6 Levels of monitoring . . . . . 34
2-7 Tivoli SAN Manager — Root menu . . . . . 35
2-8 Tivoli SAN Manager — explorer display . . . . . 36
2-9 iSCSI SmartSet . . . . . 36
2-10 Tivoli SAN Manager — SAN submap . . . . . 37
2-11 NetView physical topology display . . . . . 38
2-12 Map showing host connection lost . . . . . 39
2-13 Zone view submap . . . . . 40
2-14 Zone members . . . . . 41
2-15 Device Centric View . . . . . 42
2-16 Device Centric View — explorer . . . . . 43
2-17 Host Centric View . . . . . 43
2-18 Host Centric View — logical volumes and LUN . . . . . 44
2-19 Navigation tree for Tivoli SAN Manager . . . . . 45
2-20 Switch events . . . . . 46
2-21 Map Showing Effects of Switch Losing Power . . . . . 47
2-22 Graph of # Frames Transmitted over 8 ports in a 2 minute interval . . . . . 48
2-23 Number of Frames Transmitted Over Time . . . . . 49
2-24 Vendor application launch . . . . . 50
2-25 Adornment shown on fibre channel switch . . . . . 51
3-1 Deployment overview . . . . . 56
3-2 Hardware overview . . . . . 57
3-3 Typical HBAs . . . . . 58
3-4 Structure of a fiber optic cable . . . . . 59
3-5 Single mode and multi mode cables . . . . . 60
3-6 SC fibre optic cable . . . . . 61
3-7 LC connector . . . . . 62
3-8 GBIC . . . . . 62
3-9 Fibre Channel topologies . . . . . 63
3-10 Fibre Channel point-to-point . . . . . 63
3-11 Fibre Channel Arbitrated Loop (FC-AL) . . . . . 64
3-12 Fibre Channel switched fabric . . . . . 65
3-13 Component placement . . . . . 67
3-14 Inband scanning . . . . . 69
3-15 Outband scanning . . . . . 70
3-16 Components of a manager install . . . . . 71
3-17 Levels of Fabric Management . . . . . 72
3-18 RNID discovered host . . . . . 74
3-19 Sample outband requirements . . . . . 76
3-20 Display and configure outband agents . . . . . 77
3-21 Outband management only . . . . . 80
3-22 Sample inband requirements . . . . . 81
3-23 Configure Agents — Inband only . . . . . 83
3-24 Inband management only . . . . . 84
3-25 Sample inband/outband requirements . . . . . 84
3-26 Inband & outband in Configure Agents . . . . . 86
3-27 Inband and outband management . . . . . 87
3-28 HOSTS file placement . . . . . 88
3-29 Standby server . . . . . 90
3-30 Failover process . . . . . 91
4-1 IBM Tivoli SAN Manager — supported operating system platforms . . . . . 96
4-2 Installation of IBM Tivoli SAN Manager . . . . . 96
4-3 Verifying system host name . . . . . 97
4-4 Computer name change . . . . . 97
4-5 DB2 services . . . . . 99
4-6 Windows Components Wizard . . . . . 100
4-7 SNMP install . . . . . 101
4-8 SNMP Service Properties panel . . . . . 102
4-9 Selecting the product to install . . . . . 103
4-10 Welcome window . . . . . 103
4-11 Installation path . . . . . 104
4-12 Port range . . . . . 104
4-13 DB2 admin user . . . . . 105
4-14 SAN Manager database . . . . . 105
4-15 WebSphere Administrator password . . . . . 106
4-16 Host authentication password . . . . . 107
4-17 NetView install drive . . . . . 107
4-18 NetView password . . . . . 108
4-19 Installation path and size . . . . . 108
4-20 Installation progress . . . . . 109
4-21 Finished installation . . . . . 109
4-22 Tivoli SAN Manager Windows Service . . . . . 110
4-23 Agent installation . . . . . 111
4-24 Agent installation . . . . . 112
4-25 Welcome window . . . . . 113
4-26 Installation directory . . . . . 114
4-27 Server name and port . . . . . 114
4-28 Agent port . . . . . 115
4-29 Agent access password . . . . . 116
4-30 Installation size . . . . . 116
4-31 Installation finished . . . . . 117
4-32 Agent Windows service . . . . . 118
4-33 Console installation . . . . . 119
4-34 Start the installation . . . . . 120
4-35 Welcome window . . . . . 121
4-36 Installation directory . . . . . 121
4-37 Server information . . . . . 122
4-38 Console ports . . . . . 122
4-39 Console access password . . . . . 123
4-40 Tivoli NetView installation drive . . . . . 123
4-41 Tivoli NetView service password . . . . . 124
4-42 Installation summary . . . . . 124
4-43 Installation finished . . . . . 125
4-44 Console service . . . . . 125
4-45 Configuration steps . . . . . 126
4-46 SNMP traps to local NetView console . . . . . 127
4-47 SNMP trap reception . . . . . 127
4-48 Trapfwd daemon . . . . . 129
4-49 SNMP traps for two destinations . . . . . 130
4-50 Agent configuration . . . . . 131
4-51 Outband Agent definition . . . . . 131
4-52 Login ID definition . . . . . 132
4-53 Not responding inband agent . . . . . 132
4-54 SAN configuration . . . . . 133
4-55 Uninstalling the SAN Manager Server . . . . . 135
4-56 Agent uninstall . . . . . 137
4-57 Uninstalling remote console . . . . . 138
4-58 Uninstalling Tivoli GUID . . . . . 139
5-1 NetView window . . . . . 150
5-2 NetView Explorer option . . . . . 151
5-3 NetView explorer window . . . . . 152
5-4 NetView explorer window with Tivoli Storage Area Network Manager view . . . . . 152
5-5 NetView toolbar . . . . . 153
5-6 NetView tree map . . . . . 153
5-7 NetView objects properties menu . . . . . 154
5-8 NetView objects properties . . . . . 154
5-9 IBM Tivoli SAN Manager icons . . . . . 155
5-10 SAN Properties menu . . . . . 158
5-11 ITSO lab1 setup . . . . . 159
5-12 ITSO lab1 topology with zones . . . . . 160
5-13 IBM Tivoli NetView root map . . . . . 161
5-14 Storage Area Network submap . . . . . 161
5-15 Topology views . . . . . 162
5-16 Storage Area Network view . . . . . 162
5-17 Topology view . . . . . 163
5-18 Switch submap . . . . . 163
5-19 Interconnect submap . . . . . 164
5-20 Physical connections view . . . . . 164
5-21 NetView properties panel . . . . . 165
5-22 Zone view submap . . . . . 165
5-23 FASTT zone . . . . . 166
5-24 Device Centric View . . . . . 167
5-25 Host Centric View for Lab 1 . . . . . 168
5-26 iSCSI discovery . . . . . 169
5-27 iSCSI SmartSet . . . . . 169
5-28 SAN Properties menu . . . . . 170
5-29 IBM Tivoli SAN Manager Properties — Filesystem . . . . . 171
5-30 IBM Tivoli SAN Manager Properties — Host . . . . . 172
5-31 IBM Tivoli SAN Manager Properties — Switch . . . . . 172
5-32 Changing icon and name of a device . . . . . 173
5-33 Connection information . . . . . 173
5-34 Sensors/Events information . . . . . 174
5-35 Brocade switch management application . . . . . 175
5-36 NetView objects properties — Other tab . . . . . 176
5-37 Launch of the management page . . . . . 176
5-38 PATH environment variable . . . . . 177
5-39 NetView Tools menu . . . . . 178
5-40 SAN Data Gateway specialist . . . . . 178
5-41 Launch Tivoli Storage Resource Manager . . . . . 179
5-42 IBM Tivoli SAN Manager — normal status cycle . . . . . 180
5-43 Status cycle using Unmanage function . . . . . 181
5-44 Status cycle using Acknowledge function . . . . . 181
5-45 Lab environment 3 . . . . . 182
5-46 Discovery of MDS 9509 . . . . . 183
5-47 MDS 9509 properties . . . . . 184
5-48 MDS 9509 connections . . . . . 184
5-49 Trap received by NetView . . . . . 185
5-50 Connection lost . . . . . 185
5-51 Connection restored . . . . . 186
5-52 Marginal connection . . . . . 186
5-53 Dual physical connections with different status . . . . . 187
5-54 Agent configuration . . . . . 188
5-55 Unsafe removal of Device . . . . . 188
5-56 Connection lost on an unmanaged host . . . . . 189
5-57 Unmanaged host . . . . . 189
5-58 Clear History . . . . . 190
5-59 NetView unmanaged host not discovered . . . . . 190
5-60 SAN lab - environment 2 . . . . . 191
5-61 Switch down Lab 2 . . . . . 192
5-62 Switch up Lab 2 . . . . . 193
5-63 RNID discovered host . . . . . 194
5-64 RNID discovered host properties . . . . . 194
5-65 RNID host with changed label . . . . . 195
5-66 Only outband agents . . . . . 196
5-67 Explorer view with only outband agents . . . . . 197
5-68 Switch information retrieved using outband agents . . . . . 197
5-69 Inband agents only without SAN connections . . . . . 198
5-70 Inband agents only with SAN connections . . . . . 199
5-71 Switches sensor information . . . . . 199
5-72 Discovered SAN with no LUNS defined on the storage server . . . . . 200
5-73 MSS zoning display . . . . . 201
5-74 MSS zone with CRETE and recognized storage server . . . . . 202
5-75 “Well-placed” agent configuration . . . . . 203
5-76 Discovery process with one well-placed agent . . . . . 204
6-1 Overview . . . . . 208
6-2 SNMP architecture overview . . . . . 209
6-3 MIB tree structure . . . . . 210
6-4 Enabling the advanced menu . . . . . 211
6-5 MIB loader interface . . . . . 213
6-6 Select and load TRP.MIB . . . . . 214
6-7 Loading MIB . . . . . 214
6-8 NetView MIB Browser . . . . . 215
6-9 FE-MIB — Error Group . . . . . 216
6-10 SW MIB — Port Table Group . . . . . 216
6-11 Private MIB tree for bcsi . . . . . 217
6-12 MIB Data Collector GUI . . . . . 217
6-13 Starting the SNMP collect daemon . . . . . 218
6-14 internet branch of MIB tree . . . . . 218
6-15 Private arm of MIB tree . . . . . 219
6-16 Enterprise branch of MIB tree . . . . . 219
6-17 bcsi branch of MIB tree . . . . . 220
6-18 swFCPortTxFrames MIB object identifier . . . . . 221
6-19 Adding the nodes . . . . . 221
6-20 Add Nodes to the Collection Dialog . . . . . 222
6-21 Newly added Data Collection for swFCTxFrames . . . . . 223
6-22 Restart the collection daemon . . . . . 223
6-23 Purge Data Collection files . . . . . 224
6-24 Select ITSOSW2 . . . . . 226
6-25 Building graph . . . . . 226
6-26 Graphing of swFCTxFrames . . . . . 227
6-27 Graph properties . . . . . 227
6-28 Real-time reporting — Tool Builder overview . . . . . 228
6-29 Enabling all functions in NetView . . . . . 228
6-30 MIB tool Builder interface . . . . . 229
6-31 Tool Wizard Step 1 . . . . . 229
6-32 Tool Wizard Step 2 . . . . . 230
6-33 SW-MIB — Port Table . . .
. . . 2306-34 Final step of Tool Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2316-35 New MIB application — FXPortTXFrames. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2316-36 Monitor pull-down menu. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2326-37 NetView Graph starting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2326-38 Graph of FCPortTXFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2336-39 Graph Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2336-40 Polling Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2346-41 Tool Builder with all MIB objects defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2346-42 All MIB objects in NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2356-43 SmartSet Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2356-44 Selected Fibre Channel switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2366-45 Defining a SmartSet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2376-46 Advanced window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2386-47 Advanced window with 2109s added. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396-48 New SmartSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396-49 New SmartSet — IBM 2109. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2406-50 SmartSet topology map . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2416-51 ITSOSW1, ITSOSW2 and ITSOSW3 in IBM2109 SmartSet . . . . . . . . . . . . . . . . . . 2426-52 Additional SmartSets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2436-53 IBM2109 SmartSet defined to Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 Figures xv
  • 17. 6-54 NetView Graph starting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 6-55 IBM2109 SmartSet data collected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 6-56 Selected MIB instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 6-57 Graph showing selected instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 6-58 Server Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 6-59 Server Setup options window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 6-60 Clear Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 6-61 Clear databases warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 6-62 NetView stopping — clearing databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 6-63 With seed file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 6-64 Without seed file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 7-1 iSCSI components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 7-2 Fibre Channel versus iSCSI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 8-1 Event notification overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 8-2 SAN Manager generated SNMP traps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261 8-3 Event Destination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262 8-4 IBM Director Console. . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 8-5 SNMP event from SAN Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 9-1 ED/FI - SAN Error Predictor overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 9-2 Failure indication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 9-3 Adornment example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 9-4 Error processing cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 9-5 Fault Isolation indication flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 9-6 ED/FI Menu Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 9-7 ED/FI Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 9-8 Rule description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 9-9 Adornments on the topology map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 9-10 Devices currently in Notification State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275 9-11 Indicated device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 9-12 NetView Search dialog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 9-13 Found objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 9-14 Found device on topology map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 9-15 Clear the notification . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 9-16 After clearing the notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 9-17 Topology change after notification clearance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 10-1 IBM Tivoli SAN Manager components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 10-2 Tivoli Storage Manager integration with Tivoli SAN Manager . . . . . . . . . . . . . . . . . 285 10-3 Sample environment: Backing up Tivoli SAN Manager to Tivoli Storage Manager . 286 10-4 Procedures used to backup IBM Tivoli SAN Manager. . . . . . . . . . . . . . . . . . . . . . . 291 10-5 IBM Tivoli SAN Manager restore procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 10-6 Agent is contacted after restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 10-7 Netview restart failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 10-8 Tivoli Storage Manager restore interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 10-9 IBM Tivoli SAN Manager agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 10-10 IBM Tivoli SAN Manager Disaster Recovery procedures . . . . . . . . . . . . . . . . . . . . 309 10-11 Full system restore result. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 10-12 System Objects restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 10-13 System Objects restore results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 10-14 IBM Tivoli SAN Manager interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 10-15 DB2 Database maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 314 11-1 IBM Tivoli SAN Manager — Logging and tracing overview . . . . . . . . . . . . . . . . . . . 318 11-2 Service Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 11-3 NetView trap reception. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324xvi IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 18. 11-4 NetView daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32411-5 Enable trapd logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32511-6 Stop and start daemons. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32511-7 Recycling daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32512-1 TEC architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33512-2 Tivoli Lab environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33612-3 Active Rule Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33712-4 Import Rule Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33712-5 Import Class Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33812-6 Compile Rule Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33912-7 Load Rule Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33912-8 Restart TEC Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34012-9 TEC Console Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34012-10 Create Event Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34112-11 Create Filter in Event Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34112-12 Event Group Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. 34212-13 Add Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34212-14 Event Group Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34312-15 Assign Event Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34312-16 Assigned Event Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34412-17 Configured Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34412-18 TEC Console main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34512-19 TEC console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34512-20 General tab of event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34612-21 Event attribute list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34712-22 Set Event Destination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34812-23 Enable TEC events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34912-24 Configuration GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34912-25 Choose type of adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35012-26 Enter TEC server name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35012-27 TEC server platform. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35112-28 TEC server port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 35112-29 Configure forwardable events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35112-30 Choose SmartSets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35212-31 Configure adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35212-32 Start the adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35212-33 Defective cable from bonnie to itsosw1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35312-34 Events for cable fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35312-35 Condition cleared . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35413-1 Tivoli Desktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35913-2 Policy Region tonga-region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36013-3 Managed Resources for Inventory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36013-4 Policy Region Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36113-5 Profile Manager Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36113-6 Inventory Profile Global Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36213-7 Inventory Profile PC Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36313-8 Inventory Profile UNIX Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36413-9 Distribute Inventory Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
36513-10 Distribute Inventory Profile dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36613-11 Distribution Status Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36713-12 Create Query Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36713-13 Edit Inventory Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36813-14 Output for IBM Tivoli SAN Manager Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 Figures xvii
  • 19. 13-15 Output for IBM Query. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 13-16 Software Package Editor with new package ITSANM-Agent. . . . . . . . . . . . . . . . . . 370 13-17 Properties dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 13-18 Add an execute program action to the package . . . . . . . . . . . . . . . . . . . . . . . . . . . 372 13-19 Install dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373 13-20 Advanced tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 13-21 Add directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 13-22 Remove dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376 13-23 Advanced properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 13-24 Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378 13-25 Ready-to-build software package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 13-26 Policy Region with Profile Managers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 13-27 Create Software Package Profile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380 13-28 Profile Manager with Profiles and Subscribers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381 13-29 Import Software Package. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 13-30 Import and build a software package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
383 13-31 Install a software package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 13-32 Install Software Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 13-33 Remove a Software Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386 14-1 Tivoli Data Warehouse data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 15-1 IBM Tivoli Monitoring Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392 15-2 Policy Region tonga-region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393 15-3 Profile Manager PM_DM_ITSANM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 15-4 Create Monitoring Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 15-5 Add Parametric Services Model to Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395 15-6 Edit Resource Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 15-7 Parameters of Resource Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 15-8 Indications and actions of resource models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398 15-9 TEC forwarding of events from Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398 15-10 Profilemanager for Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 15-11 TEC events from Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400 A-1 Sensor Event data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405xviii IBM Tivoli Storage Area Network Manager: A Practical Introduction
Tables

1-1 Differences in discovery capability  17
3-1 SAN Manager using vendor HBAs and switches  75
4-1 Procedure to change passwords  146
5-1 IBM Tivoli SAN Manager symbols color meaning  155
5-2 IBM Tivoli NetView additional colors  155
5-3 Problem determination  156
5-4 Status propagation rules  157
A-1 MIB II OIDs  402
A-2 FE MIB  402
A-3 FC-MGMT MIB OIDs used by Advanced Topology Scanner  403
A-4 FC-MGMT MIB Sensor Event Scanner  404
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

© Copyright IBM Corp. 2002, 2003. All rights reserved.
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, DB2®, Domino™, Enterprise Storage Server®, ESCON®, IBM®, ibm.com®, Lotus®, MQSeries®, NetView®, OS/2®, OS/390®, Predictive Failure Analysis®, pSeries™, Redbooks™, Redbooks (logo)™, RS/6000®, Tivoli®, Tivoli Enterprise™, Tivoli Enterprise Console®, TME®, TotalStorage®, WebSphere®, xSeries®, ^™

The following terms are trademarks of other companies:

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.
Preface

Now that you have installed your SAN, how are you going to manage it? This IBM® Redbook describes the new product, IBM Tivoli® Storage Area Network Manager, an active, intelligent, business-centric management solution for storage resources across the enterprise. IBM Tivoli Storage Area Network Manager provides effective discovery and presentation of SAN physical and logical topologies and provides multiple views of the SAN, including zones. Through its interface, it can be configured to show historical and real-time monitoring of SAN fabric devices.

With IBM Tivoli Storage Area Network Manager, you will know what's on your SAN, how the devices are connected, and how storage is assigned to the hosts. If something goes wrong, or new devices are added, the topology display automatically updates to show the changed topology. SAN-generated events can be displayed on the manager system, or forwarded to another SNMP manager or Tivoli Enterprise™ Console.

This book is written for those who want to learn more about IBM Tivoli SAN Manager, as well as those who are about to implement it. This second edition of the book is current to IBM Tivoli SAN Manager V1.2.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.

Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Tivoli Storage Management and Open Tape Solutions at the International Technical Support Organization, San Jose Center. She has 12 years of experience with IBM in the fields of IBM eServer pSeries™ servers, AIX® and storage. She has written nine Redbooks™, and has developed and taught IBM classes in all areas of storage management. Before joining the ITSO in 2000, she was the Technical Support Manager for Tivoli Storage Manager in the Asia Pacific Region.

Michel Baus is an IT Architect for @sys GmbH, an IBM Business Partner in Karlsruhe, Germany.
He has eight years of experience in the areas of UNIX®, Linux, Windows® and Tivoli Storage and System Management. He holds several certifications within technical and sales fields and is an IBM Tivoli Certified Instructor. He has developed and taught several storage classes for IBM Learning Services, Germany. He was a member of the team that wrote the redbook Managing Storage Management.

Michael Benanti is an IBM Certified IT Specialist in Tivoli Software, IBM Software Group. In his six years with IBM, he has focused on architecture, deployment, and project management in large SAN implementations. Mike also works with the Tivoli World Wide Services Planning Organization, developing services offerings for IBM Tivoli SAN Manager and IBM Tivoli Storage Resource Manager. He has worked in the IT field for more than 11 years, and his areas of expertise include network and systems management disciplines using Tivoli NetView® and data communications hardware research and development. He was an author of the first edition of this redbook.
The team - Urs, Mike, Michel, Ivo, Charlotte

Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central and Eastern European Region in architecting, deploying and supporting SAN/storage/DR solutions. His areas of expertise include SAN, storage, HA systems, IBM eServer xSeries® servers, network operating systems (Linux, MS Windows, OS/2®), and Lotus® Domino™ servers. He holds several certifications from various vendors (IBM, Red Hat, Microsoft®). Ivo was a member of the team that wrote the redbook Designing and Optimizing an IBM Storage Area Network, and contributed to various other Redbooks about SAN, Linux/390, xSeries, and Linux. Ivo has been with IBM for five years and was an author of the first edition of this redbook.

Urs Moser is an Advisory IT Specialist with IBM Global Services in Switzerland. He has more than 25 years of IT experience, including more than 13 years of experience with Tivoli Storage Manager and other Storage Management products. His areas of expertise include Tivoli Storage Manager implementation projects and education at customer sites, including mainframe environments (OS/390®, VSE, and VM) and databases. Urs was a member of the team that wrote the redbook Using Tivoli Storage Manager to Back Up Lotus Notes.

Thanks to the following people for their contributions to this project:

The authors of the first edition of this redbook: Michael Benanti, Hamedo Bouchmal, John Duffy, Trevor Foley, and Ivo Gomilsek.

Deanna Polm, Emma Jacobs, Gabrielle Velez
International Technical Support Organization, San Jose Center
Doug Dunham, Nancy Hobbs, Jason Perkins, Todd Singleton, Arvind Surve
IBM Tivoli SAN Manager Development, San Jose

Johanna Hislop, Dave Merbach
IBM Tivoli SAN Manager Development, Rochester

Rob Basham, Steve McNeal, Brent Yardley
IBM SAN Development, Beaverton

Bill Medlyn, Daniel Wolfe
IBM Tivoli SAN Manager Development, Tucson

Steve Luko
IBM Tivoli SAN Manager Marketing, Tucson

Kaladhar Voruganti
Almaden Research Center

Murthy Sama
Cisco Systems

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at: ibm.com/redbooks
Send your comments in an Internet note to: redbook@us.ibm.com
Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE Building 80-E2, 650 Harry Road, San Jose, California 95120-6099
Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-6848-01 for IBM Tivoli Storage Area Network Manager: A Practical Introduction, as created or updated on September 9, 2003.

September 2003, Second Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
Version 1 Release 2 of IBM Tivoli Storage Area Network Manager
– AIX Manager and Linux Agent support
– iSCSI support to integrate iSCSI into SAN management
– Performance enhancements by removing previous software requirements
– Error Detection and Fault Isolation (ED/FI - SAN Error Predictor)
IBM Tivoli Bonus Pack for SAN Management
Part 1. Introduction

In Part 1 we talk about why customers need management for their Storage Area Networks, focusing on the costs and challenges of managing it manually today. We then introduce IBM Tivoli Storage Area Network Manager, a new solution for displaying and monitoring physical and logical SAN topologies, receiving events, and reporting on SAN performance statistics and counters.
Chapter 1. Introduction to Storage Area Network management

Industry needs storage management today in open environments for the same reasons that storage management was needed in the mainframe environment in the early and mid-1980s. Businesses are generating data so fast that data storage and data management capabilities are being overwhelmed. If these capabilities cannot handle the growth, then at some point, “the next transaction” cannot be captured, and the business will stop. Here are two key problems which impact this situation:
– Storage costs are too high.
– Storage management costs are too high.

Storage Area Networks (SANs) are increasingly prevalent, but now face the same problems found in traditional IP networking in the 80’s. Two of the key challenges for SANs are to standardize and provide functional, open management tools.

In this chapter:
– We identify the business and technology considerations which caused the development of SAN management tools.
– We describe the big picture of data and storage management, and position SAN management within that context.
– We discuss SAN management, including:
• The benefits of using SAN management tools
• The functions that SAN management tools should accomplish
– We consider the impact of standards on SAN management.

In subsequent chapters we introduce a new solution for SAN management, IBM Tivoli Storage Area Network Manager, and discuss deployment architectures, installation and design considerations, operations, and maintenance.
1.1 Why do we need SAN management?

Storage Area Network management is the set of tools, policies, processes, and organization that provide information about and that monitor the devices in a Storage Area Network (SAN). In conjunction with other storage management tools, this helps ensure the availability of data in a SAN. Cross-vendor, standardized Storage Area Network management can help users adopt SANs more easily and quickly.

1.1.1 Storage management issues today

The major issues in storage management are shown in Figure 1-1:
– Growth is overwhelming people, tools, and processes:
• business transactions
• storing new and different data types (medical records, voice, images, presentations)
• new data types are larger than the old data types
– Unmanaged storage costs too much.
– Manual storage management costs too much.
– Multivendor management is hard to master.

Figure 1-1 Storage management issues today

Growth
Growth is being driven by three general trends:
– Business transaction volumes are growing.
– Businesses are using computers to store information that used to be stored only on film or paper.
– There are new data types (such as music, video clips, images, and graphical files) that require significantly more storage per file than older data types like flat files.

The data and storage infrastructure that support this growth is growing dramatically. That growth rate is estimated to range from 50-125% annually, depending on the industry and consultant report of your choice. Consequently, the storage infrastructure must also grow to support the growth in business transactions. See Figure 1-2.
Figure 1-2 summarizes the infrastructure growth issues by growth vector:
– Server: Large companies have thousands of servers, a mixture of Windows and different UNIX OS.
– Staffing: Each corporate server may grow to 3 TB data by 2004; a typical open system administrator can look after 1 TB.
– Storage: Average storage growth is 50 to 125% per year; the largest companies may see much higher rates. SAN storage will soon be over 50% of total storage. SANs are being increasingly deployed.

Figure 1-2 Infrastructure growth issues

Server growth
Major companies have hundreds of large UNIX servers, and sometimes thousands of Windows servers. They are deploying more servers every quarter, and most large companies have a large variety of different hardware and software platforms, rather than standardizing on particular configurations.

Staffing growth
While we know that storage and data are growing rapidly, support staff numbers are not. This only exacerbates the problem. An average corporate server may be supporting in the order of 3 TB of data in the coming years, yet it is estimated that the typical open systems administrator can manage only 1 TB. Since in today’s economic times, businesses are looking to cut costs, most are cutting rather than increasing their IT departments. Clearly more intelligent and powerful applications will be required to support this environment.

Storage and SAN growth
Although companies are growing their storage at around 50 to 125% per year on average, larger companies may see even higher growth rates. To handle the growth in storage, storage is being consolidated into Storage Area Networks (SANs). SANs are increasingly being deployed by customers, and customers may deploy up to 50% of their disk via SAN in the coming years. But SANs do not solve the underlying problems of mismanaged data and its explosive growth. SANs concentrate the storage, the data, and the problems, and emphasize the need for SAN management.
In fact, the cost of SANs and their management is a major inhibitor to further SAN adoption — SANs are a separate new manageable entity in themselves, along with their associated hardware and software components.

Early adopters who are now expanding the SANs they deployed some time ago are finding a different set of problems from those they had when implementing their first SAN. Early on, the main SAN problems were related to interoperability. With the growth in standardization for SANs, these issues are becoming less significant. Now, businesses who are trying to expand SANs in the enterprise are constrained by the difficulty in managing large-scale SANs with current processes, which are largely vendor-specific and/or manual.
One problem is that SAN management crosses traditional organizational boundaries. Networks are traditionally managed by network management groups. Storage has traditionally been managed by the individual operating system platform groups or by a specialized storage group. SAN managers have to understand both networking and storage. Which group, then, should have the responsibility for managing SANs? As will be seen later in this book, IBM Tivoli Storage Area Network Manager targets exactly this intersection of the two skill areas — using network management techniques to manage the SAN topology, while providing storage management-oriented logical views.

Manual storage management costs too much
The major issues of manual storage management are shown in Figure 1-3:
– Large corporations: different teams involved:
• UNIX platform management
• Windows platform management
• UNIX backup
• Windows backup
• Business Continuance and Disaster Recovery
• Networking
– Each of the above teams has its own:
• Spreadsheets
• Home-grown reports
• Personal databases
• System or network diagrams
– Coordination is problematic:
• Each group develops its own policies
• Policies not coordinated with each other, or with the mainframe group
– Small corporations: the “one person who does it all” is spread too thin:
• Quick notes in a personal notebook
• Only resource which knows the infrastructure

Figure 1-3 Manual storage management issues

In today’s environments, IT organizations typically manage storage across some or all of these areas:
– OS platform administration — handles disks associated with individual servers
– Backup and recovery — tape
– Business continuance and Disaster Recovery — disk and tape at Disaster Recovery sites
– Networking group — access to NAS and SAN devices, often the overall design
– Storage group — any of the previous functions, cross-platform

In large companies, these disciplines often each have separate teams. Coordinating these different teams is a major issue.
In small corporations, these functions are usually handled by a single person, who is typically highly skilled and overworked. All the groups have their own spreadsheets, home-grown reports, personal databases, and Visio diagrams, etc., to manage their particular environments. And typically each area monitors and manages in isolation, not coordinating with the other functions.

IT organizations have historically been organized by operating system platform. A UNIX platform administrator managed the server, communications, disk and tape, and SANs, that is, everything to do with UNIX. The same applied to the Windows administrator.
Centralizing storage management makes it possible to apply the same tools and processes to all business units within the company. For this reorganization to work effectively, new tools and new procedures are needed to support the new organizational structure. IBM Tivoli SAN Manager is one of the key underlying new tools that support this movement towards a more consistent, more efficient use of resources — that is, people, storage, and money.

For example, a company with 500 NT servers and 300 UNIX servers across different business units might have 2100 LUNs to be managed (1.5 x 500 + 4.5 x 300 = 2100). Managing that many filesystems manually is difficult. A growing percentage of companies have consolidated storage into Fibre Channel (FC) SANs, but they still have to manage the same number of LUNs. The LUNs are still associated with individual application servers, and storage on the FC storage frame is still logically segregated.

Some companies have a mix of FC storage pools, network-attached (NAS, iSCSI) storage pools, and direct-attached storage environments. Each FC storage pool is managed by its own storage manager. Each NAS pool has its own manager. Each small group of 25-30 (typically) direct-attached storage servers has its own platform administrator. These administration costs can be at the user department level, at the division IT level, or at the corporate IT level. The costs are hard to aggregate, but are large.

1.1.2 Current generation of SAN management: spreadsheets and paper

In Figure 1-4, we consider current methods of storage management.
– Storage Network Topology:
• Visio diagram, typically out of date, or
• Sometimes a hand-drawn diagram
• Sometimes on an eraseable whiteboard
– Switch Inventory and connections:
• Spreadsheet, PC database, or WP document
• Page in a personal notebook
– Storage Frame Layout (logical-to-physical):
• Maintained by vendor, or
• Customer-maintained in spreadsheet or management application
– Server Information:
• Spreadsheet, PC database, or WP document

Figure 1-4 Current methods of compiling information about storage networks

When a user calls and says “my application stopped working!”, administrators (storage administrators, network administrators, application administrators or platform administrators) have to research, narrow down the possible causes, and make an educated guess as to the root cause of the problem. If the problem is confirmed as related to storage, they may have to access several individual components in the storage infrastructure (for example, HBA, disk controller, disk system, microcode), one component at a time, sometimes several times for each component, as they try to identify the root cause.

The current approach to managing storage networks typically involves manual processes and point (that is, vendor-specific, non-interoperable) solutions.
Information concerning inventory, topology, and components is typically maintained manually. Today’s tools are point-solutions, usually managing one single component, or components from a single vendor. If you need to look at 4 or 5 switches to track down a problem, you might need to log on to 4 or 5 switches, each with its own management software.

Here are some frequently encountered scenarios:
– The topology of the SAN is maintained on a Visio diagram somewhere, which was last updated some months ago “before we added those last 2 departments, and deployed several new switches, and I just didn’t have the time to update the diagram!”
– The server inventory (a spreadsheet or a PC database) was updated “in a consultant study 12 months ago”. Each platform group has its own inventory, which is kept separately from the other groups. Rarely does a company have an enterprise view of its infrastructure.
– The revision levels for all the Operating Systems, the patches, the HBA drivers, etc., are in a spreadsheet, which is somewhat up-to-date (“except for the last 3 rounds of server upgrades!”)
– The logical layout of the storage frames is kept either by the storage vendor themselves or on a spreadsheet which needs to be manually updated.

If a problem does arise, then the following tools and methods are typically used to identify and resolve the problem:
– To manage a switch, the administrator has to consult his spreadsheet to find the address, user ID and password of the switch, log on to the switch, run the switch management package (different for each brand of switch), scan the menus to understand the SAN architecture, and write down what he needs to know on a piece of paper.
– To manage the storage frame, the administrator has to log on to the frame, and run its point-solution software (again, different for each manufacturer) to understand the storage frame. Then the administrator has to mentally or manually build a map of the SAN infrastructure.
Then the administrator maps the specific problem to the infrastructure, forms a hypothesis, and tries to solve the problem. This is an iterative process.

With a small and stable SAN (for example, 2 switches, 12 servers, and 1 storage frame with 4 storage ports), managing the components via spreadsheets, PC databases, and point solution tools can be fairly effective for simple problems. In this environment, there are only 2 primary storage tools to learn (the switch tool and the storage frame tool), with only 2 switches and 1 frame to manage. In this small environment, the administrator generally has the architecture in his head, knows all the components, and can usually identify and fix problems within a reasonable amount of time. Note however that there is probably only one person in the organization who is familiar enough with the layout of the network to be able to do this. What happens if that person takes vacation, is ill, or leaves the organization?

With a complex SAN, the number of components to manage exceeds the ability of current tools and administrators to manage in a timely fashion. Just the discovery process alone can be very time-consuming.
Figure 1-5 shows a typical large storage network, illustrating the scope of the SAN management problem.

Figure 1-5 Large SAN environment to be managed

In this large storage network, there are many components, and many points of management:
– Infrastructure components:
• Each switch has its own management software: there are 2 different switch vendors, and at least 8 switches, each with 16 or 32 ports.
• Each storage frame has its own management tool: there are 4 different frames, each with 4-16 storage ports and 50 disks, from 2 frame vendors.
• Servers, file systems, and HBAs each have their own management tools: there are 300 servers (many not shown), each with 2 or more mount points or shares, each with 2 HBAs; there are 5 different platform operating systems (Windows 2000, NT, HP-UX, Solaris, AIX); and there are different vendor HBAs (Emulex, JNI, IBM).
– Component management:
• Storage administrators manage the storage in the storage frame.
• The storage vendor sometimes manages the logical-to-physical conversion (file system to LUN) for the storage.
• Platform administrators manage servers, file systems, and HBAs.
• Backup and recovery are managed by yet a different group.
• The client-facing IP network is managed by the network group, who also try to manage the SAN as a whole.
To manage the physical infrastructure, the IT organization would have to individually manage each component of the SAN infrastructure. That is:
– 4 * 32 + 8 * 16 = 256 switch ports
– 2 different switch management packages
– 40 storage frame ports, approximately 200 disks
– 600 shares or mount points
– 600 HBAs
– 300 instances of 4 different operating systems
– TOTAL NUMBER OF OBJECTS TO MANAGE = 1996

When a problem arises in this complex environment, administrators turn to the manual documents and point-solution tools to help them narrow the focus of their investigation. Considering the state of the documents and the information with which they are working, their task is “challenging”, and the business exposure is high. Mission-critical servers cannot afford hours of downtime just to find a root cause, much less additional time to fix the problem. Mission-critical storage, servers, and applications, by definition, need to be available 24x7. Trying to manage these 2000-odd components manually cannot be done consistently over time.

Summary
– Storage and data are growing rapidly.
– SANs are growing, and are too big to manage manually.
– Manual storage management costs a lot.
– Companies cannot continue to manage storage and data “the old way” (managing individual components), and be successful. Companies MUST adopt new tools to manage storage and data.

1.2 New tools for SAN management are needed

Clearly, new tools for SAN management are needed. Customers want the following capabilities from SAN management software:
– To manage their SAN properly
– To be able to extend the benefits of a SAN to the enterprise
– To do so in a cost-effective fashion (both storage and administration)

Given the multivendor infrastructure environment, storage components and storage management tools must be based on standards.
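As a simple sanity check of the component tally above, the arithmetic can be reproduced in a few lines. The counts are taken directly from the example environment in 1.1.2; the script itself is purely illustrative and not part of any IBM tool:

```python
# Managed-object tally for the example SAN environment (illustrative only).
switch_ports = 4 * 32 + 8 * 16   # 4 x 32-port switches plus 8 x 16-port switches
frame_ports  = 40                # storage frame ports across the 4 frames
disks        = 200               # approximately 50 disks per frame
mount_points = 600               # 300 servers x 2 shares or mount points
hbas         = 600               # 300 servers x 2 HBAs
os_instances = 300               # one OS instance per server

total = switch_ports + frame_ports + disks + mount_points + hbas + os_instances
print(total)  # 1996 objects to manage individually
```

Note that the 2 switch management packages and the 4 operating system types are excluded from the object count; they multiply the number of tools and skills required rather than the number of managed objects.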
Standards in storage management promise the following benefits:
– Ensure interoperability
– Protection of investment
– Freedom of choice:
• Is vendor-independent
• Drives pricing towards commodity pricing
• Results in attempts by manufacturers to add value above the standards
1.2.1 Storage management components

Figure 1-6 shows the storage management infrastructure functions from the low level device solutions all the way up to the business management level. The current set of Tivoli solutions already provide much of the functions in the Business Management section (that is, Systems Management, Storage Management and Security Management). IBM Tivoli Storage Manager, IBM Tivoli Storage Resource Manager and IBM Tivoli SAN Manager provide the functionality for the middle Storage Resource Management layer. They interoperate with and utilize the lower level storage infrastructure layer applications. These are often vendor-specific solutions, such as individual Element Managers or Replication Solutions. These also encompass some upcoming products, such as for Virtualization.

Figure 1-6 Storage management architecture for a suite of solutions (layers, top to bottom: Business Management, including Systems, Storage, and Security Management; Storage Resource Management, including reporting, capacity, asset, event, availability, and performance monitoring, backup and recovery, advanced SAN management, and policy-based automation; Storage Infrastructure, including element managers, subsystem and file system reporting, virtualization, replication, and volume managers; Devices: DAS, SAN, NAS, tape, iSCSI, Fibre Channel)

SAN management tools were developed to help address the issues described in the previous section — to consolidate into one place all the information needed to manage the components of a SAN so that storage administrators can keep the physical and logical storage environment operating all the time.
With the right SAN management tools, from one console, storage administrators should be able to see all that happens in their storage infrastructure:
– By hosts in the SAN
– By devices in the SAN
– By topology of the SAN

These are some of the benefits of using SAN management tools:
– Technical benefits:
• Effective discovery and presentation of SAN physical and logical topologies for small or large SANs
• Continuous real-time monitoring, with rapid Error Detection and Fault Isolation
• Support for open SAN standards
• Minimize storage and SAN downtime
• Provide a framework to extend the SAN to the enterprise
– Business benefits:
• Increase revenue by improving availability for applications hosted on the SAN
• Reduce costs (both administration and storage)

These are the main attributes of a good SAN management tool:
– Standards based
– Strong architecture:
• Centralized repository
• Based on an enterprise database
• Discovers all components of a SAN
• Integrated with an enterprise console
• Identifies errors and isolates faults
• Thresholds for reporting and actions
– Easy to navigate and understand
– Flexible and extensible:
• Provides topology (physical) views, both host-centric and switch-centric
• Viewing a single SAN, or all SANs, in an organization
• Ability to launch vendor-provided management applications from a single console
• Reporting, both standard and customizable

1.2.2 Standards and SAN management tools

For the storage networking community (both vendors and buyers), standards form the basis for compatibility and interoperability. Standards enable buyers to pick the solutions they want to implement with the knowledge that today’s solution will be interoperable with tomorrow’s solution, and that existing hardware investments will be protected as the environments are extended. For vendors, standards give the confidence that a wide market exists for their solutions, and lower the costs of compatibility testing.

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-7 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.
Standards organizations and standards

Figure 1-7 shows some of the main SAN management standards organizations:
– Storage Networking Industry Association (SNIA): SAN umbrella organization. IBM participation: founding member, with roles on the Board, Technical Council, and as Project Chair.
– Fibre Channel Industry Association (FCIA): sponsors customer events. IBM participation: Board.
– Fibre Alliance: EMC consortium.
– SCSI Trade Association: technology roadmaps.
– National Storage Industry Consortium: pre-competitive consortium. IBM participation: member.
– Jiro (StoreX): Sun consortium.
– Internet Engineering Task Force (IETF): formal standards for SNMP and MIBs.
– American National Standards Institute (ANSI): X3T11 for FC/FICON standards, X3T10 for SCSI standards. IBM participation.
– International Organization for Standardization (ISO): international standardization. IBM Software development is ISO certified.
– Distributed Management Task Force (DMTF): development of CIM. IBM participation.

Figure 1-7 Storage networking standards organizations and their standards

Industry organizations, such as the Storage Networking Industry Association (SNIA) and the Fibre Alliance, have taken a leading role in facilitating discussions among vendors and users. Members chair working groups, looking at a wide range of subjects relating to storage and SANs such as discovery and management, backup and disaster recovery. Developments by these organizations are considered de-facto standards. Recommendations from these organizations are submitted to the officially recognized standards bodies (IETF, ISO and ANSI) for consideration as a formal standard.

A key standard is contained in the FC-MI (Fibre Channel — Methodologies for Interconnects) technical report published by the ANSI T11 standards committee. Taken as a whole, the FC-MI report addresses multi-vendor interoperability for Storage Area Networks.
The next generation of the standard, FC-MI-2, is already in development.

This report describes a required set of common standards for device and management interoperability in both loop and switched fabric environments. Compliance to the standards defined by the FC-MI allows for operational interoperability between hosts, storage devices, and fabric components over a wide variety of Fibre Channel topologies. It also provides for a common approach to SAN device discovery and management.
ANSI has defined all the principal standards relating to physical interfaces, protocols, and management interfaces that would be exploited by the hardware vendors:
– FC-PH specifies the physical and signaling interface.
– FC-PH-2 and FC-PH-3 specify enhanced functions added to FC-PH.
– FC-FG, FC-SW, FC-GS-2, FC-GS-3, FC-SW-2, FC-FS, and draft standards for FC-GS-4 (target announcement date, August 2003) and FC-FS-2 are all documents relating to switched fabric requirements.
– FC-AL specifies the arbitrated loop topology.

FC-MI builds on these standards and groups device interoperability into four areas, shown in Figure 1-8:
– Management Behaviors: the set of standards required to be interoperable at the management level.
– Loop Behaviors: the set of standards required to create interoperable Arbitrated Loops.
– Fabric Behaviors: the set of standards required to create interoperable Switched Fabrics. A switched fabric is defined as being either a single switch, or 2 or more switches connected via E-ports.
– FC Port Behaviors: the set of standards that end-point ports must support for devices to be interoperable in the defined switched fabrics and arbitrated loops.

Figure 1-8 Standards for Interoperability

A single device may have to comply with all four standards. Taken together, these standards define a set of common specifications that a device must adhere to in order to be compliant with FC-MI compliant devices at both the operational and management levels.

SAN management using FC-MI
SAN management requirements are defined in the discovery and management section of the FC-MI report. This section outlines the ANSI and other standards that Fibre Channel devices must comply with to ensure that all devices, irrespective of vendor or type of device, can be discovered using FC-MI compliant management tools.
Adherence to the existing standards defined in the FC-MI report enables a consistent approach for managing storage and SAN components, whether hosts, storage systems, or fabric components such as switches, gateways, or routers. The standards also provide a basis for advanced management capabilities, such as error detection and fault isolation (ED/FI), and predictive analysis of reported errors to identify pending component failures.
The following is a partial list of the current standards that different SAN components, end points (hosts, storage subsystems, gateways, etc.), Host Bus Adapter (HBA) drivers, and fabric components must support to be compliant with FC-MI for SAN management:
– Name Server, as defined in ANSI FC-GS-3
– Management Service, as defined in FC-GS-3:
• Configuration Server
• Unzoned Name Server
• Fabric Zone Server
– Fabric event reporting — these are Extended Link Services (ELS) commands defined in FC-FS (Framing and Signaling Interface) for notification of fabric events:
• RSCN — Registered State Change Notification
• RLIR — Registered Link Incident Record
– HBA drivers must support an API (such as the SNIA SAN management HBA API) that must be capable of:
• Issuing Name Server, Fabric Management Server, and end point queries, and
• Notifying the driver (or other recipient) of fabric events
– SNMP monitoring, using the IETF FC Management MIB (previously known as the Fibre Alliance MIB) and traps
– Respond to end point queries:
• RNID — Request Node Identification Data
• RLS — Read Link Error Status Block

Taken together, these different discovery and reporting mechanisms allow a complete SAN topology to be determined and monitored, along with advanced capabilities such as performance analysis and error detection and fault isolation.

1.2.3 Discovery

Discovery uses two approaches for discovering SAN device information:
– Outband queries — over an IP network via standardized MIBs, which typically are loaded only onto the managed switches. IBM Tivoli SAN Manager gathers SNMP-collected information from outband agents.
– Inband queries — using Fibre Channel protocols. In the case of IBM Tivoli SAN Manager, an Agent loaded onto the target server queries a standard HBA API loaded onto the managed host, which then queries reachable devices in the SAN. The information obtained is returned to the Manager.
Tivoli SAN Manager stores the results of inband and outband discoveries from the Agents in its database, correlates them to remove duplication, and uses the information to draw or redraw the topology map.
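The correlation step described here, in which records returned by several inband and outband scans are merged into one set of unique devices, can be sketched as follows. This is a minimal illustration only; the record fields and the `correlate` function are invented for the example and are not Tivoli SAN Manager's actual data model:

```python
def correlate(inband, outband):
    """Merge discovery records from inband and outband scans,
    de-duplicating devices by their World Wide Name (WWN)."""
    merged = {}
    for record in inband + outband:
        # A device reported by both paths collapses to one entry;
        # later records fill in attributes earlier ones lacked.
        merged.setdefault(record["wwn"], {}).update(record)
    return list(merged.values())

inband_records = [{"wwn": "50:05:07:63:00:c0:91:0a", "type": "storage", "port": 1}]
outband_records = [{"wwn": "50:05:07:63:00:c0:91:0a", "type": "storage"},
                   {"wwn": "10:00:00:60:69:40:12:34", "type": "switch"}]

# Three records, but only two unique devices remain after correlation.
topology = correlate(inband_records, outband_records)
```

The topology map is then drawn from the merged list, so a device seen by both discovery paths appears only once.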
1.2.4 Outband management
For outband management, the following sequence occurs, shown in Figure 1-9. Note that all outband communications occur over the IP network.

[Figure 1-9 SAN Manager — Outband management path over the IP network: the SAN Manager server, with its SNMP Manager and database, exchanges SNMP queries and forwarded traps with the SNMP agents on managed switches across the IP network]

Outband management uses the MIB(s) available for the target switches. The purpose of loading a MIB is to define the MIB objects that the SAN management application will track. These are the items that we want to collect data about, such as the number of transmitted or received frames, and error conditions. The objects are defined in the relevant MIBs.

Outband management is used during polling, which is the process of scanning devices to collect the SAN topology. The SNMP Agent solicits the appropriate information from the devices and returns it to the SAN Manager through the inbuilt SNMP Manager provided in NetView. The switches in this case are configured to send their traps to the SNMP Manager.

SAN management events are also communicated using outband methods. From time to time, events will be triggered from the Agent on the switch to the SAN Manager. The SAN Manager will log these events and respond accordingly. For example, an event could be sent indicating that a switch port is no longer functioning. The SAN Manager would update its topology map to reflect this.

1.2.5 Inband management
Inband management is shown in Figure 1-10. Inband management works by discovering devices over the Fibre Channel network using Fibre Channel protocols and standards. The data collected is then sent to the Manager over the TCP/IP network, hence the Manager does not have to be connected to the SAN.
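The reaction to an outband event, logging the trap and updating the topology model, can be sketched as a simple handler. The trap dictionary format and port states below are invented for illustration; real traps follow the FC Management MIB:

```python
# Toy topology model: (switch, port) -> state.
topology = {("switch01", 4): "up", ("switch01", 5): "up"}
event_log = []

def handle_trap(trap):
    """Log the event and, for a port-down trap, mark the port in the
    topology model so the map can be redrawn."""
    event_log.append(trap)
    key = (trap["switch"], trap["port"])
    if trap["event"] == "port_down" and key in topology:
        topology[key] = "down"

# A switch reports that port 5 is no longer functioning.
handle_trap({"switch": "switch01", "port": 5, "event": "port_down"})
```

After the handler runs, the model shows port 5 down while port 4 is unaffected, which is what drives the redrawn map.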
[Figure 1-10 SAN Manager — Inband management path: the Manager sends a scan request over the IP network to the Agent on a managed host; through the HBA API, the Agent queries the connected switch, the fabric elements, and end points (hosts, storage, gateways) over Fibre Channel, and returns the collected data over IP]

In the case of IBM Tivoli SAN Manager, an Agent is installed on hosts to be managed and is configured to communicate with a Manager system. The polling process for topology discovery sends queries inband through the SAN. Specifically, the HBA API on the managed Agent issues its own query to the FC switch. Topology information is retrieved from the switch, including information about other switches and their attached end-point devices. This is because in a cascaded switch configuration, topology information is shared and replicated among all switches. End-point devices (which do not have an Agent installed), such as storage systems, gateways, and other hosts, respond to RNID and SCSI queries for device and adapter information. Fabric components, such as switches, respond to queries of the Management Server and the Name Server via the HBA API. Switches are not end-point devices.

The Agent returns all collected information to the SAN Manager over the IP network. This information is correlated and consolidated (since other Agents may return duplicate information) and stored on the Manager. The Manager uses this information (combined with information returned by outband Agents, if deployed) to build the topology map and submaps.

1.2.6 Why you might use both inband and outband discovery
Both of these methods have a valid role in SAN management. Both are being actively developed, and offer different technology benefits. One practical benefit of using both methods is that, with two discovery methods, should one network or the other become unavailable for some reason, the manager can always fall back on the alternate monitoring method.
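The fallback behavior, using the alternate discovery path when one network is unavailable, can be modeled in a few lines. The function and scan callables below are illustrative only, not part of any real product API:

```python
def discover(inband_scan, outband_scan):
    """Try the inband (Fibre Channel) scan first; if that path is
    unavailable, fall back to the outband (IP/SNMP) scan.
    Both arguments are callables that return a list of discovered
    devices or raise ConnectionError on failure."""
    try:
        return inband_scan(), "inband"
    except ConnectionError:
        return outband_scan(), "outband"

def fc_scan_down():
    # Simulate a Fibre Channel path outage.
    raise ConnectionError("Fibre Channel path unavailable")

devices, path_used = discover(fc_scan_down, lambda: ["switch01", "storage01"])
```

With the FC path down, `path_used` is `"outband"` and monitoring continues over IP; with a working inband scan, the first branch is used instead.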
Multi-protocol management using both inband and outband methods is expected to be the most common implementation of SAN management capabilities. Table 1-1 shows the different capabilities for inband and outband management methods.

Table 1-1 Differences in discovery capability

Function                                        Inband (uses fibre network)   Outband (uses IP network)
Device discovery                                X                             X
Topology mapping                                X                             X
Topology monitoring                             X                             X
SAN identification                                                            X
Element Manager launch                                                        X
Unit level events                                                             X
Zone discovery                                  X
End point identification                        X
LUN identification                              X
Device status                                   X
Node and link level events                      X
End point port statistics                       X                             X
Logical device-centric and host-centric views   X

One advantage of inband discovery is that inband-compliant devices can discover and report errors for adjoining devices. The capability has other associated benefits: Agents can use this method to discover and manage the physical and logical connections from the switch to the fibre-attached disk. Agents can also use this method to discover and manage fibre-attached hosts through contact with their HBAs.

One advantage of outband discovery is that, in the event that a FC path is down, the management server can still receive errors from the IP path. Another advantage of outband discovery via SNMP is that outband discovery is not affected by zoning. Currently, zoning limits inband requests from management agents to discovering only those end points within the zone. (ANSI FC-GS-4 compliance should remove this limitation for inband management.)

1.2.7 Formal standards for outband management
In the early days of SANs, the FC Management MIB was developed as a de facto standard by the Fibre Alliance organization to provide basic SAN management capability quickly, and with broad device coverage, using the well-established and easy-to-implement SNMP protocol. This management MIB (current release 4.0) is in the process of being adopted by the IETF as a formal standard. The Fibre Alliance fully supports the efforts of ANSI and other standards bodies to provide formal standards for outband SAN management. The FC Management MIB is exploited by the FC-MI. The other standard that exists for outband discovery and management is the Fabric Element SNMP MIB, defined by the IETF.
Some vendors also provide their own SNMP MIBs for monitoring different parameters (for example, performance data) in the switch. The SAN industry benefited greatly from experience gained in both wide-area and local-area networking, and applied that experience in developing the FC-GS-3 standard for inband management.
  • 48. 1.2.8 Formal standards for inband management Formal standards for inband SAN management currently provide more information than that provided by outband management standards. ANSI FC-GS-3 defines a number of inband management services that are useful for SAN discovery. They are shown in Figure 1-11. Name Server, and Management Services, comprised of Fabric Configuration Server, and the Fabric Zone server, and the Unzoned Name Server. In-band Query Interface Figure 1-11 Inband management services In conjunction with the Name Server, Management Services allow management applications to determine the configuration of the entire SAN fabric. Name server This provides registry and name services for hosts and devices on the fabric network. This is the basis for soft, or World Wide Name (WWN) zoning. The list of devices is segregated by zone. When a host logs into the SAN, the Name Server tells it which devices it can see and access over the network. Management agents using only the Name Server are limited to device discovery and queries within the same zone as the management agent. Fabric configuration server This server provides fabric configuration and topology information and registration services for platforms (hosts, storage subsystems, etc.) in the SAN. Platforms in the SAN can register information such as their logical name, platform name, and management address. This allows determination of device type — host, storage subsystem, gateway, or storage access device. The Fabric Configuration Server enables discovery of the host (and identification as a host) without the need for an agent on the host, or for manually typing the host name next to the WWN in the configuration table. Management address information allows determination of the device’s outband management interface, IP address, and management protocols (SNMP, HTTP, or CIM). Fabric Zone Server This server defines a mechanism for zoning discovery and control via a standard interface. 
Unzoned Name Server
This provides management applications with name services for device discovery across the entire SAN, uninfluenced by switch zone configuration. A single agent can then discover all devices and end points within the network, irrespective of zoning.
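The contrast between the zone-limited Name Server and the Unzoned Name Server can be sketched in a few lines of Python. The zone names and WWN strings here are invented for illustration, not taken from any real fabric:

```python
# A toy zoning configuration: zone name -> member WWNs.
zones = {
    "zone_a": {"host1_wwn", "disk1_wwn"},
    "zone_b": {"host2_wwn", "disk2_wwn", "tape1_wwn"},
}

def name_server_query(requester_wwn):
    """Name Server view: the requester sees only devices that share
    a zone with it (soft/WWN zoning)."""
    visible = set()
    for members in zones.values():
        if requester_wwn in members:
            visible |= members
    return visible - {requester_wwn}

def unzoned_name_server_query():
    """Unzoned Name Server view: every device in the fabric,
    irrespective of zoning."""
    return set().union(*zones.values())
```

A host in zone_a sees only its zone partner through the Name Server, while a management agent using the Unzoned Name Server sees all five devices at once.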
  • 49. Inband Query Interface The final part of the FC-MI definition is an interface to perform in-band queries, discover HBAs, and to retrieve adapter information. This is provided by the SNIA HBA Management API. This API is supported by many HBA vendors.1.2.9 The future of SAN management standards In this section we consider the future of SAN management standards (see Figure 1-12). Storage Management Initiative Specification - SMIS Enhancements to inband management Enhancements to outband management SAN management applications Figure 1-12 The future of standards in SAN management Today, several different standards exist for discovering management information, and for managing devices. Each standard made sense at the time it was adopted. But the industry has learned a lot, and is now attempting to develop a single management model, the Common Information Model (CIM), for managing hosts, storage subsystems, and storage networking devices. CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Desktop Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as Fibre Channel switches, and NAS devices. IBM’s intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces. SNIA regards CIM-based management as the future for multi-protocol SAN management. In 1999, SNIA demonstrated a prototype common Enterprise Storage Resource Manager (ESRM) using WBEM and CIM technology from a number of different vendors (including IBM, Sun, Microsoft, and HDS). This prototype demonstrated management of different storage subsystems (EMC, IBM, StorageTek, Compaq, HDS, and Sun) from a single common management platform. 
In 2002, IBM, along with other vendors, presented a new piece of technology code-named Bluefin to SNIA, which accepted it in August 2002. Bluefin employs CIM and WBEM technology to discover and manage resources in multi-vendor SANs using common interfaces. When implemented in management products, Bluefin will improve the usefulness of SAN and storage management applications and provide for greater management interoperability.

Storage Management Initiative Specification - SMIS
In mid-2002 the Storage Networking Industry Association (SNIA) launched the Storage Management Initiative (SMI) to create and develop the universal adoption of a highly functional open interface for the management of storage networks. The SMI was launched as a result of the SNIA’s adoption of the Bluefin SAN management interface specification, and the term Bluefin is no longer in use. The SMI’s goal is to deliver open storage network
management interface technology in the form of an SMI Specification (SMIS). Figure 1-13 illustrates the SMIS architectural vision.

[Figure 1-13 SMIS Architecture: users, management tools, and frameworks for storage resource, container, and data management (volume management, file systems, capacity planning, media management, backup and HSM, resource allocation, and so on) communicate through the Storage Management Interface Specification with managed physical and logical components such as disk and tape drives, robots, enclosures, HBAs, switches, volumes, clones, snapshots, zones, and media sets]

To achieve truly comprehensive management of SANs and network storage, today's management applications need to communicate with the different interfaces of multiple device vendors. Standards compliance varies by vendor. In such an environment it is hard to achieve good management with one application, especially with limited development resources. The use of so many different management protocols also slows the integration of new devices into a management scheme, as each new device must be individually tested and ratified for support. These factors cause users to prefer individual specialized management tools rather than one centralized solution.

The idea behind SMIS is to standardize the management interfaces so that management applications can use them to provide cross-device management. This means that a newly introduced device can be managed immediately, because it conforms to the standards. SMIS is based on the Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) standards, and provides new features that extend CIM/WBEM technology. Figure 1-14 shows how the SMIS system architecture relates to the SNIA storage model.
[Figure 1-14 SMIS Architecture in relation to the SNIA storage model: the SMIS interface spans the file/record layer (database and file system), the block aggregation layer (host, network, and device), and the storage devices beneath]

SMIS extensions to WBEM are:

A single management transport — Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.

A complete, unified, and rigidly specified object model — SMIS defines “profiles” and “recipes” within the CIM that enable a management client to reliably use a component vendor’s implementation of the standard, such as the control of LUNs and zones in the context of a SAN.

Consistent use of durable names — As a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.

Rigorously documented client implementation considerations — SMIS provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.

An automated discovery system — SMIS-compliant products, when introduced into a SAN environment, automatically announce their presence and capabilities to other constituents.

Resource locking — SMIS-compliant management applications from multiple vendors can exist in the same SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMIS implementation are platform-independent, enabling applications to be developed for, and run on, any platform. The SNIA will also provide interoperability tests to help vendors verify that their applications and devices conform to the standard.
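SMIS selects CIM-XML over HTTP as its single management transport. As a rough sketch of what such a request looks like on the wire, the following builds a simplified CIM-XML EnumerateInstances message with Python's standard library. The message shape is abbreviated for illustration; real WBEM clients also handle HTTP headers, DTD details, and response parsing, and `CIM_FCPort` is a standard CIM class name used here only as an example target:

```python
import xml.etree.ElementTree as ET

def enumerate_instances_request(class_name, namespace=("root", "cimv2")):
    """Build a (simplified) CIM-XML EnumerateInstances request of the
    kind a WBEM client posts over HTTP to a CIM server."""
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1", PROTOCOLVERSION="1.0")
    call = ET.SubElement(ET.SubElement(msg, "SIMPLEREQ"),
                         "IMETHODCALL", NAME="EnumerateInstances")
    # Target namespace, e.g. root/cimv2.
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace:
        ET.SubElement(path, "NAMESPACE", NAME=part)
    # The CIM class whose instances are requested.
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

request = enumerate_instances_request("CIM_FCPort")
```

Because every SMIS-conformant device answers the same style of request, a management client written once against this transport can enumerate ports, volumes, or zones from any vendor's implementation.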
CIM/WBEM technology uses a powerful human- and machine-readable language called the Managed Object Format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.

SMIS object models are extensible, enabling easy addition of new devices and functionality to the model, and allowing vendor-unique extensions for added-value functionality. Figure 1-15 shows the components of the SMIS/CIM/WBEM model.

[Figure 1-15 CIM/WBEM management model: a platform-independent, distributed, object-oriented CIM/WBEM interface with automated discovery, security, and locking connects management applications and their integration infrastructure to standard object models (one MOF per device type, such as tape library, switch, and array) plus vendor-unique extensions]

Because these standards are still evolving, we cannot expect all devices to support the native CIM interface; for this reason SMIS introduces CIM agents and CIM object managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMIS. An agent serves one device, and an object manager serves a set of devices. This type of operation, also called the proxy model, is shown in Figure 1-16.
[Figure 1-16 CIM Agent and CIM Object Manager (proxy model for legacy devices): a client locates agents and object managers through a directory server, then issues CIM operations in CIM-XML over HTTP; the agent or object manager translates these to the proprietary interface of the device or subsystem behind it, while in the embedded model the provider runs in the device itself]

The CIM Agent or Object Manager translates a proprietary management interface to the CIM interface. An example of a CIM Agent is the IBM CIM agent for the IBM TotalStorage® Enterprise Storage Server®.

When widely adopted, SMIS will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to “push” their unique interface functionality to applications developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage-networking technology faster and build larger, more powerful networks. For more information on SMIS/CIM/WBEM, see the SNIA Web site: http://www.snia.org

Enhancements to inband management
The FC-GS-3 standard already registers platforms with the Fabric Configuration Server. With the FC-GS-4 standard on its way to formal ratification by the ANSI T11 committee, registration would provide more information about the platform, be it host or storage subsystem. Here are two examples of how this might work: Hosts would now have information such as the LUNs assigned to them, and the LUN location in the storage frame. The Fabric Device Management Interface (FDMI) in FC-GS-4 would now register end points in the Fabric Configuration Server.
This will provide central registration for all device attributes, status, and statistics. Management applications need to query only the fabric
management services to build a fabric configuration. This reduces the need for management agents on all hosts, and allows for managing end points. The role of the host-based agent is still important: agents are still required to provide logical device-centric and host-centric views of host-to-device connectivity.

Another capability expected with the FC-GS-4 standard is a common zone control mechanism that allows setting and managing zones across multiple switch vendors. This will improve security and administrator productivity. The proposed FC-GS-4 standard has a provision for querying end points, attributes, and statistics via Extended Link Service (ELS) commands. This includes the ability to retrieve performance and error counters. This information can be used to identify ports with high numbers of transmit or receive errors, and to initiate fault identification processes. Access to performance counters allows analysis of traffic patterns, indications of bottlenecks, and capacity planning of SAN networks. Today with IBM Tivoli SAN Manager, this functionality can already be provided using NetView reporting capabilities. See Chapter 6, “NetView Data Collection, reporting, and SmartSets” on page 207 for more information.

Enhancements to out-of-band management
The roadmap for inband management of fabric-attached devices via ANSI T11 standards is relatively well charted. Today, no formally accepted standard exists for managing and controlling storage subsystems. The SNIA, along with many storage vendors, is promoting the use of CIM and WBEM technology to discover and manage storage subsystems. At present, this management would occur via outband management paths. These standards-based approaches will be directly translatable, and often directly re-usable, on other storage networks such as iSCSI and InfiniBand, and other devices such as NAS servers.

1.2.10 Summary
Here we summarize the main considerations in SAN management (see Figure 1-17).
[Figure 1-17 SAN management summary: business transactions are growing; more companies are implementing SANs, and SANs are getting bigger; traditional manual methods of managing storage no longer work; new tools are needed, based on continually evolving standards; the new tools will reduce the costs of discovering and presenting topology and of continuous real-time monitoring and fault identification, helping to keep storage available all the time for revenue-generating activities]
  • 55. In the next chapter we introduce Tivoli’s SAN management application, IBM Tivoli SAN Manager. This chapter presents an overview of its architecture, components, and usage. In working on this redbook, the team built a lab environment to test certain configurations. We will present this architecture, identify the configurations and functions we tested, and summarize our findings. Subsequent chapters will go into detailed explanations about deployment considerations, availability issues, installation and setup and operations, and so on.26 IBM Tivoli Storage Area Network Manager: A Practical Introduction
2 Chapter 2. Introduction to IBM Tivoli Storage Area Network Manager

In this chapter we introduce and position IBM Tivoli Storage Area Network Manager (IBM Tivoli SAN Manager), including existing and new Version 1.2 functionality, architecture, and components. Tivoli SAN Manager complies with industry standards for SAN storage and management. Tivoli SAN Manager:

Manages fabric devices (switches) through outband management.
Discovers many details about a monitored server and its local storage through an IBM Tivoli SAN Manager Agent loaded onto a SAN-attached host (Managed Host).
Monitors the network and collects events and traps.
Launches vendor-provided SAN element management applications from the IBM Tivoli SAN Manager Console.
Discovers and manages iSCSI devices.
Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor).
No longer utilizes IBM MQSeries®.
Supports running the Manager on AIX.
Now runs on WebSphere® Express (smaller footprint and fewer resources).
Provides a quick launch feature into IBM Tivoli Storage Resource Manager.
Is available as a free Bonus Pack for limited fibre channel port management (up to 64 ports).
Provides integration into IBM Tivoli Enterprise Data Warehouse.

© Copyright IBM Corp. 2002, 2003. All rights reserved. 27
  • 57. 2.1 Highlights: What’s new in Version 1.2 In this section we summarize new functions, supported platforms, and features of IBM Tivoli SAN Manager Version 1.2. For an introduction to the overall product, see 2.2, “IBM Tivoli SAN Manager overview” on page 29. New Functions and Features for Version 1.2 Discovery of iSCSI Error Detection and Fault Isolation (EDFI) Removal of IBM MQSeries SAN Manager available for AIX New AIX and Linux managed hosts Running embedded WebSphere Express Quick Launch for IBM Tivoli Storage Resource Manager Integration into IBM Tivoli Enterprise Data Warehouse Figure 2-1 IBM Tivoli SAN Manager V1.2 — New functions and features2.1.1 Discovery of iSCSI IBM Tivoli SAN Manager now supports the discovery and management of internet SCSI (iSCSI) devices. The iSCSI discovery is performed independently from the discovery done by IBM Tivoli SAN Manager.2.1.2 Event Detection and Fault Isolation (ED/FI - SAN Error Predictor) Error Detection/Fault Isolation (ED/FI - SAN Error Predictor) is a new feature that performs problem determination on Fibre Channel optical links. ED/FI performs predictive failure analysis and fault isolation that allows users to identify and take appropriate action for components that may be failing.2.1.3 IBM Tivoli Enterprise Data Warehouse (TEDW) IBM Tivoli SAN Manager Version 1.2 support for IBM Tivoli Enterprise Data Warehouse (TEDW) Version 1.1 will provide a central repository of historical data for use by Tivoli Service Level Advisor. Tivoli SAN Manager will use the Extract, Transform Load language (ETL1) to pull data from the IBM Tivoli SAN Manager database and write it to the TEDW. In its first release, the TEDW support will extract switch and port status information only. For more information on TEDW, see Chapter 14, “Integration with Tivoli Enterprise Data Warehouse” on page 387.2.1.4 IBM Tivoli SAN Manager on AIX IBM Tivoli SAN Manager now supports running the Manager component on AIX 5.1. 
This support does not include Tivoli NetView support on UNIX, therefore the NetView console must still be run on Windows 2000 or Windows XP.2.1.5 Embedded WebSphere IBM Tivoli SAN Manager now includes WebSphere Express embedded. IBM Tivoli SAN Manager no longer requires a separate WebSphere installation.28 IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 58. 2.1.6 Operating system support Managed host on AIX 5.2 Managed host on Linux Red Hat Advanced Server version 2.1 Managed host on SuSE Linux Enterprise Server 7.02.1.7 Other changes Dynamic IP addresses (DHCP) instead of static IP addresses are now supported for managed hosts and remote consoles. Provides additional event classes for IBM Tivoli Enterprise Console®. Discovery of Cisco MDS 9000 Series switch. Removal of IBM MQSeries. IBM Tivoli NetView has been upgraded to v7.1.3 JRE has been updated to JRE 1.3.1 Silent install option is now available.2.2 IBM Tivoli SAN Manager overview In this section we present the product components, supported platforms, and a high level view of the major functions.2.2.1 Business purpose of IBM Tivoli SAN Manager The primary business purpose of IBM Tivoli SAN Manager is to help the storage administrator display and monitor their storage network resources — to increase data availability for applications so the company can either be more efficient, or maximize the opportunity to produce revenue. IBM Tivoli SAN Manager helps the storage administrator: Prevent faults in the SAN infrastructure through reporting and proactive maintenance. Identify and resolve problems in the storage infrastructure quickly, when a problem occurs. Provide fault isolation of SAN links. In the next several sections of this chapter we identify the components of IBM Tivoli SAN Manager, and discuss some of their uses. 
We discuss prevention of problems through predictive reporting and proactive maintenance, and show how to identify a fault quickly.

2.2.2 Components of IBM Tivoli SAN Manager
These are the major components of IBM Tivoli SAN Manager: a Manager or Server, running on a (preferably dedicated) SAN management system; Agents, running on one or more Managed Hosts; a Management Console, which runs by default on a Windows Manager (additional Remote Consoles are also available for Windows, hence a Windows system is required for NetView display with an AIX Manager); and Outband Agents, consisting of vendor-supplied MIBs for SNMP.
There are two additional components (which are provided by the customer): IBM Tivoli Enterprise Console (TEC), which is used to receive Tivoli SAN Manager generated events; once forwarded to TEC, these can be consolidated with events from other applications and acted on according to enterprise policy. IBM Tivoli Enterprise Data Warehouse (TEDW), which is used to collect and analyze data gathered by IBM Tivoli SAN Manager. These components are shown in Figure 2-2.

[Figure 2-2 IBM Tivoli SAN Manager operating environment: the Tivoli SAN Manager and remote console (NetView) systems connect over TCP/IP to agents on SAN-attached hosts, to two switch fabrics with storage, tape, and a SAN Data Gateway, and to the Tivoli Data Warehouse]

The Tivoli SAN Manager Web site, which includes the most up-to-date list of supported manager and agent operating systems, fabric components, and HBAs (Host Bus Adapters), is at http://www-3.ibm.com/software/tivoli/products/storage-san-mgr/

IBM Tivoli SAN Manager Server
The manager system can be a Windows 2000 or AIX V5.1 system, with the following components: IBM Tivoli SAN Manager code, which controls the SAN management function; DB2®, used as a repository for topology and event records; IBM Tivoli NetView, which presents the topology and event information graphically; a Java™ Virtual Machine (use of a JVM supports portability and completeness); and an SNMP Manager, which communicates with SNMP Agents on outband-monitored devices.

Note: WebSphere Express manages the servlets used by IBM Tivoli SAN Manager for various functions. It is embedded and is not a standalone application.
Remote Console
One or more Remote Consoles can be installed to provide a GUI for Tivoli SAN Manager. The Server system automatically includes a console display. Remote Consoles must be Windows 2000 or Windows XP systems with the following components: NetView, which presents the information graphically, and the Remote Console code, which allows an administrator to monitor IBM Tivoli SAN Manager from a remote location or locations.

Agents or Managed Hosts
Agents provide inband management capability and are currently available on the following platforms: Microsoft Windows NT® and 2000; IBM AIX v5.1, v5.2; Sun Solaris v2.6 or 2.8; Linux SuSE Enterprise Server 7.0; Linux Red Hat Advanced Server 2.1. Agents consist of the following components: the Agent itself, which collects information from various sources and forwards it to the Manager, and a Java Virtual Machine (use of a JVM supports portability and completeness).

SNMP Agent for managed switches
SAN switches can use SNMP to act as outband Agents. Tivoli SAN Manager can use SNMP Management Information Base (MIB) queries to discover information about these switches.

2.2.3 Supported devices for Tivoli SAN Manager
The list of supported devices, including HBAs, disk systems, tape systems, SAN switches, and gateways is provided at: http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

Always check here first during planning to see if there are any special considerations for your environment.

2.3 Major functions of IBM Tivoli SAN Manager
IBM Tivoli SAN Manager performs the functions shown in Figure 2-3. These functions are explored in the rest of this chapter.
– Discover SAN components and devices
– Display a topology map of the SAN in physical and logical views
– Provide real-time and historical reports (through NetView)
– Perform error detection and fault isolation (ED/FI - SAN Error Predictor)
– Discover iSCSI devices
– Launch vendor-provided applications to manage components

Figure 2-3 IBM Tivoli SAN Manager functions
These functions are distributed across the Manager and the Agent as shown in Figure 2-4.

Tivoli SAN Manager Server:
– Performs initial discovery of the environment
– Gathers and correlates data from agents on managed hosts
– Gathers data from SNMP (outband) agents
– Graphically displays SAN topology and attributes
– Provides customized monitoring and reporting through NetView
– Reacts to operational events by changing its display
– (Optionally) forwards events to Tivoli Enterprise Console or SNMP managers

Tivoli SAN Manager Agent:
– Gathers information about SANs by querying switches and devices for attribute and topology information, for host-level storage such as filesystems and LUNs, and for event and other information detected by HBAs
– Forwards topology and event information to the Manager

Figure 2-4 Functions of IBM Tivoli SAN Manager and Agents

2.3.1 Discover SAN components and devices
IBM Tivoli SAN Manager uses two methods to discover information about the SAN — outband discovery and inband discovery. These discovery paths are shown in Figure 2-5.
In outband discovery, all communication occurs over the IP network:
– IBM Tivoli SAN Manager requests information over the IP network from a switch, using SNMP queries on the device.
– The device returns the information to IBM Tivoli SAN Manager, also over IP.
In inband discovery, both the IP and Fibre Channel networks are used:
– IBM Tivoli SAN Manager requests information (via IP) from an IBM Tivoli SAN Manager Agent installed on a Managed Host.
– That Agent requests information over the Fibre Channel network from fabric elements and endpoints in the Fibre Channel network.
– The Agent returns the information to IBM Tivoli SAN Manager over IP.
The Manager collects, correlates, and displays information from all devices in the storage network, using both IP and Fibre Channel.
If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.
Figure 2-5 IBM Tivoli SAN Manager — inband and outband discovery paths

Following are definitions of some important terms:

Outband discovery is the process of discovering SAN information, including topology and device data, without using the Fibre Channel data paths. Outband discovery uses SNMP queries, invoked over the IP network. Outband management and discovery is normally used to manage devices such as switches and hubs which support SNMP.

Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. Inband discovery uses the following general process:
– The Agent sends commands through its Host Bus Adapters (HBA) and the Fibre Channel network to gather information about the switches.
– The switch returns the information through the Fibre Channel network and the HBA to the Agent.
– The Agent queries the endpoint devices using RNID and SCSI protocols.
– The Agent returns the information to the Manager over the IP network.
– The Manager then responds to the new information by updating the database and redrawing the topology map if necessary.

iSCSI discovery: Internet SCSI is an Internet Protocol (IP)-based storage networking standard for linking data storage, developed by the Internet Engineering Task Force (IETF). iSCSI can be used to transmit data over LANs and WANs.
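Both discovery paths ultimately feed one topology database. As a rough illustration of the correlation step (this is not Tivoli code; the WWNs and record fields are invented for the example), a manager might merge per-device records from the two paths, keyed by WWN:

```python
# Hypothetical sketch of correlating discovery data from the two paths.
# Records are keyed by WWN; attributes reported inband (port counts,
# host details) enrich the basic status seen outband via SNMP.

def merge_discoveries(outband, inband):
    """Merge per-WWN records; later records add to or refine earlier ones."""
    topology = {}
    for record in outband + inband:
        entry = topology.setdefault(record["wwn"], {})
        entry.update(record)
    return topology

outband = [{"wwn": "10:00:00:05:1e:34:56:78", "type": "switch", "status": "up"}]
inband = [
    {"wwn": "10:00:00:05:1e:34:56:78", "type": "switch", "ports": 16},
    {"wwn": "21:00:00:e0:8b:01:02:03", "type": "host", "os": "AIX"},
]
topology = merge_discoveries(outband, inband)
```

If the Fibre Channel path drops out, the SNMP-derived records remain, which is one way to picture why monitoring can continue over IP alone.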
2.3.2 Deciding how many Agents will be needed
The storage network may have dozens of switches, 10-20 storage frames, half a dozen SANs, and hundreds of servers. How many Agents should the administrator load? The answer to this question depends on what you want to accomplish. Four different levels of monitoring are possible, as summarized in Figure 2-6. They are discussed in detail in 3.8, “Deployment scenarios” on page 76. We are using the terms inband and outband monitoring as defined in 2.3.1, “Discover SAN components and devices” on page 32.

Levels of SAN Management:
– Basic fabric management (outband only)
– Manual identity of endpoints (outband only)
– Well-placed Agents (inband only)
– Agents everywhere (inband + outband)

Figure 2-6 Levels of monitoring

Outband monitoring only
No IBM Tivoli SAN Manager Agents are installed on hosts in this scenario. You are managing the switches in your SAN, that is, monitoring the fabric. IBM Tivoli SAN Manager displays the WWN of an unmanaged host on the topology map, but the device type is unknown. If anything goes wrong with a switch, a link, or one of the unidentified objects, then the event triggers redrawing the topology map, with the broken component identified in red on the map. No information on storage, LUNs, or filesystems is available (no logical views). Optionally, you can enhance the SAN display by manually identifying SAN elements via NetView.
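The "well-placed Agents" level summarized above amounts to a small covering problem: choose the fewest hosts so that every zone contains at least one Agent. The following is a minimal planning sketch, not product code, and the zone and host names are invented for illustration:

```python
# Greedy sketch: repeatedly pick the host that covers the most zones
# that still lack an Agent, until every zone is covered.

def place_agents(zones):
    """zones: dict mapping zone name -> set of Agent-eligible hosts."""
    uncovered = set(zones)
    chosen = set()
    all_hosts = {h for hosts in zones.values() for h in hosts}
    while uncovered:
        best = max(all_hosts,
                   key=lambda h: sum(1 for z in uncovered if h in zones[z]))
        chosen.add(best)
        uncovered -= {z for z in uncovered if best in zones[z]}
    return chosen

zones = {
    "FASTT": {"senegal", "diomede"},
    "TSM": {"diomede", "crete"},
}
agents = place_agents(zones)  # one host in both zones covers everything
```

Doubling the chosen hosts per zone would give the two-Agents-per-zone redundancy suggested in the text.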
Inband monitoring
In this scenario, at least some hosts have IBM Tivoli SAN Manager Agents installed. How many you load depends on platform support, functionality required, as well as performance implications. More information about this is given in 5.7.9, “Well placed agent strategy” on page 202. You should load at least one Host per zone (or two hosts for redundancy) for the complete topology display. Use this approach (the well-placed Agent) if you want to manage your switches, and to know the name and identity of your RNID-capable hosts. You will display storage-related information (including logical views) only for the hosts with Agents installed.
If you need the logical or storage-centric views for all eligible hosts (with platform support), then Agents should be installed on all of these. Use this approach where platform support for Agents is provided AND you want to discover and display storage-related information for as many hosts as possible.

Both inband and outband monitoring
When you use a combination of inband and outband Agents, you monitor all devices in the fabric across all zones. You also get storage-centric views for hosts with Agents. This approach therefore provides the highest level of information and monitoring for the SAN. Another benefit of this approach is that if, for some reason, the Fibre Channel network becomes unavailable, you can still monitor using the IP path. These issues are examined in more detail in 3.8, “Deployment scenarios” on page 76 as well as 5.7, “Practical cases” on page 182.

2.3.3 How is SAN topology information displayed?
IBM Tivoli SAN Manager uses IBM Tivoli NetView to display topology information and views, and for monitoring devices. Tivoli NetView discovers, displays, and manages traditional TCP/IP networks. IBM Tivoli SAN Manager extends this function to the Storage Area Network by providing a new SAN pull-down menu item, as well as a SAN icon on the NetView top-level root map.
These are shown in Figure 2-7. Tivoli NetView, as customized with IBM Tivoli SAN Manager, provides a single integrated platform to manage both traditional IP networks and SANs.

Figure 2-7 Tivoli SAN Manager — Root menu
Ways to display topology
IBM Tivoli SAN Manager presents topology displays in two different ways — the icon display and the explorer display. Figure 2-7 is an example of an icon display. The explorer display (so named because it is similar to the Windows Explorer file manager) looks like Figure 2-8.

Figure 2-8 Tivoli SAN Manager — explorer display

2.3.4 How is iSCSI topology information displayed?
IBM Tivoli SAN Manager displays iSCSI devices within a NetView SmartSet. Once IP discovery and the appropriate iSCSI Operation is selected, the iSCSI devices are discovered and a SmartSet is created. An example of an iSCSI SmartSet is displayed in Figure 2-9.

Figure 2-9 iSCSI SmartSet

2.4 SAN management functions
IBM Tivoli SAN Manager has three primary areas as described in 2.2.1, “Business purpose of IBM Tivoli SAN Manager” on page 29.
– Prevent faults in the SAN infrastructure through reporting and proactive maintenance.
– Identify and resolve problems in the storage infrastructure quickly, when a problem occurs.
– Provide fault isolation of SAN links.
IBM Tivoli SAN Manager achieves these purposes by providing the following functions, as outlined in Figure 2-3 on page 31 and in Figure 2-4 on page 32:
– Discover SAN components and devices.
– Display a topology map of the various fabrics and SANs, giving both physical and logical views.
– Highlight faults.
– Provide report and monitoring capability for SNMP-capable devices.
– Launch vendor-provided applications to manage individual components.
– Display ED/FI adornments on the topology map for fault isolation and problem resolution.
– Provide reporting into Tivoli Enterprise Data Warehouse.
We will give a brief overview of these IBM Tivoli SAN Manager functions, to illustrate how they achieve the business purpose of the tool. Chapter 5, “Topology management” on page 149, gives a more detailed exploration of the product capabilities.

2.4.1 Discover and display SAN components and devices
In its recommended installation, Tivoli SAN Manager uses both inband and outband methods to discover and map the SAN topology. You can go from high-level views down to more specific views, focusing on different parts of the SAN. When you click the Storage Area Network icon shown in Figure 2-7 on page 35, you will see the following submenu (Figure 2-10).

Figure 2-10 Tivoli SAN Manager — SAN submap

The SAN icon on the right (highlighted) takes you down the physical topology displays, while the other two icons (Device Centric View and Host Centric View) provide access to the logical topology displays.

Physical topology display
A typical SAN physical topology display is shown in Figure 2-11.
Figure 2-11 NetView physical topology display

We reached this display by drilling down from the map shown in Figure 2-10. In this case, one SAN switch is shown with the hosts and devices connected to it. All icons are colored green, indicating they are active and available. Similarly, the connections are black. NetView uses different colors for devices and connections to indicate their status, as explained in 5.1.8, “Object status” on page 155.
If something happens on the SAN (for example, a port on the switch fails), then the topology will be automatically updated to reflect that event, as shown in Figure 2-12. In this case an event is triggered that the port has failed; however, the Server still communicates with the host Agent attached to that port. Therefore the connection line between the switch and host turns red, while the host system remains green.

Figure 2-12 Map showing host connection lost
Zone view
Tivoli SAN Manager can also display switch zones, where supported by the switch API. Figure 2-13 shows two zones configured, FASTT and TSM.

Figure 2-13 Zone view submap
If you click an individual zone, the members of that zone will be displayed. This is shown in Figure 2-14. More information on the Zone View for Tivoli SAN Manager is given in “Zone view” on page 165.

Figure 2-14 Zone members

Logical topology displays
The logical displays provided by Tivoli SAN Manager are the Device Centric View and the Host Centric View. Logical views, unlike the previously shown physical views, do not display the connection information between hosts and SAN fabric devices.
Device Centric View
The Device Centric View displays all the storage devices connected in the SAN, with their relationship to the hosts. The initial Device Centric View map is shown in Figure 2-15. It shows two disk systems, each with their serial numbers. The specific icons displayed depend on how your disk systems are supported by Tivoli SAN Manager.

Figure 2-15 Device Centric View

You can drill down to individual devices using the icon display, or display all the information in the explorer view. This is usually a more convenient way to display this information, as it is more complete. If you select the Explorer icon, you will see the map shown in Figure 2-16. You can see that the LUNs for both storage systems are displayed on the left hand panel. For each LUN, you can drill down to find out to which host and Operating System that LUN is assigned. In this example, disk system IT14859668 has five LUNs, and each LUN is associated with one or two named hosts. For example, the first LUN is associated with the hosts SENEGAL and DIOMEDE, running Windows 2000. You can drill down one step further from the Operating System to display the filesystem installed on the LUN. There is one LUN discovered in the other disk system, which is used for the filesystem /mssfs on the AIX system CRETE.
Figure 2-16 Device Centric View — explorer

Host Centric View
The Host Centric View displays all the Managed Hosts, and their associated local and SAN-attached storage devices. The explorer Host Centric View is shown in Figure 2-17. You can see that each filesystem associated with each Managed Host or Agent is displayed on the right hand side. The left hand pane shows all the Managed Hosts.

Figure 2-17 Host Centric View
You can drill down on the Host filesystem entries to also show the logical volume (and LUN, if fibre-attached) associated with the filesystem, as shown in Figure 2-18.

Figure 2-18 Host Centric View — logical volumes and LUN

Summary display
You can also see a summary or navigation display which holds a history of all the maps which you have navigated in IBM Tivoli SAN Manager. In Figure 2-19, we have opened up all three views of IBM Tivoli SAN Manager, and therefore can see a very comprehensive display. This is because we have drilled down a Device Centric View, drilled down a Host Centric View, and finally navigated the physical topology, and then opened the Navigation Tree.
Figure 2-19 Navigation tree for Tivoli SAN Manager

Figure 2-19 shows:
(1) SAN View (third row, left, see #1)
– Topology View
– Zone View
(2) Device Centric View (third row, middle, see #2)
(3) Host Centric View (third row, right, see #3)

Object Properties
If you click any device to select it (from any map), then right-click and select Object Properties from the pop-up menu, this will bring up the specific properties of the object. In Figure 2-20, we selected a switch and displayed the Properties window, which has seven different tabs. The Events tab is shown, listing events which have been received for this switch.

2.4.2 Log events
Tivoli SAN Manager collects events received from devices. Figure 2-20 shows an example of events logged by one of the switches.
Figure 2-20 Switch events

2.4.3 Highlight faults
Events about SAN components are communicated to IBM Tivoli SAN Manager by Agents. IBM Tivoli SAN Manager logs the events in a log that looks like Figure 2-20 above. IBM Tivoli SAN Manager then evaluates the event, and decides whether or not the event is a fault. If the event is a fault, IBM Tivoli SAN Manager indicates the change in status by changing the color of the object in the topology map. In 5.7, “Practical cases” on page 182, we show more examples of triggering topology changes. Here is just one example, where we simulate the loss of the switch by powering it off. Other common faults include a Fibre Channel cable breaking, a host crash, and an unavailable storage frame.
Figure 2-21 shows the topology display once it is refreshed. The healthy devices are in green, and the unhealthy devices are in red. We have also circled the devices that turned red for clarity.
– Device #5 — the switch — is obvious. The power went out, so the switch no longer responds. All the links (connections) from the switch to the devices also went red.
– Devices 1, 2, 3, and 4 went red. These devices have no Agent on them, are known to IBM Tivoli SAN Manager only through the switch nameserver, and are unavailable now that the switch is down. IBM Tivoli SAN Manager’s map displays this accurately.
– The hosts with Agents on them (CRETE, TUNGSTEN, SENEGAL, and DIOMEDE) remain green, as they still communicate with the Server via TCP/IP. Only their connection link to the switch turns red.
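The status logic in this example can be summarized in a few lines. The sketch below is our simplification of the behavior described above, not the product's actual rules, and the device records are invented:

```python
# Simplified status evaluation after a switch failure: devices known only
# through the failed switch turn red, while hosts with Agents stay green
# because the Manager still reaches them over TCP/IP.

def evaluate_status(devices, failed_switch):
    status = {}
    for name, dev in devices.items():
        if name == failed_switch:
            status[name] = "red"
        elif dev["attached_to"] == failed_switch and not dev["has_agent"]:
            status[name] = "red"    # known only via the switch nameserver
        else:
            status[name] = "green"  # still reachable over IP
    return status

devices = {
    "switch":  {"attached_to": None, "has_agent": False},
    "CRETE":   {"attached_to": "switch", "has_agent": True},
    "device1": {"attached_to": "switch", "has_agent": False},
}
status = evaluate_status(devices, "switch")
```

Only the connection links of the green hosts would be drawn red, which matches the behavior the example describes.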
Figure 2-21 Map showing effects of switch losing power

When the switch is fixed, as it powers up it will send an event back to IBM Tivoli SAN Manager, which will re-query the switch (running both the topology scanner and the attribute scanner). The topology will be refreshed to reflect the switch being back online.

2.4.4 Provide various reports
The primary business purpose for IBM Tivoli SAN Manager is to keep the storage infrastructure running to support revenue-generating activities. We have already seen how the topology display will automatically update to reflect changes in device availability, so that the network administrator can quickly respond. Another way to improve availability is to use monitoring and reports to anticipate devices that are beginning to fail. Not all failures can be predicted — however, many devices may fail gradually over time, so reporting a history of problems can help anticipate this. For example, if you see a lot of transmission errors occurring over a period of time, you might anticipate a component failure, and schedule maintenance testing or preemptive replacement to minimize impact on revenue-generating or other critical applications. Reporting is provided by NetView and includes historical and real-time reporting, on an individual device or on a defined group of devices. Reporting capabilities are discussed in detail in Chapter 6, “NetView Data Collection, reporting, and SmartSets” on page 207.
With NetView you have a very flexible capability to build your own reports according to your specific needs. The reports we are interested in for IBM Tivoli SAN Manager are reports against the objects in the MIB provided by the switch vendor. In our lab, we used an IBM 2109 16-port switch, so we used the Brocade MIB for the Brocade Silkworm 2800. The data elements in the MIB can report on status (device working, not working) and performance (X frames were transmitted over this switch port in Y seconds).

Historical reporting
With NetView you can display historical reports based on data collected. Figure 2-22 shows a report of data collected over 8 ports in a two minute interval. You can set up the data collection to look for thresholds on various MIB values, and send a trap when defined values are reached.

Figure 2-22 Graph of # Frames Transmitted over 8 ports in a 2 minute interval

Combining the MIB objects with the canned and customized reporting from NetView provides the storage administrator with the tools needed to help keep the SAN running all the time.

Real-time reporting
NetView can also track MIB values in real-time. Figure 2-23 shows real-time monitoring of traffic on switch ports. The graph shows the number of frames transmitted from a specific port on a particular switch over a specified time interval. You can set the polling interval to control how often the graph will update.
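Since MIB objects such as frames-transmitted are cumulative counters, threshold checks of the kind described above work on deltas between polls. A hedged sketch of that arithmetic (NetView performs this internally; the port numbers, poll values, interval, and limit here are invented):

```python
# Compute per-port frame rates from two successive polls of a cumulative
# MIB counter, and flag ports exceeding a rate threshold — roughly what a
# data collection rule does before sending a trap.

def check_thresholds(prev, curr, interval_s, limit_per_s):
    """prev/curr: dict port -> cumulative frames-transmitted counter."""
    alerts = []
    for port, count in sorted(curr.items()):
        rate = (count - prev.get(port, 0)) / interval_s
        if rate > limit_per_s:
            alerts.append((port, rate))
    return alerts

prev = {1: 1_000_000, 2: 500_000}
curr = {1: 1_240_000, 2: 505_000}
alerts = check_thresholds(prev, curr, interval_s=120, limit_per_s=1000)
# port 1 moved 240,000 frames in 120 s = 2,000 frames/s -> over threshold
```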
Figure 2-23 Number of Frames Transmitted Over Time

You can also create graphs from multiple devices using NetView SmartSets.

2.4.5 Launch vendor management applications
Tivoli SAN Manager provides a launch platform for many individual vendor management applications. In some cases these are automatically discovered, and in other cases they can be manually configured into NetView. These applications might be for a switch, hub, gateway, or storage frame. Figure 2-24 shows an example of launching the automatically discovered switch management application.
Figure 2-24 Vendor application launch

IBM Tivoli SAN Manager provides three methods for locating such applications:
– Native support: For some devices, IBM Tivoli SAN Manager will automatically discover and launch the device-related tool. SAN Manager has an internal set of rules (in XML format) by which it identifies the devices whose tool it can launch.
– Web interface support: Some devices are not discovered automatically, but have a Web interface. IBM Tivoli SAN Manager can be configured with the URL, so that it can subsequently launch the Web interface.
– Non-Web interface support: Other applications have no Web interface. IBM Tivoli SAN Manager offers you the ability to configure the toolbar menu to launch any locally-installed application from the IBM Tivoli SAN Manager console. Note that these applications must be locally installed on the Tivoli SAN Manager Server.
These options are presented in 5.5, “Application launch” on page 174.

2.4.6 Displays ED/FI events
IBM Tivoli SAN Manager Version 1.2 uses ED/FI to predict errors on the optical links that are used to connect SAN components. Coverage includes HBA-switch, switch-switch, and switch-storage connections. The SAN topology is updated to reflect the suspected components.
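Conceptually, ED/FI watches link-level error counts and "adorns" components once errors accumulate past a policy threshold. The following is a loose illustration only — the threshold, link names, and aggregation rule are our assumptions, not the actual ED/FI algorithm:

```python
# Aggregate error samples per link within one observation window and
# return the links whose totals cross the threshold; in the real product
# the corresponding components would receive adornments on the map.

from collections import Counter

def suspect_links(error_samples, threshold):
    """error_samples: list of (link_id, error_count) within one window."""
    totals = Counter()
    for link, errors in error_samples:
        totals[link] += errors
    return {link for link, total in totals.items() if total >= threshold}

samples = [
    ("switch:port3<->hostA", 4),
    ("switch:port3<->hostA", 3),
    ("switch:port7<->diskB", 1),
]
suspects = suspect_links(samples, threshold=5)
```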
Figure 2-25 shows an example of an ED/FI “adornment” on the switch ELM17A110. More information on ED/FI is given in Chapter 9, “ED/FI - SAN Error Predictor” on page 267.

Figure 2-25 Adornment shown on fibre channel switch

2.4.7 Tivoli Enterprise Data Warehouse (TEDW)
The Tivoli Enterprise Data Warehouse collects, organizes, and makes data available for the purpose of analysis, in order to give management the ability to access and analyze information about its business. IBM Tivoli SAN Manager will provide ETL (Extract, Translate, and Load) code that allows TEDW to pull data from the IBM Tivoli SAN Manager database. In its first release, the TEDW support will extract switch and port status information. Refer to Chapter 14, “Integration with Tivoli Enterprise Data Warehouse” on page 387 for more information.

2.5 Summary
In this chapter, we introduced IBM Tivoli SAN Manager, whose primary business purpose is to keep the storage infrastructure running to assist revenue-generating activities. IBM Tivoli SAN Manager discovers the SAN infrastructure, and monitors the status of all the discovered components. Furthermore, it also discovers iSCSI devices and provides the functionality to detect and report on SAN interconnect failures using ED/FI. Through Tivoli NetView, the administrator can provide reports on faults on components (either individually or in groups, or “SmartSets”, of components).
Part 2 Design considerations

In Part 2 we discuss the deployment architectures (including Server, Agents, Remote Console, and inband/outband discovery) for IBM Tivoli SAN Manager.

© Copyright IBM Corp. 2002, 2003. All rights reserved.
Chapter 3. Deployment architecture

In this chapter we provide an overview of Fibre Channel standards, Fibre Channel topologies, and IBM Tivoli Storage Area Network Manager (IBM Tivoli SAN Manager), including a component description, as well as Managed Host placement. We cover these topics:
– Fibre Channel standards
– Hardware
– SAN topologies
  – Point-to-point
  – Arbitrated loop
  – Switched
– Management
  – Inband
  – Outband
– Component description and placement
– Deployment considerations
  – Manager
  – Agents
  – Deployment scenarios
– High availability
3.1 Overview
In this chapter, we start out by describing the standards and interoperability on which IBM Tivoli SAN Manager is built (Figure 3-1).

Figure 3-1 Deployment overview

We discuss the challenges of managing heterogeneous SANs, and how IBM Tivoli SAN Manager manages them. We also discuss the different Fibre Channel topologies and SAN fabric components, followed by deployment scenarios. We also discuss SAN management as it relates to IBM Tivoli SAN Manager, and some scanner details.

3.2 Fibre Channel standards
Standards are desirable in any IT arena, as they provide a means to ensure interoperability and coexistence between equipment manufactured by different vendors. This benefits customers by increasing their choices.

3.2.1 Interoperability
Interoperability means the ways in which the various SAN components interact with each other. Many vendors and other organizations have their own labs to perform interoperability testing to ensure adherence to standards. Before going ahead with any purchase decision for a SAN design, it is recommended that you check with the vendor of your SAN components about any testing and certification they have in place. This should be an important input to the decision-making process. Where there are multiple vendors involved, this becomes very important — for example, if a storage vendor certifies a particular level of HBA firmware, while a server vendor certifies and supports another level. You need to resolve any incompatibilities to avoid ending up with an unsupported configuration.
3.2.2 Standards
The SAN component vendors, especially switch makers, are trying to comply with the standards which will allow them to operate together in the SAN environment. The current standard which gives the opportunity to have different components in the same fabric is the FC-SW2 standard from the Technical Committee T11. See this Web site for details:
http://www.t11.org
This standard defines FSPF (Fabric Shortest Path First), zoning exchange, and ISL (Inter-Switch Link) communication. Not all vendors may support the entire standard. Future standards (for example, FC-SW3, currently under development) will also bring with them functions which will allow management information to be exchanged from component to component, thus giving the option to manage different vendors’ components with tools from one vendor.
IBM Tivoli SAN Manager employs iSCSI management. The iSCSI protocol is a proposed industry standard that allows SCSI block I/O protocols to be sent over the TCP/IP protocol. See this Web site for additional details:
http://www.ietf.org
We have already mentioned in Chapter 1, “Introduction to Storage Area Network management” on page 3, that SAN vendors are trying to establish support for the standards which will give them the opportunity to work together in the same SAN fabric. But this is just one view of heterogeneous support. The other view is from the platforms which will participate in the SAN as the users of the resources. So, when deploying Tivoli SAN Manager it is important that you check that the SAN components you are using are certified and tested with it. This also means that you need to verify which levels of operating systems, firmware, drivers, and vendor models are supported by Tivoli SAN Manager. We discuss this later in 3.7, “Deployment considerations” on page 70.

3.3 Hardware overview
In this section, we introduce various items that can make up a SAN (Figure 3-2).
Identifying these components is key to a successful deployment and proper functionality of the Tivoli SAN Manager installation. This includes physical building blocks and protocols.

Figure 3-2 Hardware overview

We will start off by covering items of hardware that are typically found in a SAN. The purpose of a SAN is to interconnect hosts/servers and storage. This interconnection is made possible by the components (and their subcomponents) that make up the SAN itself.
3.3.1 Host Bus Adapter
The device that acts as the interface between the fabric of a SAN and either a host or a storage device is a Host Bus Adapter (HBA). In the case of storage devices, they are often just referred to as Host Adapters. The HBA connects to the bus of the host or storage system. It has some means of connecting to the cable or fiber leading to the SAN. Some devices offer more than one Fibre Channel connection; at the time of writing, single and dual ported offerings were available. The function of the HBA is to convert the parallel electrical signals from the bus into a serial signal to pass to the SAN. Some of the more popular vendors of HBAs include QLogic, Emulex, and JNI. Many server or storage vendors also re-sell these adapters under their own brand (OEM). Figure 3-3 shows some typical HBAs.

Figure 3-3 Typical HBAs

3.3.2 Cabling
There are a number of different types of cable that can be used when designing a SAN. The type of cable and the route it will take all need consideration. The following section details various types of cable and issues related to the cable route.

Distance
The Fibre Channel cabling environment has many similarities to telecommunications or typical LAN/WAN environments. Both allow extended distances through the use of extenders or technologies such as DWDM (Dense Wavelength Division Multiplexing). Like the LAN/WAN environment, Fibre Channel offers increased flexibility and adaptability in the placement of the electronic network components, which is a significant improvement over previous data center storage solutions, such as SCSI.

Shortwave or longwave
Every data communications fiber falls into one of two categories:
– Single-mode
– Multi-mode
In most cases, it is impossible to visually distinguish between single-mode and multi-mode fiber (unless the manufacturer follows the color coding schemes specified by the Fibre
Channel physical layer working subcommittee — orange for multi-mode and yellow for single-mode), since there may not be a difference in outward appearance, only in core size. Both fiber types act as a transmission medium for light, but they operate in different ways, have different characteristics, and serve different applications.

Single-mode (SM) fiber allows for only one pathway, or mode, of light to travel within the fiber. The core size is typically 8.3 µm. Single-mode fibers are used in applications where low signal loss and high data rates are required, such as on long spans (longwave) between two system or network devices, where repeater/amplifier spacing needs to be maximized.

Multi-mode (MM) fiber allows more than one mode of light. Common MM core sizes are 50 µm and 62.5 µm. Multi-mode fiber is better suited for shorter distance applications. Where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable. In such a case, MM fiber is more economical because it can be used with inexpensive connectors and laser devices, thereby reducing the total system cost. This makes multi-mode fiber the ideal choice for short distance (shortwave) under 500m from transmitter to receiver (or the reverse).

50/125 micrometers or 62.5/125 micrometers
Optical fiber for telecommunications consists of three components:
– Core
– Cladding
– Coating
Figure 3-4 describes the characteristics of fiber optic cables.

Figure 3-4 Structure of a fiber optic cable

Core
The core is the central region of an optical fiber through which light is transmitted.
In general, the telecommunications industry uses sizes from 8.3 micrometers (µm) to 62.5 micrometers. The standard telecommunications core sizes in use today are 8.3 µm (single-mode), 50 µm (multi-mode), and 62.5 µm (multi-mode).

Cladding

The diameter of the cladding surrounding each of these cores is 125 µm. Core sizes of 85 µm and 100 µm have been used in early applications, but are not typically used today. The core and cladding are manufactured together as a single piece of silica glass with slightly different compositions, and cannot be separated from one another.

Chapter 3. Deployment architecture 59
Coating

The third section of an optical fiber is the outer protective coating. This coating is typically an ultraviolet (UV) light-cured acrylate applied during the manufacturing process to provide physical and environmental protection for the fiber. During the installation process, this coating is stripped away from the cladding to allow proper termination to an optical transmission system. The coating size can vary, but the standard sizes are 250 µm or 900 µm.

Most enterprises today use 62.5 micron core fiber due to its high proliferation in Local Area Networks (LANs). The Fibre Channel SAN standard, however, is based on 50 micron core fiber, which is required to achieve the distances specified in the ANSI Fibre Channel standards. Customers should not use 62.5 micron fiber in SAN applications; it is wise to check with any SAN component vendor whether 62.5 micron fiber is supported. Figure 3-5 shows the two fiber types.

Figure 3-5 Single mode and multi mode cables

Copper

The Fibre Channel standards also allow for cables made of copper. There are different standards available:
- 75 ohm Video Coax
- 75 ohm Mini Coax
- 150 ohm shielded twisted pair

The maximum supported speeds and distances using copper are lower than when using fiber optics.

Plenum rating

A term that is sometimes used when describing cabling is whether a particular cable is plenum rated or not. A plenum is an air filled duct, usually forming part of an air conditioning or venting system. If a cable is to be laid in a plenum, there are certain specifications which need to be met. In the event of a fire, some burning cables emit poisonous gases. If the cable is in a room, there could be a danger to people in that room.
If, on the other hand, the cable is in a duct which carries air to an entire building, there is clearly a much higher risk of endangering life.
For this reason, cable manufacturers will specify that their products are either plenum rated or not plenum rated.

Connectors

The particular connectors used to connect a fiber to a component will depend upon the receptacle into which they are being plugged. Some generalizations can be made, and it is also useful to mention some guidelines for best practices when dealing with connectors or cables.

Most, if not all, 2 Gbps devices use Small Form Factor (SFF) or Small Form Factor Pluggable (SFP) technology, and therefore use Lucent Connector (LC) connectors. Most Gigabit Interface Converters (GBICs) (see “GBICs and SFPs” on page 62) and Gigabit Link Modules (GLMs) use industry standard Subscriber Connector (SC) connectors.

SC connectors

The duplex SC connector is a low loss, push/pull fitting connector. It is easy to configure and replace. The two fibers each have their own part of the connector. The connector is keyed to ensure correct polarization when connected, that is, transmit to receive and vice-versa. See the diagram of an SC connector in Figure 3-6.

Figure 3-6 SC fibre optic cable

LC connectors

The type of connectors which plug into SFF or SFP devices are called LC connectors. Again, a duplex version is used so that the transmit and receive fibers are connected in one step. The main advantage that LC connectors have over SC connectors is their smaller form factor, which allows manufacturers of Fibre Channel components to provide more connections in the same amount of space. Figure 3-7 shows an LC connector.
Figure 3-7 LC connector

GBICs and SFPs

Gigabit Interface Converters (GBICs) are laser-based, hot-pluggable, data communications transceivers. GBICs are available in copper, and in both short wavelength and long wavelength optical versions, which provide configuration flexibility. Users can easily add a GBIC in the field to accommodate a new configuration requirement, or replace an existing device to allow for increased availability. They provide a high-speed serial interface for connecting servers, switches, and peripherals through an optical fiber cable. In SANs, they can be used for transmitting data between physical Fibre Channel ports. The optical GBICs use lasers that enable cost-effective data transmission over optical fibers at various distances (depending on the type) of up to around 100 km.

These compact, hot-pluggable, field-replaceable modules are designed to connect easily to a system card or other device through an industry-standard connector. On the media side, single-mode or multi-mode optical fiber cables, terminated with industry-standard connectors, can be used. GBICs are usually easy to configure and replace. If they are optical, they use low-loss, push-pull, optical connectors. They are mainly used in hubs, switches, directors, and gateways. A GBIC is shown in Figure 3-8.

SFP (Small Form-Factor Pluggable) modules are functionally equivalent to GBICs but use LC connectors. They are now more commonly used than GBICs.

Figure 3-8 GBIC
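The transceiver-to-connector pairings described in this section can be captured in a small lookup table. This sketch is purely illustrative; the dictionary and helper function are our own, not part of any product:

```python
# Illustrative only: maps transceiver types to the duplex connector each
# accepts, per the pairings described in this section (GBIC/GLM -> SC,
# SFF/SFP -> LC).
TRANSCEIVER_CONNECTOR = {
    "GBIC": "SC",   # Gigabit Interface Converter -> Subscriber Connector
    "GLM": "SC",    # Gigabit Link Module -> Subscriber Connector
    "SFF": "LC",    # Small Form Factor -> Lucent Connector
    "SFP": "LC",    # Small Form Factor Pluggable -> Lucent Connector
}

def connector_for(transceiver: str) -> str:
    """Return the connector type used to cable the given transceiver."""
    try:
        return TRANSCEIVER_CONNECTOR[transceiver.upper()]
    except KeyError:
        raise ValueError(f"unknown transceiver type: {transceiver}")

print(connector_for("SFP"))   # LC
print(connector_for("gbic"))  # SC
```

Such a table is handy when auditing an inventory list: given the transceivers installed in each switch, it tells you which patch cables (SC-terminated or LC-terminated) to order.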
3.4 Topologies

Fibre Channel provides three distinct interconnection topologies. This allows an enterprise to choose the topology best suited to its requirements. See Figure 3-9. The three Fibre Channel topologies are:
- Point-to-point
- Arbitrated loop
- Switched fabric

Figure 3-9 Fibre Channel topologies

3.4.1 Point-to-point

Point-to-point is the simplest Fibre Channel configuration to build, and the easiest to administer. Figure 3-10 shows a simple point-to-point configuration. If you only want to attach a single Fibre Channel storage device to a server, you could use a point-to-point connection, which would be a Fibre Channel cable running from the Host Bus Adapter (HBA) to the port on the device. Point-to-point connections are most frequently used between servers and storage devices, but may also be used for server-to-server communications.

Figure 3-10 Fibre Channel point-to-point (100 MB/s full duplex)
3.4.2 Arbitrated loop

In Fibre Channel arbitrated loop (FC-AL), all devices on the loop share the bandwidth. The total number of devices which may participate in the loop is 126. For practical reasons, however, the number tends to be limited to no more than 10 to 15.

Due to the limitations of FC-AL, it is not typical to build a SAN just around hubs. It is possible, however, to attach a hub to a switched fabric. This allows devices which do not support the switched topology to be utilized in a large SAN. Hubs are typically used in a SAN to attach devices or servers which do not support switched fabrics, but only FC-AL. They may be either unmanaged or managed. See Figure 3-11 for an FC-AL topology.

Figure 3-11 Fibre Channel Arbitrated Loop (FC-AL)
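The shared-bandwidth property of a loop can be illustrated with a short sketch. The 100 MB/s figure matches the full-duplex link rate shown in Figure 3-10, and 126 is the loop limit stated above; the helper function itself is our own illustration:

```python
# Sketch: in an arbitrated loop all devices share the link bandwidth, so the
# average bandwidth available to each device falls as devices are added.
# 126 is the maximum number of loop participants stated in the text.
MAX_LOOP_DEVICES = 126

def per_device_bandwidth(link_mb_per_s: float, devices: int) -> float:
    """Average share of loop bandwidth per device, in MB/s."""
    if not 1 <= devices <= MAX_LOOP_DEVICES:
        raise ValueError(f"a loop supports 1..{MAX_LOOP_DEVICES} devices")
    return link_mb_per_s / devices

print(per_device_bandwidth(100, 10))  # 10.0 MB/s average per device
```

This is one reason loops are kept small in practice: at the 10-to-15 device sizes mentioned above, each device still averages a usable share of the link.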
3.4.3 Switched fabrics

Switches allow Fibre Channel devices to be connected together, implementing a switched fabric topology between them. Unlike an arbitrated loop hub, where all connected devices share the bandwidth, in a switch all devices can theoretically operate at full Fibre Channel bandwidth. This is because the switch creates a direct communication path between any two ports which are exchanging data. The switch intelligently routes frames from the initiator to the responder. See Figure 3-12.

Figure 3-12 Fibre Channel switched fabric

It is possible to connect switches together in cascades and meshes using Inter-Switch Links (ISLs). It should be noted that devices from different manufacturers may not inter-operate fully (or even partially), as standards are still being developed and ratified.

As well as implementing this switched fabric, the switch also provides a variety of fabric services and features, such as:
- Name services
- Fabric control
- Time services
- Automatic discovery and registration of host and storage devices
- Rerouting of frames, if possible, in the event of a port problem

Features commonly implemented in Fibre Channel switches include:
- Telnet and/or RS-232 interface for management
- HTTP server for Web-based management
- MIB for SNMP monitoring
- Hot swappable, redundant power supplies and cooling devices
- Online replaceable GBICs/interfaces
- Zoning
- Trunking
- Other protocols in addition to Fibre Channel
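For contrast with the 126-device loop limit, standard Fibre Channel background (not spelled out in the text above) is that a fabric assigns each port a 24-bit address composed of domain, area, and port octets, giving a far larger address space. A small sketch of composing and decomposing such an address:

```python
# Sketch of standard Fibre Channel fabric addressing: each fabric port gets
# a 24-bit port ID built from three octets (domain, area, port). The helper
# names are ours; the octet layout is from the FC standards.
def port_id(domain: int, area: int, port: int) -> int:
    """Pack three octets into a 24-bit Fibre Channel port ID."""
    for octet in (domain, area, port):
        if not 0 <= octet <= 0xFF:
            raise ValueError("each field is one octet (0-255)")
    return (domain << 16) | (area << 8) | port

def split_port_id(pid: int) -> tuple:
    """Unpack a 24-bit port ID back into (domain, area, port)."""
    return (pid >> 16) & 0xFF, (pid >> 8) & 0xFF, pid & 0xFF

pid = port_id(0x61, 0x0A, 0x02)
print(f"{pid:06X}")          # 610A02
print(split_port_id(pid))    # (97, 10, 2)
print(2 ** 24)               # 16777216 possible addresses
```

The roughly 16.7 million addresses of a fabric, versus 126 loop participants, is the scalability argument behind building large SANs around switches rather than hubs.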
3.5 IBM Tivoli SAN Manager components

Each of the IBM Tivoli SAN Manager components is identified below with a brief description of its function.

3.5.1 DB2

IBM Tivoli SAN Manager uses DB2 as its data repository. DB2 should be installed on the server system before installing IBM Tivoli SAN Manager. The installation process automatically creates the required database and tables in the instance.

3.5.2 IBM Tivoli SAN Manager Console (NetView)

The IBM Tivoli SAN Manager Console performs the following functions:
- Graphically displays SAN topology, including physical and logical views
- Displays attributes of entities on the SAN
- Provides a GUI to configure and administer IBM Tivoli SAN Manager
- Provides various reporting functions
- Provides a launching facility for SAN component management applications
- Displays ED/FI status and logs
- Displays status of discovered iSCSI devices

The IBM Tivoli SAN Manager Console can be local — that is, installed on the manager system itself — or remote, so that it is available on another system with NetView available. The installation process provides an option to install a remote NetView console. See Chapter 4, “Installation and setup” on page 95 for information on installing IBM Tivoli SAN Manager.

3.5.3 Tivoli SAN Manager Agents

The IBM Tivoli SAN Manager Agents are also referred to as Managed Hosts (MH). Managed Hosts perform the following tasks:
- Gather information about the SAN by querying switches and devices for attribute and topology information
- Gather host level information, including filesystems and logical volumes
- Gather event information detected by HBAs

All the data is gathered and returned to the Managed Host, which forwards it back to the IBM Tivoli SAN Manager.

3.5.4 Tivoli SAN Manager Server

The Manager Server manages the functionality of IBM Tivoli SAN Manager and the Managed Hosts running on connected host machines.
The Manager does the following:
- Gathers data from Agents (such as description of the SAN, filesystem information, and ED/FI information)
- Provides information to consoles (for example, graphical display of the SAN, reports, and so on)
- Forwards events to a Tivoli Enterprise Console (TEC) or any other SNMP Manager

Note that the TEC and external SNMP managers are not supplied with IBM Tivoli SAN Manager.
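The first of these functions — Agents returning scan results that the Manager persists in its repository — can be sketched as follows. This is illustrative only: the XML shape, the table layout, and the use of SQLite as a stand-in for DB2 are our assumptions, not the product's actual data formats or schema:

```python
# Sketch of the manager-side flow: an agent's scan result arrives as XML
# and is persisted in a relational repository. The XML shape and table
# layout here are invented for illustration; the real product stores scan
# results in DB2 with its own schema.
import sqlite3
import xml.etree.ElementTree as ET

SCAN_XML = """
<scan agent="agent-host.example.com">
  <device wwn="50:05:07:63:00:C0:91:4A" type="disk"/>
  <device wwn="10:00:00:60:69:10:02:4E" type="switch"/>
</scan>
"""

db = sqlite3.connect(":memory:")  # stand-in for the DB2 repository
db.execute("CREATE TABLE topology (agent TEXT, wwn TEXT, type TEXT)")

root = ET.fromstring(SCAN_XML)
agent = root.get("agent")
for dev in root.findall("device"):
    db.execute("INSERT INTO topology VALUES (?, ?, ?)",
               (agent, dev.get("wwn"), dev.get("type")))

count = db.execute("SELECT COUNT(*) FROM topology").fetchone()[0]
print(count)  # 2 devices recorded
```

The console views described earlier are then just queries against this repository, which is why the Manager, rather than each Agent, is the single source of the topology map.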
3.5.5 SAN physical view

The physical view in Figure 3-13 below identifies the installed IBM Tivoli SAN Manager components (Server, Agents, Console, and Remote Console), and allows the physical SAN topology to be understood. A SAN environment typically consists of four major classes of components:
- End-user computers and clients
- Servers
- Storage devices and subsystems
- Interconnect components

End-user platforms and server systems are usually connected to traditional LAN and WAN networks. In addition, some end-user systems may be attached to the Fibre Channel network, and may access SAN storage devices directly. Storage subsystems are connected using the Fibre Channel network to servers, end-user platforms, and to each other. The Fibre Channel network is made up of various interconnect components, such as switches, bridges, and gateways.

Note: The Server system requires an IP connection, but not a Fibre Channel connection (this is optional). The same applies to a Remote Console and a TEC or SNMP system, since all communication to these systems is sent over TCP/IP. Hosts with Agent code installed require a Fibre Channel attachment (for discovery and monitoring) in addition to the LAN connectivity to the Manager. There will also most likely be additional hosts which are FC attached but do not have the Agent installed. We discuss various deployment options for this in 3.8, “Deployment scenarios” on page 76.

Figure 3-13 Component placement
3.6 Management

The elements that make up the SAN infrastructure include intelligent disk subsystems, tape systems, Fibre Channel switches, and hubs. The vendors of these components usually provide proprietary software tools to manage their own individual elements. For instance, a management tool for a hub will provide information regarding its own configuration, status, and ports, but will not support other fabric components such as other hubs, switches, HBAs, and so on. Vendors that sell more than one element often provide a software package that consolidates the management and configuration of all of their elements. Modern enterprises, however, usually purchase storage hardware from a number of different vendors, resulting in a highly heterogeneous SAN. Fabric monitoring and management is an area where a great deal of standards work is being focused. Two management methods are used in Tivoli SAN Manager: inband and outband management.

3.6.1 Inband management

The inband Agent performs its scans directly across the Fibre Channel transport. The collected data from the scan is then sent to the Server via the TCP/IP protocol. This is known as inband management. Inband management is evolving rapidly, with reporting on low level interfaces such as Request Node Identification Data (RNID).

Tivoli SAN Manager runs two types of inband scanners for gathering Fibre Channel attribute and topology information:

Topology Scanner

The topology scanner receives a request to scan from the manager. It issues FC Management Server commands (FC-GS-3 standard) to the SAN interconnection devices to get the topology information. The specific FC Management Server commands are:
- Get platform information
- Get interconnect information

The topology scanner queries every device within each zone that it belongs to. When a scan request is issued from the Server to the Agent, the agent queries the nameserver in the Fibre Channel switch.
The nameserver then returns identification information on every device in its database. The symbol label on the topology map is derived from the nameserver. With this information, the scanner constructs a complete physical topology map which shows all connections, devices, and zone information.

The topology scanner does not use a database to store results. The discovered data is translated to XML format and sent back to the IBM Tivoli SAN Manager Server, where it is stored in the DB2 repository.

Attribute Scanner

The attribute scanner gets the request from the IBM Tivoli SAN Manager Server to poll the SAN. It uses inband discovery (specifically, the SNIA HBA API) to discover endpoint devices, issuing Fibre Channel (FC) commands to the endpoint devices to gather attribute information. Typically, the commands used are:
- SCSI Inquiry
- SCSI Read Capacity
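As background on what a SCSI Inquiry returns: the standard INQUIRY response carries the vendor, product, and revision strings at fixed byte offsets defined by the SCSI standard (bytes 8-15, 16-31, and 32-35). The decoder below is an illustrative sketch of our own, with invented sample bytes — it is not Tivoli code:

```python
# Sketch: decode the fixed-layout fields of a standard SCSI INQUIRY
# response. Offsets follow the SCSI standard; the sample data is made up.
def parse_inquiry(data: bytes) -> dict:
    if len(data) < 36:
        raise ValueError("standard INQUIRY data is at least 36 bytes")
    return {
        "peripheral_type": data[0] & 0x1F,          # e.g. 0x00 = disk
        "vendor": data[8:16].decode("ascii").strip(),
        "product": data[16:32].decode("ascii").strip(),
        "revision": data[32:36].decode("ascii").strip(),
    }

# Invented 36-byte sample: disk device, vendor "IBM", product "2105".
sample = bytes([0x00]) + bytes(7) + b"IBM     " + b"2105            " + b"1.00"
info = parse_inquiry(sample)
print(info["vendor"], info["product"], info["revision"])  # IBM 2105 1.00
```

This fixed layout is what lets the attribute scanner label each discovered endpoint with a vendor and model without any device-specific knowledge.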
When the attribute scanner runs on a system, it first queries the nameserver on the Fibre Channel switch to get a list of storage devices in the SAN. The scanner then verifies whether the LUNs are visible to the host by issuing SCSI commands. In most cases the host can see all the LUNs in the SAN, even if they are not assigned to a system (if they are not LUN masked). Since SCSI commands are issued from the Agent, the Agent must have LUNs assigned from the SAN attached storage device to gather the attribute information.

Note: The attribute and topology scanner executables for Windows can be found in Tivoli\itsanm\agent\bin\w32-ix86. The attribute and topology scanner executables for AIX can be found in /tivoli/itsanm/agent/bin/aix.

Figure 3-14 shows the inband scanner process.

Figure 3-14 Inband scanning

3.6.2 Outband management

Outband management means that device management data is gathered over a TCP/IP connection such as Ethernet. Commands and queries can be sent using the Simple Network Management Protocol (SNMP). Outband management does not rely on the Fibre Channel network; therefore, management commands and messages can be sent even if a loop or fabric link fails. Integrated SAN management facilities are more easily implemented, especially by using SNMP, when inband agents cannot be deployed because of platform capability or client requirements. Outband agents are defined to Tivoli SAN Manager, and are typically switches with the appropriate MIB enabled. The Advanced Topology Scanner is used for outband discovery:
Advanced Topology Scanner

For outband discovery, the Advanced Topology Scanner queries the MIB of the running SNMP agents to gather Fibre Channel port and switch information. The Advanced Topology Scanner queries the ConnUnitPortTable and ConnUnitLinkTable in the FA-MIB for switch port and link connection data. This data is used in creating the Tivoli SAN Manager topology map. The outband scanner process is shown in Figure 3-15 below. The outband query simply retrieves information from the MIB in the switches — no discovery across links is done, as is the case with inband discovery. The information retrieved is correlated by the Manager and used to draw the topology map.

Figure 3-15 Outband scanning

3.7 Deployment considerations

In the following sections we outline and discuss the requirements of the Server and Agents for Tivoli SAN Manager, and various deployment scenarios.

3.7.1 Tivoli SAN Manager Server

The most current product requirements for Tivoli SAN Manager are available at the Web site:
http://www-3.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html

IBM Tivoli SAN Manager runs on Windows 2000 Server or Advanced Server with Service Pack 3. In addition, it should be running on a system with at least a Pentium® III 600 MHz class processor, 1 GB of RAM, and 1 GB of free disk space. It is also supported on a pSeries or RS/6000® running AIX 5.1, with a minimum 375 MHz processor, 200 MB of free disk space, and 1 GB of RAM.
It is also recommended that the system be dedicated to running Tivoli SAN Manager, rather than running other key enterprise applications. The Server system also requires a TCP/IP network connection and addressability to the hosts and devices on the SAN. It does not need to be attached to the SAN via Fibre Channel.

Note: Based on actual customer deployments, optimal server sizing for the Tivoli SAN Manager is a 2-way (dual) Pentium III class processor with a speed of 800 MHz (or equivalent pSeries), 2 GB of Random Access Memory (RAM), and 1 GB of free disk space.

At this time, Tivoli SAN Manager requires a single machine install, where all the components (DB2, WebSphere Express, NetView, and the manager code itself) are installed and running on the same system. Figure 3-16 shows the components of the Tivoli SAN Manager Server.

Figure 3-16 Components of a manager install

Tivoli SAN Manager requires that the Manager use a fully qualified static TCP/IP host name, so you will need to make DNS services accessible to the Tivoli SAN Manager. Agents, however, can now utilize dynamic IP addresses (DHCP) instead of static IP addresses. Other pre-installation checks are given in 4.2, “IBM Tivoli SAN Manager Windows Server installation” on page 96.

Tivoli SAN Manager does not at this time provide built-in cluster support for high availability. If you require a high availability solution for Tivoli SAN Manager without clustering software, we recommend configuring a standby server with an identical network configuration and replicating the Tivoli SAN Manager database to that standby server on a regular basis.

3.7.2 iSCSI management

IBM Tivoli SAN Manager provides basic support for discovering and managing iSCSI devices.
iSCSI devices are discovered by the NetView component of IBM Tivoli SAN Manager. By default, NetView IP Internet discovery is disabled; it must be enabled before any iSCSI devices can be discovered and managed. See Chapter 7, “Tivoli SAN Manager and iSCSI” on page 253.
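Before moving on: the fully qualified static host name requirement from 3.7.1 can be sanity-checked with a short script. This is an illustrative helper of our own, not a product tool; "localhost" is used here only so the sketch runs anywhere:

```python
# Sketch: verify that a manager host name resolves via DNS, per the
# static-hostname prerequisite in 3.7.1. Substitute your manager's fully
# qualified name for "localhost".
import socket

def check_resolvable(hostname: str) -> str:
    """Return the IPv4 address the name resolves to, or raise socket.gaierror."""
    return socket.gethostbyname(hostname)

print(check_resolvable("localhost"))  # 127.0.0.1 on most systems
```

Running this with the manager's intended fully qualified name before installation catches missing DNS entries early, when they are cheapest to fix.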
3.7.3 Other considerations

IBM now provides an improved BAROC file for use with IBM Tivoli Enterprise Console. Details of using this are in Chapter 12, “Tivoli SAN Manager and TEC” on page 333.

3.7.4 Tivoli SAN Manager Agent (Managed Host)

Important: Tivoli SAN Manager Agents should not be installed on any servers that communicate with removable media devices, except where the environment provides the required support. Each time a discovery is run, the agents send SCSI commands to devices to identify their device types. If the target is a removable media device that cannot handle command queuing, a long tape read or write command might time out. This restriction does not apply to AIX Version 5.1 Agents, or certain IBM tape devices. For an up-to-date list of environments where this restriction does not apply, see the Web site:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

The Tivoli SAN Manager Agent is also known as a Managed Host. Consideration needs to be given to what functionality is desired from Tivoli SAN Manager. The Agent can be deployed to assist in network management and is also used to collect logical information about filesystems and perform error detection on SAN interconnect links. Refer to the IBM Tivoli SAN Manager Installation Guide, SC23-4697 for prerequisite checks that should be performed on the target managed host prior to any code installation. Please also refer to 4.4, “IBM Tivoli SAN Manager Agent installation” on page 112.

Levels of SAN management

The four levels of SAN management are shown in Figure 3-17. When deploying IBM Tivoli SAN Manager, the customer must decide how much SAN management is required.
Figure 3-17 Levels of Fabric Management (Basic fabric management; Manual identity of endpoints; Well placed agents; Agents everywhere)

Basic Fabric Management

Use this configuration when you want to monitor the SAN and receive event notification of state changes of any of the fabric connections or devices. If you do not require endpoint identification, LUN associations, or endpoint properties, this configuration will provide limited Event Detection and Fault Isolation (ED/FI) capabilities. ED/FI will adorn the Fibre Channel switch with events that are detected. No switch port or host systems will be adorned.
Manual Identity of Endpoints

Monitor the SAN as described in Basic Fabric Management. In addition, perform a one-time manual identification of all SAN endpoints.

Well Placed Agents

Monitor the SAN as described in Basic Fabric Management. In addition, deploy strategically placed agents to help identify endpoints and provide logical information on key SAN attached host systems. ED/FI will have greater fault isolation capabilities when agents are deployed in the SAN. If errors are detected by ED/FI, adornments will be displayed on the host system running the agents and the corresponding switch and switch port to which it is connected.

Agents Everywhere

Monitor the SAN as described in Basic Fabric Management, plus deploy the inband Agents to automatically identify as many endpoints as possible. This is useful in a dynamic SAN, where endpoints change often. This will provide the greatest level of ED/FI.

Important: Consideration must be given to the number of Agents and how the initial discovery is performed. Depending on the number of endpoints to discover, IBM Tivoli SAN Manager may take a long time to complete the discovery process. The full discovery options are shown below:
- Never run the full discovery (use topology discovery only).
- Only run the full discovery when the user selects Poll Now.
- Only run the full discovery during a periodic or scheduled discovery.
- Run the full discovery when the user selects Poll Now or during a periodic or scheduled discovery.

See IBM Tivoli SAN Manager Planning and Installation Guide, SC23-4697.

Host systems with HBA, but no SAN connectivity

We tested a scenario where the IBM Tivoli SAN Manager Agent was installed on a host system with a supported HBA, but without connectivity to the SAN (no FC cables attached). The Agent system was discovered and displayed in the Configure Agents window. We saw the messages in Example 3-1 logged in Tivoli\itsanm\manager\log\msg\ITSANM.log.
To avoid these errors, connect the host to the SAN before installing the agent. Once the FC cables were attached to the host system, the Agent was automatically added to the topology map.

Example 3-1 Host system with HBAs, no SAN connectivity
2003.06.04 13:24:07.391 BTAHM2528I Agent diomede.almaden.ibm.com:9570 has been marked active. com.tivoli.sanmgmt.diskmgr.hostservice.manager.AgentRegistrationListener agentRegistrationHeartbeat
2003.06.04 13:27:20.688 BTAQE1144E An error occurred attempting to run the Topology scanner on the IBM Tivoli Storage Area Network Manager managed host diomede.almaden.ibm.com:9570. com.tivoli.sanmgmt.tsanm.queryengine.InbandScanHandler
2003.06.04 13:27:20.875 BTADE1720I Processing has started for the missing devices for scanner ID S0e3.31.1e.71.90.66.11.d7.ba.4e.00.09.6b.92.a6.379570. com.tivoli.sanmgmt.tsanm.discoverengine.TopologyProcessor process()

Host systems with no HBA

Deploying a Tivoli SAN Manager Agent to a host system with no HBAs is not a supported configuration. We tested this configuration and discovered that when deploying an Agent to a host system with no HBAs, the Agent code installed successfully and appeared in the Configure Agents window as contacted. We saw the messages in Example 3-2 logged in
Tivoli\itsanm\manager\log\msg\ITSANM.log. Therefore an HBA should be installed in a host before installing the Agent.

Example 3-2 Host system with no HBAs
2003.06.04 14:32:16.922 BTAHM2528I Agent wisla.almaden.ibm.com:9570 has been marked active.
2003.06.04 14:32:52.906 BTAQE1144E An error occurred attempting to run the Topology scanner on the IBM Tivoli Storage Area Network Manager managed host wisla.almaden.ibm.com:9570. com.tivoli.sanmgmt.tsanm.queryengine.InbandScanHandler run

Host and Device Centric Views

The Device or Host Centric data views report on local filesystems as well as Fibre-attached assigned LUNs for Managed Hosts. Therefore, if a Managed Host has no Fibre LUNs assigned, only local disks will be reported. Once LUNs are assigned, all SAN attached storage will be presented correctly under the Device and Host Centric views. Consideration should be given as to which SAN attached hosts have Agents installed. This concept is known as the well placed agent; refer to 5.7.9, “Well placed agent strategy” on page 202 for more information.

Request Node Identification (RNID)

Request Node Identification (RNID) is supported by Tivoli SAN Manager and the Agents. Refer to the product Web site for a complete listing of the latest drivers and APIs that support RNID. We found that if one Tivoli SAN Manager Agent was deployed to a SAN attached host with an RNID enabled driver, then the remaining SAN attached hosts without Agents were discovered with the correct symbol type and WWN, providing their HBA also supports RNID. See Figure 3-18 for an RNID discovered unmanaged host. Note that an unmanaged host will not display any information on the logical Device and Host Centric views.

Figure 3-18 RNID discovered host
Table 3-1 gives information on the capabilities of IBM Tivoli SAN Manager depending on the RNID capability of the Agents and other hosts.

Table 3-1 SAN Manager using vendor HBAs and switches

Level of information collected: Good
Level of vendor HBA device driver support: Not using common API
What information can be gathered and shown: Tivoli SAN Manager can do outband management in this situation. Also, if other inband agents have Better or Best levels of HBAs, then Tivoli SAN Manager can do inband discovery through those agents. Information shown: switches with IP connections to the manager; the topology that can be seen from the switches with IP connections to the manager; hosts and other devices shown as Unknown entities in the topology view.

Level of information collected: Better
Level of vendor HBA device driver support: Uses common API without RNID support
What information can be gathered and shown: Tivoli SAN Manager can do both outband and inband management. Other inband agents will not be able to obtain RNID information from this HBA. In addition to the Good level of information, you will see: managed hosts with agents installed are no longer shown as Unknown entities in the topology view; some storage devices will no longer be shown as Unknown entities in the topology view.
Level of information collected: Best
Level of vendor HBA device driver support: Common API with RNID support
What information can be gathered and shown: Outband, inband, and RNID are fully supported. In addition to the Good and Better levels of information, you will see: all agents that have HBAs that respond to RNID — even if the agent is not installed, these hosts will not be shown as Unknown entities in the topology view; storage devices that respond to RNID will also no longer be shown as Unknown entities in the topology view.

Note: You should plan what information is required to be displayed when deciding where to deploy the Server and inband and outband Agents.

3.8 Deployment scenarios

We present the following examples to demonstrate various deployment possibilities using outband, inband, and a combination of outband and inband.

3.8.1 Example 1: Outband only

This example describes sample requirements that were compiled based on actual customer requirements. We outline advantages and disadvantages of using an outband only deployment configuration in IBM Tivoli SAN Manager and provide an overview of the install steps. Figure 3-19 describes our requirements:
- Topology map of the SAN
- State changes of any of the fabric connections or devices
- Network management only
- Dedicated console for operations staff

Figure 3-19 Sample outband requirements

Advantages

The major advantage of deploying outband only agents is quick configuration and non-intrusive deployment, since no Tivoli SAN Manager Agent code is required on any SAN host. After installing the Tivoli SAN Manager, the discovery is completed in a short amount of time by adding the IP addresses or hostnames of the Fibre Channel interconnect devices — typically a switch or a director — to Tivoli SAN Manager. There are limited Event Detection and
  • 106. Fault Isolation (ED/FI) capabilities. ED/FI will adorn the Fibre Channel switch with the events detected.

Disadvantages

There are no endpoint identifications, LUN associations, or endpoint properties. Outband-only deployment provides limited attribute information on the topology map. Once the discovery is complete, the default symbols for SAN-attached devices (other than the switches) are displayed as “Unknown” symbols. The World Wide Name (WWN) is also shown. See Figure 5-67 on page 197 for an example of using outband agents only. This is caused by the limited attribute information retrieved from the Advanced Topology Scanner. Once the discovery is complete, we can then change the symbol properties of the “Unknown” hosts to their actual symbol type and name. Figure 5-32 on page 173 shows how to change the symbol type and name. Figure 3-20 shows the outband agents defined in the Configure Agents panel of IBM Tivoli SAN Manager. Furthermore, no switch ports or host systems will be adorned by ED/FI.

For detailed information on changing the symbol type and symbol name, refer to 5.4.1, “SAN Properties” on page 170. No Device or Host Centric views will be available, since these depend on information gathered by the (inband) Agents.

Figure 3-20 Display and configure outband agents

Setup procedure

With the above requirements, and noting the limitations, we can set up this scenario.
  • 107. 1. We first recommend verifying that the SAN is fully operational and checking all the SAN-attached devices for compatibilities and incompatibilities. The following URL provides compatibility requirements:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

2. Once the SAN-attached components have been analyzed, review the Tivoli SAN Manager prerequisite checklist. Please refer to 4.2.2, “Preinstallation tasks” on page 97.

Important: In this configuration, Tivoli SAN Manager relies on SNMP traps and polling intervals to determine when a status change of a Fibre Channel switch or director has occurred. It is recommended that the SNMP trap destination tables of these devices be configured to point to the Tivoli SAN Manager’s IP address to allow for event-driven management.

3. Enable the trap destination table on Fibre Channel switches or directors to forward SNMP traps to Tivoli SAN Manager. We demonstrate below the process for enabling the trap forward definitions on an IBM 2109 Fibre Channel switch.

a. Log in to the switch as administrator and issue the agtcfgshow command. This command displays the SNMP community names and trap destination configuration of the FC switch. See Example 3-3 below.
Example 3-3 agtcfgshow output

itsosw3:admin> agtcfgshow
Current SNMP Agent Configuration

Customizable MIB-II system variables:
sysDescr = agtcfgset
sysLocation = E3-250
sysContact = Charlotte Brooks
swEventTrapLevel = 0
authTrapsEnabled = true

SNMPv1 community and trap recipient configuration:
Community 1: Secret C0de (rw)
No trap recipient configured yet
Community 2: OrigEquipMfr (rw)
No trap recipient configured yet
Community 3: private (rw)
Trap recipient: 9.1.38.187
Community 4: public (ro)
Trap recipient: 9.1.38.187
Community 5: common (ro)
No trap recipient configured yet
Community 6: FibreChannel (ro)
No trap recipient configured yet

SNMP access list configuration:
Entry 0: No access host configured yet
Entry 1: No access host configured yet
Entry 2: No access host configured yet
Entry 3: No access host configured yet
Entry 4: No access host configured yet
Entry 5: No access host configured yet
itsosw3:admin>
  • 108. You can see above that Community 3 and Community 4 have already been assigned IP addresses of an SNMP manager. We highlighted in bold the current IP entries that we will modify. We want to change them to use our Tivoli SAN Manager.

b. We will now show the command to change the Community 3 and Community 4 fields to another IP address. Issue the agtcfgset command from the switch prompt. The agtcfgset command is interactive. To leave an entry unchanged, hit Enter.

c. We hit Enter several times until the 3rd and 4th community name fields are reached. We then entered the new IP address and hit Enter. Keep hitting Enter until the message Committing Configuration...done is displayed and the command prompt is returned. See Example 3-4 for the output.

Example 3-4 agtcfgset output

itsosw3:admin> agtcfgset
Customizing MIB-II system variables ...
At each prompt, do one of the followings:
 o <Return> to accept current value,
 o enter the appropriate new value,
 o <Control-D> to skip the rest of configuration, or
 o <Control-C> to cancel any change.
To correct any input mistake:
<Backspace> erases the previous character,
<Control-U> erases the whole line,
sysDescr: [ agtcfgset]
sysLocation: [E3-250]
sysContact: [Charlotte Brooks]
swEventTrapLevel: (0..5) [0]
authTrapsEnabled (true, t, false, f): [true]
SNMP community and trap recipient configuration:
Community (rw): [Secret C0de]
Trap Recipients IP address in dot notation: [0.0.0.0]
Community (rw): [OrigEquipMfr]
Trap Recipients IP address in dot notation: [0.0.0.0]
Community (rw): [private]
Trap Recipients IP address in dot notation: [9.1.38.187] 9.1.38.188
Community (ro): [public]
Trap Recipients IP address in dot notation: [9.1.38.187] 9.1.38.188
Community (ro): [common]
Trap Recipients IP address in dot notation: [0.0.0.0]
Community (ro): [FibreChannel]
Trap Recipients IP address in dot notation: [0.0.0.0]
SNMP access list configuration:
Access host subnet area in dot notation: [0.0.0.0]
Read/Write?
(true, t, false, f): [true]
Access host subnet area in dot notation: [0.0.0.0]
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation: [0.0.0.0]
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation: [0.0.0.0]
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation: [0.0.0.0]
Read/Write? (true, t, false, f): [true]
Access host subnet area in dot notation: [0.0.0.0]
Read/Write? (true, t, false, f): [true]
  • 109. Committing configuration...done.
itsosw3:admin>

4. Install Tivoli SAN Manager. Refer to 4.2, “IBM Tivoli SAN Manager Windows Server installation” on page 96 for more details. When completed, launch NetView from the desktop.

5. Add the outband agents into Tivoli SAN Manager by specifying either the IP address or hostname of the Fibre Channel switch in the Configure Agents GUI. See Figure 3-20 on page 77.

6. After being added and committed to the database, the SNMP agents are automatically queried by the Advanced Topology Scanner, and the returned data is processed by the manager to draw the initial SAN topology map. Figure 3-21 shows the outband management topology. Outband agents will continue to be polled at the user-defined polling interval. See 4.6.4, “Performing initial poll and setting up the poll interval” on page 132.

Attention: The initial discovery of any large SAN using outband discovery may take some time. Once complete, full discovery should not need to be run very often after that. Consideration should be given as to when the initial discovery is performed. We recommend scheduling initial discoveries during slower processing times for the business.

7. Remote consoles, if required, can be installed anytime after the Server has been installed. The remote console contains the same functionality as the Server console. Once the console is installed, it performs database queries for its topology updates from the Manager.

Figure 3-21 Outband management only
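Once agtcfgset has been committed, the new trap recipients can be confirmed by capturing the agtcfgshow output again and filtering it. The helper below is an illustrative sketch, not part of the product: the function name and the saved-output file are our own, and it assumes a POSIX shell with awk parsing output in the format of Example 3-3.

```shell
set -e

# list_trap_recipients: print "communityIndex: recipientIP" pairs from a
# saved `agtcfgshow` listing, skipping communities with no recipient set.
list_trap_recipients() {
  awk '/Community [0-9]+:/ { comm = $2 }
       /Trap recipient:/   { print comm, $3 }' "$1"
}

# Sample output in the post-change format of Example 3-4 (recipients now
# point at the Tivoli SAN Manager at 9.1.38.188).
cat > /tmp/agtcfgshow.out <<'EOF'
SNMPv1 community and trap recipient configuration:
Community 1: Secret C0de (rw)
No trap recipient configured yet
Community 3: private (rw)
Trap recipient: 9.1.38.188
Community 4: public (ro)
Trap recipient: 9.1.38.188
EOF

list_trap_recipients /tmp/agtcfgshow.out
```

On the switch itself you would simply log in, run agtcfgshow, and confirm by eye that the Trap recipient lines now show the Manager's IP address.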
  • 110. 3.8.2 Example 2: Inband only

In this example we deploy inband agents instead of outband agents. The deployment of inband agents provides additional functionality over the outband agents. When an inband Agent is deployed, the topology information received at the Manager allows for a more complete picture of the SAN. For example, the host that is running the Agent will always appear with the correct symbol type and hostname as the symbol label. Any other hosts running RNID-enabled drivers will also be discovered with the correct symbol type. We outline advantages and disadvantages of using an inband Agent deployment configuration in Tivoli SAN Manager, and provide an overview of the installation steps. Figure 3-22 below describes the sample requirements for this example.

Inband requirements: more accurate topology map; logical views of storage and host systems.

Figure 3-22 Sample inband requirements

Advantages

Additional attribute information is returned when inband agents are used. If there are RNID-enabled HBAs installed on the SAN-attached hosts, then this will allow for a more complete discovery of the SAN. Refer to 3.7.4, “Tivoli SAN Manager Agent (Managed Host)” on page 72 for more details on RNID. With RNID-enabled HBAs running on our host systems, the correct host symbol is used. Compare this to the previous scenario with outband agents, where the hosts were discovered as unknown. We had other SAN-attached hosts with RNID-enabled HBA drivers, although without Tivoli SAN Manager Agents. These hosts that were running no agents were still discovered correctly.

Inband agents provide logical views of SAN resources — these are the Host and Device Centric views for hosts with agents installed. The Device Centric View enables you to see all the storage devices and their logical relation to all the managed hosts. This view does not show the switches or other connection devices.
The Host Centric View enables you to see all the managed host systems and their logical relation to local and SAN-attached storage devices.

ED/FI will provide greater fault isolation capabilities when agents are deployed in the SAN. If errors are detected by ED/FI, adornments will be displayed on the host system running the agents and on the corresponding switch and switch port to which it is connected. See IBM Tivoli SAN Manager Planning and Installation Guide, SC23-4697.

Refer to 5.3.2, “Device Centric View” on page 166 and 5.3.3, “Host Centric View” on page 167 for details on Host and Device Centric Views.

Disadvantages

Inband discovery is not available for non-supported Agent operating systems. Tivoli SAN Manager supports a limited number of Agent platforms at this time. If your platform is not supported, then an outband strategy may be more appropriate.

The more Agents that are installed, the more processes will run, and the more data will be collected and correlated. This requires processing resources and time. The inband agent runs two scanners to collect attribute and topology information. See Figure 3-14 on page 69. The amount of data returned depends on the size of the SAN fabric.
  • 111. The Agent must be installed on the hosts, which takes some CPU/memory resources and disk space. Running many inband agents will require a corresponding amount of time and processing power to complete the initial discovery.

Setup procedure

With the above requirements, and noting the limitations, we can set up this scenario.

1. Verify that the SAN is fully operational. We proceeded with checking all the SAN-attached devices for compatibilities and incompatibilities.

2. Since we are installing Agent code on the SAN-attached hosts, check the HBA make and model for compatibility, operating system levels and maintenance, plus the device driver release level and API compatibility.

Important: Tivoli SAN Manager compatibility can be checked at the following URL:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

3. Once the SAN-attached components have been analyzed, review the Tivoli SAN Manager prerequisite checklist for the Manager and SAN-attached hosts.

4. Install the Tivoli SAN Manager Server.

5. Install the Tivoli SAN Manager Agent on the selected hosts. The Agents will automatically populate the Configure Agents interface after installation. Figure 3-23 shows the Configure Agents interface after an inband agent has been deployed and contacted the Manager. Refer to 3.5.3, “Tivoli SAN Manager Agents” on page 66 for more details regarding the Agent installation process.
  • 112. Figure 3-23 Configure Agents — Inband only

6. Launch NetView from the desktop.

7. Navigate to SAN -> Configure Agents and note that the Agents appear in the top half of this panel.

8. The Agents will automatically perform inband discovery to create the topology map. Figure 3-24 shows the inband management process.
  • 113. Figure 3-24 Inband management only

The remote console deployment strategy is the same as described in the outband example.

3.8.3 Example 3: Inband and outband

Figure 3-25 provides an overview of the requirements for this example. This example differs from the previous examples in that both inband and outband agents are deployed. This configuration provides us with a more robust management environment. We have the ease of deployment of outband agents and leverage the additional functionality provided by the inband agents. The major difference is in the robustness of IBM Tivoli SAN Manager’s ability to discover and manage the topology. We outline advantages and disadvantages of using a combination of both types of agents, and provide an overview of the installation steps.

Inband and outband requirements: more accurate and complete topology map; management redundancy; logical views of host and storage systems; reduced single point of failure.

Figure 3-25 Sample inband/outband requirements

This is the recommended approach — install at least one Agent per zone (preferably two for redundancy), and configure all capable switches as outband Agents.

Advantages

By default, Tivoli SAN Manager will work with inband and outband agents. With this combination we are assured of getting the most complete topology picture, with attribute, topology, and advanced scanner data being correlated at the Manager to create a full SAN topology. We will continue to leverage RNID-enabled drivers on SAN-attached hosts for a more complete topology.
  • 114. The Host Centric and Device Centric logical views are available in addition to the topology display.

Zone information can be displayed where it is supported by the switch API.

It reduces the risk of a single point of failure, as both Fibre Channel and IP links are used.

Redundant and more complete information will be gathered and used to draw the topology map.

Disadvantage

The inband Agent install remains intrusive to the SAN-attached host, and there are potential performance implications for discovery if a large number of Agents are deployed.

Setup procedure

With the above requirements, and noting the limitations, we set up this example based on the steps below.

1. Verify that the SAN is fully operational. We proceeded with checking all the SAN-attached devices for compatibilities and incompatibilities.
2. Since we are installing Agent code on the SAN-attached hosts, check the HBA make, model, and driver release levels for compatibility, operating system levels and maintenance.
3. Once the SAN-attached components’ compatibility has been confirmed, review the Tivoli SAN Manager prerequisite checklist for our Manager and Agents.
4. Install the Tivoli SAN Manager Server.
5. Install inband agents.
6. Launch NetView from the desktop.
7. Navigate to SAN --> Configure Agents. The top half of the window displays the inband agents that are currently installed — which have been automatically added. Click Add to add outband agents. Figure 3-26 shows the Configure Agents interface with both inband and outband agents deployed. For more details on this, see 5.7.6, “Outband agents only” on page 195 and 5.7.7, “Inband agents only” on page 197.
  • 115. Figure 3-26 Inband & outband in Configure Agents

The Manager will perform another discovery of the SAN. Figure 3-27 shows the Agent deployment and management process.
  • 116. Figure 3-27 Inband and outband management

The remote console deployment strategy is the same as described in 3.8.1, “Example 1: Outband only” on page 76.

3.8.4 Additional considerations

Finally, here are some additional considerations and pointers for deploying Tivoli SAN Manager. Check the manual IBM Tivoli Storage Area Network Manager: Planning and Installation Guide, SC23-4697 for more information on these tips.

Deploying Tivoli SAN Manager using a HOSTS file

Although the Installation Guide mentions that DNS must be used, we discovered that Tivoli SAN Manager installs and functions using a HOSTS file on the Manager and Agent. Figure 3-28 provides a view of the HOSTS file placement.
  • 117. Figure 3-28 HOSTS file placement

Before installing the Manager, we updated the \system32\drivers\etc\HOSTS file to include entries for the Manager and all Agents. We then updated the HOSTS file on each Agent to include entries for the Manager and all other Agents. We then installed the Manager.

Tivoli SAN Manager with Remote Console

IBM Tivoli SAN Manager now supports Windows XP, in addition to Windows 2000 Server and Advanced Server, as a supported platform type for running the Remote Console.

If possible, start with a pristine machine (re-install if necessary) for the Manager and NetView Remote Console.

Before installing the Remote Console, modify the HOSTS file to include the IP address LONGNAME SHORTNAME as the first entry, as shown in “Change the HOSTS file” on page 97.

Verify the screen resolution is at least 800x600 on the Manager.

Verify that a range of seven sequential TCP/IP port numbers (9550-9556 are the default port assignments during install) is available. Use the netstat command to verify free ports. These ports are required by the Manager to run Tivoli SAN Manager related services.

Verify that SNMP is installed and running as a service on the Manager and Remote Console.

Tivoli SAN Manager Agents

Make sure you have installed the appropriate HBA cards and device drivers. Run the common API setup program for the HBA on each managed host. This common API program is in addition to the required drivers for the HBA. For example, for a QLogic
  • 118. HBA, you must run the EUSDSetup program. Contact your HBA manufacturer if you do not have this program.

For Windows 2000, Windows NT, Solaris, or Linux, if using QLogic HBAs, specific versions of the QLogic API and device driver are required for RNID support. Both API and driver are packaged as one file. See the QLogic Web site (http://www.qlogic.com) for updates. The required API and device driver levels are listed for different QLogic HBAs at:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

The agent operating system must be at a level that supports JRE 1.3.1 or later for AIX and Solaris. For AIX 5.1 and 5.2 there are required patches that can be downloaded. See the readme.txt file for Tivoli SAN Manager for details of these.

Fibre Channel Switches

If using the IBM 2109 (Brocade Silkworm family of switches), the firmware level should be as specified on the support Web site:
http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSANM_Device_Compatibility.html

Make sure all FC MIBs are enabled. On the IBM 2109, all MIBs are disabled by default. You can use the snmpmibcapset command to enable the MIBs while logged on as administrator to an IBM 2109 switch (refer to 6.2.3, “Loading MIBs” on page 212).

General

Verify all FC switch SNMP trap destinations point to the Tivoli SAN Manager IP address.
The Tivoli SAN Manager and all agents must have static IP addresses.
Your network should use DNS.
The remote DNS server must know the static IP address of each machine.
Verify forward and reverse lookup is working via DNS.
Issue nslookup to confirm fully qualified host names of the Manager and Managed Host systems.

3.9 High Availability for Tivoli SAN Manager

In this section we discuss how to protect the Tivoli SAN Manager Server. In such a setup the Tivoli SAN Manager server is installed on the Windows 2000 platform using a central database repository.
Our standby server uses a similar hardware configuration.

3.9.1 Standalone server failover

Since we are only protecting the Tivoli SAN Manager Server, one possible scenario is to have a standby server in the event that the primary Tivoli SAN Manager Server fails. The standby server could be a test Tivoli SAN Manager server with Agents belonging to the primary Tivoli SAN Manager domain. This setup is shown in Figure 3-29.
  • 119. Figure 3-29 Standby server

Here are the steps we followed:

1. We started with a fully deployed Tivoli SAN Manager Server.
2. We then installed Tivoli SAN Manager on the standby server, using the same system settings as the primary server.
3. We then stopped the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS AdminServer 4.0 services on the standby server and changed their startup to manual.
4. Backing up the Tivoli SAN Manager database on the primary server is optional. If you do not have customized data (topology symbol types and symbol names) saved, then you can omit this step. Otherwise, use the DB2 Control Center to select and back up the ITSANMDB database. See 10.2.2, “Setup for backing up IBM Tivoli SAN Manager Server” on page 286 for details.
5. We then simulated a failure on the primary server by stopping the Tivoli SAN Manager application on the WebSphere Application Server. We then stopped the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS AdminServer 4.0 services.
6. We then updated the DNS entry for the primary server, changing the IP address of the primary server hostname to that of the standby server hostname and leaving the hostname of the primary server associated with the IP address of the standby server. We could also update the HOSTS file for these changes if DNS is not used.
  • 120. Note: In our testing we used a HOSTS file on the Manager and all the Agents. For the HOSTS file on each Agent and Manager, we modified the IP address of the primary server in the HOSTS file to point to the IP address of the standby server. We then commented out the standby server HOSTS file entry. In Example 3-5, IP address 9.1.38.186 is the address of the standby server and polonium.almaden.ibm.com is the hostname of the primary server. We then commented out the entry for 9.1.38.186 lead.almaden.ibm.com, since this is the original entry that pointed to the standby server before failover.

Example 3-5 Agent HOSTS file

9.1.38.189 tungsten.almaden.ibm.com tungsten
9.1.38.186 polonium.almaden.ibm.com polonium
9.1.38.192 palau.almaden.ibm.com palau
9.1.38.191 crete.almaden.ibm.com crete
9.1.38.166 senegal.itsrmdom.almaden.ibm.com senegal
9.1.38.165 diomede.itsrmdom.almaden.ibm.com diomede
#9.1.38.186 lead.almaden.ibm.com lead

7. We then used the DB2 Control Center to restore our backed-up database. See 10.5.2, “ITSANMDB database restore” on page 312.
8. We then started the IBM HTTP Administration Server, IBM HTTP Server, and IBM WS AdminServer 4.0 services on the standby server, and verified that Tivoli SAN Manager was running on the WebSphere Application Server using the WebSphere Administration Console. See 4.2.8, “Verifying the installation” on page 110.
9. Finally we restarted the Agents.

The failover process is summarized in Figure 3-30.

Figure 3-30 Failover process
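When many Agents are involved, the HOSTS-file edit described in the note can be scripted to speed up failover. The sketch below uses GNU sed; the file path and the primary server's pre-failover address (9.1.38.187) are illustrative assumptions, since Example 3-5 only shows the file after failover.

```shell
set -e

# Before failover (assumed addresses): polonium (primary) at 9.1.38.187,
# lead (standby) at 9.1.38.186.
cat > /tmp/hosts.agent <<'EOF'
9.1.38.187 polonium.almaden.ibm.com polonium
9.1.38.186 lead.almaden.ibm.com lead
EOF

# Failover step 1: repoint the primary's hostname at the standby's IP.
sed -i 's/^9\.1\.38\.187 polonium/9.1.38.186 polonium/' /tmp/hosts.agent
# Failover step 2: comment out the standby's original entry, as in Example 3-5.
sed -i 's/^9\.1\.38\.186 lead/#9.1.38.186 lead/' /tmp/hosts.agent

cat /tmp/hosts.agent
```

The same two substitutions would be applied to the HOSTS file on the Manager and on every Agent; with DNS, the equivalent change is a single record update on the DNS server.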
  • 121. 3.9.2 Summary

In this chapter, we discussed Fibre Channel standards and SAN topologies, and how they apply to IBM Tivoli SAN Manager. We also introduced inband and outband management as it relates to IBM Tivoli SAN Manager. Finally, we presented various deployment scenarios using IBM Tivoli SAN Manager.
  • 122. Part 3 Installation and basic operations

In Part 3 we describe how to install, configure, and uninstall IBM Tivoli SAN Manager. We then cover, in detail, the basic functions, including the different topology displays.

© Copyright IBM Corp. 2002, 2003. All rights reserved. 93
  • 124. Chapter 4. Installation and setup

This chapter provides information about installing IBM Tivoli SAN Manager in various environments. We discuss:

Installation of the Server and Agent components, and also installation of the remote console.

Setup of the environment after installation, including adding devices to monitor (via SNMP) and adding managed Agents. We show how to set up the monitoring parameters, such as the polling interval.

We do not cover every possibility of installation here — for complete details, consult IBM Tivoli Storage Area Network Manager: Planning and Installation Guide, SC23-4697.

© Copyright IBM Corp. 2002, 2003. All rights reserved. 95
  • 125. 4.1 Supported operating system platforms

IBM Tivoli SAN Manager has three major components: Server, Agents, and Remote console. Figure 4-1 shows the supported platforms for each component, as at the time of publication.

Supported Manager platforms: Windows 2000 SP3 (Server, Advanced Server or Professional) with SNMP service installed; AIX 5.1 Maintenance Level 2 with APAR IY34030.

Supported Agent platforms: Windows NT 4 SP6A; Windows 2000 SP2 (Server, Advanced Server or Professional), each with SP3; AIX 5.1 (with APAR IY34030) or 5.2 with support for JRE 1.3.1; Solaris 2.6 or 2.8 with support for JRE 1.3.1; Linux Red Hat Advanced Server 2.1 (32-bit) Kernel 2.4.9; SuSE Linux Enterprise Server Version 7.0 (32-bit) Kernel 2.4.7.

Supported Remote Console platforms: Windows 2000 SP3 (Server, Advanced Server or Professional) with SNMP service installed; Windows XP.

Figure 4-1 IBM Tivoli SAN Manager — supported operating system platforms

4.2 IBM Tivoli SAN Manager Windows Server installation

This section describes how to install IBM Tivoli SAN Manager Server. The steps are summarized in Figure 4-2 for the Windows Server.

Installation: static IP required, seven contiguous free ports required; fully qualified hostname required; install DB2 7.2 and FP8; upgrade DB2 JDBC drivers to version 2; install the SNMP service (if not installed); install the Server code (embedded install of the IBM WebSphere Application Server V5.0, Tivoli NetView, and Tivoli SAN Manager Server).

Figure 4-2 Installation of IBM Tivoli SAN Manager

4.2.1 Lab environment

We installed the Server on a system named LOCHNESS, with Windows 2000 Server with Fix Pack 3.
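The "seven contiguous free ports" prerequisite (9550-9556 by default, as noted in 3.8.4) can be checked by filtering netstat output for listeners in that range. This is a sketch under our own assumptions: the function name and sample data are illustrative, and it assumes a POSIX shell, whereas on the actual Windows 2000 manager you would run netstat -an and inspect the output by eye.

```shell
set -e

# find_busy_ports: read `netstat -an`-style output on stdin and print any
# line with an endpoint on ports 9550-9556, the default Tivoli SAN Manager
# port range. An empty result means the default range is free.
find_busy_ports() {
  grep -E ':955[0-6]([^0-9]|$)' || true
}

# Sample netstat output with one conflicting listener on port 9550.
sample='TCP    0.0.0.0:135     0.0.0.0:0    LISTENING
TCP    0.0.0.0:9550    0.0.0.0:0    LISTENING
TCP    0.0.0.0:12345   0.0.0.0:0    LISTENING'

printf '%s\n' "$sample" | find_busy_ports
```

If any line is printed, another service already occupies part of the range, and the installer's default port assignments would have to be changed.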
  • 126. 4.2.2 Preinstallation tasks

Before starting the installation you need to ensure that the following requirements are met.

Fully qualified host name

Tivoli SAN Manager requires a fully qualified hostname. You can verify your computer host name setting by right-clicking My Computer on the desktop and selecting Properties. When the window opens, click Network Identification and you will see information like Figure 4-3.

Figure 4-3 Verifying system host name

If you do not have a full computer name, including the domain name, change it by clicking Properties and supplying the fully qualified domain name (FQDN) as shown in Figure 4-4.

Figure 4-4 Computer name change

After this change, you need to reboot the system for it to become effective.

Change the HOSTS file

On Windows 2000 systems with Fix Pack 3 installed, you must edit the HOSTS file to resolve the long host name. Normally, the address resolution protocol (ARP) returns the short name
rather than the fully qualified host name. This can be changed in the hosts tables on the DNS server and on the local computer. For a Windows 2000 system, edit the HOSTS file in %SystemRoot%\system32\drivers\etc. The %SystemRoot% is the installation directory for Windows 2000, usually WINNT. The long name should appear before the short name, as in Example 4-1.

Example 4-1 POLONIUM HOSTS file

9.1.38.167 lochness.almaden.ibm.com lochness
9.1.38.166 senegal.almaden.ibm.com senegal
9.1.38.150 bonnie.almaden.ibm.com bonnie
127.0.0.1 localhost

Attention: Host names are case-sensitive. The case used for the computer name in Network Identification (Figure 4-3) must be the same as that used in the HOSTS file.

Check for existing Tivoli NetView installation

If you have an existing Tivoli NetView 7.1.3 for Windows 2000 installation, you can use it with IBM Tivoli SAN Manager Server. If any other version is installed, you must uninstall it before installing the IBM Tivoli SAN Manager Server.

4.2.3 DB2 installation

As IBM Tivoli SAN Manager stores its data in a database, we need to install DB2 Version 7.2, which is today the only supported database. DB2 needs to be installed on the same system as the IBM Tivoli SAN Manager Server installation — remote databases are not supported.

Tip: The database can also be used for other data, but we recommend a dedicated database for IBM Tivoli SAN Manager to avoid any potential performance impact.

If you are installing on a system which already has DB2 Enterprise Edition Version 7.2 installed, you need to install Fix Pack 8 to meet the requirements. Before installing DB2, you should create a userid with administrative rights, and install DB2 with this userid. In our example we created the userid db2admin. If this user does not already exist, it will be created during DB2 installation.

Important: Installation can only be performed with a userid with local administrative rights.
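The HOSTS ordering rule from "Change the HOSTS file" (fully qualified name before short name, per Example 4-1) is easy to get wrong by hand. A quick mechanical check is sketched below; the function name is our own and it assumes awk on a POSIX shell, since on the Windows 2000 manager itself you would simply inspect the file.

```shell
set -e

# check_hosts_order: report (and exit nonzero on) any HOSTS entry whose
# second field is not a dotted, fully qualified name.
check_hosts_order() {
  awk '/^[0-9]/ && NF >= 3 && $2 !~ /\./ { print "bad order: " $0; bad = 1 }
       END { exit bad }' "$1"
}

cat > /tmp/hosts.check <<'EOF'
9.1.38.167 lochness.almaden.ibm.com lochness
9.1.38.166 senegal.almaden.ibm.com senegal
127.0.0.1 localhost
EOF

check_hosts_order /tmp/hosts.check && echo "HOSTS ordering OK"
```

Entries with a single name, such as localhost, are left alone; only lines carrying both a long and a short name are tested.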
When installing DB2, you only need to select the DB2 Enterprise Edition component. You can then accept all defaults — the only thing you need to change is to select Do not install the OLAP Starter Kit. After installation, reboot the system. When the system restarts, check that the DB2 service was started as shown in Figure 4-5.
  • 128. Figure 4-5 DB2 services

4.2.4 Upgrading DB2 with Fix Pack 8

Fix Pack 8 is required for the DB2 installation used with IBM Tivoli SAN Manager. You can get the fix pack from:
http://www-3.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/download.d2w/report

To apply the Fix Pack do the following:
1. Log on to the system with the userid used for DB2 installation, in our example db2admin.
2. Stop all applications accessing DB2 databases, and stop all DB2 services (including DB2 Warehouse if running).
3. Unzip the fix pack file you downloaded.
4. Run SETUP.EXE. This will install the upgrade over your existing DB2 installation.
5. Reboot the system.

Upgrade JDBC drivers to Version 2

To upgrade your DB2 JDBC drivers to Version 2, follow these steps:
1. Close all browser windows.
2. Open a command prompt window and use it to perform all following steps, and monitor error messages.
3. Change the drive and directory to where you installed the DB2 executable files. The default directory is C:\Program Files\SQLLIB.
4. Change to the directory java12.
5. Look for the file inuse. If it exists, and contains JDBC 2.0, the correct JDBC driver is already installed. If the correct driver is not installed, follow these steps:
a. Stop all programs that might be using DB2.
b. Stop DB2 by issuing the command: db2stop. If DB2 does not stop with this command, you can use db2stop force.
c. Run the batch file usejdbc2.bat. Make sure that there are no error messages. If there are, correct the errors and try again.
d. Restart DB2 by issuing the command: db2start.

Tip: If you have problems running usejdbc2.bat, check whether any Java applications are running. Stop them and run usejdbc2.bat again.

4.2.5 Install the SNMP service
Tivoli NetView, which is a component of IBM Tivoli SAN Manager, requires the Windows 2000 SNMP service. To install this, select Control Panel -> Add/Remove Programs -> Add/Remove Windows Components as shown in Figure 4-6.

Figure 4-6 Windows Components Wizard
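Step 5 above keys off the java12/inuse marker file. The decision logic can be sketched as a small shell check — a minimal sketch only, using an illustrative /tmp path in place of the real SQLLIB directory:

```shell
#!/bin/sh
# Sketch of the step-5 check: the java12/inuse file records which JDBC
# driver level is active. The /tmp path stands in for C:\Program Files\SQLLIB.
jdbc2_active() {
    inuse="$1/inuse"
    [ -f "$inuse" ] && grep -q "JDBC 2.0" "$inuse"
}

mkdir -p /tmp/sqllib/java12
echo "JDBC 2.0" > /tmp/sqllib/java12/inuse   # simulate an upgraded install

if jdbc2_active /tmp/sqllib/java12; then
    echo "JDBC 2.0 driver already in use"
else
    echo "run usejdbc2.bat (db2stop first, db2start after)"
fi
```

If the file is missing or names another level, the steps b through d above (db2stop, usejdbc2.bat, db2start) are needed.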
  • 130. Select Management and Monitoring Tools and click Details (Figure 4-7). Figure 4-7 SNMP install Select Simple Network Management Protocol, and click OK. The installation program will prompt you for the installation CD or location where you have the installation files available. After completing these steps we are ready to install the IBM Tivoli SAN Manager Server code.4.2.6 Checking for the SNMP community name After you have installed SNMP or anytime you apply a service pack or fix pack to Windows, you should check the SNMP community name. To do this: 1. Start -> Settings -> Control Panel -> Administrative Tools -> Services. 2. Right–click SNMP Service and select Properties (shown in Figure 4-8) 3. On the General tab, make sure the Startup type is Automatic. 4. On the Security tab, make sure the Community name is public with READ ONLY rights. Chapter 4. Installation and setup 101
Figure 4-8 SNMP Service Properties panel

4.2.7 IBM Tivoli SAN Manager Server install
The IBM Tivoli SAN Manager Server installation must be performed with a userid with Administrative rights — we used db2admin. Follow these steps to successfully install:

Note: Embedded version of the IBM WebSphere Application Server – Express. The installation process automatically installs the embedded version of WebSphere — you do not have to install it separately. There are some differences between this embedded version and WebSphere Application Server; for example, you will no longer see the WebSphere Administrative Console, it uses less memory, and it is easier to install and maintain.

Note: MQSeries is no longer included with (or used by) Tivoli SAN Manager.

1. Run LAUNCH.EXE from the installation directory. Figure 4-9 shows the startup window.
  • 132. Figure 4-9 Selecting the product to install2. Select Manager and click Next to continue.3. Select the language — for example, English and click OK. The Welcome window, shown in Figure 4-10, now displays.Figure 4-10 Welcome window4. Click Next to display the license agreement window. Read and accept the license and click Next to continue. You will be prompted for the directory to install Tivoli SAN Manager, shown in Figure 4-11. Chapter 4. Installation and setup 103
  • 133. Figure 4-11 Installation path 5. It is recommended that you accept the default directory. Click Next to continue, and the base port selection window will display, as in Figure 4-12. Figure 4-12 Port range 6. The installation program requires seven consecutive free ports. You only need to define the starting port. In our example we used the default port 9550. Click Next to continue, and you will see the window shown in Figure 4-13.104 IBM Tivoli Storage Area Network Manager: A Practical Introduction
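The installer needs the whole base-port range (with the default base 9550, ports 9550 through 9556) to be free. As a rough pre-check, that logic can be sketched in shell; here the used-port list is supplied by hand (in practice you would derive it from netstat output), and all port numbers are examples:

```shell
#!/bin/sh
# Sketch: check whether $2 consecutive ports starting at $1 are free,
# given a whitespace-separated list of ports already in use ($3).
range_free() {
    base=$1; count=$2; used=$3
    p=$base
    end=$((base + count - 1))
    while [ "$p" -le "$end" ]; do
        for u in $used; do
            [ "$u" -eq "$p" ] && return 1   # port in range already taken
        done
        p=$((p + 1))
    done
    return 0
}

used_ports="135 445 9553"   # example list; 9553 collides with the range
range_free 9550 7 "$used_ports" && echo "9550-9556 free" || echo "range in use"
```

The same function with count 4 or 6 covers the agent and remote console installs described later.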
Figure 4-13 DB2 admin user

On this window you need to specify the DB2 administrative userid and password. In our example we used db2admin. Click Next to continue and the window in Figure 4-14 displays.

Note: The database administration userid must exist before installing the IBM Tivoli SAN Manager Server.

Figure 4-14 SAN Manager database
  • 135. 7. Here you specify the name which will be used for the IBM Tivoli SAN Manager Server database, and a userid associated with this database. Tip: We recommend using a meaningful name for the database as this can simplify other operations related to the database such as administration and backups. We accepted the default name, ITSANMDB. This database stores the IBM Tivoli SAN Manager Server information which comes from outband and inband Agents. The DB2 administrator userid specified in the previous step will be used to create the userid entered on this window (db2user1 in our case), which will then be used to access the Server database. Attention: The userid which is specified here must be different from the database administration userid. After completing the fields, click Next to continue. The window in Figure 4-15 displays. Figure 4-15 WebSphere Administrator password 8. Here you need to specify the userid for WebSphere Administration. This should be an existing system userid. In our example we entered wasadmin. Click Next to continue and you will see a window similar to Figure 4-16. Tip: The WebSphere userid specified here must already exist on your system. In our sample we defined an ID, WASADMIN. The password used here should never expire on your system.106 IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 136. Figure 4-16 Host authentication password9. Managed systems (Tivoli SAN Manager Agents) have to authenticate to the Server when they send data to it. For this reason, you need to supply an authentication password during installation. The same password will also be used during installation of Agents (see Step 6 on page 116) and Remote Consoles (Step 7 on page 123). After supplying the password, click Next to continue and you will see a window similar to Figure 4-17.Figure 4-17 NetView install drive10.Specify a drive letter for installing IBM Tivoli NetView. Click Next to continue and you will see the window in Figure 4-18. Chapter 4. Installation and setup 107
  • 137. Note: This panel, and the next, will not display if Tivoli NetView Version 7.1.3 is already installed. This is the only version supported to work with IBM Tivoli SAN Manager Server. Figure 4-18 NetView password 11.Here you specify the userid and password for running the NetView service. The installation program will create this userid if it does not exist. Click Next to continue, and the Tivoli SAN Manager Installation summary window, shown in Figure 4-19, will display. Figure 4-19 Installation path and size108 IBM Tivoli Storage Area Network Manager: A Practical Introduction
12. On this window you can see the installation path, which defaults to \tivoli\itsanm\manager, and the size of the installed code. Click Next to continue and the installation will start, as shown in Figure 4-20.

Figure 4-20 Installation progress

13. After installation is complete, the window in Figure 4-21 appears.

Figure 4-21 Finished installation

14. Click Next to continue, and you will be prompted to reboot the system (required).
  • 139. 4.2.8 Verifying the installation After restarting the system, you should verify that the IBM Tivoli SAN Manager Server application is running correctly. Check the SAN Manager service is running. The service can be started or stopped with the Service applet in Administrative Tools (shown in Figure 4-22). If it is not running, right-click the IBM WebSphere Application Server V5 – ITSANM–Manager entry and select Start. Figure 4-22 Tivoli SAN Manager Windows Service You should also check the HOSTS file which is modified by the Tivoli NetView installation. The entry shown in Example 4-2 was created in our environment. Example 4-2 Tivoli NetView HOSTS file entry # # The following entry was created by NetView based on Registry information. # 9.1.38.167 lochness lochness.almaden.ibm.com Tivoli NetView checks the HOSTS file every time it starts and if this exact line is missing it will recreate the entry. This entry could have been inserted before the entry we made for long host name resolution as shown in Example 4-1 on page 98, meaning it takes precedence. To avoid this, check that the lines shown in Example 4-2 are at the end of the HOSTS file (moving them if necessary), so that it looks similar to Example 4-3. Example 4-3 Correct HOSTS file order 9.1.38.167 lochness.almaden.ibm.com lochness 9.1.38.166 senegal.almaden.ibm.com senegal 9.1.38.150 bonnie.almaden.ibm.com bonnie 127.0.0.1 localhost # # The following entry was created by NetView based on Registry information. # 9.1.38.167 lochness lochness.almaden.ibm.com As you can see, the long host name entry precedes the Tivoli NetView entry. Attention: Host names are case sensitive!110 IBM Tivoli Storage Area Network Manager: A Practical Introduction
You should also check the log file after installation, which is found in the directory c:\tivoli\itsanm\manager\log\install\*.log. See Chapter 11, “Logging and tracing” on page 317 for more information on logging.

4.3 IBM Tivoli SAN Manager Server AIX installation
In this section we cover the IBM Tivoli SAN Manager Server installation on AIX. The installation steps are summarized in Figure 4-23.

Installation
Static IP required, seven contiguous free ports required
Fully qualified hostname required
Install DB2 7.2 and Fix Pack 8
Upgrade DB2 JDBC drivers to Version 2
Install the SNMP service (if not installed)
Install the Tivoli SAN Manager Server code: embedded install of the IBM WebSphere Application Server V5.0, Tivoli SAN Manager Server

Figure 4-23 AIX manager installation steps

4.3.1 Lab environment
In our installation we used AIX 5.1 with ML4 installed.

4.3.2 Installation summary
1) Install DB2
2) Upgrade DB2 with Fix Pack 8. You can get the fix pack from:
http://www-3.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/download.d2w/report
3) Install Tivoli SAN Manager code

Important: The AIX installation is almost identical to the Windows installation described in 4.2, “IBM Tivoli SAN Manager Windows Server installation” on page 96. The major difference is that since Tivoli NetView for AIX is not supported or installed, a separate Windows system with NetView and the remote console installed (as in 4.5, “IBM Tivoli SAN Manager Remote Console installation” on page 119) is required to view the console. Therefore the NetView screens do not appear in the AIX installation. All other installation steps are exactly the same. Since the installation uses a GUI, an XWindows server session (either native or emulated) is required.

4.3.3 Starting and stopping the AIX manager
To start the service automatically, check the following line has been added to /etc/inittab:
itsanma:2:once:/tivoli/itsanm/manager/bin/aix/startSANM.sh > /dev/console 2>&1
To start the manager on AIX, run this command (using the default directory):
/tivoli/itsanm/manager/bin/aix/startSANM.sh
To stop the manager on AIX, run this command (using the default directory):
/tivoli/itsanm/manager/bin/aix/stopSANM.sh

4.3.4 Checking the log files
After you install the manager, check the log /tivoli/itsanm/manager/mgrlog.txt for any errors. If you find installation errors, check the other logs in the directory /tivoli/itsanm/manager/log/install/ to determine where the problems occurred.

4.4 IBM Tivoli SAN Manager Agent installation
In this section we cover the installation of the IBM Tivoli SAN Manager Agent on supported platforms. You can see a summary of the installation steps in Figure 4-24.

Installation
Four contiguous free ports required
Fully qualified hostname required
Install the Agent code
Set up the service to start automatically

Figure 4-24 Agent installation

4.4.1 Lab environment
In our lab environment we installed the Agent on the following operating systems:
AIX 5.1 and 5.2
Solaris 8
Linux Red Hat Advanced Server 2.1 and SuSE SLES 7
Windows 2000 Server with Fix Pack 3

4.4.2 Preinstallation tasks
Before starting installation you need to ensure that the following requirements are met.

Fully qualified host name
The requirements for host name resolution are the same as for the IBM Tivoli SAN Manager Server installation (described in “Fully qualified host name” on page 97). You also need four contiguous free ports.

4.4.3 IBM Tivoli SAN Manager Agent install
In this section we cover the installation of the IBM Tivoli SAN Manager Agent. The installation has to be performed by a userid with Administrative rights (Windows), or root authority (UNIX). Follow these steps for the installation:
Tips:
You need 150 MB of free temporary disk space for installation.
If the installation fails on a Windows system, restart the system so that the failed partial installation will be cleaned up before trying to reinstall the agent. Delete all files below the base installation directory c:\tivoli\itsanm\agent (Windows) or /tivoli/itsanm/agent (UNIX) before reinstalling.
Before installing the agent on Linux, check the /etc/hosts file and enter the correct IP address in front of the hostname. Linux often automatically creates an entry with the loopback (127.0.0.1) address, which causes the agent to register itself with the IBM Tivoli SAN Manager Server under this address, so that it cannot be contacted.

1. Run the appropriate file from the agent subdirectory on the CD:
– AIX — ./setup.aix
– Solaris — ./setup.sol
– Linux — ./setup.lin
– Windows — SETUP.EXE
As the installation program is Java based, it will look the same on all platforms. Note that you need an XWindows session on all UNIX platforms to perform the installation. You will first be prompted to select the language for installation. We chose English. Click Next to continue. You will then see the Welcome window shown in Figure 4-25.

Figure 4-25 Welcome window

2. Click Next to display the license agreement. Read and accept the agreement, click Next and you will see a window similar to Figure 4-26.
  • 143. Figure 4-26 Installation directory 3. Here you can specify the installation directory or just accept the suggested one. Click Next to continue, you will see a window similar to Figure 4-27. Figure 4-27 Server name and port 4. Enter the IBM Tivoli SAN Manager Server fully qualified host name and the first port number you defined during Server installation, (Step 5 on page 104). We specified 9550.114 IBM Tivoli Storage Area Network Manager: A Practical Introduction
  • 144. Important: The port number specified here must match the port number specified during Server install. Click Next to continue, and you will see the window shown in Figure 4-28.Figure 4-28 Agent port5. Here you need to specify the starting port for four consecutive ports to be used by the Agent. These ports should not be used by any other application. Click Next to continue, and you will see the window shown in Figure 4-29. Chapter 4. Installation and setup 115
  • 145. Figure 4-29 Agent access password 6. On this window you define the Agent access password which has to be the same as you defined during Server installation (Step 8 on page 106). Click Next — you will see the installation check window, as in Figure 4-30. Figure 4-30 Installation size 7. This shows the installation directory and size. Click Next to start the installation. When complete, you will see the window in Figure 4-31. Click Finish to complete the installation.116 IBM Tivoli Storage Area Network Manager: A Practical Introduction
Figure 4-31 Installation finished

8. Check the log file c:\tivoli\itsanm\agent\log.txt (Windows) or /tivoli/itsanm/agent/log.txt (UNIX) for any errors.

4.4.4 Configure the Agent service to start automatically
We recommend starting the Agent service automatically.

AIX
The Agent service is started by running the command tcstart.sh from the directory <install>/bin/aix. Stop the Agent service with tcstop.sh. To start the service automatically, IBM Tivoli SAN Manager uses the BSD-style rc.d directories on AIX. Since the default run-level is 2, it creates the needed start/stop scripts in /etc/rc/rc2.d. There are two scripts:
S90itsrm_agent - starts the agent when the run-level is entered (Example 4-4).
K90itsrm_agent - stops the agent when the run-level is left.

Example 4-4 rc2.d start script: S90itsrm_agent used on AIX
#!/bin/sh
TSNM_DIR=/opt/tivoli/itsanm/agent/bin/aix
if [ -f "$TSNM_DIR/tcstart.sh" ] && [ -r "$TSNM_DIR/tcstart.sh" ] && [ -x "$TSNM_DIR/tcstart.sh" ]
then
$TSNM_DIR/tcstart.sh > $TSNM_DIR/../../log/S90_tcstart.log 2>&1 &
# $TSNM_DIR/tcstart.sh > /dev/null &
fi
  • 147. Solaris The Agent service is started by running the command tcstart.sh from the directory <install>/bin/solaris2. Stop the Agent service with tcstop.sh. The installation program will create a startup script S90itsrm_agent in the directory /etc/rc/rc2.d. This will cause the Agent to start at boot time. Linux The Agent service is started by running the command tcstart from the directory tivoli/itsanm/agent/bin/linux. Stop the Agent service with tcstop. The installation program will create a startup script S90itsrm_agent in the directory /etc/rc/rc2.d and /etc/rc/rc3.d. This will cause the Agent to start at boot time. Windows The service can be started or stopped with the Service applet in Administrative Tools. When you open the applet you will see the window in Figure 4-32. Figure 4-32 Agent Windows service The startup type should be set to Automatic, for the service to start automatically. You can also use Command Line commands: To start — net start “ITSANM-Agent” To stop — net stop “ITSANM-Agent”118 IBM Tivoli Storage Area Network Manager: A Practical Introduction
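The per-platform start commands above differ only in the subdirectory and script name. That mapping can be sketched as a small dispatcher — a sketch only, assuming the default /tivoli/itsanm/agent install root (note the AIX rc script in Example 4-4 uses /opt/tivoli/itsanm/agent, so adjust the root to match your installation):

```shell
#!/bin/sh
# Sketch: map the uname platform string to the agent start command
# location described above. The install root is assumed to be the
# default /tivoli/itsanm/agent.
agent_start_path() {
    root=/tivoli/itsanm/agent
    case "$1" in
        AIX)   echo "$root/bin/aix/tcstart.sh" ;;
        SunOS) echo "$root/bin/solaris2/tcstart.sh" ;;
        Linux) echo "$root/bin/linux/tcstart" ;;
        *)     echo "unsupported platform: $1" ;;
    esac
}

# Print the start command for the current platform:
agent_start_path "$(uname -s)"
```

On Windows the equivalent is the service control shown above (net start "ITSANM-Agent"), so it is excluded from the dispatcher.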
4.5 IBM Tivoli SAN Manager Remote Console installation
In this section we cover installation of the IBM Tivoli SAN Manager Remote Console. The installation steps are shown in Figure 4-33.

Installation
Six contiguous free ports required
Fully qualified hostname required
Install the SNMP service (if not installed)
Install the Console code
Check if service started automatically
Correct the HOSTS file

Figure 4-33 Console installation

4.5.1 Lab environment
We installed the Remote Console on a Windows 2000 Server with Fix Pack 3, called WISLA.

4.5.2 Preinstallation tasks
Before starting installation, make sure that the following requirements are met.

SNMP Service installed
Make sure that you have installed the SNMP service and have an SNMP community name of Public defined, as described in 4.2.5, “Install the SNMP service” on page 100 and 4.2.6, “Checking for the SNMP community name” on page 101.

Fully qualified host name
The requirements for host name resolution are the same as for the IBM Tivoli SAN Manager Server installation described in “Fully qualified host name” on page 97. You also need six contiguous free ports.

Check for existing Tivoli NetView installation
If you have an existing Tivoli NetView 7.1.3 installation, you can use it with the IBM Tivoli SAN Manager Server installation. If you have any other version installed, you must uninstall it before installing the IBM Tivoli SAN Manager Console.

4.5.3 Installing the Console
The IBM Tivoli SAN Manager Console remotely displays information about the monitored SAN. The installation must be performed by a userid with Administrative rights. At the time of writing, installation is supported on the Windows 2000 and Windows XP platforms. Follow these steps to successfully install the remote console:
Tips:
You need 150 MB of free temporary disk space.
If the installation fails, restart the system so that the failed partial installation will be cleaned up before trying to reinstall. Delete all files below the base installation directory c:\tivoli\itsanm\console before reinstalling.
If Tivoli NetView Version 7.1.3 is already installed, ensure these applications are stopped: Web Console, Web Console Security, MIB Loader, MIB Browser, Netmon Seed Editor, Tivoli Event Console Adaptor Configurator.

Note: The Remote Console can be installed on an existing Agent system.

1. Run LAUNCH.EXE from the installation directory. The selection window is shown in Figure 4-34.

Figure 4-34 Start the installation

2. Select Remote Console and click Next. The following window prompts you to select the language. We selected English. The Welcome window will display, shown in Figure 4-35.
  • 150. Figure 4-35 Welcome window3. Click Next to continue, and the License window displays. Read and accept the license. Click Next to continue, and you will see a window similar to Figure 4-36.Figure 4-36 Installation directory4. Specify the installation directory, click Next, and the window shown in Figure 4-37 displays. Chapter 4. Installation and setup 121
Figure 4-37 Server information

5. Specify the fully qualified host name of the Server and the Server port which you defined during Server installation (Step 5 on page 104). Click Next to continue with the installation, and you will see a window similar to Figure 4-38.

Figure 4-38 Console ports

6. Specify the starting port of a six port range. These ports should not be in use by any other application. Click Next to continue, and you will see a window similar to Figure 4-39.
  • 152. Figure 4-39 Console access password7. On this window you define the Console access password which has to be the same as you defined during Server installation Step 9 on page 107. Click Next to continue, you will see a window similar to Figure 4-40.Figure 4-40 Tivoli NetView installation drive8. As Tivoli NetView is part of the IBM Tivoli SAN Manager Console install you need to specify the drive letter where it will be installed. Click Next — you will see a window like Figure 4-41. Chapter 4. Installation and setup 123
  • 153. Note: This panel and the next will not display if NetView Version 7.1.3 is already installed. Figure 4-41 Tivoli NetView service password 9. Specify the userid and password to be used for the NetView service. The installation program will create this userid if it does not exist. Click Next to display the summary window (Figure 4-42) Figure 4-42 Installation summary124 IBM Tivoli Storage Area Network Manager: A Practical Introduction
10. The summary window shows the selected directory and the size of the installation. Click Next to continue, and the installation will proceed. When it is complete, the window shown in Figure 4-43 displays.

Figure 4-43 Installation finished

11. Click Next to continue, then Finish to complete the installation. You need to restart the system after installation. Check the log files c:\tivoli\itsanm\console\log.txt and c:\tivoli\itsanm\console\nvlog* for any errors.

4.5.4 Check if the service started automatically
After rebooting, you should check that the IBM Tivoli SAN Manager Console service has started. Use the Services applet, as shown in Figure 4-44, and look for ITSANM-Console.

Figure 4-44 Console service

If the service was started successfully, the status should be Started. You also need to check the HOSTS file, as the Tivoli NetView installation inserts lines similar to Example 4-5.
Example 4-5 Tivoli NetView entry
#
# The following entry was created by NetView based on Registry information.
#
9.1.38.169 wisla.almaden.ibm.com wisla

The long name must be resolved before the short name; therefore, check there is a suitable long name entry before the lines made by Tivoli NetView, as shown in Example 4-6. Add or edit the line if necessary.

Tip: Do not delete the Tivoli NetView entry, as it will be added every time you start IBM Tivoli SAN Manager Console.

Example 4-6 Corrected HOSTS file entry
9.1.38.169 wisla.almaden.ibm.com wisla
#
# The following entry was created by NetView based on Registry information.
#
9.1.38.169 wisla.almaden.ibm.com wisla

4.6 IBM Tivoli SAN Manager configuration
Now we show the post-installation configuration of IBM Tivoli SAN Manager. You can see the configuration steps in Figure 4-45.

Configuration and setup
Server install
Agent install (optional)
Console install (optional)
Configuring SNMP trap forwarding on devices
Configuring the outband Agents
Check the inband Agents
Setting up the MIB file in Tivoli NetView
Perform initial poll and set up the poll interval

Figure 4-45 Configuration steps

After installing the Server, Agent and the Console, you need to set up the environment.

4.6.1 Configuring SNMP trap forwarding on devices
There are several ways to configure Tivoli SAN Manager for SNMP traps.

Method 1: Forward traps to local Tivoli NetView console
In this scenario you set up the devices to send SNMP traps to the NetView console which is installed on the Tivoli SAN Manager Server. An example of this setup is shown in Figure 4-46.
  • 156. Managed Host (Agent ) Disk array Managed Host (Agent) Disk array Managed Hos t (Agent) Disk array SAN Switch SNM P Dis k array Disk array IBM Tiv oli Storage Area Network ManagerFigure 4-46 SNMP traps to local NetView consoleNetView listens for SNMP traps on port 162 and the default community is public. When thetrap arrives to the Tivoli NetView console it will be logged in the NetView Event browser andthen forwarded to Tivoli SAN Manager as shown in Figure 4-47. Tivoli NetView is configuredduring installation of the Tivoli SAN Manager Server for trap forwarding to the IBM Tivoli SANManager Server. SAN Manager Server SNMP Trap TCP Tivoli NetView SAN Manager 162 trapfrwd.conf fibre channel switch (trap forwarding to TCP/IP port 9556)Figure 4-47 SNMP trap receptionNetView forwards SNMP traps to the defined TCP/IP port, which is the sixth port derived fromthe base port defined during installation, shown in 4.2.7, “IBM Tivoli SAN Manager Serverinstall” on page 102. We used the base port 9550, so the trap forwarding port is 9556.With this setup, the SNMP trap information will appear in the NetView Event browser andSAN Manager will use it for changing the topology map. Note: If the traps are not forwarded to SAN Manager, the topology map will be updated based on the information coming from Agents at regular polling intervals. The default IBM Tivoli SAN Manager Server installation (including NetView install) will set up the trap forwarding correctly. Chapter 4. Installation and setup 127
Existing NetView installation
If you installed Tivoli SAN Manager with an existing NetView, you need to set up trap forwarding. To do this:

1. Configure the Tivoli NetView trapfrwd daemon. Edit the trapfrwd.conf file in the directory \usr\ov\conf. This file has two sections: Hosts and Traps. Modify the Hosts section to specify the host name and port to forward traps to (in our case, port 9556 on host LOCHNESS.ALMADEN.IBM.COM). Modify the Traps section to specify which traps Tivoli NetView should forward. The traps to forward for Tivoli SAN Manager are:
1.3.6.1.2 * (includes MIB-2 traps, and McDATA's FC Management MIB traps)
1.3.6.1.3 * (includes FE MIB and FC Management MIB traps)
1.3.6.1.4 * (includes proprietary MIB traps, and QLogic's FC Management MIB traps)

Example 4-7 shows a sample trapfrwd.conf file.

Example 4-7 trapfrwd.conf file
[Hosts]
#host1.tivoli.com 0
#localhost 1662
lochness.almaden.ibm.com 9556
[End Hosts]
[Traps]
#1.3.6.1.4.1.2.6.3 *
#mgmt
1.3.6.1.2 *
#experimental
1.3.6.1.3 *
#Andiamo
1.3.6.1.4.1.9524 *
#Brocade
1.3.6.1.4.1.1588 *
#Cisco
1.3.6.1.4.1.9 *
#Gadzoox
1.3.6.1.4.1.1754 *
#Inrange
1.3.6.1.4.1.5808 *
#McData
1.3.6.1.4.1.289 *
#Nishan
1.3.6.1.4.1.4369 *
#QLogic
1.3.6.1.4.1.1663 *
[End Traps]

2. The trapfrwd daemon must be running before traps are forwarded. Tivoli NetView does not start this daemon by default. To configure Tivoli NetView to start the trapfrwd daemon, enter these commands at a DOS prompt:
ovaddobj \usr\ov\lrf\trapfrwd.lrf
ovstart trapfrwd
To verify trapfrwd is running, run Server Setup from the NetView Options menu (Figure 4-48).
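If you prefer to script the trapfrwd.conf change rather than edit it by hand, the Hosts-section insert can be sketched as below. This is a minimal sketch, not a product-supplied tool; the /tmp file path and the lochness/9556 entry mirror our lab environment:

```shell
#!/bin/sh
# Sketch: insert a "host port" forwarding entry just before the
# [End Hosts] marker of a trapfrwd.conf-style file.
add_forward_host() {
    conf=$1; entry=$2
    awk -v e="$entry" '/^\[End Hosts\]/ { print e } { print }' "$conf" > "$conf.new" \
        && mv "$conf.new" "$conf"
}

# Sample file with the same section layout as Example 4-7:
cat > /tmp/trapfrwd.conf <<'EOF'
[Hosts]
[End Hosts]
[Traps]
1.3.6.1.2 *
[End Traps]
EOF

add_forward_host /tmp/trapfrwd.conf "lochness.almaden.ibm.com 9556"
grep -A1 '^\[Hosts\]' /tmp/trapfrwd.conf   # show the inserted entry
```

The awk rule prints the new entry immediately before [End Hosts], leaving the rest of the file untouched; the daemon still needs to be restarted (ovstart trapfrwd) to pick up the change.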
  • 158. Figure 4-48 Trapfwd daemonAfter trap forwarding is enabled, configure the SAN components such as switches to sendtheir SNMP traps to the NetView console. Note: This type of setup will give you the best results, especially for devices where you cannot change the number of SNMP recipients and the destination ports.Method 2: Forward traps directly to Tivoli SAN ManagerIn this example you configure the SAN devices to send SNMP traps directly to the Tivoli SANManager Server. The receiving port number is the primary port number plus six ports asdescribed in “Method 1: Forward traps to local Tivoli NetView console” on page 126. In thiscase traps are only used to reflect the topology changes and they will not be shown in theNetView Event browser. Note: Some of the devices do not allow changing the SNMP port — they will only send traps to port 162. In such cases this scenario is not useful. Chapter 4. Installation and setup 129
Method 3: Forward traps to SAN Manager and separate SNMP console
In this example you set up the SAN devices to send SNMP traps to both the Tivoli SAN Manager Server and to a separate SNMP console (which you have installed in your organization), as shown in Figure 4-49.

Figure 4-49 SNMP traps for two destinations

The receiving port number for the Tivoli SAN Manager Server is the primary port number plus six ports as described in “Method 1: Forward traps to local Tivoli NetView console” on page 126. The receiving port number for the SNMP console is 162. In this case traps are used to reflect the topology changes and they will also show in the SNMP console events. The SNMP console in this case could be another Tivoli NetView installation or any other SNMP management application. For such a setup, the devices have to support setting multiple trap receivers and also changing the trap destination port. As this functionality is not supported in all devices, this scenario is not recommended.

4.6.2 Configuring the outband agents
IBM Tivoli SAN Manager Server uses Agents to discover the storage environment and to monitor status. These Agents are set up in the Agent Configuration panel. Start this by selecting Configure Agents from the NetView console SAN menu, as shown in Figure 4-50.
Figure 4-50 Agent configuration

The configuration panel has two parts — for inband and outband agents. The outband Agents are defined in the bottom half of the panel. Here, you define all switches in the SAN you want to monitor. To define such an Agent, click Add and you will see a window as in Figure 4-51.

Figure 4-51 Outband Agent definition

Enter the host name or IP address of the switch and click OK to continue. The Agent will appear in the agent list as shown in Figure 4-50. The state of the Agent must be Contacted if you want IBM Tivoli SAN Manager to get data from it. To remove an already defined Agent, select it and click Remove.

Defining logon ID for zone information
At the time of writing, Tivoli SAN Manager can retrieve the zone information from IBM Fibre Channel Switches and from Brocade Silkworm Fibre Channel Switches. To accomplish this, Tivoli SAN Manager uses API calls to retrieve zoning information. To use this API, Tivoli SAN Manager has to log in to the switch with administrative rights. If you wish to see zoning information, you need to specify the login ID for the Agents you define. This can be done by selecting the defined Agent and clicking Advanced (from the Configure Agents window shown in Figure 4-50). You will see a window like Figure 4-52.

Figure 4-52 Login ID definition

Enter the user name and password for the switch login and click OK to save. You will then be able to see zone information for your switches as described in “Zone view” on page 165.

Tip: It is only necessary to enter ID and password information for one switch in each SAN to retrieve the zoning information. We recommend entering this information for at least two switches, however, for redundancy. Enabling more switches than necessary for API zone discovery may slow performance.

4.6.3 Checking inband agents
After you have installed Agents on the managed systems (as described in 4.4.3, “IBM Tivoli SAN Manager Agent install” on page 112), they should appear in the Agent Configuration panel with an Agent state of Contacted, as shown in Figure 4-50. If the Agent does not appear in the panel, check the Agent log file for the cause. You can only remove Agents which are no longer responding to the Server. Such Agents will show a Not responding status, as shown in Figure 4-53.

Figure 4-53 Not responding inband agent

To remove such an Agent, select it and click Remove.

4.6.4 Performing initial poll and setting up the poll interval
After you have set up the Agents and devices for use with the SAN Manager Server, the initial poll will be performed. You can manually poll using the SAN Configuration panel, shown in Figure 4-54. To access this panel, select Configure Manager from the NetView SAN menu.
Figure 4-54 SAN configuration

Click Poll Now to perform a manual poll.

Note: Polling takes time and is dependent on the size of the SAN.

If you did not configure trap forwarding for the SAN devices (as described in 4.6.1, “Configuring SNMP trap forwarding on devices” on page 126), you will need to define the polling interval. In this case, topology changes will not be event driven from the devices, but will be picked up at each regular poll. You can set up the poll interval in the SAN Configuration panel (Figure 4-54). After specifying the poll interval, click OK to save the changes. The polling interval can be specified in:
– Minutes
– Hours
– Days (you can specify the time of day for polling)
– Weeks (you can specify the day of the week and the time of day for polling)

Tip: You do not need to configure the polling interval if all your devices are set to send SNMP traps to either the local NetView console or the Tivoli SAN Manager Server.

4.7 Tivoli SAN Manager upgrade to Version 1.2
In this section we describe how to upgrade IBM Tivoli SAN Manager components from Version 1.1 (or 1.1.1) to Version 1.2. To preserve the existing database, you must specify the same database name, DB2 user ID, and password that you specified when installing the previous version. These are the changes that occur when upgrading to IBM Tivoli Storage Area Network Manager V1.2:
– JVM 1.3.0 is upgraded to 1.3.1.
– NetView is upgraded from 7.1.1 to 7.1.3 (Windows manager and remote console).
– MQSeries is removed.
– WebSphere Application Server is replaced with Embedded WebSphere Application Server – Express, Version 5.0 on the manager.
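Because data migration depends on re-entering the exact database name from the previous install, it is worth confirming that name before starting the upgrade. A minimal sketch: `db2 list database directory` is the standard DB2 command-line processor command for listing cataloged databases; the sample output below is embedded only so the parsing step is self-contained, and TIVOLSAN is an illustrative value.

```shell
# On a live manager system you would capture the real listing with:
#   db2 list database directory
# Abbreviated sample output is embedded here purely for illustration.
sample_output=' Database 1 entry:

 Database alias                       = TIVOLSAN
 Database name                        = TIVOLSAN
 Database drive                       = C:\DB2'

# Pull out the "Database name" value - this is the name to re-enter
# during the Version 1.2 install so that existing data is migrated.
echo "$sample_output" | awk -F'= ' '/Database name/ {print $2}'
```

If the printed name differs from the Version 1.2 default (ITSANMDB), override the default with it during the upgrade.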
4.7.1 Upgrading the Windows manager
To upgrade the Windows manager, do the following:
1. Log in using the DB2 administrator ID.
2. If you have not installed DB2 FixPak 8, install it now. See 4.2.4, “Upgrading DB2 with Fix Pack 8” on page 99.
3. If Tivoli NetView 7.1.3 is installed, check that these applications are stopped:
– Web Console
– Web Console Security
– MIB Loader
– MIB Browser
– Netmon Seed Editor
– Tivoli Event Console Adaptor Configurator
4. Ensure Windows 2000 Terminal Services are not running.
5. Insert the Tivoli SAN Manager (Manager and Remote Console) CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, double-click launch.exe from the CD drive in Windows Explorer. The Launch panel will be displayed.
6. The installation process is the same as described in 4.2.7, “IBM Tivoli SAN Manager Server install” on page 102. Follow the steps in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697.

Note: The DB2 default database name for Tivoli SAN Manager Version 1.1 was TIVOLSAN. The new default in Version 1.2 is ITSANMDB. If the database name, user ID, and password are not the same as in the previous installation, data will not be migrated. Therefore, to retain your data, override the default name with the previous database name (for example, TIVOLSAN).

When the installation has completed, the Successfully Installed panel is displayed. If the correct version of Tivoli NetView was installed before you installed the manager, you will see the Finish button (Tivoli NetView is then not installed with the manager). If Tivoli NetView was not previously installed and is therefore installed with the manager, you will see a prompt to restart the system. After rebooting, check that the Tivoli SAN Manager service was started (Figure 4-22 on page 110).

4.7.2 Upgrading the remote console
Follow these steps to upgrade the remote console:
1.
Make sure that the following applications are stopped:
– Web Console
– Web Console Security
– MIB Loader
– MIB Browser
– Netmon Seed Editor
– Tivoli Event Console Adaptor Configurator
2. Insert the Tivoli Storage Area Network Manager and Remote Console CD into the CD-ROM drive and double-click launch.exe.
3. Follow the steps in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697. The installation process will automatically update your NetView Version 7.1.1 to 7.1.3.

After rebooting, check that the Tivoli SAN Manager console service was started (Figure 4-44 on page 125).

4.7.3 Upgrading the agents
This section shows how to upgrade the Tivoli SAN Manager agents. Run the appropriate setup script from the agent directory on the Agents CD. This is:
– setup.exe - Windows
– ./setup.aix - AIX
– ./setup.sol - Solaris
Follow the directions on the installation panels as described in the Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697. The agent service is automatically started after installation.

4.8 Tivoli SAN Manager uninstall
In this section we describe how to uninstall IBM Tivoli SAN Manager components.

4.8.1 Tivoli SAN Manager Server Windows uninstall
To uninstall the IBM Tivoli SAN Manager Server, do the following:
1. Close all Tivoli NetView windows.
2. Go to the Add/Remove Programs applet in Control Panel, select IBM Tivoli Storage Area Network Manager - Manager and click Change/Remove as shown in Figure 4-55.

Figure 4-55 Uninstalling the SAN Manager Server
3. To complete the uninstallation process, follow the instructions in the window. Restart the system after the uninstallation completes.
4. Delete the directory c:\tivoli\itsanm.
5. If needed, uninstall DB2.

4.8.2 Tivoli SAN Manager Server AIX uninstall
To uninstall the manager, do the following:
1. From the root directory, enter this command:
/tivoli/itsanm/manager/_uninst/uninstall
2. Follow the steps for the Windows uninstallation (4.8.1, “Tivoli SAN Manager Server Windows uninstall” on page 135).
3. A reboot is not required unless you want to reuse the manager ports (9550–9556).

Note: The GUID package is not uninstalled when you uninstall IBM Tivoli Storage Area Network Manager. If you plan to reinstall IBM Tivoli Storage Area Network Manager, you should not delete the Tivoli GUID specific files and directories; deleting them can cause IBM Tivoli Storage Area Network Manager to function improperly.

4.8.3 Tivoli SAN Manager Agent uninstall
To uninstall the Tivoli SAN Manager Agent on the various platforms, do the following:

AIX or Solaris
1. Stop the Agent service with the command:
/tivoli/itsanm/agent/bin/tcstop.sh (AIX)
/tivoli/itsanm/agent/bin/solaris2/tcstop.sh (Solaris)
2.
Check if the agent service is stopped with the command:
ps -aef | grep "java.*tsnm.baseDir"
If you do not see an entry like the one in Example 4-8, the agent service has stopped.

Example 4-8 Output of ps -aef | grep "java.*tsnm.baseDir"
root 96498 158924 0 Aug 17 pts/3 24:53 /tivoli/itsanm/agent/jre/bin/java -Dtsnm.baseDir=/tivoli/itsanm/agent -Dtsnm.localPort=9570 -Dtsnm.protocol=http:// -Djlog.noLogCmd=true -classpath /tivoli/itsanm/agent/lib/classes:/tivoli/itsanm/agent/servlet/common/lib/servlet.jar:/tivoli/itsanm/agent/lib/com.ibm.mq.jar:/tivoli/itsanm/agent/lib/com.ibm.mqjms.jar:/tivoli/itsanm/agent/lib/jms.jar:/tivoli/itsanm/agent/lib/ServiceManager.jar::/tivoli/itsanm/agent/servlet/bin/bootstrap.jar -Djavax.net.ssl.keyStore=/tivoli/itsanm/agent/conf/server.keystore -Djavax.net.ssl.keyStorePassword=YourServerKeystorePassword -Dcatalina.base=/tivoli/itsanm/agent/servlet -Dcatalina.home=/tivoli/itsanm/agent/servlet org.apache.catalina.startup.Bootstrap start
root 471386 448550 1 14:35:03 pts/4 0:00 grep java.*tsnm.baseDir

3. Start the uninstallation with the command:
/tivoli/itsanm/agent/_uninst/uninstall
4. Follow the instructions on the screen to complete the uninstallation.

Linux
1. Stop the Agent service.
2. Start the uninstallation with the command:
/tivoli/itsanm/agent/_uninst/uninstall
3. Follow the instructions on the screen to complete the uninstallation process.

Windows
To uninstall the Windows Agent, select Control Panel -> Add/Remove Programs, select IBM Tivoli Storage Area Network Manager - Agent, and click Change/Remove (Figure 4-56).

Figure 4-56 Agent uninstall

To complete the uninstallation, follow the instructions in the window, and restart the system after the uninstallation completes.

4.8.4 Tivoli SAN Manager Remote Console uninstall
To uninstall the Tivoli SAN Manager Remote Console, select Control Panel -> Add/Remove Programs, select IBM Tivoli Storage Area Network Manager - Console, and click Change/Remove as shown in Figure 4-57.
Figure 4-57 Uninstalling remote console

To complete the uninstallation process, follow the instructions in the window. Restart the system after the uninstallation completes.

4.8.5 Uninstalling the Tivoli GUID package
The Tivoli GUID (Globally Unique Identifier) package is used to resolve a computer’s identification. The GUID package gives a computer a globally unique identifier. With this identifier the computer can be uniquely identified even if it is running multiple applications, for example the IBM Tivoli SAN Manager Agent and the IBM Tivoli Storage Manager Client.

Tip: Do not uninstall the Tivoli GUID package if you are running other Tivoli applications on the system. You should only uninstall the Tivoli GUID if this is the last Tivoli application using it and you want a clean computer.

To uninstall the Tivoli GUID on the various platforms, follow these steps.

AIX
Uninstall the Tivoli GUID using SMIT or with the command:
installp -u tivoli.guid

Solaris
Uninstall the Tivoli GUID with the command:
pkgrm TIVguid

Windows
Choose Control Panel -> Add/Remove Programs, select TivGuid, and click Change/Remove as shown in Figure 4-58.
Figure 4-58 Uninstalling Tivoli GUID

To complete the uninstallation process, follow the instructions in the window. Restart the system after the uninstallation completes.

4.9 Silent install of IBM Tivoli Storage Area Network Manager
In this section we describe how to silently install IBM Tivoli SAN Manager components. Before installing the manager, make sure you have done the pre-installation planning as outlined in the IBM Tivoli Storage Area Network Manager Planning and Installation Guide, SC23-4697. By modifying the appropriate parameters in the options file for the manager, remote console, or agent, you can then run the included script to install the components. If you install IBM Tivoli Storage Area Network Manager silently, you must also uninstall it silently.

4.9.1 Silent installation high level steps
The silent installation of all components (Manager, Agent and Console) is done by:
1. Locating the sample option files on the installation media (manager.opt, agent.opt and console.opt) and copying them to local hard disk.
2. Editing these files to reflect your environment. See 4.9.2, “Installing the manager” on page 140, 4.9.3, “Installing the agent” on page 142, and 4.9.4, “How to install the remote console” on page 144 for instructions.
3. Launching the setup command in the following manner:
– Windows Agent - setup.exe -silent -options <path><option file>
– AIX Agent - setup.aix -silent -options <path>/<option file>
– Linux Agent - setup.lin -silent -options <path>/<option file>
– Solaris Agent - setup.sol -silent -options <path>/<option file>
Where <option file> is manager.opt for the manager, agent.opt for the agent, and console.opt for the remote console.

4.9.2 Installing the manager
Before installing the manager, set the appropriate parameters in the options file manager.opt. Copy it from the CD to local disk to do this. See Example 4-9.

Example 4-9 Default manager silent installation options file
###############################################################################
# InstallShield Options File Template for Manager silent install
#
# This file can be used to create an options file (i.e., response file) for the
# wizard "Setup". Options files are used with "-options" on the command line to
# modify wizard settings.
#
# The settings that can be specified for the wizard are listed below. To use
# this template, follow these steps:
#
# 1. Specify a value for a setting by replacing the characters 'value'.
#    Read each setting's documentation for information on how to specify its
#    value.
#
# 2. Save the changes to the file.
#
# 3. To use the options file with the wizard, specify -options filename
#    as a command line argument to the wizard, where filename is the name
#    of this options file.
#    example:
#    setup.exe -silent -options manager.opt
###############################################################################
#------------------------------------------------------------------------------
# Select default language
# Example:
# -P defaultLocale="English"
#------------------------------------------------------------------------------
#-P defaultLocale="English"
#------------------------------------------------------------------------------
# Installation destination directory. Specify a valid directory into which the
# product should be installed. If the directory contains spaces, enclose it in
# double-quotes.
# For example, to install the product to C:\Program Files\My
# Product in Windows, use
# -P installLocation="C:\Program Files\My Product"
# -P installLocation="C:/tivoli/itsanm/manager"
# For Unix
# -P installLocation="/tivoli/itsanm/manager"
#------------------------------------------------------------------------------
-P installLocation="C:/tivoli/itsanm/manager"
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
# -W portNoBean.portNumber=9550
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9550
#------------------------------------------------------------------------------
# DB2 administrator user ID
# Example:
# -W DBPassword.userID="db2admin"
#------------------------------------------------------------------------------
-W DBPassword.userID="db2admin"
#------------------------------------------------------------------------------
# DB2 administrator password
#
# Example:
# -W DBPassword.password="password"
#------------------------------------------------------------------------------
-W DBPassword.password="password"
#------------------------------------------------------------------------------
# Name of database to be created and used by SANM (SANM database)
#
# Example:
# -W SANPassword1.dbName="itsanmdb"
#------------------------------------------------------------------------------
-W SANPassword1.dbName="itsanmdb"
#------------------------------------------------------------------------------
# SANM database user ID, must be different than DB2 administrator user ID
#
# Example:
# -W SANPassword1.userID="db2user1"
#------------------------------------------------------------------------------
-W SANPassword1.userID="db2user1"
#------------------------------------------------------------------------------
# SANM database password
# Example:
# -W SANPassword1.password="password"
#------------------------------------------------------------------------------
-W SANPassword1.password="db2user1"
#------------------------------------------------------------------------------
# Websphere user ID
# Example:
# -W WASPassword.userID="wasuser1"
#------------------------------------------------------------------------------
-W WASPassword.userID="wasadmin"
#------------------------------------------------------------------------------
# Websphere password for the user above
# Example:
# -W WASPassword.password="password"
#------------------------------------------------------------------------------
-W WASPassword.password="wasadmin"
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
# -W
comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Drive letter where NetView is to be installed.
# Example:
# -W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
-W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
# Netview password.
# Example:
# -W beanNetViewPasswordPanel.password="password"
#------------------------------------------------------------------------------
-W beanNetViewPasswordPanel.password="netview"
#------------------------------------------------------------------------------
# Property used by installation program. Do not remove or modify.
#------------------------------------------------------------------------------
-W setWinDestinationBean.value="$P(installLocation)"

Specify the installation destination directory:
– Windows: -P installLocation="C:/tivoli/itsanm/manager"
– UNIX: -P installLocation="/opt/tivoli/itsanm/manager"

Note: This procedure accepts forward or backward slashes for directory paths on a Windows platform.

Specify the DB2 administrator user ID:
-W DBPassword.userID="db2admin"
Specify the drive letter where Tivoli NetView will be installed:
-W beanNVDriveInput.chcDriveName="C"
Specify the DB2 database name:
-W SANPassword1.dbName="itsanmdb"
Specify the password for db2user1:
-W SANPassword1.password="xxxxxxx"
Specify the WebSphere admin user:
-W WASPassword.userID="wasadmin"
Specify the WebSphere admin password:
-W WASPassword.password="xxxxxxx"
Specify the communication password:
-W comPassword.password="xxxxxxx"
Specify the NetView admin password:
-W beanNetViewPasswordPanel.password="xxxxxxx"

4.9.3 Installing the agent
Before installing the agent, set the appropriate parameters in the options file agent.opt. Copy it from the CD to local disk to do this (Example 4-10).

Example 4-10 Default agent silent installation option file
###############################################################################
# InstallShield Options File Template for Agent silent install
#
# This file can be used to create an options file (i.e., response file) for the
# wizard "Setup". Options files are used with "-options" on the command line to
# modify wizard settings.
#
# The settings that can be specified for the wizard are listed below. To use
# this template, follow these steps:
#
# 1. Specify a value for a setting by replacing the characters 'value'.
#    Read each setting's documentation for information on how to specify its
#    value.
#
# 2. Save the changes to the file.
#
# 3. To use the options file with the wizard, specify -options filename
#    as a command line argument to the wizard, where filename is the name
#    of this options file.
#    example:
#    setup.exe -silent -options agent.opt
###############################################################################
#------------------------------------------------------------------------------
# Select default language
# Example:
# -P defaultLocale="English"
#------------------------------------------------------------------------------
#-P defaultLocale="English"
#------------------------------------------------------------------------------
# Installation destination directory:
#
# The install location of the product. Specify a valid directory into which the
# product should be installed. If the directory contains spaces, enclose it in
# double-quotes. For example, to install the product to C:\Program Files\My
# Product in Windows, use
# -P installLocation="C:\Program Files\My Product"
# -P installLocation="C:/tivoli/itsanm/agent"
# For Unix
# -P installLocation="/tivoli/itsanm/agent"
#------------------------------------------------------------------------------
-P installLocation="c:/tivoli/itsanm/agent"
#------------------------------------------------------------------------------
# Specify fully qualified name of remote manager machine:
# Example:
# -W managerNamePort.managerName="manager.sanjose.ibm.com"
#------------------------------------------------------------------------------
-W managerNamePort.managerName="manager.sanjose.ibm.com"
#------------------------------------------------------------------------------
# Specify base port number of remote manager:
# Example:
# -W managerNamePort.managerPort=9550
#------------------------------------------------------------------------------
-W managerNamePort.managerPort=9550
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
# -W
portNoBean.portNumber=9570
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9570
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
# -W comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Property used by installation program. Do not remove or modify.
#------------------------------------------------------------------------------
-W setWinDestinationBean.value="$P(installLocation)"
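Because a silent install simply reads whatever is in the option file, a forgotten default value is easy to miss. A small pre-flight check sketch, assuming the option file has been copied to local disk (the two-line sample file below is written inline purely for illustration; the idea is just to grep the active -W/-P lines for obvious placeholder values):

```shell
# Stand-in option file: one real-looking value and one forgotten default.
cat > agent.opt <<'EOF'
#------------------------------------------------------------------------------
-W managerNamePort.managerName="lochness.sanjose.ibm.com"
-W comPassword.password="password"
EOF

# List active (non-comment) settings that still carry placeholder values.
grep -v '^#' agent.opt | grep -E '"(password|xxxxxxx)"' && \
    echo "WARNING: placeholder values found - edit before running setup -silent"
```

Only after the check comes back clean would you pass the file to the installer, for example setup.exe -silent -options agent.opt.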
Specify the hostname of the manager machine:
-W managerNamePort.managerName="lochness.sanjose.ibm.com"
Specify a password for manager, agent, and console communication:
-W comPassword.password="xxxxxxx"

4.9.4 How to install the remote console
Before installing the console, set the appropriate parameters in the options file console.opt. Copy it from the CD to local disk to do this (see Example 4-11).

Example 4-11 Remote console default silent install option file
###############################################################################
# InstallShield Options File Template for Remote Console silent install
#
# This file can be used to create an options file (i.e., response file) for the
# wizard "Setup". Options files are used with "-options" on the command line to
# modify wizard settings.
#
# The settings that can be specified for the wizard are listed below. To use
# this template, follow these steps:
#
# 1. Specify a value for a setting by replacing the characters 'value'.
#    Read each setting's documentation for information on how to specify its
#    value.
#
# 2. Save the changes to the file.
#
# 3. To use the options file with the wizard, specify -options filename
#    as a command line argument to the wizard, where filename is the name
#    of this options file.
#    example:
#    setup.exe -silent -options console.opt
###############################################################################
#------------------------------------------------------------------------------
# Select default language
# Example:
# -P defaultLocale="English"
#------------------------------------------------------------------------------
#-P defaultLocale="English"
#------------------------------------------------------------------------------
# Installation destination directory.
#
# The install location of the product. Specify a valid directory into which the
# product should be installed. If the directory contains spaces, enclose it in
# double-quotes.
# For example, to install the product to C:\Program Files\My
# Product in Windows, use
# -P installLocation="C:\Program Files\My Product"
# -P installLocation="C:/tivoli/itsanm/console"
#------------------------------------------------------------------------------
-P installLocation="c:/tivoli/itsanm/console"
#------------------------------------------------------------------------------
# Specify fully qualified name of remote manager machine:
# Example:
# -W beanManagerLocation.HostName="manager.sanjose.ibm.com"
#------------------------------------------------------------------------------
-W beanManagerLocation.HostName="lochness.almaden.ibm.com"
#------------------------------------------------------------------------------
# Specify base port number of remote manager:
# Example:
# -W beanManagerLocation.PortNo=9550
#------------------------------------------------------------------------------
-W beanManagerLocation.PortNo=9550
#------------------------------------------------------------------------------
# Base port number for this installation
# Example:
# -W portNoBean.portNumber=9560
#------------------------------------------------------------------------------
-W portNoBean.portNumber=9560
#------------------------------------------------------------------------------
# Manager, Agent, Console communication password
# Example:
# -W comPassword.password="password"
#------------------------------------------------------------------------------
-W comPassword.password="itso_san_jose_pw"
#------------------------------------------------------------------------------
# Drive letter where NetView is to be installed.
# Example:
# -W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
-W beanNVDriveInput.chcDriveName="C"
#------------------------------------------------------------------------------
# Netview password.
# Example:
# -W beanNetViewPasswordPanel.password="password"
#------------------------------------------------------------------------------
-W beanNetViewPasswordPanel.password="netview"

Specify the fully qualified name of the remote manager machine:
-W beanManagerLocation.HostName="lochness.almaden.ibm.com"
Specify a password for manager, agent, and console communication:
-W comPassword.password="xxxxxx"
Specify the password for Tivoli NetView:
-W beanNetViewPasswordPanel.password="xxxxxxx"

4.9.5 Silently uninstalling IBM Tivoli Storage Area Network Manager
This section describes how to uninstall IBM Tivoli Storage Area Network Manager if you have installed the product using silent installation.
Uninstalling the manager on Windows
To uninstall the manager on Windows, run the following command from the installation directory:
c:\tivoli\itsanm\manager\_uninst\uninstall -silent

Uninstalling the manager on AIX
To uninstall the AIX manager, run this command from the installation directory:
/tivoli/itsanm/manager/_uninst/uninstall -silent

Uninstalling the remote console
To uninstall the remote console, run this command from the installation directory:
c:\tivoli\itsanm\console\_uninst\uninstall -silent
Uninstalling the agents
To uninstall the Windows agent, run this command from the installation directory:
c:\tivoli\itsanm\agent\_uninst\uninstall -silent
To uninstall the UNIX agent, run this command from the installation directory:
/tivoli/itsanm/agent/_uninst/uninstall -silent

4.10 Changing passwords
If you need to change any of the passwords used during the installation process, use the procedures described in Table 4-1.

Table 4-1 Procedure to change passwords

db2admin
– Change of ID allowed? No
– Used after software gets installed? N/A
– Change of password allowed? Yes, recommended for security reasons
– How to change: Change the password from the Computer Management Administrative tool.

db2user
– Change of ID allowed? Yes
– Used after software gets installed? No
– Change of password allowed? Yes
– How to change:
1. Change the password from the Computer Management Administrative tool.
2. Use the following procedure to change the password stored inside the ITSANM properties file:
srmcp ConfigService setPW

WAS Admin
– Change of ID allowed? Yes
– Used after software gets installed? Yes
– Change of password allowed? Yes
– How to change:
1. Change the user ID/password in the following file:
<Install_Location>/apps/was/properties/soap.client.props
Modify the following entries:
com.ibm.SOAP.loginUserid=<User_ID>
com.ibm.SOAP.loginPassword=<PASSWORD>
where you replace <User_ID> or <PASSWORD>.
2. Scripts are available from IBM Support for AIX and Windows - contact your local support structure to get them.

NetView password
– Change of ID allowed? Yes
– Used after software gets installed? N/A
– Change of password allowed? Yes, recommended for security reasons
– How to change:
1. Change the password from the Computer Management Administrative tool.
2. Change the Logon Password for the “Tivoli NetView Service” from Control Panel/Services.
Host Authentication Password
– Change of ID allowed? Yes
– Used after software gets installed? N/A
– Change of password allowed? Yes, recommended for security reasons
– How to change:
1. Change the password from the Computer Management Administrative tool.
2. Use the following procedure to change the password stored inside the ITSANM properties file:
srmcp ConfigService setAuthenticationPw

Chapter 4. Installation and setup 147
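For the WAS Admin entry, Table 4-1 names two properties in soap.client.props that must be edited by hand. A minimal sketch of scripting that edit with sed; the file written inline is a stand-in for <Install_Location>/apps/was/properties/soap.client.props (a real file has many more entries), and the new user ID and password shown are hypothetical values:

```shell
# Stand-in for soap.client.props: only the two entries from Table 4-1.
cat > soap.client.props <<'EOF'
com.ibm.SOAP.loginUserid=wasadmin
com.ibm.SOAP.loginPassword=oldpassword
EOF

# Replace the user ID and password values in place (.bak keeps a backup copy;
# the replacement values here are placeholders, not defaults).
sed -i.bak \
    -e 's|^com.ibm.SOAP.loginUserid=.*|com.ibm.SOAP.loginUserid=wasadmin|' \
    -e 's|^com.ibm.SOAP.loginPassword=.*|com.ibm.SOAP.loginPassword=newpassword|' \
    soap.client.props

cat soap.client.props
```

On a live manager, point the script at the real <Install_Location> path instead of the local stand-in file.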
5

Chapter 5. Topology management

In this chapter we provide an introduction to the features of IBM Tivoli SAN Manager. We discuss the following topics:
– IBM Tivoli NetView navigation overview
– Lab environment description
– Physical and logical topology views:
  – SAN view
  – Host centric view
  – Device centric view
  – iSCSI view
  – MDS 9000
– Object status and properties
– Launch of management applications
– Practical cases

© Copyright IBM Corp. 2002, 2003. All rights reserved. 149
5.1 NetView navigation overview
Since Tivoli SAN Manager uses IBM Tivoli NetView (abbreviated as NetView) for display, before going into further detail we give you a basic overview of the NetView interface, how to navigate in it, and how IBM Tivoli SAN Manager integrates with NetView. Detailed information on NetView is in the redbook Tivoli NetView V6.01 and Friends, SG24-6019.

5.1.1 NetView interface
NetView uses a graphical interface to display a map of the IP network with all the components and interconnect elements that are discovered in the IP network. As your Storage Area Network (SAN) is also a network, Tivoli SAN Manager uses NetView and its graphical interface to display a map of the discovered storage network.

5.1.2 Maps and submaps
NetView uses maps and submaps to navigate your network and to display deeper detail as you drill down. The main map is called the root map, while each dependent map is called a submap. Your SAN topology will be displayed in the Storage Area Network submap and its dependents. You can navigate from one map to its submap simply by double-clicking the element you want to display.

5.1.3 NetView window structure
Figure 5-1 shows a basic NetView window.

Figure 5-1 NetView window (callouts: submap window, submap stack, child submap area)
The NetView window is divided into three parts:
– The submap window displays the elements included in the current view. Each element can be another submap or a device.
– The submap stack is located on the left side of the submap window. This area displays a stack of icons representing the parent submaps that you have already displayed. It shows the hierarchy of submaps you have opened for a particular map. This navigation bar can be used to go back to a higher level with one click.
– The child submap area is located at the bottom of the submap window. It shows the submaps that you have previously opened from the current submap. You can open a submap from this area, or bring it into view if it is already open in another window.

5.1.4 NetView Explorer
From the NetView map based window, you can switch to an Explorer view where all maps, submaps and objects are displayed in a tree scheme (similar to the Microsoft Windows Explorer interface). To switch to this view, right-click a submap icon and select Explore as shown in Figure 5-2.

Figure 5-2 NetView Explorer option

Figure 5-3 shows the new display using the NetView Explorer.
Figure 5-3 NetView explorer window

From here, you can change the information displayed in the right pane by changing to the Tivoli Storage Area Network Manager view in the pull-down field at the top. The previously displayed view was the System Configuration view. The new display is shown in Figure 5-4.

Figure 5-4 NetView explorer window with Tivoli Storage Area Network Manager view

Now the right pane shows Label, Name, Type, and Status for each device. You can scroll right to see additional fields.
5.1.5 NetView Navigation Tree

From any NetView window, you can switch to the Navigation Tree by clicking the tree icon circled in Figure 5-5.

Figure 5-5 NetView toolbar

NetView displays, in a tree format, all the objects contained in the maps you have already explored. Figure 5-6 shows the tree view.

Figure 5-6 NetView tree map

You can see that our SAN (circled in red) does not show its dependent objects, since we have not yet opened this map through the standard NetView navigation window. You can click any object and it will open its submap in the standard NetView view.

5.1.6 Object selection and NetView properties

To select an object, right-click it. NetView displays a context-sensitive menu with several options, including Object Properties, as shown in Figure 5-7.
Figure 5-7 NetView objects properties menu

The Object Properties for that device will display (Figure 5-8). This allows you to change NetView properties such as the label and icon type of the selected object.

Figure 5-8 NetView objects properties

Important: As IBM Tivoli SAN Manager runs its own polling and discovery processes and only uses NetView to display the discovered objects, any change to the NetView object properties will be lost as soon as IBM Tivoli SAN Manager regenerates the map.
5.1.7 Object symbols

IBM Tivoli SAN Manager uses its own set of icons, as shown in Figure 5-9. Two new icons have been added for Version 1.2: ESS and SAN Volume Controller.

Figure 5-9 IBM Tivoli SAN Manager icons

5.1.8 Object status

The color of a symbol or connection represents its status. The colors used by IBM Tivoli SAN Manager and their corresponding status are shown in Table 5-1.

Table 5-1 IBM Tivoli SAN Manager symbol color meaning

Symbol color | Connection color | Status | Status meaning
Green | Black | Normal | The device was detected in at least one of the scans.
Green | Black | New | The device was detected in at least one of the scans, and a new discovery has not yet been performed since the device was detected.
Yellow | Yellow | Marginal (suspect) | Device detected; the status is impaired but still functional.
Red | Red | Missing | None of the scans that previously detected the device are now reporting it.

IBM Tivoli NetView uses additional colors to show the specific status of devices; however, these are not used in the same way by IBM Tivoli SAN Manager.

Table 5-2 IBM Tivoli NetView additional colors

Symbol color | Status | Status meaning
Blue | Unknown | Status not determined.
Wheat (tan) | Unmanaged | The device is no longer monitored for topology and status changes.
Dark green | Acknowledged | The device was Missing, Suspect, or Unknown; the problem has been recognized and is being resolved.
Gray (used in NetView Explorer left pane) | Unknown | Status not determined.

If you suspect problems in your SAN, look in the topology displays for icons indicating a status other than normal (green). To assist in problem determination, Table 5-3 provides an overview of symbol status with possible explanations of the problem.
Table 5-3 Problem determination

Agents | Device status | Link status | Non-ISL explanation | ISL explanation
Any | Normal (green) | Marginal (yellow) | One or more, but not all, links to the device in this topology are missing. | One or more, but not all, links between the two switches are missing.
Any | Normal (green) | Critical (red) | All links to the device in this topology are missing, while other links to this device in other topologies are normal. | All links between the two switches are missing, but the out-of-band communication to the switch is normal.
Any | Critical (red) | Critical (red) | All links to the device in this topology are missing, and all other links to devices in other topologies (if any) are missing. | All links between the two switches are missing, and the out-of-band communication to the switch is missing or indicates that the switch is in critical condition.
Both | Critical (red) | Normal (black) | All in-band agents monitoring the device can no longer detect the device, for example, after a server reboot, power-off, shutdown of the agent service, Ethernet problems, and so on. | This condition should not happen. If you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.
Both | Critical (red) | Marginal (yellow) | At least one link to the device in this topology is normal and one or more links are missing. In addition, all in-band agents monitoring the device can no longer detect the device. | This condition should not happen. If you see this on an ISL where the switches on either side of the link have an out-of-band agent connected to your SAN Manager, then you are having problems with your out-of-band agent.
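Table 5-3 is essentially a lookup: find the (device status, link status) pair you see in the topology display and read off the likely cause. A minimal Python sketch of that lookup (illustrative only; this is not Tivoli SAN Manager code, and the hint strings paraphrase the table's non-ISL column):

```python
# Illustrative only: Table 5-3 recast as a Python lookup table.
# The status names and hint strings paraphrase the table.
PROBLEM_HINTS = {
    ("normal", "marginal"): "Some, but not all, links to the device in this topology are missing.",
    ("normal", "critical"): "All links to the device in this topology are missing; links in other topologies are normal.",
    ("critical", "critical"): "All links to the device are missing in this topology and in any other topology.",
    ("critical", "normal"): "All in-band agents monitoring the device can no longer detect it.",
    ("critical", "marginal"): "Some links are missing and no in-band agent can detect the device.",
}

def diagnose(device_status: str, link_status: str) -> str:
    """Return a likely non-ISL explanation for a (device, link) status pair."""
    key = (device_status.lower(), link_status.lower())
    return PROBLEM_HINTS.get(key, "No matching entry in Table 5-3.")
```

For ISLs where both switches have out-of-band agents, remember that the last two rows of the table point to out-of-band agent problems rather than device problems.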
5.1.9 Status propagation

Each object has a color representing its status. If the object is an individual device, the status shown is that of the device. If the object is a submap, the status shown reflects the summary status of all objects in its child submap. The status of lower-level objects is propagated to the higher submap as shown in Table 5-4.

Table 5-4 Status propagation rules

Object status | Symbols in the child submap
Unknown | No symbols with a status of normal, critical, suspect, or unmanaged.
Normal | All symbols are normal or acknowledged.
Suspect (marginal) | All symbols are suspect; or normal and suspect symbols; or normal, suspect, and critical symbols.
Critical | At least one symbol is critical and no symbols are normal.

5.1.10 NetView and IBM Tivoli SAN Manager integration

IBM Tivoli SAN Manager adds a new SAN menu entry in the IBM Tivoli NetView interface, shown in Figure 5-10. The SAN pull-down menu contains the following entries:

- SAN Properties: display and change object properties, such as the object label and icon
- Launch Application: run a management application
- ED/FI Properties: view ED/FI events
- ED/FI Configuration: start, stop, and configure ED/FI
- Configure Agents: add and remove agents
- Configure Manager: configure the polling and discovery scheduling
- Set Event Destination: configure SNMP and TEC event recipients
- Storage Resource Manager: launch IBM Tivoli Storage Resource Manager
- Help
Figure 5-10 SAN Properties menu

All these items are described in more detail in the following sections.

5.2 Lab 1 environment description

For demonstration purposes in the following sections, we call this lab lab1. We had the following equipment:

- Two IBM 2109-S08 switches (ITSOSW1 and ITSOSW2) with firmware V2.6.0c
- One IBM 2109-S16 switch (ITSOSW3) with firmware V2.6.0c
- One IBM 2109-F16 switch (ITSOSW4) with firmware V3.0.2
- One IBM 2108-G07 SAN Data Gateway
- One IBM pSeries F50 (BRAZIL) running AIX 4.3.3 ML10 with:
  - One IBM 6227 card with firmware 02903291
- One IBM pSeries F80 (SICILY) running AIX 5.1.1 ML2 with:
  - Two IBM 6227 cards with firmware 02903291
- One IBM pSeries 6F0 (CRETE) running AIX 4.3.3 ML10 with:
  - One IBM 6228 card with firmware 02C03891
- One Sun Enterprise 250 (SOL-E) running Sun Solaris 8 with:
  - Two JNI FCI-1063 cards with driver 2.6.11
- Three IBM xSeries 330 (LEAD, RADON, POLONIUM), each with:
  - One QLogic QLA2200 card with firmware 8.1.5.12
- One IBM xSeries 330 (TUNGSTEN) with:
  - Two QLogic QLA2200 cards with firmware 8.1.5.12
- One IBM xSeries 330 (GALLIUM) with:
  - Two QLogic QLA2300 cards with firmware 8.1.5.12
- One IBM Ultrium Scalable Tape Library (3583)
- One IBM TotalStorage FAStT700 storage server

Figure 5-11 shows the SAN topology of our lab environment.

Figure 5-11 ITSO lab1 setup

We also set up various zones within the switches; Figure 5-12 shows these. Note that this is an initial configuration which changed throughout various testing scenarios; examples shown in this book may not represent this exact configuration.
Figure 5-12 ITSO lab1 topology with zones (zones TSM and FAStT across ITSOSW1-ITSOSW2, zones ITSOSW3ALLPORTS and MSS on ITSOSW3, and zone FAStT on ITSOSW4)

5.3 Topology views

The standard IP-based IBM Tivoli NetView root map contains the IP Internet and SmartSets submaps. IBM Tivoli SAN Manager adds a third submap, called Storage Area Network, to allow navigation through your discovered SAN. Figure 5-13 shows the NetView root map with the addition of IBM Tivoli SAN Manager.
Figure 5-13 IBM Tivoli NetView root map

The Storage Area Network submap (shown in Figure 5-14) displays an icon for each available topology view. There is a SAN view icon for each discovered SAN fabric (three in our case), a Device Centric View icon, and a Host Centric View icon.

Figure 5-14 Storage Area Network submap
You can see in this figure that we had three fabrics. They are named Fabric1, Fabric3, and Fabric4, since we changed their labels using SAN -> SAN Properties, as explained in "Properties" on page 171. Figure 5-15 shows the complete list of views available. In the following sections we describe the content of each view.

Figure 5-15 Topology views (the hierarchy from the Tivoli NetView root map down to the Storage Area Network submap, which contains: the SAN views with their Topology view of switches and interconnect elements and their Zone view of zones and elements; the Device Centric View of devices, LUNs, and hosts; and the Host Centric View of hosts, filesystems, and volumes)

5.3.1 SAN view

The SAN view allows you to see the SAN topology at the fabric level. In this case we clicked the Fabric1 icon shown in Figure 5-14. The display in Figure 5-16 appears, giving access to two further submaps:

- Topology view
- Zone view

Figure 5-16 Storage Area Network view
Topology view

The Topology view is used to display all elements of the fabric, including switches, hosts, devices, and interconnects. As shown in Figure 5-17, this particular fabric has two switches.

Figure 5-17 Topology view

Now you can click a switch icon to display all the hosts and devices connected to the selected switch.

Figure 5-18 Switch submap
In the Topology view (shown in Figure 5-17) you can also click Interconnect Elements to display information about all the switches in that SAN.

Figure 5-19 Interconnect submap

The switch submap (Figure 5-18) shows that six devices are connected to switch ITSOSW1. Each connection line represents a logical connection. Click a connection bar twice to display the exact number of physical connections (Figure 5-20). We now see that, in this example, SOL-E is connected to two ports on the switch ITSOSW1.

Figure 5-20 Physical connections view
When the connection represents only one physical connection (or if we click one of the two connections shown in Figure 5-20), NetView displays its properties panel (Figure 5-21).

Figure 5-21 NetView properties panel

Zone view

The Zone view submap displays all zones defined in the SAN fabric. Our configuration contains two zones, called FASTT and TSM.

Figure 5-22 Zone view submap
Click twice on the FASTT icon to see all the elements included in the FASTT zone.

Figure 5-23 FASTT zone

In lab1, the FASTT zone contains five hosts and one storage server. We installed Tivoli SAN Manager Agents on the four hosts that are labelled with their correct hostnames (BRAZIL, GALLIUM, SICILY, and SOL-E). On the fifth host, LEAD, we did not install the agent. However, it is discovered, since it is connected to the switch. IBM Tivoli SAN Manager displays it as a host device, and not as an unknown device, because the QLogic HBA drivers installed on LEAD support RNID. This RNID support gives the switch the ability to get additional information, including the device type (shown by the icon displayed) and the WWN. The disk subsystem is shown with a question mark because the FAStT700 was not yet fully supported (with the level of code available at the time of writing) and IBM Tivoli SAN Manager was not able to determine all the properties from the information returned by the in-band and out-of-band agents.

5.3.2 Device Centric View

You may have several SAN fabrics with multiple storage servers. The Device Centric View (accessed from the Storage Area Network view, as shown in Figure 5-14 on page 161) displays the storage devices connected to your SANs and their relationship to the hosts. This is a logical view, as the connection elements are not shown. Because of this, you may prefer to see this information using the NetView Explorer interface, as shown in Figure 5-24. This has the advantage of simultaneously displaying all the lower-level items for the Device Centric View shown in Figure 5-15 on page 162, such as LUNs and hosts.
Figure 5-24 Device Centric View

In the preceding figure, we can see the twelve defined LUNs and the host to which each has been allocated. The dependency tree is not retrieved from the FAStT server, but is consolidated from the information retrieved from the managed hosts. Therefore, the filesystems are not displayed, as they can be spread over several LUNs and this information is transparent to the host. Note that the information is also available for the MSS storage server, the other disk storage device in our SAN.

5.3.3 Host Centric View

The Host Centric View (accessed from the Storage Area Network view, as shown in Figure 5-14 on page 161) displays all the hosts in the SAN and their related local and SAN-attached storage devices. This is a logical view that does not show the interconnect elements (and runs across the fabrics). Since this is also a logical view, like the Device Centric View, the NetView Explorer presents a more comprehensive display (Figure 5-25).
Figure 5-25 Host Centric View for Lab 1

We see our four hosts and all their filesystems, whether they are locally attached or SAN-attached. NFS-mounted filesystems and shared directories are not displayed. Since no agent is running on LEAD, it is not shown in this view.

5.3.4 iSCSI discovery

For this environment we reference SAN Lab 2 ("Lab 2 environment" on page 190).

Starting discovery

You can discover and manage devices that use the iSCSI storage networking protocol through IBM Tivoli SAN Manager using IBM Tivoli NetView. Before discovery, SNMP and the iSCSI MIBs must be enabled on the iSCSI device, and Tivoli NetView IP discovery must be enabled. See 6.4, "Real-time reporting" on page 227 for enabling IP discovery.

The IBM Tivoli NetView nvsniffer daemon discovers the iSCSI devices. Depending on the iSCSI operation chosen, a corresponding iSCSI SmartSet is created under the IBM Tivoli NetView SmartSets icon. By default, the nvsniffer utility runs every 60 minutes. Once nvsniffer discovers an iSCSI device, it creates an iSCSI SmartSet located on the NetView topology map at the root level.

You can select what type of iSCSI device is discovered. From the menu bar, click Tools -> iSCSI Operations and select Discover All iSCSI Devices, Discover All iSCSI Initiators, or Discover All iSCSI Targets, as shown in Figure 5-26. For more details about iSCSI, refer to Chapter 7, "Tivoli SAN Manager and iSCSI" on page 253.
Figure 5-26 iSCSI discovery

Double-click the iSCSI SmartSet icon to display all iSCSI devices. Once all iSCSI devices are discovered by NetView, the iSCSI SmartSet can be managed from a high level. Status for iSCSI devices is propagated to the higher level, as described in 5.1.9, "Status propagation" on page 157. If you detect a problem, drill down to the SmartSet icon and continue drilling through the iSCSI icon to determine which iSCSI device is having the problem. Figure 5-27 shows an iSCSI SmartSet.

Figure 5-27 iSCSI SmartSet

5.3.5 MDS 9000 discovery

The Cisco MDS 9000 is a family of intelligent multilayer directors and fabric switches with features such as virtual SANs (VSANs), advanced security, sophisticated debug and analysis tools, and an element manager for SAN management. IBM Tivoli SAN Manager has enhanced compatibility with the Cisco MDS 9000 Series switch. Tivoli NetView displays the port numbers in the format SSPP, where SS is the slot number and PP is the port number. The Launch Application menu item is available for the Cisco switch; when it is selected, the Cisco Fabric Manager application is started. For more details, see 5.7.1, "Cisco MDS 9000 discovery" on page 182.
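The SSPP port-number format described above is easy to take apart. A minimal sketch (a hypothetical helper, not part of NetView or Tivoli SAN Manager; the zero-padded four-digit form is an assumption based on the SS/PP description):

```python
# Split the SSPP port-number format that NetView uses for the Cisco
# MDS 9000, where SS is the slot number and PP is the port number.
def split_sspp(port: str) -> tuple:
    """Return (slot, port) for a 4-digit SSPP string, e.g. '0112' -> (1, 12)."""
    if len(port) != 4 or not port.isdigit():
        raise ValueError("expected a 4-digit SSPP value")
    return int(port[:2]), int(port[2:])
```

So a port displayed as 0112 would be slot 1, port 12.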
5.4 SAN menu options

In this section we describe some of the menu options contained under the SAN pull-down menu for IBM Tivoli SAN Manager.

5.4.1 SAN Properties

As shown in Figure 5-28, select an object and use SAN -> SAN Properties to display the properties gathered by IBM Tivoli SAN Manager. In this case we are selecting a particular filesystem (the root filesystem) from the Agent SOL-E.

Figure 5-28 SAN Properties menu

This displays a SAN Properties window that is divided into two panes. The left pane always contains Properties, and may also contain Connection and Sensors/Events, depending on the type of object being displayed. The right pane contains the details of the object. These are some of the device types that give information in the SAN Properties menu:

- Disk drive
- Hdisk
- Host file system
- LUN
- Log volume
- OS
- Physical volume
- Port
- SAN
- Switch
- System
- Tape drive
- Volume group
- Zone

Properties

The first grouping item is named Properties and contains generic information about the selected device. The information that is displayed depends on the object type. This section shows at least the following information:

- Label: The label of the object as it is displayed by IBM Tivoli SAN Manager. If you update this field, the change will be kept across all discoveries.
- Icon: The symbol representing the device type. If the object is of an unknown type, this field will be in read-write mode and you will be able to select the correct symbol.
- Name: The reported name of the device.

Figure 5-29 shows the Properties section for a filesystem. You can see that it displays the filesystem name and type, the mount point, and both the total and available space. Since a filesystem is not related to a port connection and also does not return sensor events, only the Properties section is available.

Figure 5-29 IBM Tivoli SAN Manager Properties - Filesystem

Figure 5-30 shows the Properties section for a host. You can see that it displays the hostname, the IP address, the hardware type, and information about the HBA. Since the host does not return sensor-related events, only the Properties and Connections sections are available.
Figure 5-30 IBM Tivoli SAN Manager Properties - Host

Figure 5-31 shows the Properties section for a switch. You can see that it displays fields including the name, the IP address, and the WWN. The switch is a connection device and sends back information about events and sensors. Therefore, all three item groups are available (Properties, Connections, and Sensors/Events).

Figure 5-31 IBM Tivoli SAN Manager Properties - Switch
Figure 5-32 shows the properties for an unknown device. Here you can change the icon to a predefined one by using the Icon pull-down field. You can also change the label of a device, even if the device is of a known type.

Figure 5-32 Changing the icon and name of a device

Connection

The second grouping item, Connections, shows all ports in use for the device. This section appears only when it is appropriate to the device displayed (switch or host).

In Figure 5-33, we see the Connection tab for one switch, where six ports are used. Port 0 is used for the Inter-Switch Link (ISL) to switch ITSOSW2. This is a very useful display, as it shows which device is connected to each switch port.

Figure 5-33 Connection information

Sensors/Events

The third grouping item, Sensors/Events, is shown in Figure 5-34. It shows the sensor status and the device events for a switch. It may include information about fans, batteries, power supplies, transmitters, enclosures, boards, and others.
Figure 5-34 Sensors/Events information

5.5 Application launch

Many SAN devices have vendor-provided management applications. IBM Tivoli SAN Manager provides a launch facility for many of these.

5.5.1 Native support

For some supported devices, IBM Tivoli SAN Manager automatically discovers and launches the device-related administration tool. To launch it, select the device and then click SAN -> Launch Application. This launches the Web application associated with the device. In our case, it launches the Brocade switch management Web interface for the switch ITSOSW4, shown in Figure 5-35.
Figure 5-35 Brocade switch management application

5.5.2 NetView support for Web interfaces

For devices that have not identified their management application, IBM Tivoli NetView allows you to manually configure the launch of a Web interface for any application, by doing the following:

1. Right-click the device and select Object Properties from the context-sensitive menu.
2. On the dialog box, select the Other tab (shown in Figure 5-36).
3. Select LANMAN from the pull-down menu.
4. Check isHTTPManaged.
5. Enter the URL of the management application in the Management URL field.
6. Click Verify, Apply, OK.
Figure 5-36 NetView objects properties - Other tab

After this, you can launch the Web application by right-clicking the object and then selecting Management Page, as shown in Figure 5-37.

Figure 5-37 Launch of the management page

Important: This definition will be lost if your device is removed from the SAN and subsequently rediscovered, since it will then be a new object for NetView.
5.5.3 Non-Web applications

You can also configure the NetView toolbar menu to launch a locally installed management application from the NetView console. Here we show you how to configure NetView to launch the management application for the IBM SAN Data Gateway. You can use the same procedure for any other application that is installed on the NetView server.

1. Create a file in the directory \usr\ov\registration\c. You can call it anything with a .REG extension; here, for example, SanDG.reg. Insert the lines shown in Example 5-1. If you have other management applications to insert, create different .REG files in the same directory, as NetView automatically scans this directory for extra items.

Example 5-1 File to enable launch of non-Web application from NetView console

   Application "SDG Specialist"
   {
       Description { "SDG Specialist" }
       Command "C:\Program Files\IBM StorWatch\IBM Client\Launch.exe";
       MenuBar "Tools"
       {
           <55> "SDG Specialist"_G f.action "aSDG";
       }
       Action "aSDG"
       {
           Command "C:\Program Files\IBM StorWatch\IBM Client\Launch.exe";
       }
   }

2. Stop NetView.

3. To be sure that the application can be automatically launched, update the PATH variable on your server and add the path to the program directory: My Computer -> Properties, select the Advanced tab -> Environment Variables. Under System Variables, select PATH. Include the full pathname of the application in the PATH variable (Figure 5-38).

Figure 5-38 PATH environment variable

4. Restart NetView. After this, you will be able to launch the SAN Data Gateway application by selecting it from the Tools menu, as shown in Figure 5-39.
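Because NetView scans the registration directory for every .REG file it finds, entries in the Example 5-1 layout can also be generated rather than hand-typed. A minimal Python sketch (the template mirrors Example 5-1; the application name, command path, and action label below are placeholders for your own application):

```python
# Generate a NetView application-registration (.REG) entry with the same
# layout as Example 5-1. Doubled braces are literal braces in the output.
REG_TEMPLATE = '''Application "{name}"
{{
    Description {{ "{name}" }}
    Command "{command}";
    MenuBar "Tools"
    {{
        <55> "{name}"_G f.action "a{action}";
    }}
    Action "a{action}"
    {{
        Command "{command}";
    }}
}}
'''

def make_reg_entry(name: str, command: str, action: str) -> str:
    """Fill the Example 5-1 template for one application."""
    return REG_TEMPLATE.format(name=name, command=command, action=action)

# Placeholder values reproducing Example 5-1; substitute your own.
entry = make_reg_entry(
    "SDG Specialist",
    r"C:\Program Files\IBM StorWatch\IBM Client\Launch.exe",
    "SDG",
)
```

Write the returned text to a file with a .REG extension in the registration directory, then restart NetView as described above.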