IBM TotalStorage SAN File System (SG24-7057)
Transcript

  • 1. Front cover: IBM TotalStorage SAN File System. New! Updated for Version 2.2.2 of SAN File System. Heterogeneous file sharing. Policy-based file lifecycle management. Charlotte Brooks, Huang Dachuan, Derek Jackson, Matthew A. Miller, Massimo Rosichini. ibm.com/redbooks
  • 2. International Technical Support Organization. IBM TotalStorage SAN File System. January 2006. SG24-7057-03
  • 3. Note: Before using this information and the product it supports, read the information in “Notices” on page xix. Fourth Edition (January 2006). This edition applies to Version 2, Release 2, Modification 2 of IBM TotalStorage SAN File System (product number 5765-FS2) on the day of announcement in October of 2005. Please note that pre-release code was used for the screen captures and command output; some minor details may vary from the generally available product. © Copyright International Business Machines Corporation 2003, 2004, 2006. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 4. Contents
Figures  xi
Notices  xix
Trademarks  xx
Preface  xxi
The team that wrote this redbook  xxi
Become a published author  xxiii
Comments welcome  xxiii
Summary of changes  xxv
December 2004, Third Edition  xxv
January 2006, Fourth Edition  xxv
Part 1. Introduction to IBM TotalStorage SAN File System  1
Chapter 1. Introduction  3
1.1 Introduction: Growth of SANs  4
1.2 Storage networking technology: Industry trends  4
1.2.1 Standards organizations and standards  6
1.2.2 Storage Networking Industry Association  9
1.2.3 The IBM approach  10
1.3 Rise of storage virtualization  11
1.3.1 What is virtualization?  11
1.3.2 Types of storage virtualization  11
1.3.3 Storage virtualization models  13
1.4 SAN data sharing issues  14
1.5 IBM TotalStorage Open Software Family  14
1.5.1 IBM TotalStorage SAN Volume Controller  15
1.5.2 IBM TotalStorage SAN File System  16
1.5.3 Comparison of SAN Volume Controller and SAN File System  18
1.5.4 IBM TotalStorage Productivity Center  19
1.5.5 TotalStorage Productivity Center for Fabric  20
1.5.6 TotalStorage Productivity Center for Data  21
1.5.7 TotalStorage Productivity Center for Disk  22
1.5.8 TotalStorage Productivity Center for Replication  24
1.6 File system general terminology  25
1.6.1 What is a file system?  25
1.6.2 File system types  27
1.6.3 Selecting a file system  28
1.7 Filesets and the global namespace  30
1.8 Value statement of IBM TotalStorage SAN File System  30
Chapter 2. SAN File System overview  33
2.1 SAN File System product overview  34
2.2 SAN File System V2.2 enhancements overview  35
2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview  35
2.4 SAN File System architecture  36
2.5 SAN File System hardware and software prerequisites  37
  • 5. 2.5.1 Metadata server  37
2.5.2 Master Console hardware and software  38
2.5.3 SAN File System software  39
2.5.4 Supported storage for SAN File System  39
2.5.5 SAN File System engines  40
2.5.6 Master Console  45
2.5.7 Global namespace  46
2.5.8 Filesets  47
2.5.9 Storage pools  48
2.5.10 Policy based storage and data management  49
2.5.11 Clients  51
2.5.12 FlashCopy  58
2.5.13 Reliability and availability  59
2.5.14 Summary of major features  61
Part 2. Planning, installing, and upgrading  63
Chapter 3. MDS system design, architecture, and planning issues  65
3.1 Site infrastructure  66
3.2 Fabric needs and storage partitioning  67
3.3 SAN File System volume visibility  69
3.3.1 Uniform SAN File System configuration  69
3.3.2 Non-uniform SAN File System configuration  69
3.4 Network infrastructure  71
3.5 Security  72
3.5.1 Local authentication  72
3.5.2 LDAP  73
3.6 File sharing  78
3.6.1 Advanced heterogeneous file sharing  78
3.6.2 File sharing with Samba  78
3.7 Planning the SAN File System configuration  78
3.7.1 Storage pools and filesets  78
3.7.2 File placement policies  80
3.7.3 FlashCopy considerations  80
3.8 Planning for high availability  81
3.8.1 Cluster availability  81
3.8.2 Autorestart service  82
3.8.3 MDS fencing  82
3.8.4 Fileset and workload distribution  83
3.8.5 Network planning  84
3.8.6 SAN planning  85
3.9 Client needs and application support  85
3.9.1 Client needs  85
3.9.2 Privileged clients  86
3.9.3 Client application support  87
3.9.4 Clustering support  87
3.9.5 Linux for zSeries  88
3.10 Data migration  88
3.10.1 Offline data migration  89
3.10.2 Online data migration  90
3.11 Implementation services for SAN File System  90
3.12 SAN File System sizing guide  91
3.12.1 Assumptions  91
  • 6. 3.12.2 IP network sizing  91
3.12.3 Storage sizing  91
3.12.4 SAN File System sizing  92
3.13 Planning worksheets  95
3.14 Deploying SAN File System into an existing SAN  96
3.15 Additional materials  97
Chapter 4. Pre-installation configuration  99
4.1 Security considerations  100
4.1.1 Local authentication configuration  100
4.1.2 LDAP and SAN File System considerations  101
4.2 Target Machine Validation Tool (TMVT)  105
4.3 SAN and zoning considerations  106
4.4 Subsystem Device Driver  109
4.4.1 Install and verify SDD on Windows 2000 client  110
4.4.2 Install and verify SDD on an AIX client  112
4.4.3 Install and verify SDD on MDS  117
4.5 Redundant Disk Array Controller (RDAC)  119
4.5.1 RDAC on Windows 2000 client  119
4.5.2 RDAC on AIX client  120
4.5.3 RDAC on MDS and Linux client  121
Chapter 5. Installation and basic setup for SAN File System  125
5.1 Installation process overview  126
5.2 SAN File System MDS installation  126
5.2.1 Pre-installation setting and configurations on each MDS  127
5.2.2 Install software on each MDS engine  127
5.2.3 SUSE Linux 8 installation  128
5.2.4 Upgrade MDS BIOS and RSA II firmware  135
5.2.5 Install prerequisite software on the MDS  135
5.2.6 Install SAN File System cluster  138
5.2.7 SAN File System cluster configuration  147
5.3 SAN File System clients  149
5.3.1 SAN File System Windows 2000/2003 client  149
5.3.2 SAN File System Linux client installation  164
5.3.3 SAN File System Solaris installation  168
5.3.4 SAN File System AIX client installation  169
5.3.5 SAN File System zSeries Linux client installation  178
5.4 UNIX device candidate list  185
5.5 Local administrator authentication option  186
5.6 Installing the Master Console  187
5.6.1 Prerequisites  187
5.6.2 Installing Master Console software  192
5.7 SAN File System MDS remote access setup (PuTTY / ssh)  228
5.7.1 Secure shell overview  228
Chapter 6. Upgrading SAN File System to Version 2.2.2  229
6.1 Introduction  230
6.2 Preparing to upgrade the cluster  231
6.3 Upgrade each MDS  233
6.3.1 Stop SAN File System processes on the MDS  234
6.3.2 Upgrade MDS BIOS and RSA II firmware  234
6.3.3 Upgrade the disk subsystem software  235
6.3.4 Upgrade the Linux operating system  236
  • 7. 6.3.5 Upgrade the MDS software  236
6.4 Special case: upgrading the master MDS  241
6.5 Commit the cluster upgrade  243
6.6 Upgrading the SAN File System clients  244
6.6.1 Upgrade SAN File System AIX clients  244
6.6.2 Upgrade Solaris/Linux clients  245
6.6.3 Upgrade SAN File System Windows clients  245
6.7 Switching from LDAP to local authentication  246
Part 3. Configuration, operation, maintenance, and problem determination  249
Chapter 7. Basic operations and configuration  251
7.1 Administrative interfaces to SAN File System  252
7.1.1 Accessing the CLI  252
7.1.2 Accessing the GUI  256
7.2 Basic navigation and verifying the cluster setup  258
7.2.1 Verify servers  258
7.2.2 Verify system volume  259
7.2.3 Verify pools  259
7.2.4 Verify LUNs  260
7.2.5 Verify administrators  261
7.2.6 Basic commands using CLI  261
7.3 Adding and removing volumes  263
7.3.1 Adding a new volume to SAN File System  263
7.3.2 Changing volume settings  266
7.3.3 Removing a volume  266
7.4 Storage pools  268
7.4.1 Creating a storage pool  269
7.4.2 Adding a volume to a user storage pool  270
7.4.3 Adding a volume to the System Pool  270
7.4.4 Changing a storage pool  276
7.4.5 Removing a storage pool  277
7.4.6 Expanding a user storage pool volume  277
7.4.7 Expanding a volume in the system storage pool  284
7.5 Filesets  286
7.5.1 Relationship of filesets to storage pools  287
7.5.2 Nested filesets  289
7.5.3 Creating filesets  290
7.5.4 Moving filesets  294
7.5.5 Changing fileset characteristics  295
7.5.6 Additional fileset commands  296
7.5.7 NLS support with filesets  296
7.6 Client operations  296
7.6.1 Fileset permissions  297
7.6.2 Privileged clients  297
7.6.3 Take ownership of filesets  300
7.7 Non-uniform SAN File System configurations  303
7.7.1 Display a list of clients with access to particular volume or LUN  304
7.7.2 List fileset to storage pool relationship  304
7.8 File placement policy  304
7.8.1 Policies and rules  305
7.8.2 Rules syntax  307
7.8.3 Create a policy and rules with CLI  309
  • 8. 7.8.4 Creating a policy and rules with GUI  311
7.8.5 More examples of policy rules  322
7.8.6 NLS support with policies  322
7.8.7 File storage preallocation  324
7.8.8 Policy management considerations  328
7.8.9 Best practices for managing policies  334
Chapter 8. File sharing  337
8.1 File sharing overview  338
8.2 Basic heterogeneous file sharing  340
8.2.1 Implementation: Basic heterogeneous file sharing  340
8.3 Advanced heterogeneous file sharing  347
8.3.1 Software components  348
8.3.2 Administrative commands  348
8.3.3 Configuration overview  348
8.3.4 Directory server configuration  349
8.3.5 MDS configuration  355
8.3.6 Implementation of advanced heterogeneous file sharing  365
Chapter 9. Advanced operations  375
9.1 SAN File System FlashCopy  376
9.1.1 How FlashCopy works  376
9.1.2 Creating, managing, and using the FlashCopy images  378
9.2 Data migration  389
9.2.1 Planning migration with the migratedata command  390
9.2.2 Perform migration  391
9.2.3 Post-migration steps  395
9.3 Adding and removing Metadata servers  396
9.3.1 Adding a new MDS  396
9.3.2 Removing an MDS  397
9.3.3 Adding an MDS after previous removal  398
9.4 Monitoring and gathering performance statistics  398
9.4.1 Gathering and analyzing performance statistics  399
9.5 MDS automated failover  413
9.5.1 Failure detection  414
9.5.2 Fileset redistribution  415
9.5.3 Master MDS failover  419
9.5.4 Failover monitoring  421
9.5.5 General recommendations for minimizing recovery time  427
9.6 How SAN File System clients access data  427
9.7 Non-uniform configuration client validation  429
9.7.1 Client validation sample script details  430
9.7.2 Using the client validation sample script  431
Chapter 10. File movement and lifecycle management  435
10.1 Manually move and defragment files  436
10.1.1 Move a single file using the mvfile command  436
10.1.2 Move multiple files using the mvfile command  439
10.1.3 Defragmenting files using the mvfile command  441
10.2 Lifecycle management with file management policy  441
10.2.1 File management policy syntax  442
10.2.2 Creating a file management policy  442
10.2.3 Executing the file management policy  443
10.2.4 Lifecycle management recommendations and considerations  446
  • 9. Chapter 11. Clustering the SAN File System Microsoft Windows client  447
11.1 Configuration overview  448
11.2 Cluster configuration  449
11.2.1 MSCS configuration  449
11.2.2 SAN File System configuration  450
11.3 Installing the SAN File System MSCS Enablement package  455
11.4 Configuring SAN File System for MSCS  458
11.4.1 Creating additional cluster groups  468
11.5 Setting up cluster-managed CIFS share  468
Chapter 12. Protecting the SAN File System environment  477
12.1 Introduction  478
12.1.1 Types of backup  478
12.2 Disaster recovery: backup and restore  479
12.2.1 LUN-based backup  479
12.2.2 Setting up a LUN-based backup  480
12.2.3 Restore from a LUN based backup  482
12.3 Backing up and restoring system metadata  484
12.3.1 Backing up system metadata  484
12.3.2 Restoring the system metadata  488
12.4 File recovery using SAN File System FlashCopy function  493
12.4.1 Creating FlashCopy image  494
12.4.2 Reverting FlashCopy images  498
12.5 Back up and restore using IBM Tivoli Storage Manager  502
12.5.1 Benefits of Tivoli Storage Manager with SAN File System  502
12.6 Backup/restore scenarios with Tivoli Storage Manager  503
12.6.1 Back up Windows data using Tivoli Storage Manager Windows client  504
12.6.2 Back up user data in UNIX filesets with TSM client for AIX  507
12.6.3 Backing up FlashCopy images with the snapshotroot option  510
Chapter 13. Problem determination and troubleshooting  519
13.1 Overview  520
13.2 Remote access support  520
13.3 Logging and tracing  521
13.3.1 SAN File System Message convention  522
13.3.2 Metadata server logs  525
13.3.3 Administrative and security logs  528
13.3.4 Consolidated server message logs  530
13.3.5 Client logs and traces  530
13.4 SAN File System data collection  534
13.5 Remote Supervisor Adapter II  537
13.5.1 Validating the RSA configuration  538
13.5.2 RSA II management  538
13.6 Simple Network Management Protocol  543
13.6.1 SNMP and SAN File System  543
13.7 Hints and tips  546
13.8 SAN File System Message conventions  547
Part 4. Exploiting the SAN File System  551
Chapter 14. DB2 with SAN File System  553
14.1 Introduction to DB2  554
14.2 Policy placement  554
14.2.1 SMS tablespaces  554
  • 10. 14.2.2 DMS tablespaces  555
14.2.3 Other data  556
14.2.4 Sample SAN File System policy rules  556
14.3 Storage management  557
14.4 Load balancing  557
14.5 Direct I/O support  558
14.6 High availability clustering  560
14.7 FlashCopy  560
14.8 Database path considerations  560
Part 5. Appendixes  563
Appendix A. Installing IBM Directory Server and configuring for SAN File System  565
Installing IBM Tivoli Directory Server V5.1  566
Creating the LDAP database  570
Configuring IBM Directory Server for SAN File System  574
Starting the LDAP Server and configuring Admin Server  577
Verifying LDAP entries  585
Sample LDIF file used  587
Appendix B. Installing OpenLDAP and configuring for SAN File System  589
Introduction to OpenLDAP 2.0.x on Red Hat Linux  590
Installation of OpenLDAP packages  590
Configuration of OpenLDAP client  591
Configuration of OpenLDAP server  592
Configure OpenLDAP for SAN File System  594
Appendix C. Client configuration validation script  597
Sample script listing  598
Appendix D. Additional material  603
Locating the Web material  603
Using the Web material  603
System requirements for downloading the Web material  603
How to use the Web material  604
Abbreviations and acronyms  605
Related publications  607
IBM Redbooks  607
Other publications  607
Online resources  608
How to get IBM Redbooks  611
Help from IBM  611
Index  613
  • 12. Figures
1-1 SAN Management standards bodies  6
1-2 CIMOM proxy model  8
1-3 SNIA storage model  9
1-4 Intelligence moving to the network  12
1-5 In-band and out-of-band models  13
1-6 Block level virtualization  15
1-7 IBM TotalStorage SAN Volume Controller  16
1-8 File level virtualization  17
1-9 IBM TotalStorage SAN File System architecture  18
1-10 Summary of SAN Volume Controller and SAN File System benefits  19
1-11 TPC for Fabric  21
1-12 TPC for Data  22
1-13 TPC for Disk functions  24
1-14 TPC for Replication  25
1-15 Windows system hierarchical view  26
1-16 Windows file system security and permissions  27
1-17 File system types  28
1-18 Global namespace  30
2-1 SAN File System architecture  36
2-2 SAN File System administrative structure  43
2-3 SAN File System GUI browser interface  44
2-4 Global namespace  47
2-5 Filesets and nested filesets  48
2-6 SAN File System storage pools  49
2-7 File placement policy execution  50
2-8 Windows 2000 client view of SAN File System  55
2-9 Exploring the SAN File System from a Windows 2000 client  55
2-10 FlashCopy images  59
3-1 Mapping of Metadata and User data to MDS and clients  68
3-2 Illustrating network setup  72
3-3 Data classification example  79
3-4 SAN File System design  84
3-5 SAN File System data migration process  89
3-6 SAN File System data flow  93
3-7 Typical data and metadata flow for a generic application with SAN File System  94
3-8 SAN File System changes the way we look at the Storage in today’s SANs  97
4-1 LDAP tree  102
4-2 Example of setup  108
4-3 Verify disks are seen as 2145 disk devices  111
5-1 SAN File System Console GUI sign-on window  147
5-2 Select language for installation  150
5-3 SAN File System Windows 2000 Client Welcome window  150
5-4 Security Warning  151
5-5 Configuration parameters  152
5-6 Review installation settings  153
5-7 Security alert warning  153
5-8 Driver IBM SANFS Cluster Bus Enumerator  154
5-9 Driver IBM SAN Volume Manager  154
  • 13. 5-10 Start SAN File System client immediately  155
5-11 Windows client explorer  155
5-12 Windows 2000 client SAN File System drivers  156
5-13 Windows 2003 client SAN File System drivers  156
5-14 SAN File System helper service  157
5-15 Launch MMC  158
5-16 Add the Snap-in for SAN File System  158
5-17 Add Snap-in  159
5-18 Add the IBM TotalStorage System Snap-in  159
5-19 Add/Remove Snap-in  160
5-20 Save MMC console  160
5-21 Save MMC console to the Windows desktop  161
5-22 IBM TotalStorage File System Snap-in Properties  161
5-23 DisableShortNames  162
5-24 Verify value for DisableShortNames  162
5-25 Trace Properties  163
5-26 Volume Properties  163
5-27 Modify Volume Properties  164
5-28 J2RE Setup Type  188
5-29 J2RE verify the install  189
5-30 SNMP Service Window  190
5-31 SNMP Service Properties  191
5-32 Verifying SNMP and SNMP Trap Service  192
5-33 Master Console installation wizard initial window  194
5-34 Set user account privileges  194
5-35 Adobe Installer Window  195
5-36 Master Console installation wizard information  196
5-37 Select optional products to install  197
5-38 Viewing the Products List  198
5-39 PuTTY installation complete  199
5-40 DB2 Setup wizard  200
5-41 DB2 select installation type  201
5-42 DB2 select installation action  202
5-43 DB2 Username and Password menu  203
5-44 DB2 administration contact  204
5-45 DB2 instance  205
5-46 DB2 tools catalog  206
5-47 DB2 administration contact  207
5-48 DB2 confirm installation settings  208
5-49 DB2 confirm installation settings  209
5-50 Verify DB2 install  210
5-51 Verify SVC console install  211
5-52 Select database repository  212
5-53 Specify single DB2 user ID  212
5-54 Enter DB2 user ID  213
5-55 Set trapdSharePort162  214
5-56 Define trapdTrapReceptionPort  215
5-57 Enter TSANM Manager name and port  216
5-58 IBM Director Installation Directory window  217
5-59 IBM Director Service Account Information  217
5-60 IBM Director network drivers  218
5-61 IBM Director database configuration  218
5-62 IBM Director superuser  220
  • 14. 5-63 Disk Management  222
5-64 Upgrade to dynamic disk  223
5-65 Verify both disks are set to type Dynamic  223
5-66 Add Mirror  224
5-67 Select mirrored disk  225
5-68 Mirroring process  225
5-69 Mirror Process completed  226
5-70 Setting Folder Options  226
6-1 SAN File System console  245
7-1 Create PuTTY ssh session  253
7-2 SAN File System GUI login window  256
7-3 GUI welcome window  257
7-4 Information Center  258
7-5 Basic SAN File System configuration  264
7-6 Select expand vdisk  279
7-7 vdisk expansion window  280
7-8 Data LUN display  281
7-9 Disk before expansion  283
7-10 Disk after expansion  284
7-11 Relationship of fileset to storage pool  288
7-12 Filesets from the MDS and client perspective  289
7-13 Nested filesets  289
7-14 Nested filesets  292
7-15 Windows Explorer shows cluster name sanfs as the drive label  293
7-16 List nested filesets  294
7-17 MBCS characters in fileset attachment directory  296
7-18 Select properties of fileset  301
7-19 ACL for the fileset  302
7-20 Verify change of ownership  302
7-21 Windows security tab  303
7-22 Policy rules based file placement  306
7-23 Policies in SAN File System Console (GUI)  312
7-24 Create a New Policy  313
7-25 New Policy: High Level Settings sample input  314
7-26 Add Rules to Policy  315
7-27 New rule created  316
7-28 Edit Rules for Policy  317
7-29 List of defined policies  318
7-30 Activate Policy  318
7-31 Verify Activate Policy  319
7-32 New Policy activated  319
7-33 Delete a Policy  320
7-34 Verify - Delete Policy Window  321
7-35 List Policies  321
7-36 MBCS characters in policy rule  323
7-37 Generated SQL for MBCS characters in policy rule  324
7-38 Select a policy  326
7-39 Rules for selected policy  326
7-40 Edited rule for Preallocation  327
7-41 Activate new policy  327
7-42 Disable default pool with GUI  331
7-43 Display policy statistics  333
8-1 View Windows permissions on newly created fileset  341
  • 15. 8-2 Set permissions for Everyone group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 8-3 Advanced permissions for Everyone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 8-4 Set permissions on Administrator group to allow Full control . . . . . . . . . . . . . . . . . 343 8-5 View Windows permissions on winfiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 8-6 View Windows permissions on fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 8-7 Read permission for Everyone group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 8-8 SAN File System user mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 8-9 Sample configuration for advanced heterogeneous file sharing . . . . . . . . . . . . . . . 350 8-10 Created Active Directory Domain Controller and Domain: sanfsdom.net . . . . . . . . 351 8-11 User Creation Verification in Active Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 8-12 SAN File System Windows client added to Active Directory domain. . . . . . . . . . . . 352 8-13 Sample heterogeneous file sharing LDAP diagram . . . . . . . . . . . . . . . . . . . . . . . . . 352 8-14 Log on as sanfsuser. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368 8-15 Contents of svcfileset6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 8-16 unixfile.txt permissions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 8-17 Edit the file in Windows as sanfsuser and save it . . . . . . . . . . . . . . . . . . . . . . . . . . 370 8-18 Create the file on the Windows client as sanfsuser . . . . . . . . . . . . . . . . . . . . . . . . . 371 8-19 Show file contents in Windows as sanfsuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 8-20 winfile.txt permissions from Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373 9-1 Make FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 9-2 Copy on write . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377 9-3 The .flashcopy directory view. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 9-4 Create FlashCopy image GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381 9-5 Create FlashCopy wizard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381 9-6 Fileset selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 9-7 Set Flashcopy image properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382 9-8 Verify FlashCopy image properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 9-9 FlashCopy image created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383 9-10 List of FlashCopy images using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 9-11 List of FlashCopy images before and after a revert operation . . . . . . . . . . . . . . . . . 386 9-12 Select image to revert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . 387 9-13 Delete Image selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 9-14 Delete Image verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 9-15 Delete image complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 9-16 Data migration to SAN File System: data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390 9-17 SAN File System overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 9-18 View statistics: client sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 9-19 Statistics: Storage Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 9-20 Console Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 9-21 Create report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 9-22 View report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406 9-23 SAN File System failures and actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414 9-24 List of MDS in the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 9-25 List of filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 9-26 Metadata server mds3 missing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 9-27 Filesets list after failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 9-28 Metadata server mds3 not started automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 9-29 Failback warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 9-30 Graceful stop of the master Metadata server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 9-31 Metadata server mds2 as new master. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 9-32 Configuring SANFS for SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422 9-33 Selecting the event severity level that will trigger traps . . . . . . . . . . . . . . . . . . . . . . 422 9-34 Log into IBM Director Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423xiv IBM TotalStorage SAN File System
  • 16. 9-35 Discover SNMP devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4239-36 Compile a new MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4249-37 Select the MIB to compile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4249-38 MIB compilation status windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4259-39 Viewing all events in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4259-40 Viewing the test trap in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4269-41 Trap sent when an MDS is shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4269-42 Example of required client access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43010-1 Windows-based client accessing homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . 43710-2 Verify file sizes in homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44311-1 MSCS lab setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44811-2 Basic cluster resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44911-3 Network Interfaces in the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45011-4 Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45011-5 SAN File System client view of the global namespace . . . . . . . . . . . . . . . . . . . . . . 45111-6 Fileset directory accessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45411-7 Show permissions and ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45411-8 Create a file on the fileset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45511-9 Choose the installation language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45611-10 License Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45611-11 Complete the client information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45711-12 Choose where to install the enablement software . . . . . . . . . . . . . . . . . . . . . . . . . . 45711-13 Confirm the installation parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45811-14 New SANFS resource is created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45911-15 Create a new cluster group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45911-16 Name and description for the group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46011-17 Specify preferred owners for group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46011-18 Group created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46111-19 ITSOSFSGroup displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46111-20 Create new resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 46211-21 New resource name and description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46211-22 Select all nodes as possible owners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46311-23 Enter resource dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46311-24 SAN File System resource parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46411-25 Display filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46411-26 Fileset for cluster resource selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46511-27 Cluster resource created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46511-28 New resource in Resource list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46611-29 Bring group online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46611-30 Group and resource are online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46711-31 Resource moves ownership on failures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46711-32 Resource stays with current owner after rebooting the original owner . . . . . . . . . . 46811-33 Create IP Address resource. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46911-34 IP address resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46911-35 IP address resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47011-36 Network Name resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47111-37 Network Name resource: Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47111-38 Network Name resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47211-39 File Share resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47211-40 File Share resource: dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47311-41 File Share resource: parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47311-42 All file share resources online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47411-43 Designate a drive for the CIFS share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Figures xv
  • 17. 11-44 CIFS client access SAN File System via clustered SAN File System client . . . . . . 475 11-45 Copy lots of files onto the share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 11-46 Drive not accessible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476 12-1 SVC FlashCopy relationships and consistency group . . . . . . . . . . . . . . . . . . . . . . . 481 12-2 Metadata dump file creation start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 12-3 Metadata dump file name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486 12-4 DR file creation final step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486 12-5 Delete/remove the metadata dump file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 12-6 Verify deletion of the metadata dump file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 12-7 FlashCopy option window GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494 12-8 FlashCopy Start GUI window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 12-9 Select Filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 12-10 Set Properties of FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496 12-11 Verify FlashCopy settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 12-12 FlashCopy images created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 12-13 Windows client view of the FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498 12-14 Client file delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 12-15 FlashCopy image revert selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 12-16 Image restore / revert verification and restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 12-17 Remaining FlashCopy images after revert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501 12-18 Client data restored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501 12-19 Exploitation of SAN File System with Tivoli Storage Manager. . . . . . . . . . . . . . . . . 502 12-20 User files selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504 12-21 Restore selective file selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505 12-22 Select destination of restore file(s). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506 12-23 Restore files selection for FlashCopy image backup . . . . . . . . . . . . . . . . . . . . . . . . 506 12-24 Restore files destination path selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507 13-1 IBM Connection Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 13-2 Steps for remote access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521 13-3 SAN File System message format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 522 13-4 Event viewer on Windows 2000. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 13-5 OBDC from GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534 13-6 Remote Supervisor Adapter II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 13-7 RSAII interface using Internet Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539 13-8 Accessing remote power using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540 13-9 Access BIOS log using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541 13-10 Java Security Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542 13-11 RSA II: Remote control buttons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542 13-12 ASM Remote control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 13-13 SNMP configuration on RSA II. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544 14-1 Example storage pool layout for DB2 objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556 14-2 Workload distribution of filesets for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 14-3 Default data caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559 14-4 Directory structure information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 A-1 Select location where to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 A-2 Language selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567 A-3 Setup type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567 A-4 Features to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568 A-5 User ID for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568 A-6 Installation summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569 A-7 GSKit pop-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569 A-8 Installation complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 A-9 Configuration tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570xvi IBM TotalStorage SAN File System
  • 18. A-10 User ID pop-up. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571A-11 Enter LDAP database user ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571A-12 Enter the name of the database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572A-13 Select database codepage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572A-14 Database location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573A-15 Verify database configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573A-16 Database created. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574A-17 Add organizational attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575A-18 Browse for LDIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576A-19 Start the import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577A-20 IBM Directory Server login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578A-21 IBM Directory Server Web Administration Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579A-22 Change admin password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580A-23 Add host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581A-24 Enter host details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582A-25 Verify that host has been added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583A-26 Login to local host name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584A-27 Admin console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585A-28 Manage entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586A-29 Expand ou=Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586 Figures xvii
  • 19. xviii IBM TotalStorage SAN File System
  • 20. NoticesThis information was developed for products and services offered in the U.S.A.IBM may not offer the products, services, or features discussed in this document in other countries. Consultyour local IBM representative for information on the products and services currently available in your area.Any reference to an IBM product, program, or service is not intended to state or imply that only that IBMproduct, program, or service may be used. Any functionally equivalent product, program, or service that doesnot infringe any IBM intellectual property right may be used instead. However, it is the users responsibility toevaluate and verify the operation of any non-IBM product, program, or service.IBM may have patents or pending patent applications covering subject matter described in this document. Thefurnishing of this document does not give you any license to these patents. You can send license inquiries, inwriting, to:IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.The following paragraph does not apply to the United Kingdom or any other country where such provisionsare inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THISPUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer ofexpress or implied warranties in certain transactions, therefore, this statement may not apply to you.This information could include technical inaccuracies or typographical errors. Changes are periodically madeto the information herein; these changes will be incorporated in new editions of the publication. IBM may makeimprovements and/or changes in the product(s) and/or the program(s) described in this publication at any timewithout notice.Any references in this information to non-IBM Web sites are provided for convenience only and do not in anymanner serve as an endorsement of those Web sites. The materials at those Web sites are not part of thematerials for this IBM product and use of those Web sites is at your own risk.IBM may use or distribute any of the information you supply in any way it believes appropriate withoutincurring any obligation to you.Information concerning non-IBM products was obtained from the suppliers of those products, their publishedannouncements or other publicly available sources. IBM has not tested those products and cannot confirm theaccuracy of performance, compatibility or any other claims related to non-IBM products. Questions on thecapabilities of non-IBM products should be addressed to the suppliers of those products.This information contains examples of data and reports used in daily business operations. To illustrate themas completely as possible, the examples include the names of individuals, companies, brands, and products.All of these names are fictitious and any similarity to the names and addresses used by an actual businessenterprise is entirely coincidental.COPYRIGHT LICENSE:This information contains sample application programs in source language, which illustrates programmingtechniques on various operating platforms. 
You may copy, modify, and distribute these sample programs inany form without payment to IBM, for the purposes of developing, using, marketing or distributing applicationprograms conforming to the application programming interface for the operating platform for which the sampleprograms are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, anddistribute these sample programs in any form without payment to IBM for the purposes of developing, using,marketing, or distributing application programs conforming to IBMs application programming interfaces.© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. xix
  • 21. TrademarksThe following terms are trademarks of the International Business Machines Corporation in the United States,other countries, or both: AFS® HACMP™ Storage Tank™ AIX 5L™ IBM® System Storage™ AIX® NetView® Tivoli® DB2 Universal Database™ PowerPC® TotalStorage® DB2® POWER™ WebSphere® DFS™ POWER5™ xSeries® Enterprise Storage Server® pSeries® z/VM® Eserver® Redbooks™ zSeries® Eserver® Redbooks (logo) ™ FlashCopy® SecureWay®The following terms are trademarks of other companies:Java, J2SE, Solaris, Sun, Sun Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in theUnited States, other countries, or both.Microsoft, Windows NT, Windows, Win32, and the Windows logo are trademarks of Microsoft Corporation in the UnitedStates, other countries, or both.i386, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of IntelCorporation or its subsidiaries in the United States, other countries, or both.UNIX is a registered trademark of The Open Group in the United States and other countries.Linux is a trademark of Linus Torvalds in the United States, other countries, or both.Other company, product, or service names may be trademarks or service marks of others.xx IBM TotalStorage SAN File System
Preface

This IBM Redbook is a detailed technical guide to the IBM TotalStorage® SAN File System. SAN File System is a robust, scalable, and secure network-based file system designed to provide near-local file system performance, file aggregation, and data sharing services in an open environment. SAN File System helps lower the cost of storage management and enhance productivity by providing centralized management, higher storage utilization, and shared access by clients to large amounts of storage.

We describe the design and features of SAN File System, as well as how to plan for, install, upgrade, configure, administer, and protect it. This redbook is for all who want to understand, install, configure, and administer SAN File System. It is assumed the reader has basic knowledge of storage and SAN technologies.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland.

Figure 1 The team: Dachuan, Massimo, Matthew, Derek, and Charlotte
Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Storage Solutions at the International Technical Support Organization, San Jose Center. She has 14 years of experience with IBM in the fields of IBM TotalStorage hardware and software, IBM Eserver® pSeries® servers, and AIX®. She has written 15 Redbooks™, and has developed and taught IBM classes in all areas of storage and storage management. Before joining the ITSO in 2000, she was the Technical Support Manager for Tivoli® Storage Manager in the Asia Pacific Region.

Huang Dachuan is an Advisory IT Specialist in the Advanced Technical Support team of IBM China in Beijing. He has nine years of experience in networking and storage support. He is CCIE certified, and his expertise includes Storage Area Networks, IBM TotalStorage SAN Volume Controller, SAN File System, ESS, DS6000, DS8000, copy services, and networking products from IBM and Cisco.

Derek Jackson is a Senior IT Specialist working for the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland. He primarily supports SAN File System, IBM TotalStorage Productivity Center, and the ATS's lab infrastructure. Derek has worked for IBM for 22 years, and has been employed in the IT field for 30 years. Before joining ATS, Derek worked for IBM's Business Continuity and Recovery Services and was responsible for delivering networking solutions for its clients.

Matthew A. Miller is an IBM Certified IT Specialist and Systems Engineer with IBM in Phoenix, AZ. He has worked extensively with IBM Tivoli Storage Software products as both a field systems engineer and a software sales representative, and currently works with Tivoli Techline. Prior to joining IBM in 2000, Matt worked for 16 years in the client community in both technical and managerial positions.

Massimo Rosichini is an IBM Certified Product Services and Country Specialist in the ITS Technical Support Group in Rome, Italy. He has extensive experience in IT support for TotalStorage solutions in the EMEA South Region. He is an ESS/DS Top Gun Specialist and is an IBM Certified Specialist for Enterprise Disk Solutions and Storage Area Network Solutions. He was an author of previous editions of the Redbooks IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services in Open Environments, SG24-5757, and IBM TotalStorage SAN File System, SG24-7057.

Thanks to the following people for their contributions to this project:

The authors of previous editions of this redbook:
Jorge Daniel Acuña, Asad Ansari, Chrisilia Davis, Ravi Khattar, Michael Newman, Massimo Rosichini, Leos Stehlik, Satoshi Suzuki, Mats Wahlstrom, Eric Wong

Cathy Warrick and Wade Wallace
International Technical Support Organization, San Jose Center

Todd Bates, Ashish Chaurasia, Steve Correl, Vinh Dang, John George, Jeanne Gordon, Matthew Krill, Joseph Morabito, Doug Rosser, Ajay Srivastava, Jason Young
SAN File System Development, IBM® Beaverton

Rick Taliaferro, Ida Wood
IBM Raleigh

Herb Ahmuty, John Amann, Kevin Cummings, Gonzalo Fuentes, Craig Gordon, Rosemary McCutchen
IBM Gaithersburg

Todd DeSantis
IBM Pittsburgh
Bill Cochran, Ron Henkhaus
IBM Illinois

Drew Davis
IBM Phoenix

Michael Klein
IBM Germany

John Bynum
IBM San Jose

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, or clients. Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:
- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbook@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE, Building 80-E2, 650 Harry Road, San Jose, California 95120-6099
Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7057-03 for IBM TotalStorage SAN File System as created or updated on January 27, 2006.

December 2004, Third Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information:
- Advanced heterogeneous file sharing
- File movement and lifecycle management
- File sharing with Samba

Changed information:
- Client support

January 2006, Fourth Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information:
- New centralized installation procedure
- Preallocation policy for large files
- Local authentication option
- Microsoft clustering support

Changed information:
- New MDS server and client platform (including zSeries® support)
- New RSA connectivity and high availability details
Part 1. Introduction to IBM TotalStorage SAN File System

In this part of the book, we introduce general industry and client issues that have prompted the development of the IBM TotalStorage SAN File System, and then present an overview of the product itself.
Chapter 1. Introduction

In this chapter, we provide background information for SAN File System, including these topics:
- Growth in SANs and current challenges
- Storage networking technology: industry trends
- Rise of storage virtualization and growth of SAN data
- Data sharing with SANs: issues
- IBM TotalStorage products overview
- Introduction to file systems and key concepts
- Value statement for SAN File System
1.1 Introduction: Growth of SANs

Storage Area Networks (SANs) have gained wide acceptance. Interoperability issues between components from different vendors connected by a SAN fabric have received attention and have generally been resolved, but managing the data stored on a variety of devices from different vendors is still a major challenge to the industry.

The volume of data storage required in daily life and business has exploded. Specific figures vary, but it is indisputably true that capacity is growing and hardware costs are decreasing, while availability requirements are rapidly approaching 100%. Three hundred million Internet users are driving two petabytes of data traffic per month. Users are mobile, access patterns are unpredictable, and the content of data becomes more and more interactive.

Clients deploying SANs today face many issues as they build or grow their storage infrastructures. Although the cost of purchasing storage hardware continues its rapid decline, the cost of managing storage is not keeping pace. In some cases, storage management costs are actually rising. Recent studies show that the purchase price of storage hardware comprises as little as 5 to 10 percent of the total cost of storage. The various factors that make up the total cost of ownership include:
- Administration costs
- Downtime
- Environmental overhead
- Device management tasks
- Backup and recovery procedures
- Shortage of skilled storage administrators
- Heterogeneous server and storage installations

Information technology managers are under significant pressure to reduce costs while deploying more storage to remain competitive. They must address the increasing complexity of storage systems, the explosive growth in data, and the shortage of skilled storage administrators. Furthermore, the storage infrastructure must be designed to help maximize the availability of critical applications. Storage itself may well be treated as a commodity; its management is certainly not. In fact, the cost of managing storage is typically many times its actual acquisition cost.

1.2 Storage networking technology: Industry trends

In the late 1990s, storage networking emerged in the form of SANs, Network Attached Storage (NAS), and Internet Small Computer System Interface (iSCSI) technologies. These were aimed at reducing the total cost of ownership (TCO) of storage by managing islands of information among heterogeneous environments with disparate operating systems, data formats, and user interfaces in a more efficient way.

SANs enable you to consolidate storage and share resources by enabling storage capacity to be connected to servers at a greater distance. By disconnecting storage resource management from individual hosts, a SAN enables disk storage capacity to be consolidated. The results can be lower overall costs through better utilization of the storage, lower management costs, increased flexibility, and increased control. This can be achieved physically or logically.
Physical consolidation

Data from disparate storage subsystems can be combined onto large, enterprise class shared disk arrays, which may be located at some distance from the servers. The capacity of these disk arrays can be shared by multiple servers, and users may also benefit from the advanced functions typically offered with such subsystems. This may include RAID capabilities, remote mirroring, and instantaneous data replication functions, which might not be available with smaller, integrated disks. The array capacity may be partitioned, so that each server has an appropriate portion of the available gigabytes.

Available capacity can be dynamically allocated to any server requiring additional space. Capacity not required by a server application can be re-allocated to other servers. This avoids the inefficiency associated with free disk capacity attached to one server not being usable by other servers. Extra capacity may be added non-disruptively. However, physical consolidation does not mean that all wasted space concerns are addressed.

Logical consolidation

It is possible to achieve shared resource benefits from the SAN without moving existing equipment. A SAN relationship can be established between a client and a group of storage devices that are not physically co-located (excluding devices that are internally attached to servers). A logical view of the combined disk resources may allow available capacity to be allocated and re-allocated between different applications running on distributed servers, to achieve better utilization.

Extending the reach: iSCSI

While SANs are growing in popularity, there are certain perceived barriers to entry: a higher cost, and the complexity of implementation and administration. The iSCSI protocol is intended to address this by bringing some of the performance benefits of a SAN while not requiring the same infrastructure. It achieves this by providing block-based I/O over a TCP/IP network, rather than over the Fibre Channel fabric of a SAN.

Today's storage solutions need to embrace emerging technologies at all price points to offer the client the highest freedom of choice.
1.2.1 Standards organizations and standards

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN Management standards bodies (the key bodies include SNIA, IETF, ANSI, FCIA, ISO, DMTF, the SCSI Trade Association, the Fibre Alliance, and the National Storage Industry Consortium, with IBM participating in most of them)

Key standards for Storage Management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards, including the CIM Device Model for Storage
- Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) Specification

CIM/WBEM management model

CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as storage subsystems, Fibre Channel switches, and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.

CIM/WBEM technology uses a powerful human and machine readable language called the managed object format (MOF) to precisely specify object models. Compilers can be
developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.

SMI Specification

SNIA has fully adopted and enhanced the CIM standard for Storage Management in its SMI Specification (SMIS). The SMI Specification was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMIS is to standardize the management interfaces so that management applications can utilize them and provide cross-device management. This means that a newly introduced device can be immediately managed, as it will conform to the standards.

SMIS extends CIM/WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.
- A complete, unified, and rigidly specified object model: SMIS defines "profiles" and "recipes" within the CIM that enable a management client to reliably utilize a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMIS provides client developers with vital information for traversing CIM classes within a device or subsystem and between devices and subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMIS compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents.
- Resource locking: SMIS compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMIS implementation are platform-independent, enabling application development for any platform and enabling those applications to run on different platforms. The SNIA will also provide interoperability tests that help vendors verify that their applications and devices conform to the standard.
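The single management transport and unified object model described above can be exercised with any WBEM client. The following minimal Python sketch, which assumes the open-source pywbem library and an SMI-S provider reachable at an illustrative address, enumerates the storage volumes the provider exposes. The endpoint, credentials, and namespace are assumptions for illustration; this is a conceptual example, not part of SAN File System.

```python
# Illustrative sketch only (not SAN File System code): list the storage volumes
# that an SMI-S provider (CIM agent or CIMOM) exposes over CIM-XML/HTTP, using
# the open-source pywbem library.
import pywbem

conn = pywbem.WBEMConnection(
    "http://smis-provider.example.com:5988",   # assumed provider address
    ("smisuser", "password"),                  # assumed credentials
    default_namespace="root/ibm",              # namespace varies by vendor
)

# CIM_StorageVolume is the standard class for logical volumes (LUNs) in the
# SMI-S Array profile; ElementName, BlockSize, and NumberOfBlocks are standard
# properties of that class.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    size_bytes = int(volume["BlockSize"]) * int(volume["NumberOfBlocks"])
    print(volume["ElementName"], size_bytes)
```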
Integrating existing devices into the CIM model

As these standards are still evolving, we cannot expect that all devices will support the native CIM interface, and because of this, SMIS introduces CIM agents and CIM object managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMIS. An agent is used for one device and an object manager for a set of devices. This type of operation is also called a proxy model and is shown in Figure 1-2. The CIM Agent or CIM Object Manager (CIM/OM) translates a proprietary management interface to the CIM interface. An example of a CIM/OM is the IBM CIM Object Manager for the IBM TotalStorage Enterprise Storage Server®.

Figure 1-2 CIMOM proxy model (a client discovers agents and object managers through SLP and issues CIM operations over CIM-XML/HTTP; the agent or object manager's provider translates them to the proprietary device interface, while the embedded model builds the agent into the device itself)

In the future, more and more devices will be natively CIM compliant, and will therefore have a built-in agent, as shown in the "Embedded Model" in Figure 1-2. When widely adopted, SMIS will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to "push" their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage networking technology faster and build larger, more powerful networks.
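To make the proxy model concrete, the following purely conceptual Python sketch shows the role of the agent or provider in Figure 1-2: it wraps a proprietary device interface and republishes the device's resources as CIM-style instances. The vendor-side names (ProprietaryArrayAPI, list_luns) are invented for illustration; only the CIM_StorageVolume class and property names come from the CIM schema.

```python
# Conceptual sketch of the proxy model: a provider translates a vendor-specific
# management interface into CIM-style instances. Vendor-side names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CIMStyleInstance:
    classname: str
    properties: dict = field(default_factory=dict)

class ProprietaryArrayAPI:
    """Stand-in for a vendor-specific management interface (hypothetical)."""
    def list_luns(self):
        return [{"id": "lun0", "block_size": 512, "blocks": 2097152},
                {"id": "lun1", "block_size": 512, "blocks": 4194304}]

class StorageVolumeProvider:
    """The agent/provider box in Figure 1-2: maps vendor data to CIM_StorageVolume."""
    def __init__(self, device_api):
        self.device_api = device_api

    def enumerate_instances(self):
        for lun in self.device_api.list_luns():
            yield CIMStyleInstance(
                classname="CIM_StorageVolume",
                properties={"DeviceID": lun["id"],
                            "BlockSize": lun["block_size"],
                            "NumberOfBlocks": lun["blocks"]})

provider = StorageVolumeProvider(ProprietaryArrayAPI())
for instance in provider.enumerate_instances():
    print(instance.classname, instance.properties)
```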
1.2.2 Storage Networking Industry Association

The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well established storage component vendors as well as emerging storage technology companies. The SNIA mission is "to ensure that storage networks become efficient, complete, and trusted solutions across the IT community" (http://www.snia.org/news/mission/). The SNIA vision is to provide a point of cohesion for developers of storage and networking products, in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market.

The SNIA Shared Storage Model

IBM is an active member of SNIA and fully supports SNIA's goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 1-3.

Figure 1-3 SNIA storage model (an application sits above the file/record subsystem, which includes database and file system; below that, the block subsystem provides host-based, network-based, and device-based block aggregation over the storage devices; a services subsystem covers discovery, monitoring, resource management, configuration, security, billing, redundancy management, and high availability)

IBM is committed to deliver best-of-breed products in all aspects of the SNIA storage model, including:
- Block aggregation
- File/record subsystems
- Storage devices/block subsystems
- Services subsystems

In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions adhere to open industry standards. For more information about SMIS/CIM/WBEM, see the SNIA and DMTF Web sites:
- http://www.snia.org
- http://www.dmtf.org

Why open standards?

Products that adhere to open standards offer significantly more benefits than proprietary ones. The history of the information technology industry has shown that open systems offer three key benefits:
- Better solutions at a lower price: By harnessing the resources of multiple companies, more development resources are brought to bear on common client requirements, such as ease of management.
- Improved interoperability: Without open standards, every vendor needs to work with every other vendor to develop interfaces for interoperability. The result is a range of very complex products whose interdependencies make them difficult for clients to install, configure, and maintain.
- Client choice: By complying with standards developed jointly, products interoperate seamlessly with each other, preventing vendors from locking clients into their proprietary platform. As client needs and vendor choices change, products that interoperate seamlessly provide clients with more flexibility and improve co-operation among vendors.

More significantly, given the industry-wide focus on business efficiency, the use of fully integrated solutions developed to open industry standards will ultimately drive down the TCO of storage.

1.2.3 The IBM approach

Deploying a storage network requires many choices. Not only are there SANs and NAS to consider, but also other technologies, such as iSCSI. The choice of when to deploy a SAN, or use NAS, continues to be debated. CIOs and IT professionals must plan to ensure that all the components from multiple storage vendors will work together in a virtualization environment to enhance their existing storage infrastructures, or build new infrastructures, while keeping a sharp focus on business efficiency and business continuance.

The IBM approach to solving these pervasive storage needs is to address the entire problem by simplifying deployment, use, disaster recovery, and management of storage resources. From a TCO perspective, the initial purchase price is becoming an increasingly small part of the equation. As the cost per megabyte of disk drives continues to decrease, the client focus is shifting away from hardware towards software value-add functions, storage management software, and services. The importance of a highly reliable, high performance hardware solution (such as the IBM TotalStorage DS8000) as the guardian of mission-critical data for a business is still a cornerstone concept. However, software is emerging as a critical element of any SAN solution. Management and virtualization software provide advanced functionality for administering distributed IT assets, maintaining high availability, and minimizing downtime.
1.3 Rise of storage virtualization

Storage virtualization techniques are becoming increasingly prevalent in the IT industry today. Storage virtualization forms one of several levels of virtualization in a storage network, and can be described as the abstraction from physical volumes of data storage to a logical level. Storage virtualization addresses the increasing complexity of managing storage, while reducing the associated costs. Its main purpose is the full exploitation of the benefits promised by a SAN. Virtualization enables data sharing, higher availability, disaster tolerance, improved performance, consolidation of resources, policy-based automation, and much more besides: benefits that do not automatically result from the implementation of today's SAN hardware components.

Storage virtualization is possible on several levels of the storage network components, meaning that it is not limited to the disk subsystem. Virtualization separates the representation of storage to the operating system and its users from the actual physical components. This has been available, and taken for granted, in the mainframe environment for many years (for example, DFSMS from IBM, and IBM's VM operating system with minidisks).

1.3.1 What is virtualization?

Storage virtualization gathers the storage into storage pools, which are independent of the actual layout of the storage (that is, the overall file system structure). Because of this independence, new disk systems can be added to a storage network, and data migrated to them, without causing disruption to applications. Since the storage is no longer controlled by individual servers, it can be used by any server as needed. In addition, capacity can be added or removed on demand without affecting the application servers. Storage virtualization simplifies storage management, which has been an escalating expense in the traditional SAN environment. A short conceptual sketch of this pooling idea follows the list below.

1.3.2 Types of storage virtualization

Virtualization can be implemented at the following levels:
- Server level
- Storage level
- Fabric level
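The following minimal Python sketch illustrates the pooling idea described in 1.3.1: capacity from several physical volumes is gathered into one pool, and a logical volume is satisfied from whatever free capacity exists, independent of which physical disks back it. It is a conceptual sketch only; the pool and LUN names are invented, and this is not SAN File System or SAN Volume Controller code.

```python
# Conceptual sketch only: a storage pool that hides the physical layout from its users.
class StoragePool:
    def __init__(self, name):
        self.name = name
        self.free_gb = {}        # physical volume name -> free capacity (GB)
        self.allocations = {}    # logical volume name -> list of (physical volume, GB)

    def add_physical_volume(self, pv_name, capacity_gb):
        # New disk systems can be added to the pool without disrupting applications.
        self.free_gb[pv_name] = self.free_gb.get(pv_name, 0) + capacity_gb

    def allocate(self, lv_name, size_gb):
        # Satisfy the request from any free capacity, wherever it happens to live.
        remaining, extents = size_gb, []
        for pv_name in list(self.free_gb):
            if remaining == 0:
                break
            take = min(self.free_gb[pv_name], remaining)
            if take:
                self.free_gb[pv_name] -= take
                extents.append((pv_name, take))
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        self.allocations[lv_name] = extents
        return extents

pool = StoragePool("gold")
pool.add_physical_volume("array1_lun0", 100)   # invented names
pool.add_physical_volume("array2_lun3", 100)
print(pool.allocate("appserver_vol1", 150))    # spans both arrays transparently
```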
The IBM strategy is to move the intelligence out of the server, eliminating the dependency on having to implement specialized software at the server level. Removing it at the storage level decreases the dependency on implementing RAID subsystems, and alternative disks can be utilized. By implementing at the fabric level, storage control is moved into the network, which opens up virtualization to all and at the same time reduces complexity by providing a single view of storage. The storage network can be used to leverage all kinds of services across multiple storage devices, including virtualization. A high-level view of this is shown in Figure 1-4.

Figure 1-4 Intelligence moving to the network (in a traditional SAN, each server keeps its own file system and device drivers above RAID controllers and disks; with SAN Volume Controller and SAN File System, block virtualization and the common file system move into the storage network, managed alongside hardware element management and Tivoli Storage Management)

The effective management of resources from the data center across the network increases productivity and lowers TCO. In Figure 1-4, you can see how IBM accomplishes this effective management by moving the intelligence from the storage subsystems into the storage network using the SAN Volume Controller, and moving the intelligence of the file system into the storage network using SAN File System. The IBM storage management software, represented in Figure 1-4 as hardware element management and Tivoli Storage Management (a suite of SAN and storage products), addresses administrative costs, downtime, backup and recovery, and hardware management.

The SNIA model (see Figure 1-3) distinguishes between aggregation at the block and file level.

Block aggregation or block level virtualization

The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices, such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can
• 40. be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy. Block aggregation or block level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
Space management through combining or slicing-and-dicing native storage into new, aggregated block storage
Striping through spreading the aggregated block storage across several native storage devices
Redundancy through point-in-time copy and both local and remote mirroring

File aggregation or file level virtualization

The file/record layer in the SNIA model is responsible for packing items, such as files and databases, into larger entities, such as block level volumes and storage devices. File aggregation or file level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes. They can:
Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
Enhance productivity by providing centralized and simplified management through policy-based storage management automation
Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers

1.3.3 Storage virtualization models

Storage virtualization can be broadly classified into two models:
In-band virtualization, also referred to as symmetric virtualization
Out-of-band virtualization, also referred to as asymmetric virtualization

Figure 1-5 shows the two storage virtualization models.

Figure 1-5 In-band and out-of-band models
  • 41. In-band In an in-band storage virtualization implementation, both data and control information flow over the same path. The IBM TotalStorage SAN Volume Controller (SVC) engine is an in-band implementation, which does not require any special software in the servers and provides caching in the network, allowing support of cheaper disk systems. See the redbook IBM TotalStorage SAN Volume Controller, SG24-6423 for further information. Out-of-band In an out-of-band storage virtualization implementation, the data flow is separated from the control flow. This is achieved by storing data and metadata (data about the data) in different places. This involves moving all mapping and locking tables to a separate server (the Metadata server) that contains the metadata of the files. IBM TotalStorage SAN File System is an out-of-band implementation. In an out-of-band solution, the servers (who are clients to the Metadata server) request authorization to data from the Metadata server, which grants it, handles locking, and so on. The servers can then access the data directly without further Metadata server intervention. Separating the flow of control and data in this manner allows the data I/O to use the full bandwidth that a SAN provides, while control I/O goes over a separate network like TCP/IP. For many operations, the metadata controller does not even intervene. Once a client has obtained access to a file, all I/O will go directly over the SAN to the storage devices. Metadata is often referred to as data about the data; it describes the characteristics of stored user data. A Metadata server, in the SAN File System, is a server that off loads the metadata processing from the data-storage environment to improve SAN performance. An instance of the SAN File System runs on each engine, and together the Metadata servers form a cluster.1.4 SAN data sharing issues The term “data sharing” is used somewhat loosely by users and some vendors. It is sometimes interpreted to mean the replication of files or databases to enable two or more users, or applications, to concurrently use separate copies of the data. The applications concerned may operate on different host platforms. Data sharing may also be used to describe multiple users accessing a single copy of a file. This could be called “true data sharing”. In a homogeneous server environment, with appropriate application software controls, multiple servers may access a single copy of data stored on a consolidated storage subsystem. If attached servers are heterogeneous platforms (for example, a mix of UNIX® and Windows®), sharing of data between such unlike operating system environments is complex. This is due to differences in file systems, access controls, data formats, and encoding structures.1.5 IBM TotalStorage Open Software Family Storage and network administrators face tough challenges today. Demand for storage continues to grow, and enterprises require increasingly resilient storage infrastructures to support their on demand business needs. Compliance with legal, governmental, and other industry specific regulations is driving new data retention requirements. The IBM TotalStorage Open Software Family is a comprehensive, flexible storage software solution that can help enterprises address these storage management challenges today. As a first step, IBM offers infrastructure components that adhere to industry standard open14 IBM TotalStorage SAN File System
• 42. interfaces for registering with management software and communicating connection and configuration information. As the second step, IBM offers automated management software components that integrate with these interfaces to collect, organize, and present information about the storage environment. The IBM TotalStorage Open Software Family includes the IBM TotalStorage SAN Volume Controller, IBM TotalStorage SAN File System, and the IBM TotalStorage Productivity Center.

1.5.1 IBM TotalStorage SAN Volume Controller

The IBM TotalStorage SAN Volume Controller (SVC) is an in-band, block-based virtualization product that minimizes the dependency on unique hardware and software, decoupling the storage functions expected in a SAN environment from the storage subsystems and managing storage resources. In a typical non-virtualized SAN, shown to the left of Figure 1-6, servers are mapped to specific devices, and the LUNs defined within the storage subsystem are directly presented to the host or hosts. With the SAN Volume Controller, servers are mapped to virtual disks, thus creating a virtualization layer.

Figure 1-6 Block level virtualization (left panel, "SANs Today": servers are mapped to specific physical disks, that is, physical mapping; right panel, "Block Virtualization": servers are mapped to a virtual disk, that is, logical mapping)
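To make the logical mapping shown in Figure 1-6 concrete, the short sketch below models how a virtualization layer might map a byte offset on a virtual disk onto a set of managed physical disks. It is a minimal conceptual illustration, assuming round-robin striping with a fixed extent size; the disk names, extent size, and mapping scheme are invented for the example and are not SAN Volume Controller internals.

# Conceptual sketch of block-level virtualization: a virtual disk whose
# extents are striped round-robin across several managed (physical) disks.
# All names and sizes are illustrative assumptions, not SVC internals.
EXTENT_SIZE = 16 * 1024 * 1024                              # bytes per extent (assumed)
managed_disks = ["mdisk0", "mdisk1", "mdisk2", "mdisk3"]

def map_virtual_block(virtual_offset):
    """Map a byte offset on the virtual disk to (managed disk, physical offset)."""
    extent_number = virtual_offset // EXTENT_SIZE
    offset_in_extent = virtual_offset % EXTENT_SIZE
    disk = managed_disks[extent_number % len(managed_disks)]   # round-robin striping
    extent_on_disk = extent_number // len(managed_disks)       # extent index on that disk
    return disk, extent_on_disk * EXTENT_SIZE + offset_in_extent

print(map_virtual_block(40 * 1024 * 1024))                  # -> ('mdisk2', 8388608)

Because hosts address only the virtual disk, this mapping can be changed behind the scenes, for example when a managed disk is added or data is migrated, without any change being visible to the host.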
• 43. The IBM TotalStorage SAN Volume Controller is designed to provide a redundant, modular, scalable, and complete solution, as shown in Figure 1-7.

Figure 1-7 IBM TotalStorage SAN Volume Controller (diagram: a redundant, modular, scalable, complete solution managing a pool of managed disks)

Each SAN Volume Controller consists of one or more pairs of engines, each pair operating as a single controller with fail-over redundancy. A large read/write cache is mirrored across the pair, and virtual volumes are shared between a pair of nodes. The pool of managed disks is controlled by a cluster of paired nodes. The SAN Volume Controller is designed to provide complete copy services for data migration and business continuity. Since these copy services operate on the virtual volumes, dramatically simpler replication configurations can be created using the SAN Volume Controller, rather than replicating each physical volume in the managed storage pool.

The SAN Volume Controller improves storage administrator productivity, provides a common base for advanced functions, and provides for more efficient use of storage. The SAN Volume Controller consists of software and hardware components delivered as a packaged appliance solution in a variety of form factors. The IBM SAN Volume Controller solution can be preconfigured to the client's specification, and will be installed by an IBM customer engineer.

1.5.2 IBM TotalStorage SAN File System

The IBM TotalStorage SAN File System architecture brings the benefits of the existing mainframe system-managed storage (DFSMS) to the SAN environment. Features such as policy-based allocation, volume management, and file management have long been available on IBM mainframe systems. However, the infrastructure for such centralized, automated management has been lacking in the open systems world of Linux®, Windows, and UNIX. On conventional systems, storage management is platform dependent. IBM TotalStorage SAN File System provides a single, centralized point of control to better manage files and data, and is platform independent. Centralized file and data management dramatically simplifies storage administration and lowers TCO.
• 44. SAN File System is a common file system specifically designed for storage networks. By managing file details (via the metadata controller) on the storage network instead of in individual servers, the SAN File System design moves the file system intelligence into the storage network where it can be available to all application servers. Figure 1-8 shows the file level virtualization aggregation, which provides immediate benefits: a single global namespace and a single point of management. This eliminates the need to manage files on a server by server basis. A global namespace is the ability to access any file from any client system using the same name.

Figure 1-8 File level virtualization (left panel, "Block virtualization: an important step": servers are mapped to a virtual disk, easing the administration of the physical assets; right panel, "Common file system: SAN FS extends the value": server file systems are enhanced through a common file system and single name space)

IBM TotalStorage SAN File System automates routine and error-prone tasks, such as file placement, and monitors out-of-space conditions. IBM TotalStorage SAN File System will allow true heterogeneous file sharing, where reads and writes on the same data can be done by different operating systems.

The SAN File System Metadata server (MDS) is a server cluster attached to a SAN that communicates with the application servers to serve the metadata. Other than installing the SAN File System client on the application servers, no changes are required to applications to use SAN File System, since it emulates the syntax and behavior of local file systems.
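Because the global namespace is presented through each client's native file system interface, an application simply sees one directory tree that looks the same on every client. The fragment below sketches that view from a UNIX or Linux client; the mount point /sfs is an assumption made for this illustration, since the actual attach point is chosen when the client is configured.

import os

SANFS_ROOT = "/sfs"   # assumed attach point of the global namespace on this client

# Filesets appear as ordinary directories directly under the root; the same
# names and paths are visible from every SAN File System client.
for fileset in sorted(os.listdir(SANFS_ROOT)):
    print(os.path.join(SANFS_ROOT, fileset))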
• 45. Figure 1-9 shows the SAN File System environment.

Figure 1-9 IBM TotalStorage SAN File System architecture (diagram: external clients access the global namespace via NFS/CIFS; SAN File System clients and the admin console communicate with the 2-8 server metadata cluster over the IP network, while user data flows over the SAN, via FC or iSCSI through an FC/iSCSI gateway, to the system (metadata) storage and multiple, heterogeneous user storage pools)

In summary, IBM TotalStorage SAN File System is a common SAN-wide file system that permits centralization of management and improved storage utilization at the file level. IBM TotalStorage SAN File System is configured in a high availability configuration with clustering for the Metadata servers, providing redundancy and fault tolerance. IBM TotalStorage SAN File System is designed to provide policy-based storage automation capabilities for provisioning and data placement, nondisruptive data migration, and a single point of management for files on a storage network.

1.5.3 Comparison of SAN Volume Controller and SAN File System

Both the IBM SAN Volume Controller and IBM SAN File System provide storage virtualization capabilities that address critical storage management issues, including:
Optimized storage resource utilization
Improved application availability
Enhanced storage personnel productivity

The IBM SAN Volume Controller addresses volume related tasks that impact these requirements, including:
Add, replace, remove storage arrays
Add, delete, change LUNs
Add capacity for applications
Manage different storage arrays
Manage disaster recovery tools
Manage SAN topology
• 46. Optimize storage performance

The IBM SAN File System addresses file related tasks that impact these same requirements. For example:
Extend or truncate file system
Format file system
De-fragmentation
File-level replication
Data sharing
Global name space
Data lifecycle management

A summary of SAN Volume Controller and SAN File System benefits can be seen in Figure 1-10.

Figure 1-10 Summary of SAN Volume Controller and SAN File System benefits (IBM TotalStorage Virtualization Family)
Benefit | SAN Volume Controller | SAN File System
Create a single pool of storage from multiple disparate storage devices | Virtual Volumes from the storage pool | -
File, Data sharing across heterogeneous Servers, OS | - | Single SAN-wide File System, global namespace
Centralized Management | Single interface for the storage pool | Single view of file space across heterogeneous servers
Improved Capacity Utilization | Pools volumes across disparate storage devices | Reduces storage needs at File Level
Improved Application Availability | No downtime to manage LUNs, migrate volumes, add storage | Non-disruptive additions/changes to file space, less out-of-space conditions
Single, Cost Effective set of Advanced Copy Services | Volume-based Peer-to-Peer Remote Copy and FlashCopy® | File-based space-efficient FlashCopy®
Policy Based Automation | - | Files, Data, Quality-of-Service based pooling
SAN Volume Controller and SAN File System provide complementary benefits to address Volume and File level issues.

1.5.4 IBM TotalStorage Productivity Center

The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center is comprised of a user interface designed for ease of use, and the following components:
TotalStorage Productivity Center for Fabric
TotalStorage Productivity Center for Data
TotalStorage Productivity Center for Disk
TotalStorage Productivity Center for Replication
  • 47. 1.5.5 TotalStorage Productivity Center for Fabric TotalStorage Productivity Center for Fabric is designed to build and maintain a complete, current map of your storage network. TPC for Fabric can automatically determine both the physical and logical connections in your storage network and display the information in both a topological format and a hierarchical format. Looking outward from the SAN switch, TPC for Fabric can answer questions that help administrators validate proper configuration of your open storage network: What hosts are attached to your storage network and how many HBAs does each host have? What firmware levels are loaded on all your HBAs? What firmware levels are loaded on all your SAN switches? How are the logical zones configured? Looking downward from the host, TPC for Fabric answers administrator questions that arise when changes occur in the storage network that could affect host access to storage: Does a given host have alternate paths through the storage network? Do those alternate paths use alternate switches? If available, are those alternate paths connected to alternate controllers on the storage device? Looking upward from the storage device, TPC for Fabric answers administrator questions that arise when changes happen in the storage network that could affect the availability of stored data: What hosts are connected to a given storage device? What hosts have access to a given storage logical unit (LUN)? Another key function of the TPC for Fabric is “change validation”. TPC for Fabric detects changes in the storage network, both planned and unplanned, and it can highlight those changes for administrators. Figure 1-11 on page 21 shows a sample topology view provided by TPC for Fabric.20 IBM TotalStorage SAN File System
  • 48. Figure 1-11 TPC for Fabric1.5.6 TotalStorage Productivity Center for Data TotalStorage Productivity Center for Data is an analyzing software tool that helps storage administrators to manage the content of systems from a logical perspective. TPC for Data improves the storage return on investment by: Delaying purchases of disks: After performing housecleaning, you can satisfy the demand for more storage from existing (now freed-up) disks. Depending on your particular situation, you may discover you have more than adequate capacity and can defer the capital expense of additional disks for a considerable time. Lowering the storage growth rate: Because you are now monitoring and keeping better control of your storage according to policies in place, it should grow at a lower rate than before. Lowering disk costs: With TPC for Data, you will know what the real quarter-to-quarter growth rates actually are, instead of approximating (best-effort basis) once per year. You can project your annual demand with a good degree of accuracy, and can negotiate an annual contract with periodic deliveries, at a price lower than you would have paid for periodic emergency purchases. Lowering storage management costs: The manual effort is greatly reduced as most functions, such as gathering the information and analyzing it, are automated. Automated Alerts can be set up so the administrator only needs to get involved in exceptional conditions. Chapter 1. Introduction 21
• 49. Figure 1-12 shows the TPC for Data dashboard.

Figure 1-12 TPC for Data

Before TPC for Data was available to manage your storage, it was difficult to get advance warning of out-of-space conditions on critical application servers. If an application did run out of storage on a server, it would typically just stop. This meant that revenue generated from that application, or the service provided by it, also stopped, and it incurred a high cost to fix, as fixing unplanned outages is usually expensive. With TPC for Data, applications will not run out of storage. You will know when they need more storage, and can get it at a reasonable cost before an outage occurs. You will avoid the loss of revenue and services, plus the additional costs associated with unplanned outages.

1.5.7 TotalStorage Productivity Center for Disk

TotalStorage Productivity Center for Disk is designed to enable administrators to manage storage area network (SAN) storage components based on the Storage Networking Industry Association (SNIA) Storage Management Interface Specification (SMI-S). TPC for Disk also includes the BonusPack for TPC for Fabric, bringing together device management with fabric management. This combination is designed to allow a storage administrator to configure storage devices from a single point, monitor SAN status, and provide operational support to storage devices.
  • 50. Managing a virtualized SANIn a pooled or virtualized SAN environment, multiple devices work together to create astorage solution. TPC for Disk is designed to provide integrated administration, optimization,and replication features for these virtualization solutions.TPC for Disk is designed to provide an integrated view of an entire SAN system to helpadministrators perform complex configuration tasks and productively manage the SANinfrastructure. TPC for Disk offers features that can help simplify the establishment,monitoring, and control of disaster recovery and data migration solutions, because thevirtualization layers support advanced replication configurations.TPC for Disk includes a device management function, which discovers supported devices,collects asset, configuration, and availability data from the supported devices, and provides atopographical view of the storage usage relationships among these devices. Theadministrator can view essential information about storage devices discovered by TPC forDisk, examine the relationships among the devices, and change their configurations.The TPC for Disk device management function provides discovery of storage devices thatadhere to the SNIA SMI-S standards. The function uses the Service Location Protocol (SLP)to discover supported storage subsystems on the SAN, create managed objects to representthese discovered devices, and display them as individual icons in the TPC Console.Device management in TPC offers: Centralized access to information from storage devices Enhanced storage administrator productivity with integrated volume configuration Outstanding problem determination with cross-device configuration Centralized management of storage devices with browser launch capabilitiesTPC for Disk also provides a performance management function: a single, integrated consolefor the performance management of supported storage devices.The performance management function monitors metrics such as I/O rates and cacheutilization, and supports optimization of storage through the identification of the best LUNs forstorage allocation. It stores received performance statistics in database tables for later use,and analyzes and generates reports on monitored devices for display in the TPC Console.The administrator can configure performance thresholds for the devices based onperformance metrics and the system can generate alerts when these thresholds areexceeded. Actions can then be configured to trigger from these events, for example, sende-mail or an SNMP trap. The performance management function also provides gauges(graphs) to track real-time performance. These gauges are updated when new data becomesavailable.The performance management function provides: Proactive performance management Performance metrics monitoring across storage subsystems from a single console Timely alerts to enable event action based on client policies Focus on storage optimization through identification of the best LUN for a storage allocation Chapter 1. Introduction 23
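As a rough illustration of the threshold behaviour described above, the sketch below compares collected performance samples against configured limits and produces alert messages. It is a conceptual model only and not the TPC for Disk API; the metric names, threshold values, and device name are invented.

# Conceptual model of threshold-based performance alerting.
# Metric names, limits, and devices are assumptions, not TPC definitions.
thresholds = {"io_rate_ops": 5000, "cache_utilization_pct": 90}

def check_sample(device, sample):
    """Return alert messages for any metric in this sample that exceeds its threshold."""
    alerts = []
    for metric, limit in thresholds.items():
        if sample.get(metric, 0) > limit:
            alerts.append(f"{device}: {metric}={sample[metric]} exceeds threshold {limit}")
    return alerts

for alert in check_sample("subsystem-1", {"io_rate_ops": 6200, "cache_utilization_pct": 75}):
    print(alert)   # a real system would trigger an action here, such as e-mail or an SNMP trap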
• 51. Figure 1-13 shows the TPC main window with the performance management functions expanded.

Figure 1-13 TPC for Disk functions

1.5.8 TotalStorage Productivity Center for Replication

Data replication is a core function required for data protection and disaster recovery. TotalStorage Productivity Center for Replication (TPC for Replication) is designed to control and monitor the copy services operations in storage environments. It provides advanced copy services functions for supported storage subsystems on the SAN. Today, it provides Continuous Copy and Point-in-Time Copy services. Specific support is for IBM FlashCopy® for ESS and PPRC (Metro Mirror) for ESS. TPC for Replication provides configuration assistance by automating the source-to-target pairing setup, as well as monitoring and tracking the replication operations.

TPC for Replication helps storage administrators keep data on multiple related volumes consistent across storage systems. It enables freeze-and-go functions to be performed with consistency on multiple pairs when errors occur during the replication (mirroring) operation. And it helps automate the mapping of source volumes to target volumes, allowing a group of source volumes to be automatically mapped to a pool of target volumes. With TPC for Replication, the administrator can:
Keep data on multiple related volumes consistent across storage subsystems
Perform freeze-and-go functions with consistency on multiple pairs when errors occur during a replication operation

Figure 1-14 shows the TPC main window with the Replication management functions expanded.
  • 52. Figure 1-14 TPC for Replication1.6 File system general terminology Since SAN File System implements a SAN-based, global namespace file system, it is important here to understand some general file system concepts and terms.1.6.1 What is a file system? A file system is a software component that builds a logical structure for storing files on storage devices (typically disk drives). File systems hide the underlying physical organization of the storage media and present abstractions such as files and directories, which are more easily understood by users. Chapter 1. Introduction 25
  • 53. Generally, it appears as a hierarchical structure in which files and folders (or directories) can be stored. The top of the hierarchy of each file system is usually called “root”. Figure 1-15 shows an example of a Windows system hierarchical view, also commonly known as the tree or directory. Figure 1-15 Windows system hierarchical view A file system specifies naming conventions for naming the actual files and folders (for example, what characters are allowed in file and directory names; are spaces permitted?) and defines a path that represents the location where a specific file is stored. Without a file system, files would not even have names and would appear as nameless blocks of data randomly stored on a disk. However, a file system is more than just a directory tree or naming convention. Most file systems provide security features, such as privileges and access control for: Access to files based on user/group permissions Access Control Lists (ACLs) to allow/deny specific actions on file(s) to specific user(s) Figure 1-16 on page 27 and Example 1-1 on page 27 show Windows and UNIX system security and file permissions, respectively.26 IBM TotalStorage SAN File System
• 54. Figure 1-16 Windows file system security and permissions

Example 1-1 UNIX file system security and permissions
# ls -l
total 2659
-rw-------   1 root   system   31119 Sep 15 16:11 .TTauthority
-rw-------   1 root   system     196 Sep 15 16:11 .Xauthority
drwxr-xr-x  10 root   system     512 Sep 15 16:11 .dt
-rwxr-xr-x   1 root   system    3970 Apr 17 11:36 .dtprofile
-rw-------   1 root   system    3440 Sep 16 08:16 .sh_history
-rw-r--r--   1 root   system     115 May 13 14:12 .xerrors
drwxr-xr-x   2 root   system     512 Apr 17 11:36 TT_DB
-rw-r--r--   1 root   system    3802 Sep 04 09:51 WebSM.pref
-rwxrwxrwx   1 root   system    6600 May 14 08:01 aix_sdd_data_gatherer
drwxr-x---   2 root   audit      512 Apr 16 2001  audit
lrwxrwxrwx   1 bin    bin          8 Apr 17 09:35 bin -> /usr/bin
drwxr-xr-x   2 root   system     512 Apr 18 08:30 cdrom
drwxrwxr-x   5 root   system    3072 Sep 15 15:00 dev
-rw-r--r--   1 root   system     108 Sep 15 09:16 dposerv.lock
drwxr-xr-x   2 root   system     512 May 13 15:12 drom
drwxr-xr-x   2 root   system     512 May 29 13:40 essdisk1fs

1.6.2 File system types

File systems have a wide variety of functions and capabilities and can be broadly classified into:
Local file systems
LAN file systems
SAN file systems
• 55. Local file systems

A local file system is tightly integrated with the operating system, and is therefore usually specific to that operating system. A local file system provides services to the system where the data is installed. All data and metadata are served over the system’s internal I/O path. Some examples of local file systems are Windows NTFS, DOS FAT, Linux ext3, and AIX JFS.

LAN file systems

LAN file systems allow computers attached via a LAN to share data. They use the LAN for both data and metadata. Some LAN file systems also implement a global namespace, like AFS®. Examples of LAN file systems are Network File System (NFS), Andrew File System (AFS), Distributed File System (DFS™), and Common Internet File System (CIFS).

Network file sharing appliances

A special case of a LAN file system is a specialized file serving appliance, such as the IBM N3700 and similar from other vendors. These provide CIFS and NFS file serving capabilities using both LAN and iSCSI protocols.

SAN file systems

SAN file systems allow computers attached via a SAN to share data. They typically separate the actual file data from the metadata, using the LAN path to serve the metadata, and the SAN path for the file data. The IBM TotalStorage SAN File System is a SAN file system. Figure 1-17 shows the different file system types.

Figure 1-17 File system types (panels: Local File Systems are an integral part of the OS, for example NTFS, FAT, JFS; LAN File Systems use the LAN for data and metadata, for example NFS, AFS, DFS, CIFS; SAN File Systems use the SAN for data and the LAN for metadata, for example SAN FS with its Metadata server and a virtualized storage subsystem)

1.6.3 Selecting a file system

The factors that determine which type of file system is most appropriate for an application or business requirement include:
Volume of data being processed
Type of data being processed
Patterns of data access
Availability requirements
Applications involved
Types of computers requiring access to the file system
  • 56. LAN file systems are designed to provide data access over the IP network. Two of the mostcommon protocols are Network File System (NFS) and Common Internet File System (CIFS).Typically, NFS is used for UNIX servers and CIFS is used for Windows servers. Tools exist toallow Windows servers to support NFS access and UNIX/Linux servers to support CIFSaccess, which enable these different operating systems to work with each others’ files.Local file systems’ limitations surface when business requirements mandate the need for arapid increase in data storage or sharing of data among servers. Issues may include: Separate “islands of storage” on each host. Because local file systems are integrated with the servers’ operating system, each file system must be managed and configured separately. In situations where two or more file system types are in use (for example, Windows and Sun™ Servers), operators require training and skills in each of these operating systems to complete even common tasks such as adding additional storage capacity. No file sharing between hosts. Inherently difficult to manage.LAN file systems can address some of the limitations of local file systems by adding the abilityto share among homogenous systems. In addition, there are some distributed file systemsthat can take advantage of both network-attached and SAN-attached disk. Some restrictionsof LAN file systems include: In-band cluster architectures are inherently more difficult to scale than out-of-band SAN file system architectures. Performance is impacted as these solutions grow. Homogeneous file-sharing only. There is no (or limited) ability to provide file-locking and security between mixed operating systems. Each new cluster creates an “island of storage” to manage. As the number of “islands” grow, similar issues as with local file systems tend to increase. File-level policy-based placement is inherently more difficult. Clients still use NFS/CIFS protocols with the inherent limitations of those protocols (security, locking, and so on) File system and storage resources are not scalable beyond a single NAS appliance. An NAS appliance must handle blocks for non-SAN attached clients.SAN file systems address the limitations of local and network file systems. They enable 7x24availability, increasing rates of change to the environment, and reduction of managementcost.The IBM SAN File System offers these advantages: Single global view of file system. This enables tremendous flexibility to increase or decrease the amount of storage available to any particular server as well as full file sharing (including locking) between heterogeneous servers. Metadata Server processes only metadata operations. All data I/O occurs at SAN speeds. Linear scalability of global file system can be achieved by adding Metadata Server nodes. Advanced, centralized, file-granular, and policy-based management. Automated lifecycle management of data can take full advantage of tiered storage. Nondisruptive management of physical assets provides the ability to add, delete, and change the disk subsystem without disruption to the application servers. Chapter 1. Introduction 29
• 57. 1.7 Filesets and the global namespace

A key concept for SAN File System is the global namespace. Traditional file systems and file sharing systems operate separate namespaces, that is, each file is tied or mapped to the server which hosts it, and the clients must know which server this is. For example, in Figure 1-17 on page 28, in a LAN file system, user Iva has files stored both on File Server A and File Server B. She would need to specify the particular file server in the access path for each file. SAN File System, by contrast, presents a global namespace: there is one file structure (subdivided into parts called filesets), which is available simultaneously to all the clients. This is shown in Figure 1-18.

Figure 1-18 Global namespace (a single ROOT with fileset 1 through fileset 6 beneath it)

Filesets are subsets of the global namespace. To the clients, the filesets appear as normal directories, where they can create their own subdirectories, place files, and so on. But from the SAN File System server perspective, the fileset is the building-block of the global namespace structure, which can only be created and deleted by SAN File System administrators. Filesets represent units of workload for metadata; therefore, by dividing the files into filesets, you can split the task of serving the metadata for the files across multiple servers. There are other implications of filesets; we will discuss them further in Chapter 2, “SAN File System overview” on page 33.

1.8 Value statement of IBM TotalStorage SAN File System

As the data stored in the open systems environment continues to grow, new paradigms for the attachment and management of data and the underlying storage of the data are emerging. One of the most commonly used technologies in this area is the Storage Area Network (SAN). Using a SAN to connect large amounts of storage to large numbers of computers gives us the potential for new approaches to accessing, sharing, and managing our data and storage. However, existing operating systems and file systems are not built to exploit these new capabilities. IBM TotalStorage SAN File System is a SAN based distributed file system and storage management solution that enables many of the promises of SANs, including shared heterogeneous file access, centralized management, and enterprise-wide scalability. In addition, SAN File System leverages the policy-based storage and data management
  • 58. concepts found in mainframe computers and makes them available in the open systemsenvironment.IBM TotalStorage SAN File System can provide an effective solution for clients with a smallnumber of computers and small amounts of data, and it can scale up to support clients withthousands of computers and petabytes of data.IBM TotalStorage SAN File System is a member of the IBM TotalStorage Virtualization Familyof solutions. The SAN File System has been designed as a network-based heterogeneous filesystem for file aggregation and data sharing in an open environment. As a network-basedheterogeneous file system, it provides: High performance data sharing for heterogeneous servers accessing SAN-attached storage in an open environment. A common file system for UNIX and Windows servers with a single global namespace to facilitate data sharing across servers. A highly scalable out-of-band solution (see 1.3.3, “Storage virtualization models” on page 13) supporting both very large files and very large numbers of files without the limitations normally associated with NFS or CIFS implementations.IBM TotalStorage SAN File System is a leading edge solution that is designed to: Lower the cost of storage management Enhance productivity by providing centralized and simplified management through policy-based storage management automation Improve storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers Improve application availability Simplify and lower the cost of data backups through application server free backup and built in file-based FlashCopy images Allow data sharing and collaboration across servers with high performance and full locking support Eliminate data migration during application server consolidation Provide a scalable and secure infrastructure for storage and data on demandIBM TotalStorage SAN File System solution includes a Common Information Model (CIM)Agent, supporting storage management by products based on open standards for units thatcomply with the open standards of the Storage Network Industry Association (SNIA)Common Information Model. Chapter 1. Introduction 31
  • 60. 2 Chapter 2. SAN File System overview In this chapter, we provide an overview of the SAN File System Version 2.2.2, including these topics: Architecture SAN File System Version 2.2, V2.2.1, and V2.2.2 enhancements overview Components: Hardware and software, supported storage, and clients Concepts: Global namespace, filesets, and storage pool Supported storage devices Supported clients Summary of major features – Direct data access – Global namespace (scalability for growth) – File sharing – Policy based automatic placement – Lifecycle management© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 33
  • 61. 2.1 SAN File System product overview The IBM TotalStorage SAN File System is designed on industry standards so it can: Allow data sharing and collaboration across servers over the SAN with high performance and full file locking support, using a single global namespace for the data. Provide more effective storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers. Improve productivity and reduce the “pain” for IT storage and server management staff by centralizing and simplifying management through policy-based storage management automation, thus significantly lowering the cost of storage management. Facilitate application server and storage consolidation across the enterprise to scale the infrastructure for storage and data on demand. Simplify and lower the cost of data backups through built-in, file-based FlashCopy image function. Eliminate data migration during application server consolidation, and also reduce application downtime and failover costs. SAN File System is a multiplatform, robust, scalable, and highly available file system, and is a storage management solution that works with Storage Area Networks (SANs). It uses SAN technology, which allows an enterprise to connect a large number of computers and share a large number of storage devices, via a high-performance network. With SAN File System, heterogeneous clients can access shared data directly from large, high-performance, high-function storage systems, such as IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), and IBM TotalStorage DS4000 (formerly IBM TotalStorage FAStT), as well as non-IBM storage devices. The SAN File System is built on a Fibre Channel network, and is designed to provide superior I/O performance for data sharing among heterogeneous computers. SAN File System differs from conventional distributed file systems in that it uses a data-access model that separates file metadata (information about the files, such as owner, permissions, and the physical file location) from actual file data (contents of the files). The metadata is provided to clients by MDSs; the clients communicate with the MDSs only to get the information they need to locate and access the files. Once they have this information, the SAN File System clients access data directly from storage devices via the clients’ own direct connection to the SAN. Direct data access eliminates server bottlenecks and provides the performance necessary for data-intensive applications. SAN File System presents a single, global namespace to clients where they can create and share data, using uniform file names from any client or application. Furthermore, data consistency and integrity is maintained through SAN File System’s management of distributed locks and the use of leases. SAN File System also provides automatic file placement through the use of policies and rules. Based on rules specified in a centrally-defined and managed policy, SAN File System automatically stores data on devices in storage pools that are specifically created to provide the capabilities and performance appropriate for how the data is accessed and used.34 IBM TotalStorage SAN File System
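The placement behaviour described above amounts to evaluating an ordered set of rules when a file is created and selecting the storage pool of the first rule that matches. The sketch below models that evaluation in simplified form; the rule attributes, patterns, and pool names are invented, and the code does not reproduce the actual SAN File System policy syntax.

import fnmatch

# Conceptual model of first-match file placement: not SAN File System syntax.
placement_rules = [
    {"fileset": "db_fileset", "pattern": "*",     "pool": "high_perf_pool"},
    {"fileset": "*",          "pattern": "*.tmp", "pool": "scratch_pool"},
    {"fileset": "*",          "pattern": "*",     "pool": "default_pool"},   # catch-all
]

def choose_pool(fileset, filename):
    """Return the storage pool for a newly created file (first matching rule wins)."""
    for rule in placement_rules:
        if fnmatch.fnmatch(fileset, rule["fileset"]) and fnmatch.fnmatch(filename, rule["pattern"]):
            return rule["pool"]
    return "default_pool"

print(choose_pool("db_fileset", "orders.dbf"))   # -> high_perf_pool
print(choose_pool("home", "scratch.tmp"))        # -> scratch_pool

Because the rules are held centrally by the MDS cluster rather than on each application server, changing where new files land is a single administrative action rather than a per-server reconfiguration.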
• 62. 2.2 SAN File System V2.2 enhancements overview

In addition to the benefits listed above, enhancements of SAN File System V2.2 include:
Support for SAN File System clients on AIX 5L™ V5.3, SUSE Linux Enterprise Server 8 SP4, Red Hat Enterprise Linux 3, Windows 2000/2003, and Solaris™ 9
Support for iSCSI attached clients and iSCSI attached user data storage
Support for IBM storage and select non-IBM storage and multiple types of storage concurrently for user data storage
Support for an unlimited amount of storage for the user data
Support for multiple SAN storage zones for enhanced security and more flexible device support
Support for policy-based movement of files between storage pools
Support for policy-based deletion of files
Ability to move or defragment individual files
Improved heterogeneous file sharing with cross platform user authentication and security permissions between Windows and UNIX environments
Ability to export the SAN File System global namespace using Samba 3.0 on the following SAN File System clients: AIX 5L V5.2 and V5.3 (32- and 64-bit), Red Hat EL 3.0, SUSE Linux Enterprise Server 8.0, and Sun Solaris 9
Improved globalization support, including Unicode fileset attach point names and Unicode file name patterns in policy rules

2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview

MDS support for SLES9 as well as SLES8. Clients who remain with SLES8 will need to upgrade to SLES8 SP4.
Support for xSeries® 365 as Metadata server.
Support for new IBM disk hardware: IBM TotalStorage DS6000 and IBM TotalStorage DS8000.
Redundant Ethernet support on the MDSs (Linux Ethernet bonding).
Improved installation: A new loadcluster function automatically installs the SAN File System software and its prerequisites across the entire cluster from one MDS.
Preallocation policies to improve performance of writing large new files.
MDSs support (and require) a TCP/IP interface to RSA cards.
Support for SAN File System client on zSeries Linux SLES8 and pSeries Linux SLES8.
Microsoft® cluster support for SAN File System clients on Windows 2000 and Windows 2003.
Local user authentication option: LDAP is no longer required for the authentication of administrative users.
Virtual I/O device support on AIX.
Support for POSIX direct I/O file system interface calls on Intel® 32-bit Linux.
Japanese translation of Administrator interfaces: GUI and CLI at the V2.2 level.
• 63. 2.4 SAN File System architecture

SAN File System architecture and components are illustrated in Figure 2-1. Computers that want to share data and have their storage centrally managed are all connected to the SAN. In SAN File System terms, these are known as clients, since they access SAN File System services, although in the enterprise context, they would most likely be, for example, database servers, application servers, or file servers.

Figure 2-1 SAN File System architecture (diagram: external clients access the global namespace via NFS/CIFS; SAN File System clients and the admin console communicate with the 2-8 server metadata cluster over the IP network, while user data flows over the SAN, via FC or iSCSI through an FC/iSCSI gateway, to the system (metadata) storage and multiple, heterogeneous user storage pools)

In Figure 2-1, we show five such clients, each running a SAN File System currently supported client operating system. The SAN File System client software enables them to access the global namespace through a virtual file system (VFS) on UNIX/Linux systems and an installable file system (IFS) on Windows systems. This layer (VFS/IFS) is built by the OS vendors for use specifically for special-purpose or newer file systems.

There are also special computers called Metadata server (MDS) engines that run the Metadata server software, as shown in the left side of the figure. The MDSs manage file system metadata (including file creation time, file security information, file location information, and so on), but the user data accessed over the SAN by the clients does not pass through an MDS. This eliminates the performance bottleneck from which many existing shared file system approaches suffer, giving near-local file system performance.

MDSs are clustered for scalability and availability of metadata operations and are often referred to as the MDS cluster. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs. Each MDS runs on a separate physical engine in the cluster. Additional MDSs can be added as required if the workload grows, providing solution scalability. Storage volumes that store the SAN File System clients’ user data (User Pools) are separated from storage volumes that store metadata (System Pool), as shown in Figure 2-1.
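The separation of the metadata path from the data path can be summarized in a small sketch: the client asks an MDS for permissions, a lock, and the location of a file's blocks, then performs the block I/O itself over the SAN. This is a conceptual model only; the class and method names are invented and do not correspond to the actual SAN File System client or MDS interfaces.

# Conceptual model of out-of-band access: control traffic to the MDS over the LAN,
# data traffic directly over the SAN.  Names and interfaces are invented.
class MetadataServer:
    """Stands in for the MDS cluster: hands out file locations and locks."""
    def open_file(self, path, mode):
        # In the real system this covers permission checks, a lease/lock,
        # and the volume and block extents where the file's data lives.
        return {"lock": "shared", "volume": "userpool_vol1", "extents": [(0, 1024 * 1024)]}

class SanFsClient:
    def __init__(self, mds):
        self.mds = mds

    def read(self, path):
        meta = self.mds.open_file(path, "r")       # control path: LAN (TCP/IP)
        data = bytearray()
        for offset, length in meta["extents"]:     # data path: directly over the SAN
            data += self.read_blocks(meta["volume"], offset, length)
        return bytes(data)

    def read_blocks(self, volume, offset, length):
        # Placeholder for direct block I/O to the SAN-attached volume.
        return b"\0" * length

client = SanFsClient(MetadataServer())
payload = client.read("/sfs/fileset1/bigfile.dat")  # no file data flows through the MDS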
• 64. The Administrative server allows SAN File System to be remotely monitored and controlled through a Web-based user interface called the SAN File System console. The Administrative server also processes requests issued from an administrative command line interface (CLI), which can also be accessed remotely. This means the SAN File System can be administered from almost any system with suitable TCP/IP connectivity. The Administrative server can use local authentication (standard Linux user IDs and groups) to look up authentication and authorization information about the administrative users. Alternatively, an LDAP server (client supplied) can be used for authentication. The primary Administrative server runs on the same engine as the master MDS. It receives all requests issued by administrators and also communicates with Administrative servers that run on each additional server in the cluster to perform routine requests.

2.5 SAN File System hardware and software prerequisites

The SAN File System is delivered as a software only package. SAN File System software requires the following hardware and software to be supplied and installed on each MDS in advance by the customer. SAN File System also includes software for an optional Master Console; if used, then the customer must also provide the prerequisite hardware and software for this, as described in 2.5.2, “Master Console hardware and software” on page 38.

2.5.1 Metadata server

SAN File System V2.2.2 supports from two to eight Metadata servers (MDS) running on hardware that must be supplied by the client. The hardware servers that run the MDSs are generically known as engines. Each engine must be a rack-mounted, high-performance, and highly-reliable Intel server. The engine can be a SAN File System Metadata Server engine (4146 Model 1RX), an IBM eServer xSeries 345 server, an IBM eServer xSeries 346 server, an IBM eServer xSeries 365 server, or equivalent servers with the hardware components listed below. SAN File System V2.2 will support a cluster of MDSs consisting of both 4146-1RX engines, IBM eServer xSeries servers, and equivalents. If not using the IBM eServer xSeries 345, 346, 365, or 4146-Model 1RX, the following hardware components are required for each MDS:
Two processors of minimum 3 GHz each.
Minimum of 4 GB of system memory.
Two internal hard disk drives with mirroring capability, minimum 36 GB each. These are used to install the MDS operating system, and should be set up in a mirrored (RAID 1) configuration.
Two power supplies (optional, but highly recommended for redundancy).
A minimum of one 10/100/1000 Mb port for Ethernet connection (Fibre or Copper); however, two Ethernet connections are recommended to take advantage of high-availability capabilities with Ethernet bonding.
Two 2 Gb Fibre Channel host bus adapter (HBA) ports. These must be compatible with the SUSE operating system and the storage subsystems in your SAN environment. They must also be capable of running the QLogic 2342 device driver. Suggested adapters: QLogic 2342 or IBM part number 24P0960.
CD-ROM and diskette drives.
  • 65. Remote Supervisory Adapter II card (RSA II). This must be compatible with the SUSE operating system. Suggested card: IBM part number 59P2984 for x345, 73P9341 - IBM Remote Supervisor Adapter II Slim line for x346. Certified for SUSE Linux Enterprise Server 8, with Service Pack 4 (kernel level 2.4.21-278) or SUSE Linux Enterprise Server 9, Service Pack 1, with kernel level 2.6.5-7.151. Each MDS must have the following software installed: SUSE Linux Enterprise Server 8, Service Pack 4, kernel level 2.4.21-278, or SUSE Linux Enterprise Server 9, Service Pack 1, kernel level 2.6.5-7.151. Multi-pathing driver for the storage device used for the metadata LUNs. At the time of writing, if using DS4x000 storage for metadata LUNs, then either RDAC V9.00.A5.09 (SLES8) or RDAC V9.00.B5.04 (SLES9) is required. If using other IBM storage for metadata LUNs (ESS, SVC, DS6000, or DS8000), then SDD V1.6.0.1-6 is required. However, these levels will change over time. Always check the release notes distributed with the product CD, as well as the SAN File System for the latest supported device driver level. More information about the multi-pathing driver can be found in 4.4, “Subsystem Device Driver” on page 109 and 4.5, “Redundant Disk Array Controller (RDAC)” on page 119.2.5.2 Master Console hardware and software The SAN File System V2.2.2 Master Console is an optional component of a SAN File System configuration for use as a control point. If deployed, it requires a client-supplied, high performance, and highly reliable rack-mounted Intel Pentium® 4 processor server. This can be an IBM ^ xSeries 305 server, a SAN File System V1.1 or V2.1 Master Console, 4146-T30 feature #4001, a SAN Volume Controller Master Console, or equivalent Intel server with the following capabilities: At least 2.6 GHz processor speed At least 1 GB of system memory Two 40 GB IDE hard disk drives CD-ROM drive Diskette drive Two 10/100/1000 Mb ports for Ethernet connectivity (Copper or Fiber) Two Fibre Channel Host Bus Adapter (HBA) ports Monitor and keyboard: IBM Netbay 1U Flat Panel Monitor Console Kit with keyboard or equivalent If a SAN Volume Controller Master Console is already available, it can be shared with SAN File System, since it meets the hardware requirements. The Master Console, if deployed, must have the following software installed: Microsoft Windows 2000 Server Edition with Service Pack 4 or higher, or Microsoft Windows Professional with Update 818043, or Windows 2003 Enterprise Edition, or Windows 2003 Standard Edition. Microsoft Windows Internet Explorer Version 6.0 (SP1 or later). Sun Java™ Version 1.4.2 or higher.38 IBM TotalStorage SAN File System
• 66. Antivirus software is recommended.

Additional software for the Master Console is shipped with the SAN File System software package, as described in 2.5.6, “Master Console” on page 45.

2.5.3 SAN File System software

SAN File System software (5765-FS2) is required licensed software for SAN File System. This includes the SAN File System code itself and the client software packages to be installed on the appropriate servers, which will gain access to the SAN File System global namespace. These servers are then known as SAN File System clients. The SAN File System software bundle consists of three components:
Software that runs on each SAN File System MDS
Software that runs on your application servers, called the SAN File System Client software
Optional software that is installed on the Master Console, if used

2.5.4 Supported storage for SAN File System

SAN-attached storage is required for both metadata volumes as well as user volumes. Supported storage subsystems for metadata volumes (at the time of writing) are listed in Table 2-1.

Table 2-1 Storage subsystem platforms supported for metadata LUNs
Storage platform | Models supported | Driver and microcode | Mixed operating system access?
ESS | 2105-F20, 2105-750, 2105-800 | SDD v1.6.0.1-6 | Yes
DS4000 / FAStT | 4100/100, 4300/600, 4400/700, 4500/900 (that is, all except for DS4800) | RDAC v09.00.x for the Linux v2.4 or v2.6 kernel | No
DS6000 | All | SDD v1.6.0.1-6 | Yes
DS8000 | All | SDD v1.6.0.1-6 | Yes
SVC (SLES8 only) | 2145 v2.1.x | SDD v1.6.0.1-6 | Yes
SVC for Cisco MDS9000 | v1.1.8 | SDD v1.6.0.1-6 | Yes

Note this information can change at any time; the latest information about specific supported storage, including device driver levels and microcode, is at this Web site. Please check it before starting your SAN File System installation:
http://www.ibm.com/storage/support/sanfs

Metadata volume considerations

Metadata volumes should be configured using RAID, with a low ratio of data to parity disks. Hot spares should also be available, to minimize the amount of time to recover from a single disk failure.
  • 67. User volumes SAN File System can be configured with any SAN storage device for the user data storage, providing it is supported by the operating systems running the SAN File System client (including having a compatible HBA) and that it conforms to the SCSI standard for unique device identification. SAN File System also supports storage devices for user data storage attached through iSCSI. The iSCSI attached storage devices must conform to the SCSI standard for unique device identification and must be supported by the SAN File System client operating systems. Consult your storage system’s documentation or the vendor to see if it meets these requirements. Note: Only IBM storage subsystems are supported for the system (metadata) storage pool. SAN File System supports an unlimited number of LUNs for user data storage. The amount of user data storage that you can have in your environment is determined by the amount of storage that is supported by the storage subsystems and the client operating systems. In the following sections, SAN File System hardware and logical components are described in detail.2.5.5 SAN File System engines Within SAN File System, an engine is the physical hardware on which a MDS and an Administrative server runs. SAN File System supports any number from two to eight engines. Increasing the number of engines increases metadata traffic capacity and can provide higher availability to the configuration. Note: Although you cannot configure an initial SAN File System with only one engine, you can run a single-engine system if all of the other engines fail (for example, if you have only two engines and one of them fails), or if you want to bring down all of the engines except one before performing scheduled maintenance tasks. Performance would obviously be impacted in this case, but these scenarios are supported and workable, on a temporary basis. The administrative infrastructure on each engine allows an administrator to monitor and control SAN File System from a standard Web browser or an administrative command line interface. The two major components of the infrastructure are an Administrative agent, which provides access to administrative operations, and a Web server that is bundled with the console services and servlets that render HTML for the administrative browsers. The infrastructure also includes a Service Location Protocol (SLP) daemon, which is used for administrative discovery of SAN File System resources by third-party Common Information Model (CIM) agents. An administrator can use the SAN File System Console, which is the browser-based user interface, or administrative commands (CLI) to monitor and control an engine from anywhere with a TCP/IP connection to the cluster. This is in contrast to the SAN Volume Controller Console, which uses the Master Console for administrative functions. Metadata server A Metadata server (MDS) is a software server that runs on a SAN File System engine and performs metadata, administrative, and storage management services. In a SAN File System40 IBM TotalStorage SAN File System
  • 68. server cluster, there is one master MDS and one or more subordinate MDSs, each running ona separate engine in the cluster. Together, these MDSs provide clients with shared, coherentaccess to the SAN File System global namespace.All of the servers, including the master MDS, share the workload of the SAN File Systemglobal namespace. Each is responsible for providing metadata and locks to clients for filesetsthat are hosted by that MDS. Each MDS knows which filesets are hosted by each particularMDS, and when contacted by a client, can direct the client to the appropriate MDS. Theymanage distributed locks to ensure the integrity of all of the data within the globalnamespace. Note: Filesets are subsets of the entire global namespace and serve to organize the namespace for all the clients. A fileset serves as the unit of workload for the MDS; each MDS has a workload assigned of some of the filesets. From a client perspective, a fileset appears as a regular directory or folder, in which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories at which filesets are attached.In addition to providing metadata to clients and managing locks, MDSs perform a wide varietyof other tasks. They process requests issued by administrators to create and manage filesets,storage pools, volumes, and policies, they enforce the policies defined by administrators toplace files in appropriate storage pools, and they send alerts when any thresholdsestablished for filesets and storage pools are exceeded.Performing metadata servicesThere are two types of metadata: File metadata: This is information needed by the clients in order to access files directly from storage devices on a Storage Area Network. File metadata includes permissions, owner and group, access time, creation time, and other file characteristics, as well as the location of the file on the storage. System metadata: This is metadata used by the system itself. System metadata includes information about filesets, storage pools, volumes, and policies. The MDSs perform the reads and writes required to create, distribute, and manage this information.The metadata is stored and managed in a separate system storage pool that is onlyaccessible by the MDS in a server cluster.Distributing locks to clients involves the following operations: Issuing leases that determine the length of time that a server guarantees the locks it grants to clients. Granting locks to clients that allow them shared or exclusive access to files or parts of files. These locks are semi-preemptible, which means that if a client does not contact the server within the lease period, the server can “steal” the client’s locks and grant them to other clients if requested; otherwise, the client can reassert its locks (get its locks back) when it can make contact, thereby inter-locking the connection again. Providing a grace period during which a client can reassert its locks before other clients can obtain new locks if the server itself goes down and then comes back online. Chapter 2. SAN File System overview 41
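The lease-and-lock behaviour described above can be made concrete with a small model: a granted lock is honoured only while its holder's lease is current, and a lock whose holder has let its lease lapse can be granted to another client. This is a simplified conceptual sketch, not the SAN File System locking protocol; the lease length and all names are assumptions, and the grace period after a server restart is omitted for brevity.

import time

LEASE_SECONDS = 30     # assumed lease length for this sketch

class LockManager:
    def __init__(self):
        self.leases = {}   # client -> time the lease was last renewed
        self.locks = {}    # file path -> client currently holding the lock

    def renew_lease(self, client):
        self.leases[client] = time.time()

    def lease_valid(self, client):
        return time.time() - self.leases.get(client, 0) < LEASE_SECONDS

    def grant_lock(self, client, path):
        holder = self.locks.get(path)
        # A lock whose holder's lease has lapsed is "stolen" and granted to the requester.
        if holder is None or not self.lease_valid(holder):
            self.locks[path] = client
            return True
        return holder == client    # a current holder simply reasserts its own lock

mgr = LockManager()
mgr.renew_lease("clientA")
print(mgr.grant_lock("clientA", "/sfs/fileset1/data.db"))   # -> True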
  • 69. Performing administrative services An MDS processes the requests from administrators (issued from the SAN File System console or CLI) to perform the following types of tasks: Create and manage filesets, which are subsets of the entire global namespace and serve as the units of workload assigned to specific MDSs. Receive requests to create and manage volumes, which are LUNs labeled for SAN File System’s use in storage pools. Create and maintain storage pools (for example, an administrator can create a storage pool that consists of RAID or striped storage devices to meet reliability requirements, and can create a storage pool that consists of random or sequential access or low-latency storage devices to meet high performance requirements). Manually move files between storage pools, and defragment files in storage pools. Create FlashCopy images of filesets in the global namespace that can be used to make file-based backups easier to perform. Define policies containing rules for placement of files in storage pools. Define policies that define the automatic background movement of files among storage pools and the background deletion of files. Performing storage management services An MDS performs these storage management services: Manages allocation of blocks of space for files in storage pool volumes. Maintains pointers to the data blocks of a file. Evaluates the rules in the active policy and manages the placement of files in specific storage pools based on those rules. Issues alerts when filesets and storage pools reach or exceed their administrator-specified thresholds, or returns out-of-space messages if they run out of space. Administrative server Figure 2-2 on page 43 shows the overall administrative interface structure of SAN File System.42 IBM TotalStorage SAN File System
  • 70. Figure 2-2 SAN File System administrative structure (the diagram shows the administrative clients — a Web browser GUI client, ssh CLI access, and third-party CIM clients — and the SAN File System clients with their installable/virtual file systems, all reaching the server cluster across the customer network; within each engine sit the GUI Web server, the CLI client (sfscli), the Admin Agent (CIM), the Metadata server, and the RSA card, with the optional Master Console, IBM Director call-home/remote support, and either an LDAP server or local authentication alongside) The SAN File System Administrative server, which is based on a Web server software platform, is made up of two parts: the GUI Web server and the Administrative Agent. Chapter 2. SAN File System overview 43
  • 71. The GUI Web server is the part of the administrative infrastructure that interacts with the SAN File System MDSs and renders the Web pages that make up the SAN File System Console. The Console is a Web-based user interface, either Internet Explorer or Netscape. Figure 2-3 shows the GUI browser interface for the SAN File System. Figure 2-3 SAN File System GUI browser interface The Administrative Agent implements all of the management logic for the GUI, CLI, and CIM interfaces, as well as performing administrative authorization/authentication against the LDAP server. The Administrative Agent processes all management requests initiated by an administrator from the SAN File System console, as well as requests initiated from the SAN File System administrative CLI, which is called sfscli. The Agent communicates with the MDS, the operating system, the Remote Supervisor Adapter (RSA II) card in the engine, the LDAP, and Administrative Agents on other engines in the cluster when processing requests. Example 2-1 shows all the commands available with sfscli. Example 2-1 The sfscli commands for V2.2.2 itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli sfscli> help activatevol lsadmuser mkvol setfilesetserver addprivclient lsautorestart mvfile setoutput addserver lsclient quiescecluster settrap addsnmpmgr lsdomain quit startautorestart attachfileset lsdrfile rediscoverluns startcluster autofilesetserver lsfileset refreshusermap startmetadatacheck builddrscript lsimage reportclient startserver catlog lslun reportfilesetuse statcluster catpolicy lspolicy reportvolfiles statfile chclusterconfig lspool resetadmuser statfileset chdomain lsproc resumecluster statldap chfileset lsserver reverttoimage statpolicy chldapconfig lssnmpmgr rmdomain statserver44 IBM TotalStorage SAN File System
  • 72. chpool lstrapsetting rmdrfile stopautorestart chvol lsusermap rmfileset stopcluster clearlog lsvol rmimage stopmetadatacheck collectdiag mkdomain rmpolicy stopserver detachfileset mkdrfile rmpool suspendvol disabledefaultpool mkfileset rmprivclient upgradecluster dropserver mkimage rmsnmpmgr usepolicy exit mkpolicy rmusermap expandvol mkpool rmvol help mkusermap setdefaultpool sfscli> itso3@tank-mds3:/usr/tank/admin/bin> An Administrative server interacts with a SAN File System MDS through an intermediary, called the Common Information Model (CIM) agent. When a user issues a request, the CIM agent checks with an LDAP server, which must be installed in the environment, to authenticate the user ID and password and to verify whether the user has the authority (is assigned the appropriate role) to issue a particular request. After authenticating the user, the CIM agent interacts with the MDS on behalf of that user to process the request. This same system of authentication and interaction is also available to third-party CIM clients to manage SAN File System.2.5.6 Master Console The Master Console software is designed to provide a unified point of service for the entire SAN File System cluster, simplifying service to the MDSs. It makes a Virtual Private Network (VPN) connection readily available that you can initiate and monitor to enable hands-on access by remote IBM support personnel. It also provides a common point of residence for the IBM TotalStorage TPC for Fabric, IBM Director, and other tools associated with the capabilities just described, and can act as a central repository for diagnostic data. It is optional (that is, not required) to install a Master Console in a SAN File System configuration. If deployed, the Master Console hardware is customer-supplied and must meet the specifications listed in 2.5.2, “Master Console hardware and software” on page 38. The Master Console supported by the SAN File System is the same as that used for the IBM TotalStorage SAN Volume Controller (SVC) and IBM TotalStorage SAN Integration Server (SIS), so if there is already one in the client environment, it can be shared with the SAN File System. The Master Console software package includes the following software, which must be installed on it, if deployed: Adobe Acrobat Reader DB2® DS4000 Storage Manager Client IBM Director PuTTY SAN Volume Controller Console 6 Tivoli Storage Area Network Manager IBM VPN Connection Manager From the Master Console, the user can access the following components: SAN File System console, through a Web browser. Administrative command-line interface, through a Secure Shell (SSH) session. Any of the engines in the SAN File System cluster, through an SSH session. Chapter 2. SAN File System overview 45
  • 73. The RSA II card for any of the engines in the SAN File System cluster, through a Web browser. In addition, the user can use the RSA II Web interface to establish a remote console to the engine, allowing the user to view the engine desktop from the Master Console. Any of the SAN File System clients, through an SSH session, a telnet session, or a remote display emulation package, depending on the configuration of the client. Remote access Remote Access support is the ability for IBM support personnel who are not located on a user’s premises to assist an administrator or a local field engineer in diagnosing and repairing failures on a SAN File System engine. Remote Access support can help to greatly reduce service costs and shorten repair times, which in turn will reduce the impact of any SAN File System failures on business. Remote Access provides a support engineer with full access to the SAN File System console, after a request initiated by the customer. The access is via a secure VPN connection, using IBM VPN Connection Manager. This allows the support engineer to query and control the SAN File System MDS and to access metadata, log, dump, and configuration data, using the CLI. While the support engineer is accessing the SAN File System, the customer is able to monitor their progress via the Master Console display.2.5.7 Global namespace In most file systems, a typical file hierarchy is represented as a series of folders or directories that form a tree-like structure. Each folder or directory could contain many other folders or directories, file objects, or other file system objects, such as symbolic links or hard links. Every file system object has a name associated with it, and it is represented in the namespace as a node of the tree. SAN File System introduces a new file system object, called a fileset. A fileset can be viewed as a portion of the tree-structured hierarchy (or global namespace). It is created to divide the global namespace into a logical, organized structure. Filesets attach to other directories in the hierarchy, ultimately attaching through the hierarchy to the root of the SAN File System cluster mount point. The collection of filesets and its content in SAN File System combine to form the global namespace. Fileset boundaries are not visible to the clients. Only a SAN File System administrator can see them. From a client’s perspective, a fileset appears as a regular directory or folder within which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories to which filesets are attached. The global namespace is the key to the SAN File System. It allows common access to all files and directories by all clients if required, and ensures that the SAN File System clients have both consistent access and a consistent view of the data and files managed by SAN File System. This reduces the need to store and manage duplicate copies of data, and simplifies the backup process. Of course, security mechanisms, such as permissions and ACLs, will restrict visibility of files and directories. In addition, access to specific storage pools and filesets can be restricted by the use of non-uniform SAN File System configurations, as described in 3.3.2, “Non-uniform SAN File System configuration” on page 69. How the global namespace is organized The global namespace is organized into filesets, and each fileset is potentially available to the client-accessible global namespace at its attach point. 
An administrator is responsible for creating filesets and attaching them to directories in the global namespace, which can be done at multiple levels. Figure 2-4 on page 47 shows a sample global namespace. An attach point appears to a SAN File System client as a directory in which it can create files and 46 IBM TotalStorage SAN File System
  • 74. folders (permissions permitting). From the MDS perspective, the filesets allow the metadata workload to be split between all the servers in the cluster. Note: Filesets can be organized in any way desired, to reflect enterprise needs. SAN File System / ROOT (Default Fileset) (Additional Filesets) /HR /Finance /CRM /Manufacturing Figure 2-4 Global namespace For example, the root fileset (for example, ROOT) is attached to the root level in the namespace hierarchy (for example, sanfs), and the filesets are attached below it (that is, HR, Finance, CRM, and Manufacturing). The client would simply see four subdirectories under the root directory of the SAN File System. By defining the path of a fileset’s attach point, the administrator also automatically defines its nesting level in relationship to the other filesets.2.5.8 Filesets A fileset is a subset of the entire SAN File System global namespace. It serves as the unit of workload for each MDS, and also dictates the overall organizational structure for the global namespace. It is also a mechanism for controlling the amount of space occupied by SAN File System clients. Filesets can be created based on workflow patterns, security, or backup considerations, for example. You might want to create a fileset for all the files used by a specific application, or associated with a specific client. The fileset is used not only for managing the storage space, but also as the unit for creating FlashCopy images (see 2.5.12, “FlashCopy” on page 58). Correctly defined filesets mean that you can take a FlashCopy image for all the files in a fileset together in a single operation, thus providing a consistent image for all of those files. A key part of SAN File System design is organizing the global namespace into filesets that match the data management model of the enterprise. Filesets can also be used as a criteria in placement of individual files within the SAN File System (see 2.5.10, “Policy based storage and data management” on page 49). Tip: Filesets are assigned to a MDS either statically (that is, by specifying a MDS to serve the fileset when it is created), or dynamically. If dynamic assignment is chosen, automatic simple load balancing will be done. If using static fileset assignment, consider the overall I/O loads on the SAN File System cluster. Since each fileset is assigned to one (and only one) MDS at a time, for serving the metadata, you will want to balance the load across all MDS in the cluster, by assigning filesets appropriately. More information about filesets is given in 7.5, “Filesets” on page 286. Chapter 2. SAN File System overview 47
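To make the fileset tasks above more concrete, the sketch below shows what creating, attaching, and statically assigning a fileset might look like from the administrative CLI. The command names (mkfileset, attachfileset, setfilesetserver, lsfileset) are taken from the sfscli command list shown earlier in this chapter, but the option names, the fileset name HR, and the MDS names are illustrative assumptions only — check the sfscli help output for the exact syntax in your release.

sfscli> mkfileset -server mds1 HR          # create fileset HR and assign it statically to MDS mds1 (options assumed)
sfscli> attachfileset -fileset HR /HR      # attach the fileset directly below the root of the global namespace (assumed syntax)
sfscli> setfilesetserver -server mds2 HR   # move the fileset workload to another MDS if the cluster load needs rebalancing (assumed syntax)
sfscli> lsfileset                          # list the filesets and the MDS currently serving each one

In the dynamic case, no MDS would be named at creation time and the cluster would balance fileset assignment automatically, as described in the Tip above.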
  • 75. An administrator creates filesets and attaches them at specific locations below the global fileset. An administrator can also attach a fileset to another fileset. When a fileset is attached to another fileset, it is called a nested fileset. In Figure 2-5, fileset1 and fileset2 are the nested filesets of parent fileset Winfiles. Note: In general, we do not recommend creating nested filesets; see 7.5.2, “Nested filesets” on page 289 for the reasons why. / ( ROOT ) /HR /UNIXfiles /Winfiles /Manufacturing (filesets) fileset1 fileset2 (nested filesets) Figure 2-5 Filesets and nested filesets Here we have shown several filesets, including filesets called UNIXfiles and Winfiles. We recommend separating filesets by their “primary allegiance” of the operating system. This will facilitate file sharing (see “Sharing files” on page 54 for more information). Separation of filesets also facilitates backup, since if you are using file-based backup methods (for example, tar, Windows Backup vendor products like VERITAS NetBackup, or IBM Tivoli Storage Manager), full metadata attributes of Windows files can only be backed up from a Windows backup client, and full metadata attributes of UNIX files can only be backed up from a UNIX backup client. See Chapter 12, “Protecting the SAN File System environment” on page 477 for more information. When creating a fileset, an administrator can specify a maximum size for the fileset (called a quota) and specify whether SAN File System should generate an alert if the size of the fileset reaches or exceeds a specified percentage of the maximum size (called a threshold). For example, if the quota on the fileset was set at 100 GB, and the threshold was 80%, an alert would be raised once the fileset contained 80 GB of data. The action taken when the fileset reaches its quota size (100 GB in this instance) depends on whether the quota is defined as hard or soft. If a hard quota is used, once the threshold is reached, any further requests from a client to add more space to the fileset (by creating or extending files) will be denied. If a soft quota is used, which is the default, more space can be allocated, but alerts will continue to be sent. Of course, once the amount of physical storage available to SAN File System is exceeded, no more space can be used. The quota limit, threshold, and quota type can be set differently and individually for each fileset.2.5.9 Storage pools A storage pool is a collection of SAN File System volumes that can be used to store either metadata or file data. A storage pool consists of one or more volumes (LUNs from the back-end storage system perspective) that provide, for example, a desired quality of service for a specific use, such as to store all files for a particular application. An administrator must assign one or more volumes to a storage pool before it can be used.48 IBM TotalStorage SAN File System
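As a brief illustration of how pools and volumes come together, the following sfscli sketch creates a user storage pool and labels LUNs as volumes within it. The command names (mkpool, mkvol, lspool, lsvol) appear in the command list earlier in this chapter; the option names and the device identifier shown are assumptions for illustration, not exact syntax.

sfscli> mkpool UserPool1                      # create a new user storage pool (options assumed)
sfscli> mkvol -pool UserPool1 <LUN device>    # label a data LUN as a SAN File System volume and add it to the pool (assumed syntax)
sfscli> lspool                                # list the pools with their capacity and thresholds
sfscli> lsvol -pool UserPool1                 # list the volumes now backing the pool

The same pattern applies when a pool needs to grow later: additional LUNs are simply labeled as volumes in the same pool.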
  • 76. SAN File System has two types of storage pools (System and User), as shown in Figure 2-6. SAN File System System User User User Pool Pool1 Pool2 Pool3 Default User Pool Figure 2-6 SAN File System storage pools System Pool The System Pool contains the system metadata (system attributes, configuration information, and MDS state) and file metadata (file attributes and locations) that is accessible to all MDSs in the server cluster. There is only one System Pool, which is created automatically when SAN File System is installed with one or more volumes specified as a parameter to the install process. The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended. The RAID configuration should have a low ratio of data to parity disks, and hot spares should also be available, to minimize the amount of time to recover from a single disk failure. Remote mirroring solutions, such as MetroMirror, available on the IBM TotalStorage SAN Volume Controller, DS6000, and DS8000, are also possible. User Pools User Pools contain the blocks of data that make up user files. Administrators can create one or more user storage pools, and then create policies containing rules that cause the MDS servers to store data for specific files in the appropriate storage pools. A special User Pool is the default User Pool. This is used to store the data for a file if the file is not assigned to a specific storage pool by a rule in the active file placement policy. One User Pool, which is automatically designated the default User Pool, is created when SAN File System is installed. This can be changed by creating another User Pool and setting it to the default User Pool. The default pool can also be disabled if required.2.5.10 Policy based storage and data management SAN File System provides automatic file placement, at the time of creation, through the use of polices and storage pools. An administrator can create quality-of-service storage pools that are available to all users, and define rules in file placement policies that cause newly created files to be placed in the appropriate storage pools automatically. SAN File System also provides file lifecycle management through the use of file management policies. File placement policy A file placement policy is a list of rules that determines where the data for specific files is stored. A rule is an SQL-like statement that tells a SAN File System MDS to place the data for a file in a specific storage pool if the file attribute that the rule specifies meets a particular condition. A rule can apply to any file being created, or only to files being created within a specific fileset, depending on how it is defined. Chapter 2. SAN File System overview 49
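The exact rule grammar is defined in the SAN File System documentation, but as a hedged sketch of the SQL-like style, a policy that sends everything created in the HR fileset to one user pool and newly created .bak files to another might read roughly as follows (keywords, quoting, and rule names are assumptions for illustration only):

rule 'hrFiles' set stgpool 'UserPool1' for fileset ('HR')
rule 'backups' set stgpool 'UserPool4' where NAME like '%.bak'

A policy file containing such rules would be loaded and activated with the administrative CLI (the mkpolicy, usepolicy, and catpolicy commands in the sfscli list shown earlier), after which the MDSs evaluate the rules for every newly created file.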
  • 77. A storage pool is a named set of storage volumes that can be specified as the destination for files in rules. Only User Pools are used to store file data. The rules in a file placement policy are processed in order until the condition in one of the rules is met. The data for the files is then stored in the specified storage pool. If none of the conditions specified in the rules of the policy is met, the data for the file is stored in the default storage pool. Figure 2-7 shows an example of how file placement policies work. The yellow box shows a sequence of rules defined in the policy. Underneath each storage pool is a list of some files that will be placed in it, according to the policy. For example, the file /HR/dsn.bak matches the first rule (put all files in the fileset /HR into User Pool 1) and is therefore put into User Pool 1. The fact that it also matches the second rule is irrelevant, because only the first matching rule is applied. See 7.8, “File placement policy” on page 304 for more information. / File Name Fileset Rules for File Placement File Type /HR go into User Pool 1 *.bak go into User Pool 4 /HR /Finance /CRM /Manufacturing DB2.* go into User Pool 2 *.tmp go into User Pool 3 SAN File System User User User User Pool 1 Pool 2 Pool 3 Pool 4 /HR/dsn1.txt /CRM/DB2.pgm /CRM/dsn3.tmp /CRM/dsn2.bak /HR/DB2.pgm /Finance/DB2.tmp /Finance/dsn4.bak /HR/dsn1.bak Figure 2-7 File placement policy execution The file placement policy can also optionally contain preallocation rules. These rules, available with SAN File System V2.2.2, allow a system administrator to automatically preallocate space for designated files, which can improve performance. See 7.8.7, “File storage preallocation” on page 324 for more information about preallocation. File management policy and lifecycle management SAN File System Version 2.2 introduced a lifecycle management function. This allows administrators to specify how files should be automatically moved among storage pools during their lifetime, and, optionally, specify when files should be deleted. The business value of this feature is that it improves storage space utilization, allowing a balanced use of premium and inexpensive storage matching the objectives of the enterprise. For example, an enterprise may have two types of storage devices; one that has higher speed, reliability, and cost, and one that has lower speed, reliability, and cost. Lifecycle management in SAN File System could be used to automatically move infrequently accessed files from the more50 IBM TotalStorage SAN File System
  • 78. expensive storage to cheaper storage, or vice versa, for more critical files. Lifecycle management reduces the manual intervention necessary in managing space utilization and therefore also reduces the cost of management. Lifecycle management is set up via file management policies. A file management policy is a set of rules controlling the movement of files among different storage pools. Rules are of two types: migration and deletion. A migration rule will cause matching files to be moved from one storage pool to another. A deletion rule will cause matching files to be deleted from the SAN File System global namespace. Migration and deletion rules can be specified based on pool, fileset, last access date, or size criteria. The system administrator defines these rules in a file management policy, then runs a special script to act on the rules. The script can be run in a planning mode to determine in advance what files would be migrated/deleted by the script. The plan can optionally be edited by the administrator, and then passed back for execution by the script so that the selected files are actually migrated or deleted. For more information, see Chapter 10, “File movement and lifecycle management” on page 435.2.5.11 Clients SAN File System is based on a client-server design. A SAN File System client is a computer that accesses and creates data that is stored in the SAN File System global namespace. The SAN File System is designed to support the local file system interfaces on UNIX, Linux, and Windows servers. This means that the SAN File System is designed to be used without requiring any changes to your applications or databases that use a file system to store data. The SAN File System client for AIX, Sun Solaris, Red Hat, and SUSE Linux use the virtual file system interface within the local operating system to provide file system interfaces to the applications running on AIX, Sun Solaris, Red Hat, and SUSE Linux. The SAN File System client for Microsoft Windows (supported Windows 2000 and 2003 editions) uses the installable file system interface within the local operating system to provide file system interfaces to the applications. Clients access metadata (such as a files location on a storage device) only through a MDS, and then access data directly from storage devices attached to the SAN. This method of data access eliminates server bottlenecks and provides read and write performance that is comparable to that of file systems built on bus-attached, high-performance storage. SAN File System currently supports clients that run these operating systems: AIX 5L Version 5.1 (32-bit uniprocessor or multiprocessor). The bos.up or bos.mp packages must be at level 5.1.0.58, plus APAR IY50330 or higher. AIX 5L Version 5.2 (32-bit and 64-bit). The bos.up package must be at level 5.2.0.18 or later. The bos.mp package must be at level 5.2.0.18 or later. APAR IY50331 or higher is required. AIX 5L Version 5.3 (32-bit or 64-bit). Windows 2000 Server and Windows 2000 Advanced Server with Service Pack 4 or later. Windows 2003 Server Standard and Enterprise Editions with Service Pack 1 or later. VMWare ESX 2.0.1 running Windows only. Red Hat Enterprise Linux 3.0 AS, ES, and WS, with U2 kernel 2.4.21-15.0.3 hugemem, smp or U4 kernel 2.4.21-27 hugemem, and smp on x86 systems. Chapter 2. SAN File System overview 51
  • 79. SUSE Linux Enterprise Server 8.0 on kernel level 2.4.21-231 (Service Pack 3) kernel level 2.4.21-278 (Service Pack 4) on x86 servers (32-bit). SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on pSeries (64-bit). SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on zSeries (31-bit). Sun Solaris 9 (64-bit) on SPARC servers. Note: The AIX client is supported on pSeries systems with a maximum of eight processors. The Red Hat client is supported on either the SMP or Hugemem kernel, with a maximum of 4 GB of main memory. The zSeries SUSE 8 SAN File System client uses the zFCP driver and supports access to ESS, DS6000, and DS8000 for user LUNs. SAN File System client software must be installed on each AIX, Windows, Solaris, SUSE, or Red Hat client. On an AIX, Linux, and Solaris client, the software is a virtual file system (VFS), and on a Windows client, it is an installable file system (IFS). The VFS and IFS provide clients with local access to the global namespace on the SAN. Note that clients can also act as servers to a broader clientele. They can provide NFS or CIFS access to the global namespace to LAN-attached clients and can host applications such as database servers. A VFS is a subsystem of an AIX/Linux/Solaris client’s virtual file system layer, and an IFS is a subsystem of a Windows client’s file system. The SAN File System VFS or IFS directs all metadata operations to an MDS and all data operations to storage devices attached to a SAN. The SAN File System VFS or IFS provides the metadata to the clients operating system and any applications running on the client. The metadata looks identical to metadata read from a native, locally attached file system, that is, it emulates the local file system semantics. Therefore, no change is necessary to the client applications access methods to use SAN File System. When the global namespace is mounted on an AIX/Linux/Solaris client, it looks like a local file system. When the global namespace is mounted on a Windows client, it appears as another drive letter and looks like an NTFS file system. Files can therefore be shared between Windows and UNIX clients (permissions and suitable applications permitting). Clustering SAN File System V2.2.2 supports clustering software running on AIX, Solaris, and Microsoft clients. AIX clients HACMP™ is supported on SAN File System clients running AIX 5L V5.1, V5.2, and V5.3, when the appropriate maintenance levels are installed. Solaris clients Solaris client clustering is supported when used with Sun Cluster V3.1. Sun clustered applications can use SAN File System provided that the SAN File System is declared to the cluster manager as a Global File System. Likewise, non-clustered applications are supported when Sun Cluster is present on the client. Sun Clusters can also be used as an NFS server, as the NFS service will fail over using local IP connectivity.52 IBM TotalStorage SAN File System
  • 80. Microsoft clientsMicrosoft client clustering is supported for Windows 2000 and Windows 2003 clients withMSCS (Microsoft Cluster Server), using a maximum of two client nodes per cluster.Caching metadata, locks, and dataCaching allows a client to achieve low-latency access to both metadata and data. A client cancache metadata to perform multiple metadata reads locally. The metadata includes mappingof logical file system data to physical addresses on storage devices attached to a SAN.A client can also cache locks to allow the client to grant multiple opens to a file locally withouthaving to contact a MDS for each operation that requires a lock.In addition, a client can cache data for small files to eliminate I/O operations to storagedevices attached to a SAN. A client performs all data caching in memory. Note that if there isnot enough space in the client’s cache for all of the data in a file, the client simply reads thedata from the shared storage device on which the file is stored. Data access is still fastbecause the client has direct access to all storage devices attached to a SAN.Using the direct I/O modeSome applications, such as database management systems, use their own sophisticatedcache management systems. For such applications, SAN File System provides a direct I/Omode. In this mode, SAN File System performs direct writes to disk, and bypasses local filesystem caching. Using the direct I/O mode makes files behave more like raw devices. Thisgives database systems direct control over their I/O operations, while still providing theadvantages of SAN File System, such as SAN File System FlashCopy. Applications need tobe aware of (and configured for) direct I/O. IBM DB2 UDB supports direct I/O (see 14.5,“Direct I/O support” on page 558 for more information).On the Intel Linux (IA32) releases supported with the SAN File System V2.2.2 client, supportis provided for the POSIX direct I/O file system interface calls.Virtual I/OThe SAN File System 2.2.2 client for AIX 5L V5.3 will interoperate with Virtual I/O (VIO)devices. VIO enables virtualization of storage across LPARs in a single POWER5™ system.SAN File System support for VIO enables SAN File System clients to use data volumes thatcan be accessed through VIO. In addition, all other SAN File System clients will interoperatecorrectly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients.Version 1.2.0.0 of VIO is supported by SAN File System. Restriction: SAN File System does not support the use of Physical Volume Identifier (PVID) in order to export a LUN/physical volume (for example, hdisk4) on a VIO Server. To list devices with a PVID, type lspv. If the second column has a value of none, the physical volume does not have a PVID.For a description of driver configurations that require the creation of a volume label, see“What are some of the restrictions and limitations in the VIOS environment?” on the VIOSWeb site at: http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/faq.html Chapter 2. SAN File System overview 53
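To check for the PVID condition described in the restriction above, the standard AIX lspv command is used on the VIO Server. The output below is illustrative only — device names, PVIDs, and volume group assignments will differ in a real environment:

$ lspv
hdisk0    00c478de09a3f2b1    rootvg    active
hdisk4    none                None

In this hypothetical listing, hdisk0 (an internal disk) carries a PVID, while hdisk4 shows none in the second column and therefore has no PVID assigned.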
  • 81. Sharing files In a homogenous environment (either all UNIX or all Windows clients), SAN File System provides access and semantics that are customized for the operating system running on the clients. When files are created and accessed from only Windows clients, all the security features of Windows are available and enforced. When files are created and accessed from only UNIX clients, all the security features of UNIX are available and enforced. In Version 2.2 of SAN File System (and beyond), the heterogenous file sharing feature improves the flexibility and security involved in sharing files between Windows and UNIX based environments. The administrator defines and manages a set of user map entries using the CLI or GUI, which specifies a UNIX domain-qualified user and a Windows domain-qualified user that are to be treated as equivalent for the purpose of validating file access permissions. Once these mappings are defined, the SAN File System automatically accesses the Active Directory Sever (Windows) and either LDAP or Network Information Service (NIS) on UNIX to cross-reference the user ID and group membership. See 8.3, “Advanced heterogeneous file sharing” on page 347 for more information about heterogenous file sharing. If no user mappings are defined, then heterogeneous file sharing (where there are both UNIX and Windows clients) is handled in a restricted manner. When files created on a UNIX client are accessed by a non-mapped user on a Windows client, the access available will be the same as those granted by the “Other” permission bits in UNIX. Similarly, when files created on a Windows client are accessed on a non-mapped user on a UNIX client, the access available is the same as that granted to the “Everyone” user group in Windows. If the improved heterogenous file sharing capabilities (user mappings) are not implemented by the administrator, then file sharing is positioned primarily for homogenous environments. The ability to share files heterogeneously is recommended for read-only, that is, create files on one platform, and provide read-only access on the other platform. To this end, filesets should be established so that they have a “primary allegiance”. This means that certain filesets will have files created in them only by Windows clients, and other filesets will have files created in them only by UNIX clients. How clients access the global namespace SAN File System clients mount the global namespace onto their systems. After the global namespace is mounted on a client, users and applications can use it just as they do any other file system to access data and to create, update, and delete directories and files. On a UNIX-based client (including AIX, Solaris, and Linux), the global namespace looks like a local UNIX file system. On a Windows client, it appears as another driver letter and looks like any other local NTFS file system. Basically, the global namespace looks and acts like any other file system on a client’s system. There are some restrictions on NTFS features supported by SAN File System (see “Windows client restrictions” on page 56). Figure 2-8 on page 55 shows the My Computer view from a Windows 2000 client: The S: drive (labelled sanfs) is the attach point of the SAN File System. A Windows 2003 client will see a similar display.54 IBM TotalStorage SAN File System
  • 82. Figure 2-8 Windows 2000 client view of SAN File System If we expand the S: drive in Windows Explorer, we can see the directories underneath (Figure 2-9 shows this view). There are a number of filesets available, including the root fileset (top level) and two filesets under the root (USERS and userhomes). However, clients are not aware of this; they simply see the filesets as regular folders. The hidden directory, .flashcopy, is part of the fileset and is used to store FlashCopy images of the fileset. More information about FlashCopy is given in 2.5.12, “FlashCopy” on page 58 and 9.1, “SAN File System FlashCopy” on page 376. Figure 2-9 Exploring the SAN File System from a Windows 2000 client Chapter 2. SAN File System overview 55
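Because every client mounts the same global namespace, a given file is reached through equivalent paths on each platform. As a purely hypothetical illustration (the directory and file names are invented), the same file might appear as follows on the two client types:

UNIX client:     /sfs/sanfs/USERS/reports/budget.xls
Windows client:  S:\USERS\reports\budget.xls

Only the mount point differs; the fileset and directory structure underneath is identical, which is what allows files to be shared across platforms.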
  • 83. Example 2-2 shows the AIX mount point for the SAN File System, namely SANFS. It is mounted on the directory /sfs. Other UNIX-based clients see a similar output from the df command. A listing of the SAN File System namespace base directory shows the same directory or folder names as in the Windows output. The key thing here is that all SAN File System clients, whether Windows or UNIX, will see essentially the same view of the global namespace. Example 2-2 AIX /UNIX mount point of the SAN file system Rome:/ >df Filesystem 512-blocks Free %Used Iused %Iused Mounted on /dev/hd4 65536 46680 29% 1433 9% / /dev/hd2 1310720 73752 95% 21281 13% /usr /dev/hd9var 65536 52720 20% 455 6% /var /dev/hd3 131072 103728 21% 59 1% /tmp /dev/hd1 65536 63368 4% 18 1% /home /proc - - - - - /proc /dev/hd10opt 65536 53312 19% 291 4% /opt /dev/lv00 4063232 1648688 60% 657 1% /usr/sys/inst.images SANFS 603095040 591331328 2% 1 1% /sfs Rome:/ > cd /sfs/sanfs Rome:/ > ls .flashcopy aix51 aixfiles axi51 files lixfiles lost+found smallwin testdir tmp userhomes USERS winfiles winhome Some client restrictions There are certain restrictions in the current release for SAN File System clients. Use of MBCS Multi-byte characters (MBCS) can now be used (from V2.2 onwards) in pattern matching in file placement policies and for fileset attach point directories. MBCS are not supported in the names of storage pools and filesets. Likewise, MBCS cannot be used in the SAN File System cluster name, which appears in the namespace as the root fileset attach point directory name (for example, /sanfs), or in the fileset administrative object name (as opposed to the fileset directory attach point). UNIX client restriction UNIX clients cannot use user IDs or group IDs 999999 and 1000000 for real users or groups; these are reserved IDs used internally by SAN File System. Note: To avoid any conflicts with your current use of IDs, the reserved user IDs can be configured once at installation time. Windows client restrictions The SAN File System is natively case-sensitive. However, Windows applications can choose to use case-sensitive or case-insensitive names. This means that case-sensitive applications, such as those making use of Windows support for POSIX interfaces, behave as expected. Native Win32® clients (such as Windows Explorer) get only case-aware semantics. The case specified at the time of file creation is preserved, but in general, file names are case-insensitive. For example, Windows Explorer allows the user to create a file named Hello.c, but an attempt to create hello.c in the same folder will fail because the file already exists. If a Windows-based client accesses a folder that contains two files that are created on56 IBM TotalStorage SAN File System
  • 84. a UNIX-based client with names that differ only in case, its inability to distinguish between thetwo files may lead to undesirable results. For this reason, it is not recommended for UNIXclients to create case-differentiated files in filesets that will be accessed by Windows clients.The following features of NTFS are not currently supported by SAN File System: File compression on either individual files or all files within a folder. Extended attributes. Reparse points. Built-in file encryption on files and directories. Quotas; however, quotas are provided by SAN File System filesets. Defragmentation and error-checking tools (including CHKDSK). Alternate data streams. Assigning an access control list (ACL) for the entire drive. NTFS change journal. Scanning all files/directories owned by a particular SID (FSCTL_FIND_FILES_BY_SID). Security auditing or SACLs. Windows sparse files. Windows Directory Change Notification.Applications that use the Directory Change Notification feature may stop running when a filesystem does not support this feature, while other applications will continue running.The following applications stop running when Directory Change Notification is not supportedby the file system: Microsoft applications – ASP.net – Internet Information Server (IIS) – The SMTP Service component of Microsoft Exchange Non-Microsoft application – Apache Web serverThe following application continues to run when Directory Change Notification is notsupported by the file system: Windows Explorer. Note that when changes to files occur by other processes, the changes will not be automatically reflected until a manual refresh is done or the file folder is reopened.In addition to the above limitations, note these differences: Programs that open files using the 64-bit file ID (the FILE_OPEN_BY_FILE_ID option) will fail. This applies to the NFS server bundled with Microsoft Services for UNIX. Symbolic links created on UNIX-based clients are handled specially by SAN File System on Windows-based clients; they appear as regular files with a size of 0, and their contents cannot be accessed or deleted. Batch oplocks are not supported. LEVEL_1, LEVEL_2 and Filter types are supported. Chapter 2. SAN File System overview 57
  • 85. Differences between SAN File System and NTFS SAN File System differs from Microsoft Windows NT® File System (NTFS) in its degree of integration into the Windows administrative environment. The differences are: Disk management within the Microsoft Management Console shows SAN File System disks as unallocated. SAN File System does not support reparse points or extended attributes. SAN File System does not support the use of the standard Windows write signature on its disks. Disks used for the global namespace cannot sleep or hibernate. SAN File System also differs from NTFS in its degree of integration into Windows Explorer and the desktop. The differences are: Manual refreshes are required when updates to the SAN File System global namespace are initiated on the metadata server (such as attaching a new fileset). The recycle bin is not supported. You cannot use distributed link tracing. This is a technique through which shell shortcuts and OLE links continue to work after the target file is renamed or moved. Distributed link tracking can help a user locate the link sources in case the link source is renamed or moved to another folder on the same or different volume on the same PC, or moved to a folder on any PC in the same domain. You cannot use NTFS sparse-file APIs or change journaling. This means that SAN File System does not provide efficient support for the indexing services accessible through the Windows “Search for files or folders” function. However, SAN File System does support implicitly sparse files.2.5.12 FlashCopy A FlashCopy image is a space-efficient, read-only copy of the contents of a fileset in a SAN File System global namespace at a particular point in time. A FlashCopy image can be used with standard backup tools available in a user’s environment to create backup copies of files onto tapes. A FlashCopy image can also be quickly “reverted”, that is, roll back the current fileset contents to an available FlashCopy image. When creating FlashCopy images, an administrator specifies which fileset to create the FlashCopy image for. The FlashCopy image operation is performed individually for each fileset. A FlashCopy image is simply an image of an entire fileset (and just that fileset, not any nested filesets) as it exists at a specific point in time. An important benefit is that during creation of a FlashCopy image, all data remains online and available to users and applications. The space used to keep the FlashCopy image is included in its overall fileset space; however, a space-efficient algorithm is used to minimize the space requirement. The FlashCopy image does not include any nested filesets within it. You can create and maintain a maximum of 32 FlashCopy images of any fileset. See 9.1, “SAN File System FlashCopy” on page 376 for more information about SAN File System FlashCopy. Figure 2-10 on page 59 shows how a FlashCopy image can be seen on a Windows client. In this case, a FlashCopy image was made of the fileset container_A, and specified to be created in the directory 062403image. The fileset has two top-level directories, DRIVERS and Adobe. After the FlashCopy image is made, a subdirectory called 062403image appears in the special directory .flashcopy (which is hidden by default) underneath the root of the fileset. This directory contains the same folders as the actual fileset, that is, DRIVERS and Adobe, and all the file/folder structure underneath. It is simply frozen at the time the image was taken.58 IBM TotalStorage SAN File System
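As a sketch of how such an image might be taken and managed from the administrative CLI, the commands below use mkimage, lsimage, and reverttoimage from the sfscli command list shown earlier; the option names are assumptions, so check the command help for the exact syntax. The fileset and directory names match the example above.

sfscli> mkimage -fileset container_A -dir 062403image    # take a FlashCopy image of fileset container_A into directory 062403image (options assumed)
sfscli> lsimage -fileset container_A                     # list the images currently held for the fileset (assumed syntax)
sfscli> reverttoimage -fileset container_A 062403image   # roll the fileset back to that image if ever required (assumed syntax)

The image appears under the hidden .flashcopy directory at the root of the fileset, exactly as shown in Figure 2-10.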
  • 86. Therefore, clients have file-level access to these images, to access older versions of files, or to copy individual files back to the real fileset if required, and if permissions on the flashcopy folder are set appropriately. Figure 2-10 FlashCopy images 2.5.13 Reliability and availability Reliability is defined as the ability of SAN File System to perform to its specifications without error. This is critical for a system that will store corporate data. Availability is the ability to stay up and running, plus the ability to transparently recover to maintain the available state. SAN File System has many built-in features for reliability and availability. The SAN File System operates in a cluster. Each MDS engine supplied by the client is required to have the following features for availability: Dual hardware components: – Hardware mirrored internal disk drives – Dual Fibre Channel ports supporting multi-path I/O for storage devices Chapter 2. SAN File System overview 59
  • 87. Remote Supervisor Adapter II (RSA II). The RSA-II provides remote access to the engine’s desktop, monitoring of environmental factors, and engine restart capability. The RSA card communicates with the service processors on the MDS engines in the cluster to collect hardware information and statistics. The RSA cards also communicate with the service processors to enable remote management of the servers in the cluster, including automatic reboot if a server hang is detected. More information about the RSA card can be found in 13.5, “Remote Supervisor Adapter II” on page 537. To improve availability, the MDS hardware also needs the following dual redundant features: Dual power supplies. Dual fans. Dual Ethernet connections with network bonding enabled. Bonding network interfaces together allows for increased failover in high availability configurations. Beginning with V2.2.2, SAN File System supports network bonding with SLES8 SP 4 and SLES 9 SP 1. Redundant Ethernet support on each MDS enables the full redundancy of the IP network between the MDSs in the cluster as well as between the SAN File System Clients and the MDSs. The dual network interfaces in each MDS are combined redundantly servicing a single IP address. – Each MDS still uses only one IP address. – One interface is used for IP traffic unless the interface fails, in which case IP service is failed over to the other interface. – The time to fail over an IP service is on the order of a second or two. The change is transparent to SAN File System. – No change to client configuration is needed. We also strongly recommend UPS systems to protect the SAN File System engines. Automatic restart from software problems SAN File System has the availability functions to monitor, detect, and recover from faults in the cluster. Failures in SAN File System can be categorized into two types: software faults that affect MDS software components, and hardware faults that affect hardware components. Software faults Software faults are server errors or failures for which recovery is possible via a restart of the server process without manual administrative intervention. SAN File System detects and recovers from software faults via a number of mechanisms. An administrative watchdog process on each server monitors the health of the server and restarts the MDS processes in the event of failure, typically within about 20 seconds of the failure. If the operating system of an MDS hangs, it will be ejected from the cluster once the MDS stops responding to other cluster members. A surviving cluster member will raise an event and SNMP trap, and will use the RSA card to restart the MDS that was hung. Hardware faults Hardware faults are server failures for which recovery requires administrative intervention. They have a greater impact than software faults and require at least a machine reboot and possibly physical maintenance for recovery. SAN File System detects hardware faults by way of a heartbeat mechanism between the servers in a cluster. A server engine that experiences a hardware fault stops responding to heartbeat messages from its peers. Failure of a server to respond for a long enough period of60 IBM TotalStorage SAN File System
  • 88. time causes the other servers to mark it as being down and to send administrative SNMP alerts. Automatic fileset and master role failover SAN File System supports the nondisruptive, automatic failover of the workload (filesets). If any single MDS fails or is manually stopped, SAN File System automatically redistributes the filesets of that MDS to surviving MDSs and, if necessary, reassigns the master role to another MDS in the cluster. SAN File System also uses automatic workload failover to provide nondisruptive maintenance for the MDSs. 9.5, “MDS automated failover” on page 413 contains more information about SAN File system failover.2.5.14 Summary of major features To summarize, SAN File System provides the following features. Direct data access by exploitation of SAN technology SAN File System uses a data access model that allows client systems to access data directly from storage systems using a high-bandwidth SAN, without interposing servers. Direct data access helps eliminate server bottlenecks and provides the performance necessary for data-intensive applications. Global namespace SAN File System presents a single global namespace view of all files in the system to all of the clients, without manual, client-by-client configuration by the administrator. A file can be identified using the same path and file name, regardless of the system from which it is being accessed. The single global namespace shared directly by clients also reduces the requirement of data replication. As a result, the productivity of the administrator as well as the users accessing the data is improved. It is possible to restrict access to the global namespace by using a non-uniform SAN File System configuration. In this way, only certain SAN File System volumes and therefore filesets will be available to each client. See 3.3.2, “Non-uniform SAN File System configuration” on page 69 for more information. File sharing SAN File System is specifically designed to be easy to implement in virtually any operating system environment. All systems running this file system, regardless of operating system or hardware platform, potentially have uniform access to the data stored (under the global namespace) in the system. File metadata, such as last modification time, are presented to users and applications in a form that is compatible with the native file system interface of the platform. SAN File System is also designed to allow heterogeneous file sharing among the UNIX and Windows client platforms with full locking and security capabilities. By enabling this capability, heterogeneous file sharing with SAN File System increases in performance and flexibility. Chapter 2. SAN File System overview 61
  • 89. Policy based automatic placement SAN File System is aimed at simplifying the storage resource management and reducing the total cost of ownership by the policy based automatic placement of files on appropriate storage devices. The storage administrator can define storage pools depending on specific application requirements and quality of services, and define rules based on data attributes to store the files at the appropriate storage devices automatically. Lifecycle management SAN File System provides the administrator with policy based data management that automates the management of data stored on storage resources. Through the policy based movement of files between storage pools and the policy based deletion of files, there is less effort needed to update the location of files or sets of files. Free space within storage pools will be more available as potentially older files are removed. The overall cost of storage can be reduced by using this tool to manage data between high/low performing storage based on importance of the data.62 IBM TotalStorage SAN File System
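As with file placement, the lifecycle behavior summarized here is driven by the same SQL-like policy language described in 2.5.10. The sketch below is an illustration of the idea only — the keywords, attribute names, and rule forms are assumptions, not the exact grammar documented for the product:

rule 'ageOut'   migrate from pool 'UserPool1' to pool 'UserPool3' where DAYS_SINCE_LAST_ACCESS > 90
rule 'cleanTmp' delete where NAME like '%.tmp' and DAYS_SINCE_LAST_ACCESS > 30

A file management policy containing rules of this kind is evaluated by the planning script described earlier: run first in planning mode to produce a reviewable list of candidate files, and then again to carry out the migrations and deletions.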
  • 90. Part 2 Planning, installing, and upgrading In this part of the book, we present detailed information for planning, installing, and upgrading the IBM TotalStorage SAN File System. © Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 63
  • 92. 3 Chapter 3. MDS system design, architecture, and planning issues In this chapter, we discuss the following topics: Site infrastructure Fabric needs and storage partitioning SAN storage infrastructure Network infrastructure Security: Local Authentication and LDAP File Sharing: Heterogeneous file sharing Planning for storage pools, filesets, and policies Planning for high availability Client needs and application support Client data migration SAN File System sizing guide Integration of SAN File System into an existing SAN Planning worksheets © Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 65
  • 93. 3.1 Site infrastructure To make sure that the installation of SAN File System is successful, it is crucial to plan thoroughly. You need to verify that the following site infrastructure is available for SAN File System: Adequate hardware for SAN File System Metadata server engines. SAN File System is shipped as a software product; therefore, the hardware for SAN File System must be supplied by the client. In order to help to size the hardware for SAN File System Metadata server engines, a SAN File System sizing guide is available. We discuss sizing considerations in 3.12, “SAN File System sizing guide” on page 91. The Metadata servers must be set up with two internal drives for the operating system, configured as a RAID 1 mirrored pair. SAN configuration with no single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller. Detailed information about planning SANs is available in the redbook Designing and Optimizing an IBM Storage Area Network, SG24-6419. A KVM (Keyboard Video Mouse) for each server. This is also required for the Master Console, if deployed; however, a separate KVM can also be used. Typical clients will use a switch so that the KVM can be shared between multiple servers. The Master Console KVM can be shared with the SAN File System servers through the RSA card. A SAN with two switch ports per SAN File System server engine, and enough SAN ports for any additional storage devices and clients. The SAN ports on the SAN File System engines are required to be 2 Gbps, so appropriate cabling is required. Client supplied switches can be 1 or 2 Gbps (2 Gbps is recommended for performance). Optionally, but recommended, the Master Console, if deployed, uses two additional SAN ports. The HBA in the MDS must be capable of supporting the QLogic device driver level recommended for use with SAN File System V2.2.2. A supported back-end storage device with LUNs defined for both system and user storage. – Currently supported disk systems for system storage are the IBM TotalStorage Enterprise Storage Server (ESS), the IBM TotalStorage DS8000 series, the IBM TotalStorage DS6000 series, the IBM TotalStorage SAN Volume Controller (SVC), and IBM TotalStorage DS4000 series (formally FAStT) Models DS4300, DS4400, and DS4500. System metadata should be configured on high availability storage (RAID with a low ratio of data to parity disks). – SAN File System V2.2.2 can be configured with any suitable SAN storage device for user data storage. That is, any SAN-attached storage supported by the operating systems on which the SAN File System client runs can be used, provided it conforms to the SCSI standard for unique device identification. SAN File System V2.2.2 also supports iSCSI data LUNs as long as the devices conform to the SCSI driver interface standards. Sufficient GBICs, LAN, and SAN cables should be available for the installation. Each SAN File System engine needs at least two network ports and TCP/IP addresses (one for the server host address and the other for the RSA connection). The ports can be either the standard 10/100/1000 Ethernet, or optional Fibre connection. The Master Console, if deployed, requires two 10/100 Ethernet ports and two TCP/IP address. Therefore the minimum requirement for a two engine cluster is four Ethernet ports, or six if the optional Master Console is deployed. 
In addition, Ethernet bonding (see 3.8.5, “Network planning” on page 84 for more information) is HIGHLY recommended for every SAN File System configuration. This requires an additional network port (either standard 66 IBM TotalStorage SAN File System
  • 94. copper or optional fibre), preferably on a separate switch for maximum redundancy. With Ethernet bonding configured, three network ports are required per MDS. To perform a rolling upgrade to SAN File System V2.2.2, you must leave the USB/RS-485 serial network interface in place for the RSA cards. Once the upgrade is committed, you can remove the RS-485 interface, since it is no longer used. It is replaced by the TCP/IP interface for the RSA cards. Power outlets (one or two per server engine; dual power supplies for the engine are recommended but not required). You need two wall outlets or two rack PDU outlets per server engine. For availability, these should be on separate power circuits. The Master Console, if deployed, requires one wall outlet or one PDU outlet. SAN clients with supported client operating systems, and supported Fibre Channel adapters for the disk system being used. Supported SAN File System clients at the time of writing are listed in 2.5.11, “Clients” on page 51, and are current at the following Web site: http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html3.2 Fabric needs and storage partitioning When planning the fabric for SAN File System, consider these criteria: The SAN configuration for the SAN File System should not have a single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller. We recommend separating the fabrics between the HBA ports within the MDS. By separating the fabrics, you will avoid a single path of failure for the fabric services, such as the name server. A maximum of 126 dual-path LUNs can be assigned to the system storage pool. SAN File System V2.2 supports an unlimited number of LUNs for user data storage; however, the environment will necessarily impose some practical restrictions on this item, determined by the amount of storage supported by the storage devices and the client operating systems. The SAN File System Metadata servers (MDS) must have access to all Metadata (or system storage pool) LUNs. Access to client data LUNs is not required. The SAN File System clients must be prevented from having access to the Metadata LUNs, as shown in Figure 3-1 on page 68. The darker area includes the MDS engines and the LUNs in the system pool. The lighter areas include various combinations of SAN File System clients and LUNs in user pools. Overlaps are possible in the clients’ range of access, depending on the user data access required and the underlying support for this in the storage devices. The SAN File System clients need to have access only to those LUNs they will eventually access. This will be achieved by using zoning/LUN masking/storage partitioning on the back-end storage devices. Chapter 3. MDS system design, architecture, and planning issues 67
Figure 3-1 Mapping of Metadata and User data to MDS and clients (diagram: AIX and Windows 2000 clients and the Metadata servers attached through redundant FC switches; the Metadata server HBAs reach the System Pool, while the client HBAs reach the User Pool)

- Each of the SAN File System clients should be zoned separately (hard zoning is recommended) so that each HBA can detect all the LUNs containing that client's data in the user pools. If there are multiple clients with the same HBA type (manufacturer and model), these may be in the same zone; however, putting different HBA types in the same zone is not supported, for incompatibility reasons.
- LUN masking must be used, where supported by the storage device, to mask the metadata storage LUNs for exclusive use by the Metadata servers. Here are some guidelines for LUN masking:
  – Assign the metadata LUNs to the Linux mode (if the back-end storage has OS-specific operating modes).
  – Specify the LUNs for user pool LUNs, when using ESS, as follows (note that on SVC, there is no host type setting):
    - Set the correct host type according to which client/server you are configuring. The host type is set on a per-host basis, not for the LUN, regardless of host.
    - Therefore, with LUNs in user pools, the LUNs may be mapped to multiple hosts, for example, Windows and AIX. You can ignore any warning messages about unlike hosts.
  Tip: For ESS, if you have microcode level 2.2.0.488 or above, there will be a host type entry of IBM SAN File System (Lnx MDS). If this is available, choose it for the LUNs. If running an earlier microcode version, choose Linux.
- For greatest security, SAN File System fabrics should preferably be isolated from non-SAN File System fabrics on which administrative activities could occur. No hosts other than the MDS servers and the SAN File System clients can have access to the LUNs used by the SAN File System. This can be achieved by appropriate zoning and LUN masking, or, for greatest security, by using separate fabrics for SAN File System and non-SAN File System activities.
- The Master Console hardware, if deployed, requires two fibre ports for connection to the SAN. This enables it to perform SAN discovery for use with IBM TotalStorage Productivity Center for Fabric. We strongly recommend installing and configuring IBM TotalStorage Productivity Center for Fabric on the Master Console, as having an accurate picture of the SAN configuration is important for a successful SAN File System installation.
- Multi-pathing device drivers are required on the MDS. The IBM Subsystem Device Driver (SDD) is required on the SAN File System MDS when using the IBM TotalStorage Enterprise Storage Server, DS8000, DS6000, and SAN Volume Controller. RDAC is required on the SAN File System MDS for SANs using IBM TotalStorage DS4x00 series disk systems. Multi-pathing device drivers are recommended on the SAN File System clients for availability reasons, if provided by the storage system vendor.

3.3 SAN File System volume visibility
In SAN File System V1.1, there were restrictions on the visibility of user volumes. Basically, all the MDS and all the clients were required to have access to all the data LUNs. With V2.1 and later of SAN File System, this restriction is eased. The MDS require access to all the metadata LUNs only, and the clients require access to all or a subset of the data LUNs. Note that it is still true that SAN File System clients must not have visibility to the system volumes.

Important: Make sure your storage device supports sharing LUNs among different operating systems if you will be sharing individual user volumes (LUNs) among different SAN File System clients. Some storage devices allow each LUN to be made available only to one operating system type. Check with your vendor.

In general, we can distinguish two ways of setting up a SAN File System environment: a uniform and a non-uniform SAN File System configuration.

3.3.1 Uniform SAN File System configuration
In a uniform SAN File System configuration, all SAN File System clients have access to all user volumes. Because this configuration simplifies the management of the whole SAN File System environment, it might be the preferred approach for smaller, homogeneous environments. In a uniform SAN File System configuration, all SAN File System data is visible to all clients. If you need to prevent undesired client access to particular data, you can use standard operating system file and directory permissions to control access at the file or directory level. The uniform SAN File System configuration corresponds to a SAN File System V1.1 environment.

3.3.2 Non-uniform SAN File System configuration
In a non-uniform SAN File System configuration, not all SAN File System clients have access to all the user volumes. Clients access only the user volumes they really need, or the volumes residing on disk systems for which they have operating system support. The main consideration for a non-uniform configuration is to ensure that all clients have access to all user storage pool volumes that can potentially be used by a corresponding fileset. Any attempt to read or write data on a volume to which a SAN File System client does not have access will lead to an I/O error. We consider non-uniform configurations preferable for large and heterogeneous SAN environments.
Note for SAN File System V2.1 clients: SAN configurations for SAN File System V2.1 are still supported by V2.2 and above, so no changes are required in the existing SAN infrastructure when upgrading.

A non-uniform SAN File System configuration provides the following benefits:
- Flexibility
- Scalability
- Security
- Wider range of mixed environment support

Flexibility
SAN File System can adapt to the SAN zoning requirements of each environment. Instead of enforcing a single zone environment, multiple zones, and therefore multiple spans of access to SAN File System user data, are possible. This means it is now easier to deploy SAN File System into an existing SAN environment. To help make SAN File System configurations more manageable, a set of new functions and commands was introduced with SAN File System V2.1:
- A SAN File System volume can now be increased in size without interrupting file system processing or moving the content of the volume. This function is supported on those systems where the device driver allows LUN expansion (for example, current models of SVC or the DS4000 series) and the host operating system also supports it.
- Data volume drain functionality (rmvol) uses a transaction-based approach to manage the movement of data blocks to other volumes in the particular storage pool. From the client perspective, this is a serialized operation, where only one I/O at a time occurs to volumes within the storage pool. The goal of employing this kind of mechanism is to reduce the client's CPU cycles.
- Some commands for managing client data (for example, mkvol and rmvol) now require a client name as a mandatory parameter. This ensures that the administrative command will be executed only on that particular client.
We cover the basic usage of the most common SAN File System commands in Chapter 7, "Basic operations and configuration" on page 251.

Scalability
The MDS can host up to 126 dual-path LUNs for the system pool. The maximum number of LUNs for client data depends on the platform-specific capabilities of that particular client. Very large LUN configurations are now possible if the data LUNs are divided between different clients.

Security
By easing the zoning requirements in SAN File System, better storage and data security is possible in the SAN environment, as all hosts (SAN File System clients) have access only to their own data LUNs. You can see an example of a SAN File System zoning scenario in Figure 3-1 on page 68.

Wider range of mixed environment support
Because not all the data LUNs need to be visible to all SAN File System clients and to the MDS, and therefore not all storage must be supported on every client and MDS, this expands the range of supported storage devices for clients. For example, if you have Linux and Windows clients, and a storage system that is supported only on Windows, you could make the LUNs on that system available only to the Windows clients, and not the Linux clients.
Note that LUNs within a DS4000 partition can be used by only one operating system type; this is a restriction of the DS4x00 partition. Other disk systems, for example, SVC, allow multi-operating system access to the same LUNs.

3.4 Network infrastructure
SAN File System has the following requirements for the network topology:
- One IP address is required for each MDS and one for the Remote Supervisor Adapter II (RSA II) in each engine. This is still true when implementing redundant Ethernet support (Ethernet bonding; see 3.8.5, "Network planning" on page 84) with SAN File System V2.2.2, since the two Ethernet NICs share one physical IP address. Currently, SAN File System supports from two to eight engines.
- To take full advantage of the MDS dual Ethernet/Ethernet bonding support provided in V2.2.2, each Ethernet NIC must be cabled to a separate Ethernet port, preferably in a separate switch. This provides greater availability in the event of an Ethernet switch outage.
- Two types of interfaces are supported on the MDS: 10/100/1000 copper or 1 Gb fibre Ethernet. The RSA II uses 10/100/1000 copper Ethernet.
- The Master Console, if deployed, requires two Ethernet ports. One is connected to the existing IP network (connected to the Master Console, all MDS, and clients), and one is for a VPN connection to be used for remote access to bypass the firewall. This configuration allows the Master Console to be shared with an SVC (if installed).
- The client-to-cluster and intra-cluster communication traffic will be on the existing client LAN.
- All Metadata servers must be on the same physical network. If multiple subnets are configured on the physical network, it is recommended that all engines are on the same subnet.
- If possible, avoid any routers or gateways between the clients and the MDS. This will optimize performance.
- Any systems that will be used for SAN File System administration require IP access to the SAN File System servers hosting the Administrative servers.
An example of how the network can be set up is shown in Figure 3-2. Note that there are two physical connections on the right of each MDS, indicating the redundant Ethernet configuration. However, these share the one TCP/IP address.

Figure 3-2 Illustrating network setup (diagram: the Master Console with a VPN for remote access, the Metadata servers with their RSA connections, and the Windows 2000 and AIX clients on the existing IP network; redundant FC switches connect the MDS and clients to the System Pool and User Pool)

3.5 Security
Authentication to the SAN File System administration interface can be accomplished in one of two ways: using LDAP, or using a new procedure called local authentication, which uses the Linux operating system login process (/etc/passwd and /etc/group). You must choose, as part of the planning process, whether you will use LDAP or local authentication. If an LDAP environment already exists, and you plan to implement SAN File System heterogeneous file sharing, there is an advantage to using that LDAP; however, for those environments not already using LDAP, SAN File System implementation can be simplified by using local authentication. Using local authentication can eliminate one potential point of failure, since it does not depend on access to an external LDAP server to perform administrative functions.

3.5.1 Local authentication
With SAN File System V2.2.1 and later, you can use local authentication for your administrative IDs. Local authentication uses native Linux methods on the MDS to verify users and their authority to perform administrative operations. When an administrative request is issued (for example, to start the SAN File System CLI or log in to the GUI), the user ID and password are validated, and then it is verified that the user ID has authority to issue that particular request. Each user ID is assigned a role (corresponding to an OS group) that gives that user a specific level of access to administrative operations. These roles are Monitor, Operator, Backup, and Administrator. After authenticating the user ID, the administrative server interacts with the MDS to process the request.

Setting up local authentication
To use local authentication, define specific groups on each MDS (Administrator, Operator, Backup, or Monitor). They must have these exact names. Then add users, associating them
with the appropriate groups according to the privileges required. For a new SAN File System installation, this is part of the pre-installation/planning process. For an existing SAN File System cluster that has previously been using LDAP authentication, migration to the local authentication method can be done at any time, except during a SAN File System software upgrade. We show detailed steps for defining the required groups and user IDs in 4.1.1, "Local authentication configuration" on page 100 (for new SAN File System installations) and 6.7, "Switching from LDAP to local authentication" on page 246 (for existing SAN File System installations that want to change methods).

When using local authentication, whenever a user ID/password combination is entered to start the SAN File System CLI or GUI, the authentication method checks that the user ID exists as a UNIX user account in /etc/passwd, and that the correct password was supplied. It then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, the method determines whether this group is authorized to perform the requested function in order to decide access.

Some points to note when using the local authentication method:
- Every MDS must have the standard groups defined (Administrator, Operator, Backup, and Monitor).
- You need at least one user ID with the Administrator role. Other IDs with the Administrator or other roles may be defined, as many as are required. You can have more than one ID in each group, but each ID can only be in one group.
- Every MDS must have the same set of user IDs defined as UNIX OS accounts. The same set of users and groups must be manually configured on each MDS.
- Use the same password for each SAN File System user ID on every MDS. These must be synchronized manually in the local /etc/passwd and /etc/group files; use of other methods (for example, NIS) is not supported.
- You cannot change the authentication method during a rolling upgrade of the SAN File System software.
- Each user ID corresponding to a SAN File System administrator name must be a member of exactly one group corresponding to a SAN File System administrator authorization level (Administrator, Operator, Backup, or Monitor). Users who will not access SAN File System must not be members of SAN File System administration groups.

As of SAN File System V2.2.1, you may choose to either deploy an LDAP server as before, or use the new local authentication option. When installing a new SAN File System, you can select which option to use (LDAP or local authentication). When prompted for a CLI user and password, specify an ID in the Administrator group, and its associated password. We show this method of installing SAN File System in 5.2.6, "Install SAN File System cluster" on page 138 and 5.2.7, "SAN File System cluster configuration" on page 147. For existing SAN File System installations, you can switch from LDAP to the new local authentication option. We show how to do this in 6.7, "Switching from LDAP to local authentication" on page 246.
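As a minimal sketch of what this preparation looks like on the operating system, the following commands create the four required groups and one administrative ID on an MDS. The user name sfsadmin is an assumption chosen for illustration only; the exact, supported procedure for your release is the one in 4.1.1, "Local authentication configuration" on page 100.

```
# Run on every MDS in the cluster; the group names must match exactly
groupadd Administrator
groupadd Operator
groupadd Backup
groupadd Monitor

# Example administrative ID (the name sfsadmin is illustrative);
# each SAN File System ID may belong to exactly one of the four groups
useradd -g Administrator -m sfsadmin
passwd sfsadmin    # use the same password for this ID on every MDS
```

Remember that these users, groups, and passwords are kept only in the local /etc/passwd and /etc/group files, so they must be repeated and kept in step manually on every MDS in the cluster.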
3.5.2 LDAP
A Lightweight Directory Access Protocol (LDAP) server is the other alternative for authentication with the SAN File System administration interface. This LDAP server can be any compliant implementation, running on any supported operating system. It is not supported to install the LDAP server on any MDS or the Master Console at this time.
Although any standards-compliant LDAP implementation should work with SAN File System, at the time of writing, tested combinations included:
- IBM Directory Server V5.1 for Windows
- IBM Directory Server V5.1 for Linux
- OpenLDAP/Linux
- Microsoft Active Directory

The LDAP server needs to be configured appropriately with SAN File System in order to use LDAP to authenticate SAN File System administrators. Examples of LDAP setup and configuration are provided in the following appendixes:
- Appendix A, "Installing IBM Directory Server and configuring for SAN File System" on page 565
- Appendix B, "Installing OpenLDAP and configuring for SAN File System" on page 589

LDAP network requirements
SAN File System requires basic network information: the IP address and port numbers of the LDAP server. If you want to use a secure LDAP connection (optional), there must be a Secure Sockets Layer (SSL) in place, and an SSL certificate is required to set up the secure connection with the SAN File System. In order for the SAN File System to authenticate with the LDAP server, it also requires an authorized LDAP user name that can browse the LDAP tree where the users and roles are stored. A short description of the necessary fields and recommended values is given in Table 3-1, with space to fill in your values.

LDAP users
A user, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of the SAN File System. This is a person who will use the CLI (sfscli) or the GUI (console). While you can also use LDAP on your SAN File System clients to authenticate client users, this is not required, and is not discussed further in this redbook. All SAN File System administrative users must have an entry in the LDAP database. They must all have the same parent DN, and they must all be of the same objectClass. Each entry must contain a user ID attribute, which will be the login name, and a userPassword attribute. These fields are summarized in Table 3-1.

LDAP roles
SAN File System administrators must have a role. The role of a SAN File System administrator determines the scope of commands they are allowed to execute. In increasing order of permission, the four roles are Monitor, Operator, Backup, and Administrator. Each of the four roles must have an entry in the LDAP database. All must have the same parent DN (distinguished name), and all must have the same objectClass. When a user logs in, SAN File System checks the LDAP server to determine the role to which the user belongs. Each role entry must have an attribute containing the string that describes its role: Administrator, Backup, Operator, or Monitor. Finally, each must support an attribute that can contain multiple values, holding one value for each role occupant's DN. These fields are also summarized in Table 3-1.

Table 3-1 LDAP role information for SAN File System planning (each entry lists the description and an example value; the third column of the original table, "Your value", is left blank for you to record your own values)

Network
- IP address of LDAP server: 9.42.164.125
- Port numbers: 389 insecure, 636 secure
- Authorized LDAP user name: superadmin (default for IBM Directory Server)
- Authorized LDAP password: secret (default for IBM Directory Server)

Organization
- Organization parent DN: dn: o=ITSO
- ObjectClass: organization
- Organization: ITSO

Manager, ITSO organization
- Manager parent DN: dn: cn=Manager,o=ITSO
- ObjectClass: organizationalRole
- Attribute containing role name: cn: Manager

Users, ITSO organization
- User parent DN: dn: ou=Users,o=ITSO
- ObjectClass: organizationalUnit
- ou: Users

Administrator user (Users, ITSO organization)
- Parent DN: dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
- ObjectClass: inetOrgPerson
- Attribute containing role name: cn: ITSOAdmin Administrator
- ObjectClass of user entries: sn: Administrator
- Attribute containing login user ID: uid: ITSOAdmin
- Attribute containing login password: userPassword: password

Monitor user (Users, ITSO organization)
- Parent DN: dn: cn=ITSOMon Monitor,ou=Users,o=ITSO
- ObjectClass: inetOrgPerson
- Attribute containing role name: cn: ITSOMon Monitor
- ObjectClass of user entries: sn: Monitor
- Attribute containing login user ID: uid: ITSOMon
- Attribute containing login password: userPassword: password

Backup user (Users, ITSO organization)
- Parent DN: dn: cn=ITSOBack Backup,ou=Users,o=ITSO
- ObjectClass: inetOrgPerson
- Attribute containing role name: cn: ITSOBack Backup
- ObjectClass of user entries: sn: Backup
- Attribute containing login user ID: uid: ITSOBack
- Attribute containing login password: userPassword: password

Operator user (Users, ITSO organization)
- Parent DN: dn: cn=ITSOOper Operator,ou=Users,o=ITSO
- ObjectClass: inetOrgPerson
- Attribute containing role name: cn: ITSOOper Operator
- ObjectClass of user entries: sn: Operator
- Attribute containing login user ID: uid: ITSOOper
- Attribute containing login password: userPassword: password

Roles
- Role parent DN: dn: ou=Roles,o=ITSO
- ObjectClass: organizationalUnit
- ou: Roles
- ObjectClass of role entries: organizationalRole
- Attribute containing role name: cn: Administrator
- Attribute for role occupants: roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO

Administrator role (Roles, ITSO organization)
- Parent DN: dn: cn=Administrator,ou=Roles,o=ITSO
- ObjectClass: organizationalRole
- Attribute containing role name: cn: Administrator
- Attribute for role occupants: roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO

Monitor role (Roles, ITSO organization)
- Parent DN: dn: cn=Monitor,ou=Roles,o=ITSO
- ObjectClass: organizationalRole
- Attribute containing role name: cn: Monitor
- Attribute for role occupants: roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO

Backup role (Roles, ITSO organization)
- Parent DN: dn: cn=Backup,ou=Roles,o=ITSO
- ObjectClass: organizationalRole
- Attribute containing role name: cn: Backup
- Attribute for role occupants: roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO

Operator role (Roles, ITSO organization)
- Parent DN: dn: cn=Operator,ou=Roles,o=ITSO
- ObjectClass: organizationalRole
- Attribute containing role name: cn: Operator
- Attribute for role occupants: roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO
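To give a concrete feel for the entries described in Table 3-1, the following is a minimal sketch of how the ITSO example organization, one administrative user, and the Administrator role could be loaded into an OpenLDAP-style directory with ldapadd. The bind DN cn=Manager,o=ITSO is taken from the Manager entry in the table; the attribute and objectClass choices may need adjusting for your directory product, as described in Appendix A and Appendix B.

```
# Illustrative only: load the ITSO example entries from Table 3-1
ldapadd -x -D "cn=Manager,o=ITSO" -W <<'EOF'
dn: o=ITSO
objectClass: organization
o: ITSO

dn: ou=Users,o=ITSO
objectClass: organizationalUnit
ou: Users

dn: ou=Roles,o=ITSO
objectClass: organizationalUnit
ou: Roles

dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin
userPassword: password

dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
EOF
```

The Monitor, Backup, and Operator users and roles follow exactly the same pattern, substituting the corresponding values from Table 3-1.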
3.6 File sharing
In this section, we cover some requirements for advanced heterogeneous file sharing with the SAN File System.

3.6.1 Advanced heterogeneous file sharing
Advanced heterogeneous file sharing was introduced in SAN File System V2.2. It enables secure user and group authorization when sharing files between UNIX and Windows based systems. This allows files created on UNIX based systems to be viewed by authorized users on Windows based systems, and vice versa. More details on setting up advanced heterogeneous file sharing are given in 8.3, "Advanced heterogeneous file sharing" on page 347. Heterogeneous file sharing requires an Active Directory domain on Windows, and either an NIS or LDAP instance for UNIX, to provide directory services for user IDs on the SAN File System clients. At present, only one UNIX directory service domain (either NIS or LDAP) and one Active Directory instance are supported (although it is possible for this instance to serve multiple Active Directory domains).

3.6.2 File sharing with Samba
The use of Samba on selected SAN File System clients is also supported to export the global namespace. Samba is an open source Common Internet File System (CIFS) implementation that can be loaded on UNIX and Linux based platforms.
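As a sketch of this approach, a UNIX or Linux SAN File System client could export part of the global namespace with a share definition similar to the one below. The path /sfs/sanfs/projects and the share name are assumptions for illustration only; use the actual attachment point of your global namespace, and tune the share options to your security requirements.

```
# Illustrative only: export an assumed SAN File System directory through Samba
cat >> /etc/samba/smb.conf <<'EOF'
[sanfs-projects]
   comment    = SAN File System global namespace (projects area)
   path       = /sfs/sanfs/projects
   read only  = no
   browseable = yes
EOF
# Restart the Samba daemons to pick up the new share (command varies by distribution)
/etc/init.d/smb restart
```

Because the exporting client is an ordinary SAN File System client, the data it serves over CIFS benefits from the same placement policies and fileset layout as data accessed natively.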
3.7 Planning the SAN File System configuration
In this section, we cover some basic planning and sizing guidelines for SAN File System.

3.7.1 Storage pools and filesets
SAN File System volumes (LUNs) are grouped into storage pools, as described in 2.5.9, "Storage pools" on page 48. There are two types of pools: user pools and the System Pool. The System Pool is used for the actual file metadata, as well as for general "bookkeeping" of the SAN File System, that is, the system metadata, or the common information shared among all cluster engines.

Important: The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended, with a low ratio of data to parity disks. Remote mirroring solutions, such as Metro and Global Mirroring, available on the IBM TotalStorage Enterprise Storage Server, SVC, and DS6000/DS8000 series, are also possible.

One default user pool is created on installation; additional pools may be created based on criteria chosen by the particular organization. Examples of the many possible criteria include:
- Device capabilities
- Performance
- Availability
- Location: secure or unsecure
- Business owners
- Application types

We strongly recommend separating workload across different types of storage LUNs. The system pool size should start at approximately 2-5% of the total user data size, and volumes may be added to increase the pool size as user data grows. As part of the planning and design process, you should determine which storage pools are needed.

In order to determine how many storage pools are needed, a data classification analysis might be required. For example, you might want to place the database data, the shared work directories used by application developers, and the personal home directories of individuals into separate storage pools. The reason for doing this is to use storage capacity more efficiently. With the active data pooled, you can use an enterprise class disk array like the IBM TotalStorage DS8000 for the databases, a mid-range disk array like the DS4x00 series for the shared work directories, and low-cost storage (JBODs) for the personal home directories. The goal of storing the data in separate pools is to match the value of the data to the cost of the storage. Figure 3-3 shows three storage pools that have been defined for particular needs. The clients have also been mapped to particular storage pools according to the access requirements. This mapping information is used to determine which LUNs need to be made available to which clients (via a combination of zoning, LUN masking, or other methods as available in the storage system).

Figure 3-3 Data classification example (diagram: SAN File System clients accessing three virtualized storage pools over the SAN: a critical storage pool (RAID-5, cache), an OLTP storage pool (random I/O), and a cheap storage pool (JBOD))

Data classification analysis will also help you implement policy-based file placement management in the SAN File System. Policy determines which pool or pools will be used to place files when they are created within SAN File System. If a non-uniform configuration is being used in SAN File System, then you need to make sure that, for each client, all volumes in any storage pool that could be used by any fileset to which that client needs access are available to that client. We show some methods for doing this in 9.7, "Non-uniform configuration client validation" on page 429.
For the best performance in SAN File System, all engines should be busy in a balanced manner. This is facilitated through the use of filesets. You should plan for at least N filesets for N MDS; otherwise, some of the Metadata servers will be in standby mode. You could carve the workload into a multiple of N filesets, all expected to be similar in terms of workload, or use a more granular approach, where the filesets have different access characteristics (for example, where some generate more metadata traffic than others). SAN File System also supports basic load balancing functions for filesets; you can balance the fileset workload by dynamically assigning filesets to an MDS, depending on the number of filesets already being served by each MDS. See 7.5, "Filesets" on page 286 for more information about dynamic filesets and load balancing. Nested filesets are not recommended; see 7.5.2, "Nested filesets" on page 289 for the reasons why.

Note: Remember that the performance of the SAN File System cluster itself is dependent on metadata traffic, not data traffic.

3.7.2 File placement policies
SAN File System includes a powerful mechanism through which administrators can manage files in the global file system: the file placement policy. This is the placement of files in storage pools using rules based on file attributes, such as file name, owner, group ID of owner, or the system creating the files. It is important that you determine the policy during the planning and design phase, as an incorrect policy can cause files to be created in an unexpected storage pool. You can define as many policies as you like for the SAN File System; however, only one policy at a time can be active. Changes in policy are not retroactive, that is, if you later decide to create a rule to put certain files into a different pool, this will not affect files meeting that criteria that are already in the global file system. For detailed information about how to implement policy, see 7.8, "File placement policy" on page 304.
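To give a feel for what a placement policy can look like, the following sketch shows SQL-like placement rules of the general form used by SAN File System. The pool and fileset names are invented for illustration, and the exact rule syntax and supported attributes for your release are described in 7.8, "File placement policy" on page 304, so treat this only as an approximation.

```
# Illustrative only: a simple placement policy (pool and fileset names are invented)
cat > /tmp/placement_policy.txt <<'EOF'
VERSION 1
rule 'dbFiles'  set stgpool 'dbPool'      where NAME like '%.dbf'
rule 'homeDirs' set stgpool 'cheapPool'   for fileset ('homes')
rule 'default'  set stgpool 'defaultPool'
EOF
```

A policy file like this is then activated through the administrative CLI or GUI; remember that only one policy can be active at a time, and that rules are not applied retroactively to files already in the global namespace.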
3.7.3 FlashCopy considerations
FlashCopy images for each fileset are stored in a hidden .flashcopy directory, which is located under the fileset's attachment point in the directory structure. FlashCopy images are stored on the same volumes as the original fileset. The SAN File System FlashCopy engine uses a space-efficient, copy-on-write method to make the image. When the image is first made, the image and the original data share the same space. As data in the actual fileset changes (data added, deleted, or modified), only the changed blocks in the fileset are written to a new location on disk. The FlashCopy image continues to point to the old blocks, while the actual fileset is updated over time to point to the new blocks. 9.1.1, "How FlashCopy works" on page 376 describes this process in more detail.

Since the change rate of a fileset is not generally predictable, it is not possible to accurately determine how much space a particular FlashCopy image will occupy at a particular time. When planning space requirements, include space for FlashCopy images. You can maintain up to 32 images per fileset. The more images you maintain, the more space will be needed. Therefore, you need to consider how many FlashCopy images you would like to maintain for a particular fileset.

You might assume that 10% of the total amount of data in the fileset will change during the lifetime of a FlashCopy image, that is, between when the image is taken and when it is deleted. For example, assume we have 500 GB of data in one fileset and we want to keep three FlashCopy images. For a 10% changed data ratio, we will need 50 GB of additional space per FlashCopy image: in total, 150 GB of additional space for all three FlashCopy images.

Note: Keep in mind that the space used by FlashCopy images counts against the quota of the particular fileset.

3.8 Planning for high availability
The SAN File System has been architected to provide end-user applications with highly available (HA) access to data contained in a SAN File System global namespace. The following SAN File System features are geared toward providing high availability:
- Clustered MDSs for redundancy of the file system service
- Fileset relocation (failover/failback) in response to cluster changes
- SAN File System client logic for automatic fileset failover re-discovery and lock reassertion
- SAN File System client logic for automatic detection of changes in the MDS cluster
- SAN File System client logic for automatic establishment and maintenance of leases
- MDS failure monitoring and detection through network heartbeats
- Cluster server fencing through SAN messaging or remote power control
- Redundant paths to storage devices through dual HBAs and multipathing drivers
- Redundant MDS Ethernet connections through dual network adapters and driver-level path failover (Ethernet bonding in active-backup mode)
- Rolling upgrade of cluster software between releases
- Quorum disk lock function for network partition handling
- Administrative agent with autorestart service for software fault handling
- Internal deadlock detection

In combination, these capabilities allow a SAN File System to respond to many network, SAN, software, and hardware faults automatically, with little to no downtime for client applications. In addition, routine maintenance operations, such as server hardware, cluster software, or network switch upgrades, can be performed in a SAN File System while preserving application access to the SAN File System namespace.

3.8.1 Cluster availability
In a normal state, each MDS in the active cluster exchanges heartbeats with the other MDSs and uses these heartbeats to detect a failed peer MDS. In V2.2.2, the default heartbeat interval is 500 milliseconds with a heartbeat threshold of 20 heartbeats. If an MDS misses 20 consecutive expected heartbeats (10 seconds by default in V2.2.2), the cluster will declare the node failed and start to eject it. If the ejection cannot be communicated to the master MDS (for example, if the failed node is the master itself), then the observing MDS starts a process to elect a new master MDS from all the peer MDSs that remain in the cluster (that is, are reachable).
The length of the failure detection window is set so that a crashing MDS process has time to be restarted automatically if possible, and to rejoin the cluster before the ejection process is started. This means that filesets do not have to be relocated in the event of most software faults. The following section discusses the restart mechanism that makes this rejoin possible.

3.8.2 Autorestart service
The SAN File System Administrative agent running on each MDS has an autorestart service that monitors the MDS processes. If the processes fail on a particular MDS (a software fault), the autorestart service immediately restarts the MDS processes, which then attempt to rejoin the active cluster. If the restart and rejoin are successful, no filesets are relocated. If the autorestart service is stopped or disabled, or if the rejoin fails, one of the other MDSs will detect this and initiate an ejection operation to remove the failed node and relocate its filesets to an active MDS. The state of the autorestart service can be viewed using the sfscli lsautorestart command. To achieve the highest degree of availability, we highly recommend that the autorestart service always be enabled on all MDSs. The autorestart service will automatically disable itself if restarting an MDS fails four times within a one-hour period. This may happen, for example, in cases where a SAN fault causes continued I/O errors at restart time when the MDS attempts to rejoin the cluster. Periodic checks should be made to ensure that the autorestart service is active, especially after recovering from a fault. The service can be started using the sfscli startautorestart <servername> command on the master or target MDS.

3.8.3 MDS fencing
If the cluster loses contact with an MDS that was previously active, the MDS is called a "rogue server". Before moving filesets, or electing a new master MDS if the rogue MDS was the master, the master (or new master candidate) must first be certain that the rogue node cannot issue latent I/Os. That is, the rogue server must be "fenced" from the cluster so that it is guaranteed not to issue latent I/Os after failover. The SAN File System cluster software has two mechanisms for fencing rogue servers: the first is a SAN-based messaging protocol, and the second uses a remote power management capability to power the rogue node off.

Fencing through SAN communication
Certain types of network faults can cause the cluster to lose contact with an MDS. With the use of Ethernet bonding, this is less likely. If the lost MDS is actually alive and has access to the SAN but no network connection, the master MDS will send a shutdown message through a SAN-based messaging protocol. When a partitioned node receives the SAN-based shutdown message, it stops all I/O and sends an OK response through the SAN, signifying that it agrees to shut down and that it has completed all I/O. The partitioned node is then considered safe, and its workload may be relocated to another online MDS.

Fencing through remote power management (RSA)
If the MDS is unreachable via either the network or the SAN, another MDS can detect the loss of heartbeat and start the ejection from the cluster. It will also remotely power off the unreachable MDS before relocating its filesets. The remote power control function is implemented by the MDS using the IBM eServer xSeries RSA (Remote Supervisor Adapter) on the failed node. Before V2.2.2, remote access to an MDS's RSA card was through a dedicated RS-485 serial network.
An MDS wishing to fence another MDS from the cluster would log on to the local RSA card and access the remote RSA card over this RS-485 network. In V2.2.2 and beyond, all access to a remote RSA card is over the IP network. In V2.2.2 and higher, each MDS also periodically checks that it can reach the RSA cards in all other MDSs. By default, this check is executed daily. If an MDS cannot access a peer Metadata server's RSA card, the detecting node will log the error and issue an SNMP alert (see 13.6,
“Simple Network Management Protocol” on page 543). The RSA check interval can be changed or disabled using the following internal commands:
- sfscli legacy "setrsacheckinterval <interval_in_seconds>"
- sfscli legacy "setrsacheckinterval DEFAULT"
- sfscli legacy "disablersacheck"
Each of these commands must be executed from the master MDS. If the RSA fault detection is disabled with the last command, a manual check can be performed on demand (or via a cron job) using the internal command sfscli legacy lsengine. This is shown in 13.5.1, "Validating the RSA configuration" on page 538.
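As an illustration of such a periodic check, crontab entries on the master MDS could capture the autorestart state and the RSA reachability report once a day. The schedule and log file name are assumptions for illustration; the sfscli commands shown are the ones referenced above and in 13.5.1.

```
# Illustrative only: daily health-check entries for root's crontab on the master MDS
# (use the full path to sfscli if it is not in cron's PATH)
0 6 * * * sfscli lsautorestart   >> /var/log/sfs_healthcheck.log 2>&1
5 6 * * * sfscli legacy lsengine >> /var/log/sfs_healthcheck.log 2>&1
```

Reviewing this log after any fault or maintenance window is a simple way to confirm that the autorestart service is still enabled and that every RSA card remains reachable.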
3.8.4 Fileset and workload distribution
A fileset is a logical subtree of the SAN File System global namespace, and is the fundamental unit of workload assigned to an MDS in a cluster. Each MDS essentially provides access to a subset of the filesets that comprise the global namespace. The fileset workload of an MDS will be relocated to a peer MDS if the original MDS fails or is stopped. When a fileset is created, it can be assigned to a specific MDS for management. As long as the specified MDS is part of the active cluster group, that fileset will be serviced from the specified MDS. This is known as static fileset assignment. You can also choose to allow the cluster to assign the fileset to a suitable MDS, using a simple load balancing algorithm. This fileset may be moved from one MDS to another whenever the fileset load (number of filesets per MDS) becomes unbalanced because of a change in the cluster membership. This is known as dynamic fileset assignment. Filesets can be changed from static to dynamic, and from dynamic to static, and a static fileset can also be reassigned statically to another MDS.

We recommend using either all dynamic or all static fileset assignments, to avoid undesired excessive load on a specific MDS cluster node and to get more predictable fileset distribution and failover behavior. Using all static filesets allows more precise control when load balancing the SAN File System cluster. Dynamic filesets will be allocated to different MDSs to balance the load; however, the load balancing algorithm essentially only considers the number of filesets assigned to each MDS. It does not take into account that some filesets may be more active than others. Therefore, if you know which filesets are expected to be more active, you can use this knowledge to assign them statically to cluster nodes based on activity, as opposed to the number of filesets. In a static fileset environment, you can also choose to have an idle MDS with no filesets assigned. This idle server is available to receive failed-over filesets. This is known as an N+1 or spare server configuration. The only way to force a spare server N+1 configuration is to make all filesets static and leave one node with no static fileset assignments.

It is important to note that a client must be able to access all filesets in the path to an object in order to access the object. Therefore, namespace design has an impact on availability, and in general nested filesets should be avoided for maximum availability, because an event impacting a parent fileset can impact all of its child filesets.

3.8.5 Network planning
A SAN File System is implemented in the client IP network. The properties of this underlying network impact the availability of the SAN File System. Figure 3-4 shows a SAN File System network designed for high availability.

Figure 3-4 SAN File System design for high availability (diagram: Windows, AIX, and Linux SAN File System clients, the admin client, Master Console, and LDAP server on the IP network; the MDS cluster (master and subordinate MDSs) uses Ethernet bonding on the IP side and SDD or RDAC multipathing over two SAN fabrics, with the clients reaching the user data pools and the MDSs reaching the system (metadata) pool)

Each SAN File System MDS has dual Ethernet adapters (Gigabit copper or fibre Ethernet), and uses Ethernet bonding to provide redundant connections to the IP network. Bonding is a term used to describe combining multiple physical Ethernet links to form one virtual link, and is sometimes referred to as trunking, channel bonding, NIC teaming, IP multipathing, or grouping. Bonding is commonly implemented either in the kernel network stack (driver and device independent) or by Ethernet device drivers. There are multiple bonding modes, the most common being active-active (load balancing packets across all bonded members) and active-backup, in which only one NIC in a bonded group is active at a time. In active-backup mode, failover occurs to the inactive NIC upon failure of the active NIC. Active-backup mode works with any existing Ethernet infrastructure, while active-active mode (load balancing) requires the participation of the network switches.

In SAN File System V2.2.2, the dual Ethernet adapters on each MDS can be bonded into one virtual interface in active-backup mode, with MII monitoring for link failure detection. See "Set up Ethernet bonding" on page 131 and IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316 for details on configuring Ethernet bonding on the MDS. Although not compulsory, we strongly recommend that you implement Ethernet bonding in your SAN File System cluster. Ethernet bonding allows a single NIC or cable to fail without downtime, but if both NICs are connected to a common switch, a switch failure can cause significant downtime. Therefore, for highest availability, a fully redundant physical network layer is recommended, so that each NIC is connected to a separate switch.
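For reference, the following is a minimal sketch of what an active-backup bond with MII monitoring looks like on a generic Linux system. The interface names (eth0, eth1) and the IP address are assumptions; the exact, supported procedure for the MDS is the one documented in "Set up Ethernet bonding" on page 131 and in the Installation and Configuration Guide.

```
# Illustrative only: generic Linux active-backup bonding with MII link monitoring
modprobe bonding mode=1 miimon=100           # mode=1 is active-backup; check link every 100 ms
ifconfig bond0 192.168.10.21 netmask 255.255.255.0 up   # assumed MDS address on the bond
ifenslave bond0 eth0 eth1                    # enslave both NICs (interface names are assumptions)
```

Note that the single IP address stays with the bonded interface, which matches the requirement in 3.4 that the two Ethernet NICs on each MDS share one physical IP address.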
The combination of Ethernet bonding and a fully redundant physical network allows a SAN File System to be transparent to many network faults or maintenance operations, such as cable faults, NIC faults, or switch replacements. If both NICs are isolated, then failover will occur; that is, the MDS will be ejected from the cluster, and its filesets transferred to the surviving MDS(s). Ethernet bonding may also be implemented on the SAN File System clients; this is optional, and the methods for doing this depend on the client OS platform. Consult your OS documentation for details.

3.8.6 SAN planning
A SAN File System that has high availability requirements also needs a redundant SAN fabric. The Fibre Channel Host Bus Adapters (HBAs) on each MDS should be connected to separate switches. Client machines should also be attached to more than one switch, but this is less critical, since the loss of a single client's I/O path does not impact other clients in the way that the total loss of an MDS's I/O path does. A highly available SAN configuration is shown in Figure 3-4 on page 84. Redundancy in the physical SAN must work together with a function that detects SAN path failure and fails the traffic over to another path. This function is commonly provided either by the disk device driver or at the OS kernel level. The SAN File System MDSs work with the Subsystem Device Driver (SDD) when the system pool is comprised of volumes provided by DS8000/DS6000, ESS, or SVC. If the system volumes are provided by DS4x00/FAStT, then the RDAC multipathing driver is used.

3.9 Client needs and application support
This section details some specifics for clients (file sharing, administrative, and Master Console), as well as application support.

3.9.1 Client needs
At the time of writing, SAN File System supports the following client platforms:
- Windows 2000 Server and Advanced Server
- Windows Server 2003 Standard and Enterprise Editions
- AIX 5L Version 5.1 (32-bit)
- AIX 5L Version 5.2 (32- and 64-bit)
- AIX 5L Version 5.3 (32- and 64-bit)
- Red Hat Enterprise Linux 3.0 on Intel
- SUSE Linux Enterprise Server 8.0 on Intel
- SUSE Linux Enterprise Server 8.0 for IBM zSeries
- SUSE Linux Enterprise Server 8.0 for IBM pSeries
- Solaris 9

See the following Web site for the latest list of supported SAN File System client platforms, including full fix, Service Pack, and kernel levels:
http://www.ibm.com/storage/support/sanfs

Volume managers, such as VERITAS Volume Manager or LVM in AIX, can be used only to manage virtual disks or LUNs that are not managed by SAN File System. This is because both SAN File System and other volume managers need to "own" their particular volumes.
The clients require HBAs that are compatible with the underlying storage systems used for data storage by SAN File System. See the following IBM Web sites for supported adapters:
- DS6x00 and DS8x00 series:
http://www.ibm.com/servers/storage/support/disk/ds6800/
http://www.ibm.com/servers/storage/support/disk/ds8100/
http://www.ibm.com/servers/storage/support/disk/ds8300/
- ESS:
http://www.ibm.com/servers/storage/support/disk/2105.html
- SVC:
http://www.ibm.com/servers/storage/support/virtual/2145.html
- DS4x00 series except DS4800:
http://www.ibm.com/servers/storage/support/disk/ds4100/
http://www.ibm.com/servers/storage/support/disk/ds4300/
http://www.ibm.com/servers/storage/support/disk/ds4400/
http://www.ibm.com/servers/storage/support/disk/ds4500/
For non-IBM storage, consult your vendor for supported HBAs.

Each client requires at least 20 MB of available space on the hard drive for the SAN File System client code.

To remotely administer the SAN File System, you need a secure shell (SSH) client for the CLI and a Web browser for the GUI. Examples of SSH clients are PuTTY, Cygwin, or OpenSSH, which are downloadable at:
http://www.putty.nl
http://www.cygwin.com
http://www.openssh.com
The Web browsers currently supported are Internet Explorer 6.0 SP1 and above, and Netscape 6.2 and above (Netscape 7.0 and above is recommended). To access the Web interface for the RSA II card, the Java plug-in Version 1.4 is also required, which can be downloaded from:
http://www.java.sun.com/products/plugin

3.9.2 Privileged clients
A privileged client, in SAN File System terms, is a client that needs to have root privileges in a UNIX environment or Administrator privileges in a Windows environment. A root or Administrator user on a privileged SAN File System client will have full control over all file system objects in the filesets. A root or Administrator user on a non-privileged SAN File System client will not have full control over file system objects. We discuss privileged clients in more detail in 7.6.2, "Privileged clients" on page 297.

How many privileged clients do you need? This depends on your environment. However, we recommend having at least two privileged clients per platform, which means two for Windows, and two for UNIX-based systems. In this case, since AIX, Solaris, and Linux all use the same user/group permissions scheme, we consider them all to be UNIX-based systems. A privileged client is also needed to perform backup/restore operations. You may consider configuring additional privileged clients if root or Administrator privileges are required by any of your client applications or particular security needs.
You can grant or revoke privileged client access dynamically. In this way, you can grant privileged client access only when you need to perform an action requiring root privileges on SAN File System objects, and revoke it once you complete the action.

3.9.3 Client application support
SAN File System is designed to work with all applications, and application binaries can be installed in the SAN File System global namespace.

Virtual I/O on AIX
The SAN File System V2.2.2 client for AIX 5L V5.3 will interoperate correctly with Virtual I/O (VIO) devices. The support for VIO enables SAN File System clients to use data volumes that can be accessed through VIO. In addition, all other V2.2.2 SAN File System clients will interoperate correctly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients. SAN File System supports the use of data LUNs over VIO devices, except in storage subsystem/device driver configurations that require the administrator to write a VIOS Volume Label in order to use a LUN. The list of supported devices and configurations for VIO, including limitations on those that require writing a VIOS Volume Label, is available at:
http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html

Direct I/O
Some applications, such as database management systems, use their own sophisticated cache management systems. For such cases, SAN File System provides a direct I/O mode. In this mode, SAN File System performs direct writes to disk and bypasses local file system caching. Using the direct I/O mode makes files behave more like raw devices. This gives database systems direct control over their I/O operations, while still providing the advantages of SAN File System features, such as policy-based placement. The application needs to be written to understand how to use direct I/O; enterprise applications normally know how to handle this. Basically, the application sets the O_DIRECT flag when it calls open() on AIX. On Windows, the application has to set the FILE_NO_INTERMEDIATE_BUFFERING flag at open time. It is not possible for the administrator to enable this setting (for example, at mount time). Direct I/O is already available for IBM DB2 UDB for Windows, and for IBM DB2 UDB for AIX at V8.1 FP4. Direct I/O is also available on SAN File System Intel clients that run Linux releases (32-bit) that support the POSIX direct I/O file system interface calls, such as SLES8 and RHEL3.

3.9.4 Clustering support
SAN File System supports the following clustering software on the clients:
- HACMP clustering software on AIX. Sun Cluster Version 3.1 is also supported.
- Microsoft Cluster Server (MSCS) on Windows 2000 Advanced Server and Windows Server 2003 Enterprise Edition.
MSCS with SAN File System
A maximum of two nodes per MSCS cluster is supported. When implementing MSCS on SAN File System clients, the administrator defines accessible filesets or directories in the SAN File System namespace as cluster resources that are owned, and may be served (that is, using CIFS sharing), by only one node in the MSCS cluster at a time. MSCS then moves ownership of these resources around the cluster according to the availability of the nodes. This means that any eligible (see the next paragraph) fileset or directory in the SAN File System namespace is accessible through only one node in a given MSCS cluster at a time. For clients not in the MSCS cluster, access to the SAN File System namespace is shared as usual according to SAN File System features.

The individual MSCS cluster resources that are eligible to be defined can be any first-level directory below the root drive. Therefore, these could be directories corresponding to filesets that are attached to the root of the SAN File System global namespace, or non-fileset directories that are attached directly at the ROOT fileset. Second or lower-level directories (regardless of whether they correspond to filesets) are not available to be defined as MSCS cluster resources. See Chapter 11, "Clustering the SAN File System Microsoft Windows client" on page 447 for more information about MSCS.

3.9.5 Linux for zSeries
The SAN File System client for Linux for IBM eServer zSeries supports the 31-bit SLES8 distribution, with the 2.4.21-251 kernel. This can be running under z/VM V5.1 or later, or directly within an LPAR, on any generally available zSeries model that supports the co-required OS and software stack. The SAN File System zSeries Linux client supports the use of fixed-block SCSI SAN for zSeries with the zFCP driver, with data LUNs on IBM ESS, DS6000, and DS8000 storage. Therefore, the zSeries SAN File System client can share data in the SAN File System with other zSeries clients, or with SAN File System clients on other platforms, provided the data resides on one of the zSeries Linux supported disk systems. Non-IBM storage and iSCSI storage are not supported at this time.

3.10 Data migration
Existing data must be migrated (copied) into the SAN File System global namespace from its original file system location. This is because SAN File System separates the metadata (information about the files) from the actual user data. The migration process stores the metadata in the System Pool and the file data in the appropriate user pool(s), according to the policy in place. Figure 3-5 on page 89 shows this process. Remember that after you have migrated applications and data, files may be stored in different locations than previously. You may have to update configuration files, environment variables, and scripts to reflect the new file locations. Careful testing will be required after migration has completed to ensure that the clients will be able to access the data. There are currently two options available for data migration: offline and online migration.
Figure 3-5 SAN File System data migration process (diagram: the migration writes metadata to the System Pool and, guided by the policy rules, copies user data from its original location into the SAN File System user pools; the client sees both the original and the destination directory)

3.10.1 Offline data migration
The SAN File System client includes a special data migration tool called migratedata. This is a transaction-based, restartable utility designed to migrate large quantities of data. This command operates offline, that is, all applications accessing the data must be stopped, and no clients may access the data while migration is in progress. The migratedata command operates in three modes: plan, migrate, and verify. In the plan phase, data is collected about the size of the data being migrated and the system resources available, to provide an estimate of the length of time required to migrate the data. The migrate phase actually copies the data into SAN File System, and the verify phase is used to verify the integrity of the migrated data. Any file-based copy utility can also be used to migrate data to SAN File System (for example, cp, mv, xcopy, tar, and backup/restore programs).

You must make sure there is enough storage capacity to perform the migration to SAN File System. You must also determine whether there is sufficient storage capacity to provide for short and mid-term growth. The amount of space required during migration will be at least double the space currently occupied in the source file system, because at the end of the migration, disk space is required for both the original data and the new copy in SAN File System. After migration, the migrated files should occupy approximately the same amount of space as before. An exception is that migrated files on NTFS compressed drives will be expanded, and sparse files will become dense or full; these types of files will therefore require more space. Once the migration is validated, for example, by verifying the files and performing some application testing, the source data can be deleted, and its associated disk space can be re-used.

An important aspect of planning for migration is the migration time. You should plan for approximately eight hours to migrate one terabyte (this includes 3-4 hours of hardware configuration and data verification). The migration of the data itself should take approximately 3-4 hours per terabyte of data, and you can also use the plan phase of migratedata to gain a more accurate estimate. Careful planning and calculation of the migration time is crucial, as user applications will be offline during the migration. Because migration is a complex task, it is highly recommended to engage professional services for migration to ensure proper planning and execution. IBM provides migration services that address these issues; contact your local IBM representative for more information. More detailed information about migration of data to SAN File System using the migratedata utility is in 9.2, "Data migration" on page 389.
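As a quick way to apply the figures above to your own environment, the following sketch measures a source tree and prints rough space and time estimates. The source path is an assumption, the arithmetic simply applies the "at least double the space" and "about eight hours per terabyte" rules of thumb quoted above, and the plan phase of migratedata remains the authoritative estimate.

```
# Illustrative only: rough migration planning figures for an assumed source path
SRC=/data/appfs                                   # assumption: source file system to migrate
USED_GB=$(du -sk "$SRC" | awk '{printf "%d", $1/1024/1024}')
echo "Source data:                          ${USED_GB} GB"
echo "Space needed during migration (~2x):  $(( USED_GB * 2 )) GB"
echo "Estimated elapsed time (~8 h per TB): $(( USED_GB * 8 / 1024 )) hours"
```

Figures like these are useful for scoping the application outage window, but the actual copy rate depends heavily on file sizes, SAN bandwidth, and the back-end storage.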
  • 117. 3.10.2 Online data migration A service offering is available from IBM that will provide nondisruptive data migration to SAN File System at installation time. Service description The TotalStorage Services SAN File System Migration offering provides a nondisruptive online data migration at the file or block level to ensure data integrity. The TotalStorage Services team will provide architectural planning, along with execution at the byte-level, to migrate data from current application servers into a virtualized SAN File System environment. The service includes: Architecture planning Installation and hardware planning Software installation Initial file system comparisons and synchronizations Implementing a calculated replication schedule Contact your IBM service representative for more details of this offering or go to the following Web site: http://www.storage.ibm.com/services/software.html3.11 Implementation services for SAN File System An IBM service offering for implementing the SAN File System is available to help you introduce SAN File System into your IT environment. Service description The IBM Implementation service offering for SAN File System provides planning, installation, configuration, and verification of SAN File System solutions. The service includes: Pre-install planning session Skills transfer SAN File System MDS installation Assistance with the LUNs configuration on back-end storage for SAN File System Storage pool and filesets configuration Master Console installation and configuration Client installation Optional LDAP installation Benefits IBM has years of experience with providing Storage Virtualization solutions. The key benefit is skills transfer from IBM Specialists to client personnel during the installation and configuration phase. This offering also helps clients to manage and focus resources on day-to-day operations. Contact your service representative for more details of this offering or go to the following Web site: http://www.storage.ibm.com/services/software.html90 IBM TotalStorage SAN File System
  • 118. 3.12 SAN File System sizing guide The purpose of this section is to provide guidance for sizing a SAN File System solution. The SAN File System provides a global namespace to a host or client, which appears as a local file system. The data contents for all the file objects in the SAN File System are directly accessible to all clients over the SAN. The metadata is served by the Metadata servers over an IP network that connects the client(s) to the SAN File System cluster. This architecture is therefore designed to give near-local file system performance, while providing the availability, scalability, and flexibility offered by a SAN.3.12.1 Assumptions We do not specifically address sizing of the SAN and fabrics. Specific fibre bandwidth must be available to support the current application workload. The SAN bandwidth and topology should be seen as a separate exercise using existing best practices. We assume that this exercise has been completed and the SAN performance is satisfactory. Since the SAN File System engines have 2 Gbps HBAs, it is recommended to use 2 Gbps connections in all the switches and clients. It is assumed that the IP network connecting the client/s and the Metadata servers is sufficiently large with sufficiently low latency.3.12.2 IP network sizing The network topology is recommended to be at least 100 Mbps, with 1 Gbps preferred. Standard network analyzers and network performance tools (for example, netstat) can be used to measure network utilization and traffic.3.12.3 Storage sizing SAN File System is a journaling or logging file system, and hence the Metadata servers need to perform logging operations periodically to preserve file system integrity. This logging is done in the System Pool; therefore, it is necessary to guarantee sufficient bandwidth and low latency to these LUNs to get overall good performance, especially during peak metadata transaction periods. You should use the most robust RAID configurations, such as RAID 5 with write back caching enabled for the System Pool LUNs. Size of the System Pool Another important aspect of sizing is to be able to estimate the amount of space required to set up, populate, and deploy a SAN File System installation. This involves estimating the volume of metadata that will be stored and served by the Metadata servers. In general, since SAN File System is more scalable in this aspect and can support heterogeneous clients, the metadata space overhead should be marginally higher than for most local file systems. The rule of thumb for generic local file systems (those without SAN File System) is that they require approximately 3 MB for every 100 MB of actual data for metadata, which is about 3%. SAN File System will require approximately 5% for metadata. This number is typically proportional to the number of populated objects (files, directories, symbolic links, and so on) in SAN File System. So both factors (total user space and number of objects) should be considered in sizing the metadata space requirement. For example, a small number of large files would have less metadata than a large number of small files. Chapter 3. MDS system design, architecture, and planning issues 91
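To make the rule of thumb concrete, here is a minimal shell sketch that estimates the System Pool (metadata) space for a given amount of user data; the 2 TB (2048 GB) figure is simply an example value, and the result should still be adjusted for the expected number of file system objects:
# USER_DATA_GB=2048 (example: 2 TB of user file data)
# echo "$USER_DATA_GB * 5 / 100" | bc (approximately 5% of the user data)
102
In this example, roughly 100 GB of System Pool space would be planned for 2 TB of user data.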
  • 119. However, the minimum recommended size for a system volume is 2 GB. This is because SAN File System has been designed to work with large amounts of data, and therefore testing has been targeted on system volumes of at least this size. Using the 5% rule, this would give a minimum global namespace of 40 GB. Important: Do not allow the System Pool to fill up. Alerts are provided to monitor it. If spare LUNs are available to the MDS, the System Pool can be expanded without disruption. Size of the User Pools Initially, the User Pools would be of similar size to the local data space that they replaced. Some reduction in size will be achieved because of the separation of metadata. However, when migrating data into SAN File System, be aware that files in NTFS compressed drives will be uncompressed, and sparse files (if used) will be made dense or expanded. In addition, space needs to be considered for FlashCopy images. Therefore, you will want a free space margin in your User Pools. If spare LUNs are available to the clients, the User Pools can be expanded without disruption.3.12.4 SAN File System sizing The chief metric for SAN File System sizing is the number of MDSs that will be required. The data access method for SAN File System is critical to understanding how many MDSs are required, since one of the primary factors for this count is the number of metadata transactions per second. When a SAN File System client accesses a file for the first time, it sends a request over the LAN to the SAN File System cluster. The metadata is returned to the client. All reads and writes of the user data go between the client and the storage device, directly over the SAN, as shown in Figure 3-6 on page 93. The client also caches metadata locally, meaning that subsequent opens for the same file do not require a request to be sent to the MDS. Therefore, since each MDS is only involved in metadata access, not actual file data access, the mix and rate of metadata transactions is a key factor. This mix and rate will clearly vary according to the particular client workload; therefore, a pilot under specific application conditions may be the best method to get an accurate measure.92 IBM TotalStorage SAN File System
  • 120. Figure 3-6 SAN File System data flow (clients exchange metadata with the Metadata cluster over the TCP/IP LAN, while file data moves between the client FS and the User Pools directly over the SAN fabric; the Metadata cluster accesses the System Pool over the SAN)
Other parameters affecting loading and sizing of the SAN File System include:
- The number and mix of file system objects (for example, files, directories, symbolic links, and so on) that would be involved in the combined workload as seen by the MDS cluster.
- The number of filesets those file system objects are partitioned into.
- The size and mix of the objects and filesets that each client would be expected to operate on with their respective applications.
- For workload distribution purposes, there should be at least one fileset assigned to each engine. This implies that there should be at least as many filesets as there are engines in the cluster, unless it is desired to have a spare idle MDS in the cluster, for example, for availability reasons. One subordinate MDS should have some spare capacity, so that it can support takeover of other filesets, in case of a hard failure of an engine.
- The mix of metadata operations affects the maximum load. For example, file create operations may take up to twice as long as a file open.
- The typical file operations a client application would generate. For example, is it primarily read-only, or does it write a lot of new files?
- The impact of multi-client sharing of SAN File System file system objects. This will generate more metadata traffic, particularly if the file is shared heterogeneously.
Collecting this data and analyzing it is a difficult exercise and it requires considerable expertise with the application under consideration. Performance analysis should be based around peak application workloads rather than average workloads. File operation profiles for many well known and standard workload classifications can be used to estimate this information; your IBM representative can assist with the sizing of the SAN File System.
  • 121. SAN File System clients
The cache plays an important role in SAN File System client operation, and it essentially operates like any other least recently used object cache. It has the typical characteristics of a cache in the sense that the larger the cache, the better the performance, and a large working set size gives the potential for lower performance. Applications that use few file system objects and are I/O intensive over a small working set size tend to have the ideal cache footprint, and hence could potentially perform the best with SAN File System; that is, very close to that of a local file system. Various SAN File System clients may implement varying amounts of memory to be used for caching the metadata. In general, larger amounts of RAM on the client could improve performance, as more client caching will be done.
Consider the impact of multi-client sharing of objects in the SAN File System, especially if the sharing involves heterogeneous clients. In general, the more clients that are sharing objects, the higher the potential for metadata transactions. An example of how object transactions within the SAN File System are done is shown in Figure 3-7.
Note: The data cache and metadata cache are on the SAN File System client.
Figure 3-7 Typical data and metadata flow for a generic application with SAN File System (an application process issues file system operations (FOPS) to the SAN File System client, which uses its metadata cache and data cache; metadata operations (MDS OPS) flow to the MDS server(s), while data flows over the SAN)
SAN File System metadata workload
The number of MDSs needed is defined by the predicted metadata workload of the clients. Clients cache metadata locally, and obviously the higher the hit rate on this cache (that is, the percentage of time when a metadata request can be satisfied from the cache without having to access the MDS), the lower the workload will be on the MDS, and fewer MDSs will be required.
  • 122. Testing has shown very high client metadata cache hit ratios, depending on the application workload. Therefore, many application operations that could require metadata services can be satisfied locally, without having to access the MDS itself. In other words, under normal working conditions, the volume of MDS operations per second (MDS OPS in Figure 3-7 on page 94) will be relatively small compared to the volume of File System Operations per second (FOPS in Figure 3-7 on page 94) produced by a given workload of application operations per second. Please consult your IBM representative for support in sizing a SAN File System configuration.
3.13 Planning worksheets
Table 3-2, Table 3-3, Table 3-4, Table 3-5 on page 96, and Table 3-6 on page 96 are sample worksheets to fill in while planning the installation. You will find other worksheets in IBM TotalStorage SAN File System Planning Guide, GA27-4344.
Table 3-2 Network configuration of SAN File System
  Item                       MDS 1          MDS 2
  IP address for engine
  IP address for RSAII
  Host name
  Subnet mask
  Gateway
  DNS address (optional)
  Cluster name
Table 3-3 SAN File System drive letter for Windows clients
  Item                   Windows Client 1   Windows Client 2   Windows Client 3
  Host name
  Desired drive letter
Table 3-4 SAN File System namespace directory point for UNIX-based clients
  Item                     UNIX Client 1   UNIX Client 2   UNIX Client 3
  Host name
  Directory attach point
  • 123. Table 3-5 Storage planning sheet Pool type Storage device Accessible clients Volume_Names System User Default Default_Pool User Pool Name User Pool Name Table 3-5 will help you plan out the zoning or LUN access, by specifying which clients should have access to which storage pool(s). Remember a client needs access to all volumes in a storage pool. Table 3-6 Client to fileset and fileset to storage pool relationships planning sheet Client name Fileset name Storage Pool name Use Table 3-6 to relate your filesets, storage pools, and policies. First, decide which fileset(s) each client should have access to. Then decide which storage pool(s) each fileset should be able to store files in. You will use this information to plan your policies as well as to confirm that each client has access to the required volumes in the pools to support the required fileset access.3.14 Deploying SAN File System into an existing SAN If you are installing SAN File System into a separate, non-mission critical SAN (for example, a test environment), no special consideration needs to be taken. However, this is not true when deploying SAN File System into a production SAN environment. Keep in mind that by introducing SAN File System into a SAN environment, the way you look at the storage devices in the environment changes completely. As you can see in Figure 3-8 on page 97, with SAN File System, you move from the original concept of having one or more particular LUNs exclusively assigned to a single host and introduce a common file space approach instead.96 IBM TotalStorage SAN File System
  • 124. Therefore, your existing SAN configuration will be considerably affected, especially from the zoning and LUN management point of view. Consider initially deploying SAN File System in an isolated environment: do the basic setup, test your configuration, and once you are sure that SAN File System is running smoothly in the isolated environment, start the rollout into the production environment.
Tip: If you do not have the facility to use a stand-alone, isolated SAN environment for the initial SAN File System setup, you can zone out the necessary storage resources in your production environment and use that part of the zoned-out environment for your SAN File System setup.
Another major step in the SAN File System deployment phase is preparation for data migration. We cover this topic in more detail in 3.10, “Data migration” on page 88.
Figure 3-8 SAN File System changes the way we look at the storage in today’s SANs (the figure contrasts SANs today, with a separate file system per host, against the single common file space provided by SAN File System)
3.15 Additional materials
Our aim in this chapter is to give you a basic overview of how to plan for the SAN File System deployment. This area goes well beyond the scope of this redbook. If you need additional information regarding planning and sizing a SAN File System environment, please refer to IBM TotalStorage SAN File System Planning Guide, GA27-4344 for more detailed information.
  • 126. 4 Chapter 4. Pre-installation configuration In this chapter, we discuss how to pre-configure your environment before installing SAN File System. We discuss the following topics: Security considerations Target Machine Validation Tool (TMVT) Back-end storage and zoning considerations SDD on clients and SAN File System MDS RDAC on clients and SAN File System MDS© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 99
  • 127. 4.1 Security considerations
As discussed in 3.5, “Security” on page 72, SAN File System requires administrator authentication and authorization whenever a GUI or CLI command is issued. Authentication means confirming that a valid user ID is being used. Authorization means determining the level of privileges (that is, permitted operations) that the user ID may exercise. The administrator authentication and authorization function for SAN File System can be performed using either the native Linux operating system login process (local authentication) or an LDAP (Lightweight Directory Access Protocol) server. We will discuss both options.
When issuing a SAN File System administrative request, communication occurs to authenticate the supplied user ID and password and to verify that the user ID has the authority to issue that particular request. Each user ID is assigned an LDAP role or is a member of a UNIX group, which gives that user a specific level of access to administrative operations. The available privilege levels are Monitor, Backup, Operator, and Administrator. The IBM TotalStorage SAN File System Administrator’s Guide and Reference, GA27-4317 lists, for each SAN File System CLI command and GUI function, the privilege required to execute it. Table 4-1 lists the SAN File System privilege levels.
Table 4-1 SAN File System user privilege levels
  Monitor (basic level of access): Can obtain basic status information about the cluster, display the message logs, display the rules in a policy, and list information regarding SAN File System elements such as storage pools, volumes, and filesets.
  Backup (Monitor plus backup access): Can perform backup and recovery tasks, plus all operations available to the Monitor role.
  Operator (Backup plus additional access): Can perform day-to-day operations and tasks requiring frequent modifications, plus all operations available to the Backup and Monitor roles.
  Administrator (full access): Has full, unrestricted access to all administrative operations.
After authenticating the user ID, the administrative server interacts with the MDS to process the request. The administrative agent caches all authenticated user roles for 600 seconds. You can clear the cache using the resetadmuser command.
4.1.1 Local authentication configuration
Local authentication uses native Linux services on the MDSs to verify SAN File System CLI/GUI users and their authority to perform administrative operations. Before configuring local authentication, make sure you have read the considerations in “Some points to note when using the local authentication method” on page 73. To use local authentication with SAN File System, define user IDs and groups (corresponding to the required SAN File System privilege levels) as local objects on each MDS by following these steps. Do this task on every MDS.
  • 128. 1. Define the following four groups. These correspond to the four SAN File System command roles:
# groupadd Administrator
# groupadd Operator
# groupadd Backup
# groupadd Monitor
You must use these exact group names and define all of the groups.
2. Decide which IDs you will require to administer SAN File System, and which administrative privilege (group) each should have. At a minimum, you need one ID in the Administrator group, but you can create as many IDs as required, and several IDs can belong to the same group. Define the user IDs and passwords that will log in to the SAN File System CLI or GUI, and associate each user ID with the appropriate group as you define it. In this example, we define an ID itsoadm in the Administrator group, and an ID ITSOMon in the Monitor group:
# useradd -g Administrator itsoadm
# passwd itsoadm (specify a password when prompted)
# useradd -g Monitor ITSOMon
# passwd ITSOMon (specify a password when prompted)
UNIX user IDs, groups, and passwords are case sensitive. We recommend limiting UNIX user IDs to eight characters or fewer.
3. Once all UNIX groups and user IDs/passwords are defined on all MDSs, log in using each user ID to verify the ID/password, and to make sure a /home/userid directory structure exists. Create home directories if required (use the mkdir command). You can also list the contents of the /etc/passwd and /etc/group files to verify that the intended UNIX groups and user IDs were added to the MDSs.
You are now ready to use local authentication in the SAN File System cluster that you will install in Chapter 5, “Installation and basic setup for SAN File System” on page 125. You will specify the -noldap option when installing SAN File System. You will select one local user ID/password combination, which is in the Administrator group, and specify it as the CLI_USER/CLI_PASSWD parameters when installing SAN File System (see step 4 on page 138 in 5.2.6, “Install SAN File System cluster” on page 138).
4.1.2 LDAP and SAN File System considerations
The LDAP server can run on any LDAP-compliant software and operating system, but it is not supported on either an MDS or the Master Console. At the time of writing, tested combinations included:
- IBM Directory Server V5.1 for Windows
- IBM Directory Server V5.1 for Linux
- OpenLDAP for Linux
- Microsoft Active Directory for Windows
Some basic configuration of the LDAP server is required for SAN File System to use it to authenticate SAN File System administrators. For example, the SAN File System requires an authorized LDAP user name that can browse the LDAP tree where the users and roles are stored. The requirements to configure the SAN File System for LDAP include:
- You must be able to create four objects under one parent distinguished name (DN), one for each SAN File System role.
  • 129. - Each role object must contain an attribute that supports multiple DNs.
- You must be able to create an object for each SAN File System administrative user.
- Each administrative user object must contain an attribute that can be used to log in to the SAN File System console or CLI, and a userPassword attribute.
- If you are accessing the LDAP server over Secure Sockets Layer (SSL), a public SSL authorization certificate (key) must be included when the truststore is created during installation.
For our configuration, we used the LDAP configuration shown in Figure 4-1. This configuration is represented in an LDIF file and imported into the LDAP server. We show the LDIF file corresponding to this tree in “Sample LDIF file used” on page 587.
Figure 4-1 LDAP tree (LDAP directory tree for the ITSO example: under o=ITSO are cn=Manager, ou=Roles containing cn=Administrator, cn=Monitor, cn=Backup, and cn=Operator, and ou=Users containing cn=ITSOAdmin Administrator, cn=ITSOMon Monitor, cn=ITSOBack Backup, and cn=ITSOOper Operator)
Users
A User, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of the SAN File System. This is a person who will use the CLI (sfscli) or the SAN File System Console (GUI interface) to administer the SAN File System. You can also use LDAP on your SAN File System clients to authenticate client users, and to coordinate a common user ID/group ID environment. For more detailed information about LDAP, see the IBM Redbook Understanding LDAP: Design and Implementation, SG24-4986.
Roles
SAN File System administrators must each have a certain role, which determines the scope of commands they are allowed to execute. In increasing order of permission, the four roles are Monitor, Backup, Operator, and Administrator. Each of the four roles must have an entry in the LDAP database. The roles are described in Table 4-1 on page 100. At least one user with the Administrator role is required. You can also define users in the other roles as appropriate for your organization.
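As an illustration only, the following sketch shows how one role entry and its occupant user entry from the tree in Figure 4-1 could be added with the standard OpenLDAP ldapadd client. The DNs, objectClasses, and attributes match the ITSO example used in this chapter, while the LDIF file name, the userPassword value, and the use of cn=Manager,o=ITSO as the bind DN are assumptions for this sketch:
Contents of itso-admin.ldif:
dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin
userPassword: password

dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
Load it into the LDAP server (you are prompted for the bind password):
# ldapadd -x -h 9.42.164.125 -D "cn=Manager,o=ITSO" -W -f itso-admin.ldif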
  • 130. All roles must have the parent DN (distinguished name), and all roles must have the sameobjectClass. Examples are given in Appendix A, “Installing IBM Directory Server andconfiguring for SAN File System” on page 565 and Appendix B, “Installing OpenLDAP andconfiguring for SAN File System” on page 589.Next, verify that the LDAP has been set up correctly and that each MDS can “talk” to theLDAP server. This procedure assumes that Linux is already installed with TCP/IP configuredon the MDS, as described in 5.2.2, “Install software on each MDS engine” on page 127. Theldapsearch command is used to send LDAP queries from the MDS to the LDAP server. Starta login session with each MDS (using the default root/password) and enter ldapsearch at theLinux prompt, specifying the IP address of the LDAP server and the parent DN (ITSO in ourcase), as shown in Example 4-1.Example 4-1 Verifying that an MDS can contact the LDAP serverNP28Node1:~ # ldapsearch -h 9.42.164.125 -x -b o=ITSO (objectclass=*)version: 2# filter: (objectclass=*)# requesting: ALL# ITSOdn: o=ITSOobjectClass: organizationo: ITSO# Manager, ITSOdn: cn=Manager,o=ITSOobjectClass: organizationalRolecn: Manager# Users, ITSOdn: ou=Users,o=ITSOobjectClass: organizationalUnitou: Users# ITSOAdmin Administrator, Users, ITSOdn: cn=ITSOAdmin Administrator,ou=Users,o=ITSOobjectClass: inetOrgPersoncn: ITSOAdmin Administratorsn: Administratoruid: ITSOAdmin# ITSOMon Monitor, Users, ITSOdn: cn=ITSOMon Monitor,ou=Users,o=ITSOobjectClass: inetOrgPersoncn: ITSOMon Monitorsn: Monitoruid: ITSOMon# ITSOBack Backup, Users, ITSOdn: cn=ITSOBack Backup,ou=Users,o=ITSOobjectClass: inetOrgPersoncn: ITSOBack Backupsn: Backupuid: ITSOBack# ITSOOper Operator, Users, ITSOdn: cn=ITSOOper Operator,ou=Users,o=ITSOobjectClass: inetOrgPersoncn: ITSOOper Operatorsn: Operatoruid: ITSOOper# Roles, ITSOdn: ou=Roles,o=ITSOobjectClass: organizationalUnitou: Roles# Administrator, Roles, ITSO Chapter 4. Pre-installation configuration 103
  • 131. dn: cn=Administrator,ou=Roles,o=ITSO objectClass: organizationalRole cn: Administrator roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO # Monitor, Roles, ITSO dn: cn=Monitor,ou=Roles,o=ITSO objectClass: organizationalRole cn: Monitor roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO # Backup, Roles, ITSO dn: cn=Backup,ou=Roles,o=ITSO objectClass: organizationalRole cn: Backup roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO # Operator, Roles, ITSO dn: cn=Operator,ou=Roles,o=ITSO objectClass: organizationalRole cn: Operator roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO # search result search: 2 result: 0 Success # numResponses: 13 # numEntries: 12 You should perform ldapsearch on each MDS to ensure that they can all communicate with the LDAP server. To list the parameters that you can use with the ldapsearch command, use the -? option, as shown in Example 4-2. Example 4-2 Using ldapsearch help NP28Node1:~ # ldapsearch -? ldapsearch: invalid option -- ? ldapsearch: unrecognized option -? usage: ldapsearch [options] [filter [attributes...]] where: filter RFC-2254 compliant LDAP search filter attributes whitespace-separated list of attribute descriptions which may include: 1.1 no attributes * all user attributes + all operational attributes Search options: -a deref one of never (default), always, search, or find -A retrieve attribute names only (no values) -b basedn base dn for search -F prefix URL prefix for files (default: "file:///tmp/) -l limit time limit (Max seconds) for search -L print responses in LDIFv1 format -LL print responses in LDIF format without comments -LLL print responses in LDIF format without comments and version -s scope one of base, one, or sub (search scope) -S attr sort the results by attribute `attr -t write binary values to files in temporary directory -tt write all values to files in temporary directory -T path write files to directory specified by path (default: /tmp) -u include User Friendly entry names in the output -z limit size limit (in entries) for search104 IBM TotalStorage SAN File System
  • 132. Common options: -d level set LDAP debugging level to `level -D binddn bind DN -f file read operations from `file -h host LDAP server -H URI LDAP Uniform Resource Identifier(s) -I use SASL Interactive mode -k use Kerberos authentication -K like -k, but do only step 1 of the Kerberos bind -M enable Manage DSA IT control (-MM to make critical) -n show what would be done but dont actually search -O props SASL security properties -p port port on LDAP server -P version protocol version (default: 3) -Q use SASL Quiet mode -R realm SASL realm -U authcid SASL authentication identity -v run in verbose mode (diagnostics to standard output) -w passwd bind passwd (for simple authentication) -W prompt for bind passwd -x Simple authentication -X authzid SASL authorization identity ("dn:<dn>" or "u:<user>") -Y mech SASL mechanism -Z Start TLS request (-ZZ to require successful response) NP28Node1:~ #4.2 Target Machine Validation Tool (TMVT) As described in 2.5.1, “Metadata server” on page 37, SAN File System requires specific hardware and pre-installed operating system software. In order to validate your SAN File System setup, a validation tool is included with the SAN File System software package. This tool is known as the Target Machine Validation Tool (TMVT) and is intended to verify that your hardware and software prerequisites have been met. In order to run TMVT, you must have already installed the SAN File System metadata software. TMVT is invoked as shown: /usr/tank/server/bin/tmvt -r report_file_name Examine the results in report_file_name, paying particular attention to areas flagged as non-compliant. Resolve those prerequisites, and then rerun the tool until TMVT runs without errors. Example 4-3 shows a partial listing from the TMVT report file. In this case, we had to check and install the RSA firmware. Example 4-3 TMVT report file tank-mds1:~ # /usr/tank/server/bin/tmvt -r /usr/tank/admin/log/TMVT_MDS1_afterinstall -I=9.82.22.175 -U=USERID -P=PASW0RD HSTPV0009E The Hardware Components group fails to comply with the requirements of the recipe. HSTPV0007E Machine: tank-mds1 FAILS TO COMPLY with requirements of SAN File System release 2.2.2.91, build sv22_0001. tank-mds1:~ # cat /usr/tank/admin/log/TMVT_MDS1_afterinstall Hardware Components (14) Item Name Current Recipe Failed Hardware Component Checks (1) Remote Supervisor Adapter 2 MISSING present Chapter 4. Pre-installation configuration 105
  • 133. Passed Hardware Component Checks (13) Available RAM (Megabytes) 4039 4000 Disk space in /var (Megabytes) 16386 4096 TCP/IP enabled enabled Ethernet controller Broadcom Corporation NetX . . Ethernet controller Broadcom Corporation NetX . . Ethernet controller Intel Corp. 82546EB Gigab . . Machine BIOS Level NA . Machine BIOS Build GEE163AUS . Machine Type/Model 41461RX 4146* FC HBA Manufacturer QLogic QLogic FC HBA Model QLA2342 QLA23* FC HBA BIOS/Firmware Version 3.03.06 . FC HBA Driver Version 7.03.00 . Software Components (18) Item Name Current Recipe Correct Software Packages (18) xshared 4.2.0-270 4.2.0-270 perl 5.8.0-201 5.8.0-201 pango 1.0.4-148 1.0.4-148 ncurses 5.2-402 5.2-402 lsb-runtime 1.2-105 1.2-105 libusb 0.1.5-179 0.1.5-179 libstdc++ 3.2.2-54 3.2.2-54 libgcc 3.2.2-54 3.2.2-54 gtk2 2.0.6-154 2.0.6-154 gtk 1.2.10-463 1.2.10-463 glibc 2.2.5-233 2.2.5-233 glib2 2.0.6-47 2.0.6-47 glib 1.2.10-326 1.2.10-326 expect 5.34-192 5.34-192 ethtool 1.7cvs-26 1.7cvs-26 bash 2.05b-50 2.05b-50 atk 1.0.3-66 1.0.3-66 aaa_base 2003.3.27-76 2003.3.27-76 Note: TMVT non-compliance does not strictly prevent the installation of the SAN File System. It identifies deviations from the recommended hardware and software platform.4.3 SAN and zoning considerations Here are some guidelines for preparing your SAN and zoning for use with SAN File System. SAN considerations Set up your switch configuration to maximize the number of physical LUNs addressable by the MDSs and to minimize or preferably eliminate sharing of fabrics with other non-SAN File System users whose usage may be disruptive to the SAN File System. Verify that the storage devices that are used by SAN File System are set up so that the appropriate storage LUNs are available to the SAN File System.106 IBM TotalStorage SAN File System
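One quick, low-level way to confirm which storage devices an MDS engine can actually see before SAN File System is installed is to query the SCSI layer of the SLES 2.4 kernel. This is a generic sketch, not a SAN File System tool, and the counts should match the metadata LUNs and paths you intend to present to the MDSs:
NP28Node1:~ # grep -c "Vendor:" /proc/scsi/scsi (count of SCSI devices reported, one per path per LUN)
NP28Node1:~ # grep "Model:" /proc/scsi/scsi | sort | uniq -c (quick summary of the device models seen)
Once SDD is installed (see 4.4, “Subsystem Device Driver”), the datapath query device command shown later in this chapter gives the consolidated per-LUN view.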
  • 134. Zoning considerations Because of the restriction on the number of LUNs an MDS can access (126 currently), make sure you limit the number of paths created through the fabrics from each metadata server to the storage to two paths, one per host-bus adapter (HBA) port. Some combination of zoning and physical fabric construction may be used to reduce or limit the number of physical paths. Each fabric should consist of one or more switches from the same vendor. Keep in mind that no level of SAN zoning can totally protect SAN File System systems from SAN events caused by other non-SAN File System systems connected to the same fabric. Therefore, your SAN File System fabric should be isolated from traffic and administrative contact with non-SAN File System systems. You can utilize VSANs to accomplish this fabric isolation. When metadata and user storage reside on the same storage subsystem, you must ensure that the metadata storage is fully isolated and protected from access by client systems. With some subsystems, access to various LUNs is determined by connectivity to various ports of the storage subsystems. With these storage subsystems, hard zoning of the attached switches may be sufficient to ensure isolation of the metadata storage from access by client systems. However, with other storage subsystems (such as ESS), LUN access is available from all ports and LUN masking must be used to ensure that only the MDSs can access the metadata LUNs. Important: SAN File System user and metadata LUNs should not share the same ESS 2105 Host Adapter ports. SAN File System clients should be zoned or LUN masked such that each can see user storage only. Specify that the metadata storage or LUNs are to be configured to the Linux mode (if the storage subsystem has operating system-specific operating modes).For more information about planning to implement zoning, see the following manual andredbook: IBM TotalStorage SAN File System Planning Guide, GA27-4344 IBM SAN Survival Guide, SG24-6143The following is an example of a lab setup and is shown in Figure 4-2 on page 108. There aretwo MDS, two xSeries Windows clients and two pSeries AIX clients. Each system (MDS andclient) has two FC HBAs.The port names are: NP28Node1, two ports: MDS1_P1 and MDS1_P2 NP28Node2, two ports: MDS2_P1 and MDS2_P2 SVC: Two nodes, four ports per node: svcn1_p1, svcn1_p2, svcn1_p3, svcn1_p4, svcn2_p1, svcn2_p2, svcn2_p3, and svcn2_p4 AIX1, two ports: AIX1_P1 and AIX1_P2 AIX2, two ports: AIX2_P1and AIX2_P2 WIN2kup, two ports: wink2up_p1 and wink2up_p2 WIN2kdn, two ports: wink2dn_p1 and wink2dn_p2 Chapter 4. Pre-installation configuration 107
  • 135. There are two pairs of switches: the first pair consists of Switch 11 and Switch 31, and the second pair consists of Switch 12 and Switch 32. AIX1 AIX2 Client Client WIN2kup WIN2kdn Client1 Client Switch 31 Switch 32 Switch 11 Switch 12 NP28Node1 NP28Node2 (MDS1) (MDS2) SVC with FAStT Figure 4-2 Example of setup The zoning was implemented as follows: Each client HBA is zoned to one port of each SVC node. Since there are four clients and two HBAs in each client, four client zones have been defined on each switch pair. One MDS zone is defined on the first switch pair, including one port from each MDS and one port from the first SVC node (three ports in total). One MDS zone is defined on the second switch pair, including one port from each MDS and one port from the second SVC node (three ports in total). The switch zoning using the above rules is shown in Example 4-4. For simplicity, the zoning for the SVC to its back-end storage has been omitted. Example 4-4 Using zoneShow First switch pair: cfg: Redbook zone: AIX1_SVC 12,3 [SVCN1_P2] 12,4 [SVCN2_P2] 32,6 [AIX1_P1] zone: AIX2_SVC 12,1 [SVCN1_P4] 12,2 [SVCN2_P4] 32,4 [AIX2_P1] zone: MDS_SVC 32,9 [MDS1_P1]108 IBM TotalStorage SAN File System
  • 136. 32,8 [MDS1_P2] 12,3 [svcn1_p2] zone: win2kdn_SVC 32,14 [win2kdn_p1] 12,1 [SVCN1_P4] 12,2 [SVCN2_P4] zone: win2kup_SVC 12,4 [svcn2_p2] 12,3 [svcn1-p2] 32,13 [win2kup_p1] Second switch pair: cfg: Redbook zone: AIX1_SVC 31,6 [AIX1_p2] 11,3 [svcn1_p1] 11,4 [svcn2_p1] zone: AIX2_SVC 31,4 [AIX2_p2] 11,1 [svcn1_p3] 11,2 [svcn2_p3] zone: MDS_SVC 31,9 [MDS1_P2] 31,8 [MDS2_P2] 11,4 [svcn2_p1] zone: win2kup_SVC 31,13[win2kup_p2] 11,3 [svcn1_p1] 11,4 [svcn2_p1] zone: wink2dn_SVC 11,1 [svcn1_p3] 11,2 [svcn2_p3] 31,14[win2kdn_p2] LUN masking / Storage Partitioning was implemented as follows: One 3.5 GB LUN, mapped to both HBAs in both MDS Nodes to be used for the System Pool. Four LUNs, of size 4.5 GB, 4 GB, 3 GB, and 1 GB, were assigned to all the HBAs Host Clients, to be used for User Pools. The setup described here is simply to show how the fabric and back-end storage is configured and is an example only. There are many other possibilities for doing this. Planning rules and considerations are explained in Chapter 3, “MDS system design, architecture, and planning issues” on page 65.4.4 Subsystem Device Driver The Subsystem Device Driver (SDD) is a pseudo device driver designed to support the multipath configuration environments in the IBM TotalStorage Enterprise Storage Server and the IBM TotalStorage SAN Volume Controller. SDD provides the following functions: Enhanced data availability Dynamic input/output (I/O) load balancing across multiple paths Automatic path fail-over protection Concurrent download of licensed internal code This section describes how to install and verify (SDD) on the MDS, and on the SAN File System client platforms, AIX and Windows. Chapter 4. Pre-installation configuration 109
  • 137. Attention: The examples shown here for installing and configuring SDD may not exactly match the current required version of SDD for SAN File System; however, the instructions are similar. Please refer to the SAN File System support Web site to confirm the required SDD version.4.4.1 Install and verify SDD on Windows 2000 client The following hardware and software components are required to install SDD on a Windows 2000 client. The steps are very similar for a Windows 2003 client. One or more supported storage devices are needed. Supported Host Bus Adapters (HBAs) are necessary. For a complete list of HBAs supported by the back-end storage device, see: http://www.ibm.com/servers/storage/support/config/hba/index.wss Windows 2000 operating system with Service Pack 2 or higher is required for SDD; however, SAN File System requires Service Pack 4. Approximately 1 MB of space is required on the Windows 2000 system drive. ESS devices are configured as IBM 2105xxx (where xxx is the ESS model number), SVC devices are configured as 2145, and DS6000/DS8000 devices are configured as IBM 2107. Install SDD on Windows 2000 client Download the Windows 2000 SDD install package from the following Web site: http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430 1. Run setup.exe from the download directory and accept the defaults during the installation. Tip: If you have previously installed V1.3.1.1 (or earlier) of SDD, you will see an “Upgrade?” prompt. Answer Yes to continue the installation. 2. At the end of the process, you will be prompted to reboot now or later. A reboot is required to complete the installation. 3. After the reboot, the Start menu will include a Subsystem Device Driver entry containing the following selections: – Subsystem Device Driver management – SDD Technical Support Web site – README Verify SDD and the storage devices on Windows 2000 1. Verify that the disks are visible in Device Manager. Since we are using SVC, the disks are listed as 2145 SCSI disk devices, as in Figure 4-3 on page 111. (There are 16, representing the four paths for each of the four disks.)110 IBM TotalStorage SAN File System
  • 138. Figure 4-3 Verify disks are seen as 2145 disk devices2. To verify that SDD can see the devices, use the datapath query device command, as shown in Example 4-5.Example 4-5 Verifying SDD on Windows 2000C:Program FilesIBMSubsystem Device Driver>datapath query deviceTotal Devices : 4DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZEDSERIAL: 600507680185001B2000000000000002============================================================================Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 31 0 2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 28 0 3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZEDSERIAL: 600507680185001B2000000000000003============================================================================Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 30 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 29 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 Chapter 4. Pre-installation configuration 111
  • 139. DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000001 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 24 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 35 0 DEV#: 3 DEVICE NAME: Disk4 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000006 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 24 0 2 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 35 0 3 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0 The actual devices are shown in the DEVICE NAME heading, in this case Disk1, Disk2, Disk3, and Disk4. Note that there are four paths displayed for each disk, as the SVCs have been configured for four paths to each LUN. 3. Finally, check that both FC adapters has been correctly configured to use SDD. Use the datapath query adapter command, as shown in Example 4-6. Example 4-6 Display information about HBAs that is currently configured for SDD C:Program FilesIBMSubsystem Device Driver>datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 Scsi Port2 Bus0 NORMAL ACTIVE 109 0 8 8 1 Scsi Port3 Bus0 NORMAL ACTIVE 127 0 8 8 In this example, the two HBAs have been installed and successfully configured for SDD. You have now successfully installed and verified SDD on a Windows 2000 client.4.4.2 Install and verify SDD on an AIX client Before installing SDD on an AIX client, determine the installation package that is appropriate for your environment. SAN File System is supported (at the time of writing) on AIX 5L Version 5.1, Version 5.2, and Version 5.3. Download the appropriate package for your AIX version from the following Web site: http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430 The prerequisites for installing SDD on AIX are: You must have root access. The following procedures assume that SDD will be used to access all single-path and multipath devices. If installing an older version of SDD, first remove any previously installed, newer version of SDD from your client. Make sure that your HBAs are installed by using lsdev -Cc adapter |grep fc. The output should be similar to Example 4-7 on page 113, which shows two HBAs: fcs0 and fcs1.112 IBM TotalStorage SAN File System
  • 140. Example 4-7 Make sure FC adapter is installedfcs0 Available 20-58 FC Adapterfcs1 Available 20-60 FC Adapter Note: In certain circumstances, when upgrading from a previous version of SDD, you may see the following error message during installation: Error, volume group configuration may not be saved completely. Failure occurred during pre_rm. Failure occurred during rminstal. Finished processing all filesets. (Total time: 16 secs). To correct this, unmount all file systems belonging to SDD volume groups and vary off those volume groups. See the SDD manual and README file for more information.Install SDD on AIX clientWe will use SMIT to install the SDD driver:1. Use smitty install_update and select Install Software. In the INPUT device field, enter the directory where the SDD package was saved. The included packages will be displayed, as in Example 4-8.Example 4-8 Install and update softwareInstall and Update Software by Package Name (includes devices and printers)TylqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqkPrx Select Software to Install x x x x Move cursor to desired item and press F7. Use arrow keys to scroll. x* x ONE OR MORE items can be selected. x+ x Press Enter AFTER making all selections. x x x x devices.sdd.43 ALL x x 1.5.1.0 IBM Subsystem Device Driver for AIX V433 x x x x devices.sdd.51 ALL x x + 1.5.1.0 IBM Subsystem Device Driver for AIX V51 x x [BOTTOM] x x x x F1=Help F2=Refresh F3=Cancel xF1x F7=Select F8=Image F10=Exit xF5x Enter=Do /=Find n=Find Next xF9mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj2. Select the required SDD level depending on the level of AIX that you are running (devices.sdd.51 in our case). The installation will complete.3. If you are using SVC as a front-end to SAN File System user storage, you also need to install the 2145 component for SDD called AIX Attachment Scripts for SVC. This component can be found from the SVC Support site: http://www.ibm.com/servers/storage/support/virtual/2145.html Chapter 4. Pre-installation configuration 113
  • 141. Use smitty install_update, and select “Install Software”. In the INPUT device field, the included packages will be displayed, as in Example 4-9. Example 4-9 Install 2145 component for SDD Install and Update Software by Package Name (includes devices and printers) Type or select a value for the entry field. Press Enter AFTER making all desired changes. lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk x Select Software to Install x * x x+ x Move cursor to desired item and press F7. Use arrow keys to scroll. x x ONE OR MORE items can be selected. x x Press Enter AFTER making all selections. x x x x #--------------------------------------------------------------------- x x # x x # KEY: x x # @ = Already installed x x # x x #--------------------------------------------------------------------- x x x x ibm2145.rte ALL x x 4.3.2002.1111 IBM 2145 TotalStorage SAN Volume Controller x x x x F1=Help F2=Refresh F3=Cancel x F1x F7=Select F8=Image F10=Exit x F5x Enter=Do /=Find n=Find Next x F9mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj 4. Select ibm2145.rte and press Enter to install. 5. Verify that SDD has installed successfully using lslpp -l ‘*sdd*’, as in Example 4-10. Example 4-10 Verify that SDD has been installed root@aix2:/# lslpp -l *sdd* Fileset Level State Description ---------------------------------------------------------------------------- Path: /usr/lib/objrepos devices.sdd.51.rte 1.5.1.0 COMMITTED IBM Subsystem Device Driver for AIX V51 Path: /etc/objrepos devices.sdd.51.rte 1.5.1.0 COMMITTED IBM Subsystem Device Driver for AIX V51 Note: You do not need to reboot the pSeries, even though the installation message indicates this. SDD on your AIX client platform has now been installed, and you are ready to configure SDD. Tip: For AIX 5L Version 5.1 and AIX 5L Version 5.2, the published limitation on one system is 10,000 devices. The combined number of hdisk and vpath devices should not exceed the number of devices that AIX supports. In a multipath environment, because each path to a disk creates an hdisk, the total number of disks being configured can be reduced by the number of paths to each disk.114 IBM TotalStorage SAN File System
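As a small illustration of the hdisk-to-path relationship described in the tip above, the following sketch (using the same lsdev queries shown in the next section) counts the SVC hdisks and the SDD vpath devices once SDD has been configured; dividing the first number by the second gives the number of paths per LUN, four in our example configuration:
root@aix2:/# lsdev -Cc disk | grep -c "SAN Volume Controller" (one hdisk per path per LUN; 16 in our setup)
root@aix2:/# lsdev -Cc disk | grep -c vpath (one vpath per LUN; 4 in our setup)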
  • 142. Configure and verify SDD for the AIX clientBefore you configure SDD, ensure that: The supported storage devices are operational. The supported storage device hdisks are configured correctly on the AIX host system. The supported storage devices are configured. If you configure multiple paths to a supported storage device, ensure that all paths (hdisks) are in the Available state. Otherwise, some SDD devices will lose multipath capability.To configure SDD on AIX:1. Issue the lsdev -Cc disk | grep “2105” command to check the ESS device configuration, or issue the lsdev -Cc disk | grep “SAN Volume Controller” command to check the SVC. In our setup, we are using SVC, and the command output is shown in Example 4-11. We see 16 hdisks, which represent the four paths to each of the four disks.Example 4-11 Check that you can see the SVC volumesroot@aix2:/# lsdev -Cc disk |grep "SAN Volume Controller"hdisk2 Available 10-70-01 SAN Volume Controller Devicehdisk3 Available 10-70-01 SAN Volume Controller Devicehdisk4 Available 10-70-01 SAN Volume Controller Devicehdisk5 Available 10-70-01 SAN Volume Controller Devicehdisk6 Available 10-70-01 SAN Volume Controller Devicehdisk7 Available 10-70-01 SAN Volume Controller Devicehdisk8 Available 10-70-01 SAN Volume Controller Devicehdisk9 Available 10-70-01 SAN Volume Controller Devicehdisk10 Available 20-58-01 SAN Volume Controller Devicehdisk11 Available 20-58-01 SAN Volume Controller Devicehdisk12 Available 20-58-01 SAN Volume Controller Devicehdisk13 Available 20-58-01 SAN Volume Controller Devicehdisk14 Available 20-58-01 SAN Volume Controller Devicehdisk15 Available 20-58-01 SAN Volume Controller Devicehdisk16 Available 20-58-01 SAN Volume Controller Devicehdisk17 Available 20-58-01 SAN Volume Controller Device2. Verify that you can see the vpaths using lsdev -Cc disk | grep ‘vpath’ (Example 4-12). Here we see the consolidated devices, representing the four actual disks.Example 4-12 Verify that you can see the vpathsroot@aix2:/# lsdev -Cc disk | grep "vpath*"vpath0 Available Data Path Optimizer Pseudo Device Drivervpath1 Available Data Path Optimizer Pseudo Device Drivervpath2 Available Data Path Optimizer Pseudo Device Drivervpath3 Available Data Path Optimizer Pseudo Device Driver Chapter 4. Pre-installation configuration 115
  • 143. In our setup, four user data LUNs have been assigned to the clients. To verify that they have been correctly configured for SDD and correspond to the hdisk listing, use datapath query device (Example 4-13 shows how the command works). Example 4-13 Verify that vpaths correlate to the hdisk root@aix2:/# datapath query device Total Devices : 4 DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000003 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk2 CLOSE NORMAL 0 0 1 fscsi0/hdisk6 CLOSE NORMAL 0 0 2 fscsi1/hdisk10 CLOSE NORMAL 0 0 3 fscsi1/hdisk14 CLOSE NORMAL 0 0 DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000001 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk3 CLOSE NORMAL 0 0 1 fscsi0/hdisk7 CLOSE NORMAL 0 0 2 fscsi1/hdisk11 CLOSE NORMAL 0 0 3 fscsi1/hdisk15 CLOSE NORMAL 0 0 DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000002 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk4 CLOSE NORMAL 0 0 1 fscsi0/hdisk8 CLOSE NORMAL 0 0 2 fscsi1/hdisk12 CLOSE NORMAL 0 0 3 fscsi1/hdisk16 CLOSE NORMAL 0 0 DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000006 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk5 CLOSE NORMAL 0 0 1 fscsi0/hdisk9 CLOSE NORMAL 0 0 2 fscsi1/hdisk13 CLOSE NORMAL 0 0 3 fscsi1/hdisk17 CLOSE NORMAL 0 0 In our setup, we assigned four SVC LUNs to the AIX client, using four paths to each SVC LUN. If your LUNs do not show up as expected, continue to the next steps to configure your disk devices to work with SDD. If the disk devices have been configured correctly, the SDD setup for AIX 5L Version 5.1 has been completed. 3. If you have already created some ESS or SVC volume groups, vary off (deactivate) all active volume groups with ESS or SVC by using the varyoffvg AIX command. Attention: Before you vary off a volume group, unmount all file systems in that volume group. If some supported storage devices (hdisks) are used as physical volumes of an active volume group and if there are file systems of that volume group being mounted, you must unmount all file systems and vary off all active volume groups with supported storage device SDD disks, in order to configure SDD vpath devices correctly.116 IBM TotalStorage SAN File System
  • 144. 4. Using smit devices, highlight Data Path Device and press Enter. The Data Path Device panel is displayed, as shown in Example 4-14. Example 4-14 Data Path Device panel Data Path Devices Move cursor to desired item and press Enter. Display Data Path Device Configuration Display Data Path Device Status Display Data Path Device Adapter Status Define and Configure all Data Path Devices Add Paths to Available Data Path Devices Configure a Defined Data Path Device Remove a Data Path Device 5. Select “Define and Configure All Data Path Devices”. The configuration process begins. When complete, the output should look similar to Example 4-15. Example 4-15 Devices configured COMMAND STATUS Command: OK stdout: yes stderr: no Before command completion, additional instructions may appear below. vpath0 Available Data Path Optimizer Pseudo Device Driver vpath1 Available Data Path Optimizer Pseudo Device Driver vpath2 Available Data Path Optimizer Pseudo Device Driver vpath3 Available Data Path Optimizer Pseudo Device Driver 6. Exit smitty and then verify the SDD configuration, as described in steps 1 through 3 above. 7. Use the varyonvg command to vary on all deactivated supported storage device volume groups. 8. If you want to convert the supported storage device hdisk volume group to SDD vpath devices, you must run the hd2vp utility. SDD provides two conversion scripts, hd2vp and vp2hd. The hd2vp converts a volume group from supported storage device hdisks to SDD vpaths, and vp2hd converts a volume group from SDD vpaths to supported storage device hdisks. Use vp2hd if you want to configure the applications back to their original supported storage device hdisks, or if you want to remove SDD from your AIX client. For more information about these scripts, consult your SDD user guide. You have now successfully configured SDD for AIX 5L Version 5.1.4.4.3 Install and verify SDD on MDS SDD is installed after the operating system has been upgraded and before SAN File System is installed. You can download SDD from the following Web site: http://www.ibm.com/servers/storage/support/virtual/2145.html Note: It is important that you verify the SDD level at the SDD Web site: http://www.ibm.com/servers/storage/support/software/sdd/ Chapter 4. Pre-installation configuration 117
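Before moving on to the MDS, here is a minimal sketch of the volume group conversion described in step 8 of the AIX client procedure above. The volume group name itsovg is an assumption for this example; check the SDD user guide for the exact script names and options in your SDD release:
root@aix2:/# hd2vp itsovg (convert the volume group from supported storage hdisks to SDD vpath devices)
root@aix2:/# lsvg -p itsovg (verify that the volume group physical volumes are now vpath devices)
root@aix2:/# vp2hd itsovg (convert back to hdisk devices, for example, before removing SDD)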
  • 145. Install SDD on MDS 1. Download the code and store it in the /usr/tank/packages/ directory. 2. Install the SDD package with the following command: # rpm -Uvh /media/cdrom/IBMsdd-1.6.0.1-6.i686.ul1.rpm 3. Configure SDD to start during boot: # chkconfig -a sdd 35 4. Start SDD: # sdd start Verify SDD on MDS To verify that the MDS HBAs have been correctly configured for SDD, start a local session with each MDS (using default root/password) and enter datapath query adapter at the Linux prompt. Example 4-16 shows that two HBAs are installed in the MDS and are correctly recognized by SDD. Example 4-16 Display information about HBAs that are currently configured for SDD NP28Node1:~ # datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths Active 0 Host2Channel0 NORMAL ACTIVE 2778 0 5 5 1 Host3Channel0 NORMAL ACTIVE 25 0 5 5 Verify that you can display information about the devices currently assigned to the MDS, using datapath query device, as shown in Example 4-17. We see the correct output: one SVC device is attached to the SCSI path. This will be used for the System Pool. Example 4-17 Display information about devices that are currently configured for SDD mds1:~ # datapath query device DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized SERIAL: 600507680188801b2000000000000000 ============================================================================ Path# Adapter/Hard Disk State Mode