IBM TotalStorage SAN File System (SG24-7057)

Transcript

  • 1. Front cover. IBM TotalStorage SAN File System. New! Updated for Version 2.2.2 of SAN File System. Heterogeneous file sharing. Policy-based file lifecycle management. Charlotte Brooks, Huang Dachuan, Derek Jackson, Matthew A. Miller, Massimo Rosichini. ibm.com/redbooks
  • 2. International Technical Support Organization. IBM TotalStorage SAN File System, January 2006, SG24-7057-03
  • 3. Note: Before using this information and the product it supports, read the information in “Notices” on page xix. Fourth Edition (January 2006). This edition applies to Version 2, Release 2, Modification 2 of IBM TotalStorage SAN File System (product number 5765-FS2) on the day of announcement in October of 2005. Please note that pre-release code was used for the screen captures and command output; some minor details may vary from the generally available product. © Copyright International Business Machines Corporation 2003, 2004, 2006. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 4. Contents
    Figures
    Notices
    Trademarks
    Preface
    The team that wrote this redbook
    Become a published author
    Comments welcome
    Summary of changes
    December 2004, Third Edition
    January 2006, Fourth Edition
    Part 1. Introduction to IBM TotalStorage SAN File System
    Chapter 1. Introduction
    1.1 Introduction: Growth of SANs
    1.2 Storage networking technology: Industry trends
    1.2.1 Standards organizations and standards
    1.2.2 Storage Networking Industry Association
    1.2.3 The IBM approach
    1.3 Rise of storage virtualization
    1.3.1 What is virtualization?
    1.3.2 Types of storage virtualization
    1.3.3 Storage virtualization models
    1.4 SAN data sharing issues
    1.5 IBM TotalStorage Open Software Family
    1.5.1 IBM TotalStorage SAN Volume Controller
    1.5.2 IBM TotalStorage SAN File System
    1.5.3 Comparison of SAN Volume Controller and SAN File System
    1.5.4 IBM TotalStorage Productivity Center
    1.5.5 TotalStorage Productivity Center for Fabric
    1.5.6 TotalStorage Productivity Center for Data
    1.5.7 TotalStorage Productivity Center for Disk
    1.5.8 TotalStorage Productivity Center for Replication
    1.6 File system general terminology
    1.6.1 What is a file system?
    1.6.2 File system types
    1.6.3 Selecting a file system
    1.7 Filesets and the global namespace
    1.8 Value statement of IBM TotalStorage SAN File System
    Chapter 2. SAN File System overview
    2.1 SAN File System product overview
    2.2 SAN File System V2.2 enhancements overview
    2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview
    2.4 SAN File System architecture
    2.5 SAN File System hardware and software prerequisites
  • 5. 2.5.1 Metadata server
    2.5.2 Master Console hardware and software
    2.5.3 SAN File System software
    2.5.4 Supported storage for SAN File System
    2.5.5 SAN File System engines
    2.5.6 Master Console
    2.5.7 Global namespace
    2.5.8 Filesets
    2.5.9 Storage pools
    2.5.10 Policy based storage and data management
    2.5.11 Clients
    2.5.12 FlashCopy
    2.5.13 Reliability and availability
    2.5.14 Summary of major features
    Part 2. Planning, installing, and upgrading
    Chapter 3. MDS system design, architecture, and planning issues
    3.1 Site infrastructure
    3.2 Fabric needs and storage partitioning
    3.3 SAN File System volume visibility
    3.3.1 Uniform SAN File System configuration
    3.3.2 Non-uniform SAN File System configuration
    3.4 Network infrastructure
    3.5 Security
    3.5.1 Local authentication
    3.5.2 LDAP
    3.6 File sharing
    3.6.1 Advanced heterogeneous file sharing
    3.6.2 File sharing with Samba
    3.7 Planning the SAN File System configuration
    3.7.1 Storage pools and filesets
    3.7.2 File placement policies
    3.7.3 FlashCopy considerations
    3.8 Planning for high availability
    3.8.1 Cluster availability
    3.8.2 Autorestart service
    3.8.3 MDS fencing
    3.8.4 Fileset and workload distribution
    3.8.5 Network planning
    3.8.6 SAN planning
    3.9 Client needs and application support
    3.9.1 Client needs
    3.9.2 Privileged clients
    3.9.3 Client application support
    3.9.4 Clustering support
    3.9.5 Linux for zSeries
    3.10 Data migration
    3.10.1 Offline data migration
    3.10.2 Online data migration
    3.11 Implementation services for SAN File System
    3.12 SAN File System sizing guide
    3.12.1 Assumptions
  • 6. 3.12.2 IP network sizing
    3.12.3 Storage sizing
    3.12.4 SAN File System sizing
    3.13 Planning worksheets
    3.14 Deploying SAN File System into an existing SAN
    3.15 Additional materials
    Chapter 4. Pre-installation configuration
    4.1 Security considerations
    4.1.1 Local authentication configuration
    4.1.2 LDAP and SAN File System considerations
    4.2 Target Machine Validation Tool (TMVT)
    4.3 SAN and zoning considerations
    4.4 Subsystem Device Driver
    4.4.1 Install and verify SDD on Windows 2000 client
    4.4.2 Install and verify SDD on an AIX client
    4.4.3 Install and verify SDD on MDS
    4.5 Redundant Disk Array Controller (RDAC)
    4.5.1 RDAC on Windows 2000 client
    4.5.2 RDAC on AIX client
    4.5.3 RDAC on MDS and Linux client
    Chapter 5. Installation and basic setup for SAN File System
    5.1 Installation process overview
    5.2 SAN File System MDS installation
    5.2.1 Pre-installation setting and configurations on each MDS
    5.2.2 Install software on each MDS engine
    5.2.3 SUSE Linux 8 installation
    5.2.4 Upgrade MDS BIOS and RSA II firmware
    5.2.5 Install prerequisite software on the MDS
    5.2.6 Install SAN File System cluster
    5.2.7 SAN File System cluster configuration
    5.3 SAN File System clients
    5.3.1 SAN File System Windows 2000/2003 client
    5.3.2 SAN File System Linux client installation
    5.3.3 SAN File System Solaris installation
    5.3.4 SAN File System AIX client installation
    5.3.5 SAN File System zSeries Linux client installation
    5.4 UNIX device candidate list
    5.5 Local administrator authentication option
    5.6 Installing the Master Console
    5.6.1 Prerequisites
    5.6.2 Installing Master Console software
    5.7 SAN File System MDS remote access setup (PuTTY / ssh)
    5.7.1 Secure shell overview
    Chapter 6. Upgrading SAN File System to Version 2.2.2
    6.1 Introduction
    6.2 Preparing to upgrade the cluster
    6.3 Upgrade each MDS
    6.3.1 Stop SAN File System processes on the MDS
    6.3.2 Upgrade MDS BIOS and RSA II firmware
    6.3.3 Upgrade the disk subsystem software
    6.3.4 Upgrade the Linux operating system
  • 7. 6.3.5 Upgrade the MDS software
    6.4 Special case: upgrading the master MDS
    6.5 Commit the cluster upgrade
    6.6 Upgrading the SAN File System clients
    6.6.1 Upgrade SAN File System AIX clients
    6.6.2 Upgrade Solaris/Linux clients
    6.6.3 Upgrade SAN File System Windows clients
    6.7 Switching from LDAP to local authentication
    Part 3. Configuration, operation, maintenance, and problem determination
    Chapter 7. Basic operations and configuration
    7.1 Administrative interfaces to SAN File System
    7.1.1 Accessing the CLI
    7.1.2 Accessing the GUI
    7.2 Basic navigation and verifying the cluster setup
    7.2.1 Verify servers
    7.2.2 Verify system volume
    7.2.3 Verify pools
    7.2.4 Verify LUNs
    7.2.5 Verify administrators
    7.2.6 Basic commands using CLI
    7.3 Adding and removing volumes
    7.3.1 Adding a new volume to SAN File System
    7.3.2 Changing volume settings
    7.3.3 Removing a volume
    7.4 Storage pools
    7.4.1 Creating a storage pool
    7.4.2 Adding a volume to a user storage pool
    7.4.3 Adding a volume to the System Pool
    7.4.4 Changing a storage pool
    7.4.5 Removing a storage pool
    7.4.6 Expanding a user storage pool volume
    7.4.7 Expanding a volume in the system storage pool
    7.5 Filesets
    7.5.1 Relationship of filesets to storage pools
    7.5.2 Nested filesets
    7.5.3 Creating filesets
    7.5.4 Moving filesets
    7.5.5 Changing fileset characteristics
    7.5.6 Additional fileset commands
    7.5.7 NLS support with filesets
    7.6 Client operations
    7.6.1 Fileset permissions
    7.6.2 Privileged clients
    7.6.3 Take ownership of filesets
    7.7 Non-uniform SAN File System configurations
    7.7.1 Display a list of clients with access to particular volume or LUN
    7.7.2 List fileset to storage pool relationship
    7.8 File placement policy
    7.8.1 Policies and rules
    7.8.2 Rules syntax
    7.8.3 Create a policy and rules with CLI
  • 8. 7.8.4 Creating a policy and rules with GUI
    7.8.5 More examples of policy rules
    7.8.6 NLS support with policies
    7.8.7 File storage preallocation
    7.8.8 Policy management considerations
    7.8.9 Best practices for managing policies
    Chapter 8. File sharing
    8.1 File sharing overview
    8.2 Basic heterogeneous file sharing
    8.2.1 Implementation: Basic heterogeneous file sharing
    8.3 Advanced heterogeneous file sharing
    8.3.1 Software components
    8.3.2 Administrative commands
    8.3.3 Configuration overview
    8.3.4 Directory server configuration
    8.3.5 MDS configuration
    8.3.6 Implementation of advanced heterogeneous file sharing
    Chapter 9. Advanced operations
    9.1 SAN File System FlashCopy
    9.1.1 How FlashCopy works
    9.1.2 Creating, managing, and using the FlashCopy images
    9.2 Data migration
    9.2.1 Planning migration with the migratedata command
    9.2.2 Perform migration
    9.2.3 Post-migration steps
    9.3 Adding and removing Metadata servers
    9.3.1 Adding a new MDS
    9.3.2 Removing an MDS
    9.3.3 Adding an MDS after previous removal
    9.4 Monitoring and gathering performance statistics
    9.4.1 Gathering and analyzing performance statistics
    9.5 MDS automated failover
    9.5.1 Failure detection
    9.5.2 Fileset redistribution
    9.5.3 Master MDS failover
    9.5.4 Failover monitoring
    9.5.5 General recommendations for minimizing recovery time
    9.6 How SAN File System clients access data
    9.7 Non-uniform configuration client validation
    9.7.1 Client validation sample script details
    9.7.2 Using the client validation sample script
    Chapter 10. File movement and lifecycle management
    10.1 Manually move and defragment files
    10.1.1 Move a single file using the mvfile command
    10.1.2 Move multiple files using the mvfile command
    10.1.3 Defragmenting files using the mvfile command
    10.2 Lifecycle management with file management policy
    10.2.1 File management policy syntax
    10.2.2 Creating a file management policy
    10.2.3 Executing the file management policy
    10.2.4 Lifecycle management recommendations and considerations
  • 9. Chapter 11. Clustering the SAN File System Microsoft Windows client . . . . . . . . . . 447 11.1 Configuration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448 11.2 Cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449 11.2.1 MSCS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449 11.2.2 SAN File System configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450 11.3 Installing the SAN File System MSCS Enablement package . . . . . . . . . . . . . . . . . . 455 11.4 Configuring SAN File System for MSCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458 11.4.1 Creating additional cluster groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468 11.5 Setting up cluster-managed CIFS share . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468 Chapter 12. Protecting the SAN File System environment . . . . . . . . . . . . . . . . . . . . . 477 12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478 12.1.1 Types of backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478 12.2 Disaster recovery: backup and restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479 12.2.1 LUN-based backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479 12.2.2 Setting up a LUN-based backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480 12.2.3 Restore from a LUN based backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482 12.3 Backing up and restoring system metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 12.3.1 Backing up system metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 12.3.2 Restoring the system metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 12.4 File recovery using SAN File System FlashCopy function . . . . . . . . . . . . . . . . . . . . 493 12.4.1 Creating FlashCopy image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494 12.4.2 Reverting FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498 12.5 Back up and restore using IBM Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . 502 12.5.1 Benefits of Tivoli Storage Manager with SAN File System . . . . . . . . . . . . . . . . 502 12.6 Backup/restore scenarios with Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . 503 12.6.1 Back up Windows data using Tivoli Storage Manager Windows client. . . . . . . 504 12.6.2 Back up user data in UNIX filesets with TSM client for AIX . . . . . . . . . . . . . . . 507 12.6.3 Backing up FlashCopy images with the snapshotroot option . . . . . . . . . . . . . . 510 Chapter 13. Problem determination and troubleshooting . . . . . . . . . . . . . . . . . . . . . . 519 13.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 13.2 Remote access support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 13.3 Logging and tracing. . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . 521 13.3.1 SAN File System Message convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522 13.3.2 Metadata server logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525 13.3.3 Administrative and security logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 13.3.4 Consolidated server message logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 13.3.5 Client logs and traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 13.4 SAN File System data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534 13.5 Remote Supervisor Adapter II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 13.5.1 Validating the RSA configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 13.5.2 RSA II management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538 13.6 Simple Network Management Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 13.6.1 SNMP and SAN File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 13.7 Hints and tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546 13.8 SAN File System Message conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547Part 4. Exploiting the SAN File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 Chapter 14. DB2 with SAN File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553 14.1 Introduction to DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554 14.2 Policy placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554 14.2.1 SMS tablespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554viii IBM TotalStorage SAN File System
  • 10. 14.2.2 DMS tablespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555 14.2.3 Other data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556 14.2.4 Sample SAN File System policy rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556 14.3 Storage management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557 14.4 Load balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557 14.5 Direct I/O support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 14.6 High availability clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 14.7 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 14.8 Database path considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560Part 5. Appendixes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563 Appendix A. Installing IBM Directory Server and configuring for SAN File System 565 Installing IBM Tivoli Directory Server V5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 Creating the LDAP database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 Configuring IBM Directory Server for SAN File System . . . . . . . . . . . . . . . . . . . . . . . . . . . 574 Starting the LDAP Server and configuring Admin Server . . . . . . . . . . . . . . . . . . . . . . . . . 577 Verifying LDAP entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585 Sample LDIF file used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587 Appendix B. Installing OpenLDAP and configuring for SAN File System . . . . . . . . . 589 Introduction to OpenLDAP 2.0.x on Red Hat Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590 Installation of OpenLDAP packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590 Configuration of OpenLDAP client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591 Configuration of OpenLDAP server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592 Configure OpenLDAP for SAN File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594 Appendix C. Client configuration validation script . . . . . . . . . . . . . . . . . . . . . . . . . . . 597 Sample script listing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598 Appendix D. Additional material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603 Locating the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603 Using the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603 System requirements for downloading the Web material . . . . . . . . . . . . . . . . . . . . . . . 603 How to use the Web material . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604 Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608 How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613 Contents ix
Figures

1-1 SAN Management standards bodies . . . 6
1-2 CIMOM proxy model . . . 8
1-3 SNIA storage model . . . 9
1-4 Intelligence moving to the network . . . 12
1-5 In-band and out-of-band models . . . 13
1-6 Block level virtualization . . . 15
1-7 IBM TotalStorage SAN Volume Controller . . . 16
1-8 File level virtualization . . . 17
1-9 IBM TotalStorage SAN File System architecture . . . 18
1-10 Summary of SAN Volume Controller and SAN File System benefits . . . 19
1-11 TPC for Fabric . . . 21
1-12 TPC for Data . . . 22
1-13 TPC for Disk functions . . . 24
1-14 TPC for Replication . . . 25
1-15 Windows system hierarchical view . . . 26
1-16 Windows file system security and permissions . . . 27
1-17 File system types . . . 28
1-18 Global namespace . . . 30
2-1 SAN File System architecture . . . 36
2-2 SAN File System administrative structure . . . 43
2-3 SAN File System GUI browser interface . . . 44
2-4 Global namespace . . . 47
2-5 Filesets and nested filesets . . . 48
2-6 SAN File System storage pools . . . 49
2-7 File placement policy execution . . . 50
2-8 Windows 2000 client view of SAN File System . . . 55
2-9 Exploring the SAN File System from a Windows 2000 client . . . 55
2-10 FlashCopy images . . . 59
3-1 Mapping of Metadata and User data to MDS and clients . . . 68
3-2 Illustrating network setup . . . 72
3-3 Data classification example . . . 79
3-4 SAN File System design . . . 84
3-5 SAN File System data migration process . . . 89
3-6 SAN File System data flow . . . 93
3-7 Typical data and metadata flow for a generic application with SAN File System . . . 94
3-8 SAN File System changes the way we look at the Storage in today's SANs . . . 97
4-1 LDAP tree . . . 102
4-2 Example of setup . . . 108
4-3 Verify disks are seen as 2145 disk devices . . . 111
5-1 SAN File System Console GUI sign-on window . . . 147
5-2 Select language for installation . . . 150
5-3 SAN File System Windows 2000 Client Welcome window . . . 150
5-4 Security Warning . . . 151
5-5 Configuration parameters . . . 152
5-6 Review installation settings . . . 153
5-7 Security alert warning . . . 153
5-8 Driver IBM SANFS Cluster Bus Enumerator . . . 154
5-9 Driver IBM SAN Volume Manager . . . 154
5-10 Start SAN File System client immediately . . . 155
5-11 Windows client explorer . . . 155
5-12 Windows 2000 client SAN File System drivers . . . 156
5-13 Windows 2003 client SAN File System drivers . . . 156
5-14 SAN File System helper service . . . 157
5-15 Launch MMC . . . 158
5-16 Add the Snap-in for SAN File System . . . 158
5-17 Add Snap-in . . . 159
5-18 Add the IBM TotalStorage System Snap-in . . . 159
5-19 Add/Remove Snap-in . . . 160
5-20 Save MMC console . . . 160
5-21 Save MMC console to the Windows desktop . . . 161
5-22 IBM TotalStorage File System Snap-in Properties . . . 161
5-23 DisableShortNames . . . 162
5-24 Verify value for DisableShortNames . . . 162
5-25 Trace Properties . . . 163
5-26 Volume Properties . . . 163
5-27 Modify Volume Properties . . . 164
5-28 J2RE Setup Type . . . 188
5-29 J2RE verify the install . . . 189
5-30 SNMP Service Window . . . 190
5-31 SNMP Service Properties . . . 191
5-32 Verifying SNMP and SNMP Trap Service . . . 192
5-33 Master Console installation wizard initial window . . . 194
5-34 Set user account privileges . . . 194
5-35 Adobe Installer Window . . . 195
5-36 Master Console installation wizard information . . . 196
5-37 Select optional products to install . . . 197
5-38 Viewing the Products List . . . 198
5-39 PuTTY installation complete . . . 199
5-40 DB2 Setup wizard . . . 200
5-41 DB2 select installation type . . . 201
5-42 DB2 select installation action . . . 202
5-43 DB2 Username and Password menu . . . 203
5-44 DB2 administration contact . . . 204
5-45 DB2 instance . . . 205
5-46 DB2 tools catalog . . . 206
5-47 DB2 administration contact . . . 207
5-48 DB2 confirm installation settings . . . 208
5-49 DB2 confirm installation settings . . . 209
5-50 Verify DB2 install . . . 210
5-51 Verify SVC console install . . . 211
5-52 Select database repository . . . 212
5-53 Specify single DB2 user ID . . . 212
5-54 Enter DB2 user ID . . . 213
5-55 Set trapdSharePort162 . . . 214
5-56 Define trapdTrapReceptionPort . . . 215
5-57 Enter TSANM Manager name and port . . . 216
5-58 IBM Director Installation Directory window . . . 217
5-59 IBM Director Service Account Information . . . 217
5-60 IBM Director network drivers . . . 218
5-61 IBM Director database configuration . . . 218
5-62 IBM Director superuser . . . 220
5-63 Disk Management . . . 222
5-64 Upgrade to dynamic disk . . . 223
5-65 Verify both disks are set to type Dynamic . . . 223
5-66 Add Mirror . . . 224
5-67 Select mirrored disk . . . 225
5-68 Mirroring process . . . 225
5-69 Mirror Process completed . . . 226
5-70 Setting Folder Options . . . 226
6-1 SAN File System console . . . 245
7-1 Create PuTTY ssh session . . . 253
7-2 SAN File System GUI login window . . . 256
7-3 GUI welcome window . . . 257
7-4 Information Center . . . 258
7-5 Basic SAN File System configuration . . . 264
7-6 Select expand vdisk . . . 279
7-7 vdisk expansion window . . . 280
7-8 Data LUN display . . . 281
7-9 Disk before expansion . . . 283
7-10 Disk after expansion . . . 284
7-11 Relationship of fileset to storage pool . . . 288
7-12 Filesets from the MDS and client perspective . . . 289
7-13 Nested filesets . . . 289
7-14 Nested filesets . . . 292
7-15 Windows Explorer shows cluster name sanfs as the drive label . . . 293
7-16 List nested filesets . . . 294
7-17 MBCS characters in fileset attachment directory . . . 296
7-18 Select properties of fileset . . . 301
7-19 ACL for the fileset . . . 302
7-20 Verify change of ownership . . . 302
7-21 Windows security tab . . . 303
7-22 Policy rules based file placement . . . 306
7-23 Policies in SAN File System Console (GUI) . . . 312
7-24 Create a New Policy . . . 313
7-25 New Policy: High Level Settings sample input . . . 314
7-26 Add Rules to Policy . . . 315
7-27 New rule created . . . 316
7-28 Edit Rules for Policy . . . 317
7-29 List of defined policies . . . 318
7-30 Activate Policy . . . 318
7-31 Verify Activate Policy . . . 319
7-32 New Policy activated . . . 319
7-33 Delete a Policy . . . 320
7-34 Verify - Delete Policy Window . . . 321
7-35 List Policies . . . 321
7-36 MBCS characters in policy rule . . . 323
7-37 Generated SQL for MBCS characters in policy rule . . . 324
7-38 Select a policy . . . 326
7-39 Rules for selected policy . . . 326
7-40 Edited rule for Preallocation . . . 327
7-41 Activate new policy . . . 327
7-42 Disable default pool with GUI . . . 331
7-43 Display policy statistics . . . 333
8-1 View Windows permissions on newly created fileset . . . 341
8-2 Set permissions for Everyone group . . . 341
8-3 Advanced permissions for Everyone . . . 342
8-4 Set permissions on Administrator group to allow Full control . . . 343
8-5 View Windows permissions on winfiles fileset . . . 343
8-6 View Windows permissions on fileset . . . 345
8-7 Read permission for Everyone group . . . 346
8-8 SAN File System user mapping . . . 347
8-9 Sample configuration for advanced heterogeneous file sharing . . . 350
8-10 Created Active Directory Domain Controller and Domain: sanfsdom.net . . . 351
8-11 User Creation Verification in Active Directory . . . 351
8-12 SAN File System Windows client added to Active Directory domain . . . 352
8-13 Sample heterogeneous file sharing LDAP diagram . . . 352
8-14 Log on as sanfsuser . . . 368
8-15 Contents of svcfileset6 . . . 369
8-16 unixfile.txt permissions . . . 369
8-17 Edit the file in Windows as sanfsuser and save it . . . 370
8-18 Create the file on the Windows client as sanfsuser . . . 371
8-19 Show file contents in Windows as sanfsuser . . . 371
8-20 winfile.txt permissions from Windows . . . 373
9-1 Make FlashCopy . . . 377
9-2 Copy on write . . . 377
9-3 The .flashcopy directory view . . . 379
9-4 Create FlashCopy image GUI . . . 381
9-5 Create FlashCopy wizard . . . 381
9-6 Fileset selection . . . 382
9-7 Set Flashcopy image properties . . . 382
9-8 Verify FlashCopy image properties . . . 383
9-9 FlashCopy image created . . . 383
9-10 List of FlashCopy images using GUI . . . 384
9-11 List of FlashCopy images before and after a revert operation . . . 386
9-12 Select image to revert . . . 387
9-13 Delete Image selection . . . 388
9-14 Delete Image verification . . . 388
9-15 Delete image complete . . . 389
9-16 Data migration to SAN File System: data flow . . . 390
9-17 SAN File System overview . . . 403
9-18 View statistics: client sessions . . . 404
9-19 Statistics: Storage Pools . . . 404
9-20 Console Statistics . . . 405
9-21 Create report . . . 405
9-22 View report . . . 406
9-23 SAN File System failures and actions . . . 414
9-24 List of MDS in the cluster . . . 416
9-25 List of filesets . . . 416
9-26 Metadata server mds3 missing . . . 417
9-27 Filesets list after failover . . . 417
9-28 Metadata server mds3 not started automatically . . . 418
9-29 Failback warning . . . 418
9-30 Graceful stop of the master Metadata server . . . 420
9-31 Metadata server mds2 as new master . . . 420
9-32 Configuring SANFS for SNMP . . . 422
9-33 Selecting the event severity level that will trigger traps . . . 422
9-34 Log into IBM Director Console . . . 423
9-35 Discover SNMP devices . . . 423
9-36 Compile a new MIB . . . 424
9-37 Select the MIB to compile . . . 424
9-38 MIB compilation status windows . . . 425
9-39 Viewing all events in IBM Director . . . 425
9-40 Viewing the test trap in IBM Director . . . 426
9-41 Trap sent when an MDS is shutdown . . . 426
9-42 Example of required client access . . . 430
10-1 Windows-based client accessing homefiles fileset . . . 437
10-2 Verify file sizes in homefiles fileset . . . 443
11-1 MSCS lab setup . . . 448
11-2 Basic cluster resources . . . 449
11-3 Network Interfaces in the cluster . . . 450
11-4 Cluster Resources . . . 450
11-5 SAN File System client view of the global namespace . . . 451
11-6 Fileset directory accessible . . . 454
11-7 Show permissions and ownership . . . 454
11-8 Create a file on the fileset . . . 455
11-9 Choose the installation language . . . 456
11-10 License Agreement . . . 456
11-11 Complete the client information . . . 457
11-12 Choose where to install the enablement software . . . 457
11-13 Confirm the installation parameters . . . 458
11-14 New SANFS resource is created . . . 459
11-15 Create a new cluster group . . . 459
11-16 Name and description for the group . . . 460
11-17 Specify preferred owners for group . . . 460
11-18 Group created successfully . . . 461
11-19 ITSOSFSGroup displays . . . 461
11-20 Create new resource . . . 462
11-21 New resource name and description . . . 462
11-22 Select all nodes as possible owners . . . 463
11-23 Enter resource dependencies . . . 463
11-24 SAN File System resource parameters . . . 464
11-25 Display filesets . . . 464
11-26 Fileset for cluster resource selected . . . 465
11-27 Cluster resource created successfully . . . 465
11-28 New resource in Resource list . . . 466
11-29 Bring group online . . . 466
11-30 Group and resource are online . . . 467
11-31 Resource moves ownership on failures . . . 467
11-32 Resource stays with current owner after rebooting the original owner . . . 468
11-33 Create IP Address resource . . . 469
11-34 IP address resource: General properties . . . 469
11-35 IP address resource: Parameters . . . 470
11-36 Network Name resource: General properties . . . 471
11-37 Network Name resource: Dependencies . . . 471
11-38 Network Name resource: Parameters . . . 472
11-39 File Share resource: General properties . . . 472
11-40 File Share resource: dependencies . . . 473
11-41 File Share resource: parameters . . . 473
11-42 All file share resources online . . . 474
11-43 Designate a drive for the CIFS share . . . 474
11-44 CIFS client access SAN File System via clustered SAN File System client . . . 475
11-45 Copy lots of files onto the share . . . 475
11-46 Drive not accessible . . . 476
12-1 SVC FlashCopy relationships and consistency group . . . 481
12-2 Metadata dump file creation start . . . 485
12-3 Metadata dump file name . . . 486
12-4 DR file creation final step . . . 486
12-5 Delete/remove the metadata dump file . . . 487
12-6 Verify deletion of the metadata dump file . . . 487
12-7 FlashCopy option window GUI . . . 494
12-8 FlashCopy Start GUI window . . . 495
12-9 Select Filesets . . . 495
12-10 Set Properties of FlashCopy images . . . 496
12-11 Verify FlashCopy settings . . . 497
12-12 FlashCopy images created . . . 497
12-13 Windows client view of the FlashCopy images . . . 498
12-14 Client file delete . . . 499
12-15 FlashCopy image revert selection . . . 499
12-16 Image restore / revert verification and restore . . . 500
12-17 Remaining FlashCopy images after revert . . . 501
12-18 Client data restored . . . 501
12-19 Exploitation of SAN File System with Tivoli Storage Manager . . . 502
12-20 User files selection . . . 504
12-21 Restore selective file selection . . . 505
12-22 Select destination of restore file(s) . . . 506
12-23 Restore files selection for FlashCopy image backup . . . 506
12-24 Restore files destination path selection . . . 507
13-1 IBM Connection Manager . . . 520
13-2 Steps for remote access . . . 521
13-3 SAN File System message format . . . 522
13-4 Event viewer on Windows 2000 . . . 531
13-5 ODBC from GUI . . . 534
13-6 Remote Supervisor Adapter II . . . 537
13-7 RSAII interface using Internet Explorer . . . 539
13-8 Accessing remote power using RSAII . . . 540
13-9 Access BIOS log using RSAII . . . 541
13-10 Java Security Warning . . . 542
13-11 RSA II: Remote control buttons . . . 542
13-12 ASM Remote control . . . 543
13-13 SNMP configuration on RSA II . . . 544
14-1 Example storage pool layout for DB2 objects . . . 556
14-2 Workload distribution of filesets for DB2 . . . 558
14-3 Default data caching . . . 559
14-4 Directory structure information . . . 561
A-1 Select location where to install . . . 566
A-2 Language selection . . . 567
A-3 Setup type . . . 567
A-4 Features to install . . . 568
A-5 User ID for DB2 . . . 568
A-6 Installation summary . . . 569
A-7 GSKit pop-up . . . 569
A-8 Installation complete . . . 570
A-9 Configuration tool . . . 570
A-10 User ID pop-up . . . 571
A-11 Enter LDAP database user ID . . . 571
A-12 Enter the name of the database . . . 572
A-13 Select database codepage . . . 572
A-14 Database location . . . 573
A-15 Verify database configuration . . . 573
A-16 Database created . . . 574
A-17 Add organizational attribute . . . 575
A-18 Browse for LDIF . . . 576
A-19 Start the import . . . 577
A-20 IBM Directory Server login . . . 578
A-21 IBM Directory Server Web Administration Tool . . . 579
A-22 Change admin password . . . 580
A-23 Add host . . . 581
A-24 Enter host details . . . 582
A-25 Verify that host has been added . . . 583
A-26 Login to local host name . . . 584
A-27 Admin console . . . 585
A-28 Manage entries . . . 586
A-29 Expand ou=Users . . . 586
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved.
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AFS®, HACMP™, Storage Tank™, AIX 5L™, IBM®, System Storage™, AIX®, NetView®, Tivoli®, DB2 Universal Database™, PowerPC®, TotalStorage®, DB2®, POWER™, WebSphere®, DFS™, POWER5™, xSeries®, Enterprise Storage Server®, pSeries®, z/VM®, Eserver®, Redbooks™, zSeries®, Redbooks (logo)™, FlashCopy®, SecureWay®.

The following terms are trademarks of other companies:

Java, J2SE, Solaris, Sun, Sun Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows NT, Windows, Win32, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

i386, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.
Preface

This IBM Redbook is a detailed technical guide to the IBM TotalStorage® SAN File System. SAN File System is a robust, scalable, and secure network-based file system designed to provide near-local file system performance, file aggregation, and data sharing services in an open environment. SAN File System helps lower the cost of storage management and enhance productivity by providing centralized management, higher storage utilization, and shared access by clients to large amounts of storage.

We describe the design and features of SAN File System, as well as how to plan for, install, upgrade, configure, administer, and protect it. This redbook is for all who want to understand, install, configure, and administer SAN File System. It is assumed that the reader has basic knowledge of storage and SAN technologies.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland.

Figure 1 The team: Dachuan, Massimo, Matthew, Derek, and Charlotte
Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Storage Solutions at the International Technical Support Organization, San Jose Center. She has 14 years of experience with IBM in the fields of IBM TotalStorage hardware and software, IBM Eserver® pSeries® servers, and AIX®. She has written 15 Redbooks™, and has developed and taught IBM classes in all areas of storage and storage management. Before joining the ITSO in 2000, she was the Technical Support Manager for Tivoli® Storage Manager in the Asia Pacific Region.

Huang Dachuan is an Advisory IT Specialist in the Advanced Technical Support team of IBM China in Beijing. He has nine years of experience in networking and storage support. He is CCIE certified, and his expertise includes Storage Area Networks, IBM TotalStorage SAN Volume Controller, SAN File System, ESS, DS6000, DS8000, copy services, and networking products from IBM and Cisco.

Derek Jackson is a Senior IT Specialist working for the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland. He primarily supports SAN File System, IBM TotalStorage Productivity Center, and the ATS lab infrastructure. Derek has worked for IBM for 22 years, and has been employed in the IT field for 30 years. Before joining ATS, Derek worked for IBM's Business Continuity and Recovery Services and was responsible for delivering networking solutions for its clients.

Matthew A. Miller is an IBM Certified IT Specialist and Systems Engineer with IBM in Phoenix, AZ. He has worked extensively with IBM Tivoli Storage Software products as both a field systems engineer and a software sales representative, and currently works with Tivoli Techline. Prior to joining IBM in 2000, Matt worked for 16 years in the client community in both technical and managerial positions.

Massimo Rosichini is an IBM Certified Product Services and Country Specialist in the ITS Technical Support Group in Rome, Italy. He has extensive experience in IT support for TotalStorage solutions in the EMEA South Region. He is an ESS/DS Top Gun Specialist and an IBM Certified Specialist for Enterprise Disk Solutions and Storage Area Network Solutions. He was an author of previous editions of the Redbooks IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services in Open Environments, SG24-5757, and IBM TotalStorage SAN File System, SG24-7057.

Thanks to the following people for their contributions to this project:

The authors of previous editions of this redbook: Jorge Daniel Acuña, Asad Ansari, Chrisilia Davis, Ravi Khattar, Michael Newman, Massimo Rosichini, Leos Stehlik, Satoshi Suzuki, Mats Wahlstrom, Eric Wong

Cathy Warrick and Wade Wallace, International Technical Support Organization, San Jose Center

Todd Bates, Ashish Chaurasia, Steve Correl, Vinh Dang, John George, Jeanne Gordon, Matthew Krill, Joseph Morabito, Doug Rosser, Ajay Srivastava, Jason Young, SAN File System Development, IBM® Beaverton

Rick Taliaferro, Ida Wood, IBM Raleigh

Herb Ahmuty, John Amann, Kevin Cummings, Gonzalo Fuentes, Craig Gordon, Rosemary McCutchen, IBM Gaithersburg

Todd DeSantis, IBM Pittsburgh
Bill Cochran, Ron Henkhaus, IBM Illinois

Drew Davis, IBM Phoenix

Michael Klein, IBM Germany

John Bynum, IBM San Jose

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, or clients. Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

Use the online Contact us review redbook form found at: ibm.com/redbooks

Send your comments in an e-mail to: redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. QXXE, Building 80-E2, 650 Harry Road, San Jose, California 95120-6099
Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-7057-03 for IBM TotalStorage SAN File System as created or updated on January 27, 2006.

December 2004, Third Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information:
- Advanced heterogeneous file sharing
- File movement and lifecycle management
- File sharing with Samba

Changed information:
- Client support

January 2006, Fourth Edition

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information:
- New centralized installation procedure
- Preallocation policy for large files
- Local authentication option
- Microsoft clustering support

Changed information:
- New MDS server and client platform (including zSeries® support)
- New RSA connectivity and high availability details
Part 1. Introduction to IBM TotalStorage SAN File System

In this part of the book, we introduce general industry and client issues that have prompted the development of the IBM TotalStorage SAN File System, and then present an overview of the product itself.
Chapter 1. Introduction

In this chapter, we provide background information for SAN File System, including these topics:
- Growth in SANs and current challenges
- Storage networking technology: industry trends
- Rise of storage virtualization and growth of SAN data
- Data sharing with SANs: issues
- IBM TotalStorage products overview
- Introduction to file systems and key concepts
- Value statement for SAN File System
1.1 Introduction: Growth of SANs

Storage Area Networks (SANs) have gained wide acceptance. Interoperability issues between components from different vendors connected by a SAN fabric have received attention and have generally been resolved, but the problem of managing the data stored on a variety of devices from different vendors is still a major challenge to the industry.

The volume of data storage required in daily life and business has exploded. Specific figures vary, but it is indisputable that capacity is growing and hardware costs are decreasing, while availability requirements approach 100%. Three hundred million Internet users are driving two petabytes of data traffic per month. Users are mobile, access patterns are unpredictable, and data content is increasingly interactive.

Clients deploying SANs today face many issues as they build or grow their storage infrastructures. Although the cost of purchasing storage hardware continues its rapid decline, the cost of managing storage is not keeping pace. In some cases, storage management costs are actually rising. Recent studies show that the purchase price of storage hardware comprises as little as 5 to 10 percent of the total cost of storage. The factors that make up the total cost of ownership include:
- Administration costs
- Downtime
- Environmental overhead
- Device management tasks
- Backup and recovery procedures
- Shortage of skilled storage administrators
- Heterogeneous server and storage installations

Information technology managers are under significant pressure to reduce costs while deploying more storage to remain competitive. They must address the increasing complexity of storage systems, the explosive growth in data, and the shortage of skilled storage administrators. Furthermore, the storage infrastructure must be designed to help maximize the availability of critical applications. Storage itself may well be treated as a commodity; the management of it is certainly not. In fact, the cost of managing storage is typically many times its acquisition cost.

1.2 Storage networking technology: Industry trends

In the late 1990s, storage networking emerged in the form of SANs, Network Attached Storage (NAS), and Internet Small Computer System Interface (iSCSI) technologies. These were aimed at reducing the total cost of ownership (TCO) of storage by managing islands of information among heterogeneous environments with disparate operating systems, data formats, and user interfaces in a more efficient way.

SANs enable you to consolidate storage and share resources by enabling storage capacity to be connected to servers at a greater distance. By disconnecting storage resource management from individual hosts, a SAN enables disk storage capacity to be consolidated. The results can be lower overall costs through better utilization of the storage, lower management costs, increased flexibility, and increased control. This can be achieved physically or logically.
Physical consolidation

Data from disparate storage subsystems can be combined onto large, enterprise-class shared disk arrays, which may be located at some distance from the servers. The capacity of these disk arrays can be shared by multiple servers, and users may also benefit from the advanced functions typically offered with such subsystems. These may include RAID capabilities, remote mirroring, and instantaneous data replication functions, which might not be available with smaller, integrated disks. The array capacity may be partitioned, so that each server has an appropriate portion of the available gigabytes.

Available capacity can be dynamically allocated to any server requiring additional space. Capacity not required by a server application can be re-allocated to other servers. This avoids the inefficiency of free disk capacity attached to one server not being usable by other servers. Extra capacity may be added nondisruptively. However, physical consolidation does not mean that all wasted space concerns are addressed.

Logical consolidation

It is possible to achieve shared resource benefits from the SAN without moving existing equipment. A SAN relationship can be established between a client and a group of storage devices that are not physically co-located (excluding devices that are internally attached to servers). A logical view of the combined disk resources may allow available capacity to be allocated and re-allocated between different applications running on distributed servers, to achieve better utilization.

Extending the reach: iSCSI

Although SANs are growing in popularity, there are certain perceived barriers to entry, including higher cost and the complexity of implementation and administration. The iSCSI protocol is intended to address this by bringing some of the performance benefits of a SAN while not requiring the same infrastructure. It achieves this by providing block-based I/O over a TCP/IP network, rather than over Fibre Channel as in a SAN. Today's storage solutions need to embrace emerging technologies at all price points to offer the client the highest freedom of choice.
1.2.1 Standards organizations and standards

Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.

Figure 1-1 SAN Management standards bodies

Key standards for Storage Management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage.
- Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) Specification.

CIM/WBEM management model

CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Distributed Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as storage subsystems, Fibre Channel switches, and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.

CIM/WBEM technology uses a powerful human- and machine-readable language called the Managed Object Format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.
SMI Specification

SNIA has fully adopted and enhanced the CIM standard for Storage Management in its SMI Specification. The SMI Specification was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks.

The idea behind SMIS is to standardize the management interfaces so that management applications can utilize them and provide cross-device management. This means that a newly introduced device can be immediately managed, as it will conform to the standards.

SMIS extends CIM/WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.
- A complete, unified, and rigidly specified object model: SMIS defines "profiles" and "recipes" within the CIM that enable a management client to reliably utilize a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources, such as disk volumes, must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMIS provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMIS-compliant products, when introduced in a SAN environment, automatically announce their presence and capabilities to other constituents.
- Resource locking: SMIS-compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMIS implementation are platform-independent, enabling application development for any platform, and enabling them to run on different platforms. The SNIA will also provide interoperability tests that help vendors verify that their applications and devices conform to the standard.
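To make this management model more concrete, the following illustrative sketch shows how a CIM-based management application might enumerate storage volumes through a CIM agent or object manager. It uses the open-source pywbem client for Python purely as an example; the agent address, credentials, namespace, and printed properties are assumptions for illustration, and real SMIS providers vary in the classes and namespaces they expose.

# Illustrative only: enumerate storage volumes through a CIM agent/CIMOM
# using the open-source pywbem client. The host, credentials, and namespace
# below are hypothetical examples.
import pywbem

# CIM-XML over HTTP is the single management transport selected by SMIS.
conn = pywbem.WBEMConnection(
    "http://cimom.example.com:5988",      # hypothetical CIM agent address
    ("admin", "password"),                # hypothetical credentials
    default_namespace="root/cimv2",       # namespace differs by provider
)

# Enumerate instances of the standard CIM_StorageVolume class and print
# a few commonly populated properties.
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    print(volume.get("DeviceID"), volume.get("ElementName"),
          volume.get("NumberOfBlocks"))

A management client built this way can work with any newly introduced device that implements the same profiles, which is precisely the cross-device management goal described above.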
Integrating existing devices into the CIM model

Because these standards are still evolving, we cannot expect all devices to support the native CIM interface, so SMIS introduces CIM agents and CIM object managers. The agents and object managers bridge proprietary device management to the device management models and protocols used by SMIS. An agent is used for one device, and an object manager for a set of devices. This type of operation is also called a proxy model and is shown in Figure 1-2. The CIM Agent or CIM Object Manager (CIMOM) translates a proprietary management interface to the CIM interface. An example of a CIMOM is the IBM CIM Object Manager for the IBM TotalStorage Enterprise Storage Server®.

Figure 1-2 CIMOM proxy model

In the future, more and more devices will be natively CIM compliant, and will therefore have a built-in agent, as shown in the embedded model in Figure 1-2. When widely adopted, SMIS will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to "push" their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage-networking technology faster and build larger, more powerful networks.
1.2.2 Storage Networking Industry Association

The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well-established storage component vendors as well as emerging storage technology companies. The SNIA mission is "to ensure that storage networks become efficient, complete, and trusted solutions across the IT community" (http://www.snia.org/news/mission/). The SNIA vision is to provide a point of cohesion for developers of storage and networking products, in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market.

The SNIA Shared Storage Model

IBM is an active member of SNIA and fully supports SNIA's goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 1-3.

Figure 1-3 SNIA storage model

IBM is committed to delivering best-of-breed products in all aspects of the SNIA storage model, including:
- Block aggregation
- File/record subsystems
- Storage devices/block subsystems
- Services subsystems

In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions adhere to open industry standards. For more information about SMIS/CIM/WBEM, see the SNIA and DMTF Web sites:

http://www.snia.org
http://www.dmtf.org

Why open standards?

Products that adhere to open standards offer significantly more benefits than proprietary ones. The history of the information technology industry has shown that open systems offer three key benefits:
- Better solutions at a lower price: By harnessing the resources of multiple companies, more development resources are brought to bear on common client requirements, such as ease of management.
- Improved interoperability: Without open standards, every vendor needs to work with every other vendor to develop interfaces for interoperability. The result is a range of very complex products whose interdependencies make them difficult for clients to install, configure, and maintain.
- Client choice: By complying with standards developed jointly, products interoperate seamlessly with each other, preventing vendors from locking clients into their proprietary platform. As client needs and vendor choices change, products that interoperate seamlessly provide clients with more flexibility and improve co-operation among vendors.

More significantly, given the industry-wide focus on business efficiency, the use of fully integrated solutions developed to open industry standards will ultimately drive down the TCO of storage.

1.2.3 The IBM approach

Deploying a storage network requires many choices. Not only are there SANs and NAS to consider, but also other technologies, such as iSCSI. The choice of when to deploy a SAN, or use NAS, continues to be debated. CIOs and IT professionals must plan to ensure that all the components from multiple storage vendors will work together in a virtualization environment to enhance their existing storage infrastructures, or build new infrastructures, while keeping a sharp focus on business efficiency and business continuance.

The IBM approach to solving these pervasive storage needs is to address the entire problem by simplifying deployment, use, disaster recovery, and management of storage resources. From a TCO perspective, the initial purchase price is becoming an increasingly small part of the equation. As the cost per megabyte of disk drives continues to decrease, the client focus is shifting away from hardware towards software value-add functions, storage management software, and services. The importance of a highly reliable, high-performance hardware solution, such as the IBM TotalStorage DS8000, as the guardian of mission-critical data for a business is still a cornerstone concept. However, software is emerging as a critical element of any SAN solution. Management and virtualization software provide advanced functionality for administering distributed IT assets, maintaining high availability, and minimizing downtime.
1.3 Rise of storage virtualization

Storage virtualization techniques are becoming increasingly prevalent in the IT industry today. Storage virtualization forms one of several levels of virtualization in a storage network, and can be described as the abstraction from physical volumes of data storage to a logical level. Storage virtualization addresses the increasing complexity of managing storage, while reducing the associated costs. Its main purpose is the full exploitation of the benefits promised by a SAN. Virtualization enables data sharing, ensures higher availability, provides disaster tolerance, improves performance, allows consolidation of resources, provides policy-based automation, and much more; these benefits do not automatically result from the implementation of today's SAN hardware components.

Storage virtualization is possible on several levels of the storage network components, meaning that it is not limited to the disk subsystem. Virtualization separates the representation of storage to the operating system and its users from the actual physical components. This has been available, and taken for granted, in the mainframe environment for many years (for example, DFSMS from IBM, and IBM's VM operating system with minidisks).

1.3.1 What is virtualization?

Storage virtualization gathers the storage into storage pools, which are independent of the actual layout of the storage (that is, the overall file system structure). Because of this independence, new disk systems can be added to a storage network, and data migrated to them, without causing disruption to applications. Since the storage is no longer controlled by individual servers, it can be used by any server as needed. In addition, it can allow capacity to be added or removed on demand without affecting the application servers. Storage virtualization simplifies storage management, which has been an escalating expense in the traditional SAN environment.

1.3.2 Types of storage virtualization

Virtualization can be implemented at the following levels:
- Server level
- Storage level
- Fabric level
The IBM strategy is to move the intelligence out of the server, eliminating the dependency on having to implement specialized software at the server level. Removing it at the storage level decreases the dependency on implementing RAID subsystems, and alternative disks can be utilized. By implementing at the fabric level, storage control is moved into the network, which makes virtualization available to all, and at the same time reduces complexity by providing a single view of storage. The storage network can be used to leverage all kinds of services across multiple storage devices, including virtualization. A high-level view of this is shown in Figure 1-4.

Figure 1-4 Intelligence moving to the network

The effective management of resources from the data center across the network increases productivity and lowers TCO. In Figure 1-4, you can see how IBM accomplishes this effective management by moving the intelligence from the storage subsystems into the storage network using the SAN Volume Controller, and moving the intelligence of the file system into the storage network using SAN File System. The IBM storage management software, represented in Figure 1-4 as hardware element management and Tivoli Storage Management (a suite of SAN and storage products), addresses administrative costs, downtime, backup and recovery, and hardware management.

The SNIA model (see Figure 1-3 on page 9) distinguishes between aggregation at the block and file level.

Block aggregation or block level virtualization

The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices, such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy.
Block aggregation or block level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Space management, through combining or slicing-and-dicing native storage into new, aggregated block storage
- Striping, through spreading the aggregated block storage across several native storage devices
- Redundancy, through point-in-time copy and both local and remote mirroring

File aggregation or file level virtualization

The file/record layer in the SNIA model is responsible for packing items, such as files and databases, into larger entities, such as block level volumes and storage devices. File aggregation or file level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes. They can:
- Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers

1.3.3 Storage virtualization models

Storage virtualization can be broadly classified into two models:
- In-band virtualization, also referred to as symmetric virtualization
- Out-of-band virtualization, also referred to as asymmetric virtualization

Figure 1-5 shows the two storage virtualization models.

Figure 1-5 In-band and out-of-band models
In-band

In an in-band storage virtualization implementation, both data and control information flow over the same path. The IBM TotalStorage SAN Volume Controller (SVC) engine is an in-band implementation, which does not require any special software in the servers and provides caching in the network, allowing support of cheaper disk systems. See the redbook IBM TotalStorage SAN Volume Controller, SG24-6423 for further information.

Out-of-band

In an out-of-band storage virtualization implementation, the data flow is separated from the control flow. This is achieved by storing data and metadata (data about the data) in different places. This involves moving all mapping and locking tables to a separate server (the Metadata server) that contains the metadata of the files. IBM TotalStorage SAN File System is an out-of-band implementation.

In an out-of-band solution, the servers (which are clients of the Metadata server) request authorization to data from the Metadata server, which grants it, handles locking, and so on. The servers can then access the data directly without further Metadata server intervention. Separating the flow of control and data in this manner allows the data I/O to use the full bandwidth that a SAN provides, while control I/O goes over a separate network, such as TCP/IP. For many operations, the metadata controller does not even intervene: once a client has obtained access to a file, all I/O goes directly over the SAN to the storage devices.

Metadata is often referred to as data about the data; it describes the characteristics of stored user data. A Metadata server, in the SAN File System, is a server that offloads the metadata processing from the data-storage environment to improve SAN performance. An instance of the SAN File System runs on each engine, and together the Metadata servers form a cluster.

1.4 SAN data sharing issues

The term "data sharing" is used somewhat loosely by users and some vendors. It is sometimes interpreted to mean the replication of files or databases to enable two or more users, or applications, to concurrently use separate copies of the data. The applications concerned may operate on different host platforms. Data sharing may also be used to describe multiple users accessing a single copy of a file. This could be called "true data sharing". In a homogeneous server environment, with appropriate application software controls, multiple servers may access a single copy of data stored on a consolidated storage subsystem. If the attached servers are heterogeneous platforms (for example, a mix of UNIX® and Windows®), sharing of data between such unlike operating system environments is complex. This is due to differences in file systems, access controls, data formats, and encoding structures.

1.5 IBM TotalStorage Open Software Family

Storage and network administrators face tough challenges today. Demand for storage continues to grow, and enterprises require increasingly resilient storage infrastructures to support their on demand business needs. Compliance with legal, governmental, and other industry-specific regulations is driving new data retention requirements. The IBM TotalStorage Open Software Family is a comprehensive, flexible storage software solution that can help enterprises address these storage management challenges today. As a first step, IBM offers infrastructure components that adhere to industry-standard open interfaces for registering with management software and communicating connection and configuration information.
As the second step, IBM offers automated management software components that integrate with these interfaces to collect, organize, and present information about the storage environment. The IBM TotalStorage Open Software Family includes the IBM TotalStorage SAN Volume Controller, IBM TotalStorage SAN File System, and the IBM TotalStorage Productivity Center.

1.5.1 IBM TotalStorage SAN Volume Controller

The IBM TotalStorage SAN Volume Controller (SVC) is an in-band, block-based virtualization product that minimizes the dependency on unique hardware and software, decoupling the storage functions expected in a SAN environment from the storage subsystems and managing storage resources. In a typical non-virtualized SAN, shown on the left of Figure 1-6, servers are mapped to specific devices, and the LUNs defined within the storage subsystem are directly presented to the host or hosts. With the SAN Volume Controller, servers are mapped to virtual disks, thus creating a virtualization layer.

Figure 1-6 Block level virtualization
The IBM TotalStorage SAN Volume Controller is designed to provide a redundant, modular, scalable, and complete solution, as shown in Figure 1-7.

Figure 1-7 IBM TotalStorage SAN Volume Controller

Each SAN Volume Controller consists of one or more pairs of engines, each pair operating as a single controller with fail-over redundancy. A large read/write cache is mirrored across the pair, and virtual volumes are shared between a pair of nodes. The pool of managed disks is controlled by a cluster of paired nodes. The SAN Volume Controller is designed to provide complete copy services for data migration and business continuity. Because these copy services operate on the virtual volumes, dramatically simpler replication configurations can be created using the SAN Volume Controller, rather than replicating each physical volume in the managed storage pool.

The SAN Volume Controller improves storage administrator productivity, provides a common base for advanced functions, and provides for more efficient use of storage. The SAN Volume Controller consists of software and hardware components delivered as a packaged appliance solution in a variety of form factors. The IBM SAN Volume Controller solution can be preconfigured to the client's specification, and will be installed by an IBM customer engineer.

1.5.2 IBM TotalStorage SAN File System

The IBM TotalStorage SAN File System architecture brings the benefits of the existing mainframe system-managed storage (DFSMS) to the SAN environment. Features such as policy-based allocation, volume management, and file management have long been available on IBM mainframe systems. However, the infrastructure for such centralized, automated management has been lacking in the open systems world of Linux®, Windows, and UNIX. On conventional systems, storage management is platform dependent. IBM TotalStorage SAN File System provides a single, centralized point of control to better manage files and data, and is platform independent. Centralized file and data management dramatically simplifies storage administration and lowers TCO.
SAN File System is a common file system specifically designed for storage networks. By managing file details (via the metadata controller) on the storage network instead of in individual servers, the SAN File System design moves the file system intelligence into the storage network, where it can be available to all application servers. Figure 1-8 shows file level virtualization, which provides immediate benefits: a single global namespace and a single point of management. This eliminates the need to manage files on a server-by-server basis. A global namespace is the ability to access any file from any client system using the same name.

Figure 1-8 File level virtualization

IBM TotalStorage SAN File System automates routine and error-prone tasks, such as file placement, and monitors out-of-space conditions. IBM TotalStorage SAN File System allows true heterogeneous file sharing, where reads and writes on the same data can be done by different operating systems.

The SAN File System Metadata server (MDS) is a server cluster attached to a SAN that communicates with the application servers to serve the metadata. Other than installing the SAN File System client on the application servers, no changes are required to applications to use SAN File System, since it emulates the syntax and behavior of local file systems.
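To illustrate this transparency, an application on a SAN File System client simply uses its normal file APIs against a path inside the global namespace. The sketch below uses only Python standard library calls; the attach point and fileset directory names are hypothetical examples, not product defaults.

# Illustrative only: ordinary file I/O against the SAN File System global
# namespace. The attach point and fileset directory names are hypothetical.
import os

SANFS_MOUNT = "/sfs/sanfs"                     # hypothetical client attach point
report = os.path.join(SANFS_MOUNT, "finance_fileset", "reports", "q1.txt")

# Standard calls: directory and open/close operations involve the Metadata
# server over the IP network, while the file data itself moves over the SAN.
os.makedirs(os.path.dirname(report), exist_ok=True)
with open(report, "w") as f:
    f.write("quarterly numbers\n")

with open(report) as f:
    print(f.read())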
Figure 1-9 shows the SAN File System environment.

Figure 1-9 IBM TotalStorage SAN File System architecture

In summary, IBM TotalStorage SAN File System is a common SAN-wide file system that permits centralization of management and improved storage utilization at the file level. IBM TotalStorage SAN File System is configured in a high availability configuration with clustering for the Metadata servers, providing redundancy and fault tolerance. IBM TotalStorage SAN File System is designed to provide policy-based storage automation capabilities for provisioning and data placement, nondisruptive data migration, and a single point of management for files on a storage network.

1.5.3 Comparison of SAN Volume Controller and SAN File System

Both the IBM SAN Volume Controller and IBM SAN File System provide storage virtualization capabilities that address critical storage management issues, including:
- Optimized storage resource utilization
- Improved application availability
- Enhanced storage personnel productivity

The IBM SAN Volume Controller addresses volume-related tasks that impact these requirements, including:
- Add, replace, or remove storage arrays
- Add, delete, or change LUNs
- Add capacity for applications
- Manage different storage arrays
- Manage disaster recovery tools
- Manage SAN topology
- Optimize storage performance

The IBM SAN File System addresses file-related tasks that impact these same requirements, for example:
- Extend or truncate the file system
- Format the file system
- De-fragmentation
- File-level replication
- Data sharing
- Global namespace
- Data lifecycle management

A summary of SAN Volume Controller and SAN File System benefits can be seen in Figure 1-10; the two products provide complementary benefits that address volume-level and file-level issues, respectively.

Figure 1-10 Summary of SAN Volume Controller and SAN File System benefits

1.5.4 IBM TotalStorage Productivity Center

The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center is comprised of a user interface designed for ease of use, and the following components:
- TotalStorage Productivity Center for Fabric
- TotalStorage Productivity Center for Data
- TotalStorage Productivity Center for Disk
- TotalStorage Productivity Center for Replication
1.5.5 TotalStorage Productivity Center for Fabric

TotalStorage Productivity Center for Fabric is designed to build and maintain a complete, current map of your storage network. TPC for Fabric can automatically determine both the physical and logical connections in your storage network and display the information in both a topological format and a hierarchical format.

Looking outward from the SAN switch, TPC for Fabric can answer questions that help administrators validate proper configuration of your open storage network:
- What hosts are attached to your storage network, and how many HBAs does each host have?
- What firmware levels are loaded on all your HBAs?
- What firmware levels are loaded on all your SAN switches?
- How are the logical zones configured?

Looking downward from the host, TPC for Fabric answers administrator questions that arise when changes occur in the storage network that could affect host access to storage:
- Does a given host have alternate paths through the storage network?
- Do those alternate paths use alternate switches?
- If available, are those alternate paths connected to alternate controllers on the storage device?

Looking upward from the storage device, TPC for Fabric answers administrator questions that arise when changes happen in the storage network that could affect the availability of stored data:
- What hosts are connected to a given storage device?
- What hosts have access to a given storage logical unit (LUN)?

Another key function of TPC for Fabric is change validation. TPC for Fabric detects changes in the storage network, both planned and unplanned, and it can highlight those changes for administrators. Figure 1-11 on page 21 shows a sample topology view provided by TPC for Fabric.
Figure 1-11 TPC for Fabric

1.5.6 TotalStorage Productivity Center for Data

TotalStorage Productivity Center for Data is an analyzing software tool that helps storage administrators manage the content of systems from a logical perspective. TPC for Data improves the storage return on investment by:
- Delaying purchases of disks: After performing housecleaning, you can satisfy the demand for more storage from existing (now freed-up) disks. Depending on your particular situation, you may discover you have more than adequate capacity and can defer the capital expense of additional disks for a considerable time.
- Lowering the storage growth rate: Because you are now monitoring and keeping better control of your storage according to the policies in place, it should grow at a lower rate than before.
- Lowering disk costs: With TPC for Data, you will know what the real quarter-to-quarter growth rates actually are, instead of approximating (on a best-effort basis) once per year. You can project your annual demand with a good degree of accuracy, and can negotiate an annual contract with periodic deliveries, at a price lower than you would have paid for periodic emergency purchases.
- Lowering storage management costs: The manual effort is greatly reduced, as most functions, such as gathering the information and analyzing it, are automated. Automated alerts can be set up so that the administrator only needs to get involved in exceptional conditions.
Figure 1-12 shows the TPC for Data dashboard.

Figure 1-12 TPC for Data

Before using TPC for Data to manage your storage, it can be difficult to get advance warning of out-of-space conditions on critical application servers. If an application runs out of storage on a server, it typically just stops. This means the revenue generated by that application, or the service provided by it, also stops, and fixing such unplanned outages is usually expensive. With TPC for Data, applications will not run out of storage: you will know when they need more storage, and can get it at a reasonable cost before an outage occurs. You avoid the loss of revenue and services, plus the additional costs associated with unplanned outages.

1.5.7 TotalStorage Productivity Center for Disk

TotalStorage Productivity Center for Disk is designed to enable administrators to manage storage area network (SAN) storage components based on the Storage Networking Industry Association (SNIA) Storage Management Interface Specification (SMI-S). TPC for Disk also includes the BonusPack for TPC for Fabric, bringing together device management with fabric management. This combination is designed to allow a storage administrator to configure storage devices from a single point, monitor SAN status, and provide operational support to storage devices.
Managing a virtualized SAN

In a pooled or virtualized SAN environment, multiple devices work together to create a storage solution. TPC for Disk is designed to provide integrated administration, optimization, and replication features for these virtualization solutions.

TPC for Disk is designed to provide an integrated view of an entire SAN system to help administrators perform complex configuration tasks and productively manage the SAN infrastructure. TPC for Disk offers features that can help simplify the establishment, monitoring, and control of disaster recovery and data migration solutions, because the virtualization layers support advanced replication configurations.

TPC for Disk includes a device management function, which discovers supported devices, collects asset, configuration, and availability data from the supported devices, and provides a topographical view of the storage usage relationships among these devices. The administrator can view essential information about storage devices discovered by TPC for Disk, examine the relationships among the devices, and change their configurations.

The TPC for Disk device management function provides discovery of storage devices that adhere to the SNIA SMI-S standards. The function uses the Service Location Protocol (SLP) to discover supported storage subsystems on the SAN, create managed objects to represent these discovered devices, and display them as individual icons in the TPC Console. Device management in TPC offers:
- Centralized access to information from storage devices
- Enhanced storage administrator productivity with integrated volume configuration
- Outstanding problem determination with cross-device configuration
- Centralized management of storage devices with browser launch capabilities

TPC for Disk also provides a performance management function: a single, integrated console for the performance management of supported storage devices. The performance management function monitors metrics such as I/O rates and cache utilization, and supports optimization of storage through the identification of the best LUNs for storage allocation. It stores received performance statistics in database tables for later use, and analyzes and generates reports on monitored devices for display in the TPC Console. The administrator can configure performance thresholds for the devices based on performance metrics, and the system can generate alerts when these thresholds are exceeded. Actions can then be configured to trigger from these events, for example, sending e-mail or an SNMP trap. The performance management function also provides gauges (graphs) to track real-time performance. These gauges are updated when new data becomes available. The performance management function provides:
- Proactive performance management
- Performance metrics monitoring across storage subsystems from a single console
- Timely alerts to enable event action based on client policies
- Focus on storage optimization through identification of the best LUN for a storage allocation
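As a generic illustration of the threshold-and-alert pattern just described (this is not TPC code; the metric source, threshold value, and alert action are hypothetical placeholders), a monitoring loop might look like the following sketch.

# Generic sketch of threshold-based performance alerting, similar in spirit
# to the thresholds and alerts described above. Not TPC code; the metric
# source, threshold, and alert action are hypothetical placeholders.
import time

IO_RATE_THRESHOLD = 5000          # hypothetical I/O operations per second
POLL_INTERVAL_SECONDS = 60

def read_io_rate(subsystem: str) -> float:
    # Placeholder for a collector that would query the subsystem's
    # performance statistics (for example, via its CIM/SMI-S provider).
    return 4200.0

def send_alert(message: str) -> None:
    # Placeholder for an action such as sending e-mail or an SNMP trap.
    print("ALERT:", message)

for _ in range(3):                # bounded loop, for the example only
    rate = read_io_rate("storage-subsystem-1")
    if rate > IO_RATE_THRESHOLD:
        send_alert("I/O rate %.0f ops/s exceeds threshold %d"
                   % (rate, IO_RATE_THRESHOLD))
    time.sleep(POLL_INTERVAL_SECONDS)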
Figure 1-13 shows the TPC main window with the performance management functions expanded.

Figure 1-13 TPC for Disk functions

1.5.8 TotalStorage Productivity Center for Replication

Data replication is a core function required for data protection and disaster recovery. TotalStorage Productivity Center for Replication (TPC for Replication) is designed to control and monitor the copy services operations in storage environments. It provides advanced copy services functions for supported storage subsystems on the SAN. Today, it provides Continuous Copy and Point-in-Time Copy services; specific support is for IBM FlashCopy® for ESS and PPRC (Metro Mirror) for ESS.

TPC for Replication provides configuration assistance by automating the source-to-target pairing setup, as well as monitoring and tracking the replication operations. TPC for Replication helps storage administrators keep data on multiple related volumes consistent across storage systems. It enables freeze-and-go functions to be performed with consistency on multiple pairs when errors occur during the replication (mirroring) operation. And it helps automate the mapping of source volumes to target volumes, allowing a group of source volumes to be automatically mapped to a pool of target volumes. With TPC for Replication, the administrator can:
- Keep data on multiple related volumes consistent across storage subsystems
- Perform freeze-and-go functions with consistency on multiple pairs when errors occur during a replication operation

Figure 1-14 shows the TPC main window with the Replication management functions expanded.
Figure 1-14 TPC for Replication

1.6 File system general terminology

Since SAN File System implements a SAN-based, global namespace file system, it is important to understand some general file system concepts and terms.

1.6.1 What is a file system?

A file system is a software component that builds a logical structure for storing files on storage devices (typically disk drives). File systems hide the underlying physical organization of the storage media and present abstractions such as files and directories, which are more easily understood by users.
Generally, a file system appears as a hierarchical structure in which files and folders (or directories) can be stored. The top of the hierarchy of each file system is usually called the "root". Figure 1-15 shows an example of a Windows system hierarchical view, also commonly known as the tree or directory.

Figure 1-15 Windows system hierarchical view

A file system specifies naming conventions for the actual files and folders (for example, which characters are allowed in file and directory names, and whether spaces are permitted) and defines a path that represents the location where a specific file is stored. Without a file system, files would not even have names and would appear as nameless blocks of data randomly stored on a disk.

However, a file system is more than just a directory tree or naming convention. Most file systems provide security features, such as privileges and access control, for example:
- Access to files based on user/group permissions
- Access Control Lists (ACLs) to allow or deny specific actions on specific files to specific users

Figure 1-16 on page 27 and Example 1-1 on page 27 show Windows and UNIX system security and file permissions, respectively.
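Besides the graphical and command-line views shown next, the same ownership and permission metadata can be read programmatically. The following minimal sketch uses Python's standard os and stat modules; the path is just an example.

# Minimal sketch: read the owner/group/other permission bits a UNIX file
# system keeps for a file. The path is an arbitrary example.
import os
import stat

path = "/etc/hosts"
info = os.stat(path)

# stat.filemode() renders the mode bits in the same style as "ls -l",
# for example "-rw-r--r--".
print(stat.filemode(info.st_mode), info.st_uid, info.st_gid, path)

# Individual permission checks against the mode bits.
print("owner can write:", bool(info.st_mode & stat.S_IWUSR))
print("others can read:", bool(info.st_mode & stat.S_IROTH))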
Figure 1-16 Windows file system security and permissions

Example 1-1 UNIX file system security and permissions

# ls -l
total 2659
-rw-------   1 root   system   31119 Sep 15 16:11 .TTauthority
-rw-------   1 root   system     196 Sep 15 16:11 .Xauthority
drwxr-xr-x  10 root   system     512 Sep 15 16:11 .dt
-rwxr-xr-x   1 root   system    3970 Apr 17 11:36 .dtprofile
-rw-------   1 root   system    3440 Sep 16 08:16 .sh_history
-rw-r--r--   1 root   system     115 May 13 14:12 .xerrors
drwxr-xr-x   2 root   system     512 Apr 17 11:36 TT_DB
-rw-r--r--   1 root   system    3802 Sep 04 09:51 WebSM.pref
-rwxrwxrwx   1 root   system    6600 May 14 08:01 aix_sdd_data_gatherer
drwxr-x---   2 root   audit      512 Apr 16 2001  audit
lrwxrwxrwx   1 bin    bin          8 Apr 17 09:35 bin -> /usr/bin
drwxr-xr-x   2 root   system     512 Apr 18 08:30 cdrom
drwxrwxr-x   5 root   system    3072 Sep 15 15:00 dev
-rw-r--r--   1 root   system     108 Sep 15 09:16 dposerv.lock
drwxr-xr-x   2 root   system     512 May 13 15:12 drom
drwxr-xr-x   2 root   system     512 May 29 13:40 essdisk1fs

1.6.2 File system types

File systems have a wide variety of functions and capabilities and can be broadly classified into:
- Local file systems
- LAN file systems
- SAN file systems
Local file systems

A local file system is tightly integrated with the operating system, and is therefore usually specific to that operating system. A local file system provides services to the system where the data is installed. All data and metadata are served over the system's internal I/O path. Some examples of local file systems are Windows NTFS, DOS FAT, Linux ext3, and AIX JFS.

LAN file systems

LAN file systems allow computers attached via a LAN to share data. They use the LAN for both data and metadata. Some LAN file systems, such as AFS®, also implement a global namespace. Examples of LAN file systems are Network File System (NFS), Andrew File System (AFS), Distributed File System (DFS™), and Common Internet File System (CIFS).

Network file sharing appliances

A special case of a LAN file system is a specialized file serving appliance, such as the IBM N3700 and similar offerings from other vendors. These provide CIFS and NFS file serving capabilities using both LAN and iSCSI protocols.

SAN file systems

SAN file systems allow computers attached via a SAN to share data. They typically separate the actual file data from the metadata, using the LAN path to serve the metadata and the SAN path for the file data. The IBM TotalStorage SAN File System is a SAN file system. Figure 1-17 shows the different file system types.

Figure 1-17 File system types

1.6.3 Selecting a file system

The factors that determine which type of file system is most appropriate for an application or business requirement include:
- Volume of data being processed
- Type of data being processed
- Patterns of data access
- Availability requirements
- Applications involved
- Types of computers requiring access to the file system
  • 56. LAN file systems are designed to provide data access over the IP network. Two of the most common protocols are Network File System (NFS) and Common Internet File System (CIFS). Typically, NFS is used for UNIX servers and CIFS is used for Windows servers. Tools exist to allow Windows servers to support NFS access and UNIX/Linux servers to support CIFS access, which enable these different operating systems to work with each others' files.
Local file systems' limitations surface when business requirements mandate the need for a rapid increase in data storage or sharing of data among servers. Issues may include:
- Separate "islands of storage" on each host. Because local file systems are integrated with the server's operating system, each file system must be managed and configured separately. In situations where two or more file system types are in use (for example, Windows and Sun™ servers), operators require training and skills in each of these operating systems to complete even common tasks such as adding storage capacity.
- No file sharing between hosts.
- Inherently difficult to manage.
LAN file systems can address some of the limitations of local file systems by adding the ability to share among homogeneous systems. In addition, there are some distributed file systems that can take advantage of both network-attached and SAN-attached disk. Some restrictions of LAN file systems include:
- In-band cluster architectures are inherently more difficult to scale than out-of-band SAN file system architectures. Performance is impacted as these solutions grow.
- Homogeneous file sharing only. There is no (or limited) ability to provide file locking and security between mixed operating systems.
- Each new cluster creates an "island of storage" to manage. As the number of "islands" grows, issues similar to those of local file systems tend to increase.
- File-level policy-based placement is inherently more difficult.
- Clients still use NFS/CIFS protocols, with the inherent limitations of those protocols (security, locking, and so on).
- File system and storage resources are not scalable beyond a single NAS appliance. A NAS appliance must handle blocks for non-SAN attached clients.
SAN file systems address the limitations of local and network file systems. They enable 7x24 availability, support increasing rates of change to the environment, and reduce management cost.
The IBM SAN File System offers these advantages:
- Single global view of the file system. This enables tremendous flexibility to increase or decrease the amount of storage available to any particular server, as well as full file sharing (including locking) between heterogeneous servers.
- The Metadata Server processes only metadata operations. All data I/O occurs at SAN speeds.
- Linear scalability of the global file system can be achieved by adding Metadata Server nodes.
- Advanced, centralized, file-granular, and policy-based management.
- Automated lifecycle management of data can take full advantage of tiered storage.
- Nondisruptive management of physical assets provides the ability to add, delete, and change the disk subsystem without disruption to the application servers.
  • 57. 1.7 Filesets and the global namespace
A key concept for SAN File System is the global namespace. Traditional file systems and file sharing systems operate separate namespaces, that is, each file is tied or mapped to the server which hosts it, and the clients must know which server this is. For example, in Figure 1-17 on page 28, in a LAN file system, user Iva has files stored both on File Server A and File Server B. She would need to specify the particular file server in the access path for each file. SAN File System, by contrast, presents a global namespace: there is one file structure (subdivided into parts called filesets), which is available simultaneously to all the clients. This is shown in Figure 1-18.
Figure 1-18 Global namespace (the ROOT fileset at the top of the tree, with filesets 1 through 6 attached beneath it)
Filesets are subsets of the global namespace. To the clients, the filesets appear as normal directories, where they can create their own subdirectories, place files, and so on. But from the SAN File System server perspective, the fileset is the building block of the global namespace structure, and can only be created and deleted by SAN File System administrators. Filesets represent units of workload for metadata; therefore, by dividing the files into filesets, you can split the task of serving the metadata for the files across multiple servers. There are other implications of filesets; we will discuss them further in Chapter 2, "SAN File System overview" on page 33.
1.8 Value statement of IBM TotalStorage SAN File System
As the data stored in the open systems environment continues to grow, new paradigms for the attachment and management of data and the underlying storage of the data are emerging. One of the most commonly used technologies in this area is the Storage Area Network (SAN). Using a SAN to connect large amounts of storage to large numbers of computers gives us the potential for new approaches to accessing, sharing, and managing our data and storage. However, existing operating systems and file systems are not built to exploit these new capabilities. IBM TotalStorage SAN File System is a SAN-based distributed file system and storage management solution that enables many of the promises of SANs, including shared heterogeneous file access, centralized management, and enterprise-wide scalability. In addition, SAN File System leverages the policy-based storage and data management
concepts found in mainframe computers and makes them available in the open systems environment.
IBM TotalStorage SAN File System can provide an effective solution for clients with a small number of computers and small amounts of data, and it can scale up to support clients with thousands of computers and petabytes of data.
IBM TotalStorage SAN File System is a member of the IBM TotalStorage Virtualization Family of solutions. The SAN File System has been designed as a network-based heterogeneous file system for file aggregation and data sharing in an open environment. As a network-based heterogeneous file system, it provides:
- High performance data sharing for heterogeneous servers accessing SAN-attached storage in an open environment.
- A common file system for UNIX and Windows servers with a single global namespace to facilitate data sharing across servers.
- A highly scalable out-of-band solution (see 1.3.3, "Storage virtualization models" on page 13) supporting both very large files and very large numbers of files without the limitations normally associated with NFS or CIFS implementations.
IBM TotalStorage SAN File System is a leading edge solution that is designed to:
- Lower the cost of storage management
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Improve storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers
- Improve application availability
- Simplify and lower the cost of data backups through application server-free backup and built-in file-based FlashCopy images
- Allow data sharing and collaboration across servers with high performance and full locking support
- Eliminate data migration during application server consolidation
- Provide a scalable and secure infrastructure for storage and data on demand
The IBM TotalStorage SAN File System solution includes a Common Information Model (CIM) Agent, supporting storage management by products based on open standards for units that comply with the open standards of the Storage Network Industry Association (SNIA) Common Information Model.
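To make the global namespace idea from 1.7 concrete, the following Python sketch (illustrative only; the paths, host names, and fileset names are invented) contrasts the server-qualified paths a user such as Iva must use with a LAN file system against the single /sanfs tree, subdivided into filesets, that every SAN File System client sees.

# LAN file system view: the file server is part of every path Iva uses.
lan_paths = [
    "//fileserverA/iva/plan.doc",
    "//fileserverB/iva/budget.xls",
]

# SAN File System view: one tree, subdivided into filesets; no server names.
global_namespace = {
    "HR":      ["plan.doc"],
    "Finance": ["budget.xls"],
}

def resolve(fileset, name):
    """Any client resolves the same path, regardless of which MDS serves the fileset."""
    return f"/sanfs/{fileset}/{name}"

for fs, files in global_namespace.items():
    for f in files:
        print(resolve(fs, f))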
  • 60. 2 Chapter 2. SAN File System overview In this chapter, we provide an overview of the SAN File System Version 2.2.2, including these topics: Architecture SAN File System Version 2.2, V2.2.1, and V2.2.2 enhancements overview Components: Hardware and software, supported storage, and clients Concepts: Global namespace, filesets, and storage pool Supported storage devices Supported clients Summary of major features – Direct data access – Global namespace (scalability for growth) – File sharing – Policy based automatic placement – Lifecycle management© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 33
  • 61. 2.1 SAN File System product overview The IBM TotalStorage SAN File System is designed on industry standards so it can: Allow data sharing and collaboration across servers over the SAN with high performance and full file locking support, using a single global namespace for the data. Provide more effective storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers. Improve productivity and reduce the “pain” for IT storage and server management staff by centralizing and simplifying management through policy-based storage management automation, thus significantly lowering the cost of storage management. Facilitate application server and storage consolidation across the enterprise to scale the infrastructure for storage and data on demand. Simplify and lower the cost of data backups through built-in, file-based FlashCopy image function. Eliminate data migration during application server consolidation, and also reduce application downtime and failover costs. SAN File System is a multiplatform, robust, scalable, and highly available file system, and is a storage management solution that works with Storage Area Networks (SANs). It uses SAN technology, which allows an enterprise to connect a large number of computers and share a large number of storage devices, via a high-performance network. With SAN File System, heterogeneous clients can access shared data directly from large, high-performance, high-function storage systems, such as IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), and IBM TotalStorage DS4000 (formerly IBM TotalStorage FAStT), as well as non-IBM storage devices. The SAN File System is built on a Fibre Channel network, and is designed to provide superior I/O performance for data sharing among heterogeneous computers. SAN File System differs from conventional distributed file systems in that it uses a data-access model that separates file metadata (information about the files, such as owner, permissions, and the physical file location) from actual file data (contents of the files). The metadata is provided to clients by MDSs; the clients communicate with the MDSs only to get the information they need to locate and access the files. Once they have this information, the SAN File System clients access data directly from storage devices via the clients’ own direct connection to the SAN. Direct data access eliminates server bottlenecks and provides the performance necessary for data-intensive applications. SAN File System presents a single, global namespace to clients where they can create and share data, using uniform file names from any client or application. Furthermore, data consistency and integrity is maintained through SAN File System’s management of distributed locks and the use of leases. SAN File System also provides automatic file placement through the use of policies and rules. Based on rules specified in a centrally-defined and managed policy, SAN File System automatically stores data on devices in storage pools that are specifically created to provide the capabilities and performance appropriate for how the data is accessed and used.34 IBM TotalStorage SAN File System
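The data-access model just described — metadata obtained from an MDS over the network, file data read directly over the SAN — can be sketched as follows. This is a conceptual illustration only, not SAN File System code; the class names, the Extent structure, and the in-memory stand-ins for LUNs are all invented for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class Extent:
    lun: str      # SAN volume holding this piece of the file
    offset: int   # byte offset on the volume
    length: int   # number of bytes

class MetadataServer:
    """Answers 'where does this file live?' but never touches file contents."""
    def __init__(self, placement):
        self.placement = placement            # path -> list of extents

    def locate(self, path: str) -> List[Extent]:
        return self.placement[path]           # permissions and locks would be checked here

class Client:
    """Asks the MDS over the LAN, then reads blocks directly over the SAN."""
    def __init__(self, mds, san):
        self.mds, self.san = mds, san          # san: lun name -> bytearray standing in for a LUN

    def read(self, path: str) -> bytes:
        data = b""
        for ext in self.mds.locate(path):                              # LAN: metadata only
            lun = self.san[ext.lun]
            data += bytes(lun[ext.offset:ext.offset + ext.length])     # SAN: file data
        return data

# Tiny demonstration with one file split across two user-pool volumes.
san = {"userpool1_lun0": bytearray(b"hello "), "userpool2_lun3": bytearray(b"world")}
mds = MetadataServer({"/sanfs/HR/readme.txt": [Extent("userpool1_lun0", 0, 6),
                                               Extent("userpool2_lun3", 0, 5)]})
print(Client(mds, san).read("/sanfs/HR/readme.txt"))   # b'hello world'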
  • 62. 2.2 SAN File System V2.2 enhancements overview
In addition to the benefits listed above, enhancements of SAN File System V2.2 include:
- Support for SAN File System clients on AIX 5L™ V5.3, SUSE Linux Enterprise Server 8 SP4, Red Hat Enterprise Linux 3, Windows 2000/2003, and Solaris™ 9
- Support for iSCSI attached clients and iSCSI attached user data storage
- Support for IBM storage and select non-IBM storage, and for multiple types of storage concurrently, for user data storage
- Support for an unlimited amount of storage for the user data
- Support for multiple SAN storage zones for enhanced security and more flexible device support
- Support for policy-based movement of files between storage pools
- Support for policy-based deletion of files
- Ability to move or defragment individual files
- Improved heterogeneous file sharing with cross-platform user authentication and security permissions between Windows and UNIX environments
- Ability to export the SAN File System global namespace using Samba 3.0 on the following SAN File System clients: AIX 5L V5.2 and V5.3 (32- and 64-bit), Red Hat EL 3.0, SUSE Linux Enterprise Server 8.0, and Sun Solaris 9
- Improved globalization support, including Unicode fileset attach point names and Unicode file name patterns in policy rules
2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview
- MDS support for SLES9 as well as SLES8. Clients that remain on SLES8 need to upgrade to SLES8 SP4.
- Support for xSeries® 365 as Metadata server.
- Support for new IBM disk hardware: IBM TotalStorage DS6000 and IBM TotalStorage DS8000.
- Redundant Ethernet support on the MDSs (Linux Ethernet bonding).
- Improved installation: a new loadcluster function automatically installs the SAN File System software and its prerequisites across the entire cluster from one MDS.
- Preallocation policies to improve performance of writing large new files.
- MDSs support (and require) a TCP/IP interface to RSA cards.
- Support for the SAN File System client on zSeries Linux SLES8 and pSeries Linux SLES8.
- Microsoft® cluster support for SAN File System clients on Windows 2000 and Windows 2003.
- Local user authentication option: LDAP is no longer required for the authentication of administrative users.
- Virtual I/O device support on AIX.
- Support for POSIX direct I/O file system interface calls on Intel® 32-bit Linux.
- Japanese translation of Administrator interfaces: GUI and CLI at the V2.2 level.
  • 63. 2.4 SAN File System architecture
SAN File System architecture and components are illustrated in Figure 2-1. Computers that want to share data and have their storage centrally managed are all connected to the SAN. In SAN File System terms, these are known as clients, since they access SAN File System services, although in the enterprise context, they would most likely be, for example, database servers, application servers, or file servers.
Figure 2-1 SAN File System architecture (clients and the SFS metadata cluster of two to eight servers exchange metadata over the IP network; file data flows over the SAN — FC, iSCSI, or an FC/iSCSI gateway — between the clients and multiple, heterogeneous user storage pools, with SFS metadata held in the System storage pool; external clients can be served over NFS/CIFS, and the SFS admin console attaches to the IP network)
In Figure 2-1, we show five such clients, each running a SAN File System currently supported client operating system. The SAN File System client software enables them to access the global namespace through a virtual file system (VFS) on UNIX/Linux systems and an installable file system (IFS) on Windows systems. This layer (VFS/IFS) is built by the OS vendors specifically for special-purpose or newer file systems.
There are also special computers called Metadata server (MDS) engines, which run the Metadata server software, as shown on the left side of the figure. The MDSs manage file system metadata (including file creation time, file security information, file location information, and so on), but the user data accessed over the SAN by the clients does not pass through an MDS. This eliminates the performance bottleneck from which many existing shared file system approaches suffer, giving near-local file system performance.
MDSs are clustered for scalability and availability of metadata operations and are often referred to as the MDS cluster. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs. Each MDS runs on a separate physical engine in the cluster. Additional MDSs can be added as required if the workload grows, providing solution scalability. Storage volumes that store the SAN File System clients' user data (User Pools) are separated from storage volumes that store metadata (System Pool), as shown in Figure 2-1.
  • 64. The Administrative server allows SAN File System to be remotely monitored and controlled through a Web-based user interface called the SAN File System console. The Administrative server also processes requests issued from an administrative command line interface (CLI), which can also be accessed remotely. This means the SAN File System can be administered from almost any system with suitable TCP/IP connectivity. The Administrative server can use local authentication (standard Linux user IDs and groups) to look up authentication and authorization information about the administrative users. Alternatively, an LDAP server (client supplied) can be used for authentication. The primary Administrative server runs on the same engine as the master MDS. It receives all requests issued by administrators and also communicates with Administrative servers that run on each additional server in the cluster to perform routine requests.2.5 SAN File System hardware and software prerequisites The SAN File System is delivered as a software only package. SAN File System software requires the following hardware and software to be supplied and installed on each MDS in advance, by the customer. SAN File System also includes software for an optional Master Console; if used, then the customer must also provide the prerequisite hardware and software for this, as described in 2.5.2, “Master Console hardware and software” on page 38.2.5.1 Metadata server SAN File System V2.2.2 supports from two to eight Metadata servers (MDS) running on hardware that must be supplied by the client. The hardware servers that run the MDSs are generically known as engines. Each engine must be a rack-mounted, high-performance, and highly-reliable Intel server. The engine can be a SAN File System Metadata Server engine (4146 Model 1RX), an IBM ^ xSeries 345 server, an IBM ^ xSeries 346 server, an IBM ^ xSeries 365 server, or equivalent servers with the hardware components listed below. SAN File System V2.2 will support a cluster of MDSs consisting of both 4146-1RX engines, IBM ^ xSeries servers, and equivalents. If not using the IBM ^ 345, 346, 365, or 4146-Model 1RX, the following hardware components are required for each MDS: Two processors of minimum 3 GHz each. Minimum of 4 GB of system memory. Two internal hard disk drives with mirroring capability, minimum 36 GB each. These are used to install the MDS operating system, and should be set up in a mirrored (RAID 1) configuration. Two power supplies (optional, but highly recommended for redundancy). A minimum of one 10/100/1000Gb port for Ethernet connection (Fibre or Copper); however, two Ethernet connections are recommended to take advantage of high-availability capabilities with Ethernet bonding. Two 2 Gb Fibre Channel host bus adapter (HBA) ports. These must be compatible with the SUSE operating system and the storage subsystems in your SAN environment. They must also be capable of running the QLogic 2342 device driver. Suggested adapters: QLogic 2342 or IBM part number 24P0960. CD-ROM and diskette drives. Chapter 2. SAN File System overview 37
  • 65. Remote Supervisory Adapter II card (RSA II). This must be compatible with the SUSE operating system. Suggested card: IBM part number 59P2984 for x345, 73P9341 - IBM Remote Supervisor Adapter II Slim line for x346. Certified for SUSE Linux Enterprise Server 8, with Service Pack 4 (kernel level 2.4.21-278) or SUSE Linux Enterprise Server 9, Service Pack 1, with kernel level 2.6.5-7.151. Each MDS must have the following software installed: SUSE Linux Enterprise Server 8, Service Pack 4, kernel level 2.4.21-278, or SUSE Linux Enterprise Server 9, Service Pack 1, kernel level 2.6.5-7.151. Multi-pathing driver for the storage device used for the metadata LUNs. At the time of writing, if using DS4x000 storage for metadata LUNs, then either RDAC V9.00.A5.09 (SLES8) or RDAC V9.00.B5.04 (SLES9) is required. If using other IBM storage for metadata LUNs (ESS, SVC, DS6000, or DS8000), then SDD V1.6.0.1-6 is required. However, these levels will change over time. Always check the release notes distributed with the product CD, as well as the SAN File System for the latest supported device driver level. More information about the multi-pathing driver can be found in 4.4, “Subsystem Device Driver” on page 109 and 4.5, “Redundant Disk Array Controller (RDAC)” on page 119.2.5.2 Master Console hardware and software The SAN File System V2.2.2 Master Console is an optional component of a SAN File System configuration for use as a control point. If deployed, it requires a client-supplied, high performance, and highly reliable rack-mounted Intel Pentium® 4 processor server. This can be an IBM ^ xSeries 305 server, a SAN File System V1.1 or V2.1 Master Console, 4146-T30 feature #4001, a SAN Volume Controller Master Console, or equivalent Intel server with the following capabilities: At least 2.6 GHz processor speed At least 1 GB of system memory Two 40 GB IDE hard disk drives CD-ROM drive Diskette drive Two 10/100/1000 Mb ports for Ethernet connectivity (Copper or Fiber) Two Fibre Channel Host Bus Adapter (HBA) ports Monitor and keyboard: IBM Netbay 1U Flat Panel Monitor Console Kit with keyboard or equivalent If a SAN Volume Controller Master Console is already available, it can be shared with SAN File System, since it meets the hardware requirements. The Master Console, if deployed, must have the following software installed: Microsoft Windows 2000 Server Edition with Service Pack 4 or higher, or Microsoft Windows Professional with Update 818043, or Windows 2003 Enterprise Edition, or Windows 2003 Standard Edition. Microsoft Windows Internet Explorer Version 6.0 (SP1 or later). Sun Java™ Version 1.4.2 or higher.38 IBM TotalStorage SAN File System
  • 66. Antivirus software is recommended.
Additional software for the Master Console is shipped with the SAN File System software package, as described in 2.5.6, "Master Console" on page 45.
2.5.3 SAN File System software
SAN File System software (5765-FS2) is required licensed software for SAN File System. This includes the SAN File System code itself and the client software packages to be installed on the appropriate servers, which will gain access to the SAN File System global namespace. These servers are then known as SAN File System clients. The SAN File System software bundle consists of three components:
- Software that runs on each SAN File System MDS
- Software that runs on your application servers, called the SAN File System Client software
- Optional software that is installed on the Master Console, if used
2.5.4 Supported storage for SAN File System
SAN-attached storage is required for both metadata volumes as well as user volumes. Supported storage subsystems for metadata volumes (at the time of writing) are listed in Table 2-1.
Table 2-1 Storage subsystem platforms supported for metadata LUNs
Storage platform       | Models supported                                                        | Driver and microcode                            | Mixed operating system access?
ESS                    | 2105-F20, 2105-750, 2105-800                                            | SDD v1.6.0.1-6                                  | Yes
DS4000 / FAStT         | 4100/100, 4300/600, 4400/700, 4500/900 (that is, all except the DS4800) | RDAC v09.00.x for the Linux v2.4 or v2.6 kernel | No
DS6000                 | All                                                                     | SDD v1.6.0.1-6                                  | Yes
DS8000                 | All                                                                     | SDD v1.6.0.1-6                                  | Yes
SVC (SLES8 only)       | 2145 v2.1.x                                                             | SDD v1.6.0.1-6                                  | Yes
SVC for Cisco MDS9000  | v1.1.8                                                                  | SDD v1.6.0.1-6                                  | Yes
Note this information can change at any time; the latest information about specific supported storage, including device driver levels and microcode, is at this Web site. Please check it before starting your SAN File System installation:
http://www.ibm.com/storage/support/sanfs
Metadata volume considerations
Metadata volumes should be configured using RAID, with a low ratio of data to parity disks. Hot spares should also be available, to minimize the amount of time to recover from a single disk failure.
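The recommendation of a low ratio of data to parity disks is easy to quantify. The short calculation below uses made-up example array widths (and the 36 GB drive size mentioned earlier for the MDS engines) to show the trade-off: a narrower RAID-5 array gives up more raw capacity to parity, but fewer disks are involved in a rebuild after a single disk failure.

# Illustrative arithmetic only; these array widths are example values, not a
# configuration from the product documentation.
disk_gb = 36

for data_disks in (3, 7):                    # 3+P versus 7+P RAID-5
    width = data_disks + 1                   # RAID-5: one parity disk per array
    usable_gb = data_disks * disk_gb
    parity_overhead = 1 / width              # fraction of raw capacity spent on parity
    print(f"{data_disks}+P: {usable_gb} GB usable, "
          f"{parity_overhead:.0%} of raw capacity spent on parity, "
          f"{data_disks} surviving disks must be read to rebuild a failed disk")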
  • 67. User volumes SAN File System can be configured with any SAN storage device for the user data storage, providing it is supported by the operating systems running the SAN File System client (including having a compatible HBA) and that it conforms to the SCSI standard for unique device identification. SAN File System also supports storage devices for user data storage attached through iSCSI. The iSCSI attached storage devices must conform to the SCSI standard for unique device identification and must be supported by the SAN File System client operating systems. Consult your storage system’s documentation or the vendor to see if it meets these requirements. Note: Only IBM storage subsystems are supported for the system (metadata) storage pool. SAN File System supports an unlimited number of LUNs for user data storage. The amount of user data storage that you can have in your environment is determined by the amount of storage that is supported by the storage subsystems and the client operating systems. In the following sections, SAN File System hardware and logical components are described in detail.2.5.5 SAN File System engines Within SAN File System, an engine is the physical hardware on which a MDS and an Administrative server runs. SAN File System supports any number from two to eight engines. Increasing the number of engines increases metadata traffic capacity and can provide higher availability to the configuration. Note: Although you cannot configure an initial SAN File System with only one engine, you can run a single-engine system if all of the other engines fail (for example, if you have only two engines and one of them fails), or if you want to bring down all of the engines except one before performing scheduled maintenance tasks. Performance would obviously be impacted in this case, but these scenarios are supported and workable, on a temporary basis. The administrative infrastructure on each engine allows an administrator to monitor and control SAN File System from a standard Web browser or an administrative command line interface. The two major components of the infrastructure are an Administrative agent, which provides access to administrative operations, and a Web server that is bundled with the console services and servlets that render HTML for the administrative browsers. The infrastructure also includes a Service Location Protocol (SLP) daemon, which is used for administrative discovery of SAN File System resources by third-party Common Information Model (CIM) agents. An administrator can use the SAN File System Console, which is the browser-based user interface, or administrative commands (CLI) to monitor and control an engine from anywhere with a TCP/IP connection to the cluster. This is in contrast to the SAN Volume Controller Console, which uses the Master Console for administrative functions. Metadata server A Metadata server (MDS) is a software server that runs on a SAN File System engine and performs metadata, administrative, and storage management services. In a SAN File System40 IBM TotalStorage SAN File System
server cluster, there is one master MDS and one or more subordinate MDSs, each running on a separate engine in the cluster. Together, these MDSs provide clients with shared, coherent access to the SAN File System global namespace.
All of the servers, including the master MDS, share the workload of the SAN File System global namespace. Each is responsible for providing metadata and locks to clients for the filesets that it hosts. Each MDS knows which filesets are hosted by each particular MDS, and when contacted by a client, can direct the client to the appropriate MDS. They manage distributed locks to ensure the integrity of all of the data within the global namespace.
Note: Filesets are subsets of the entire global namespace and serve to organize the namespace for all the clients. A fileset serves as the unit of workload for the MDS; each MDS is assigned some of the filesets as its workload. From a client perspective, a fileset appears as a regular directory or folder, in which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories at which filesets are attached.
In addition to providing metadata to clients and managing locks, MDSs perform a wide variety of other tasks. They process requests issued by administrators to create and manage filesets, storage pools, volumes, and policies; they enforce the policies defined by administrators to place files in appropriate storage pools; and they send alerts when any thresholds established for filesets and storage pools are exceeded.
Performing metadata services
There are two types of metadata:
- File metadata: This is information needed by the clients in order to access files directly from storage devices on a Storage Area Network. File metadata includes permissions, owner and group, access time, creation time, and other file characteristics, as well as the location of the file on the storage.
- System metadata: This is metadata used by the system itself. System metadata includes information about filesets, storage pools, volumes, and policies. The MDSs perform the reads and writes required to create, distribute, and manage this information.
The metadata is stored and managed in a separate system storage pool that is only accessible by the MDSs in the server cluster.
Distributing locks to clients involves the following operations:
- Issuing leases that determine the length of time that a server guarantees the locks it grants to clients.
- Granting locks to clients that allow them shared or exclusive access to files or parts of files. These locks are semi-preemptible, which means that if a client does not contact the server within the lease period, the server can "steal" the client's locks and grant them to other clients if requested; otherwise, the client can reassert its locks (get its locks back) when it can make contact, thereby inter-locking the connection again.
- Providing a grace period during which a client can reassert its locks before other clients can obtain new locks if the server itself goes down and then comes back online.
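The lease-based, semi-preemptible locking just described can be illustrated with a minimal sketch. This is not SAN File System code; the class, the method names, and the 30-second lease length are assumptions made only to show the idea that a lock whose holder stops renewing its lease can be granted to another client.

import time

LEASE_SECONDS = 30            # assumed lease length for the example

class LockServer:
    def __init__(self):
        self.locks = {}       # file path -> (client_id, lease_expiry)

    def grant(self, client, path):
        holder = self.locks.get(path)
        now = time.monotonic()
        if holder and holder[0] != client and holder[1] > now:
            return False                      # lock held and the lease is still valid
        # Either unlocked, re-asserted by the same client, or the old holder's
        # lease expired, so the lock can be "stolen" and granted to this client.
        self.locks[path] = (client, now + LEASE_SECONDS)
        return True

    def renew(self, client, path):
        """A client that stays in contact keeps extending its lease."""
        holder = self.locks.get(path)
        if holder and holder[0] == client:
            self.locks[path] = (client, time.monotonic() + LEASE_SECONDS)
            return True
        return False                          # the lease was lost while out of contact

server = LockServer()
print(server.grant("clientA", "/sanfs/HR/pay.db"))   # True: granted
print(server.grant("clientB", "/sanfs/HR/pay.db"))   # False: clientA's lease still valid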
  • 69. Performing administrative services An MDS processes the requests from administrators (issued from the SAN File System console or CLI) to perform the following types of tasks: Create and manage filesets, which are subsets of the entire global namespace and serve as the units of workload assigned to specific MDSs. Receive requests to create and manage volumes, which are LUNs labeled for SAN File System’s use in storage pools. Create and maintain storage pools (for example, an administrator can create a storage pool that consists of RAID or striped storage devices to meet reliability requirements, and can create a storage pool that consists of random or sequential access or low-latency storage devices to meet high performance requirements). Manually move files between storage pools, and defragment files in storage pools. Create FlashCopy images of filesets in the global namespace that can be used to make file-based backups easier to perform. Define policies containing rules for placement of files in storage pools. Define policies that define the automatic background movement of files among storage pools and the background deletion of files. Performing storage management services An MDS performs these storage management services: Manages allocation of blocks of space for files in storage pool volumes. Maintains pointers to the data blocks of a file. Evaluates the rules in the active policy and manages the placement of files in specific storage pools based on those rules. Issues alerts when filesets and storage pools reach or exceed their administrator-specified thresholds, or returns out-of-space messages if they run out of space. Administrative server Figure 2-2 on page 43 shows the overall administrative interface structure of SAN File System.42 IBM TotalStorage SAN File System
  • 70. Figure 2-2 SAN File System administrative structure (administrative clients — a Web browser for the GUI, ssh for CLI access, and third-party CIM clients — reach the cluster over the customer network; on the server side, the GUI Web server, the CLI client (sfscli/tanktool), and the Admin Agent (CIM) sit alongside the Metadata server, with the Admin Agent working with the Linux operating system, the RSA card, and either an LDAP server or local authentication; an optional Master Console provides a console/KVM, IBM Director, call-home, and remote support)
The SAN File System Administrative server, which is based on a Web server software platform, is made up of two parts: the GUI Web server and the Administrative Agent.
  • 71. The GUI Web server is the part of the administrative infrastructure that interacts with the SAN File System MDSs and renders the Web pages that make up the SAN File System Console. The Console is a Web-based user interface, either Internet Explorer or Netscape. Figure 2-3 shows the GUI browser interface for the SAN File System. Figure 2-3 SAN File System GUI browser interface The Administrative Agent implements all of the management logic for the GUI, CLI, and CIM interfaces, as well as performing administrative authorization/authentication against the LDAP server. The Administrative Agent processes all management requests initiated by an administrator from the SAN File System console, as well as requests initiated from the SAN File System administrative CLI, which is called sfscli. The Agent communicates with the MDS, the operating system, the Remote Supervisor Adapter (RSA II) card in the engine, the LDAP, and Administrative Agents on other engines in the cluster when processing requests. Example 2-1 shows all the commands available with sfscli. Example 2-1 The sfscli commands for V2.2.2 itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli sfscli> help activatevol lsadmuser mkvol setfilesetserver addprivclient lsautorestart mvfile setoutput addserver lsclient quiescecluster settrap addsnmpmgr lsdomain quit startautorestart attachfileset lsdrfile rediscoverluns startcluster autofilesetserver lsfileset refreshusermap startmetadatacheck builddrscript lsimage reportclient startserver catlog lslun reportfilesetuse statcluster catpolicy lspolicy reportvolfiles statfile chclusterconfig lspool resetadmuser statfileset chdomain lsproc resumecluster statldap chfileset lsserver reverttoimage statpolicy chldapconfig lssnmpmgr rmdomain statserver44 IBM TotalStorage SAN File System
  • 72. chpool lstrapsetting rmdrfile stopautorestart chvol lsusermap rmfileset stopcluster clearlog lsvol rmimage stopmetadatacheck collectdiag mkdomain rmpolicy stopserver detachfileset mkdrfile rmpool suspendvol disabledefaultpool mkfileset rmprivclient upgradecluster dropserver mkimage rmsnmpmgr usepolicy exit mkpolicy rmusermap expandvol mkpool rmvol help mkusermap setdefaultpool sfscli> itso3@tank-mds3:/usr/tank/admin/bin> An Administrative server interacts with a SAN File System MDS through an intermediary, called the Common Information Model (CIM) agent. When a user issues a request, the CIM agent checks with an LDAP server, which must be installed in the environment, to authenticate the user ID and password and to verify whether the user has the authority (is assigned the appropriate role) to issue a particular request. After authenticating the user, the CIM agent interacts with the MDS on behalf of that user to process the request. This same system of authentication and interaction is also available to third-party CIM clients to manage SAN File System.2.5.6 Master Console The Master Console software is designed to provide a unified point of service for the entire SAN File System cluster, simplifying service to the MDSs. It makes a Virtual Private Network (VPN) connection readily available that you can initiate and monitor to enable hands-on access by remote IBM support personnel. It also provides a common point of residence for the IBM TotalStorage TPC for Fabric, IBM Director, and other tools associated with the capabilities just described, and can act as a central repository for diagnostic data. It is optional (that is, not required) to install a Master Console in a SAN File System configuration. If deployed, the Master Console hardware is customer-supplied and must meet the specifications listed in 2.5.2, “Master Console hardware and software” on page 38. The Master Console supported by the SAN File System is the same as that used for the IBM TotalStorage SAN Volume Controller (SVC) and IBM TotalStorage SAN Integration Server (SIS), so if there is already one in the client environment, it can be shared with the SAN File System. The Master Console software package includes the following software, which must be installed on it, if deployed: Adobe Acrobat Reader DB2® DS4000 Storage Manager Client IBM Director PuTTY SAN Volume Controller Console 6 Tivoli Storage Area Network Manager IBM VPN Connection Manager From the Master Console, the user can access the following components: SAN File System console, through a Web browser. Administrative command-line interface, through a Secure Shell (SSH) session. Any of the engines in the SAN File System cluster, through an SSH session. Chapter 2. SAN File System overview 45
  • 73. The RSA II card for any of the engines in the SAN File System cluster, through a Web browser. In addition, the user can use the RSA II Web interface to establish a remote console to the engine, allowing the user to view the engine desktop from the Master Console. Any of the SAN File System clients, through an SSH session, a telnet session, or a remote display emulation package, depending on the configuration of the client. Remote access Remote Access support is the ability for IBM support personnel who are not located on a user’s premises to assist an administrator or a local field engineer in diagnosing and repairing failures on a SAN File System engine. Remote Access support can help to greatly reduce service costs and shorten repair times, which in turn will reduce the impact of any SAN File System failures on business. Remote Access provides a support engineer with full access to the SAN File System console, after a request initiated by the customer. The access is via a secure VPN connection, using IBM VPN Connection Manager. This allows the support engineer to query and control the SAN File System MDS and to access metadata, log, dump, and configuration data, using the CLI. While the support engineer is accessing the SAN File System, the customer is able to monitor their progress via the Master Console display.2.5.7 Global namespace In most file systems, a typical file hierarchy is represented as a series of folders or directories that form a tree-like structure. Each folder or directory could contain many other folders or directories, file objects, or other file system objects, such as symbolic links or hard links. Every file system object has a name associated with it, and it is represented in the namespace as a node of the tree. SAN File System introduces a new file system object, called a fileset. A fileset can be viewed as a portion of the tree-structured hierarchy (or global namespace). It is created to divide the global namespace into a logical, organized structure. Filesets attach to other directories in the hierarchy, ultimately attaching through the hierarchy to the root of the SAN File System cluster mount point. The collection of filesets and its content in SAN File System combine to form the global namespace. Fileset boundaries are not visible to the clients. Only a SAN File System administrator can see them. From a client’s perspective, a fileset appears as a regular directory or folder within which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories to which filesets are attached. The global namespace is the key to the SAN File System. It allows common access to all files and directories by all clients if required, and ensures that the SAN File System clients have both consistent access and a consistent view of the data and files managed by SAN File System. This reduces the need to store and manage duplicate copies of data, and simplifies the backup process. Of course, security mechanisms, such as permissions and ACLs, will restrict visibility of files and directories. In addition, access to specific storage pools and filesets can be restricted by the use of non-uniform SAN File System configurations, as described in 3.3.2, “Non-uniform SAN File System configuration” on page 69. How the global namespace is organized The global namespace is organized into filesets, and each fileset is potentially available to the client-accessible global namespace at its attach point. 
An administrator is responsible for creating filesets and attaching them to directories in the global namespace, which can be done at multiple levels. Figure 2-4 on page 47 shows a sample global namespace. An attach point appears to a SAN File System client as a directory in which it can create files and
folders (permissions permitting). From the MDS perspective, the filesets allow the metadata workload to be split between all the servers in the cluster.
Note: Filesets can be organized in any way desired, to reflect enterprise needs.
Figure 2-4 Global namespace (the ROOT default fileset at the top of the SAN File System tree, with additional filesets /HR, /Finance, /CRM, and /Manufacturing attached below it)
For example, the root fileset (for example, ROOT) is attached to the root level in the namespace hierarchy (for example, sanfs), and the filesets are attached below it (that is, HR, Finance, CRM, and Manufacturing). The client would simply see four subdirectories under the root directory of the SAN File System. By defining the path of a fileset's attach point, the administrator also automatically defines its nesting level in relationship to the other filesets.
2.5.8 Filesets
A fileset is a subset of the entire SAN File System global namespace. It serves as the unit of workload for each MDS, and also dictates the overall organizational structure for the global namespace. It is also a mechanism for controlling the amount of space occupied by SAN File System clients. Filesets can be created based on workflow patterns, security, or backup considerations, for example. You might want to create a fileset for all the files used by a specific application, or associated with a specific client.
The fileset is used not only for managing the storage space, but also as the unit for creating FlashCopy images (see 2.5.12, "FlashCopy" on page 58). Correctly defined filesets mean that you can take a FlashCopy image of all the files in a fileset together in a single operation, thus providing a consistent image for all of those files. A key part of SAN File System design is organizing the global namespace into filesets that match the data management model of the enterprise. Filesets can also be used as criteria in the placement of individual files within the SAN File System (see 2.5.10, "Policy based storage and data management" on page 49).
Tip: Filesets are assigned to an MDS either statically (that is, by specifying an MDS to serve the fileset when it is created) or dynamically. If dynamic assignment is chosen, automatic simple load balancing will be done. If using static fileset assignment, consider the overall I/O loads on the SAN File System cluster. Since each fileset is assigned to one (and only one) MDS at a time for serving the metadata, you will want to balance the load across all MDSs in the cluster by assigning filesets appropriately.
More information about filesets is given in 7.5, "Filesets" on page 286.
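Because each fileset is served by exactly one MDS at a time, spreading filesets across the cluster is what spreads the metadata workload. The sketch below illustrates the idea with a simple round-robin assignment; it is only a stand-in for the product's static or dynamic assignment, and the fileset and server names are invented.

from itertools import cycle

def assign_filesets(filesets, servers):
    """Map each fileset to a single MDS, round-robin across the cluster."""
    assignment = {}
    rr = cycle(servers)
    for fs in filesets:
        assignment[fs] = next(rr)
    return assignment

filesets = ["ROOT", "HR", "Finance", "CRM", "Manufacturing"]
cluster = ["mds1", "mds2", "mds3"]
table = assign_filesets(filesets, cluster)
for fs, mds in table.items():
    print(f"fileset {fs:<13} metadata served by {mds}")
# A client asking any MDS about /sanfs/Finance would be directed to table["Finance"].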
  • 75. An administrator creates filesets and attaches them at specific locations below the global fileset. An administrator can also attach a fileset to another fileset. When a fileset is attached to another fileset, it is called a nested fileset. In Figure 2-5, fileset1 and fileset2 are the nested filesets of parent fileset Winfiles. Note: In general, we do not recommend creating nested filesets; see 7.5.2, “Nested filesets” on page 289 for the reasons why. / ( ROOT ) /HR /UNIXfiles /Winfiles /Manufacturing (filesets) fileset1 fileset2 (nested filesets) Figure 2-5 Filesets and nested filesets Here we have shown several filesets, including filesets called UNIXfiles and Winfiles. We recommend separating filesets by their “primary allegiance” of the operating system. This will facilitate file sharing (see “Sharing files” on page 54 for more information). Separation of filesets also facilitates backup, since if you are using file-based backup methods (for example, tar, Windows Backup vendor products like VERITAS NetBackup, or IBM Tivoli Storage Manager), full metadata attributes of Windows files can only be backed up from a Windows backup client, and full metadata attributes of UNIX files can only be backed up from a UNIX backup client. See Chapter 12, “Protecting the SAN File System environment” on page 477 for more information. When creating a fileset, an administrator can specify a maximum size for the fileset (called a quota) and specify whether SAN File System should generate an alert if the size of the fileset reaches or exceeds a specified percentage of the maximum size (called a threshold). For example, if the quota on the fileset was set at 100 GB, and the threshold was 80%, an alert would be raised once the fileset contained 80 GB of data. The action taken when the fileset reaches its quota size (100 GB in this instance) depends on whether the quota is defined as hard or soft. If a hard quota is used, once the threshold is reached, any further requests from a client to add more space to the fileset (by creating or extending files) will be denied. If a soft quota is used, which is the default, more space can be allocated, but alerts will continue to be sent. Of course, once the amount of physical storage available to SAN File System is exceeded, no more space can be used. The quota limit, threshold, and quota type can be set differently and individually for each fileset.2.5.9 Storage pools A storage pool is a collection of SAN File System volumes that can be used to store either metadata or file data. A storage pool consists of one or more volumes (LUNs from the back-end storage system perspective) that provide, for example, a desired quality of service for a specific use, such as to store all files for a particular application. An administrator must assign one or more volumes to a storage pool before it can be used.48 IBM TotalStorage SAN File System
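Referring back to the fileset quota behavior described above, the following sketch (illustrative only; the function and its default values are not part of SAN File System) shows the difference between the threshold alert and the hard versus soft quota actions.

def check_allocation(used_gb, request_gb, quota_gb=100, threshold_pct=80, hard=False):
    """Return (allowed, messages) for a request to grow a fileset by request_gb."""
    new_total = used_gb + request_gb
    messages = []
    if new_total >= quota_gb * threshold_pct / 100:
        messages.append(f"alert: fileset at {new_total / quota_gb:.0%} of its {quota_gb} GB quota")
    if new_total > quota_gb:
        if hard:
            return False, messages + ["denied: hard quota exceeded"]
        messages.append("warning: soft quota exceeded, allocation still allowed")
    return True, messages

print(check_allocation(75, 10))               # allowed, with a threshold alert at 85%
print(check_allocation(95, 10, hard=True))    # denied under a hard quota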
  • 76. SAN File System has two types of storage pools (System and User), as shown in Figure 2-6. SAN File System System User User User Pool Pool1 Pool2 Pool3 Default User Pool Figure 2-6 SAN File System storage pools System Pool The System Pool contains the system metadata (system attributes, configuration information, and MDS state) and file metadata (file attributes and locations) that is accessible to all MDSs in the server cluster. There is only one System Pool, which is created automatically when SAN File System is installed with one or more volumes specified as a parameter to the install process. The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended. The RAID configuration should have a low ratio of data to parity disks, and hot spares should also be available, to minimize the amount of time to recover from a single disk failure. Remote mirroring solutions, such as MetroMirror, available on the IBM TotalStorage SAN Volume Controller, DS6000, and DS8000, are also possible. User Pools User Pools contain the blocks of data that make up user files. Administrators can create one or more user storage pools, and then create policies containing rules that cause the MDS servers to store data for specific files in the appropriate storage pools. A special User Pool is the default User Pool. This is used to store the data for a file if the file is not assigned to a specific storage pool by a rule in the active file placement policy. One User Pool, which is automatically designated the default User Pool, is created when SAN File System is installed. This can be changed by creating another User Pool and setting it to the default User Pool. The default pool can also be disabled if required.2.5.10 Policy based storage and data management SAN File System provides automatic file placement, at the time of creation, through the use of polices and storage pools. An administrator can create quality-of-service storage pools that are available to all users, and define rules in file placement policies that cause newly created files to be placed in the appropriate storage pools automatically. SAN File System also provides file lifecycle management through the use of file management policies. File placement policy A file placement policy is a list of rules that determines where the data for specific files is stored. A rule is an SQL-like statement that tells a SAN File System MDS to place the data for a file in a specific storage pool if the file attribute that the rule specifies meets a particular condition. A rule can apply to any file being created, or only to files being created within a specific fileset, depending on how it is defined. Chapter 2. SAN File System overview 49
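The first-match behavior of placement rules, elaborated on the next pages and in Figure 2-7, can be mimicked in a few lines. The real rules are SQL-like statements managed by the MDS cluster; the Python predicates below are only an illustration, using the example rules shown in Figure 2-7.

from fnmatch import fnmatch

rules = [
    (lambda fileset, name: fileset == "HR",         "User Pool 1"),
    (lambda fileset, name: fnmatch(name, "*.bak"),  "User Pool 4"),
    (lambda fileset, name: fnmatch(name, "DB2.*"),  "User Pool 2"),
    (lambda fileset, name: fnmatch(name, "*.tmp"),  "User Pool 3"),
]

def place(fileset, name, default_pool="Default User Pool"):
    for condition, pool in rules:
        if condition(fileset, name):
            return pool          # first matching rule wins; later matches are ignored
    return default_pool          # no rule matched

print(place("HR",  "dsn1.bak"))    # User Pool 1 (the /HR rule matches before *.bak)
print(place("CRM", "dsn2.bak"))    # User Pool 4
print(place("CRM", "report.txt"))  # Default User Pool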
  • 77. A storage pool is a named set of storage volumes that can be specified as the destination for files in rules. Only User Pools are used to store file data. The rules in a file placement policy are processed in order until the condition in one of the rules is met. The data for the file is then stored in the specified storage pool. If none of the conditions specified in the rules of the policy is met, the data for the file is stored in the default storage pool.
Figure 2-7 shows an example of how file placement policies work. The figure shows a sequence of rules defined in the policy, and underneath each storage pool is a list of some files that will be placed in it, according to the policy. For example, the file /HR/dsn.bak matches the first rule (put all files in the fileset /HR into User Pool 1) and is therefore put into User Pool 1. The fact that it also matches the second rule is irrelevant, because only the first matching rule is applied. See 7.8, "File placement policy" on page 304 for more information.
Figure 2-7 File placement policy execution (rules: files in fileset /HR go into User Pool 1; *.bak files go into User Pool 4; DB2.* files go into User Pool 2; *.tmp files go into User Pool 3 — so, for example, /HR/dsn1.txt, /HR/DB2.pgm, and /HR/dsn1.bak are placed in User Pool 1, /CRM/DB2.pgm in User Pool 2, /CRM/dsn3.tmp in User Pool 3, and /CRM/dsn2.bak and /Finance/dsn4.bak in User Pool 4)
The file placement policy can also optionally contain preallocation rules. These rules, available with SAN File System V2.2.2, allow a system administrator to automatically preallocate space for designated files, which can improve performance. See 7.8.7, "File storage preallocation" on page 324 for more information about preallocation.
File management policy and lifecycle management
SAN File System Version 2.2 introduced a lifecycle management function. This allows administrators to specify how files should be automatically moved among storage pools during their lifetime, and, optionally, specify when files should be deleted. The business value of this feature is that it improves storage space utilization, allowing a balanced use of premium and inexpensive storage matching the objectives of the enterprise. For example, an enterprise may have two types of storage devices: one with higher speed, reliability, and cost, and one with lower speed, reliability, and cost. Lifecycle management in SAN File System could be used to automatically move infrequently accessed files from the more
  • 78. expensive storage to cheaper storage, or vice versa, for more critical files. Lifecycle management reduces the manual intervention necessary in managing space utilization and therefore also reduces the cost of management. Lifecycle management is set up via file management policies. A file management policy is a set of rules controlling the movement of files among different storage pools. Rules are of two types: migration and deletion. A migration rule will cause matching files to be moved from one storage pool to another. A deletion rule will cause matching files to be deleted from the SAN File System global namespace. Migration and deletion rules can be specified based on pool, fileset, last access date, or size criteria. The system administrator defines these rules in a file management policy, then runs a special script to act on the rules. The script can be run in a planning mode to determine in advance what files would be migrated/deleted by the script. The plan can optionally be edited by the administrator, and then passed back for execution by the script so that the selected files are actually migrated or deleted. For more information, see Chapter 10, “File movement and lifecycle management” on page 435.2.5.11 Clients SAN File System is based on a client-server design. A SAN File System client is a computer that accesses and creates data that is stored in the SAN File System global namespace. The SAN File System is designed to support the local file system interfaces on UNIX, Linux, and Windows servers. This means that the SAN File System is designed to be used without requiring any changes to your applications or databases that use a file system to store data. The SAN File System client for AIX, Sun Solaris, Red Hat, and SUSE Linux use the virtual file system interface within the local operating system to provide file system interfaces to the applications running on AIX, Sun Solaris, Red Hat, and SUSE Linux. The SAN File System client for Microsoft Windows (supported Windows 2000 and 2003 editions) uses the installable file system interface within the local operating system to provide file system interfaces to the applications. Clients access metadata (such as a files location on a storage device) only through a MDS, and then access data directly from storage devices attached to the SAN. This method of data access eliminates server bottlenecks and provides read and write performance that is comparable to that of file systems built on bus-attached, high-performance storage. SAN File System currently supports clients that run these operating systems: AIX 5L Version 5.1 (32-bit uniprocessor or multiprocessor). The bos.up or bos.mp packages must be at level 5.1.0.58, plus APAR IY50330 or higher. AIX 5L Version 5.2 (32-bit and 64-bit). The bos.up package must be at level 5.2.0.18 or later. The bos.mp package must be at level 5.2.0.18 or later. APAR IY50331 or higher is required. AIX 5L Version 5.3 (32-bit or 64-bit). Windows 2000 Server and Windows 2000 Advanced Server with Service Pack 4 or later. Windows 2003 Server Standard and Enterprise Editions with Service Pack 1 or later. VMWare ESX 2.0.1 running Windows only. Red Hat Enterprise Linux 3.0 AS, ES, and WS, with U2 kernel 2.4.21-15.0.3 hugemem, smp or U4 kernel 2.4.21-27 hugemem, and smp on x86 systems. Chapter 2. SAN File System overview 51
  • 79. SUSE Linux Enterprise Server 8.0 on kernel level 2.4.21-231 (Service Pack 3) kernel level 2.4.21-278 (Service Pack 4) on x86 servers (32-bit). SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on pSeries (64-bit). SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on zSeries (31-bit). Sun Solaris 9 (64-bit) on SPARC servers. Note: The AIX client is supported on pSeries systems with a maximum of eight processors. The Red Hat client is supported on either the SMP or Hugemem kernel, with a maximum of 4 GB of main memory. The zSeries SUSE 8 SAN File System client uses the zFCP driver and supports access to ESS, DS6000, and DS8000 for user LUNs. SAN File System client software must be installed on each AIX, Windows, Solaris, SUSE, or Red Hat client. On an AIX, Linux, and Solaris client, the software is a virtual file system (VFS), and on a Windows client, it is an installable file system (IFS). The VFS and IFS provide clients with local access to the global namespace on the SAN. Note that clients can also act as servers to a broader clientele. They can provide NFS or CIFS access to the global namespace to LAN-attached clients and can host applications such as database servers. A VFS is a subsystem of an AIX/Linux/Solaris client’s virtual file system layer, and an IFS is a subsystem of a Windows client’s file system. The SAN File System VFS or IFS directs all metadata operations to an MDS and all data operations to storage devices attached to a SAN. The SAN File System VFS or IFS provides the metadata to the clients operating system and any applications running on the client. The metadata looks identical to metadata read from a native, locally attached file system, that is, it emulates the local file system semantics. Therefore, no change is necessary to the client applications access methods to use SAN File System. When the global namespace is mounted on an AIX/Linux/Solaris client, it looks like a local file system. When the global namespace is mounted on a Windows client, it appears as another drive letter and looks like an NTFS file system. Files can therefore be shared between Windows and UNIX clients (permissions and suitable applications permitting). Clustering SAN File System V2.2.2 supports clustering software running on AIX, Solaris, and Microsoft clients. AIX clients HACMP™ is supported on SAN File System clients running AIX 5L V5.1, V5.2, and V5.3, when the appropriate maintenance levels are installed. Solaris clients Solaris client clustering is supported when used with Sun Cluster V3.1. Sun clustered applications can use SAN File System provided that the SAN File System is declared to the cluster manager as a Global File System. Likewise, non-clustered applications are supported when Sun Cluster is present on the client. Sun Clusters can also be used as an NFS server, as the NFS service will fail over using local IP connectivity.52 IBM TotalStorage SAN File System
• 80. Microsoft clients
Microsoft client clustering is supported for Windows 2000 and Windows 2003 clients with MSCS (Microsoft Cluster Server), using a maximum of two client nodes per cluster.
Caching metadata, locks, and data
Caching allows a client to achieve low-latency access to both metadata and data. A client can cache metadata to perform multiple metadata reads locally. The metadata includes mapping of logical file system data to physical addresses on storage devices attached to a SAN. A client can also cache locks to allow the client to grant multiple opens to a file locally without having to contact a MDS for each operation that requires a lock. In addition, a client can cache data for small files to eliminate I/O operations to storage devices attached to a SAN. A client performs all data caching in memory. Note that if there is not enough space in the client’s cache for all of the data in a file, the client simply reads the data from the shared storage device on which the file is stored. Data access is still fast because the client has direct access to all storage devices attached to a SAN.
Using the direct I/O mode
Some applications, such as database management systems, use their own sophisticated cache management systems. For such applications, SAN File System provides a direct I/O mode. In this mode, SAN File System performs direct writes to disk, and bypasses local file system caching. Using the direct I/O mode makes files behave more like raw devices. This gives database systems direct control over their I/O operations, while still providing the advantages of SAN File System, such as SAN File System FlashCopy. Applications need to be aware of (and configured for) direct I/O. IBM DB2 UDB supports direct I/O (see 14.5, “Direct I/O support” on page 558 for more information). On the Intel Linux (IA32) releases supported with the SAN File System V2.2.2 client, support is provided for the POSIX direct I/O file system interface calls.
Virtual I/O
The SAN File System 2.2.2 client for AIX 5L V5.3 will interoperate with Virtual I/O (VIO) devices. VIO enables virtualization of storage across LPARs in a single POWER5™ system. SAN File System support for VIO enables SAN File System clients to use data volumes that can be accessed through VIO. In addition, all other SAN File System clients will interoperate correctly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients. Version 1.2.0.0 of VIO is supported by SAN File System.
Restriction: SAN File System does not support the use of Physical Volume Identifier (PVID) in order to export a LUN/physical volume (for example, hdisk4) on a VIO Server. To list devices with a PVID, type lspv. If the second column has a value of none, the physical volume does not have a PVID.
For a description of driver configurations that require the creation of a volume label, see “What are some of the restrictions and limitations in the VIOS environment?” on the VIOS Web site at:
http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/faq.html
Chapter 2. SAN File System overview 53
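In connection with the PVID restriction above, the check is simply to look at the second column of the lspv output. A minimal sketch using standard AIX commands is shown below; hdisk4 is only an illustrative device name, and on the VIO Server's padmin restricted shell the command options differ slightly, so consult the VIOS FAQ referenced above before clearing a PVID.

# List physical volumes; the second column shows the PVID, or "none" if there is no PVID
lspv
# If a disk to be exported carries a PVID and is not part of a volume group,
# the PVID can be cleared from AIX with chdev (verify first that the disk is not in use)
chdev -l hdisk4 -a pv=clear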
• 81. Sharing files
In a homogeneous environment (either all UNIX or all Windows clients), SAN File System provides access and semantics that are customized for the operating system running on the clients. When files are created and accessed from only Windows clients, all the security features of Windows are available and enforced. When files are created and accessed from only UNIX clients, all the security features of UNIX are available and enforced.
In Version 2.2 of SAN File System (and beyond), the heterogeneous file sharing feature improves the flexibility and security involved in sharing files between Windows and UNIX based environments. The administrator defines and manages a set of user map entries using the CLI or GUI, which specifies a UNIX domain-qualified user and a Windows domain-qualified user that are to be treated as equivalent for the purpose of validating file access permissions. Once these mappings are defined, the SAN File System automatically accesses the Active Directory Server (Windows) and either LDAP or Network Information Service (NIS) on UNIX to cross-reference the user ID and group membership. See 8.3, “Advanced heterogeneous file sharing” on page 347 for more information about heterogeneous file sharing.
If no user mappings are defined, then heterogeneous file sharing (where there are both UNIX and Windows clients) is handled in a restricted manner. When files created on a UNIX client are accessed by a non-mapped user on a Windows client, the access available is the same as that granted by the “Other” permission bits in UNIX. Similarly, when files created on a Windows client are accessed by a non-mapped user on a UNIX client, the access available is the same as that granted to the “Everyone” user group in Windows.
If the improved heterogeneous file sharing capabilities (user mappings) are not implemented by the administrator, then file sharing is positioned primarily for homogeneous environments. The ability to share files heterogeneously is recommended for read-only use, that is, create files on one platform, and provide read-only access on the other platform. To this end, filesets should be established so that they have a “primary allegiance”. This means that certain filesets will have files created in them only by Windows clients, and other filesets will have files created in them only by UNIX clients.
How clients access the global namespace
SAN File System clients mount the global namespace onto their systems. After the global namespace is mounted on a client, users and applications can use it just as they do any other file system to access data and to create, update, and delete directories and files. On a UNIX-based client (including AIX, Solaris, and Linux), the global namespace looks like a local UNIX file system. On a Windows client, it appears as another drive letter and looks like any other local NTFS file system. Basically, the global namespace looks and acts like any other file system on a client’s system. There are some restrictions on NTFS features supported by SAN File System (see “Windows client restrictions” on page 56).
Figure 2-8 on page 55 shows the My Computer view from a Windows 2000 client: The S: drive (labelled sanfs) is the attach point of the SAN File System. A Windows 2003 client will see a similar display.
54 IBM TotalStorage SAN File System
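Returning briefly to the non-mapped sharing behavior described above: for a file created on a UNIX client, the UNIX “Other” permission bits decide what a non-mapped Windows user can do with it. A small illustration using standard UNIX commands follows; the path and file name are examples only, based on the /sfs mount point and USERS fileset shown elsewhere in this chapter.

# A file created on a UNIX client with no "Other" permissions:
# non-mapped Windows users get no access
ls -l /sfs/sanfs/USERS/report.txt
# -rw-r-----   1 appuser  staff   4096 ...   report.txt
# Open the "Other" read bit to give non-mapped Windows users read-only access
chmod o+r /sfs/sanfs/USERS/report.txt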
• 82. Figure 2-8 Windows 2000 client view of SAN File System
If we expand the S: drive in Windows Explorer, we can see the directories underneath (Figure 2-9 shows this view). There are a number of filesets available, including the root fileset (top level) and two filesets under the root (USERS and userhomes). However, the client is not aware of this; it simply sees the filesets as regular folders. The hidden directory, .flashcopy, is part of the fileset and is used to store FlashCopy images of the fileset. More information about FlashCopy is given in 2.5.12, “FlashCopy” on page 58 and 9.1, “SAN File System FlashCopy” on page 376.
Figure 2-9 Exploring the SAN File System from a Windows 2000 client
Chapter 2. SAN File System overview 55
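Because the .flashcopy directory mentioned above is visible (though hidden by default) to any client with suitable permissions, older versions of files can be read or copied straight out of it. A small illustration from a UNIX client is shown below; the image directory and file names are examples only, and 9.1, “SAN File System FlashCopy” on page 376 describes how images are actually created and managed.

# List the FlashCopy images available for the USERS fileset (illustrative names)
ls -a /sfs/sanfs/USERS/.flashcopy
# Copy an older version of a file back into the live fileset, permissions permitting
cp /sfs/sanfs/USERS/.flashcopy/image_062403/reports/budget.xls /sfs/sanfs/USERS/reports/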
• 83. Example 2-2 shows the AIX mount point for the SAN File System, namely SANFS. It is mounted on the directory /sfs. Other UNIX-based clients see a similar output from the df command. A listing of the SAN File System namespace base directory shows the same directory or folder names as in the Windows output. The key thing here is that all SAN File System clients, whether Windows or UNIX, will see essentially the same view of the global namespace.
Example 2-2 AIX/UNIX mount point of the SAN File System
Rome:/ >df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           65536     46680   29%     1433     9% /
/dev/hd2         1310720     73752   95%    21281    13% /usr
/dev/hd9var        65536     52720   20%      455     6% /var
/dev/hd3          131072    103728   21%       59     1% /tmp
/dev/hd1           65536     63368    4%       18     1% /home
/proc                  -         -     -        -     -  /proc
/dev/hd10opt       65536     53312   19%      291     4% /opt
/dev/lv00        4063232   1648688   60%      657     1% /usr/sys/inst.images
SANFS          603095040 591331328    2%        1     1% /sfs
Rome:/ > cd /sfs/sanfs
Rome:/ > ls
.flashcopy  aix51  aixfiles  axi51  files  lixfiles  lost+found  smallwin  testdir  tmp  userhomes  USERS  winfiles  winhome
Some client restrictions
There are certain restrictions in the current release for SAN File System clients.
Use of MBCS
Multi-byte characters (MBCS) can now be used (from V2.2 onwards) in pattern matching in file placement policies and for fileset attach point directories. MBCS are not supported in the names of storage pools and filesets. Likewise, MBCS cannot be used in the SAN File System cluster name, which appears in the namespace as the root fileset attach point directory name (for example, /sanfs), or in the fileset administrative object name (as opposed to the fileset directory attach point).
UNIX client restriction
UNIX clients cannot use user IDs or group IDs 999999 and 1000000 for real users or groups; these are reserved IDs used internally by SAN File System.
Note: To avoid any conflicts with your current use of IDs, the reserved user IDs can be configured once at installation time.
Windows client restrictions
The SAN File System is natively case-sensitive. However, Windows applications can choose to use case-sensitive or case-insensitive names. This means that case-sensitive applications, such as those making use of Windows support for POSIX interfaces, behave as expected. Native Win32® clients (such as Windows Explorer) get only case-aware semantics. The case specified at the time of file creation is preserved, but in general, file names are case-insensitive. For example, Windows Explorer allows the user to create a file named Hello.c, but an attempt to create hello.c in the same folder will fail because the file already exists. If a Windows-based client accesses a folder that contains two files that are created on
56 IBM TotalStorage SAN File System
• 84. a UNIX-based client with names that differ only in case, its inability to distinguish between the two files may lead to undesirable results. For this reason, it is not recommended for UNIX clients to create case-differentiated files in filesets that will be accessed by Windows clients.
The following features of NTFS are not currently supported by SAN File System:
File compression on either individual files or all files within a folder.
Extended attributes.
Reparse points.
Built-in file encryption on files and directories.
Quotas; however, quotas are provided by SAN File System filesets.
Defragmentation and error-checking tools (including CHKDSK).
Alternate data streams.
Assigning an access control list (ACL) for the entire drive.
NTFS change journal.
Scanning all files/directories owned by a particular SID (FSCTL_FIND_FILES_BY_SID).
Security auditing or SACLs.
Windows sparse files.
Windows Directory Change Notification.
Applications that use the Directory Change Notification feature may stop running when a file system does not support this feature, while other applications will continue running.
The following applications stop running when Directory Change Notification is not supported by the file system:
Microsoft applications
– ASP.net
– Internet Information Server (IIS)
– The SMTP Service component of Microsoft Exchange
Non-Microsoft application
– Apache Web server
The following application continues to run when Directory Change Notification is not supported by the file system:
Windows Explorer. Note that when changes to files occur by other processes, the changes will not be automatically reflected until a manual refresh is done or the file folder is reopened.
In addition to the above limitations, note these differences:
Programs that open files using the 64-bit file ID (the FILE_OPEN_BY_FILE_ID option) will fail. This applies to the NFS server bundled with Microsoft Services for UNIX.
Symbolic links created on UNIX-based clients are handled specially by SAN File System on Windows-based clients; they appear as regular files with a size of 0, and their contents cannot be accessed or deleted.
Batch oplocks are not supported. LEVEL_1, LEVEL_2 and Filter types are supported.
Chapter 2. SAN File System overview 57
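The UNIX and Windows client restrictions above suggest two quick checks before putting a fileset into shared use: make sure no existing local users or groups already use the reserved IDs, and look for file names that differ only in case before the fileset is accessed by Windows clients. These are generic UNIX commands; the fileset path is an example only.

# Check for existing accounts or groups that clash with the reserved IDs 999999 and 1000000
awk -F: '$3 == 999999 || $3 == 1000000' /etc/passwd
awk -F: '$3 == 999999 || $3 == 1000000' /etc/group
# List path names that differ only in case within a fileset (no output means no collisions)
find /sfs/sanfs/USERS -print | tr '[:upper:]' '[:lower:]' | sort | uniq -d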
• 85. Differences between SAN File System and NTFS
SAN File System differs from Microsoft Windows NT® File System (NTFS) in its degree of integration into the Windows administrative environment. The differences are:
Disk management within the Microsoft Management Console shows SAN File System disks as unallocated.
SAN File System does not support reparse points or extended attributes.
SAN File System does not support the use of the standard Windows write signature on its disks.
Disks used for the global namespace cannot sleep or hibernate.
SAN File System also differs from NTFS in its degree of integration into Windows Explorer and the desktop. The differences are:
Manual refreshes are required when updates to the SAN File System global namespace are initiated on the metadata server (such as attaching a new fileset).
The recycle bin is not supported.
You cannot use distributed link tracking. This is a technique through which shell shortcuts and OLE links continue to work after the target file is renamed or moved. Distributed link tracking can help a user locate the link sources in case the link source is renamed or moved to another folder on the same or different volume on the same PC, or moved to a folder on any PC in the same domain.
You cannot use NTFS sparse-file APIs or change journaling. This means that SAN File System does not provide efficient support for the indexing services accessible through the Windows “Search for files or folders” function. However, SAN File System does support implicitly sparse files.

2.5.12 FlashCopy
A FlashCopy image is a space-efficient, read-only copy of the contents of a fileset in a SAN File System global namespace at a particular point in time. A FlashCopy image can be used with standard backup tools available in a user’s environment to create backup copies of files onto tapes. A FlashCopy image can also be quickly “reverted”, that is, the current fileset contents can be rolled back to an available FlashCopy image.
When creating FlashCopy images, an administrator specifies which fileset to create the FlashCopy image for. The FlashCopy image operation is performed individually for each fileset. A FlashCopy image is simply an image of an entire fileset (and just that fileset, not any nested filesets) as it exists at a specific point in time. An important benefit is that during creation of a FlashCopy image, all data remains online and available to users and applications. The space used to keep the FlashCopy image is included in its overall fileset space; however, a space-efficient algorithm is used to minimize the space requirement. The FlashCopy image does not include any nested filesets within it. You can create and maintain a maximum of 32 FlashCopy images of any fileset. See 9.1, “SAN File System FlashCopy” on page 376 for more information about SAN File System FlashCopy.
Figure 2-10 on page 59 shows how a FlashCopy image can be seen on a Windows client. In this case, a FlashCopy image was made of the fileset container_A, and specified to be created in the directory 062403image. The fileset has two top-level directories, DRIVERS and Adobe. After the FlashCopy image is made, a subdirectory called 062403image appears in the special directory .flashcopy (which is hidden by default) underneath the root of the fileset. This directory contains the same folders as the actual fileset, that is, DRIVERS and Adobe, and all the file/folder structure underneath. It is simply frozen at the time the image was taken.
58 IBM TotalStorage SAN File System
• 86. Therefore, clients have file-level access to these images, to access older versions of files, or to copy individual files back to the real fileset if required, and if permissions on the flashcopy folder are set appropriately.
Figure 2-10 FlashCopy images

2.5.13 Reliability and availability
Reliability is defined as the ability of SAN File System to perform to its specifications without error. This is critical for a system that will store corporate data. Availability is the ability to stay up and running, plus the ability to transparently recover to maintain the available state. SAN File System has many built-in features for reliability and availability. The SAN File System operates in a cluster. Each MDS engine supplied by the client is required to have the following features for availability:
Dual hardware components:
– Hardware mirrored internal disk drives
– Dual Fibre Channel ports supporting multi-path I/O for storage devices
Chapter 2. SAN File System overview 59
  • 87. Remote Supervisor Adapter II (RSA II). The RSA-II provides remote access to the engine’s desktop, monitoring of environmental factors, and engine restart capability. The RSA card communicates with the service processors on the MDS engines in the cluster to collect hardware information and statistics. The RSA cards also communicate with the service processors to enable remote management of the servers in the cluster, including automatic reboot if a server hang is detected. More information about the RSA card can be found in 13.5, “Remote Supervisor Adapter II” on page 537. To improve availability, the MDS hardware also needs the following dual redundant features: Dual power supplies. Dual fans. Dual Ethernet connections with network bonding enabled. Bonding network interfaces together allows for increased failover in high availability configurations. Beginning with V2.2.2, SAN File System supports network bonding with SLES8 SP 4 and SLES 9 SP 1. Redundant Ethernet support on each MDS enables the full redundancy of the IP network between the MDSs in the cluster as well as between the SAN File System Clients and the MDSs. The dual network interfaces in each MDS are combined redundantly servicing a single IP address. – Each MDS still uses only one IP address. – One interface is used for IP traffic unless the interface fails, in which case IP service is failed over to the other interface. – The time to fail over an IP service is on the order of a second or two. The change is transparent to SAN File System. – No change to client configuration is needed. We also strongly recommend UPS systems to protect the SAN File System engines. Automatic restart from software problems SAN File System has the availability functions to monitor, detect, and recover from faults in the cluster. Failures in SAN File System can be categorized into two types: software faults that affect MDS software components, and hardware faults that affect hardware components. Software faults Software faults are server errors or failures for which recovery is possible via a restart of the server process without manual administrative intervention. SAN File System detects and recovers from software faults via a number of mechanisms. An administrative watchdog process on each server monitors the health of the server and restarts the MDS processes in the event of failure, typically within about 20 seconds of the failure. If the operating system of an MDS hangs, it will be ejected from the cluster once the MDS stops responding to other cluster members. A surviving cluster member will raise an event and SNMP trap, and will use the RSA card to restart the MDS that was hung. Hardware faults Hardware faults are server failures for which recovery requires administrative intervention. They have a greater impact than software faults and require at least a machine reboot and possibly physical maintenance for recovery. SAN File System detects hardware faults by way of a heartbeat mechanism between the servers in a cluster. A server engine that experiences a hardware fault stops responding to heartbeat messages from its peers. Failure of a server to respond for a long enough period of60 IBM TotalStorage SAN File System
• 88. time causes the other servers to mark it as being down and to send administrative SNMP alerts.
Automatic fileset and master role failover
SAN File System supports the nondisruptive, automatic failover of the workload (filesets). If any single MDS fails or is manually stopped, SAN File System automatically redistributes the filesets of that MDS to surviving MDSs and, if necessary, reassigns the master role to another MDS in the cluster. SAN File System also uses automatic workload failover to provide nondisruptive maintenance for the MDSs. 9.5, “MDS automated failover” on page 413 contains more information about SAN File System failover.

2.5.14 Summary of major features
To summarize, SAN File System provides the following features.
Direct data access by exploitation of SAN technology
SAN File System uses a data access model that allows client systems to access data directly from storage systems using a high-bandwidth SAN, without interposing servers. Direct data access helps eliminate server bottlenecks and provides the performance necessary for data-intensive applications.
Global namespace
SAN File System presents a single global namespace view of all files in the system to all of the clients, without manual, client-by-client configuration by the administrator. A file can be identified using the same path and file name, regardless of the system from which it is being accessed. The single global namespace shared directly by clients also reduces the requirement for data replication. As a result, the productivity of the administrator as well as the users accessing the data is improved. It is possible to restrict access to the global namespace by using a non-uniform SAN File System configuration. In this way, only certain SAN File System volumes and therefore filesets will be available to each client. See 3.3.2, “Non-uniform SAN File System configuration” on page 69 for more information.
File sharing
SAN File System is specifically designed to be easy to implement in virtually any operating system environment. All systems running this file system, regardless of operating system or hardware platform, potentially have uniform access to the data stored (under the global namespace) in the system. File metadata, such as last modification time, are presented to users and applications in a form that is compatible with the native file system interface of the platform. SAN File System is also designed to allow heterogeneous file sharing among the UNIX and Windows client platforms with full locking and security capabilities. By enabling this capability, heterogeneous file sharing with SAN File System increases in performance and flexibility.
Chapter 2. SAN File System overview 61
  • 89. Policy based automatic placement SAN File System is aimed at simplifying the storage resource management and reducing the total cost of ownership by the policy based automatic placement of files on appropriate storage devices. The storage administrator can define storage pools depending on specific application requirements and quality of services, and define rules based on data attributes to store the files at the appropriate storage devices automatically. Lifecycle management SAN File System provides the administrator with policy based data management that automates the management of data stored on storage resources. Through the policy based movement of files between storage pools and the policy based deletion of files, there is less effort needed to update the location of files or sets of files. Free space within storage pools will be more available as potentially older files are removed. The overall cost of storage can be reduced by using this tool to manage data between high/low performing storage based on importance of the data.62 IBM TotalStorage SAN File System
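As a rough illustration of the policy-based placement described above, a placement policy is a set of rules that is evaluated when a file is created, directing it to a storage pool based on attributes such as its name or fileset. The sketch below assumes the SQL-like rule syntax covered in the policy and lifecycle chapters of this book; the rule, pool, and file name patterns are examples only, and the exact syntax and the command used to activate a policy should be taken from those chapters.

# Illustrative placement rules only - verify the exact rule syntax before use
cat > placement.rules <<'EOF'
RULE 'tmpfiles' SET STGPOOL 'sata_pool' WHERE NAME LIKE '%.tmp'
RULE 'mpegs'    SET STGPOOL 'sata_pool' WHERE NAME LIKE '%.mpg'
EOF
# Files that match no rule are placed in the default storage pool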
• 90. Part 2 Planning, installing, and upgrading
In this part of the book, we present detailed information for planning, installing, and upgrading the IBM TotalStorage SAN File System.
© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 63
  • 91. 64 IBM TotalStorage SAN File System
• 92. Chapter 3. MDS system design, architecture, and planning issues
In this chapter, we discuss the following topics:
Site infrastructure
Fabric needs and storage partitioning
SAN storage infrastructure
Network infrastructure
Security: Local Authentication and LDAP
File Sharing: Heterogeneous file sharing
Planning for storage pools, filesets, and policies
Planning for high availability
Client needs and application support
Client data migration
SAN File System sizing guide
Integration of SAN File System into an existing SAN
Planning worksheets
© Copyright IBM Corp. 2003, 2004, 2006. All rights reserved. 65
• 93. 3.1 Site infrastructure
To make sure that the installation of SAN File System is successful, it is crucial to plan thoroughly. You need to verify that the following site infrastructure is available for SAN File System:
Adequate hardware for SAN File System Metadata server engines. SAN File System is shipped as a software product; therefore, the hardware for SAN File System must be supplied by the client. To help size the hardware for SAN File System Metadata server engines, a SAN File System sizing guide is available. We discuss sizing considerations in 3.12, “SAN File System sizing guide” on page 91. The Metadata servers must be set up with two internal drives for the operating system, configured as a RAID 1 mirrored pair.
A SAN configuration with no single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller. Detailed information about planning SANs is available in the redbook Designing and Optimizing an IBM Storage Area Network, SG24-6419.
A KVM (Keyboard Video Mouse) for each server. This is also required for the Master Console, if deployed; however, a separate KVM can also be used. Typical clients will use a switch so that the KVM can be shared between multiple servers. The Master Console KVM can be shared with the SAN File System servers through the RSA card.
A SAN with two switch ports per SAN File System server engine, and enough SAN ports for any additional storage devices and clients. The SAN ports on the SAN File System engines are required to be 2 Gbps, so appropriate cabling is required. Client supplied switches can be 1 or 2 Gbps (2 Gbps is recommended for performance). Optionally, but recommended, the Master Console, if deployed, uses two additional SAN ports. The HBA in the MDS must be capable of supporting the QLogic device driver level recommended for use with SAN File System V2.2.2.
A supported back-end storage device with LUNs defined for both system and user storage.
– Currently supported disk systems for system storage are the IBM TotalStorage Enterprise Storage Server (ESS), the IBM TotalStorage DS8000 series, the IBM TotalStorage DS6000 series, the IBM TotalStorage SAN Volume Controller (SVC), and the IBM TotalStorage DS4000 series (formerly FAStT) Models DS4300, DS4400, and DS4500. System metadata should be configured on high availability storage (RAID with a low ratio of data to parity disks).
– SAN File System V2.2.2 can be configured with any suitable SAN storage device for user data storage. That is, any SAN-attached storage supported by the operating systems on which the SAN File System client runs can be used, provided it conforms to the SCSI standard for unique device identification. SAN File System V2.2.2 also supports iSCSI data LUNs as long as the devices conform to the SCSI driver interface standards.
Sufficient GBICs, LAN, and SAN cables should be available for the installation. Each SAN File System engine needs at least two network ports and TCP/IP addresses (one for the server host address and the other for the RSA connection). The ports can be either the standard 10/100/1000 Ethernet, or optional Fibre connection. The Master Console, if deployed, requires two 10/100 Ethernet ports and two TCP/IP addresses. Therefore the minimum requirement for a two engine cluster is four Ethernet ports, or six if the optional Master Console is deployed.
In addition, Ethernet bonding (see 3.8.5, “Network planning” on page 84 for more information) is HIGHLY recommended for every SAN File System configuration. This requires an additional network port (either standard
66 IBM TotalStorage SAN File System
• 94. copper or optional fibre), preferably on a separate switch for maximum redundancy. With Ethernet bonding configured, three network ports are required per MDS.
To perform a rolling upgrade to SAN File System V2.2.2, you must leave the USB/RS-485 serial network interface in place for the RSA cards. Once the upgrade is committed, you can remove the RS-485 interface, since it is no longer used. It is replaced by the TCP/IP interface for the RSA cards.
Power outlets (one or two per server engine; dual power supplies for the engine are recommended but not required). You need two wall outlets or two rack PDU outlets per server engine. For availability, these should be on separate power circuits. The Master Console, if deployed, requires one wall outlet or one PDU outlet.
SAN clients with supported client operating systems, and supported Fibre Channel adapters for the disk system being used. Supported SAN File System clients at the time of writing are listed in 2.5.11, “Clients” on page 51, and are current at the following Web site:
http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html

3.2 Fabric needs and storage partitioning
When planning the fabric for SAN File System, consider these criteria:
The SAN configuration for the SAN File System should not have a single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller.
We recommend separating the fabrics between the HBA ports within the MDS. By separating the fabrics, you will avoid a single point of failure for the fabric services, such as the name server.
A maximum of 126 dual-path LUNs can be assigned to the system storage pool. SAN File System V2.2 supports an unlimited number of LUNs for user data storage; however, the environment will necessarily impose some practical restrictions on this item, determined by the amount of storage supported by the storage devices and the client operating systems.
The SAN File System Metadata servers (MDS) must have access to all Metadata (or system storage pool) LUNs. Access to client data LUNs is not required.
The SAN File System clients must be prevented from having access to the Metadata LUNs, as shown in Figure 3-1 on page 68. The darker area includes the MDS engines and the LUNs in the system pool. The lighter areas include various combinations of SAN File System clients and LUNs in user pools. Overlaps are possible in the clients’ range of access, depending on the user data access required and the underlying support for this in the storage devices.
The SAN File System clients need to have access only to those LUNs they will eventually access. This is achieved by using zoning, LUN masking, and storage partitioning on the back-end storage devices.
Chapter 3. MDS system design, architecture, and planning issues 67
• 95. Figure 3-1 Mapping of Metadata and User data to MDS and clients
Each of the SAN File System clients should be zoned separately (hard zoning is recommended) so that each HBA can detect all the LUNs containing that client’s data in the User Pools. If there are multiple clients with the same HBA-type (manufacturer and model), these may be in the same zone; however, putting different HBA-types in the same zone is not supported, for incompatibility reasons.
LUN masking must be used where supported by the storage device to LUN mask the metadata storage LUNs for exclusive use by the Metadata servers. Here are some guidelines for LUN masking:
– Specify the Metadata LUNs to the Linux mode (if the back-end storage has OS-specific operating modes).
– Specify the LUNs for User Pool LUNs, when using ESS, as follows (note that on SVC, there is no host type setting):
• Set the correct host type according to which client/server you are configuring. The host type is set on a per-host basis, not for the LUN, regardless of host.
• Therefore, with LUNs in User Pools, the LUNs may be mapped to multiple hosts, for example, Windows and AIX. You can ignore any warning messages about unlike hosts.
Tip: For ESS, if you have microcode level 2.2.0.488 or above, there will be a host type entry of IBM SAN File System (Lnx MDS). If this is available, choose it for the LUNs. If running an earlier microcode version, choose Linux.
For greatest security, SAN File System fabrics should preferably be isolated from non-SAN File System fabrics on which administrative activities could occur. No hosts can have access to the LUNs used by the SAN File System apart from the MDS servers and the SAN File System clients. This could be achieved by appropriate zoning/LUN masking, or for greatest security, by using separate fabrics for SAN File System and non-SAN File System activities.
68 IBM TotalStorage SAN File System
• 96. The Master Console hardware, if deployed, requires two fibre ports for connection to the SAN. This enables it to perform SAN discovery for use with IBM TotalStorage Productivity Center for Fabric. We strongly recommend installing and configuring IBM TotalStorage Productivity Center for Fabric on the Master Console, as having an accurate picture of the SAN configuration is important for a successful SAN File System installation.
Multi-pathing device drivers are required on the MDS. The IBM Subsystem Device Driver (SDD) is required on SAN File System MDS when using IBM TotalStorage Enterprise Storage Server, DS8000, DS6000, and SAN Volume Controller. RDAC is required on SAN File System MDS for SANs using IBM TotalStorage DS4x00 series disk systems.
Multi-pathing device drivers are recommended on the SAN File System clients for availability reasons, if provided by the storage system vendor.

3.3 SAN File System volume visibility
In SAN File System V1.1, there were restrictions on the visibility of user volumes. Basically, all the MDS and all the clients were required to have access to all the data LUNs. With V2.1 and later of SAN File System, this restriction is eased. The MDS requires access to all the Metadata LUNs only, and the clients require access to all or a subset of the data LUNs. Note that it is still true that SAN File System clients must not have visibility to the System volumes.
Important: Make sure your storage device supports sharing LUNs among different operating systems if you will be sharing individual user volumes (LUNs) among different SAN File System clients. Some storage devices allow each LUN to be made available only to one operating system type. Check with your vendor.
In general, we can distinguish two ways of setting up a SAN File System environment: a uniform and a non-uniform SAN File System configuration.
3.3.1 Uniform SAN File System configuration
In a uniform SAN File System configuration, all SAN File System clients have access to all user volumes. Since this uniform configuration simplifies the management of the whole SAN File System environment, it might be a preferred approach for smaller, homogeneous environments. In a uniform SAN File System configuration, all SAN File System data is visible to all clients. If you need to prevent undesired client access to particular data, you can use standard operating system file/directory permissions to control access at a file/directory level. The uniform SAN File System configuration corresponds to a SAN File System V1.1 environment.
3.3.2 Non-uniform SAN File System configuration
In a non-uniform SAN File System configuration, not all SAN File System clients have access to all the user volumes. Clients only access the user volumes they really need, or the volumes residing on disk systems for which they have operating system support. The main consideration for a non-uniform configuration is to ensure that all clients have access to all user storage pool volumes that can potentially be used by a corresponding fileset. Any attempt to read or write data on a volume to which a SAN File System client does not have access will lead to an I/O error. We consider non-uniform configurations preferable for large and heterogeneous SAN environments.
Chapter 3. MDS system design, architecture, and planning issues 69
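Whichever configuration is chosen, it is worth verifying from each MDS and client exactly which LUNs that host can see before volumes are assigned, since a client that cannot see a volume in a pool it uses will get I/O errors as described above. The commands below are standard operating system tools for the platforms discussed in this chapter; device names will differ in your environment.

# On a Linux MDS or Linux client (2.4 kernel era), list the SCSI devices detected
cat /proc/scsi/scsi
# On an AIX client, list the disks presented by the SAN and their PVIDs
lsdev -Cc disk
lspv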
  • 97. Note for SAN File System V2.1 clients: SAN configurations for SAN File System V2.1 are still supported by V2.2 and above, so no changes are required in the existing SAN infrastructure when upgrading. A non-uniform SAN File System configuration provides the following benefits. Flexibility Scalability Security Wider range of mixed environment support Flexibility SAN File System can adapt to desired, environment-to-environment specific SAN zoning requirements. Instead of enforcing a single zone environment, multiple zones, and therefore multiple spans of access to SAN File System user data, are possible. This means it is now easier to deploy SAN File System into an existing SAN environment. In order to help make SAN File System configurations more manageable, a set of new functions and commands were introduced with SAN File System V2.1: SAN File System volume size can now be increased in size without interrupting file system processing or moving the content of the volume. This function is supported on those systems on which the actual device driver allows LUN expansion (for example, current models of SVC or the DS4000 series) and the host operating system also supports it. Data volume drain functionality (rmvol) uses a transactional-based approach to manage the movement of data blocks to other volumes in the particular storage pool. From the client perspective, this is a serialized operation, where only one I/O at a time occurs to volumes within the storage pool. The goal of employing this kind of mechanism is to reduce the client’s CPU cycles. Some commands for managing the client data (for example, mkvol and rmvol) now require a client name as a mandatory parameter. This ensures that the administrative command will be executed only on that particular client. We cover the basic usage of most common SAN File System commands in Chapter 7, “Basic operations and configuration” on page 251. Scalability The MDS can host up to 126 dual-path LUNs for the system pool. The maximum number of LUNs for client data depends on platform-specific capabilities of that particular client. Very large LUN configurations are now possible if the data LUNs are divided between different clients. Security By easing the zoning requirements in SAN File System, better storage and data security is possible in the SAN environment, as all hosts (SAN File System clients) have access only to their own data LUNs. You can see an example of a SAN File System zoning scenario in Figure 3-1 on page 68. Wider range of mixed environment support Since not all the data LUNs need to be visible to all SAN File System clients and to the MDS, and therefore not all storage must be supported on every client and MDS, this expands the range of supported storage devices for clients. For example, if you have Linux and Windows clients, and a storage system that is supported only on Windows, you could make the LUNs on that system available only to the Windows clients, and not the Linux clients.70 IBM TotalStorage SAN File System
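The mkvol and rmvol behavior mentioned above can be sketched roughly as follows. The option and object names are illustrative only and the actual command syntax is covered in Chapter 7, “Basic operations and configuration” on page 251; the point being illustrated is that adding a data volume is tied to a named client, and that removing a volume drains its data blocks to the other volumes in the same storage pool.

# Illustrative only - see Chapter 7 for the actual administrative CLI syntax
# Add a data volume to a user pool, naming a client that can see the underlying LUN
sfscli mkvol -pool sata_pool -client aixclient1 datavol01
# Remove (drain) a volume; its data blocks are moved to other volumes in the pool
sfscli rmvol datavol01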
• 98. Note that LUNs within a DS4000 partition can only be used by one operating system type; this is a restriction of the DS4x00 partition. Other disk systems, for example, SVC, allow multi-operating system access to the same LUNs.

3.4 Network infrastructure
SAN File System has the following requirements for the network topology:
One IP address is required for each MDS and one for the Remote Supervisor Adapter II (RSAII) in each engine. This is still true when implementing redundant Ethernet support (Ethernet bonding; see 3.8.5, “Network planning” on page 84) with SAN File System V2.2.2, since the two Ethernet NICs share one physical IP address. Currently, SAN File System supports from two to eight engines.
To take full advantage of the MDS Dual Ethernet/Ethernet bonding support provided in V2.2.2, each Ethernet NIC must be cabled to a separate Ethernet port, preferably in a separate switch. This provides greater availability in the event of an Ethernet switch outage.
Two types of interfaces are supported on the MDS: 10/100/1000 Copper or 1 Gb Fibre Ethernet. The RSAII uses 10/100/1000 Copper Ethernet.
The Master Console, if deployed, requires two Ethernet ports. One is connected to the existing IP network (connected to the Master Console, all MDS, and clients), and one for a VPN connection to be used for remote access to bypass the firewall. This configuration allows the Master Console to be shared with an SVC (if installed).
The client to cluster and intra-cluster communication traffic will be on the existing client LAN.
All Metadata servers must be on the same physical network. If multiple subnets are configured on the physical network, it is recommended that all engines are on the same subnet.
If possible, avoid any routers or gateways between the clients and the MDS. This will optimize performance.
Any systems that will be used for SAN File System administration require IP access to the SAN File System servers hosting the Administrative servers.
Chapter 3. MDS system design, architecture, and planning issues 71
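For orientation, a bonded MDS interface appears as a single logical interface (for example, bond0) that carries the one MDS IP address, with the two physical NICs as its slaves. The bonding itself is configured during SAN File System installation and the exact configuration files vary with the SLES level, so the module options below are only a sketch of a typical active-backup setup; the status check through /proc is standard for the Linux bonding driver.

# Typical active-backup bonding options (illustrative - set up by the installation procedure)
#   alias bond0 bonding
#   options bond0 mode=active-backup miimon=100
# Check which slave interface is currently active and the link state of each slave
cat /proc/net/bonding/bond0
# The bonded interface carries the single MDS IP address
ifconfig bond0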
• 99. An example of how the network can be set up is shown in Figure 3-2. Note there are two physical connections on the right of each MDS, indicating the redundant Ethernet configuration. However, these share the one TCP/IP address.
Figure 3-2 Illustrating network setup

3.5 Security
Authentication to the SAN File System administration interface can be accomplished in one of two ways: using LDAP, or using a new procedure called local authentication, which uses the Linux operating system login process (/etc/passwd and /etc/group). You must choose, as part of the planning process, whether you will use LDAP or local authentication. If an LDAP environment already exists, and you plan to implement SAN File System heterogeneous file sharing, there is an advantage to using that LDAP; however, for those environments not already using LDAP, SAN File System implementation can be simplified by using local authentication. Using local authentication can eliminate one potential point of failure, since it does not depend on access to an external LDAP server to perform administrative functions.
3.5.1 Local authentication
With SAN File System V2.2.1 and later, you can use local authentication for your administrative IDs. Local authentication uses native Linux methods on the MDS to verify users and their authority to perform administrative operations. When issuing an administrative request (for example, to start the SAN File System CLI or log in to the GUI), the user ID and password are validated, and then it is verified that the user ID has authority to issue that particular request. Each user ID is assigned a role (corresponding to an OS group) that gives that user a specific level of access to administrative operations. These roles are Monitor, Operator, Backup, and Administrator. After authenticating the user ID, the administrative server interacts with the MDS to process the request.
Setting up local authentication
To use local authentication, define specific groups on each MDS (Administrator, Operator, Backup, or Monitor). They must have these exact names. Then add users, associating them
72 IBM TotalStorage SAN File System
  • 100. with the appropriate groups according to the privileges required. For a new SAN File System installation, this is part of the pre-installation/planning process. For an existing SAN File System cluster that has previously been using LDAP authentication, migration to the local authentication method can be at any time, except for during a SAN File System software upgrade. We show detailed steps for defining the required groups and user IDs in 4.1.1, “Local authentication configuration” on page 100 (for new SAN File System installations) and 6.7, “Switching from LDAP to local authentication” on page 246 (for existing SAN File System installations who want to change methods). When using local authentication, whenever a user ID/password combination is entered to start the SAN File System CLI or GUI, the authentication method checks that the user ID exists as a UNIX user account in /etc/passwd, and if the correct password was supplied. It then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, the method determines whether this group is authorized to perform the requested function in order to decide access. Some points to note when using the local authentication method Every MDS must have the standard groups defined (Administrator, Operator, Backup, or Monitor). You need at least one user ID with the Administrator role. Other IDs with the Administrator or other roles may be defined, as many as are required. You can have more than one ID in each group, but each ID can only be in one group. Every MDS must have the same set of user IDs defined as UNIX OS accounts. The same set of users and groups must be manually configured on each MDS. Use the same password for each SAN File System user ID on every MDS. These must be synchronized manually in the local /etc/passwd and /etc/group files; use of other methods (for example, NIS) are not supported. You cannot change the authentication method during a rolling upgrade of the SAN File System software. Each user ID corresponding to a SAN File System administrator name must be a member of exactly one group name corresponding to a SAN File System administrator authorization level (Administrator, Operator, Backup, or Monitor). Users who will not access SAN File System must not be members of SAN File System administration groups.
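A minimal sketch of the group and user setup described above, using standard Linux commands run identically on every MDS, is shown below. The group names must match exactly as stated; the user names are examples only, and the same passwords must be set manually on each MDS.

# Run the same commands on every MDS in the cluster
groupadd Administrator
groupadd Operator
groupadd Backup
groupadd Monitor
# At least one user with the Administrator role is required
useradd -g Administrator sfsadmin
passwd sfsadmin
# Example of an additional user with the Monitor (read-only) role
useradd -g Monitor sfsmon
passwd sfsmon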