040419 san forum

  • Prior to moving into the technical discussion, I would like to comment on A&T’s process of integrating new technology. New technology is evaluated on typical requirements such as performance, reliability, and cost. Perhaps more importantly, we evaluate how easily the technology can integrate into EDC’s current infrastructure. The evaluation is documented in a white paper, which typically includes a weighted decision matrix to help projects make informed decisions. After reviewing the available options, it is the Program/Project that selects the best solution.
  • Direct Attached Storage -- File sharing is difficult and inefficient -- NFS is slow and requires I/O on both servers -- FTP requires multiple copies of the data and I/O on both servers -- Data is unavailable if the server is down -- Reallocating resources requires physically moving hardware
  • Hardware SAN Solution -- Multiple servers share one or more storage devices -- Typically oversold by storage vendors -- Does not by itself provide any improvement to data processing flows
  • Clustered File System Solution -- Adds a software component -- Heterogeneous systems share a single file system (CXFS, StorNext) -- Direct data access
  • SAN Goals -- Improve data processing flows

    1. A&T Advisory Board
       EDC Storage Area Network (SAN)
       April 19, 2004
       Ken Gacke, Brian Sauer, Doug Jaton
       [email_address] [email_address] [email_address]
    2. Agenda
       - Storage Architecture
       - EDC SAN Architectures
         - Digital Reproduction SAN
         - Landsat SAN
         - LPDAAC SAN
       - SAN Reality Check
    3. Storage Architecture
       [Diagram: Linux, Sun, and SGI servers, each with direct attached storage, connected via Ethernet]
       - Difficult to reallocate resources
       - File sharing via network (NFS, FTP)
         - NFS performance/security issues
         - Duplicate copies of data
         - I/O performance/bandwidth
       - Data availability concerns
         - Server failure => no data access
    4. Storage Technology
       [Diagram: Linux, Sun, and SGI servers sharing a disk farm through a fibre switch (SAN configuration), with Ethernet networking]
       - Hardware solution
         - Fibre Channel switch
         - Fibre Channel RAID
       - Logical reallocation of resources
       - File sharing via network (NFS, FTP)
         - NFS performance/security issues
         - Duplicate copies of data
         - I/O performance/bandwidth
       - Data availability concerns
         - Server failure => no data access
    5. Storage Technology
       [Diagram: Linux, Sun, and SGI servers running CXFS/CFS, sharing a clustered file system through a fibre switch (SAN configuration), with Ethernet networking]
       - Hardware/software solution
         - Fibre Channel switch
         - Fibre Channel RAID
         - Sharable file system
       - Logical reallocation of resources
       - Direct file sharing
         - Single data copy
         - Efficient I/O
         - Scalable bandwidth
       - High data availability
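To make the contrast between slides 3 and 5 concrete, below is a minimal back-of-the-envelope sketch of the two sharing patterns. The file size, I/O multipliers, and function names are illustrative assumptions, not figures from the presentation.

```python
# Back-of-the-envelope model of the sharing patterns contrasted on slides 3-5.
# All numbers and names here are illustrative assumptions.

def ftp_share(file_gb):
    """Producer writes the file, then FTP reads it back and writes a second copy."""
    io_gb = file_gb * 3          # write on server A, read on A, write on B
    copies_stored = 2            # duplicate data held on both servers
    return io_gb, copies_stored

def clustered_fs_share(file_gb):
    """Producer writes once; consumers read the same blocks over the SAN."""
    io_gb = file_gb * 2          # one write, one read, no intermediate copy
    copies_stored = 1            # single shared copy (e.g. CXFS or StorNext)
    return io_gb, copies_stored

if __name__ == "__main__":
    size = 10  # GB, an arbitrary example file size
    for name, fn in [("FTP", ftp_share), ("Clustered FS", clustered_fs_share)]:
        io, copies = fn(size)
        print(f"{name:13s}: {io} GB of server I/O, {copies} stored copy(ies)")
```

The point of the sketch is simply that a clustered file system removes the second stored copy and the extra read/write pass that FTP-style sharing requires.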
    6. Storage Architecture
       - SAN goals
         - File sharing across multiple servers
           - Heterogeneous platform support (IRIX, Solaris, Linux)
           - Reduce the number of file copies
           - Improve I/O efficiency
             - Reduce I/O requirements on the server
             - Reduce network load
             - Reduce the time required to transfer data
         - Storage management
           - Increase disk storage utilization
           - Logical reallocation of storage resources
         - Data availability
           - Maintain data access when a server fails
    7. Digital Reproduction CR1 SAN
       April 19, 2004
       Ken Gacke, SAIC Contractor
       [email_address]
    8. Historical Architecture – No SAN
       [Diagram: UniTree server, Product Distribution server, Ethernet, tape drives (8x9840, 2x9940B)]
       Architecture notes:
       1) Data transfer via FTP
       2) Duplicate storage on both servers
       3) Multiple data file I/O required on both servers
       4) System bandwidth constrained by the network
    9. CR1 SAN Timeline
       - FY2002 – DMF integration
         - DMF production release in December 2001
           - Fully automated data migration process
           - 21TB migrated to DMF within 3 months
             - Data migration during off hours
             - Full data access throughout the data migration period
       - FY2003 – CXFS integration
         - SGI CXFS certified SAN configuration
           - CXFS on two IRIX servers, DMF and PDS
           - SGI TP9400 1TB RAID
           - 8-port and 16-port Brocade fibre switches
         - SGI installed on 10/8/02
           - Tested the DMF/CXFS configuration
           - Performed final CXFS testing
         - DMF/CXFS released to production on 11/5/02
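A rough consistency check on the migration figure above, assuming decimal terabytes and a 90-day window (both assumptions, not stated on the slide):

```python
# Rough consistency check on the slide-9 DMF migration figure (21 TB in ~3 months).
# Decimal units (1 TB = 1e12 bytes) and a 90-day window are assumptions.
TB = 1e12
migrated_bytes = 21 * TB
window_seconds = 90 * 24 * 3600

avg_rate_mb_s = migrated_bytes / window_seconds / 1e6
print(f"Average over the whole window: {avg_rate_mb_s:.1f} MB/s")   # ~2.7 MB/s

# The slides say migration ran during off hours; if that meant roughly an
# 8-hour nightly window, the rate while actually migrating would be ~3x higher.
off_hours_fraction = 8 / 24
print(f"Implied off-hours rate: {avg_rate_mb_s / off_hours_fraction:.1f} MB/s")  # ~8 MB/s
```

Roughly 2.7 MB/s averaged around the clock, or about 8 MB/s in an assumed 8-hour nightly window, well within the 1Gb/2Gb fibre links shown on the next slide.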
    10. CR1 SAN Architecture
        [Diagram: DMF server, Product Distribution server, Ethernet, 1Gb and 2Gb fibre connections, tape drives (8x9840, 2x9940B)]
        Disk cache:
        - /dmf/edc 68GB
        - /dmf/doqq 547GB
        - /dmf/guo 50GB
        - /dmf/pds 223GB
        - /dmf/pdsc 1100GB
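As a quick check, the per-filesystem cache sizes listed on this slide sum to the roughly 2TB disk cache quoted on slide 12:

```python
# Sum of the DMF disk cache sizes taken directly from slide 10.
dmf_cache_gb = {
    "/dmf/edc":   68,
    "/dmf/doqq": 547,
    "/dmf/guo":   50,
    "/dmf/pds":  223,
    "/dmf/pdsc": 1100,
}

total_gb = sum(dmf_cache_gb.values())
print(f"Total DMF disk cache: {total_gb} GB (~{total_gb / 1000:.1f} TB)")  # 1988 GB, ~2.0 TB
```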
    11. CR1 SAN Architecture
        [Diagram]
    12. CR1 SAN Summary
        - Data storage
          - 2TB disk cache storing 67 terabytes on the backend
          - 2.5 million files
        - 2003 average monthly data throughput
          - Data ingest: 3.5TB
          - Data retrieval: 9.6TB
          - Average data throughput of 8.5MB/sec (includes tape access)
        - Minimal system/ops administration
        - Single vendor solution
          - SGI software, RAID, and fibre switches
          - CXFS supported on SGI IRIX, Linux, Solaris, Windows, etc.
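A hedged sanity check on these throughput numbers. The unit conventions (decimal TB and MB, a 30-day month) are assumptions, and since the 8.5MB/sec figure includes tape access it should exceed the ingest-plus-retrieval rate alone:

```python
# Sanity check on the slide-12 monthly throughput figures.
# Decimal units and a 30-day month are assumptions, not stated on the slide.
TB, MB = 1e12, 1e6
ingest_tb, retrieval_tb = 3.5, 9.6
month_seconds = 30 * 24 * 3600

user_rate = (ingest_tb + retrieval_tb) * TB / month_seconds / MB
print(f"Ingest + retrieval alone: {user_rate:.1f} MB/s")            # ~5.1 MB/s

# The quoted 8.5 MB/s "includes tape access"; sustained for a month it implies
# roughly this much total data movement:
total_tb = 8.5 * MB * month_seconds / TB
print(f"8.5 MB/s sustained is ~{total_tb:.0f} TB moved per month")  # ~22 TB
```

The gap between roughly 5 MB/s of user traffic and 8.5 MB/s sustained is consistent with additional internal movement between the disk cache and the tape backend.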
    13. Landsat SAN
        April 19, 2004
        Brian Sauer, SAIC Contractor
        [email_address]
    14. Landsat SAN Goals
        - Improve overall performance (3 hrs -> 1.5 hrs)
        - Maximize disk storage through shared resources
        - Centralized management (system admin, hardware engineering)
        - Overcome obsolescence of the old SCSI RAID (Ciprico 6900)
        - Utilize existing investment in Fibre Channel storage
          - Existing investment in Ciprico NetArrays
          - "Open" solution
        - High performance
          - Combined throughput of over 240MB/sec
        - High availability
        - Total usable storage over 10TB
        - SGI, Linux, and Sun clients
        - Integrate in phases as tasks become SAN ready
    15. Landsat SAN Overview
        - 13TB of raw storage utilizing Ciprico NetArrays
        - Three Brocade switches
        - Eleven Linux and six SGI clients
          - Data Capture System Database Server (DDS)
          - Landsat Processing System (LPS)
          - Landsat Archive Management System (LAM)
          - Image Assessment System (IAS)
          - Landsat Product Generation System (LPGS)
        - ADIC StorNext File System software
          - Shared high performance file system
        - QLogic Fibre Channel host bus adapters
    16. Landsat OLD Data Flow
        [Diagram: RCC data flows from the Capture & Transfer System (CTS) through the DCS Database Server (DDS) to the L7 Processing System (LPS) and the L7 Raw CC Archive (LAM); LPS produces L0Ra data for the L7 L0Ra Archive (LAM). Timings: 14 minute pass, two 24 minute transfers, one 20 minute transfer, 85 minutes to process.]
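Using only the minute figures on this slide, a small sketch of where the old flow's time went. Treating the steps as strictly serial is an assumption, as is the mapping of the 24/24/20 minute transfers to specific hops:

```python
# Rough breakdown of the old Landsat flow using the minute figures on slide 16.
# Strictly serial steps and the hop assignments are assumptions.
steps_min = {
    "pass (capture)":          14,
    "CTS -> DDS transfer":     24,
    "DDS -> LPS transfer":     24,
    "LPS -> archive transfer": 20,
    "L0Ra processing":         85,
}

total = sum(steps_min.values())
transfers = sum(v for k, v in steps_min.items() if "transfer" in k)
print(f"Serial end-to-end time: {total} min (~{total / 60:.1f} hours)")     # 167 min
print(f"FTP transfer legs the SAN eliminates: {transfers} min")             # 68 min
print(f"Remaining work: {total - transfers} min (~{(total - transfers) / 60:.1f} hours)")
```

Removing the roughly 68 minutes of FTP transfer legs from the ~2.8 hour serial total lands near the 3 hours to 1.5 hours improvement targeted on slide 14.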
    17. Landsat SAN
        [Diagram: satellite dish feeds LGS and the capture systems (CTS1, CTS2, CTS3), which write RAW DATA to RAID3 arrays on the SAN; DDS, LPS, and LAM access the RAW DATA and L0RA DATA directly from the shared storage]
        - Eliminated FTP transfers
    18. Landsat SAN Summary
        - Advantages
          - Able to share data in a high performance environment, reducing the amount of storage necessary
          - Increase in overall performance of the Landsat Ground System
          - Open solution
            - Able to utilize existing equipment
            - Currently testing with other vendors
          - Disk availability for projects during off-peak times, e.g. IAS
        - Disadvantages / challenges
          - Challenging to integrate an open solution
            - Ciprico RAID controller failures
          - Not good for real-time I/O
          - Challenging to integrate into multiple tasks
            - Each task has its own agenda and schedule
            - Individual requirements
            - Difficult to guarantee I/O
    19. LP DAAC SAN Forum
        April 19, 2004
        Douglas Jaton, SAIC Contractor
        [email_address]
    20. LP DAAC Data Pool – Phase I SAN Goals
        - Phase I: "Data Pool" implementation in early FY03
        - Access/distribution method (FTP site):
          - Support increased electronic distribution
          - Reduce the need to pull data from archive silos
          - Reduce the need for order submissions (and media/shipping costs)
          - Give science and applications users timely, direct access to data, including machine access
          - Allow users to tailor their data views to more quickly locate the data they need
        - "The Data Pool SAN infrastructure effectively acts as a subset archive of the full ECS archive"
    21. LP DAAC Data Pool (SAN) Configuration
        - Data Pools are an additional subset "inventory" of science data (granule, browse, metadata) that resides in a separate inventory database, with the physical files resident on a local storage area network (SAN = 44TB)
          - STK D178 RAID racks with 1 Sun E450 metadata server
          - Data Pool inventory is managed via a second Sybase inventory database
        - Data Pool contents are populated from the primary ECS archive
          - Subscriptions can be fully qualified, with population occurring at insert time in the primary ECS archive (a function of ingest) (forward population)
          - Historical data is loaded from the primary ECS archive via query (historical population capability) in support of science or user requirements
          - NASA's intent is to grow the on-line store into a "working copy" of the most popular data
        - Dataset "Collections" belong to "Groups", are configured for "N" days of persistence, and are automatically removed at expiration (rolling archive concept)
          - Data management of this second archive, keeping it synchronized with the primary, has been problematic and has increased O&M costs
        - Data Pool web client(s) and/or anonymous FTP site access are used to navigate contents, browse, access, and download data products, using this directory structure:
          - /datapool/<mode>/<collect grp>/<esdt.version_id>/<acq date>, e.g. /datapool/ops/astt/ast_l1b.001/1999.12.31
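A small illustrative helper for the directory convention quoted above. The function name and its arguments are hypothetical, not part of ECS, but the output reproduces the slide's own example:

```python
# Illustrative helper for the Data Pool directory convention on slide 21:
# /datapool/<mode>/<collect grp>/<esdt.version_id>/<acq date>
# The function name and arguments are hypothetical, not part of ECS.
def datapool_path(mode, collection_group, esdt, version_id, acq_date):
    """Build a Data Pool directory path following the slide's convention."""
    return f"/datapool/{mode}/{collection_group}/{esdt}.{version_id:03d}/{acq_date}"

# Reproduces the example given on the slide.
print(datapool_path("ops", "astt", "ast_l1b", 1, "1999.12.31"))
# -> /datapool/ops/astt/ast_l1b.001/1999.12.31
```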
    22. LP DAAC Data Pool Contents & Access
        - Science data:
          - ASTER L1B Group (TERRA)
            - ASTER collection over U.S. states and territories (no billing!)
          - MODIS Group (TERRA & AQUA)
            - 8-day rolling archive of daily data for MODIS
            - 12 months of data for higher level products
              - Most 8-day, 16-day, and 96-day products
        - Access methods:
          - Anonymous FTP site
          - Web client interface(s) to navigate and browse data holdings via the Sybase inventory database
          - Public access: http://lpdaac.usgs.gov/datapool/datapool.asp
    23. LP DAAC Data Pool – Phase II SAN Goals
        - Phase II (FY04): Optimize system throughput (systemic resource):
          - Maximize disk storage through shared resources
          - Centralized management of disk (system admin, hardware engineering)
          - High performance Fibre Channel connections
            - SGI, Linux, and Sun clients
          - Decrease turn-around time for production and distribution orders
          - Integrate the SAN into ECS subsystems in phases as tasks become SAN ready/capable
            - Granules will be served from the SAN (Data Pool) if available, rather than staged from tape, reducing thrashing of the archives for popular datasets
              - Effectively allows more ingest bandwidth because there is less archive drive contention
              - The trick is to maintain rule sets for popular data to minimize silo thrashing
            - Less copying of data: no need for dedicated read-only caches across ingest, archive staging, production, media (PDS), and distribution (FTP push and pull)
        - "Fully utilize the SAN infrastructure effectively across the sub-systems of the full ECS archive"
    24. LP DAAC SAN Overview
        [Diagram]
    25. SAN Reality Check
        April 19, 2004
        Brian Sauer, SAIC Contractor
        [email_address]
    26. EDC SAN Experience
        - Technology infusion
          - TSSC understands this new technology
          - Bring it in at the right level and at the right time to satisfy USGS programmatic requirements
          - SAN technology is not a one-size-fits-all solution set
          - Need to balance complexity vs. benefits
        - Project requirements differ
          - Size of SAN (storage, number of clients, etc.)
          - Open system versus single vendor
        - Experiences gained
          - Provides high performance shared storage access
          - Provides better manageability and utilization
          - Provides flexibility in reallocating resources
          - Requires trained storage engineers
          - Complex architecture, especially as the number of nodes increases
    27. EDC SAN Reality Check
        - SAN issues
          - Vendors typically oversell SAN architecture
            - Infrastructure costs
              - Hardware: switches, HBAs, fibre infrastructure
              - Software
              - Maintenance
                - Hardware/software maintenance
                - Labor
                - Disk maintenance is higher than tape
              - Power and cooling of disk vs. tape
            - Complex architecture
              - Requires additional/stronger system engineering
              - Requires highly skilled system administration
            - Lifecycle is significantly shorter for disk than for tape
    28. EDC SAN Reality Check
        - SAN issues
          - Difficult to share resources among projects in an enterprise environment
            - The ability to fund a large shared infrastructure has historically been problematic for EDC
            - Ability to allocate and guarantee performance to projects (storage, bandwidth, security, peak vs. sustained)
            - Scheduling among multiple projects would be challenging
        - Not all projects require a SAN
          - A SAN will not replace the tape archive(s) anytime soon
          - Direct attached storage may be sufficient for many projects
