Data View of TeraGrid Logical Site Model


  • 1. TeraGrid: Logical Site Model Chaitan Baru Data and Knowledge Systems San Diego Supercomputer Center
  • 2. National Science Foundation TeraGrid
    • Prototype for Cyberinfrastructure (the “lower” levels)
    • High Performance Network: 40 Gb/s backbone, 30 Gb/s to each site
    • National Reach: SDSC, NCSA, CIT, ANL, PSC
    • Over 20 teraflops of compute power
    • Approx. 1 PB of rotating storage
    • Extending by 2-3 sites in Fall 2003
  • 3. Services/Software View of Cyberinfrastructure
  • 4. SDSC Focus on Data: A Cyberinfrastructure “Killer App”
    • Over the next decade, data will come from everywhere
      • Scientific instruments
      • Experiments
      • Sensors and sensornets
      • New devices (personal digital devices, computer-enabled clothing, cars, …)
    • And be used by everyone
      • Scientists
      • Consumers
      • Educators
      • General public
    • The software environment will need to support unprecedented diversity, globalization, integration, scale, and use
    [Diagram: data flowing in from simulations, instruments, analysis, and sensors]
  • 5. Prototype for Cyberinfrastructure
  • 6. SDSC Machine Room Data Architecture
    • Enable SDSC to be the grid data engine
    [Diagram: Blue Horizon, 4 TF Linux cluster, Sun F15K, database engine, data miner, and vis engine on a LAN (multiple GbE, TCP/IP) and a 2 Gb/s SAN; 30 Gb/s WAN via SCSI/IP or FC/IP; FC disk cache (400 TB) and GPFS disk (100 TB) at 200 MB/s per controller; HPSS silos and tape (6 PB, 1 GB/s disk-to-tape, 32 drives at 30 MB/s each); 50 TB local disk; Power 4 nodes with DB]
    • 0.5 PB disk
    • 6 PB archive
    • 1 GB/s disk-to-tape
    • Support for DB2/Oracle
    • DBMS disk (~10 TB)
  • 7. The TeraGrid Logical Site View
    • Ideally, applications / users would like to see:
      • One single computer
      • Global everything: filesystem, HSM, database system
      • With highest possible performance
    • We will get there in steps
    • Meanwhile, the TeraGrid Logical Site View provides a uniform view of sites
      • A common abstraction supported by every site
  • 8. Logical Site View
    • The Logical Site View is currently provided simply as a set of environment variables
      • Can easily become a set of services
    • This is the minimum required to enable a TG application to easily make use of TG storage resources
    • However, for “power” users, we also anticipate the need to expose the mapping from logical to physical resources at each site
      • Enables applications to take advantage of site-specific configurations and obtain optimal performance
  • 9. Basic Data Operations
    • The Data WG has stated as a minimum requirement:
      • The ability to transfer data from any TG storage resource to memory on any TG compute resource, possibly via an intermediate storage resource
      • The ability to transfer data between any two TG storage resources
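    Under TeraGrid's Globus-based software stack, both operations map naturally onto GridFTP. The sketch below builds the corresponding `globus-url-copy` command lines; the host names and paths are hypothetical, and the commands are constructed rather than executed:

    ```python
    # Sketch of the Data WG's two minimum operations as GridFTP command
    # lines.  Host names and paths are hypothetical illustrations; the
    # commands are built as strings, not run.
    src = "gsiftp://tg-gridftp.sdsc.teragrid.org/pfs/data/input.dat"
    dst = "gsiftp://tg-gridftp.ncsa.teragrid.org/pfs/scratch/input.dat"

    # (1) Storage resource -> storage resource (third-party transfer);
    #     -p requests parallel TCP streams.
    storage_to_storage = ["globus-url-copy", "-p", "4", src, dst]

    # (2) Storage resource -> compute resource, staged through the site's
    #     local staging area before being read into memory.
    stage_local = ["globus-url-copy", src, "file:///pfs/staging/input.dat"]

    print(" ".join(storage_to_storage))
    print(" ".join(stage_local))
    ```

    A real job would hand these argument lists to the site's batch environment or a launcher such as `subprocess.run`.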
  • 10. Logical Site View
    [Diagram: compute clusters, DBMS, HSM, collection management, and scratch connected across the network, each resource fronted by a staging area]
  • 11. Environment Variables
    • TG_PFS
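    A sketch of how a site-independent job script might consume these variables. `TG_PFS` is the variable named on this slide; the fallback value and the derived scratch/run layout are hypothetical illustrations, not TeraGrid definitions:

    ```python
    import os

    # TG_PFS comes from the Logical Site View; each site sets its own
    # value.  The default and the scratch/run convention below are
    # assumptions for illustration only.
    tg_pfs = os.environ.get("TG_PFS", "/pfs")
    scratch = os.path.join(tg_pfs, "scratch")   # assumed per-site layout
    workdir = os.path.join(scratch, "run001")
    print(workdir)
    ```

    Because the script resolves everything through the logical variable, it runs unchanged at any site that exports `TG_PFS` — which is the point of the Logical Site View.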
  • 12. Issues Under Consideration
    • Suppose a user wants to run computation, C, on data, D
    • The TG middleware should automatically figure out
      • Whether C should move to where D is, or vice versa
      • Whether data, D, should be pre-fetched, or “streamed”
      • Whether output data should be streamed to persistent storage, or staged via intermediate storage
      • Whether prefetch/staging time ought to be “charged” to the user or not
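    One way to make the first of these decisions concrete is a simple cost comparison. The sketch below is purely illustrative — the numbers, names, and model are assumptions, not TeraGrid middleware — and weighs moving data D against moving computation C, given the data size, link bandwidth, and queue waits:

    ```python
    # Illustrative cost model for the "move C or move D" decision on this
    # slide.  All inputs and the model itself are hypothetical.

    def transfer_time(nbytes, bandwidth_bps):
        """Seconds to move nbytes over a link of bandwidth_bps bits/s."""
        return nbytes * 8 / bandwidth_bps

    def plan(data_bytes, link_bps, local_queue_s, remote_queue_s):
        """Return 'move-data' or 'move-computation', whichever is faster.

        move-data: ship D to the local compute site, then wait in its queue.
        move-computation: leave D in place and wait in the remote queue.
        """
        move_data = transfer_time(data_bytes, link_bps) + local_queue_s
        move_comp = remote_queue_s
        return "move-data" if move_data < move_comp else "move-computation"

    # 1 TB over a 30 Gb/s site link (~267 s) plus a 60 s local queue,
    # versus a 600 s remote queue:
    print(plan(10**12, 30e9, 60, 600))
    ```

    A real scheduler would fold in the other factors on this slide — prefetch versus streaming, intermediate staging, and whether staging time is charged — but the same compare-the-costs structure applies.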