BWC Supercomputing 2008 Presentation

Presentation Transcript

    • Towards Global Scale Cloud Computing: Using Sector and Sphere on the Open Cloud Testbed
      • NCDM SC08 Bandwidth Challenge
      • Robert Grossman (1,2), Yunhong Gu (1), Michal Sabala (1), David Hanley (1), Shirley Connelly (1), David Turkington (1), and Jonathan Seidman (2)
      • (1) National Center for Data Mining, Univ. of Illinois at Chicago; (2) Open Data Group
    • Overview
      • Cloud computing software Sector/Sphere
        • Open source, developed by NCDM
      • Open Cloud Testbed
        • Cloud computing testbed supported by the Open Cloud Consortium
      • BWC Applications and Performance Evaluation
        • TeraSort: sorting a 1TB distributed dataset
        • CreditStone: computing fraud rates from 3TB of credit card transaction records
    • Cloud Computing & Sector/Sphere
      • Hardware
        • Inexpensive computer clusters (may span multiple data centers)
        • High speed wide area networks
      • Software
        • Storage cloud: distributed storage for large data sets
        • Compute cloud: simplified distributed data processing
      • Software stack: UDT (high speed transport protocol); routing (client/server or P2P); Sector (storage cloud); Sphere (compute cloud)
    • Sector: Distributed Storage System
      • [Architecture diagram] The security server (user accounts, data protection, system security) and the client (system access tools, application programming interfaces) each connect to the master over SSL; the master handles storage system management, processing scheduling, and the service provider role; the slaves provide storage and processing; data moves between the slaves and the client over UDT, with optional encryption.
    • Sphere: Simplified Data Processing
      • Data is processed where it resides (locality)
      • The same user-defined function (UDF) can be applied to all elements (records, blocks, or files)
      • Processing output can be written to Sector files, on the same node or on other nodes
      • Generalizes Map/Reduce
    • Sphere: Simplified Data Processing
      • Pseudocode: for each file F in (SDSS datasets), for each image I in F: findBrownDwarf(I, …);
      • Sphere client code:
        SphereStream sdss;
        sdss.init("sdss files");
        SphereProcess myproc;
        myproc->run(sdss, "findBrownDwarf");
        myproc->read(result);
      • UDF signature: findBrownDwarf(char* image, int isize, char* result, int rsize);
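      A self-contained sketch of the pattern the snippet above expresses: the same UDF, with the signature shown on the slide, is applied independently to every data segment. It does not use the actual Sector/Sphere client API; the segment list, the detection logic, and the output formatting are stand-ins for illustration only.

      // Illustrative only: applies a UDF with the slide's signature to a list
      // of in-memory segments, standing in for the parallel run that Sphere
      // performs across the slaves. Does not call the Sector/Sphere library.
      #include <cstdio>
      #include <string>
      #include <vector>

      // UDF signature from the slide: input segment in 'image' (isize bytes),
      // output written into 'result' (capacity rsize bytes).
      int findBrownDwarf(char* image, int isize, char* result, int rsize)
      {
          (void)image;   // the image bytes would be examined here
          // Hypothetical placeholder logic: just report the segment size.
          std::snprintf(result, rsize, "scanned %d-byte segment", isize);
          return 0;
      }

      int main()
      {
          // Stand-ins for SDSS image segments that would live in Sector files.
          std::vector<std::string> segments = {"image-0", "image-1", "image-2"};

          char result[128];
          for (std::string& seg : segments) {   // Sphere would run these in parallel
              findBrownDwarf(&seg[0], (int)seg.size(), result, (int)sizeof(result));
              std::printf("%s\n", result);
          }
          return 0;
      }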
    • Sphere: Data Movement
      • Slave -> Slave Local
      • Slave -> Slaves (Shuffle)
      • Slave -> Client
    • Load Balance & Fault Tolerance
      • The number of data segments is much larger than the number of SPEs. When an SPE completes a data segment, a new segment is assigned to it.
      • If one SPE fails, the data segment assigned to it will be re-assigned to another SPE and be processed again.
      • Load balance and fault tolerance are transparent to users!
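      As an illustration of this scheduling policy (a sketch only, not the actual Sector/Sphere implementation; all names here are hypothetical), the code below keeps a queue of pending segments, hands the next segment to each idle SPE, and requeues the segment of any SPE that fails so another SPE processes it again.

      // Illustrative scheduler sketch: many more segments than SPEs, idle SPEs
      // receive the next pending segment, and a failed SPE's segment is
      // requeued for another SPE. Hypothetical names; not the Sector code.
      #include <cstdio>
      #include <deque>
      #include <vector>

      struct SPE { int id; bool busy; int segment; };

      int main()
      {
          const int numSegments = 12;            // far more segments than SPEs
          std::deque<int> pending;
          for (int s = 0; s < numSegments; ++s) pending.push_back(s);

          std::vector<SPE> spes = { {0, false, -1}, {1, false, -1}, {2, false, -1} };

          bool failureInjected = false;
          int completed = 0;
          while (completed < numSegments) {
              // Load balance: give every idle SPE the next pending segment.
              for (SPE& spe : spes) {
                  if (!spe.busy && !pending.empty()) {
                      spe.segment = pending.front();
                      pending.pop_front();
                      spe.busy = true;
                      std::printf("SPE %d starts segment %d\n", spe.id, spe.segment);
                  }
              }
              // Collect results; simulate one failure on SPE 1's first segment.
              for (SPE& spe : spes) {
                  if (!spe.busy) continue;
                  bool failed = (!failureInjected && spe.id == 1);
                  if (failed) {
                      failureInjected = true;
                      pending.push_back(spe.segment);   // fault tolerance: re-assign
                      std::printf("SPE %d failed; segment %d requeued\n", spe.id, spe.segment);
                  } else {
                      ++completed;
                  }
                  spe.busy = false;
              }
          }
          std::printf("all %d segments processed\n", completed);
          return 0;
      }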
    • Implementation
      • Sector/Sphere is open source software developed by NCDM
      • Written in C++
      • Client tools (file access, system management, etc)
      • Sector/Sphere API library
      • http://sector.sf.net
      • Current release version 1.15
    • Open Cloud Testbed
      • 4 Racks
        • Baltimore (JHU): 32 nodes, 1 AMD Dual Core CPU, 4GB RAM, 1TB single disk
        • Chicago (StarLight): 32 nodes, 2 AMD Dual Core CPUs, 12GB RAM, 1TB single disk
        • Chicago (UIC): 32 nodes, 2 AMD Dual Core CPUs, 8GB RAM, 1TB single disk
        • San Diego (Calit2): 32 nodes, 2 AMD Dual Core CPUs, 12GB RAM, 1TB single disk
      • 10Gb/s inter-rack connection on CiscoWave
      • 1Gb/s intra-rack connection
      • SC08
        • 9 nodes, 2 AMD Quad core CPUs, 16GB RAM, 6TB 8-disk RAID, 10Gb/s intra-rack connection
    • SC08 & Open Cloud Testbed
    • BWC App #1: Sorting a TeraByte
      • Data is split into small files, scattered on all slaves
      • Stage 1: On each slave, an SPE scans its local files and sends each record to a bucket file on a remote node according to the record's key, so that the buckets themselves fall in sorted order.
      • Stage 2: On each destination node, an SPE sorts the data inside each bucket.
    • TeraSort data layout: each binary record is 100 bytes (a 10-byte key and a 90-byte value). Stage 1: hash each record into one of 1024 buckets (Bucket-0 through Bucket-1023) based on the first 10 bits of its key. Stage 2: sort each bucket on its local node.
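      The sketch below illustrates the stage-1 bucketing just described, assuming 100-byte records with a 10-byte key; the record contents and function names are synthetic, not the benchmark's actual Sphere UDF.

      // Illustrative stage-1 bucketing for TeraSort-style records: the bucket
      // index is the first 10 bits of the 10-byte key, giving 1024 buckets.
      // Synthetic record; not the UDF used in the benchmark.
      #include <cstdio>

      const int RECORD_SIZE = 100;   // 10-byte key + 90-byte value

      // Bucket index = first 10 bits of the key (taken from its first 2 bytes).
      int bucketOf(const unsigned char* key)
      {
          return (key[0] << 2) | (key[1] >> 6);   // value in [0, 1023]
      }

      int main()
      {
          // A synthetic 100-byte record with an arbitrary key.
          unsigned char record[RECORD_SIZE] = {0};
          record[0] = 0xAB;   // first key byte
          record[1] = 0xCD;   // second key byte

          int bucket = bucketOf(record);
          std::printf("record goes to Bucket-%d of 1024\n", bucket);

          // In stage 1 the record would be appended to that bucket's file on
          // the node responsible for it; in stage 2 each node sorts its local
          // buckets by the full 10-byte key.
          return 0;
      }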
    • Performance Results: TeraSort [Previous run] (run time in seconds)
      • UIC (300GB): Sphere 1265, Hadoop with 3 replicas 2889, Hadoop with 1 replica 2252
      • UIC + StarLight (600GB): Sphere 1361, Hadoop with 3 replicas 2896, Hadoop with 1 replica 2617
      • UIC + StarLight + Calit2 (900GB): Sphere 1430, Hadoop with 3 replicas 4341, Hadoop with 1 replica 3069
      • UIC + StarLight + Calit2 + JHU (1.2TB): Sphere 1526, Hadoop with 3 replicas 6675, Hadoop with 1 replica 3702
    • Performance Results: TeraSort SC08 Tue Night
      • Sorting 1TB on 101 nodes
        • UIC: 24 + SL: 10 + JHU: 29 + Calit2: 29 + SC08: 9
      • Hash vs. local sort: 1660 sec + 604 sec
      • Hash
        • Per rack (in/out): UIC 201/193GB; SL 96/95GB; JHU 212/217GB; Calit2 214/218GB; SC08 87/89GB
        • Per node (in/out): 10GB AVG
        • System total AVG 9.6Gb/s (Peak 20Gb/s)
      • Local Sort
        • No network IO
    • BWC App #2: CreditStone. Each text record has the form Trans ID|Time|Merchant ID|Fraud|Amount (e.g. 01491200300|2007-09-27|2451330|0|66.49) and is transformed into a key-value pair with the merchant ID and time as the key and the fraud flag as the value. Stage 1: process each record and hash it into one of 1000 buckets (merch-000X through merch-999X) according to a 3-byte merchant ID prefix. Stage 2: compute the fraud rate for each merchant.
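      The sketch below illustrates the stage-2 computation on a single bucket: parse each Trans ID|Time|Merchant ID|Fraud|Amount record and aggregate a fraud rate per merchant. It is a stand-alone illustration with synthetic records, not the distributed Sphere UDF used in the benchmark.

      // Illustrative stage-2 CreditStone computation: aggregate the fraud rate
      // per merchant from pipe-delimited records. Synthetic input; not the
      // Sphere UDF used in the benchmark.
      #include <cstdio>
      #include <map>
      #include <sstream>
      #include <string>
      #include <vector>

      struct Tally { long total; long fraud; };

      int main()
      {
          // A few synthetic records in Trans ID|Time|Merchant ID|Fraud|Amount form.
          std::vector<std::string> records = {
              "01491200300|2007-09-27|2451330|0|66.49",
              "01491200301|2007-09-27|2451330|1|12.00",
              "01491200302|2007-09-28|7700012|0|230.10"
          };

          std::map<std::string, Tally> perMerchant;   // zero-initialized tallies
          for (const std::string& rec : records) {
              std::stringstream ss(rec);
              std::string transId, time, merchant, fraud, amount;
              std::getline(ss, transId, '|');
              std::getline(ss, time, '|');
              std::getline(ss, merchant, '|');
              std::getline(ss, fraud, '|');
              std::getline(ss, amount, '|');

              Tally& t = perMerchant[merchant];
              t.total += 1;
              if (fraud == "1") t.fraud += 1;
          }

          for (const auto& kv : perMerchant) {
              double rate = (double)kv.second.fraud / (double)kv.second.total;
              std::printf("merchant %s: fraud rate %.4f (%ld of %ld)\n",
                          kv.first.c_str(), rate, kv.second.fraud, kv.second.total);
          }
          return 0;
      }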
    • Performance Results: CreditStone [Previous run]
      • JHU (30 nodes, 840GB, 15B rows): Hadoop 179 min, Sphere with index 46 min, Sphere w/o index 36 min
      • JHU + SL (59 nodes, 1652GB, 29.5B rows): Hadoop 180 min, Sphere with index 47 min, Sphere w/o index 37 min
      • JHU + SL + Calit2 (89 nodes, 2492GB, 44.5B rows): Hadoop 191 min, Sphere with index 64 min, Sphere w/o index 53 min
      • JHU + SL + Calit2 + UIC (117 nodes, 3276GB, 58.5B rows): Hadoop 189 min, Sphere with index 71 min, Sphere w/o index 55 min
    • Performance Results: CreditStone SC08 Tue Night
      • Process 3TB (53.5 Billion records) on 101 nodes
        • UIC: 24 + SL: 10 + JHU: 29 + Calit2: 29 + SC08: 9
      • Stage 1 vs. Stage 2: 3022 sec vs. 471 sec
      • Scan and hash each record
        • Per rack (in/out): UIC 157/248GB; SL 84/122GB; JHU 229/135GB; Calit2 222/154GB; SC08 77/111GB
        • Per node (in/out): 9.7GB AVG
        • System total AVG 5.5Gb/s (Peak 7.2Gb/s)
      • Compute fraud rate per merchant
        • No network IO
    • BWC App #3: Cistrack SC08 Tue Night
      • 10Gb/s connections from SC08 and from JGN2 (Japan) to the OCT, via CiscoWave and JGN2 respectively
      • Upload Cistrack data from SC08 & JGN2 (Japan) to OCT
        • Approx 8Gb/s
      • Download Cistrack data from OCT to SC08 & JGN2 (Japan)
        • Approx 8Gb/s
    • System Monitoring (Testbed)
    • System Monitoring (Sector/Sphere)
    • Summary
      • Aggregated bandwidth
        • Upload/Download SC08 <=> OCT: 8Gb/s each direction
        • TeraSort on OCT: 9.6Gb/s AVG, 20Gb/s Peak
        • CreditStone on OCT: 5.5Gb/s AVG, 7.2Gb/s Peak
      • Use of distributed clients
        • 101 nodes in 4 locations (Chicago, Baltimore, San Diego, Austin)
      • Bandwidth monitoring
        • Sector/Sphere application monitoring
        • Testbed system monitoring
        • Both also monitor other resources (CPU, memory, etc.)
    • Summary (cont.)
      • Usability
        • Client tools and C++ library (can call functions/applications written in other languages)
        • Sector is just like a distributed file system
        • Terasort: 60 lines of Sphere code, plus application specific code
        • CreditStone: 170 lines of Sphere code, plus application specific code
      • Real world Applicability
        • Software: Open Source Sector/Sphere
        • System: Open Cloud Testbed
          • The software can be installed on heterogeneous clusters spanning one or multiple data centers
        • Real applications: bioinformatics, sorting and credit card transaction processing (and many other applications!)
    • Summary (cont.)
      • Use of distributed HPC resources
        • Open cloud testbed (4 distributed racks + SC08 booth)
        • 10Gb/s CiscoWave
      • Conference floor resources
        • One 10Gb/s SCinet connection to CiscoWave
        • 9 servers + 1 10GE switch
        • Participating as slave (storage + computing) nodes for Sphere applications
    • For More Information
      • Sector/Sphere code & docs: http://sector.sf.net
      • Open Cloud Consortium: http://www.opencloudconsortium.org
      • NCDM: http://www.ncdm.uic.edu
      • @SC08: Booth 321