Hadoop Summit 2010 - Research Track
Hadoop for Scientific Workloads
Lavanya Ramakrishnan, Lawrence Berkeley National Lab

Speaker Notes

  • Range of application classes with different models. IMG is part of a family of systems hosted at the DOE Joint Genome Institute (JGI); it supports analysis of microbial community metagenomes in the integrated context of all public reference isolate microbial genomes. Data pipeline, task-parallel workflow with image matching algorithms that should work; it might be heavy on the I/O side, but the other advantages might outweigh the performance cost. Data integration challenges: ~35 science data products, including atmospheric and land products; products are in different projections, resolutions (spatial and temporal), and times; data volume and processing requirements exceed desktop capacity.
  • There is a huge spectrum of scientific applications at LBL - high energy physics, eco-sciences, bioinformatics. These have a varied set of requirements and a need for unlimited compute cycles and data storage. NERSC and IT provide infrastructure and resources for these applications. Other groups in CRD work closely with the scientists to explore and develop the user interfaces, middleware, grid tools, data support, and infrastructure tools for monitoring that are required to facilitate scientific exploration. Cloud computing brings a new resource model of delivering "on-demand cycles at a cost" and a new set of programming models and tools. Many groups at LBL are interested in seeing how the different features of cloud computing would help them in their scientific explorations. In general, we need to explore the big question of how we work closely with scientists to deliver a more diverse set of services that do not just target the traditional HPC applications. Does it make it easier for us to do what we have traditionally been doing? Does it help us do things differently than before? Can it bring other users in?
  • Why Hadoop? What are the implications?
  • Part of the IMG family of systems hosted at the DOE Joint Genome Institute (JGI); supports analysis of microbial community metagenomes in the integrated context of all public reference isolate microbial genomes. Content maintenance consists of integrating new metagenome datasets with the reference genomes every 2-4 weeks, which involves running BLAST to identify pair-wise gene similarities between the new metagenomes and the reference genomes. The reference genome baseline is updated with new (~500) genomes every 4 months, which involves running BLAST to refresh pair-wise gene similarities between reference genomes, and between metagenomes and reference genomes; this takes about 3 weeks on a Linux cluster with 256 cores. The take-away point is that the databases keep growing, and BLAST is used heavily in the pipeline.
  • Hard limits in the Hadoop configuration (3 GB ulimit, but the DB is > 3 GB). Thrashing occurs when the DB does not fit in available memory: in the first iteration the job took 3.5 to 4.5 hours to finish, but an 80% subset of the DB that fits into memory takes half the time. Hadoop does not guarantee simultaneous availability of resources, so time to solution is hard to predict.
  • Here are the different features of cloud computing, and each is attractive to a class of users. a. Who doesn't want free cycles? The on-demand aspect is appealing: getting 10 CPUs for 1 hour now or getting 5 CPUs for 2 hours has the same cost. Combined with the idea that you don't have to wait for CPUs, this is also very attractive for batch queue users. b. The virtual environments that are commonplace tend to impose some overheads, but for large parametric studies such as BLAST the overhead might be acceptable. c. Users bear the brunt of OS and software upgrades - e.g., the Supernova Factory has a code base that works only on 32-bit systems, and as 64-bit systems become more commonplace they are restricted in where they can run. d. Science problems are exceeding current systems.
  • Science gateways, S3 storage, phasing

Hadoop for Scientific Workloads: Presentation Transcript

  • 1. Hadoop for Scientific Workloads
    • Lavanya Ramakrishnan
    • Shane Canon
    • Shreyas Cholia
    • Keith Jackson
    • John Shalf
    Lawrence Berkeley National Lab
  • 2. Example Scientific Applications
    • Integrated Microbial Genomes (IMG)
      • analysis of microbial community metagenomes in the integrated context of all public reference isolate microbial genomes
    • Supernova Factory
      • tools to measure the expansion of the universe and dark energy
      • task parallel workflow, large data volume
    • MODerate-resolution Imaging Spectroradiometer (MODIS)
      • two MODIS satellites in near-polar orbits
      • ~35 science data products including atmospheric and land products
      • products are in different projections and resolutions (spatial and temporal), and cover different times
  • 3. Supporting Science at LBL
    • Unlimited need for compute cycles and data storage
    • Tools and middleware to access resources
    (Diagram: scientists connect to HPC and IT resources through user interfaces, grid middleware, workflow tools, data management, etc.)
    • Does cloud computing
    • make it easier or better to do what we do?
    • help us do things differently than before?
    • help us include other users?
  • 4. Magellan – Exploring Cloud Computing
    • Test-bed to explore Cloud Computing for Science
    • National Energy Research Scientific Computing Center (NERSC)
    • Argonne Leadership Computing Facility (ALCF)
    • Funded by DOE under the American Recovery and Reinvestment Act (ARRA)
  • 5. Magellan Cloud at NERSC
    • 720 nodes, 5,760 cores in 9 Scalable Units (SUs): 61.9 teraflops
    • SU = IBM iDataPlex rack with 640 Intel Nehalem cores
    • 14 I/O nodes (shared), 18 login/network nodes
    • 1 petabyte with GPFS; NERSC Global Filesystem; HPSS (15 PB)
    • QDR IB fabric, 8G FC, 10G Ethernet, load balancer, 100-G router, ANI, Internet
    (Diagram: cluster architecture connecting the SUs, I/O and login nodes, storage, and networks)
  • 6. Magellan Research Agenda
    • What are the unique needs and features of a science cloud?
    • What applications can efficiently run on a cloud?
    • Are cloud computing programming models such as Hadoop effective for scientific applications?
    • Can scientific applications use a data-as-a-service or software-as-a-service model?
    • Is it practical to deploy a single logical cloud across multiple DOE sites?
    • What are the security implications of user-controlled cloud images?
    • What is the cost and energy efficiency of clouds?
  • 7. Hadoop for Science
    • Classes of applications
      • tightly coupled MPI applications, loosely coupled data-intensive science
      • use batch queue systems in supercomputing centers, local clusters and desktop
    • Advantages of Hadoop
      • transparent data replication, data locality aware scheduling
      • fault tolerance capabilities
    • Mode of operation
      • use streaming to launch a script that calls the executable (see the sketch after this slide)
      • HDFS for input, need shared file system for binary and database
      • input format
        • handle multi-line inputs (BLAST sequences), binary data (High Energy Physics)
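
A minimal sketch of the streaming mode of operation above (not the actual Magellan scripts): a Python mapper reads flattened "id<TAB>sequence" records from stdin and shells out to the legacy blastall binary against a database kept on a shared file system. The paths and the choice of blastp are assumptions for illustration.

    #!/usr/bin/env python
    # blast_mapper.py -- illustrative Hadoop Streaming mapper; paths are assumptions.
    # Multi-line FASTA inputs are assumed to have been flattened to one
    # "id<TAB>sequence" record per line when staged into HDFS.
    import subprocess
    import sys
    import tempfile

    BLAST_BIN = "/global/shared/bin/blastall"   # BLAST binary on a shared file system (assumed)
    BLAST_DB = "/global/shared/img/refgenes"    # reference database on a shared file system (assumed)

    def run_blast(seq_id, sequence):
        """Write one query to a temporary FASTA file and run BLAST against the shared DB."""
        with tempfile.NamedTemporaryFile("w", suffix=".fa", delete=False) as fasta:
            fasta.write(">%s\n%s\n" % (seq_id, sequence))
            query_path = fasta.name
        out = subprocess.check_output(
            [BLAST_BIN, "-p", "blastp", "-d", BLAST_DB, "-i", query_path, "-m", "8"])
        return out.decode()

    def main():
        for line in sys.stdin:
            line = line.strip()
            if not line:
                continue
            seq_id, sequence = line.split("\t", 1)
            # Emit "query_id<TAB>blast_hit" pairs; an identity reducer collects them.
            for hit in run_blast(seq_id, sequence).splitlines():
                print("%s\t%s" % (seq_id, hit))

    if __name__ == "__main__":
        main()

It would be launched with the standard streaming jar, e.g. hadoop jar hadoop-streaming.jar -input sequences -output hits -mapper blast_mapper.py -file blast_mapper.py.
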
  • 8. Hadoop Benchmarking: Early Results
    • Compare traditional parallel file systems to HDFS
      • 40 node Hadoop cluster where each node contains two Intel Nehalem quad-core processors
      • TeraGen and TeraSort to compare file system performance
        • 32 maps for TeraGen and 64 reduces for TeraSort over a terabyte of data
      • TestDFSIO to understand concurrency (see the benchmark sketch after this slide)
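
A hedged sketch of how the benchmark runs named above could be driven; the jar names and HDFS paths vary by installation and are assumptions here, while the 32-map / 64-reduce / 1 TB figures follow the slide.

    #!/usr/bin/env python
    # run_benchmarks.py -- illustrative driver for the benchmarks on this slide.
    # Jar names and HDFS paths vary by Hadoop installation; these are assumptions.
    import subprocess

    EXAMPLES_JAR = "hadoop-examples.jar"   # bundled examples jar (assumed name)
    TEST_JAR = "hadoop-test.jar"           # bundled test jar with TestDFSIO (assumed name)

    def run(args):
        print("running: " + " ".join(args))
        subprocess.check_call(args)

    # TeraGen: 10 billion 100-byte rows (~1 TB), written with 32 map tasks.
    run(["hadoop", "jar", EXAMPLES_JAR, "teragen",
         "-Dmapred.map.tasks=32", "10000000000", "/bench/teragen"])

    # TeraSort: sort the terabyte with 64 reduce tasks.
    run(["hadoop", "jar", EXAMPLES_JAR, "terasort",
         "-Dmapred.reduce.tasks=64", "/bench/teragen", "/bench/terasort"])

    # TestDFSIO: 64 concurrent writers of 1000 MB files to probe file system concurrency.
    run(["hadoop", "jar", TEST_JAR, "TestDFSIO",
         "-write", "-nrFiles", "64", "-fileSize", "1000"])
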
  • 9. IMG Systems: Genome & Metagenome Data Flow
    (Diagram: data flow among the IMG systems, with update cadences of every 4 months, monthly, and on demand; figures include 5,115 genomes / 6.5 million genes; ~350-500 genomes / ~0.5-1 million genes every 4 months; +330 genomes (158 GEBA) / 8.2 million genes; 65 samples: 21 studies / IMG +2.6 million genes / 9.1 million total; and +287 samples: ~105 studies / +12.5 million genes / 19 million genes)
  • 10. BLAST on Hadoop
    • NCBI BLAST (2.2.22)
      • reference IMG genomes of 6.5 million genes (~3 GB in size)
      • full input set: 12.5 million metagenome genes run against the reference
    • BLAST Hadoop
      • uses streaming to manage input data sequences
      • binary and databases on a shared file system
    • BLAST Task Farming Implementation
      • server reads inputs and manages the tasks
      • client runs blast, copies database to local disk or ramdisk once on startup, pushes back results
      • advantages: fault-resilient and allows incremental expansion as resources become available (see the task-farming sketch after this slide)
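
The task-farming implementation is described only at a high level; the sketch below illustrates the same pattern with Python's multiprocessing managers (a server hands out query files from a queue, each client copies the database to local disk once on startup, runs blastall, and pushes results back). Host names, ports, and paths are hypothetical, not the NERSC implementation.

    #!/usr/bin/env python
    # task_farm.py -- minimal sketch of the BLAST task-farming pattern described above.
    # Host names, ports, and file paths are hypothetical.
    import glob
    import os
    import queue
    import shutil
    import socket
    import subprocess
    import sys
    from multiprocessing.managers import BaseManager

    PORT, AUTHKEY = 50000, b"blastfarm"
    SHARED_DB = "/global/shared/img/refgenes"   # database on shared storage (assumed)
    LOCAL_DB = "/tmp/refgenes"                  # per-node copy on local disk or ramdisk

    class FarmManager(BaseManager):
        """Exposes a task queue and a result queue to remote clients."""

    def serve(input_dir):
        # Server reads the inputs and manages the tasks.
        tasks, results = queue.Queue(), queue.Queue()
        for path in sorted(glob.glob(os.path.join(input_dir, "*.fa"))):
            tasks.put(path)
        FarmManager.register("get_tasks", callable=lambda: tasks)
        FarmManager.register("get_results", callable=lambda: results)
        FarmManager(address=("", PORT), authkey=AUTHKEY).get_server().serve_forever()

    def work(server_host):
        # Copy the database to local disk once on startup, then loop over tasks.
        if not glob.glob(LOCAL_DB + ".*"):
            for part in glob.glob(SHARED_DB + ".*"):
                shutil.copy(part, os.path.dirname(LOCAL_DB))
        FarmManager.register("get_tasks")
        FarmManager.register("get_results")
        mgr = FarmManager(address=(server_host, PORT), authkey=AUTHKEY)
        mgr.connect()
        tasks, results = mgr.get_tasks(), mgr.get_results()
        while True:
            try:
                query = tasks.get(timeout=10)   # clients can join or leave at any time
            except queue.Empty:
                break
            out = subprocess.check_output(
                ["blastall", "-p", "blastp", "-d", LOCAL_DB, "-i", query, "-m", "8"])
            results.put((socket.gethostname(), query, out.decode()))  # push back results

    if __name__ == "__main__":
        serve(sys.argv[2]) if sys.argv[1] == "server" else work(sys.argv[2])

Run as "task_farm.py server /path/to/queries" on one node and "task_farm.py client server-host" on each worker.
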
  • 11. Hardware Platforms
    • Franklin: Traditional HPC System
      • 40k core, 360TFLOP Cray XT4 system at NERSC, Lustre parallel filesystem
    • Amazon EC2: Commercial “Infrastructure as a Service” Cloud
      • Configure and boot customized virtual machines in Cloud
    • Yahoo M45: Shared Research “Platform as a Service” Cloud
      • 400 nodes, 8 cores per node, Intel Xeon E5320, 6GB per compute node, 910.95TB
      • Hadoop/MapReduce service: HDFS and shared file system
    • Windows Azure BLAST “Software as a Service”
  • 12. BLAST Performance
  • 13. BLAST on Yahoo! M45 Hadoop
    • Initial configuration: Hadoop memory ulimit issues
      • Hadoop memory limits increased to accommodate high memory tasks
      • 1 map per node for high memory tasks to reduce contention
      • thrashing when DB does not fit in memory
    • NFS shared file system for common DB
      • move DB to local nodes (copy to local /tmp).
      • initial copy takes 2 hours, but then the BLAST job completes in < 10 minutes (see the localization sketch after this slide)
      • performance is equivalent to other cloud environments.
      • future: Experiment with Distributed Cache
    • Time to solution varies - no guarantee of simultaneous availability of resources
      • Strong user group and sysadmin support was key in working through this.
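
The "copy the DB to local /tmp once" fix above is described only in prose; one hedged way to do it from each streaming task is sketched below, using a lock file so that only the first task on a node performs the slow copy. Paths are assumptions.

    #!/usr/bin/env python
    # localize_db.py -- illustrative per-node database localization; paths are assumptions.
    # Intended to be called at the start of each map task; only the first task on a
    # node copies the database, later tasks find the completion flag and skip the copy.
    import fcntl
    import glob
    import os
    import shutil

    SHARED_DB = "/nfs/img/refgenes"    # database on the NFS shared file system (assumed)
    LOCAL_DIR = "/tmp/blastdb"         # node-local copy
    DONE_FLAG = os.path.join(LOCAL_DIR, ".complete")
    LOCK_FILE = "/tmp/blastdb.lock"

    def localize():
        """Return the local DB prefix, copying it from shared storage if not already done."""
        os.makedirs(LOCAL_DIR, exist_ok=True)
        with open(LOCK_FILE, "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)       # serialize tasks on the same node
            if not os.path.exists(DONE_FLAG):
                for part in glob.glob(SHARED_DB + ".*"):
                    shutil.copy(part, LOCAL_DIR)   # one slow copy per node, not per task
                open(DONE_FLAG, "w").close()
            fcntl.flock(lock, fcntl.LOCK_UN)
        return os.path.join(LOCAL_DIR, os.path.basename(SHARED_DB))

    if __name__ == "__main__":
        print(localize())

Hadoop's DistributedCache, mentioned on the slide as future work, distributes files to task nodes automatically and would replace this hand-rolled copy.
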
  • 14. HBase for Metagenomics
    • Output of “all vs. all” pairwise gene sequence comparisons
      • currently data stored in compressed files
        • modifying individual entries is challenging
        • queries are hard
      • duplication of data to ease presentation by different UI components
    • Evaluating a change to HBase (see the client sketch after this slide)
      • easily update individual rows and simple queries
      • query and update performance exceeds requirements
    • Challenge: Bulk loads of approximately 30 billion rows
      • trying multiple techniques for bulk loading
      • best practices are not well documented
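
The slides do not show any HBase code; the sketch below illustrates the row-update and query pattern with the happybase Python Thrift client. The table name, column family, and row-key scheme are hypothetical, and a real 30-billion-row load would use HBase's bulk-load tooling (e.g. ImportTsv plus completebulkload) rather than individual puts.

    #!/usr/bin/env python
    # hbase_pairs.py -- illustrative sketch of pairwise gene similarities in HBase.
    # Uses the happybase Thrift client; table, column family, and key scheme are hypothetical.
    import happybase

    connection = happybase.Connection("hbase-thrift-host")   # assumed Thrift gateway host
    table = connection.table("gene_similarities")

    def put_similarity(query_gene, subject_gene, bit_score):
        # Row key = query gene id; one column per subject gene, value = similarity score.
        table.put(query_gene, {b"hits:" + subject_gene: str(bit_score).encode()})

    # Updating a single entry in place is awkward with compressed flat files, trivial here.
    put_similarity(b"gene_000123", b"gene_987654", 142.5)

    # Simple query: all similarities recorded for one gene.
    for column, value in table.row(b"gene_000123").items():
        print(column.decode(), value.decode())

    # Range scan over a key prefix (e.g. all genes of one metagenome).
    for key, data in table.scan(row_prefix=b"gene_000"):
        pass  # process each row here

    connection.close()
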
  • 15. Magellan Application: De-novo assembly
    • Move data from disk to clustered memory
    • Move analysis pipeline from single-node to parallel map/reduce jobs (see the k-mer counting sketch after this slide)
    • Result: efficient horizontal scalability (more data -> add more nodes)
    • Private/public cloud
    • Memory requirements: ~500 GB (de Bruijn graph)
    • CPU hours (single assembly): velveth ~23 h, velvetg ~21 h
    Source: Karan Bhatia
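
The assembly slide states the goal of moving the pipeline to parallel map/reduce jobs without showing code; as a concrete illustration of one such step, the streaming mapper/reducer below counts k-mers, the building blocks of the de Bruijn graph that Velvet constructs. It is a generic sketch (k, input format, and file names are assumptions), not the Magellan pipeline.

    #!/usr/bin/env python
    # kmer_count.py -- illustrative Hadoop Streaming job for counting k-mers, a typical
    # first step when building a de Bruijn graph in parallel. The k-mer length and the
    # "id<TAB>sequence" input format are assumptions.
    import sys

    K = 21  # assumed k-mer length

    def mapper():
        """Emit one "kmer<TAB>1" line for every k-mer in every read on stdin."""
        for line in sys.stdin:
            seq = line.rstrip("\n").split("\t")[-1].upper()
            for i in range(len(seq) - K + 1):
                print("%s\t1" % seq[i:i + K])

    def reducer():
        """Streaming sorts mapper output by key, so equal k-mers arrive together."""
        current, count = None, 0
        for line in sys.stdin:
            kmer, n = line.rstrip("\n").split("\t")
            if kmer != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = kmer, 0
            count += int(n)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

It would run as a streaming job with -mapper "kmer_count.py map" and -reducer "kmer_count.py reduce".
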
  • 16. Summary
    • Deployment Challenges
      • all jobs run as user “hadoop” affecting file permissions
      • less control over how many nodes are used, which affects allocation policies
      • file system performance for large file sizes
    • Programming Challenges: No turn-key solution
      • using existing code bases, managing input formats and data
    • Performance
      • BLAST over Hadoop: performance is comparable to existing systems
      • existing parallel file systems can be used through Hadoop On Demand
    • Additional benchmarking, tuning needed
    • Plug-ins for Science
  • 17. Acknowledgements
    • This work was funded in part by the Advanced Scientific Computing Research (ASCR) program in the DOE Office of Science under contract number DE-AC02-05CH11231.
    • CITRIS/UC, Yahoo! M45, Greg Bell, Victor Markowitz, Rollin Thomas
  • 18. Questions?
    • [email_address]
  • 19. Cloud Usage Model
    • On-demand access to computing and cost associativity
    • Customized and controlled environments
      • e.g., Supernova Factory codes have sensitivity to OS/compiler versions
    • Overflow capacity to supplement existing systems
      • e.g., Berkeley Water Center has analysis that far exceeds capacity of desktops
    • Parallel programming models for data intensive science
      • e.g., BLAST parametric runs
  • 20. NERSC Magellan Software Strategy
    • Runtime provisioning of software images via Moab and xCAT
    • Explore a variety of usage models
    • Choice of local or remote cloud
    (Diagram: ANI and the Magellan cluster)