Hadoop: Distributed Data Processing

This is an updated version of Amr's Hadoop presentation, which he recently gave at the NASA CIDU event, the TDWI LA chapter, and Netflix HQ. Watch the PowerPoint version if you can, as it has animations. The slides also include handout notes with additional information.

Speaker Notes

  • The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
  • The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for the processing to happen on the same node that stores the associated data (or on a node co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to upload any unstructured files to it without having to “schematize” them first: you can dump any type of data into Hadoop, and the input record readers will abstract it out as if it were structured (i.e., schema on read vs. schema on write). Open source software allows for innovation by partners and customers; it also enables third-party inspection of the source code, which provides assurances on security and product quality. One HDD delivers 75 MB/sec, while 1,000 HDDs deliver 75 GB/sec: the “head of fileserver” bottleneck is eliminated.
  • Hadoop sorts a petabyte (http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html); hundreds of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
  • Speculative Execution, Data rebalancing, Background Checksumming, etc.
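    These features run automatically, but some can be tuned per job. As a minimal sketch (the property names below are the classic mapred-era configuration keys; the defaults shown are the stock ones), speculative execution can be toggled like this in Java:

    import org.apache.hadoop.conf.Configuration;

    public class SpeculationConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Speculatively re-run slow ("straggler") tasks on other nodes;
        // whichever attempt finishes first wins, and the other is killed.
        conf.setBoolean("mapred.map.tasks.speculative.execution", true);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", true);
        System.out.println("map speculation: "
            + conf.get("mapred.map.tasks.speculative.execution"));
      }
    }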
  • Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present on at least 3 separate data nodes. A typical Hadoop node has eight cores, 16GB of RAM, and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB.
  • Differentiate between MapReduce the platform and MapReduce the programming model; the analogy is to an RDBMS, which executes the queries, versus SQL, which is the language the queries are written in. MapReduce can run on top of HDFS or a selection of other storage systems, and uses intelligent scheduling algorithms for locality, sharing, and resource optimization.
  • Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
  • The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible. The NameNode and JobTracker are currently single points of failure that can affect the availability of the system by around 15 minutes (no data loss, though, so the system is reliable but can occasionally suffer from downtime). That issue is currently being addressed by the Apache Hadoop community using ZooKeeper.
  • HBase: low-latency random access with per-row consistency for updates/inserts/deletes.
    Java MapReduce: gives the most flexibility and performance, but with a potentially longer development cycle.
    Streaming MapReduce: allows you to develop in any language of your choice, but with slightly slower performance.
    Pig: a relatively new data-flow language (contributed by Yahoo), suitable for ETL-like workloads (procedural multi-stage jobs).
    Hive: a SQL warehouse on top of MapReduce (contributed by Facebook) that translates SQL into MapReduce.
    Hive features: a subset of SQL covering the most common statements; agile data types (Array, Map, Struct, and JSON objects); user-defined functions and aggregates; regular expression support; MapReduce support; JDBC support; partitions and buckets (for performance optimization). In the works: indices, columnar storage, views, Microstrategy compatibility, Explode/Collect. More details: http://wiki.apache.org/hadoop/Hive
    Hive language summary:
    Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL; subqueries in FROM; user-defined functions and aggregates; sampling (TABLESAMPLE)
    Join: LEFT, RIGHT, FULL, OUTER, INNER
    DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS
    DML: LOAD DATA INTO, FROM INSERT
    Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT
    Relational operators: IS NULL, IS NOT NULL, LIKE, REGEXP
    Built-in aggregates: COUNT, MAX, MIN, AVG, SUM
    Built-in functions: CAST, IF, REGEXP_REPLACE, …
    Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY
    List and map operators: array[i], map[k], struct.field
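    To make the JDBC support above concrete, here is a minimal, hypothetical Java client. The docs table, host, and query are illustrative assumptions; the driver class is the legacy org.apache.hadoop.hive.jdbc.HiveDriver of this era (newer HiveServer2 deployments use a different driver class and URL).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcExample {
      public static void main(String[] args) throws Exception {
        // Register the legacy Hive JDBC driver and connect to the Hive server.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();

        // Hive compiles this SQL into one or more MapReduce jobs.
        ResultSet rs = stmt.executeQuery(
            "SELECT word, COUNT(1) FROM docs GROUP BY word");
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        con.close();
      }
    }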
  • A sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-bit basis and expensive to maintain. A cargo train is rough, missing a lot of “luxury”, and slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically.
    Hadoop: a data grid operating system; stores files (unstructured); stores 10s of petabytes; processes 10s of PB per job; weak consistency; scans all blocks in all files; queries and data processing; batch response (>1 sec).
    Relational databases: an ACID database system; stores tables (schema); stores 100s of terabytes; processes 10s of TB per query; transactional consistency; looks up rows using an index; mostly queries; interactive response.
    Hadoop myths:
    Myth: Hadoop MapReduce requires rocket scientists. Reality: Hadoop has the benefit of both worlds, the simplicity of SQL and the power of Java (or any other language, for that matter).
    Myth: Hadoop is not very efficient hardware-wise. Reality: Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance. It is more cost-efficient to add “pizza box” servers to gain performance than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software.
    Myth: Hadoop can't do quick random lookups. Reality: HBase enables low-latency key-value pair lookups (though no fast joins).
    Myth: Hadoop doesn't support updates/inserts/deletes. Reality: not for multi-row transactions, but HBase enables transactions with row-level consistency semantics.
    Myth: Hadoop isn't highly available. Reality: though Hadoop rarely loses data, it can suffer from downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it.
    Myth: Hadoop can't be backed up or recovered quickly. Reality: HDFS, like other file systems, can copy files very quickly; it also has utilities to copy data between HDFS clusters.
    Myth: Hadoop doesn't have security. Reality: Hadoop has Unix-style user/group permissions, and the community is working on improving its security model.
    Myth: Hadoop can't talk to other systems. Reality: Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV, and FTP.
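    To illustrate the HBase claims above (low-latency random access, row-level consistency), here is a minimal, hypothetical sketch against the classic HTable client API (circa HBase 0.90); the users table, info column family, and row key are illustrative assumptions, not part of the original deck.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseLookupExample {
      public static void main(String[] args) throws Exception {
        // Assumes a running HBase cluster with a 'users' table
        // that has an 'info' column family (both hypothetical).
        HTable table = new HTable(HBaseConfiguration.create(), "users");

        // Single-row write: atomic, with row-level consistency.
        Put put = new Put(Bytes.toBytes("user42"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("email"),
            Bytes.toBytes("user42@example.com"));
        table.put(put);

        // Low-latency random read by row key; no MapReduce job involved.
        Result result = table.get(new Get(Bytes.toBytes("user42")));
        System.out.println(Bytes.toString(
            result.getValue(Bytes.toBytes("info"), Bytes.toBytes("email"))));
        table.close();
      }
    }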

Transcript

  • 1. Hadoop: Distributed Data Processing
  • 2. Outline
    Scaling for Large Data Processing
    What is Hadoop?
    HDFS and MapReduce
    Hadoop Ecosystem
    Hadoop vs. RDBMSes
    Conclusion
  • 3. Current Storage Systems Can’t Compute
    Ad hoc Queries &
    Data Mining
    Interactive Apps
    RDBMS (200GB/day)
    ETL Grid
    Non-Consumption
    Filer heads are a bottleneck
    Storage Farm for Unstructured Data (20TB/day)
    Mostly Append
    Collection
    Instrumentation
  • 4. The Solution: A Store-Compute Grid
    Interactive Apps
    “Batch” Apps
    RDBMS
    Ad hoc Queries
    & Data Mining
    ETL and Aggregations
    Storage + Computation
    Mostly Append
    Collection
    Instrumentation
  • 5. What is Hadoop?
    A scalable fault-tolerant grid operating system for data storage and processing
    Its scalability comes from the marriage of:
    HDFS: Self-Healing High-Bandwidth Clustered Storage
    MapReduce: Fault-Tolerant Distributed Processing
    Operates on unstructured and structured data
    A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
    Open source under the friendly Apache License
    http://wiki.apache.org/hadoop/
  • 6. Hadoop History
    2002-2004: Doug Cutting and Mike Cafarella started working on Nutch
    2003-2004: Google publishes GFS and MapReduce papers
    2004: Cutting adds DFS & MapReduce support to Nutch
    2006: Yahoo! hires Cutting, Hadoop spins out of Nutch
    2007: NY Times converts 4TB of archives over 100 EC2 instances
    2008: Web-scale deployments at Y!, Facebook, Last.fm
    April 2008: Yahoo does fastest sort of a TB, 3.5 mins over 910 nodes
    May 2009:
    Yahoo does fastest sort of a TB, 62 secs over 1460 nodes
    Yahoo sorts a PB in 16.25 hours over 3658 nodes
    June 2009, Oct 2009: Hadoop Summit (750 attendees), Hadoop World (500 attendees)
    September 2009: Doug Cutting joins Cloudera
  • 7. Hadoop Design Axioms
    System Shall Manage and Heal Itself
    Performance Shall Scale Linearly
    Compute Should Move to Data
    Simple Core, Modular and Extensible
  • 8. HDFS: Hadoop Distributed File System
    Block Size = 64MB
    Replication Factor = 3
    Cost/GB is a few ¢/month vs $/month
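    To make the write-once/read-many model concrete, here is a minimal sketch against the HDFS Java API; the file path is hypothetical, and the block size and replication it prints are whatever the cluster configuration assigns (64MB and 3 by default).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
      public static void main(String[] args) throws Exception {
        // Picks up fs.default.name, dfs.block.size, dfs.replication, etc.
        // from the core-site.xml / hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write once: create a file and stream data into it.
        Path path = new Path("/tmp/example.txt"); // hypothetical path
        FSDataOutputStream out = fs.create(path);
        out.writeBytes("hello hdfs\n");
        out.close();

        // Read many: inspect the block size and replication the file got.
        FileStatus status = fs.getFileStatus(path);
        System.out.println("block size:  " + status.getBlockSize());
        System.out.println("replication: " + status.getReplication());
      }
    }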
  • 9. MapReduce: Distributed Processing
  • 10. MapReduce Example for Word Count
    SELECT word, COUNT(1) FROM docs GROUP BY word;
    cat *.txt | mapper.pl | sort | reducer.pl > out.txt
    [Diagram: N input splits of (docid, text) records feed map tasks 1..M; each map emits (word, count) pairs, which the shuffle sorts and partitions by word; reduce tasks 1..R each receive all pairs for their words, sum the counts, and write one output file of (sorted word, sum of counts). For example, partial counts (Be, 5), (Be, 12), (Be, 7), and (Be, 6) from “To Be Or Not To Be?” and other documents combine into (Be, 30).]
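    The diagram maps directly onto code. For reference, here is the canonical Java MapReduce word count in the Hadoop 0.20 API (input and output paths come from the command line); it is shown as a minimal sketch rather than production code.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map: (docid, line of text) -> (word, 1) for every word in the line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce: the shuffle groups pairs by word, so we just sum the counts.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // Combiner pre-aggregates on the map side to shrink the shuffle.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

    Run with something like: hadoop jar wordcount.jar WordCount in/ out/ (the jar name and paths are hypothetical).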
  • 11. Hadoop High-Level Architecture
    Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
    Name Node: maintains the mapping of file blocks to data node slaves
    Job Tracker: schedules jobs across task tracker slaves
    Data Node: stores and serves blocks of data
    Task Tracker: runs tasks (work units) within a job
    A Data Node and a Task Tracker share each physical node
  • 12. Apache Hadoop Ecosystem
    BI Reporting
    ETL Tools
    RDBMS
    Hive (SQL)
    Sqoop
    Pig (Data Flow)
    MapReduce (Job Scheduling/Execution System)
    (Streaming/Pipes APIs)
    HBase (key-value store)
    Avro (Serialization)
    ZooKeeper (Coordination)
    HDFS (Hadoop Distributed File System)
  • 13. Use The Right Tool For The Right Job
    Hadoop: when to use?
    • Affordable Storage/Compute
    • Structured or Not (Agility)
    • Resilient Auto Scalability
    Relational Databases: when to use?
    • Interactive Reporting (<1sec)
    • Multistep Transactions
    • Interoperability
  • Economics of Hadoop
    Typical Hardware:
    Two Quad Core Nehalems
    24GB RAM
    12 * 1TB SATA disks (JBOD mode, no need for RAID)
    1 Gigabit Ethernet card
    Cost: ~$5K/node
    Effective HDFS Space:
    ¼ is reserved for temp shuffle space, which leaves 9TB/node
    3-way replication leaves 3TB of effective HDFS space/node
    But assuming 7x compression, that becomes ~20TB/node
    Effective Cost per user TB: $250/TB
    Other solutions cost in the range of $5K to $100K per user TB
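    Spelling out the arithmetic: 12 x 1TB = 12TB raw per node; reserving ¼ for shuffle space leaves 9TB; dividing by 3 for replication leaves 3TB of effective HDFS space; ~7x compression brings that to about 21TB, i.e. roughly 20TB per node; and $5,000 per node / 20TB = $250 per user TB.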
  • 18. Sample Talks from Hadoop World ‘09
    VISA: Large Scale Transaction Analysis
    JP Morgan Chase: Data Processing for Financial Services
    China Mobile: Data Mining Platform for Telecom Industry
    Rackspace: Cross Data Center Log Processing
    Booz Allen Hamilton: Protein Alignment using Hadoop
    eHarmony: Matchmaking in the Hadoop Cloud
    General Sentiment: Understanding Natural Language
    Yahoo!: Social Graph Analysis
    Visible Technologies: Real-Time Business Intelligence
    Facebook: Rethinking the Data Warehouse with Hadoop and Hive
    Slides and Videos at http://www.cloudera.com/hadoop-world-nyc
  • 19. Cloudera Desktop
  • 20. Conclusion
    Hadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
  • 21. Contact Information
    Amr Awadallah
    CTO, Cloudera Inc.
    aaa@cloudera.com
    http://twitter.com/awadallah
    Online Training Videos and Info:
    http://cloudera.com/hadoop-training
    http://cloudera.com/blog
    http://twitter.com/cloudera