Outline
- What is Hadoop? Overview of HDFS and MapReduce
- How Hadoop Augments an RDBMS
- Industry Business Needs:
  - Data Consolidation (Structured or Not)
  - Data Schema Agility (Evolve Schema Fast)
  - Query Language Flexibility (Data Engineering)
  - Data Economics (Store More for Longer)
- Conclusion
What is Hadoop?
- A scalable, fault-tolerant distributed system for data storage and processing
- Its scalability comes from the marriage of:
  - HDFS: self-healing, high-bandwidth clustered storage
  - MapReduce: fault-tolerant distributed processing
- Operates on structured and complex data
- A large and active ecosystem (many developers and additions like HBase, Hive, Pig, ...)
- Open source under the Apache License: http://wiki.apache.org/hadoop/
Hadoop History
- 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch
- 2003-2004: Google publishes the GFS and MapReduce papers
- 2004: Cutting adds DFS and MapReduce support to Nutch
- 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
- 2007: NY Times converts 4TB of archives over 100 Amazon EC2 instances
- 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
- April 2008: Yahoo! does the fastest sort of a TB: 3.5 minutes over 910 nodes
- May 2009: Yahoo! does the fastest sort of a TB: 62 seconds over 1,460 nodes; Yahoo! sorts a PB in 16.25 hours over 3,658 nodes
- June 2009, October 2009: Hadoop Summit, Hadoop World
- September 2009: Doug Cutting joins Cloudera
Hadoop Design Axioms
- System Shall Manage and Heal Itself
- Performance Shall Scale Linearly
- Compute Shall Move to Data
- Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System
- Default block size: 64MB
- Default replication factor: 3
- Cost/GB: a few cents per month, vs. dollars per month for traditional enterprise storage
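As a minimal sketch of how these parameters surface in the client API (the cluster URI and file path here are hypothetical), the Java FileSystem interface lets a writer pick the replication factor and block size on a per-file basis:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The NameNode address is an assumption for this sketch.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        // create(path, overwrite, bufferSize, replication, blockSize):
        // 3 replicas and 64MB blocks, matching the defaults above.
        FSDataOutputStream out = fs.create(
            new Path("/data/example.txt"), true, 4096,
            (short) 3, 64L * 1024 * 1024);
        out.writeUTF("hello hdfs");
        out.close();
      }
    }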
Typical Hadoop Architecture
[Architecture diagram: Data Collection feeds Hadoop (storage and batch processing, worked on by engineers); Hadoop feeds an OLAP data mart backing Business Intelligence for business users, and an OLTP data store backing an interactive application for end customers.]
Complex Data is Growing Really Fast
Gartner, 2009:
- Enterprise data will grow 650% in the next 5 years.
- 80% of this data will be unstructured (complex) data.
IDC, 2008:
- 85% of all corporate information is in unstructured (complex) forms.
- Growth of unstructured data (61.7% CAGR) will far outpace that of transactional data.
Data Consolidation: One Place For All
Complex data: documents, web feeds, system logs, online forums, SharePoint, sensor data, email archives, images/video
Structured ("relational") data: CRM, financials, logistics, data marts, inventory, sales records, HR records, web profiles
A single data system to enable processing across the universe of data types.
Data Agility: Schema on Read vs. Schema on Write
Schema-on-Write (the RDBMS model):
- Schema must be created before any data can be loaded.
- An explicit load operation has to take place which transforms the data to the internal structure of the database.
- New columns must be added explicitly before data for such columns can be loaded into the database.
Schema-on-Read (the Hadoop model):
- Data is simply copied into the file store; no transformation is needed at load time.
- A parser (e.g., a SerDe in Hive) extracts the required columns at read time, so new fields can start flowing at any time and the schema can evolve without reloading data.
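A toy illustration of the schema-on-read side, with hypothetical field names: the file holds raw delimited text, and the "schema" is nothing more than parsing logic applied at read time:

    // Schema-on-read in miniature: raw tab-delimited lines are stored as-is;
    // the column structure lives only in the reader. Adding a column later
    // means updating this parser, not transforming and reloading the data.
    public class SchemaOnReadExample {

      static class WebLog {           // hypothetical record type
        final String ip;
        final String url;
        final long bytes;
        WebLog(String ip, String url, long bytes) {
          this.ip = ip; this.url = url; this.bytes = bytes;
        }
      }

      static WebLog parse(String rawLine) {
        String[] f = rawLine.split("\t");
        return new WebLog(f[0], f[1], Long.parseLong(f[2]));
      }

      public static void main(String[] args) {
        WebLog r = parse("10.0.0.1\t/index.html\t2326");
        System.out.println(r.ip + " fetched " + r.url + " (" + r.bytes + " bytes)");
      }
    }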
Java MapReduce: Gives the most flexibility and performance, but a potentially long development cycle; the "assembly language" of Hadoop (see the word-count sketch after this list).
Streaming MapReduce: Allows you to develop in any programming language of your choice, at the cost of slightly lower performance and flexibility.
Pig: A relatively new language out of Yahoo!, suitable for batch dataflow workloads.
Hive: A SQL interpreter on top of MapReduce; it also includes a meta-store mapping files to their schemas and associated SerDes. Hive also supports user-defined functions and pluggable MapReduce streaming functions in any language.
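For concreteness, here is the canonical word-count job in the Java MapReduce API; this is the standard textbook example (the same computation as the SELECT word, COUNT(1) FROM docs GROUP BY word query on the MapReduce slide later in this deck), not code from this presentation:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map: emit (word, 1) for every token in the input line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce: sum the counts for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) sum += val.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }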
Return on Byte (ROB) = value to be extracted from that byte / cost of storing that byte.
If ROB < 1, the data gets buried in the tape wasteland; hence the need for cheaper active storage.
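Worked example, with a made-up value figure for illustration: if a terabyte of data yields $1,000 of extractable value, then on a storage system costing $5K per user TB the ROB is 0.2 and the data would normally be archived to tape; on Hadoop at roughly $250 per user TB (see the economics slide later in this deck), the same terabyte has an ROB of 4 and can stay in active storage.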
Case Studies: Hadoop World '09
- VISA: Large Scale Transaction Analysis
- JP Morgan Chase: Data Processing for Financial Services
- China Mobile: Data Mining Platform for Telecom Industry
- Rackspace: Cross Data Center Log Processing
- Booz Allen Hamilton: Protein Alignment using Hadoop
- eHarmony: Matchmaking in the Hadoop Cloud
- General Sentiment: Understanding Natural Language
- Yahoo!: Social Graph Analysis
- Visible Technologies: Real-Time Business Intelligence
- Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and videos at http://www.cloudera.com/hadoop-world-nyc
Conclusion
Hadoop is a scalable, distributed data processing system which enables:
- Consolidation (Structured or Not)
- Data Agility (Evolving Schemas)
- Query Flexibility (Any Language)
- Economical Storage (ROB > 1)
Contact Information
Amr Awadallah, CTO, Cloudera Inc.
email@example.com
http://twitter.com/awadallah
Online training videos and info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera
MapReduce: The Programming Model
In SQL terms: SELECT word, COUNT(1) FROM docs GROUP BY word;
In Unix terms: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
[Dataflow diagram: N input splits of (docid, text) pairs feed M map tasks, each emitting (word, count) pairs; the shuffle sorts and routes these to R reduce tasks, each writing an output file of (sorted word, sum of counts) pairs. For example, the counts of "Be" in "To Be Or Not To Be?" emitted by different maps all arrive at the same reducer, which sums them.]
Hadoop High-Level Architecture
- Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
- Name Node: maintains the mapping of file blocks to Data Node slaves
- Job Tracker: schedules jobs across Task Tracker slaves
- Data Node: stores and serves blocks of data
- Task Tracker: runs tasks (work units) within a job
- Data Nodes and Task Trackers share the same physical nodes
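A small sketch of the client path (the Name Node address here is an assumption): metadata operations such as a directory listing are answered entirely by the Name Node, while actual block reads and writes go directly to the Data Nodes:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListRoot {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
            URI.create("hdfs://namenode:8020"), new Configuration());
        // listStatus is served from Name Node metadata alone;
        // no Data Node is contacted for a listing.
        for (FileStatus s : fs.listStatus(new Path("/"))) {
          System.out.println(s.getPath() + "\t" + s.getLen());
        }
      }
    }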
Economics of Hadoop Storage
Typical hardware:
- Two quad-core Nehalems
- 24GB RAM
- 12 x 1TB SATA disks (JBOD mode, no need for RAID)
- 1 Gigabit Ethernet card
- Cost: ~$5K/node
Effective HDFS space:
- 1/4 reserved for temporary shuffle space, which leaves 9TB/node
- 3-way replication leads to 3TB of effective HDFS space/node
- Assuming ~7x compression, that becomes ~20TB of user data/node
Effective cost per user TB: ~$250/TB, vs. roughly $5K to $100K per user TB for other solutions.