Hadoop: An Industry Perspective

Keynote that Amr Awadallah (Cloudera CTO and co-founder) delivered at MDAC'2010 (Massive Data Analytics over the Cloud).

Speaker notes:
  • The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for the processing to happen on the same node storing the associated data (or co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to upload any unstructured files to it without having to "schematize" them first: you can dump any type of data into Hadoop, and the input record readers will abstract it out as if it were structured (i.e. schema on read vs. on write). Open source software allows for innovation by partners and customers; it also enables third-party inspection of source code, which provides assurances on security and product quality. 1 HDD = 75 MB/sec, so 1,000 HDDs = 75 GB/sec: the "head of fileserver" bottleneck is eliminated.
  • http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html; 100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
  • Speculative Execution, Data rebalancing, Background Checksumming, etc.
  • Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present in at least 3 separate data nodes. A typical Hadoop node is eight cores with 16GB RAM and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB.
  • Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to the RDBMS, which executes the queries, and SQL, which is the language for the queries. MapReduce can run on top of HDFS or a selection of other storage systems. Intelligent scheduling algorithms for locality, sharing, and resource optimization.
  • HBase: Low Latency Random-Access with per-row consistency for updates/inserts/deletes
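    For illustration, a minimal sketch of that access pattern from client code, using the classic (0.20-era) HBase Java client; the table, family, and row names here are hypothetical:

      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.HTable;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.util.Bytes;

      public class HBaseLookupExample {
        public static void main(String[] args) throws Exception {
          // Connects using the hbase-site.xml found on the classpath.
          HTable table = new HTable(new HBaseConfiguration(), "users");

          // Single-row write: atomic, with row-level consistency.
          Put put = new Put(Bytes.toBytes("user42"));
          put.add(Bytes.toBytes("profile"), Bytes.toBytes("email"),
                  Bytes.toBytes("user42@example.com"));
          table.put(put);

          // Low-latency random read by row key (no scan of all blocks).
          Result row = table.get(new Get(Bytes.toBytes("user42")));
          System.out.println(Bytes.toString(
              row.getValue(Bytes.toBytes("profile"), Bytes.toBytes("email"))));
          table.close();
        }
      }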
  • Sports car analogy: the sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-byte basis and expensive to maintain. The cargo train is rough and missing a lot of "luxury", slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically.
    Relational databases: an ACID database system; stores tables (schema); stores 100s of terabytes; processes 10s of TB/query; transactional consistency; looks up rows using an index; mostly queries; interactive response.
    Hadoop: a data grid operating system; stores files (unstructured); stores 10s of petabytes; processes 10s of PB/job; weak consistency; scans all blocks in all files; queries and data processing; batch response (>1 sec).
    Hadoop myths:
    - "Hadoop MapReduce requires rocket scientists": Hadoop has the benefit of both worlds, the simplicity of SQL and the power of Java (or any other language, for that matter).
    - "Hadoop is not very efficient hardware-wise": Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance; it is more cost-efficient to add "pizza box" servers than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software.
    - "Hadoop can't do quick random lookups": HBase enables low-latency key-value pair lookups (though no fast joins).
    - "Hadoop doesn't support updates/inserts/deletes": not multi-row transactions, but HBase enables transactions with row-level consistency semantics.
    - "Hadoop isn't highly available": though Hadoop rarely loses data, it can suffer from downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it.
    - "Hadoop can't be backed up/recovered quickly": HDFS, like other file systems, can copy files very quickly; it also has utilities to copy data between HDFS clusters.
    - "Hadoop doesn't have security": Hadoop has Unix-style user/group permissions, and the community is working on improving its security model.
    - "Hadoop can't talk to other systems": Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV, and FTP.
  • The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
  • Hive features: a subset of SQL covering the most common statements; agile data types (ARRAY, MAP, STRUCT, and JSON objects); user-defined functions and aggregates; regular expression support; MapReduce streaming support; JDBC/ODBC support; partitions and buckets (for performance optimization). In the works: indices, columnar storage, views, MicroStrategy compatibility, Explode/Collect. More details: http://wiki.apache.org/hadoop/Hive
    Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL. Also subqueries in FROM, user-defined functions, user-defined aggregates, and sampling (TABLESAMPLE).
    Join: LEFT, RIGHT, FULL, OUTER, INNER.
    DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS.
    DML: LOAD DATA INTO, FROM INSERT.
    Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT.
    Relational: IS NULL, IS NOT NULL, LIKE, REGEXP.
    Built-in aggregates: COUNT, MAX, MIN, AVG, SUM.
    Built-in functions: CAST, IF, REGEXP_REPLACE, …
    Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY.
    List and map operators: array[i], map[k], struct.field
  • Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
  • The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible. The NameNode and JobTracker are currently single points of failure which can affect the availability of the system by around 15 mins (no data loss though, so the system is reliable, but can suffer from occasional downtime). That issue is currently being addressed by the Apache Hadoop community using ZooKeeper.

Hadoop: An Industry Perspective (title slide)

Outline
  What is Hadoop?
  Overview of HDFS and MapReduce
  How Hadoop augments an RDBMS
  Industry business needs:
    Data Consolidation (Structured or Not)
    Data Schema Agility (Evolve Schema Fast)
    Query Language Flexibility (Data Engineering)
    Data Economics (Store More for Longer)
  Conclusion

What is Hadoop?
  A scalable, fault-tolerant distributed system for data storage and processing.
  Its scalability comes from the marriage of:
    HDFS: self-healing, high-bandwidth clustered storage
    MapReduce: fault-tolerant distributed processing
  Operates on structured and complex data.
  A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …).
  Open source under the Apache License.
  http://wiki.apache.org/hadoop/

Hadoop History
  2002-2004: Doug Cutting and Mike Cafarella started working on Nutch
  2003-2004: Google publishes GFS and MapReduce papers
  2004: Cutting adds DFS & MapReduce support to Nutch
  2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
  2007: NY Times converts 4TB of archives over 100 EC2s
  2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
  April 2008: Yahoo! does fastest sort of a TB, 3.5 mins over 910 nodes
  May 2009:
    Yahoo! does fastest sort of a TB, 62 secs over 1460 nodes
    Yahoo! sorts a PB in 16.25 hours over 3658 nodes
  June 2009, Oct 2009: Hadoop Summit, Hadoop World
  September 2009: Doug Cutting joins Cloudera

Hadoop Design Axioms
  System Shall Manage and Heal Itself
  Performance Shall Scale Linearly
  Compute Shall Move to Data
  Simple Core, Modular and Extensible

HDFS: Hadoop Distributed File System
  Block Size = 64MB
  Replication Factor = 3
  Cost/GB is a few ¢/month vs $/month

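For illustration, a minimal sketch of writing a file to HDFS and reading those two settings back through the Java FileSystem API. The path is hypothetical, and the cluster configuration is assumed to be on the classpath:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
      public static void main(String[] args) throws Exception {
        // Picks up fs.default.name etc. from the Hadoop config on the classpath.
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/data/events/2010-03-01.log");  // hypothetical path

        // Block size and replication default to the cluster settings (64MB, 3).
        FSDataOutputStream out = fs.create(p);
        out.writeBytes("event-id\ttimestamp\tpayload\n");
        out.close();

        FileStatus st = fs.getFileStatus(p);
        System.out.println("block size  = " + st.getBlockSize());
        System.out.println("replication = " + st.getReplication());
      }
    }
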
MapReduce: Distributed Processing

Apache Hadoop Ecosystem (layered, top to bottom)
  BI Reporting, ETL Tools, RDBMS
  Hive (SQL), Sqoop, Pig (Data Flow)
  MapReduce (Job Scheduling/Execution System), with Streaming/Pipes APIs
  HBase (key-value store), Avro (Serialization), ZooKeeper (Coordination)
  HDFS (Hadoop Distributed File System)

Use The Right Tool For The Right Job
  Hadoop, when to use?
    Affordable Storage/Compute
    Structured or Not (Agility)
    Resilient Auto Scalability
  Relational Databases, when to use?
    Interactive Reporting (<1 sec)
    Multistep Transactions
    Lots of Inserts/Updates/Deletes

Typical Hadoop Architecture (diagram)
  Data Collection -> Hadoop: Storage and Batch Processing (run by Engineers)
  Hadoop -> OLAP Data Mart -> Business Intelligence -> Business Users
  Hadoop -> OLTP Data Store -> Interactive Application -> End Customers

Complex Data is Growing Really Fast
  Gartner, 2009:
    Enterprise data will grow 650% in the next 5 years.
    80% of this data will be unstructured (complex) data.
  IDC, 2008:
    85% of all corporate information is in unstructured (complex) forms.
    Growth of unstructured data (61.7% CAGR) will far outpace that of transactional data.

Data Consolidation: One Place For All
  Complex data: documents, web feeds, system logs, online forums, SharePoint, sensor data, EMB archives, images/video
  Structured ("relational") data: CRM, financials, logistics, data marts, inventory, sales records, HR records, web profiles
  A single data system to enable processing across the universe of data types.

Data Agility: Schema on Read vs Write
  Schema-on-Write:
    Schema must be created before data is loaded.
    An explicit load operation has to take place which transforms the data to the internal structure of the database.
    New columns must be added explicitly before data for such columns can be loaded into the database.
    Read is fast.
    Standards/governance.
  Schema-on-Read (see the sketch after this list):
    Data is simply copied to the file store; no special transformation is needed.
    A SerDe (Serializer/Deserializer) is applied at read time to extract the required columns.
    New data can start flowing any time and will appear retroactively once the SerDe is updated to parse it.
    Load is fast.
    Evolving schemas/agility.

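To make the schema-on-read idea concrete, here is an illustrative Java sketch (not Hive's actual SerDe interface): the raw bytes never change on disk, and "adding a column" is just a change to the parser applied at read time. The field names are hypothetical.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LogDeserializer {
      // v1 of the schema knew two columns; v2 adds "referrer". Because parsing
      // happens at read time, files written before v2 expose the new column
      // (as null) retroactively, with no reload of the data.
      private final String[] columns;

      public LogDeserializer(String... columns) { this.columns = columns; }

      public Map<String, String> deserialize(String rawLine) {
        String[] fields = rawLine.split("\t", -1);
        Map<String, String> row = new LinkedHashMap<String, String>();
        for (int i = 0; i < columns.length; i++) {
          row.put(columns[i], i < fields.length ? fields[i] : null);
        }
        return row;
      }

      public static void main(String[] args) {
        String raw = "2010-03-01\t/index.html";  // written long before v2 existed
        LogDeserializer v2 = new LogDeserializer("date", "url", "referrer");
        System.out.println(v2.deserialize(raw));
        // prints {date=2010-03-01, url=/index.html, referrer=null}
      }
    }
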
Query Language Flexibility
  Java MapReduce: gives the most flexibility and performance, but a potentially long development cycle (the "assembly language" of Hadoop).
  Streaming MapReduce: allows you to develop in any programming language of your choice, with slightly lower performance and less flexibility (see the stdin/stdout sketch below).
  Pig: a relatively new language out of Yahoo!, suitable for batch dataflow workloads.
  Hive: a SQL interpreter on top of MapReduce; also includes a metastore mapping files to their schemas and associated SerDes. Hive also supports user-defined functions and pluggable MapReduce streaming functions in any language.

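As a sketch of the streaming contract: a mapper is just an executable that reads lines on stdin and emits key<TAB>value lines on stdout. It is written in Java here for consistency with the other examples, though in practice Perl or Python is the more common choice:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class StreamingWordMapper {
      public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
          for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) {
              System.out.println(word + "\t1");  // key <TAB> value
            }
          }
        }
      }
    }

Such a mapper, plus a matching reducer that sums the values per key, is wired into a job with the hadoop-streaming contrib jar and its -input, -output, -mapper, and -reducer options.
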
Hive Extensible Data Types
  STRUCTS: SELECT mytable.mycolumn.myfield FROM …
  MAPS (hashes): SELECT mytable.mycolumn[mykey] FROM …
  ARRAYS: SELECT mytable.mycolumn[5] FROM …
  JSON: SELECT get_json_object(mycolumn, objpath)

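These nested-type accessors compose with Hive's JDBC support mentioned in the notes above. A minimal sketch, assuming a Hive server on localhost:10000 and a hypothetical visits table with struct, map, and array columns (driver class and URL per the early Hive JDBC conventions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveNestedQuery {
      public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();

        // Struct field, map lookup, and array index in one projection.
        ResultSet rs = stmt.executeQuery(
            "SELECT v.page.url, v.params['q'], v.clicks[0] FROM visits v");
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getString(2));
        }
        con.close();
      }
    }
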
Data Economics (Return On Byte)
  Return on Byte = value to be extracted from that byte / cost of storing that byte.
  If ROB is < 1, the data will be buried in the tape wasteland; thus we need cheaper active storage.
  (diagram contrasting high-ROB and low-ROB data)

Case Studies: Hadoop World '09
  VISA: Large Scale Transaction Analysis
  JP Morgan Chase: Data Processing for Financial Services
  China Mobile: Data Mining Platform for the Telecom Industry
  Rackspace: Cross Data Center Log Processing
  Booz Allen Hamilton: Protein Alignment using Hadoop
  eHarmony: Matchmaking in the Hadoop Cloud
  General Sentiment: Understanding Natural Language
  Yahoo!: Social Graph Analysis
  Visible Technologies: Real-Time Business Intelligence
  Facebook: Rethinking the Data Warehouse with Hadoop and Hive
  Slides and videos at http://www.cloudera.com/hadoop-world-nyc

Cloudera Desktop for Hadoop

Conclusion
  Hadoop is a scalable distributed data processing system which enables:
    Consolidation (Structured or Not)
    Data Agility (Evolving Schemas)
    Query Flexibility (Any Language)
    Economical Storage (ROB > 1)

Contact Information
  Amr Awadallah
  CTO, Cloudera Inc.
  aaa@cloudera.com
  http://twitter.com/awadallah
  Online training videos and info:
    http://cloudera.com/hadoop-training
    http://cloudera.com/blog
    http://twitter.com/cloudera

MapReduce: The Programming Model
  SELECT word, COUNT(1) FROM docs GROUP BY word;
  cat *.txt | mapper.pl | sort | reducer.pl > out.txt
  (diagram: N input splits of (docid, text) feed M map tasks, each emitting (word, count) pairs; the shuffle sorts and routes them so that each of R reduce tasks receives all counts for its words and writes one output file of (sorted word, sum of counts) records. For example, "To Be Or Not To Be?" and other documents yield partial counts Be,5 + Be,12 + Be,7 + Be,6 across the maps, which one reducer sums to Be,30.)

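The same word count, hand-written against the native Java MapReduce API (the "assembly language" version of the SELECT above); a standard sketch using the org.apache.hadoop.mapreduce classes, with input and output paths taken from the command line:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      public static class TokenMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
          // Emit (word, 1) for every token in the input line.
          for (String w : line.toString().split("\\s+")) {
            if (!w.isEmpty()) { word.set(w); ctx.write(word, ONE); }
          }
        }
      }

      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        protected void reduce(Text word, Iterable<IntWritable> counts, Context ctx)
            throws IOException, InterruptedException {
          // The shuffle has grouped all counts for this word together.
          int sum = 0;
          for (IntWritable c : counts) sum += c.get();
          ctx.write(word, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);  // pre-aggregate on the map side
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
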
Hadoop High-Level Architecture
  Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
  Name Node: maintains the mapping of file blocks to Data Node slaves
  Job Tracker: schedules jobs across Task Tracker slaves
  Data Node: stores and serves blocks of data
  Task Tracker: runs tasks (work units) within a job
  Data Nodes and Task Trackers share the same physical nodes

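In client code, the two master contact points reduce to two configuration properties. A minimal sketch; the host names are hypothetical, and in practice these values live in core-site.xml and mapred-site.xml rather than in code:

    import org.apache.hadoop.conf.Configuration;

    public class ClusterClientConfig {
      // A client only needs to know where the two masters are.
      public static Configuration clusterConf() {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode:8020");  // Name Node (HDFS)
        conf.set("mapred.job.tracker", "jobtracker:8021");    // Job Tracker (MapReduce)
        return conf;
      }
    }
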
Economics of Hadoop Storage
  Typical hardware:
    Two quad-core Nehalems
    24GB RAM
    12 x 1TB SATA disks (JBOD mode, no need for RAID)
    1 Gigabit Ethernet card
  Cost: ~$5K/node
  Effective HDFS space:
    1/4 reserved for temp shuffle space, which leaves 9TB/node
    3-way replication leads to 3TB effective HDFS space/node
    But assuming 7x compression, that becomes ~20TB/node
  Effective cost per user TB: ~$250/TB
  Other solutions cost in the range of $5K to $100K per user TB

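The slide's arithmetic, written out as a worked example; all figures are the slide's own 2009-era assumptions:

    public class HadoopStorageCost {
      public static void main(String[] args) {
        double rawTb     = 12 * 1.0;       // 12 x 1TB SATA disks per node
        double usableTb  = rawTb * 0.75;   // 1/4 reserved for shuffle => 9 TB
        double hdfsTb    = usableTb / 3;   // 3-way replication => 3 TB
        double userTb    = hdfsTb * 7;     // ~7x compression => ~21 TB ("~20TB/node")
        double costPerTb = 5000 / userTb;  // $5K/node => ~$240/TB (slide rounds to $250)
        System.out.printf("user TB/node = %.0f, cost per user TB = $%.0f%n",
                          userTb, costPerTb);
      }
    }
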
Data Engineering vs Business Intelligence
  Business Intelligence: the practice of extracting business numbers to monitor and evaluate the health of the business. Humans make decisions based on these numbers to improve revenues or reduce costs.
  Data Engineering: the science of writing algorithms that convert data into money. Alternatively, how to automatically transform data into new features that increase revenues or reduce costs.