Hadoop: Distributed Data Processing
Amr Awadallah
Founder/CTO, Cloudera, Inc.
ACM Data Mining SIG
Thursday, January 25th, 2010
Outline

▪ Scaling for Large Data Processing
▪ What is Hadoop?
▪ HDFS and MapReduce
▪ Hadoop Ecosystem
▪ Hadoop vs RDBMSes
▪ Conclusion
Current Storage Systems Can’t Compute

The typical pipeline today: Instrumentation → Collection → Storage Farm for Unstructured Data (20TB/day, mostly append) → ETL Grid → RDBMS (200GB/day) → Interactive Apps. Filer heads are a bottleneck, and ad hoc queries & data mining go unserved (non-consumption).
The Solution: A Store-Compute Grid

The same pipeline, but the storage farm becomes a combined Storage + Computation grid (mostly append, fed by Instrumentation → Collection). The grid feeds the RDBMS via ETL and aggregations for interactive apps, and also serves “batch” apps such as ad hoc queries & data mining directly.
What is Hadoop?

▪ A scalable, fault-tolerant grid operating system for data storage and processing
▪ Its scalability comes from the marriage of:
  ▪ HDFS: Self-Healing, High-Bandwidth Clustered Storage
  ▪ MapReduce: Fault-Tolerant Distributed Processing
▪ Operates on unstructured and structured data
▪ A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
▪ Open source under the friendly Apache License
▪ http://wiki.apache.org/hadoop/
Hadoop History

▪ 2002-2004: Doug Cutting and Mike Cafarella started working on Nutch
▪ 2003-2004: Google publishes GFS and MapReduce papers
▪ 2004: Cutting adds DFS & MapReduce support to Nutch
▪ 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
▪ 2007: NY Times converts 4TB of archives over 100 EC2s
▪ 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
▪ April 2008: Yahoo! does fastest sort of a TB, 3.5 minutes over 910 nodes
▪ May 2009:
  ▪ Yahoo! does fastest sort of a TB, 62 seconds over 1460 nodes
  ▪ Yahoo! sorts a PB in 16.25 hours over 3658 nodes
▪ June 2009, Oct 2009: Hadoop Summit (750), Hadoop World (500)
▪ September 2009: Doug Cutting joins Cloudera
Hadoop Design Axioms

1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System

▪ Block Size = 64MB
▪ Replication Factor = 3
▪ Cost/GB is a few ¢/month vs $/month
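As a back-of-the-envelope illustration (a sketch, not Hadoop's actual API; the block size and replication factor are the defaults stated above), the layout of a file under HDFS can be computed as:

```python
# Sketch: block count and raw cluster footprint of a file under HDFS defaults.
BLOCK_SIZE = 64 * 1024**2   # 64MB default block size
REPLICATION = 3             # default replication factor

def hdfs_footprint(file_bytes: int) -> tuple:
    """Return (number of blocks, raw bytes stored across the cluster)."""
    blocks = -(-file_bytes // BLOCK_SIZE)       # ceiling division
    return blocks, file_bytes * REPLICATION     # every block is stored 3 times

blocks, raw = hdfs_footprint(1 * 1024**4)       # a 1TB file
print(blocks, raw)  # 16384 blocks, 3TB of raw storage
```

Large blocks keep the Name Node's block map small and favor long sequential reads; triple replication is what lets the system "manage and heal itself" when disks or nodes fail.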
MapReduce: Distributed Processing
MapReduce Example for Word Count

SELECT word, COUNT(1) FROM docs GROUP BY word;

cat *.txt | mapper.pl | sort | reducer.pl > out.txt

Each input split of (docid, text) records feeds a map task (Map 1 … Map M), which emits (word, count) pairs. The shuffle phase sorts the pairs and routes them by word, so partial counts such as (Be, 5), (Be, 12), (Be, 7), and (Be, 6) all reach the same reduce task (Reduce 1 … Reduce R), which sums them to (Be, 30) and writes one output file per reducer of (sorted word, sum of counts).
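The data flow on this slide can be mimicked in a few lines of plain Python (a sketch of the map → shuffle → reduce phases, not the Hadoop API; the sample documents and the naive `strip("?")` tokenization are illustrative only):

```python
from collections import OrderedDict
from itertools import groupby

docs = ["To Be Or Not To Be?", "Be"]  # stand-ins for the input splits

# Map: each (docid, text) record emits (word, count) pairs.
mapped = [(word.strip("?").lower(), 1) for text in docs for word in text.split()]

# Shuffle: sort by key so all pairs for a given word reach the same reducer.
mapped.sort(key=lambda kv: kv[0])

# Reduce: sum the counts for each word.
counts = {word: sum(c for _, c in group)
          for word, group in groupby(mapped, key=lambda kv: kv[0])}
print(counts["be"])  # 3
```

The `cat *.txt | mapper.pl | sort | reducer.pl` pipeline on the slide is exactly this structure, with `sort` playing the role of the shuffle; Hadoop's contribution is running the same three phases in parallel across many machines with fault tolerance.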
Hadoop High-Level Architecture

▪ Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
▪ Name Node: maintains the mapping of file blocks to Data Node slaves
▪ Job Tracker: schedules jobs across Task Tracker slaves
▪ Data Node: stores and serves blocks of data
▪ Task Tracker: runs tasks (work units) within a job
▪ Data Nodes and Task Trackers share physical nodes
Apache Hadoop Ecosystem

The stack, bottom to top: HDFS (Hadoop Distributed File System) at the base; MapReduce (job scheduling/execution system, with Streaming/Pipes APIs) above it; HBase (key-value store) alongside; Pig (data flow) and Hive (SQL) on top; Sqoop bridging to ETL tools, BI reporting, and RDBMSes; with Zookeeper (coordination) and Avro (serialization) spanning the stack.
Use The Right Tool For The Right Job

Hadoop (when to use?):
▪ Affordable Storage/Compute
▪ Structured or Not (Agility)
▪ Resilient Auto Scalability

Relational Databases (when to use?):
▪ Interactive Reporting (<1sec)
▪ Multistep Transactions
▪ Interoperability
Economics of Hadoop

▪ Typical Hardware:
  ▪ Two Quad Core Nehalems
  ▪ 24GB RAM
  ▪ 12 × 1TB SATA disks (JBOD mode, no need for RAID)
  ▪ 1 Gigabit Ethernet card
▪ Cost/node: $5K/node
▪ Effective HDFS Space:
  ▪ ¼ reserved for temp shuffle space, which leaves 9TB/node
  ▪ 3-way replication leads to 3TB effective HDFS space/node
  ▪ But assuming 7x compression that becomes ~20TB/node
▪ Effective cost per user TB: $250/TB
▪ Other solutions cost in the range of $5K to $100K per user TB
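The $250/TB figure follows from the bullets above; a quick sanity check of the arithmetic (all inputs are the slide's assumptions, and the slide rounds ~21TB to ~20TB and ~$238 to $250):

```python
# Sanity-check the slide's cost arithmetic (inputs are the slide's assumptions).
cost_per_node = 5000       # $5K/node
raw_tb = 12                # 12 x 1TB SATA disks
shuffle_reserve = 0.25     # 1/4 reserved for temp shuffle space
replication = 3            # 3-way HDFS replication
compression = 7            # assumed 7x compression

hdfs_tb = raw_tb * (1 - shuffle_reserve)   # 9TB/node usable by HDFS
effective_tb = hdfs_tb / replication       # 3TB/node after replication
user_tb = effective_tb * compression       # ~21TB/node of user data
print(round(cost_per_node / user_tb))      # ~$238/TB, rounded up to $250 on the slide
```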
Sample Talks from Hadoop World ‘09

▪ VISA: Large Scale Transaction Analysis
▪ JP Morgan Chase: Data Processing for Financial Services
▪ China Mobile: Data Mining Platform for Telecom Industry
▪ Rackspace: Cross Data Center Log Processing
▪ Booz Allen Hamilton: Protein Alignment using Hadoop
▪ eHarmony: Matchmaking in the Hadoop Cloud
▪ General Sentiment: Understanding Natural Language
▪ Yahoo!: Social Graph Analysis
▪ Visible Technologies: Real-Time Business Intelligence
▪ Facebook: Rethinking the Data Warehouse with Hadoop and Hive

Slides and videos at http://www.cloudera.com/hadoop-world-nyc
Cloudera Desktop
Conclusion

Hadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
Contact Information

Amr Awadallah
CTO, Cloudera Inc.
aaa@cloudera.com
http://twitter.com/awadallah

Online training videos and info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera
(c) 2008 Cloudera, Inc. or its licensors. "Cloudera" is a registered trademark of Cloudera, Inc. All rights reserved. 1.0