How Hadoop Revolutionized Data Warehousing at Yahoo and Facebook


Presentation given at the TDWI Executive Summit 2009 in San Diego, California.


  1. Amr Awadallah, CTO, Cloudera, Inc.
     August 5, 2009
     How Hadoop Revolutionized Data Warehousing at Yahoo and Facebook
  2. Outline
     - Problems We Wanted to Solve
     - What is Hadoop?
     - HDFS and MapReduce
     - Access Languages for Hadoop
     - Hadoop vs RDBMSes
     - Conclusion
  3. Our Older Systems Limited Raw Data Access
     Diagram: instrumentation and collection feed a storage farm for unstructured data (20 TB/day); an ETL grid loads a subset into an RDBMS (200 GB/day, mostly append) that serves BI/reports and ad hoc queries & data mining. The raw data in the storage farm goes largely unconsumed, and the filer heads are a bottleneck.
  4. We Needed To Be More Agile (part 1)
     - Data Errors and Reprocessing
       We encountered data errors that required reprocessing, sometimes long after the fact. "Tape data" was cost-prohibitive to reprocess, so we needed to retain raw data online for long periods.
     - Conformation Loss
       Converting data from its raw format to conformed dimensions loses some information. We needed access to the original data to recover that information whenever needed (e.g. a new browser user agent).
     - Shrinking ETL Window
       The storage filers for raw data became a significant bottleneck as large amounts of data had to be copied to the ETL grid for processing (e.g. 30 hours to process one day's worth of data).
     - Ad Hoc Queries on Raw Data
       We wanted to run ad hoc queries against the original raw event data, but the storage filers only store; they can't compute.
  5. We Needed To Be More Agile (part 2)
     - Data Model Agility: Schema-on-Read vs Schema-on-Write
       We wanted to access data even before it had a schema; frequently a new product or feature would launch and we couldn't build its dashboards because its schema wasn't defined yet.
       Schema-on-read is slower in machine time (because of parsing overhead at read time), but it lets us evolve in an agile way; we materialize to relational data marts once the data model stabilizes (see the sketch after this slide).
     - Consolidated Repository and Ubiquitous Access
       We wanted to eliminate borders and have a single repository where anybody can store, join, and process any of our data.
     - Beyond Reporting (Data as Product)
       Last, but not least, we wanted to process the data in ways that feed directly into the product/business (e.g. email spam filtering, ad targeting, collaborative filtering, multimedia processing).
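To make the schema-on-read idea concrete, here is a minimal, hypothetical Python sketch (not from the original deck): raw events stay stored as plain text, and the "schema" lives in the reading code, so a new field can be picked up without reloading or reprocessing the stored data. File name and field layout are illustrative assumptions.

    def read_events(path):
        """Parse tab-separated raw log lines at read time (schema-on-read)."""
        with open(path) as f:
            for line in f:
                fields = line.rstrip("\n").split("\t")
                # Adding a new field (e.g. user_agent) only means changing this
                # parser; the stored raw data never has to be reloaded.
                yield {
                    "timestamp": fields[0],
                    "user_id": fields[1],
                    "url": fields[2],
                    "user_agent": fields[3] if len(fields) > 3 else None,
                }

    # Example ad hoc query over the raw file (hypothetical file name):
    # home_views = sum(1 for e in read_events("raw_events.log") if e["url"] == "/home")

With schema-on-write, by contrast, only the columns defined at load time would exist, and recovering a newly interesting field would require going back to the raw source.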
  6. The Solution: A Store-Compute Grid
     Diagram: instrumentation and collection feed a store-compute grid (storage + computation) that serves ad hoc queries & data mining and "batch" apps directly; ETL and aggregations flow from the grid into the RDBMS (mostly append), which backs interactive apps.
  7. What is Hadoop?
     - A scalable, fault-tolerant grid operating system for data storage and processing
     - Its scalability comes from the marriage of:
       - HDFS: self-healing, high-bandwidth clustered storage
       - MapReduce: fault-tolerant distributed processing
     - Operates on unstructured and structured data
     - A large and active ecosystem (many developers and additions such as HBase, Hive, Pig, ...)
     - Open source under the friendly Apache License
     - http://wiki.apache.org/hadoop/
  8. Hadoop History
     - 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch (a web-scale, crawler-based search system)
     - 2003-2004: Google publishes the GFS and MapReduce papers
     - 2004: Cutting adds DFS and MapReduce support to Nutch
     - 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch
     - 2007: The New York Times converts 4 TB of archives using 100 EC2 instances
     - 2008: Web-scale deployments at Yahoo!, Facebook, Last.fm
     - April 2008: Fastest sort of a TB, 3.5 minutes over 910 nodes
     - May 2009:
       - Fastest sort of a TB, 62 seconds over 1,460 nodes
       - Sorted a PB in 16.25 hours over 3,658 nodes
       - Hundreds of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
     - June 2009: Hadoop Summit 2009, 750 attendees
  9. Hadoop Design Axioms
     - The system shall manage and heal itself
     - Performance shall scale linearly
     - Compute should move to data
     - Simple core, modular and extensible
  10. HDFS: Hadoop Distributed File System
     - Default block size: 64 MB
     - Default replication factor: 3
     - Cost/GB is a few ¢/month vs $/month (a sketch of the block/replica bookkeeping follows)
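To make those two numbers concrete, here is a toy Python sketch (my own illustration, not how the real NameNode works; real HDFS placement is rack-aware): a file is split into 64 MB blocks and each block is assigned to three distinct data nodes.

    import itertools

    BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB, the HDFS default block size cited above
    REPLICATION = 3                 # the default replication factor cited above

    def place_blocks(file_size_bytes, datanodes):
        """Toy bookkeeping: split a file into blocks and pick 3 nodes per block."""
        num_blocks = -(-file_size_bytes // BLOCK_SIZE)   # ceiling division
        ring = itertools.cycle(datanodes)                # round-robin, not rack-aware
        return [(block, [next(ring) for _ in range(REPLICATION)])
                for block in range(num_blocks)]

    # A 1 GB file on a 5-node cluster: 16 blocks, each stored on 3 of the nodes.
    print(place_blocks(1 << 30, ["dn1", "dn2", "dn3", "dn4", "dn5"]))

Large blocks keep the block-to-node map small and favor long sequential reads; triple replication is what lets the system "manage and heal itself" when a node fails.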
  11. MapReduce: Distributed Processing
  12. MapReduce Example for Word Count
     Unix-pipe analogy: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
     Map(in_key, in_value) -> list of (out_key, intermediate_value)
     Reduce(out_key, list of intermediate_values) -> out_value(s)
     Diagram: input splits 1..N feed map tasks 1..M, each turning (docid, text) records into (word, count) pairs ("To Be Or Not To Be?" contributes Be, 5 from one map, Be, 12, Be, 7, Be, 6 from others); the shuffle sorts and groups pairs by word, and reduce tasks 1..R write output files of (sorted word, sum of counts), e.g. Be: 5 + 12 + 7 + 6 = 30. A minimal mapper/reducer sketch follows.
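The pipeline above runs unchanged on a single machine. Below is a minimal sketch of the mapper and reducer in Python (my own illustration; the deck's mapper.pl/reducer.pl are not shown), written in the Hadoop Streaming style of reading stdin and writing tab-separated key/value lines to stdout.

    #!/usr/bin/env python
    # mapper.py -- emit (word, 1) for every word on standard input
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word.lower(), 1))

    #!/usr/bin/env python
    # reducer.py -- input arrives sorted by word, so counts can be summed per run
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").rsplit("\t", 1)
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

Run locally with the same pipeline shown on the slide, e.g. cat *.txt | ./mapper.py | sort | ./reducer.py > out.txt. On a cluster, the same two scripts are handed to the Hadoop Streaming jar with its standard -input, -output, -mapper, and -reducer options (exact jar path and flags vary by Hadoop version), and the framework supplies the splitting, shuffling, and sorting.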
  13. Hadoop Is More Than Just Analytics/BI
     - Building the web search index
     - Processing news/content feeds
     - Content/ad targeting optimization
     - Fraud detection and fighting email spam
     - Facebook Lexicon: trends of words on walls
     - Collaborative filtering ("you might like")
     - Batch video/image transcoding
     - Gene sequence alignment
  14. Apache Hadoop Ecosystem
     Diagram of the stack: HDFS (Hadoop Distributed File System) and HBase (key-value store) at the storage layer; MapReduce (job scheduling/execution system) on top; Pig (data flow) and Hive (SQL) above that, connecting to BI reporting and ETL tools; Avro (serialization) and ZooKeeper (coordination) alongside; Sqoop bridges to RDBMSes.
  15. Hadoop Development Languages
     - Java MapReduce
       Gives the most flexibility and performance, but with a potentially longer development cycle
     - Streaming MapReduce
       Lets you develop in any language of your choice, at slightly lower performance (the word-count sketch under slide 12 is one example)
     - Pig
       A relatively new data-flow language contributed by Yahoo, suitable for ETL-like workloads (procedural multi-stage jobs)
     - Hive
       A SQL warehouse on top of MapReduce (contributed by Facebook). It has two main components: a metastore that keeps the schema for files, and an interpreter that converts SQL queries into MapReduce
  16. Hive Features
     - A subset of SQL covering the most common statements
     - Agile data types: Array, Map, Struct, and JSON objects
     - User-defined functions and aggregates
     - Regular expression support
     - MapReduce support
     - JDBC support
     - Partitions and buckets (for performance optimization)
     - In the works: indices, columnar storage, views, MicroStrategy compatibility, Explode/Collect
     - More details: http://wiki.apache.org/hadoop/Hive
  17. Hadoop vs. Relational Databases
     Relational Databases:              Hadoop:
     - An ACID database system          - A data grid operating system
     - Stores tables (schema)           - Stores files (unstructured)
     - Stores 100s of terabytes         - Stores 10s of petabytes
     - Processes 10s of TB/query        - Processes 10s of PB/job
     - Transactional consistency        - Weak consistency
     - Lookup rows using an index       - Scan all blocks in all files
     - Mostly queries                   - Queries & data processing
     - Interactive response             - Batch response (>1 sec)
  18. Use The Right Tool For The Right Job
     Diagram contrasting relational databases and Hadoop side by side.
  19. Hadoop Criticisms (part 1)
     - "Hadoop MapReduce requires rocket scientists"
       Hadoop offers the best of both worlds: the simplicity of SQL (via Hive) and the power of Java (or any other language, for that matter)
     - "Hadoop is not very efficient hardware-wise"
       Hadoop optimizes for scalability, stability, and flexibility rather than squeezing out every last bit of hardware performance. It is more cost-efficient to add "pizza box" servers for performance than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software
     - "Hadoop can't do quick random lookups"
       HBase enables low-latency key-value lookups (though no fast joins)
     - "Hadoop doesn't support updates/inserts/deletes"
       Not for multi-row transactions, but HBase provides transactions with row-level consistency semantics
  20. Hadoop Criticisms (part 2)
     - "Hadoop isn't highly available"
       Though Hadoop rarely loses data, it can suffer downtime if the master NameNode goes down. This issue is being addressed, and there are hardware/OS/VM solutions for it
     - "Hadoop can't be backed up/recovered quickly"
       HDFS, like other file systems, can copy files very quickly, and it has utilities for copying data between HDFS clusters
     - "Hadoop doesn't have security"
       Hadoop has Unix-style user/group permissions, and the community is working on improving its security model
     - "Hadoop can't talk to other systems"
       Hadoop can talk to BI tools via JDBC, to RDBMSes via Sqoop, and to other systems via FUSE, WebDAV, and FTP
  21. Conclusion
     Hadoop is a data grid operating system that augments current BI systems and improves their agility by providing an economically scalable solution for storing and processing large amounts of unstructured data over long periods of time.
  22. Contact Information
     If you have further questions or comments:
     Amr Awadallah
     CTO, Cloudera Inc.
     [email_address]
     650-362-0488
     twitter.com/awadallah
     twitter.com/cloudera
  23. APPENDIX
  24. Hadoop High-Level Architecture
     - Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
     - Name Node: maintains the mapping of file blocks to Data Node slaves
     - Job Tracker: schedules jobs across Task Tracker slaves
     - Data Node: stores and serves blocks of data
     - Task Tracker: runs tasks (work units) within a job
     - Data Nodes and Task Trackers share the same physical nodes
