20090422 Www
Transcript

  • 1. Wednesday, April 22, 2009
  • 2. Socializing Big Data Lessons from the Hadoop Community Jeff Hammerbacher Chief Scientist and Vice President of Products, Cloudera April 22, 2009 Wednesday, April 22, 2009
  • 3. My Background Thanks for Asking hammer@cloudera.com ▪ Studied Mathematics at Harvard ▪ Worked as a Quant on Wall Street ▪ Conceived, built, and led the Data team at Facebook ▪ Nearly 30 amazing engineers and data scientists ▪ Released Hive and Cassandra as open source projects ▪ Published research at conferences: SIGMOD, CHI, ICWSM ▪ Founder of Cloudera ▪ Building tools to make learning go faster, starting with Hadoop ▪ Wednesday, April 22, 2009
  • 4. Presentation Outline What is Hadoop? ▪ Hadoop at Facebook ▪ Brief history of the Facebook Data team ▪ Summary of how we used Hadoop ▪ Reasons for choosing Hadoop ▪ How is software built and adopted? ▪ “Laboratory Life” ▪ Social Learning Theory ▪ Organizations and tools in open source development ▪ Moving from the “Age of Data” to the “Age of Learning” ▪ Wednesday, April 22, 2009
  • 5. The Hadoop community is producing innovative, world class software for web scale data management and analysis. By studying how software is built and adopted, we can enhance the rate at which data processing technologies evolve. The Hadoop community is open to everyone and will play a central role in this evolution. You should join us! Wednesday, April 22, 2009
  • 6. What is Hadoop? Not Just a Stuffed Elephant Open source project, written mostly in Java ▪ Most active Apache Software Foundation project ▪ Inspired by Google infrastructure ▪ Over one hundred production deployments ▪ Project structure ▪ Hadoop Distributed File System (HDFS) ▪ Hadoop MapReduce ▪ Hadoop Core: client libraries and management tools ▪ Other subprojects: Avro, HBase, Hive, Pig, Zookeeper ▪ Wednesday, April 22, 2009
  • 7. Anatomy of a Hadoop Cluster Commodity servers ▪ 2 x 4 core CPU, 8 GB RAM, 4 x 1 TB SATA, 2 x 1 GbE NIC ▪ Typically arranged in a two-level architecture ▪ Nodes are commodity Linux PCs ▪ 40 nodes per rack ▪ Inexpensive to acquire and maintain ▪ Wednesday, April 22, 2009
  • 8. HDFS Pool commodity servers into a single hierarchical namespace ▪ Break files into 128 MB blocks and replicate blocks ▪ Designed for large files written once but read many times ▪ Two main daemons: NameNode and DataNode ▪ NameNode manages filesystem metadata ▪ DataNode manages data using local filesystem ▪ HDFS manages checksumming, replication, and compression ▪ Throughput scales nearly linearly with cluster size ▪ Access from Java, C, command line, FUSE, or Thrift ▪ Wednesday, April 22, 2009
  • 9. HDFS HDFS manages storage on the cluster by breaking incoming files into pieces, called “blocks,” and storing each of the blocks redundantly across the pool of servers. In the common case, HDFS stores three complete copies of each file by copying each piece to three different servers. [Figure 1: HDFS distributes file blocks among servers] Wednesday, April 22, 2009
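
Slides 8 and 9 describe HDFS blocks, replication, and the Java client interface. A minimal sketch of that client API, assuming a Hadoop 0.20-era deployment; the namenode address, file path, and file contents are placeholders, not anything from the talk:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write a file into HDFS with an explicit replication factor and
// block size, then read it back through the same FileSystem handle.
public class HdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // fs.default.name was the configuration key for the default
    // filesystem in Hadoop of this era; the host is hypothetical.
    conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/demo/hello.txt");
    // create(path, overwrite, bufferSize, replication, blockSize):
    // three replicas and 128 MB blocks, matching the slide's defaults.
    FSDataOutputStream out =
        fs.create(file, true, 4096, (short) 3, 128 * 1024 * 1024L);
    out.writeBytes("hello, hdfs\n");
    out.close();

    BufferedReader in =
        new BufferedReader(new InputStreamReader(fs.open(file)));
    System.out.println(in.readLine());
    in.close();
    fs.close();
  }
}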
  • 10. Hadoop MapReduce Fault-tolerant execution layer and API for parallel data processing ▪ Can target multiple storage systems ▪ Key/value data model ▪ Two main daemons: JobTracker and TaskTracker ▪ Three main phases: Map, Shuffle, and Reduce ▪ Growing sophistication for job and task scheduling ▪ Many client interfaces ▪ Java, C++, Streaming ▪ Pig, SQL (Hive QL) ▪ Wednesday, April 22, 2009
  • 11. MapReduce MapReduce pushes work out to the data Hadoop takes advantage of HDFS data distribution to push work out to the nodes in a cluster, allowing analyses to run in parallel and eliminating the bottlenecks imposed by monolithic storage systems. [Figure 2: Hadoop pushes work out to the data] Wednesday, April 22, 2009
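
Slides 10 and 11 describe the key/value data model and the Map, Shuffle, and Reduce phases. The canonical word-count job, written against the classic org.apache.hadoop.mapred API of that era, shows how the phases fit together; input and output paths are supplied on the command line:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    // Map phase: emit (word, 1) for every token in the input line.
    public void map(LongWritable key, Text value,
        OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      StringTokenizer tok = new StringTokenizer(value.toString());
      while (tok.hasMoreTokens()) {
        word.set(tok.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    // Reduce phase: the shuffle has grouped all counts for a word.
    public void reduce(Text key, Iterator<IntWritable> values,
        OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class); // combiner cuts shuffle volume
    conf.setReducerClass(Reduce.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}

Reusing the reducer as a combiner is the standard optimization here: partial sums are computed on the map side, shrinking the data moved during the shuffle phase the slide names.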
  • 12. Hadoop Subprojects Avro ▪ Cross-language framework for RPC ▪ HBase ▪ Table storage above HDFS, modeled after Google’s BigTable ▪ Hive ▪ SQL interface to structured data stored in HDFS ▪ Pig ▪ Language for data flow programming ▪ Zookeeper ▪ Coordination service for distributed systems ▪ Wednesday, April 22, 2009
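
Slide 12 notes that Hive provides a SQL interface to structured data stored in HDFS. One way to reach it programmatically at the time was Hive's early JDBC driver; a sketch, assuming a Hive server on its default port 10000, with a hypothetical hostname and page_views table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuery {
  public static void main(String[] args) throws Exception {
    // Driver class and URL scheme for the original Hive server.
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection con = DriverManager.getConnection(
        "jdbc:hive://hive.example.com:10000/default", "", "");
    Statement stmt = con.createStatement();
    // Hive QL compiles this SQL into one or more MapReduce jobs.
    ResultSet rs = stmt.executeQuery(
        "SELECT page, COUNT(1) AS views FROM page_views GROUP BY page");
    while (rs.next()) {
      System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
    }
    rs.close();
    stmt.close();
    con.close();
  }
}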
  • 13. Hadoop at Yahoo! Jan 2006: Hired Doug Cutting ▪ Apr 2006: Sorted 1.9 TB on 188 nodes in 47 hours ▪ March 2008: Hadoop Summit attracted several hundred attendees ▪ Apr 2008: Sorted 1 TB on 910 nodes in 209 seconds ▪ Aug 2008: Deployed 4,000 node Hadoop cluster ▪ Data Points ▪ Over 20,000 nodes running Hadoop ▪ Hundreds of thousands of jobs per day ▪ Typical HDFS cluster: 1,400 nodes, 2 PB capacity ▪ Largest shuffle is 450 TB ▪ Wednesday, April 22, 2009
  • 14. Facebook Before Hadoop Early 2006: The First Research Scientist Source data living on horizontally partitioned MySQL tier ▪ Intensive historical analysis difficult ▪ No way to assess impact of changes to the site ▪ First try: Python scripts pull data into MySQL ▪ Second try: Python scripts pull data into Oracle ▪ ...and then we turned on impression logging ▪ Wednesday, April 22, 2009
  • 15. Facebook Data Infrastructure 2007 Scribe Tier MySQL Tier Data Collection Server Oracle Database Server Wednesday, April 22, 2009
  • 16. Facebook Data Infrastructure 2008 Scribe Tier MySQL Tier Hadoop Tier Oracle RAC Servers Wednesday, April 22, 2009
  • 17. Facebook Workloads Data collection ▪ server logs ▪ application databases ▪ web crawls ▪ Thousands of multi-stage processing pipelines ▪ Summaries consumed by external users ▪ Summaries for internal reporting ▪ Ad optimization pipeline ▪ Experimentation platform pipeline ▪ Ad hoc analyses ▪ Wednesday, April 22, 2009
  • 18. Facebook Hadoop Statistics Over 700 servers running Hadoop in one data center ▪ 2.5 PB in largest Hadoop cluster ▪ 15 TB loaded into Hadoop cluster each day ▪ 4,000 MapReduce jobs with 800,000 tasks run per day ▪ 55 TB of data processed per day ▪ 15 TB of additional data produced from cluster activity per day ▪ Hadoop cluster not retiring data! ▪ Wednesday, April 22, 2009
  • 19. Why Did Facebook Choose Hadoop? 1. Demonstrated effectiveness for primary workload 2. Proven ability to scale past any commercial vendor 3. Easy provisioning and capacity planning with commodity nodes 4. Data access for engineers and business analysts 5. Single system to manage XML, JSON, text, and relational data 6. No schemas enabled data collection without involving Data team 7. Cost of software: zero dollars 8. Deep commitment to continued development from Yahoo! 9. Active user and developer community 10. Apache-licensed open source code Wednesday, April 22, 2009
  • 20. Hadoop Community Support People Build Technology Most active Apache mailing lists ▪ Detailed official documentation per release ▪ Three books this year: O’Reilly, Apress, Manning ▪ Free training videos online ▪ Regular user group meetings in many cities ▪ University courses across the world ▪ Growing consultant and systems integrator expertise ▪ Commercial training and support from Cloudera ▪ Wednesday, April 22, 2009
  • 21. How Software is Built Methodological Reflexivity Latour and Woolgar’s “Laboratory Life” ▪ Study scientists doing science ▪ Use “thick descriptions” and focus on “microconcerns” ▪ Some studies of closed and open source development exist ▪ “Mythical Man Month”, “Cathedral and the Bazaar” ▪ Hertel et al. surveyed 141 Linux kernel developers ▪ Focus on the people creating code ▪ Less religion, more empirical analyses ▪ Build tools to facilitate interaction and output ▪ Wednesday, April 22, 2009
  • 22. Building Open Source Software Structural Conditions for Success Moon and Sproull proposed some rules for successful projects ▪ Authority comes from competence ▪ Leaders have clear responsibilities and delegate often ▪ The code has a modular structure ▪ Establish a parallel release policy: stable and experimental ▪ Give credit to non-source contributions, e.g. documentation ▪ Communicate clear rules and norms for community online ▪ Use simple and reliable communication tools ▪ Wednesday, April 22, 2009
  • 23. Building Software Faster Consolidate Best Practices Javascript frameworks starting to converge ▪ Many adopting jQuery’s selector syntax ▪ Significant benchmarks emerging ▪ Web frameworks push idioms into project structure ▪ What would be the Rails/Django equivalent for data storage? ▪ Reusable components also nice, e.g. log structured merge trees ▪ Compare work on BOOM, RodentStore ▪ Debian distributes release note writing responsibility via “beats” ▪ Wednesday, April 22, 2009
  • 24. Complications of Open Source Intellectual property ▪ Trademark, Copyright, Patent, and Trade Secret ▪ Litigation history ▪ Business models and foundations to ensure long-term support ▪ Direct support: Red Hat, MySQL ▪ Indirect support: LLVM, GSoC ▪ Foundations: Apache, Python, Django ▪ Diversity of licenses ▪ Licenses form communities ▪ Licenses change over time (cf. Rambus BSD incident) ▪ Wednesday, April 22, 2009
  • 25. How Software is Adopted Choosing the Right Tool for the Job Must be aware that a software project exists ▪ Tools like GitHub, Ohloh, Launchpad ▪ Sites like Reddit and Hacker News ▪ Existing example use cases are critical ▪ At Facebook, we studied motivations for content production ▪ Especially effective: Bandura’s “Social Learning Theory” ▪ Hadoop being run in production at Yahoo! and Facebook ▪ Active user communities and great documentation ▪ Wednesday, April 22, 2009
  • 26. Open Learning Open Data, Hypotheses and Workflows In science, data is generated once and analyzed many times ▪ IceCube ▪ LHC ▪ Lots of places where data and visualizations get shared ▪ data.gov, Many Eyes, Swivel, theinfo.org, InfoChimps, iCharts ▪ Record which hypotheses and workflows have been applied ▪ Increase diversity of questions asked and applications built ▪ Analysis skills unevenly distributed; send skills to the data! ▪ Wednesday, April 22, 2009
  • 27. The Future of Data Processing Hadoop, the Browser, and Collaboration “The Unreasonable Effectiveness of Data”, “MAD Skills” ▪ Single namespace for your organization’s bits ▪ Single engine for distributed data processing ▪ Materialization of structured subsets into optimized stores ▪ Browser as client interface with focus on user experience ▪ The system gets better over time using workload information ▪ Cloning and sharing of common libraries and workflows ▪ Global metadata store driving collection, analysis, and reporting ▪ Version control within and between sites, cf. Orchestra ▪ Wednesday, April 22, 2009
  • 28. (c) 2009 Cloudera, Inc. or its licensors. “Cloudera” is a registered trademark of Cloudera, Inc. All rights reserved. 1.0 Wednesday, April 22, 2009