Apache Hadoop and HBase


Todd Lipcon
Cloudera

© All Rights Reserved

Apache Hadoop and HBase: Presentation Transcript

  • Apache Hadoop and HBase. Todd Lipcon, todd@cloudera.com, @tlipcon, @cloudera. Nov 2, 2010
  • Hello! Software Engineer at Cloudera. Hadoop contributor, HBase committer. Previously: systems programming, operations, large-scale data analysis. I love data and data systems.
  • Outline: Why should you care? (Intro); What is Hadoop? How does it work?; Hadoop MapReduce; The Hadoop Ecosystem; Questions
  • Data is everywhere. Data is important.
  • “I keep saying that the sexy job in the next 10 years will be statisticians, and I'm not kidding.” Hal Varian (Google's chief economist)
  • Are you throwing away data? Data comes in many shapes and sizes: relational tuples, log files, semi-structured textual data (e.g., e-mail), … . Are you throwing it away because it doesn't 'fit'?
  • So, what's Hadoop?
  • Apache Hadoop is an open-source system to reliably store and process A LOT of information across many commodity computers.
  • Two Core Components. Store: HDFS, self-healing, high-bandwidth clustered storage. Process: Map/Reduce, fault-tolerant distributed processing.
  • What makes Hadoop special?
  • Hadoop separates distributed-system fault-tolerance code from application logic. (Slide graphic labels: Unicorns, Systems Programmers, Statisticians.)
  • Hadoop lets you interact with a cluster, not a bunch of machines. Image: Yahoo! Hadoop cluster [OSCON '07]
  • Hadoop scales linearly with data size or analysis complexity. Data-parallel or compute-parallel. For example: Extensive machine learning on <100GB of image data Simple SQL-style queries on >100TB of clickstream data Hadoop works for both applications!
  • A Typical Look... 5-4000 commodity servers (8-core, 24GB RAM, 4-12 TB, gig-E) 2-level network architecture 20-40 nodes per rack
  • Hadoop sounds like magic. How is it possible?
  • Cluster nodes. Master nodes (1 each): NameNode (metadata server and database), JobTracker (scheduler). Slave nodes (1-4000 each): DataNodes (block storage), TaskTrackers (task execution).
  • HDFS Data Storage. (Diagram: a 158MB file, /logs/weblog.txt, is split into blocks blk_29232, blk_19231, and blk_329432 of 64MB, 64MB, and 30MB; the NameNode records which of DataNodes DN 1-4 holds each block.)
  • HDFS Write Path
  • HDFS has split the file into 64MB blocks and stored it on the DataNodes. • Now, we want to process that data.
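The block arithmetic from the diagram can be sketched in a few lines. This is an illustration only, not Hadoop code; the 64MB default block size matches the slides.

```python
# Sketch (not Hadoop code): how HDFS divides a file into fixed-size blocks,
# mirroring the 158MB file -> 64 + 64 + 30 MB example above.
BLOCK_SIZE_MB = 64

def split_into_blocks(file_size_mb, block_size_mb=BLOCK_SIZE_MB):
    """Return the sizes of the blocks a file of the given size occupies."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

print(split_into_blocks(158))  # -> [64, 64, 30]
```

Each of those blocks is then replicated (3x by default) across different DataNodes.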
  • The MapReduce Programming Model
  • You specify map() and reduce() functions. The framework does the rest.
  • map(): K₁,V₁ → list(K₂,V₂). Input key: byte offset 193284. Input value: '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326'. Output key: userimage. Output value: 2326 bytes. The map function runs on the same node as the data was stored!
  • InputFormat. Wait! HDFS is not a key-value store! An InputFormat interprets raw bytes as a key and a value. Key: log offset 193284. Value: '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326'
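A rough Python analogue of this InputFormat-plus-map pair might look as follows. This is a sketch, not the Hadoop API; the regular expression is an assumption about the access-log layout shown on the slide.

```python
import re

# Rough analogue of an InputFormat handing (offset, raw line) to a map()
# that emits (resource category, response bytes) for each log record.
LOG_RE = re.compile(r'"GET /(\w+)/\S* HTTP/[\d.]+" (\d+) (\d+)')

def map_fn(byte_offset, line):
    """Emit (category, response_bytes) for one access-log record."""
    m = LOG_RE.search(line)
    if m:
        category, status, nbytes = m.groups()
        yield category, int(nbytes)

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /userimage/123 HTTP/1.0" 200 2326')
print(list(map_fn(193284, line)))  # -> [('userimage', 2326)]
```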
  • The Shuffle. Each map output is assigned to a “reducer” based on its key; map output is grouped and sorted by key.
  • reduce(): K₂, iter(V₂) → list(K₃,V₃). Input key: userimage. Input values: 2326 bytes (from map task 0001), 1000 bytes (from map task 0008), 3020 bytes (from map task 0120). The reducer function emits key: userimage, value: 6346 bytes. TextOutputFormat writes the line: userimage [tab] 6346
  • Putting it together...
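The whole pipeline on the preceding slides can be put together as a toy in-memory MapReduce. This is an illustration of the programming model, not the Hadoop API; the per-category byte totals reuse the numbers from the reduce() slide.

```python
from collections import defaultdict

# Toy in-memory MapReduce: map phase, shuffle (group by key), reduce phase.
def map_fn(key, value):
    category, nbytes = value  # value stands in for a parsed log record
    yield category, nbytes

def reduce_fn(key, values):
    yield key, sum(values)  # total bytes served per category

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: run map_fn over every input record.
    shuffled = defaultdict(list)
    for offset, record in enumerate(records):
        for k, v in map_fn(offset, record):
            shuffled[k].append(v)   # shuffle: group map outputs by key
    # Reduce phase: one reduce_fn call per key, keys in sorted order.
    out = []
    for k in sorted(shuffled):
        out.extend(reduce_fn(k, shuffled[k]))
    return out

records = [("userimage", 2326), ("userimage", 1000),
           ("userimage", 3020), ("css", 512)]
print(run_mapreduce(records, map_fn, reduce_fn))
# -> [('css', 512), ('userimage', 6346)]
```

The real framework distributes the map and reduce calls across TaskTrackers and spills the shuffle to disk, but the data flow is the same.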
  • Hadoop is not NoSQL (sorry!). The Hive project adds SQL support to Hadoop: HiveQL (a SQL dialect) compiles to a query plan, and the query plan executes as MapReduce jobs.
  • Hive Example: CREATE TABLE movie_rating_data ( userid INT, movieid INT, rating INT, unixtime STRING ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE; LOAD DATA INPATH '/datasets/movielens' INTO TABLE movie_rating_data; CREATE TABLE average_ratings AS SELECT movieid, AVG(rating) FROM movie_rating_data GROUP BY movieid;
  • The Hadoop Ecosystem. (Diagram of related projects; HBase is the column DB.)
  • Hadoop in the Wild (yes, it's used in production). Yahoo! Hadoop Clusters: >82PB, >25k machines (Eric14, HadoopWorld NYC '09). Facebook: 15TB new data per day; 1200 machines, 21PB in one cluster. Twitter: ~1TB per day, ~80 nodes. Lots of 5-40 node clusters at companies without petabytes of data (web, retail, finance, telecom, research).
  • What about real time access? • MapReduce is a batch system • The fastest MR job takes 24 seconds • HDFS just stores bytes, and is append-only • Not about to serve data for your next web site.
  • Apache HBase. HBase is an open source, distributed, sorted map modeled after Google's BigTable.
  • HBase is built on Hadoop • Hadoop provides: • Fault tolerance • Scalability • Batch processing with MapReduce
  • HDFS + HBase = HDFS + random read/write • HBase uses HDFS for storage • “Log structured merge trees” • Similar to “log structured file systems” • Same storage pattern as Cassandra!
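The log-structured merge idea mentioned above can be sketched in miniature: writes land in an in-memory table; when it fills, it is flushed as an immutable sorted file; reads consult the memory table first, then the files, newest first. This is an illustration only, not HBase code, and the tiny memtable limit is chosen just to force a flush.

```python
# Minimal log-structured-merge sketch (illustration, not HBase internals).
class TinyLSM:
    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.sstables = []          # list of immutable sorted (key, value) lists
        self.limit = memtable_limit

    def put(self, key, value):
        # Writes only touch memory (plus, in a real system, a write-ahead log).
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self.flush()

    def flush(self):
        # Dump the memtable as one sorted, immutable file.
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.sstables):   # newest flush wins
            for k, v in table:
                if k == key:
                    return v
        return None

db = TinyLSM()
db.put("row1", "a"); db.put("row2", "b")   # second put triggers a flush
db.put("row1", "a2")                        # newer version lives in the memtable
print(db.get("row1"), db.get("row2"))       # -> a2 b
```

Because every write is sequential (memory, then a streaming flush), this pattern suits an append-only store like HDFS, which is why Cassandra uses the same layout.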
  • A Big Sorted Map, sorted by row key and column; the timestamp is a long value:
    Row key  Column key  Timestamp      Cell
    Row1     info:aaa    1273516197868  valueA
    Row1     info:bbb    1273871824184  valueB
    Row1     info:bbb    1273871823022  oldValueB   (2 versions of this cell)
    Row1     info:ccc    1273746289103  valueC
    Row2     info:hello  1273878447049  i_am_a_value
    Row3     info:       1273616297446  another_value
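The table above can be modeled as a sorted, versioned map in a few lines of Python. This is a toy model of the data model, not HBase code; by default a read returns the newest timestamped version of a cell.

```python
# Toy model of HBase's data model: a map keyed by (row, column) holding
# multiple timestamped versions per cell, scanned in sorted key order.
class SortedVersionedMap:
    def __init__(self):
        self.cells = {}   # (row, column) -> list of (timestamp, value)

    def put(self, row, column, timestamp, value):
        self.cells.setdefault((row, column), []).append((timestamp, value))

    def get(self, row, column):
        versions = self.cells.get((row, column), [])
        return max(versions)[1] if versions else None  # newest timestamp wins

    def scan(self):
        # Cells come back sorted by (row key, column key), as in the table above.
        for key in sorted(self.cells):
            yield key, self.get(*key)

m = SortedVersionedMap()
m.put("Row1", "info:bbb", 1273871823022, "oldValueB")
m.put("Row1", "info:bbb", 1273871824184, "valueB")
m.put("Row1", "info:aaa", 1273516197868, "valueA")
print(m.get("Row1", "info:bbb"))  # -> valueB (the newer of the 2 versions)
```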
  • HBase API • get(row) • put(row, map<column, value>) • scan(key range, filter) • increment(row, columns) • … (checkAndPut, delete, etc…) • MapReduce/Hive
  • HBase Architecture
  • HBase in Numbers • Largest cluster: 600 nodes, ~600TB • Most clusters: 5-20 nodes, 100GB-4TB • Writes: 1-3ms, 1k-10k writes/sec per node • Reads: 0-3ms cached, 10-30ms disk • 10-40k reads / second / node from cache • Cell size: 0-3MB preferred
  • HBase compared • Favors Consistency over Availability (but availability is good in practice!) • Great Hadoop integration (very efficient bulk loads, MapReduce analysis) • Ordered range partitions (not hash) • Automatically shards/scales (just turn on more servers) • Sparse column storage (not key-value)
  • HBase in Production • Facebook (product release soon) • StumbleUpon / su.pr • Mozilla (receives crash reports) • … many others
  • Ok, fine, what next? Get Hadoop! Cloudera's Distribution for Hadoop: http://cloudera.com/ (available in Japanese!) and http://hadoop.apache.org/. Try it out! (Locally, VM, or EC2.) Watch free training videos on http://cloudera.com/
  • Questions? • todd@cloudera.com • (feedback? yes!) • (hiring? yes!)