2011 06-30-hadoop-summit v5
 

Slides from a presentation at Hadoop Summit 2011 on Facebook's Data Freeway system.


  • Sam (35s): Who am I? Worked in batch computing and distributed systems for over 10 years, at both research facilities and in the internet industry. Presently work at Facebook; have worked on HDFS, Scribe, Calligraphus, and realtime metrics. Eric (?s)
  • We'll give you a little context on how data fits into Facebook, and also what pieces of Data Freeway enable realtime computations. (20s)
  • We have a lot of data at Facebook. Some of these stats are probably old (about 6 months). An accurate one: we handle about 250 TB of data per day with Scribe-HDFS, and probably close to 300 TB total. We have both batch and realtime uses of data; we'll focus on the components of Data Freeway that enable realtime use. (35s)
  • Entry points: A1. the tfb www PHP code (flib/core/scribe/client.php, or use Nectar). A2. a binary/script that sends data to Scribe directly (fbcode/scribe/if/scribe.thrift with TFramedProtocol to localhost:1456). A*: all data comes in the form category X message. The Policy System manages quotas. Realtime: B1. raw data streams via ptail; B2. Realtime Analytics, the flagship realtime app (ptail -> Puma/HBase). Batch: B3. reporting and batch processing on the Hadoop/Hive cluster (Daily tables and Current tables). (1m30s)
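As a rough illustration of entry point A2, the sketch below sends one message to the local scribed over framed Thrift on port 1456. It assumes Java stubs generated from scribe.thrift (the scribe.Client and LogEntry classes); the category and message are made up.

```java
import java.util.Collections;

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

// Hedged sketch: log one (category, message) pair to the local scribed via Thrift.
// Assumes stubs generated from fbcode/scribe/if/scribe.thrift; names may differ.
public class ScribeClientExample {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 1456));
        transport.open();
        scribe.Client client = new scribe.Client(new TBinaryProtocol(transport));

        // All data enters the system as category X message.
        LogEntry entry = new LogEntry("my_category", "example log line");
        client.Log(Collections.singletonList(entry));

        transport.close();
    }
}
```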
  • Realtime requirements. Scalable: also ease of operations. Reliable: data loss SLA; this means bounding loss due to hardware failure. Fast: data latency SLA, again in the face of slow and failing hardware. Easy to use: ptail -f -time. (1m30s)
  • Scribe is reliable: it handles buffering when scribeh is down. It is really easy to use, since it runs on every host and has a language-agnostic Thrift API. That is a major reason we are doing 7 GB/s. (40s)
  • What: in a nutshell, Calligraphus is a Scribe-compatible server that logs to HDFS. Why: we decided to rewrite just part of Scribe for scribeh, and Java made more sense; libhdfs with JNI has had not only memory leaks but also improper error return codes ('exists' returns -1 for both 'not there' and 'error'). Status: we hope to open source it soon. (1m)
  • This is a different use of HDFS (not a batch system). Think of each entity as a publisher of data (say they are web hosts); the consumer of a ptail output stream is a subscriber. A writer tags what it publishes with category 1, and a reader asks for all messages tagged with category 1. This way, you get both a low-latency pub/sub system and reliable persistence on HDFS! (45-60s)
  • How do we get to HDFS as a message hub? We first need to make sure clients have a guarantee that writes are persisted and available for read. The fsync call only happens on the first sync() call; the blocks need to be persisted only once, so we don't hit the Namenode on every client sync call. A return from sync means the blocks are persisted and every datanode has written the block to disk (it may still be in OS buffers, though). Typically sync is very expensive: experiments with the HBase write-ahead log show a 50% drop in throughput with frequent sync calls. (1m)
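Below is a minimal sketch of the write-then-sync pattern described in this note, using the Hadoop 0.20-append era FSDataOutputStream.sync() call (later renamed hflush/hsync); the path and batching interval are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: append messages to an HDFS file and sync so concurrent readers
// can see the data within seconds (HDFS-200 semantics). Sync is expensive, so
// it is called per batch rather than per message.
public class SyncingWriter {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/datafreeway/my_category/current"));
        for (int i = 0; i < 1000; i++) {
            out.write(("message " + i + "\n").getBytes("UTF-8"));
            if (i % 100 == 99) {
                out.sync(); // blocks persisted; partial chunk flushed to every datanode
            }
        }
        out.close();
    }
}
```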
  • With sync in place, the next step is to make blocks being written available for read. Recall that one of our realtime requirements is 10s latency for reading data. The typical block size is 512 MB, and 4 GB/s is split out across thousands of files, with compression. Without concurrent reads, data would not be visible for an hour (when we roll files), so concurrent reads are essential to satisfy our realtime requirements. (1m)
  • We updated FSNameSystem so that the Namenode will return the targets for any file that is currently being written, and updated DFSClient so that if it tries to read the block being written, it gets the length from the datanode. It can then start reading the data up to the length at the time the datanode was queried. Note one problem here: the visible length per datanode can vary, so if a datanode goes down, it's possible to get a different datanode with a shorter length, especially in the case of lease recovery (truncation). This is solved in 0.22 with visible length (sort of). (1m15s)
  • The issue here is that data and metadata can be out of sync for the last 512-byte chunk. We define the visible length as the length of data for which metadata exists, and track it in memory. If we read more data than that, we know we will get a CRC error and can therefore compute the CRC on the fly to send to the client. Note: we only do the on-the-fly CRC in the strictest of situations so we don't regress and miss disk errors. The trunk solution is more elegant and was done afterwards; it is possible to back-port. (1m)
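To make the on-the-fly CRC step concrete, here is a trivial sketch of checksumming only the visible bytes of the last chunk; it is illustrative only and not the actual datanode code.

```java
import java.util.zip.CRC32;

// Hedged sketch: when checksum metadata lags behind the data in the last
// 512-byte chunk, recompute a CRC over just the visible bytes and return
// that to the reader instead of the stale stored checksum.
public class VisibleChunkChecksum {
    static long checksumVisibleBytes(byte[] lastChunk, int visibleLength) {
        CRC32 crc = new CRC32();
        crc.update(lastChunk, 0, visibleLength); // only bytes covered by the visible length
        return crc.getValue();
    }
}
```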
  • Calligraphus' role in the Data Freeway pipeline is to take incoming data streams and persist them to HDFS. Many thousands of client hosts generate log data for a number of categories, which is then delivered to randomly selected Calligraphus servers for load-balancing purposes. Since the ratio of clients to servers is very high, we can safely assume every server will receive roughly every category. The question is: in what way should we persist this to disk? In our system, every stream written to HDFS needs to correspond to a directory in which data is appended in the form of files, to maintain proper stream semantics for downstream components, which follow each directory as an independent stream. Thus, we should avoid having multiple writers for one directory (a.k.a. stream).
  • A really simple solution is just to have every writer write each category stream independently. The problem with the independent-writer approach: it is resource-inefficient and unscalable! The number of output streams is approximately the product of the number of categories and the number of servers. Scaling to more machines or categories takes its toll on the Namenode (a resource bottleneck) as well as on downstream components that need to read these streams on a per-category basis.
  • This is the approach that we took. A more suitable solution is to consolidate data streams to reduce the number of output streams. We do this by breaking the Calligraphus servers into two logical components, a router and a writer, and adding an intermediate shuffle phase between the router and writer tiers before writing to HDFS. Calligraphus writers are assigned category streams, or portions of category streams (for large streams), to be written; based on this assignment, routers direct data streams to the appropriate writers. This drastically reduces the number of output data streams and minimizes all of the problems we mentioned on the last slide. ZooKeeper is the core component facilitating router-writer interactions, serving as a distributed map for routers and as a stream-assignment platform for writers.
  • ZooKeeper can be viewed as a type of lightweight distributed file system: you can put and get data in a hierarchical namespace of nodes that are accessed like file-system paths. The paths that we define consist of a category/bucket pair, and these serve as the root nodes for leader elections. We use buckets in addition to categories in order to partition category streams that are too large for one writer to handle. We run leader elections under each of these paths to determine task assignment. Since election winners may not always have enough capacity, they can choose to reject leadership immediately or shed it later if load changes. Aggressive reader-side caches minimize ZooKeeper network I/O for queries; this is OK because mappings do not change frequently after stabilizing. A policy-db sync propagates new categories into the maps and adjusts the number of buckets on the fly.
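A minimal sketch of one such election, assuming the standard ZooKeeper Java client and the /root/&lt;category&gt;/&lt;bucket&gt; path layout from the slides; the connection string, bucket name, and capacity-rejection note are made up.

```java
import java.util.Collections;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hedged sketch: a writer enters the canonical leader election under one
// category/bucket znode; the lowest ephemeral sequential node owns the bucket.
// Assumes the /root/my_category/bucket-3 path already exists.
public class BucketElection {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { });
        String root = "/root/my_category/bucket-3";

        String me = zk.create(root + "/n_", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);

        List<String> candidates = zk.getChildren(root, false);
        Collections.sort(candidates);
        boolean isLeader = me.endsWith(candidates.get(0));

        // A winner that lacks spare capacity can reject leadership simply by
        // deleting its node, which triggers a new election for this bucket.
        System.out.println(isLeader ? "own bucket-3" : "standby for bucket-3");
    }
}
```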
  • High availability is an inherent property of ZooKeeper: we can lose a few ZooKeeper nodes and still keep going. There is no centralized authority (independent elections; writers independently manage load), so there is no single point of failure. Fast map lookups are a result of client caching. Failover is another property of using ZooKeeper's ephemeral nodes for leader elections. We adapt to changing conditions by adding new election root nodes for new categories or by adjusting the number of buckets for a category.
  • We can enter about 3,000 elections in just under 30 seconds, and routers can run elections to find leaders on all buckets in about 5-10 seconds. Reaching a stable configuration takes a bit of time, since we have a slow-start phase to balance load and writers need a little time after each bucket acquisition to determine their load -- but this is OK for our use case, since in the long run the mapping is stable. If no mapping exists for some data stream, we buffer the data until it is defined. Later on during operation, we can handle incremental changes to the mapping very quickly, and we can respond to election events or failures in less than a second.
  • Servers log data to an NFS filer in a way that consolidates the data into one or more files; a tailer app then reads the data. Examples: all need data in a timely fashion. (30s)
  • Two key points: ptail hides the fact that we have many HDFS instances (the user can specify a category and get a stream), and it supports checkpointing; the generalization of the checkpoint mechanism is high value (it is used in Puma). (45s)
  • Configure the servers' local scribed to write to scribeh. The client uses the ptail app to get an aggregated log stream in realtime. (25s)
  • The goal of Puma is to provide a configurable realtime analytics platform. Customers can set up pipelines here, similar to Hive, but get the data in realtime. Puma is in fact a canonical streaming app in how it uses ptail, and we also leverage HBase for persistence. (30s)
  • Write flow: ptail delivers log lines at up to 600,000 per second. The Driver includes a parser and processor that filters lines and sends appropriate lines to the aggregation store. The Aggregation Store updates any metrics based on the parsed entry. If the Driver sees a checkpoint line, it instead passes it to the checkpoint handler. The checkpoint handler decides whether it should flush; if so, it tells Aggregation to persist changes to Storage and then writes checkpoint data to storage. Notes: Storage is an interface; we have memory and HBase implementations, and it can change to other forms such as MySQL. (1m15s)
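A hedged sketch of that write-flow loop follows; Driver, AggregationStore, CheckpointHandler, and the checkpoint-line marker are hypothetical names standing in for the Puma components.

```java
import java.io.BufferedReader;

// Hedged sketch of the Puma write flow: the driver reads the ptail stream,
// routes checkpoint lines to the checkpoint handler, and sends parsed data
// lines to the aggregation store. All names here are invented for illustration.
public class PumaDriverSketch {
    interface AggregationStore { void update(String key); void persist(); }
    interface CheckpointHandler { boolean shouldFlush(String line); void save(String line); }

    static void run(BufferedReader ptail, AggregationStore store, CheckpointHandler ckpt) throws Exception {
        String line;
        while ((line = ptail.readLine()) != null) {
            if (line.startsWith("__CHECKPOINT__")) {    // invented checkpoint marker
                if (ckpt.shouldFlush(line)) {
                    store.persist();                     // persist aggregates first...
                    ckpt.save(line);                     // ...then record the checkpoint
                }
            } else {
                String[] fields = line.split("\t");      // parser/processor: filter and extract key
                if (fields.length > 1) {
                    store.update(fields[0]);
                }
            }
        }
    }
}
```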
  • Read path: the client makes a request to a Thrift server; the server proxies the request to the Store implementation, which queries HBase. Performance: elapsed time is typically 200-300 ms for 30-day queries, and the 99th percentile, cross-country, is under 500 ms for 30-day queries. (1m)
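A minimal sketch of the storage query behind that read path, assuming the HBase 0.90-era client API; the table, row-key scheme, and column names are invented.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch: the Thrift server proxies a metric lookup to the HBase-backed
// store implementation. Table, family, and qualifier names are illustrative.
public class PumaReadSketch {
    static long readCount(String key, String day) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "puma_metrics");
        Result row = table.get(new Get(Bytes.toBytes(key + ":" + day)));
        byte[] value = row.getValue(Bytes.toBytes("m"), Bytes.toBytes("count"));
        table.close();
        return value == null ? 0L : Bytes.toLong(value);
    }
}
```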
  • -scribe & calligraphus get data into the system -HDFS at the core -ptail provides data out -puma is our emerging streaming analytics platform (20s)
  • Puma: we basically implemented a shared write-ahead log in an HBase table for our specific use case. Using LZO or GZIP, we can't seek to the end of the uncompressed stream. The solution is to provide a basic container structure with information about compressed/uncompressed blocks: each 1 MB block contains a header with a list of compressed/uncompressed offsets in the stream. (1m)
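To make that container structure concrete, here is a hedged sketch of writing one block: a small header records the uncompressed and compressed lengths so a reader can skip block-by-block to the end of the stream without decompressing. The header layout is invented for illustration, not the actual format.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.util.zip.GZIPOutputStream;

// Hedged sketch: write one block of a seekable compressed container. Each block
// header stores (uncompressed length, compressed length) so readers can seek
// past whole blocks without inflating their payloads.
public class SeekableBlockWriter {
    static void writeBlock(DataOutputStream out, byte[] uncompressed) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(buf);
        gz.write(uncompressed);
        gz.finish();
        byte[] compressed = buf.toByteArray();

        out.writeInt(uncompressed.length); // header: uncompressed block length
        out.writeInt(compressed.length);   // header: compressed block length
        out.write(compressed);             // block payload
    }
}
```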

Presentation Transcript

  • Data Freeway: Scaling Out to Realtime
    • Eric Hwang, Sam Rash
    • {ehwang,rash}@fb.com
  • Agenda
    • Data at Facebook
    • Data Freeway System Overview
    • Realtime Requirements
    • Realtime Components
      • Calligraphus/Scribe
      • HDFS use case and modifications
      • Calligraphus: a Zookeeper use case
      • ptail
      • Puma
    • Future Work
  • Big Data, Big Applications / Data at Facebook
    • Lots of data
      • more than 500 million active users
      • 50 million users update their statuses at least once each day
      • More than 1 billion photos uploaded each month
      • More than 1 billion pieces of content (web links, news stories, blog posts, notes, photos, etc.) shared each week
      • Data rate: over 7 GB / second
    • Numerous products can leverage the data
      • Revenue related: Ads Targeting
      • Product/User Growth related: AYML, PYMK, etc
      • Engineering/Operation related: Automatic Debugging
      • Puma: streaming queries
  • Data Freeway System Diagram
  • Realtime Requirements
      • Scalability: 10-15 GBytes/second
      • Reliability: No single point of failure
      • Data loss SLA: 0.01%
        • loss due to hardware: means at most 1 out of 10,000 machines can lose data
      • Delay of less than 10 sec for 99% of data
        • Typically we see 2s
      • Easy to use: as simple as 'tail -f /var/log/my-log-file'
  • Scribe
    • Scalable distributed logging framework
    • Very easy to use:
      • scribe_log(string category, string message)
    • Mechanics:
      • Runs on every machine at Facebook
      • Built on top of Thrift
      • Collect the log data into a bunch of destinations
      • Buffer data on local disk if network is down
    • History:
      • 2007: Started at Facebook
      • 2008 Oct: Open-sourced
  • Calligraphus
    • What
      • Scribe-compatible server written in Java
      • emphasis on modular, testable code-base, and performance
    • Why?
      • extract simpler design from existing Scribe architecture
      • cleaner integration with Hadoop ecosystem
        • HDFS, Zookeeper, HBase, Hive
    • History
      • In production since November 2010
      • Zookeeper integration since March 2011
  • HDFS : a different use case
    • message hub
      • add concurrent reader support and sync
      • writers + concurrent readers form a pub/sub model
  • HDFS : add Sync
    • Sync
      • implemented in 0.20 (HDFS-200)
        • partial chunks are flushed
        • blocks are persisted
      • provides durability
      • lowers write-to-read latency
  • HDFS : Concurrent Reads Overview
    • Without changes, stock Hadoop 0.20 does not allow access to the block being written
    • Need to read the block being written for realtime apps in order to achieve < 10s latency
  • HDFS : Concurrent Reads Implementation
    • DFSClient asks Namenode for blocks and locations
    • DFSClient asks Datanode for length of block being written
    • opens last block
  • HDFS : Checksum Problem
    • Issue: data and checksum updates are not atomic for last chunk
    • 0.20-append fix:
      • detect when data is out of sync with checksum using a visible length
      • recompute checksum on the fly
    • 0.22 fix
      • last chunk data and checksum kept in memory for reads
  • Calligraphus: Log Writer (diagram: Scribe categories 1-3 delivered to Calligraphus servers, which write to HDFS)
      • How to persist to HDFS?
  • Calligraphus (Simple) (diagram: every server writes every category directly to HDFS; total number of directories = number of categories x number of servers)
  • Calligraphus (Stream Consolidation) (diagram: categories pass through a router tier and a writer tier coordinated by ZooKeeper before reaching HDFS; total number of directories = number of categories)
  • ZooKeeper: Distributed Map
    • Design
      • ZooKeeper paths as tasks (e.g. /root/<category>/<bucket>)
      • Canonical ZooKeeper leader elections under each bucket for bucket ownership
      • Independent load management – leaders can release tasks
      • Reader-side caches
      • Frequent sync with policy db
    (diagram: a Root node with categories A-D, each containing buckets 1-5)
  • ZooKeeper: Distributed Map
    • Real-time Properties
      • Highly available
      • No centralized control
      • Fast mapping lookups
      • Quick failover for writer failures
      • Adapts to new categories and changing throughput
  • Distributed Map: Performance Summary
    • Bootstrap (~3000 categories)
      • Full election participation in 30 seconds
      • Identify all election winners in 5-10 seconds
      • Stable mapping converges in about three minutes
    • Election or failure response usually <1 second
      • Worst case bounded in tens of seconds
  • Canonical Realtime Application
    • Examples
      • Realtime search indexing
      • Site integrity: spam detection
      • Streaming metrics
  • Parallel Tailer
    • Why?
      • Access data in 10 seconds or less
      • Data stream interface
    • Command-line tool to tail the log
      • Easy to use: ptail -f cat1
      • Support checkpoint: ptail -cp XXX cat1
  • Canonical Realtime ptail Application
  • Puma Overview
    • realtime analytics platform
    • metrics
      • count, sum, unique count, average, percentile
    • uses ptail checkpointing for accurate calculations in the case of failure
    • Puma nodes are sharded by keys in the input stream
    • HBase for persistence
  • Puma Write Path
  • Puma Read Path
  • Summary - Data Freeway
    • Highlights:
      • Scalable: 4-5 GBytes/second
      • Reliable: no single point of failure; < 0.01% data loss with hardware failures
      • Realtime: delay < 10 sec (typically 2s)
    • Open-Source
      • Scribe, HDFS
      • Calligraphus/Continuous Copier/Loader/ptail (pending)
    • Applications
      • Realtime Analytics
      • Search/Feed
      • Spam Detection/Ads Click Prediction (in the future)
  • Future Work
    • Puma
      • Enhance functionality: add application-level transactions on HBase
      • Streaming SQL interface
    • Seekable Compression format
      • for large categories, the files are 400-500 MB
      • need an efficient way to get to the end of the stream
      • Simple Seekable Format
        • container with compressed/uncompressed stream offsets
        • contains data segments which are independent virtual files
  • Fin
    • Questions?