Big Data Real Time Analytics - A Facebook Case Study
 

Building Your Own Facebook Real Time Analytics System with Cassandra and GigaSpaces.

Facebook's real time analytics system is a good reference for anyone looking to build their own real time analytics system for big data.

The first part covers the lessons from Facebook's experience and the reasons they chose HBase over Cassandra.

The second part of the session shows how we can build our own real time analytics system, achieve better performance, gain real business insights and analytics on our big data, and make deployment and scaling significantly simpler using the new version of Cassandra and GigaSpaces Cloudify.

  • Blog References:
    1) Real Time analytics for Big Data: Facebook's New Realtime Analytics System - http://natishalom.typepad.com/nati_shaloms_blog/2011/07/real-time-analytics-for-big-data-an-alternative-approach-to-facebooks-new-realtime-analytics-system.html
    2) Real Time Analytics for Big Data: An Alternative Approach - http://natishalom.typepad.com/nati_shaloms_blog/2011/07/real-time-analytics-for-big-data-an-alternative-approach.html
    3) A recorded version of the presentation is available here:
    http://natishalom.typepad.com/nati_shaloms_blog/2012/01/realtime-analytics-for-big-data-a-facebook-case-study.html
  • http://developers.facebook.com/blog/post/476/
  • http://highscalability.com/blog/2011/3/22/facebooks-new-realtime-analytics-system-hbase-to-process-20.html
    MySQL DB Counters: Keep a row with a key and a counter. This results in lots of database activity. Stats are kept at a day-bucket granularity, and every day at midnight the stats roll over. The roll-over produces a burst of writes to the database, which caused a lot of lock contention. They tried to spread the work by taking time zones into account and tried to shard things differently. The high write rate led to lock contention, it was easy to overload the databases, they had to constantly monitor the databases, and they had to rethink their sharding strategy. The solution was not well tailored to the problem. (A sketch of this counter-row approach follows below.)
    In-Memory Counters: If you are worried about IO bottlenecks, throw it all in memory - no scale issues, counters stored in memory are fast to write and easy to shard. They felt that in-memory counters, for reasons not explained, weren't as accurate as other approaches, and even a 1% failure rate would be unacceptable: analytics drive money, so the counters have to be highly accurate. They didn't implement this system; it was a thought experiment, and the accuracy issue caused them to move on.
    MapReduce: They used Hadoop/Hive for the previous solution. Flexible, easy to get running, and able to handle IO - both massive writes and reads. You don't have to know ahead of time how the data will be queried; it can be stored and then queried. But it is not realtime, has many dependencies and lots of points of failure, and is a complicated system - not dependable enough to hit realtime goals.
    Cassandra: HBase seemed a better solution based on availability and the write rate, the write rate being the huge bottleneck they were solving.
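    The counter-row approach above is easy to picture with a small sketch. This is not Facebook's code; it is a minimal, hypothetical illustration using plain JDBC against a MySQL table (the table name, columns, and connection details are all made up) of how each event increments a (key, day-bucket) row - the same row that becomes a lock-contention hot spot at high write rates.

        import java.sql.Connection;
        import java.sql.Date;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.time.LocalDate;

        public class MySqlCounterSketch {
            // Hypothetical schema:
            //   CREATE TABLE counters (stat_key VARCHAR(255), day DATE,
            //       impressions BIGINT NOT NULL DEFAULT 0,
            //       PRIMARY KEY (stat_key, day));
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/analytics", "user", "password")) {
                    String upsert =
                        "INSERT INTO counters (stat_key, day, impressions) VALUES (?, ?, 1) " +
                        "ON DUPLICATE KEY UPDATE impressions = impressions + 1";
                    try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                        // Every Like/impression event increments the row for (key, day bucket).
                        // At very high write rates many events hit the same row, so this
                        // row-level update becomes the contention point described above.
                        ps.setString(1, "plugin_impression:example.com/some-page");
                        ps.setDate(2, Date.valueOf(LocalDate.now()));
                        ps.executeUpdate();
                    }
                }
            }
        }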
  • http://highscalability.com/blog/2011/3/22/facebooks-new-realtime-analytics-system-hbase-to-process-20.html
    The Winner: HBase + Scribe + Ptail + Puma. At a high level: HBase stores data across distributed machines. A tailing architecture is used: new events are stored in log files, and the logs are tailed. A system rolls the events up and writes them into storage, and a UI pulls the data out and displays it to users.
    Data Flow: A user clicks Like on a web page, which fires an AJAX request to Facebook. The request is written to a log file using Scribe, which handles issues like file roll-over. Scribe is built on the same HDFS file store that Hadoop is built on. They write extremely lean log lines; the more compact the log lines, the more can be stored in memory.
    Ptail: Data is read from the log files using Ptail, an internal tool built to aggregate data from multiple Scribe stores. It tails the log files and pulls data out. Ptail data is separated into three streams so they can eventually be sent to their own clusters in different datacenters: plugin impressions, news feed impressions, and actions (plugin + news feed).
    Puma: Batches data to lessen the impact of hot keys. Even though HBase can handle a lot of writes per second, they still want to batch data: a hot article will generate a lot of plugin and news feed impressions, which causes huge data skews and IO issues. The more batching the better. They batch for 1.5 seconds on average; they would like to batch longer, but they have so many URLs that they run out of memory when creating a hashtable. They wait for the last flush to complete before starting a new batch, to avoid lock contention issues.
    UI Renders Data: Frontends are all written in PHP. The backend is written in Java, and Thrift is used as the messaging format so PHP programs can query Java services. Caching solutions are used to make the web pages display more quickly. Performance varies by statistic: a counter can come back quickly, while finding the top URL in a domain can take longer - anywhere from 0.5 seconds to a few seconds. The more and longer data is cached, the less realtime it is, so different caching TTLs are set in memcache.
    MapReduce: The data is then sent to MapReduce servers so it can be queried via Hive. This also serves as a backup plan, since the data can be recovered from Hive. Raw logs are removed after a period of time.
    HBase: A distributed column store that provides a database interface to Hadoop; Facebook has people working internally on HBase. Unlike a relational database, you don't create mappings between tables and you don't create indexes: the only index is the primary row key. From the row key you can have millions of sparse columns of storage. It's very flexible: you don't have to specify a schema; you define column families to which you can add keys at any time. A key feature for scalability and reliability is the WAL (write-ahead log), a log of the operations that are supposed to occur. Based on the key, data is sharded to a region server, written to the WAL first, and then put into memory. At some point in time, or when enough data has accumulated, the data is flushed to disk. If the machine goes down, the data can be recreated from the WAL, so there is no permanent data loss. Using a combination of the log and in-memory storage, they can handle an extremely high rate of IO reliably. HBase handles failure detection and automatically routes around failures. Currently HBase resharding is done manually; automatic hot-spot detection and resharding is on the roadmap for HBase, but it's not there yet. Every Tuesday someone looks at the keys and decides what changes to make in the sharding plan.
    Schema: They store a bunch of counters on a per-URL basis. The row key, which is the only lookup key, is the MD5 hash of the reversed domain. Selecting the proper key structure helps with scanning and sharding: a problem they have is sharding data properly onto different machines, and using an MD5 hash makes it easy to say this range goes here and that range goes there. For URLs they do something similar, plus they add an ID on top of that; every URL in Facebook is represented by a unique ID, which is used to help with sharding. A reversed domain, com.facebook/ for example, is used so that the data is clustered together. HBase is really good at scanning clustered data, so if the data is stored clustered together they can efficiently calculate stats across domains. Think of every row as a URL and every cell as a counter; you can set different TTLs (time to live) for each cell, so if keeping an hourly count there's no reason to keep it around for every URL forever, and they set a TTL of two weeks. TTLs are typically set on a per-column-family basis. Per server they can handle 10,000 writes per second. (A sketch of this kind of schema using the HBase client API follows below.)
    Checkpointing: Checkpointing is used to prevent data loss when reading data from log files. Tailers save log stream checkpoints in HBase; these are replayed on startup so no data is lost. The system is useful for detecting click fraud, but it doesn't have fraud detection built in.
    Tailer Hot Spots: In a distributed system there's a chance one part of the system can be hotter than another. One example is region servers that become hot because more keys are being directed their way. One tailer can also lag behind another. If one tailer is an hour behind and the others are up to date, what numbers do you display in the UI? For example, impressions have a way higher volume than actions, so CTR rates were way higher in the last hour. The solution is to figure out the least up-to-date tailer and use that when querying metrics.
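    To make the schema description above concrete, here is a minimal sketch using the standard HBase Java client. It is not Facebook's code: the table name, column family, and hour-bucket qualifier are invented, and the per-family TTL is assumed to have been configured at table-creation time. It only illustrates the row-key layout (MD5 of the reversed domain plus a URL id) and counters stored as cells.

        import java.security.MessageDigest;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.TableName;
        import org.apache.hadoop.hbase.client.Connection;
        import org.apache.hadoop.hbase.client.ConnectionFactory;
        import org.apache.hadoop.hbase.client.Increment;
        import org.apache.hadoop.hbase.client.Table;
        import org.apache.hadoop.hbase.util.Bytes;

        public class UrlCounterSketch {
            // Hypothetical table and column family; the "h" (hourly) family would be
            // created with a TTL of two weeks, as described in the notes above.
            private static final TableName TABLE = TableName.valueOf("url_stats");
            private static final byte[] CF_HOURLY = Bytes.toBytes("h");

            public static void main(String[] args) throws Exception {
                Configuration conf = HBaseConfiguration.create();
                try (Connection conn = ConnectionFactory.createConnection(conf);
                     Table table = conn.getTable(TABLE)) {

                    // Row key: MD5 of the reversed domain (clusters a domain's URLs for
                    // efficient scans) plus a per-URL id used to help with sharding.
                    byte[] rowKey = Bytes.add(md5("com.facebook/"), Bytes.toBytes(12345L));

                    // Each cell is a counter, e.g. impressions for one hour bucket.
                    Increment inc = new Increment(rowKey);
                    inc.addColumn(CF_HOURLY, Bytes.toBytes("impressions:2011-07-01T13"), 1L);
                    table.increment(inc);
                }
            }

            private static byte[] md5(String s) throws Exception {
                return MessageDigest.getInstance("MD5").digest(Bytes.toBytes(s));
            }
        }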
  • A Potential for Improvement: There are lots of areas in which you can see potential improvements if the assumptions are changed. As a contrast to Facebook's working system:
    We can simplify the design. If memory can be treated as transactional - and it can - we can use events without transforming them as they proceed along our analytics workflow. This makes our design much simpler to implement and test, and performance improves as well.
    We can strengthen the design. With a polling semantic, such systems are brittle, relying on processes that pull data in order to generate realtime analytics. We should be able to reduce the fragility of the system, even while making it faster.
    We can strengthen the implementation. With batching subsystems, there are limits that shouldn't exist. For example, one concern in Facebook's implementation is the use of an in-memory hash table that stores intermediate data; the in-memory aspect isn't a concern until you realize that the batch sizes are chosen partly to make sure that this hash table doesn't overflow the available space.
    We can allow deployments to change databases based on their requirements. There's nothing wrong with HBase, but it has specific characteristics that aren't appropriate for all enterprises. We can design a system that you'd be able to deploy on a variety of platforms, and we can migrate the underlying long-term data store to a different database if needed.
    We can consolidate the analytics system so that management is easier and unified. While there are system management standards like SNMP that allow management events to be presented in the same way no matter the source, having so many different pieces means that managing the system requires an encompassing understanding, which makes maintenance and scaling more difficult.
    What we want to do, then, is create a general model for an application that can accomplish the same goals as Facebook's realtime analytics system, while leveraging the capabilities that in-memory data grids offer where available, potentially offering improvement in the areas of scalability, manageability, latency, platform neutrality, and simplicity, all while increasing ease of data access. That sounds like quite a tall order, but it's doable.
    The key is to remember that, at heart, realtime analytics represent an events system. Facebook's entire architecture is designed to funnel events through various channels so that they can safely and sequentially manage event updates. In effect, they receive a massive set of events that "look like" marbles, which they line up in single file; they then sort the marbles by color, you might say, and for each color they create a bundle of sticks; the sticks are lit on fire, and when the heat goes past a certain temperature, steam is generated, which turns a turbine. It's a real-life Rube Goldberg machine, which is admirable in that it works, but much of it is unnecessary if the assumptions about memory ("unreliable") and the database ("HBase is the only target that counts") are changed. Looking at the analogy, there's no need to change a marble into anything: the marble is enough.
  • Value: write/read scaling through partitioning; performance through memory speed; reliability through replication and redundancy.
  • Value: A data grid like GigaSpaces comes with a rich set of APIs that provide not only the means to store data quickly and reliably, but also to access and query the data just as you would with a database. Specifically, GigaSpaces supports both the JPA and Document APIs, and a way to mix and match between them. Unlike Scribe and the log-based approach, we can now look at the data as it comes in, not only once it has been stored in a database. The latter also makes it possible to partition data based on time - e.g. the first day in memory and the rest in the database. (A sketch of storing and querying events through the Document API follows below.)
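    As a rough illustration of that point, here is a minimal sketch using the GigaSpaces Document API and SQL-style queries. Class names come from the GigaSpaces XAP / OpenSpaces API of that era; the space name ("analyticsSpace"), event type ("LikeEvent"), and properties are made up, and exact configuration classes vary across versions.

        import com.gigaspaces.document.SpaceDocument;
        import com.gigaspaces.metadata.SpaceTypeDescriptorBuilder;
        import com.j_spaces.core.client.SQLQuery;
        import org.openspaces.core.GigaSpace;
        import org.openspaces.core.GigaSpaceConfigurer;
        import org.openspaces.core.space.UrlSpaceConfigurer;

        public class DataGridSketch {
            public static void main(String[] args) {
                // Start/connect to an embedded space named "analyticsSpace" (made up).
                GigaSpace gigaSpace = new GigaSpaceConfigurer(
                        new UrlSpaceConfigurer("/./analyticsSpace").space()).gigaSpace();

                // Register a schema-free document type for incoming events,
                // partitioned (routed) by URL.
                gigaSpace.getTypeManager().registerTypeDescriptor(
                        new SpaceTypeDescriptorBuilder("LikeEvent")
                                .idProperty("id", true)
                                .routingProperty("url")
                                .create());

                // Write an event as it arrives - no fixed schema required.
                SpaceDocument event = new SpaceDocument("LikeEvent");
                event.setProperty("url", "http://example.com/article");
                event.setProperty("action", "like");
                event.setProperty("timestamp", System.currentTimeMillis());
                gigaSpace.write(event);

                // Query the live, in-memory data just as you would a database.
                SQLQuery<SpaceDocument> query =
                        new SQLQuery<SpaceDocument>("LikeEvent", "url = ?");
                query.setParameter(1, "http://example.com/article");
                SpaceDocument[] matches = gigaSpace.readMultiple(query, Integer.MAX_VALUE);
                System.out.println("events for URL: " + matches.length);
            }
        }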
  • Collocating the processing with the data provides the biggest gain in terms of scalability and performance, since we reduce the number of network hops as well as the serialization overhead. We also reduce the number of moving parts, which in itself simplifies our runtime architecture and our ability to scale. The other benefit is that we decentralize the Puma service from the Facebook example, and thus make the entire architecture significantly more scalable.
  • This snippet of code shows the part of the code that generates the statistical information as the events come in. The template defines the filter for the events: in the example (shown in the "Step 2 - Collocate" slide below), we filter for any event of type Data that has a false value in its "processed" attribute. For every event that matches this filter, the eventListener method is called with the corresponding Data object.
  • Value gained: avoid lock-in to a specific NoSQL API; performance - reduced network hops and serialization overhead; simplicity - fewer moving parts; scalability without compromising on consistency (strict consistency at the front, eventual consistency for the long-term data); JPA and other standard APIs.
  • content based routing, workflow

Big Data Real Time Analytics - A Facebook Case Study - Presentation Transcript

  • Real Time Analytics for Big Data - Lessons from Facebook..
  • The Real Time Boom.. Google Real Time Web Analytics, Google Real Time Search, Facebook Real Time Social Analytics, Twitter paid tweet analytics, SaaS Real Time User Tracking, new Real Time Analytics startups..
  • Analytics @ Twitter
  • Note the Time dimension
  • The data resolution & processing models
  • Traditional analytics applications
    • Scale-up Database
      • Use traditional SQL database
      • Use stored procedure for event driven reports
      • Use flash memory disks to reduce disk I/O
      • Use read only replica to scale-out read queries
    • Limitations
      • Doesn’t scale on write
      • Extremely expensive (HW + SW)
  • CEP – Complex Event Processing
    • Process the data as it comes
    • Maintain a window of the data in-memory
    • Pros:
      • Extremely low-latency
      • Relatively low-cost
    • Cons
      • Hard to scale (Mostly limited to scale-up)
      • Not agile - Queries must be pre-generated
      • Fairly complex
  • In Memory Data Grid
    • Distributed in-memory database
    • Scale out
    • Pros
      • Scale on write/read
      • Fits event-driven (CEP-style) and ad-hoc query models
    • Cons
      • Cost of memory vs disk
      • Memory capacity is limited
  • NoSQL
    • Use distributed database
      • HBase, Cassandra, MongoDB
    • Pros
      • Scale on write/read
      • Elastic
    • Cons
      • Read latency
      • Consistency tradeoffs are hard
      • Maturity – fairly young technology
  • Hadoop MapReduce
    • Distributed batch processing
    • Pros
      • Designed to process massive amounts of data
      • Mature
      • Low cost
    • Cons
      • Not real-time
  • Hadoop Map/Reduce – Reality check..
  • So what’s the bottom line?
  • Facebook Real-time Analytics System
  • Goals
    • Show why plugins are valuable.
      • What value is your business deriving from it?
    • Make the data more actionable.
      • Help users take action to make their content more valuable.
      • How many people see a plugin, how many people take action on it, and how many are converted to traffic back on your site.  
    • Make the data more timely. 
      • Went from a 48-hour turn around to 30 seconds.
      • Multiple points of failure were removed to make this goal. 
    • Handle massive load
      • 20 billion events per day (200,000 events per second)
  • The actual analytics..
    • Like button analytics
    • Comments box analytics
  • Technology Evaluation
    • MySQL DB Counters
    • In-Memory Counters
    • MapReduce
    • Cassandra
    • HBase
  • The solution..
    [Diagram: Facebook logs → Scribe → PTail → Puma (1.5 sec batches) → HBase (real time, 10,000 writes/sec per server) and HDFS (long-term batch)]
  • Checking the assumptions..
  • Facebook Analytics.Next..
    • What if..
      • We can rely on memory as a reliable store?
      • We can’t decide on a particular NoSQL database?
      • We need to package the solution as a product?
  • Step 1: Use memory..
    • Instead of treating memory as a cache, why not treat it as a primary data store?
      • Facebook keeps 80% of its data in Memory (Stanford research)
      • RAM is 100-1000x faster than disk (random seek)
        • Disk: 5-10 ms
        • RAM: ~0.001 ms
    [Diagram: Facebook events flowing into a memory grid of partitioned Data Grid nodes]
  • Step 1: Use memory..
    • Reliability is achieved through redundancy and replication
    • One Data. Any API
    [Diagram: Facebook events written to the Data Grid and accessed through any API]
  • Step 2 – Collocate
    • Putting the code together with the data.
    [Diagram: Facebook events flowing into a processing grid collocated with the Data Grid partitions]

    @EventDriven
    @Polling
    public class SimpleListener {

        @EventTemplate
        Data unprocessedData() {
            // Template: match any Data object whose "processed" flag is false
            Data template = new Data();
            template.setProcessed(false);
            return template;
        }

        @SpaceDataEvent
        public Data eventListener(Data event) {
            // process the Data here (e.g. update counters); a non-null return
            // value is written back to the space as the processed result
            return event;
        }
    }

    (A hypothetical sketch of the Data class itself follows below.)
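    The Data class the listener filters on is not shown in the deck; here is a hypothetical sketch of what such a space class might look like, using GigaSpaces POJO annotations. The @SpaceRouting property illustrates the content-based routing mentioned in the notes: events with the same routing value land in the same partition, where the collocated listener processes them without extra network hops.

        import com.gigaspaces.annotation.pojo.SpaceClass;
        import com.gigaspaces.annotation.pojo.SpaceId;
        import com.gigaspaces.annotation.pojo.SpaceRouting;

        @SpaceClass
        public class Data {
            private String id;
            private String url;
            private Long count;
            private boolean processed;

            public Data() {
            }

            @SpaceId(autoGenerate = true)
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }

            // Routing property: all events for the same URL are stored,
            // and processed, in the same partition.
            @SpaceRouting
            public String getUrl() { return url; }
            public void setUrl(String url) { this.url = url; }

            public Long getCount() { return count; }
            public void setCount(Long count) { this.count = count; }

            public boolean isProcessed() { return processed; }
            public void setProcessed(boolean processed) { this.processed = processed; }
        }

    Feeding the pipeline is then simply a gigaSpace.write(data) call: the polling listener above picks up anything whose processed flag is still false.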
  • Step 3 – Write behind to SQL/NoSQL
    [Diagram: Facebook events → processing grid (Data Grid partitions) → write-behind → open long-term persistency]
    (A sketch of such a write-behind hook follows below.)
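    What might the write-behind hook look like? In GigaSpaces, later versions expose it through the SpaceSynchronizationEndpoint API (earlier versions used the ExternalDataSource interface); the sketch below assumes the former and is illustrative only, with the actual calls into Cassandra, HBase, or an RDBMS left as hypothetical placeholder comments.

        import com.gigaspaces.sync.DataSyncOperation;
        import com.gigaspaces.sync.OperationsBatchData;
        import com.gigaspaces.sync.SpaceSynchronizationEndpoint;

        public class LongTermStoreEndpoint extends SpaceSynchronizationEndpoint {

            @Override
            public void onOperationsBatchSynchronization(OperationsBatchData batchData) {
                // Invoked asynchronously with a batch of grid changes, so the
                // front-end write path never waits on the long-term store.
                for (DataSyncOperation op : batchData.getBatchDataItems()) {
                    if (!op.supportsGetDataAsObject()) {
                        continue;
                    }
                    Object changed = op.getDataAsObject();
                    switch (op.getDataSyncOperationType()) {
                        case WRITE:
                        case UPDATE:
                            // persistToLongTermStore(changed);  // hypothetical helper
                            break;
                        case REMOVE:
                            // removeFromLongTermStore(changed); // hypothetical helper
                            break;
                        default:
                            break;
                    }
                }
            }
        }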
  • Economic Data Scaling
    • Combine memory and disk
      • Memory is 100-1000x cheaper than disk for high data-access rates (Stanford research)
      • Disk is cheaper for high capacity at low access rates
      • Solution:
        • Memory - short-term data
        • Disk - long-term data
      • Only ~16 GB is required to store the log in memory (500-byte messages arriving at 10K/sec, kept for an hour, is roughly 16-18 GB) at a cost of ~$32/month per server.
  • Economic Scaling
    • Automation - reduce operational cost
    • Elastic Scaling – reduce over provisioning cost
    • Cloud portability (JClouds) – choose the right cloud for the job
    • Cloud bursting – scavenge extra capacity when needed
  • Putting it all together Analytic Application Event Sources Write behind
    • - In Memory Data Grid
    • - RT Processing Grid
    • Light Event Processing
    • Map-reduce
    • Event driven
    • Execute code with data
    • Transactional
    • Secured
    • Elastic
    • NoSQL DB
    • Low cost storage
    • Write/Read scalability
    • Dynamic scaling
    • Raw Data and aggregated Data
    Generate Patterns
    Real Time Map/Reduce:

        Script script = new StaticScript("groovy", "println hi; return 0");
        Query q = em.createNativeQuery("execute ?");
        q.setParameter(1, script);
        Integer result = (Integer) q.getSingleResult();
  • 5x better performance per server!
    • Hardware – Linux
      • HP DL380 G6 servers - each has:
      • 2 Intel quad-core Xeon X5560 processors (2.8 GHz Nehalem)
      • 32 GB RAM (4 GB per core)
      • 6 x 146 GB 15K RPM SAS disks
      • Red Hat 5.2
    [Benchmark diagram: event injector (up to 128 threads) → GigaSpaces or another messaging server → app services (up to 128 threads); GigaSpaces: 50,000 writes/sec per server]
  • Live demo: Inter-Day Activity (Real Time) and Monthly Trend Analysis
  • 5 Big Data Predictions
  • Summary
    • Big Data Development Made Simple: focus on your business logic; use a Big Data platform to deal with scalability, performance, continuous availability, ..
    • It's Open - Use Any Stack: avoid lock-in; any database (RDBMS or NoSQL); any cloud; use common APIs & frameworks.
    • All While Minimizing Cost: use memory & disk for optimum cost/performance; built-in automation and management reduces operational costs; elasticity reduces over-provisioning cost.
  • Further reading..
  • Thank YOU! @natishalom http://blog.gigaspaces.com