Building a Social Platform
Part 3:
Scaling the Data Feed
Socialite
• Reference Implementation
– Various Fanout Feed Models
– User Graph Implementation
– Content storage
• Configurable models and options
• REST API in Dropwizard (Yammer)
– https://dropwizard.github.io/dropwizard/
• Built-in benchmarking
https://github.com/10gen-labs/socialite
Architecture
GraphServiceProxy
ContentProxy
Feed Service
• Two main functions:
– Aggregating “followed” content for a user
– Forwarding user’s content to “followers”
• Common implementation models:
– Fanout on read
• Query content of all followed users on the fly
– Fanout on write
• Add to a “cache” of each user’s timeline for every post
• Various storage models for the timeline
Fanout On Read
Pros
– Simple implementation
– No extra storage for timelines
Cons
– Timeline reads (typically) hit all shards
– Often involves reading more data than required
– May require additional indexing on Content
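A minimal fanout-on-read sketch in the mongo shell, assuming a content collection whose documents carry the author in "_a" (as in the bucket examples later in this deck) and a followed-user list already fetched from the graph service:

// Assumed: the followed-user ids come from the user graph service
> var following = ["djw", "ian"]
// Build the timeline on the fly: newest content of all followed users, one page
> db.content.find({"_a" : {"$in" : following}}).sort({"_id" : -1}).limit(50)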
Fanout On Write
Pros
– Timeline can be single document read
– Dormant users easily excluded
– Working set minimized
Cons
– Fanout for large follower lists can be expensive
– Additional storage for materialized timelines
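A sketch of the write path in the mongo shell: the post is stored once in the content collection and then copied into a per-follower timeline structure. The "timeline" collection name here is a stand-in for the bucket and cache layouts on the next slides, and the follower list is assumed to come from the graph service:

// Store the post once in the content collection
> var post = {"_id" : ObjectId(), "_a" : "jsr", "_m" : "message from jsr"}
> db.content.insert(post)
// Assumed: follower ids fetched from the user graph service
> var followers = ["djw", "ian"]
// Fan out: append the post to each follower's materialized timeline
> followers.forEach(function(f) {
      db.timeline.update({"_u" : f}, {"$push" : {"_c" : post}}, {upsert : true})
  })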
Fanout On Write
• Three different approaches
– Time buckets
– Size buckets
– Cache
• Each has different pros & cons
Timeline Buckets - Time
Upsert to time range buckets for each user
> db.timed_buckets.find().pretty()
{
  "_id" : {"_u" : "jsr", "_t" : 516935},
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"},
    {"_id" : ObjectId("...dd2"), "_a" : "ian", "_m" : "message from ian"}
  ]
}
{
  "_id" : {"_u" : "ian", "_t" : 516935},
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"}
  ]
}
{
  "_id" : {"_u" : "jsr", "_t" : 516934},
  "_c" : [
    {"_id" : ObjectId("...da7"), "_a" : "ian", "_m" : "earlier from ian"}
  ]
}
Timeline Buckets - Size
More complex, but more consistently sized
> db.sized_buckets.find().pretty()
{
  "_id" : ObjectId("...122"),
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"},
    {"_id" : ObjectId("...dd2"), "_a" : "ian", "_m" : "message from ian"},
    {"_id" : ObjectId("...da7"), "_a" : "ian", "_m" : "earlier from ian"}
  ],
  "_s" : 3,
  "_u" : "jsr"
}
{
  "_id" : ObjectId("...011"),
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"}
  ],
  "_s" : 1,
  "_u" : "ian"
}
Timeline - Cache
Store a limited cache, fall back to fanout on read
– Create single cache doc on demand with upsert
– Limit size of cache with $slice
– Timeout docs with TTL for inactive users
> db.timeline_cache.find().pretty()
{
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"},
    {"_id" : ObjectId("...dd2"), "_a" : "ian", "_m" : "message from ian"},
    {"_id" : ObjectId("...da7"), "_a" : "ian", "_m" : "earlier from ian"}
  ],
  "_u" : "jsr"
}
{
  "_c" : [
    {"_id" : ObjectId("...dc1"), "_a" : "djw", "_m" : "message from daz"}
  ],
  "_u" : "ian"
}
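A sketch of the cache write: $push with $each and a negative $slice keeps only the most recent entries, and a TTL index on a last-activity field lets caches of inactive users expire. The "_d" field, the 50-entry cap, and the 30-day window are assumptions, not Socialite's exact settings:

> var post = {"_id" : ObjectId(), "_a" : "ian", "_m" : "message from ian"}
// Upsert the single cache document, keeping at most the 50 newest entries
> db.timeline_cache.update(
      {"_u" : "jsr"},
      {"$push" : {"_c" : {"$each" : [post], "$slice" : -50}},
       "$set"  : {"_d" : new Date()}},
      {upsert : true}
  )
// TTL index: drop cache documents untouched for 30 days
> db.timeline_cache.ensureIndex({"_d" : 1}, {expireAfterSeconds : 60 * 60 * 24 * 30})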
Embedding vs Linking Content
Embedded content for direct access
– Great when it is small, predictable in size
Link to content, store only metadata
– Read only desired content on demand
– Further stabilizes cache document sizes
> db.timeline_cache.findOne({"_id" : "jsr"})
{
  "_c" : [
    {"_id" : ObjectId("...dc1")},
    {"_id" : ObjectId("...dd2")},
    {"_id" : ObjectId("...da7")}
  ],
  "_id" : "jsr"
}
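With references only, reading the timeline becomes two queries: pull the id list from the cache document, then fetch just those documents from the content collection (a sketch, reusing the content collection assumed on earlier slides):

// 1. Read the list of content references from the cache
> var cache = db.timeline_cache.findOne({"_id" : "jsr"})
> var ids = cache._c.map(function(ref) { return ref._id })
// 2. Fetch only the referenced content, newest first
> db.content.find({"_id" : {"$in" : ids}}).sort({"_id" : -1})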
Socialite Feed Service
• Implemented four models as plugins
– FanoutOnRead
– FanoutOnWrite – Buckets (size)
– FanoutOnWrite – Buckets (time)
– FanoutOnWrite – Cache
• Switchable by config
• Store content by reference or value
• Benchmark-able back to back
Benchmark by feed type
Benchmarking the Feed
• Biggest challenge: scaling the feed
• High cost of "fanout on write"
• Popular user posts => # operations:
– Content collection insert: 1
– Timeline Cache: on average, 130+ cache document updates
• SCATTER GATHER (slowest shard determines latency)
Benchmarking the Feed
• Timeline is different from content!
– "It's a Cache"
IT CAN BE REBUILT!
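Because the timeline is derived data, a missing cache document can be rebuilt by dropping back to fanout on read and re-materializing the result, roughly as follows (the followed-user list and the 50-entry cap are assumptions):

// Rebuild the cache for "jsr" from the content collection
> var following = ["djw", "ian"]
> var recent = db.content.find({"_a" : {"$in" : following}})
                         .sort({"_id" : -1}).limit(50).toArray()
> db.timeline_cache.update({"_u" : "jsr"}, {"$set" : {"_c" : recent}}, {upsert : true})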
Benchmarking the Feed
• MongoDB as a cache
Benchmarking the Feed
IT CAN BE REBUILT!
Effect of removing the cache and forcing drop-back to fanout on read and rebuilding of the cache:
Benchmarking the Feed
• Results
– last two weeks
– ran load with one million users
– ran load with ten million users (currently running)
– used avg send rate 1K/s; 2K/s; reads 10K-20K/s
– 22 AWS c3.2xlarge servers (7.5GB RAM)
– 18 across six shards (3 content, 3 user graph)
– 4 mongos and app machines
– 2 c3.4xlarge servers (30GB RAM) – timeline feed cache (six shards)
Summary
Socialite
• Real Working Implementation
– Implements All Components
– Configurable models and options
• Built-in benchmarking
• Questions?
– We will be at "Ask The Experts" this afternoon!
https://github.com/10gen-labs/socialite
Thank You!
Socialite, the Open Source Status Feed Part 3: Scaling the Data Feed


Scaling the delivery of posts and content to the follower networks of millions of users has many challenges. In this section we look at the various approaches to fanning out posts and compare their performance. We highlight some tricks for caching the recent timeline of active users to drive down read latency, and we examine overall performance metrics from Socialite as we scale from a single replica set to a large sharded environment using MMS Automation.

Notes
  • For a Social Platform to store and deliver streaming timelines over long periods of time, careful attention must be paid to the way content is stored. We provide a detailed look into storing an infinite timeline of data while optimizing indexing and sharding configuration for access the most recent window of data. We will also look at some overall performance metrics from Socialite as we scale from a single replica set to a large sharded environment.
  • Image of the Dropwizard hat at https://dropwizard.github.io/dropwizard

  • BRUTAL!!!
  • Variants?
  • Should you embed the messages/content into "cache"/buckets/etc. or just store references?

  • WHICH ONE DID WE IMPLEMENT IN SOCIALITE???
    All work with Async Service(? or mention later)
    And we did benchmark them! -> Asya
  • examining latency of reading content by fanout type - note two types of latency – for sender and for recipient.
    scaling throughput... THIS WILL NOT SCALE LINEARLY(!)

    *RERUN WITH SEVERAL SHARDS* replace with new screenshot

  • MongoDB as a cache
    Storage amplification on a feed service – Justin Bieber makes a single post and we need to write it to 2 million timelines.... ???
    Cache only for active users.

    Number of updates across all cache / number of documents updated



  • Some kind of wrap-up
