Twitter Streaming API Architecture
Follow a Tweet from creation to timeline and Streaming API delivery. The design of streaming within Twitter is influenced by the entire Twitter architecture, the direction of the platform, data syndication policies and Quality of Service requirements. We'll discuss these influences and our system implementation.


  • So, where are we going with the Streaming API? What are our constraints? We have four big goals for Streaming. First, we want users to have a low-latency experience. Instant feels like the right speed for Twitter. Not 18 seconds later. More or less right now. Second, every write into Twitter is an event that someone, somewhere might be interested in. We want to expose more event types than just new Tweets.
  • Third, we also want to provide full-fidelity data for those that need it. Sometimes you just need everything. And you also need a place to put it. And finally, we need to make large-scale integrations with other services as easy as possible. You shouldn't have to wrestle with parallel fetching, rate limiting, and all that. It should be easier for all developers to get data out of Twitter.
  • The REST API is not the solution for a low-latency experience, for large-scale integrations, or for exposing more and more event types. The REST model may be great for many things, but for real-time Twitter, where you just want to know what's changed, we've already pushed Request-Response too far. It's painful.
  • You can't quickly poll for deltas on the social graph, friends timeline, user timeline, mentions, favorites, searches, lists, and trends for just one user, never mind all your followings. Or a million users. Or ten million. Impossible. As Twitter adds more features, this just gets worse. It's just not practical to lift rate limits high enough to meet everyone's goals. The real-time REST model is near the point of collapse.
  • Why is REST so expensive? A lot of effort goes into responding to each API request. There's a lot to do, a lot of data to gather, and none of it is on the front-end box handling the request. To make matters more difficult, the cost and latency distributions are very wide -- from a cheap cache hit to a deep database crawl. Keeping latency low is a struggle.
  • Any solution powerful enough to solve all of these problems is going to be a bit dangerous. It needs some controls, especially if rate limits are removed. And it will still need to preserve all of our policies around abuse, privacy, terms of use, and so forth.
  • We've really tried to think through all of the policy implications here. Everyone has to play by the same rules, and it must be possible for everyone to have a chance at building a sustainable business. We've come to some win-win decisions about the firehose and other elevated access levels. I think we can make nearly everyone very happy. Go to the Corp Dev Office Hours at 2:30pm for more detail about our Commercial Data Licenses. Keep this in mind: solving these policy issues is a requirement, just as much as the technology issues are.
  • Our solution for all this is the Streaming API. We've already proven that we can offer low-latency streams of all Twitter events. We've been streaming these events to ourselves for quite some time. Twitter Analytics, for example, takes various private streams to feed experimental and production features. Pleasant. So, how does Streaming work?
  • In a nutshell, we gather interesting events everywhere in the Twitter system and apply those events to each Streaming server. Inside the server, we examine the event just once, and route it to all interested clients.
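The examine-once, route-to-many idea above can be sketched with a simple predicate model: each connected client registers a filter, and the server inspects each incoming event exactly once before fanning it out to matching clients. All class and field names here are illustrative assumptions, not Twitter's actual implementation.

```python
# Sketch of once-per-server event routing. A list stands in for each
# client's socket; predicates stand in for stream subscriptions.
from dataclasses import dataclass, field
from typing import Callable

Event = dict  # e.g. {"type": "tweet", "user": "alice"}

@dataclass
class Client:
    name: str
    wants: Callable[[Event], bool]          # per-client filter predicate
    delivered: list = field(default_factory=list)

class StreamServer:
    def __init__(self):
        self.clients: list[Client] = []

    def route(self, event: Event) -> None:
        # The event is examined once here, then written to every
        # interested client's connection.
        for client in self.clients:
            if client.wants(event):
                client.delivered.append(event)

server = StreamServer()
firehose = Client("firehose", wants=lambda e: True)
tweets_only = Client("tweets", wants=lambda e: e["type"] == "tweet")
server.clients += [firehose, tweets_only]

server.route({"type": "tweet", "user": "alice"})
server.route({"type": "favorite", "user": "bob"})
```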
  • This approach is a huge, huge win over Request-Response. It has turned out to be practical, stable, and very efficient. Little effort is wasted. Yes, we look at each event on each of our streaming servers, but that's really nothing compared to processing billions of requests only to say: sorry, no new tweets yet. Since each event is delivered only once, there's no bandwidth wasted. Latency is very low too. More on that later.
  • There's a flexibility bonus here too. We can add new event types to streams without having everyone recode to hit new endpoints. Just like adding new fields to JSON markup is future-proof, we can also easily add new events to existing streams. When you are ready to use the new events, you can; otherwise, ignore them.
  • What does a stream look like? Well, it's a continuous stream of discrete JSON or XML messages. We deliver events at least once and in roughly sorted order. In general, during steady state, you'll see each event exactly once, with a practical K-sorting. I'll talk more about how these properties affect you at my other talk.
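A minimal sketch of consuming such a stream of discrete JSON messages, assuming newline-delimited framing (one JSON object per line); the transcript doesn't specify the exact framing, so treat the framing and field names as illustrative assumptions.

```python
# Parse a stream of discrete JSON messages, one per line.
# An in-memory buffer stands in for the long-lived HTTP connection.
import io
import json

raw = io.StringIO(
    '{"id": 1, "type": "tweet", "text": "hello"}\n'
    '\n'  # blank keep-alive lines are common on long-lived streams
    '{"id": 2, "type": "favorite"}\n'
)

events = []
for line in raw:
    line = line.strip()
    if not line:          # skip keep-alives
        continue
    events.append(json.loads(line))
```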
  • These properties mean that you need to do at least a little post-processing on your end. The data isn't always display-ready or even display-worthy -- you need to post-process the Streaming API. Also, the Streaming API servers don't do much markup rendering -- that happens upstream in Ruby daemons -- so whatever rendering quirks you are used to on the REST API, well, they'll be here too. At least it's always the same quirks.
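The at-least-once, K-sorted delivery properties imply two minimal pieces of client-side post-processing: drop duplicate ids, and reorder events within a small window before handing them on. The window size and the use of `id` as the sort key are illustrative assumptions for this sketch.

```python
# Dedupe and window-reorder a roughly sorted, at-least-once stream.
import heapq

def postprocess(events, k=3):
    """Yield events deduplicated by id and re-sorted within a window of k."""
    seen = set()
    window = []                       # min-heap keyed on id
    for ev in events:
        if ev["id"] in seen:
            continue                  # at-least-once: discard redelivery
        seen.add(ev["id"])
        heapq.heappush(window, (ev["id"], ev))
        if len(window) > k:
            yield heapq.heappop(window)[1]
    while window:                     # drain the window at end of stream
        yield heapq.heappop(window)[1]

stream = [{"id": i} for i in (1, 3, 2, 3, 5, 4)]   # dup of 3, slight disorder
ordered = list(postprocess(stream))
```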
  • So how does the Streaming API fit within the Twitter system? It's all a downstream model. Users do things, stuff happens, and we route a copy to Streaming. Let's look at how we handle a common event: the creation of a new tweet.
  • In the user-visible loop, the FEs validate the input and update critical stores. They ack the user, then drop a message into a Kestrel message queue for offline processing. This way we can give user feedback, yet defer the heavy lifting to our event-driven architecture. The tweets are fanned out to internal services: search, streaming, Facebook, mobile, lists, and timelines. As an example, timeline-processing daemons read the event, serially look up all the followers in Flock, and re-enqueue large batches of work. Even before this Flock lookup completes, another timeline daemon pool reads these batches, then updates the memcache timeline vector of all the followers in a massively parallel fashion. The other servers do their own thing, and the tweet is eventually published everywhere.
  • Here we can see how events are fanned out from the Kestrel cluster into a single Hosebird cluster. Hosebird is the name of the Streaming server implementation. I really don't like it. But the name stuck. Anyway, we use Kestrel fanout queues to present each event to each fanout Hosebird process. Fanout queues duplicate each message for each known reader. Kestrel queues are bomb-proof and relatively inexpensive, but they aren't free.
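The fanout-queue behavior described above can be sketched as follows: every known reader gets its own copy of each enqueued message. A dict of in-memory deques stands in for Kestrel's durable per-reader queues; the class and reader names are illustrative.

```python
# Sketch of a fanout queue in the Kestrel style.
from collections import deque

class FanoutQueue:
    def __init__(self):
        self.readers: dict[str, deque] = {}

    def subscribe(self, reader: str) -> None:
        self.readers[reader] = deque()

    def put(self, message) -> None:
        # Duplicate the message into every known reader's private queue.
        for q in self.readers.values():
            q.append(message)

    def get(self, reader: str):
        return self.readers[reader].popleft()

q = FanoutQueue()
q.subscribe("hosebird-1")
q.subscribe("hosebird-2")
q.put("tweet:1001")
```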
  • So, within a streaming cluster, we keep things cheap by cascading. Cascading is where a Hosebird process reads from a peer via streaming HTTP, just like any other streaming client. No coordination is needed, and we're eating our own dogfood. There's hardly any latency added by cascading, but the cost savings are considerable when there's a large cluster of Hosebird machines. Also, we get rack locality of bandwidth, as the Hosebirds are generally together in a rack, while the Kestrels are located on another aisle.
  • OK. We've routed the events to all of the Hosebird servers. How do the servers work internally? Hosebird runs on the JVM. It's written in Scala, and uses an embedded Jetty webserver to handle the front-end issues. We feed each process 8 cores and about 12 gigs of memory, and each can send a lot of data to many, many clients.
  • Events flow through Scala actors that host the application logic. Filtered events are sent through a Java queue, then read by the connection thread, which handles the socket-writing details. We use the Grabby Hands Kestrel client to provide highly parallel, low-latency, blocking transactional reads from Kestrel. We use our own streaming client in the cascading case. Both fetching clients are very efficient and hardly use any CPU.
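The filter-to-connection handoff described above can be sketched in miniature: a filter stage pushes events onto a bounded queue, and a dedicated connection thread drains it and performs the blocking write. Python threads and `queue.Queue` stand in for the Scala actors and Java queue; all names are illustrative.

```python
# Minimal analogue of the actor -> queue -> connection-thread handoff.
import queue
import threading

out_queue: queue.Queue = queue.Queue(maxsize=1024)
written = []   # stands in for the client's socket

def connection_thread():
    while True:
        ev = out_queue.get()
        if ev is None:          # sentinel: connection closed
            break
        written.append(ev)      # stands in for the blocking socket write

writer = threading.Thread(target=connection_thread)
writer.start()

for ev in ["tweet:1", "favorite:2", "tweet:3"]:
    out_queue.put(ev)           # the filter/actor side of the handoff
out_queue.put(None)
writer.join()
```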
  • I used the Scala actor model wherever practical, to prevent a lot of worrying about concurrency issues. It's not a panacea, but it has made much of this work trivial. Actors currently fall down if you have too many of them, so we use the Java concurrency model to host the connections. Otherwise, it's all actors.
  • You may notice the apostasy of burning a thread per connection. The year 1997 is calling to mock me, I'm sure. But so far it hasn't mattered. The memory utilization isn't a limiting factor, and it keeps things very simple.
  • Feeds are logical groupings of events -- public statuses, direct messages, social graph changes, etc. Feeds keep a circular buffer of recent events to support the count parameter and some historical look-back. I had to parallelize the JSON and XML parsing, which turned out to be the big CPU burn and probably our major tweets-per-second scaling risk.
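The per-feed circular buffer mentioned above lets a newly connected client ask for the most recent `count` events before live delivery begins. `collections.deque` with `maxlen` gives the ring-buffer behavior; the capacity and method names are illustrative assumptions.

```python
# Sketch of a feed's circular buffer backing the count parameter.
from collections import deque

class Feed:
    def __init__(self, capacity: int = 5):
        self.recent = deque(maxlen=capacity)   # oldest events fall off

    def publish(self, event) -> None:
        self.recent.append(event)

    def backlog(self, count: int) -> list:
        """Return up to `count` most recent events, oldest first."""
        return list(self.recent)[-count:]

feed = Feed(capacity=5)
for i in range(8):
    feed.publish(f"event:{i}")
```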
  • Feeds can be reconfigured to internally forward events to other feeds. Arbitrary composition in conf files is a pretty powerful concept. So, to create User Streams, I just had to forward events from all these other existing feeds into the user feed and write some custom delivery logic. Yes, there are streams of direct messages. And social graph changes. And other interesting things. We can't expose them just yet due to privacy policy issues. But we'll get there. Plans have been laid.
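Feed-to-feed forwarding, as described above, composes a new feed out of existing ones: a feed relays every event it receives into its forwarding targets. This is a minimal sketch under that description; the class, attribute, and feed names are all illustrative.

```python
# Sketch of composing a user feed by forwarding from existing feeds.
class Feed:
    def __init__(self, name: str):
        self.name = name
        self.events: list = []
        self.forward_to: list["Feed"] = []

    def publish(self, event) -> None:
        self.events.append(event)
        for feed in self.forward_to:   # relay into composed feeds
            feed.publish(event)

statuses = Feed("statuses")
social_graph = Feed("social_graph")
user = Feed("user")                    # composite feed for user streams
statuses.forward_to.append(user)
social_graph.forward_to.append(user)

statuses.publish("tweet:1")
social_graph.publish("follow:2")
```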
  • It doesn't take long for a tweet to be created, pass through all of these components, and be presented to your stream. If all is running well with all of the upstream systems, tweets and other events are usually delivered with an average latency of about 160ms.
  • Here's a Ganglia monitoring graph, one of hundreds just for the Streaming API. Sometimes I find it funny that outside devs say, "hey, did you know that you are throwing 503s on this endpoint?" Yes, we know. There's a graph for it. If there isn't a graph -- we immediately add one. And we roll the key ones up into a grid of 12 summary graphs that everyone watches. There's also a bank of graph monitors in ops.
  • Each line above represents the average latency from each of several Hosebird clusters. This was taken during peak load on a typical weekday. You can see a blip about halfway through. Given that all clusters moved in unison, there was probably an upstream garbage collection in Kestrel, or something similar. We've put a lot of effort into lowering Twitter latency and keeping it low and predictable. (If visible, the blue line is a cascaded cluster, where yellow and green are fanout only.)
  • User Streams offer a much more engaging way to interact with Twitter. You get to see a lot that happens to you -- who favorited your tweet, who followed you, and so forth -- in real time. You also get to see what your followings are doing: who they favorited and followed. There's a huge opportunity for discovery here with User Streams. If two friends favorite a tweet, and two others follow the tweeter, show me the tweet! We know that User Streams are transformative. Goldman and I were watching #chirp during Ryan's talk. It was incredible to watch them scroll by. In the few days we've been using them at the office, everyone has been transfixed. Engineering productivity has plummeted! It's the Farmville of Twitter.
  • OK, what's next for Streaming? First we're going to get User Streams out there. We have some more critical features to add, and we have to add capacity to handle potentially millions of connections. We've announced the details for a User Streams preview period. Read them carefully before coding or planning anything. There are also some interesting events that we don't yet publish. We'll see what we can get out there for you. Once User Streams are in a good spot, we want to get back to some interesting large-scale integration features.
  • With User Streams, it's all coming together. Real-time Twitter. Lots of event types. More engagement. More discovery. New user experiences are now possible. Go out and build something great!

Presentation Transcript

  • Twitter Streaming API Architecture John Kalucki @jkalucki Infrastructure
  • Heading • Immediate User Experience • More event types • Full fidelity • Easier integrations
  • REST? • Downsides • Latency • Complexity • Expense • Prevents • At-Scale Integrations • Features • Fidelity
  • Needy • Authenticate • Rate Limit • Query vast caches • Query deep data stores • Render. Render. Render. • All just to say: “No new tweets. Try later.”
  • Policy • Prove Relationship via Auth Token • Terms of Use • No resyndication • Protect users, content, ecosystem
  • Streaming Solution • Gather events • Route to all servers • Present to decision logic: Exactly once per server • Move ‘em out: Send just once* to clients
  • Win, Huge • Low latency • Little duplicated computation • No wasted bandwidth • New event types without new endpoints
  • Properties • At Least Once • Roughly Sorted (K-Sorted) by Time • Middleware - No rendering • More at 2:30pm talk - Thinking In Streams
  • Gather All Events
  • Push Events
  • Latency
  • <created_at> - client arrival Policy 200ms 100ms 0 ms 1 hour period
  • User Streams chirpstream/2b/user.json
  • Future • User Streams refinement and launch • More data types • Better query support • Large scale integration support