Samza at LinkedIn: Taking Stream Processing to the Next Level
Slides from my talk at Berlin Buzzwords, 27 May 2014. Unfortunately Slideshare has screwed up the fonts. See https://speakerdeck.com/ept/samza-at-linkedin-taking-stream-processing-to-the-next-level for a version of the deck with correct fonts.

Stream processing is an essential part of real-time data systems, such as news feeds, live search indexes, real-time analytics, metrics and monitoring. But writing stream processes is still hard, especially when you're dealing with so much data that you have to distribute it across multiple machines. How can you keep the system running smoothly, even when machines fail and bugs occur?

Apache Samza is a new framework for writing scalable stream processing jobs. Like Hadoop and MapReduce for batch processing, it takes care of the hard parts of running your message-processing code on a distributed infrastructure, so that you can concentrate on writing your application using simple APIs. It is in production use at LinkedIn.

This talk will introduce Samza, and show how to use it to solve a range of different problems. Samza has some unique features that make it especially interesting for large deployments, and in this talk we will dig into how they work under the hood. In particular:

• Samza is built to support many different jobs written by different teams. Isolation between jobs ensures that a single badly behaved job doesn't affect other jobs. It is robust by design.
• Samza can handle jobs that require large amounts of state, for example joining multiple streams, augmenting a stream with data from a database, or aggregating data over long time windows. This makes it a very powerful tool for applications.

Speaker note: Samza also provides an interface for windowing tasks that are invoked at regular time intervals, along with methods for initialization, configuration, etc. Checkpointing is handled behind the scenes.

Samza at LinkedIn: Taking Stream Processing to the Next Level Presentation Transcript

  • 1. Apache Samza: Taking stream processing to the next level
  • 2. Martin Kleppmann  Co-founded two startups, Rapportive ⇒ LinkedIn  Committer on Avro & Samza ⇒ Apache  Writing book on data-intensive apps ⇒ O’Reilly  martinkl.com | @martinkl
  • 3. Apache Kafka Apache Samza
  • 4. Apache Kafka Apache Samza Credit: Jason Walsh on Flickr https://www.flickr.com/photos/22882695@N00/2477241427/ Credit: Lucas Richarz on Flickr https://www.flickr.com/photos/22057420@N00/5690285549
  • 5. Things we would like to do (better)
  • 6. Provide timely, relevant updates to your newsfeed
  • 7. Update search results with new information as it appears
  • 8. “Real-time” analysis of logs and metrics
  • 9. Tools? Diagram: tools compared by response latency and coupling. Kafka & Samza: milliseconds to minutes, loosely coupled. REST: synchronous, closely coupled. (Remaining quadrant: hours to days, loosely coupled.)
  • 10. Diagram: Service 1 and Service 2 publish events/messages to Kafka; Analytics, Cache maintenance, and Notifications subscribe.
  • 11. Publish / subscribe  Event / message = “something happened” – Tracking: User x clicked y at time z – Data change: Key x, old value y, set to new value z – Logging: Service x threw exception y in request z – Metrics: Machine x had free memory y at time z  Many independent consumers
  • 12. Kafka at LinkedIn  350+ Kafka brokers  8,000+ topics  140,000+ Partitions  278 Billion messages/day  49 TB/day in  176 TB/day out  Peak Load – 4.4 Million messages per second – 6 Gigabits/sec Inbound – 21 Gigabits/sec Outbound
  • 13. Samza API: processing messages. public interface StreamTask { void process(IncomingMessageEnvelope envelope, MessageCollector collector, TaskCoordinator coordinator); } Slide annotations: the envelope provides getKey() and getMsg(); the collector provides sendMsg(topic, key, val); the coordinator provides commit() and shutdown(). (A runnable sketch of a StreamTask follows the transcript.)
  • 14. Familiar ideas from MR/Pig/Cascading/…  Filter records matching condition  Map record ⇒ func(record)  Join two/more datasets by key  Group records with the same value in field  Aggregate records within the same group  Pipe job 1’s output ⇒ job 2’s input  MapReduce assumes fixed dataset. Can we adapt this to unbounded streams?
  • 15. Operations on streams  Filter records matching condition ✔ easy  Map record ⇒ func(record) ✔ easy  Join two/more datasets by key …within time window, need buffer  Group records with the same value in field …within time window, need buffer  Aggregate records within the same group ✔ ok… when do you emit the result? (see the windowing sketch after the transcript)  Pipe job 1’s output ⇒ job 2’s input ✔ ok… but what about faults?
  • 16. Stateful stream processing (join, group, aggregate)
  • 17. Joining streams requires state  User goes to lunch ⇒ click long after impression  Queue backlog ⇒ click before impression  “Window join”. Diagram: ad clicks and ad impressions are joined and aggregated, using a key-value store, to compute the click-through rate.
  • 18. Remote state or local state? Diagram: Samza job partitions 0 and 1 querying a remote store (e.g. Cassandra, MongoDB, …); each node can process 100-500k msg/sec, but the remote store handles only 1-5k queries/sec??
  • 19. Remote state or local state? Diagram: instead, each Samza job partition (0 and 1) keeps its own local LevelDB/RocksDB store. (A sketch of a join task using such a store follows the transcript.)
  • 20. Another example: Newsfeed & following  User 138 followed user 582  User 463 followed user 536  User 582 posted: “I’m at Berlin Buzzwords and it rocks”  User 507 unfollowed user 115  User 536 posted: “Nice weather today, going for a walk”  User 981 followed user 575  Expected output: “inbox” (newsfeed) for each user
  • 21. Newsfeed & following. Diagram: a “fan out messages to followers” job consumes follow/unfollow events (User 138 followed user 582) and posted messages (User 582 posted: “I’m at Berlin Buzzwords and it rocks”), maintains a followers table (582 => [ 138, 721, … ]), and emits delivered messages (Notify user 138: {User 582 posted: “I’m at Berlin Buzzwords and it rocks”}) to drive push notifications etc. (A sketch of such a fan-out task follows the transcript.)
  • 22. Local state: Bring computation and data together in one place
  • 23. Fault tolerance
  • 24. Diagram: two machines, each running a YARN NodeManager with two Samza containers (each container running several tasks), coordinated by the YARN ResourceManager and reading from/writing to Kafka. BOOM.
  • 25. Diagram: the same setup, with one machine failing (BOOM).
  • 26. Diagram: the lost containers and their tasks are restarted on Machine 3, alongside the surviving containers on Machine 2, still connected to Kafka.
  • 27. Fault-tolerant local state. Diagram: each Samza job partition (0 and 1) replicates the writes to its local LevelDB/RocksDB store into a durable changelog. (A sketch of the corresponding job config follows the transcript.)
  • 28. Diagram: Machines 2 and 3, each running a YARN NodeManager with Samza containers and their tasks, connected to Kafka; the restarted tasks rebuild their local state from the changelog.
  • 29. Samza’s fault-tolerant local state  Embedded key-value: very fast  Machine dies ⇒ local key-value store is lost  Solution: replicate all writes to Kafka!  Machine dies ⇒ restart on another machine  Restore key-value store from changelog  Changelog compaction in the background (Kafka 0.8.1)
  • 30. When things go slow…
  • 31. Cascades of jobs. Diagram: Stream A and Stream B flow into a cascade of jobs (Job 1 through Job 5), owned by different teams (Team X, Team Y, Team Z).
  • 32. What to do when a consumer goes slow? Drop data? No thanks. Backpressure? Other jobs grind to a halt. Queue up? You run out of memory, then spill to disk… oh wait, Kafka does this anyway!
  • 33. Diagram: the same cascade (Stream A and Stream B into Jobs 1 through 5), with each intermediate result (Job 1 output, Job 2 output, Job 3 output) kept as its own stream.
  • 34. Samza always writes job output to Kafka; MapReduce always writes job output to HDFS.
  • 35. Every job output is a named stream  Open: Anyone can consume it  Robust: If a consumer goes slow, nobody else is affected  Durable: Tolerates machine failure  Debuggable: Just look at it  Scalable: Clean interface between teams  Clean: loose coupling between jobs
  • 36. Recap. Problem ⇒ solution: need to buffer job output for downstream consumers ⇒ write it to Kafka! Need to make the local state store fault-tolerant ⇒ write it to Kafka! Need to checkpoint job state for recovery ⇒ write it to Kafka! (The config sketch after the transcript shows the changelog and checkpoint settings.)
  • 37. Apache Kafka Apache Samza kafka.apache.org samza.incubator.apache.org
  • 38. Hello Samza (try Samza in 5 mins)
  • 39. Thank you! Samza: • Getting started: samza.incubator.apache.org • Underlying thinking: bit.ly/jay_on_logs • Start contributing: bit.ly/samza_newbie_issues Me: • Twitter: @martinkl • Blog: martinkl.com
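Code sketches

The transcript above compresses several code-heavy slides, so a few illustrative sketches follow; they are not part of the original deck. Slide 13 shows only the StreamTask interface (and abbreviates getMessage() to getMsg() and collector.send(...) to sendMsg(...)). A minimal task implementing it might look like this; the "filtered-events" topic name and the filter condition are invented for illustration, while the classes and methods follow Samza's published low-level API:

    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskCoordinator;

    // Forwards every incoming message whose payload mentions "error"
    // to a hypothetical "filtered-events" Kafka topic.
    public class FilterErrorsTask implements StreamTask {
      private static final SystemStream OUTPUT = new SystemStream("kafka", "filtered-events");

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        Object message = envelope.getMessage();  // already deserialized by the configured serde
        if (message != null && message.toString().contains("error")) {
          // Re-emit the message, with the same key, to the output stream
          collector.send(new OutgoingMessageEnvelope(OUTPUT, envelope.getKey(), message));
        }
      }
    }

Samza instantiates one such task per input partition and calls process() once per message; the framework, not the task, handles partition assignment, serialization, and offset checkpointing.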
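Slide 15 asks when an aggregation should emit its result, and the speaker note mentions an interface for windowing tasks that are called at set intervals: WindowableTask. A sketch combining it with StreamTask (the "message-counts" output topic is invented; the window length comes from the task.window.ms job setting):

    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskCoordinator;
    import org.apache.samza.task.WindowableTask;

    // Counts messages and emits the count once per window; the window interval
    // is configured with task.window.ms in the job properties.
    public class MessageCountTask implements StreamTask, WindowableTask {
      private static final SystemStream OUTPUT = new SystemStream("kafka", "message-counts");
      private int count = 0;

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        count++;  // accumulate within the current window
      }

      @Override
      public void window(MessageCollector collector, TaskCoordinator coordinator) {
        // Called periodically by the framework: emit the aggregate and reset it
        collector.send(new OutgoingMessageEnvelope(OUTPUT, count));
        count = 0;
      }
    }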
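Slides 17-19 argue for keeping join state in a local, embedded key-value store rather than a remote database. A sketch of the ad-impression/ad-click join using Samza's KeyValueStore; the stream names, the "impressions" store name, the string serdes, and the absence of any window expiry are all simplifications for illustration:

    import org.apache.samza.config.Config;
    import org.apache.samza.storage.kv.KeyValueStore;
    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.InitableTask;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskContext;
    import org.apache.samza.task.TaskCoordinator;

    // Joins ad clicks against previously seen ad impressions, keyed by ad ID,
    // using a local key-value store (LevelDB/RocksDB, replicated to a changelog).
    public class AdJoinTask implements StreamTask, InitableTask {
      private static final SystemStream OUTPUT = new SystemStream("kafka", "ad-click-through");
      private KeyValueStore<String, String> impressions;

      @Override
      @SuppressWarnings("unchecked")
      public void init(Config config, TaskContext context) {
        // The store must be declared in the job config under stores.impressions.*
        impressions = (KeyValueStore<String, String>) context.getStore("impressions");
      }

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        String adId = (String) envelope.getKey();
        String stream = envelope.getSystemStreamPartition().getStream();

        if ("ad-impressions".equals(stream)) {
          // Buffer the impression until a matching click arrives
          impressions.put(adId, (String) envelope.getMessage());
        } else if ("ad-clicks".equals(stream)) {
          String impression = impressions.get(adId);
          if (impression != null) {
            // Emit the joined record and clear the buffered impression
            collector.send(new OutgoingMessageEnvelope(OUTPUT, adId, impression));
            impressions.delete(adId);
          }
        }
      }
    }

Assuming both input streams are partitioned by ad ID, each task only ever touches the slice of state belonging to its own partition, which is what makes the local-state approach scale.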
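Slides 20-22 work through the newsfeed example: one job consumes both the follow/unfollow events and the posted messages, keeps a followers table in local state, and fans each post out to its author's followers. A sketch along those lines; the stream names, the "followers" store name, and the comma-separated encoding of follower lists are invented, and unfollow handling is omitted:

    import org.apache.samza.config.Config;
    import org.apache.samza.storage.kv.KeyValueStore;
    import org.apache.samza.system.IncomingMessageEnvelope;
    import org.apache.samza.system.OutgoingMessageEnvelope;
    import org.apache.samza.system.SystemStream;
    import org.apache.samza.task.InitableTask;
    import org.apache.samza.task.MessageCollector;
    import org.apache.samza.task.StreamTask;
    import org.apache.samza.task.TaskContext;
    import org.apache.samza.task.TaskCoordinator;

    // Maintains a userId => followers mapping from follow events, and delivers
    // each posted message to every follower of its author.
    public class FanOutTask implements StreamTask, InitableTask {
      private static final SystemStream DELIVERIES = new SystemStream("kafka", "delivered-messages");
      private KeyValueStore<String, String> followers;  // userId => comma-separated follower IDs

      @Override
      @SuppressWarnings("unchecked")
      public void init(Config config, TaskContext context) {
        followers = (KeyValueStore<String, String>) context.getStore("followers");
      }

      @Override
      public void process(IncomingMessageEnvelope envelope,
                          MessageCollector collector,
                          TaskCoordinator coordinator) {
        // Both streams are keyed by the followed/posting user, so the relevant
        // follower list is always in this task's local store.
        String userId = (String) envelope.getKey();
        String stream = envelope.getSystemStreamPartition().getStream();

        if ("follow-events".equals(stream)) {
          String follower = (String) envelope.getMessage();
          String existing = followers.get(userId);
          followers.put(userId, existing == null ? follower : existing + "," + follower);
        } else if ("posted-messages".equals(stream)) {
          String followerList = followers.get(userId);
          if (followerList != null) {
            for (String follower : followerList.split(",")) {
              // Key each delivery by the recipient so downstream consumers
              // (push notifications etc.) can be partitioned by recipient
              collector.send(new OutgoingMessageEnvelope(DELIVERIES, follower, envelope.getMessage()));
            }
          }
        }
      }
    }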
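Finally, slides 27-29 and 36 boil down to "write it to Kafka": the local store's writes go to a changelog topic, and consumer offsets are checkpointed to Kafka too. Both are plain job configuration; a sketch of the relevant fragment might look like this (store and stream names match the join sketch above, the changelog topic name is made up, and exact factory class names vary between Samza versions, so treat them as indicative):

    # Input streams, read from the Kafka system
    task.inputs=kafka.ad-impressions,kafka.ad-clicks

    # Local key-value store; every write is replicated to a Kafka changelog topic,
    # so the store can be rebuilt if the task is restarted on another machine
    stores.impressions.factory=org.apache.samza.storage.kv.KeyValueStorageEngineFactory
    stores.impressions.changelog=kafka.adjoin-impressions-changelog
    stores.impressions.key.serde=string
    stores.impressions.msg.serde=string
    serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory

    # Consumer offsets are checkpointed to Kafka, so a restarted task resumes
    # where it left off instead of reprocessing the whole stream
    task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory
    task.checkpoint.system=kafka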