
Stream, Stream, Stream: Different Streaming Methods with Spark and Kafka


At NMC (Nielsen Marketing Cloud) we provide our customers (marketers and publishers) real-time analytics tools to profile their target audiences.
To achieve that, we need to ingest billions of events per day into our big data stores, and we need to do it in a scalable yet cost-efficient manner.

In this session, we will discuss how we continuously transform our data infrastructure to support these goals.

Specifically, we will review how we went from CSV files and standalone Java applications all the way to multiple Kafka and Spark clusters, performing a mixture of Streaming and Batch ETLs, and supporting 10x data growth.

We will share our experience as early-adopters of Spark Streaming and Spark Structured Streaming, and how we overcame technical barriers (and there were plenty...).

We will present a rather unique solution of using Kafka to imitate streaming over our Data Lake, while significantly reducing our cloud services' costs.

Topics include:
* Kafka and Spark Streaming for stateless and stateful use-cases
* Spark Structured Streaming as a possible alternative
* Combining Spark Streaming with batch ETLs
* "Streaming" over Data Lake using Kafka



  1. Stream, Stream, Stream: Different Streaming Methods with Spark and Kafka Itai Yaffe, Nielsen
  2. Introduction Itai Yaffe ● Tech Lead, Big Data group ● Dealing with Big Data challenges since 2012
  3. Introduction - part 2 (or: “your turn…”) ● Data engineers? Data architects? Something else? ● Attended our session yesterday about counting unique users with Druid? ● Working with Spark/Kafka? Planning to?
  4. Agenda ● Nielsen Marketing Cloud (NMC) ○ About ○ High-level architecture ● Data flow - past and present ● Spark Streaming ○ “Stateless” and “stateful” use-cases ● Spark Structured Streaming ● “Streaming” over our Data Lake
  5. Nielsen Marketing Cloud (NMC) ● eXelate was acquired by Nielsen in March 2015 ● A data company ● Machine learning models for insights ● Targeting ● Business decisions
  6. Nielsen Marketing Cloud - questions we try to answer 1. How many unique users of a certain profile can we reach? E.g. a campaign for young women who love tech 2. How many impressions did a campaign receive?
  7. Nielsen Marketing Cloud - high-level architecture
  8. Data flow in the old days... In-DB aggregation OLAP
  9. Data flow in the old days… What’s wrong with that? ● CSV-related issues, e.g.: ○ Truncated lines in input files ○ Can’t enforce schema ● Scale-related issues, e.g.: ○ Had to “manually” scale the processes
  10. That's one small step for [a] man… (2014) “Apache Spark is the Taylor Swift of big data software” (Derrick Harris, 2015) In-DB aggregation OLAP
  11. Why just a small step? ● Solved the scaling issues ● Still faced the CSV-related issues
  12. Data flow - the modern way + Photography Copyright: NBC
  13. Read messages In-DB aggregation OLAP
  14. The need for stateful streaming Fast forward a few months... ● New requirements were being raised ● Specific use-case: ○ To take the load off of the operational DB (used both as OLTP and OLAP), we wanted to move most of the aggregative operations to our Spark Streaming app
  15. Stateful streaming via “local” aggregations 1. Read messages 2. Aggregate current micro-batch 3. Write combined aggregated data 4. Read aggregated data from HDFS every X micro-batches 5. Upsert aggregated data (every X micro-batches) OLAP
  16. Stateful streaming via “local” aggregations ● Required us to manage the state on our own ● Error-prone ○ E.g. what if my cluster is terminated and data on HDFS is lost? ● Complicates the code ○ Mixed input sources for the same app (Kafka + files) ● Possible performance impact ○ Might cause the Kafka consumer to lag
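The “local” aggregation loop described in the two slides above can be sketched roughly as follows. This is an illustrative DStream-API sketch, not NMC’s actual code: the topic name, state path, batch interval and the upsertToOlap helper are all hypothetical placeholders.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.kafka.common.serialization.StringDeserializer

// Hypothetical upsert into the OLAP store - a stand-in for the real sink
def upsertToOlap(rdd: org.apache.spark.rdd.RDD[(String, Long)]): Unit = ???

val conf = new SparkConf().setAppName("local-aggregations")
val ssc  = new StreamingContext(conf, Seconds(60))
val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "broker:9092",
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "aggregator")

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams)) // 1. read messages

var batchCount = 0
stream.map(r => (r.key, 1L))
  .reduceByKey(_ + _)                                   // 2. aggregate the current micro-batch
  .foreachRDD { rdd =>
    batchCount += 1
    // 3. combine with the aggregated state already on HDFS and write it back
    //    (assumes the state path exists; real code must also handle the first batch)
    val combined = rdd
      .union(ssc.sparkContext.objectFile[(String, Long)]("hdfs:///state")) // 4. read aggregated data
      .reduceByKey(_ + _)
    combined.saveAsObjectFile(s"hdfs:///state-$batchCount")
    if (batchCount % 10 == 0) upsertToOlap(combined)    // 5. upsert every X micro-batches
  }

ssc.start()
ssc.awaitTermination()
```

Note how the same app mixes Kafka and file inputs and tracks batch counts by hand - exactly the hand-rolled state management the slide above calls error-prone.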
  17. Structured Streaming - to the rescue? Spark 2.0 introduced Structured Streaming ● Enables running continuous, incremental processes ○ Basically manages the state for you ● Built on Spark SQL ○ DataFrame/Dataset API ○ Catalyst Optimizer ● Many other features ● Was in ALPHA mode in 2.0 and 2.1
  18. Structured Streaming - stateful app use-case 1. Read messages 2. Aggregate current window 3. Checkpoint (offsets and state) handled internally by Spark 4. Upsert aggregated data (on window end) OLAP
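A minimal Structured Streaming sketch of the same stateful flow, with the state and checkpointing handled by Spark. Topic, watermark, window size, checkpoint path and the upsert sink are illustrative assumptions; foreachBatch, used here as the upsert sink, only arrived in Spark 2.4.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Hypothetical upsert into the OLAP store - a stand-in for the real sink
def upsertToOlap(batch: DataFrame): Unit = ???

val spark = SparkSession.builder.appName("stateful-agg").getOrCreate()
import spark.implicits._

val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "events")                        // 1. read messages
  .load()
  .selectExpr("CAST(key AS STRING) AS k", "timestamp")

val counts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "1 hour"), $"k")        // 2. aggregate current window
  .count()                                              //    Spark manages this state

counts.writeStream
  .outputMode("update")
  .option("checkpointLocation", "s3://bucket/checkpoints") // 3. offsets and state checkpointed by Spark
  .foreachBatch { (batch: DataFrame, _: Long) =>
    upsertToOlap(batch)                                 // 4. upsert aggregated data
  }
  .start()
  .awaitTermination()
```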
  19. Structured Streaming - known issues & tips ● 3 major issues we had in 2.1.0 (solved in 2.1.1) ● Checkpointing to S3 wasn’t straightforward ○ Tried using EMRFS consistent view ■ Worked for stateless apps ■ Encountered sporadic issues for stateful apps
  20. Structured Streaming - strengths and weaknesses (IMO) ● Strengths include: ○ Running incremental, continuous processing ○ Increased performance (e.g. via the Catalyst SQL optimizer) ○ Massive efforts are invested in it ● Weaknesses were mostly related to maturity
  21. Back to the future - Spark Streaming revived for the “stateful” app use-case 1. Read messages 2. Aggregate current micro-batch 3. Write files 4. Load data OLAP
  22. Cool, so… Why can’t we stop here? ● Significantly underutilized cluster resources = wasted $$$
  23. Cool, so… Why can’t we stop here? (cont.) ● Extreme load on the Kafka brokers’ disks ○ Each micro-batch needs to read ~300M messages, and Kafka can’t store it all in memory ● ConcurrentModificationException when using the Spark Streaming + Kafka 0.10 integration ○ Forced us to use 1 core per executor to avoid it ○ Supposedly solved in 2.4.0 ● We wish we could run it even less frequently ○ Remember - longer micro-batches result in a better aggregation ratio
  24. Enter “streaming” over RDR RDR (or Raw Data Repository) is our Data Lake ● Kafka topic messages are stored on S3 in Parquet format ● RDR Loaders - stateless Spark Streaming applications ● Applications can read data from RDR for various use-cases ○ E.g. analyzing data of the last 30 days Can we leverage our Data Lake and use it as the data source (instead of Kafka)?
  25. How do we “stream” RDR files - producer side RDR Loaders: 1. Read messages 2. Write files to S3 (RDR) 3. Write files’ paths to a dedicated topic, with the files’ paths as messages
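The producer side boils down to publishing each written file’s path to a paths topic. A sketch with the Kafka Java producer, where the topic name, the example S3 path and the publishWrittenFiles helper are illustrative assumptions:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Producer side of "streaming" over RDR: after an RDR Loader writes Parquet
// files to S3, it publishes each file's path as a message on a paths topic.
val props = new Properties()
props.put("bootstrap.servers", "broker:9092")
props.put("key.serializer",   "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](props)

def publishWrittenFiles(writtenPaths: Seq[String]): Unit =
  writtenPaths.foreach { path =>
    producer.send(new ProducerRecord("rdr-file-paths", path)) // 3. write files' paths
  }

// Hypothetical example path - layout is illustrative only
publishWrittenFiles(Seq("s3://rdr/topic=events/dt=2019-05-01/part-0000.parquet"))
producer.flush()
```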
  26. How do we “stream” RDR files - consumer side 1. Read files’ paths 2. Read RDR files from S3 3. Process files
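On the consumer side, a batch Spark app polls the paths topic and then reads the referenced Parquet files from RDR. A sketch under the same caveats (topic name, poll timeout and the process helper are placeholders; spark is an existing SparkSession):

```scala
import java.time.Duration
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.spark.sql.DataFrame

// Hypothetical per-use-case processing - a stand-in for the real logic
def process(df: DataFrame): Unit = ???

val props = new Properties()
props.put("bootstrap.servers", "broker:9092")
props.put("group.id", "rdr-batch-app")
props.put("enable.auto.commit", "false") // commit manually, only after success
props.put("key.deserializer",   "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(Seq("rdr-file-paths").asJava)

val paths = consumer.poll(Duration.ofSeconds(10))  // 1. read files' paths (~1K/hour, tiny messages)
  .asScala.map(_.value).toSeq
if (paths.nonEmpty) {
  val df = spark.read.parquet(paths: _*)           // 2. read RDR files from S3
  process(df)                                      // 3. process files
  consumer.commitSync()                            // commit offsets only after successful processing
}
```

Committing offsets only after processing is what lets a failed run simply re-read the same paths on the next attempt.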
  27. How do we use the new RDR “streaming” infrastructure? 1. Read files’ paths 2. Read RDR files 3. Write files 4. Load data OLAP
  28. Did we solve the aforementioned problems? ● EMR clusters are now transient - no more idle clusters
      | Application type         | Day 1    | Day 2    | Day 3    |
      | Old Spark Streaming app  | $1007.68 | $1007.68 | $1007.68 |
      | “Streaming” over RDR app | $150.08  | $198.73  | $174.68  |
  29. Did we solve the aforementioned problems? (cont.) ● No more extreme load on the Kafka brokers’ disks ○ We still read old messages from Kafka, but now we only read about 1K messages per hour (rather than ~300M) ● The new infra doesn’t depend on the integration of Spark Streaming with Kafka ○ No more weird exceptions... ● We can run the Spark batch applications as (in)frequently as we’d like
  30. Summary ● Initially replaced standalone Java with Spark & Scala ○ Still faced CSV-related issues ● Introduced Spark Streaming & Kafka for “stateless” use-cases ○ Quickly needed to handle stateful use-cases as well ● Tried Spark Streaming for stateful use-cases (via “local” aggregations) ○ Required us to manage the state on our own ● Moved to Structured Streaming (for all use-cases) ○ Cons were mostly related to maturity
  31. Summary (cont.) ● Went back to Spark Streaming (with Druid as OLAP) ○ Performance penalty in Kafka for long micro-batches ○ Under-utilized Spark clusters ○ Etc. ● Introduced “streaming” over our Data Lake ○ Eliminated the Kafka performance penalty ○ Spark clusters are much better utilized = $$$ saved ○ And more...
  32. Want to know more? ● Women in Big Data ○ A world-wide program that aims: ■ To inspire, connect, grow, and champion the success of women in Big Data ■ To grow women’s representation in the Big Data field to over 25% by 2020 ○ Visit the website ● Counting Unique Users in Real-Time: Here’s a Challenge for You! ○ Presented yesterday ● NMC Tech Blog
  33. QUESTIONS
  34. THANK YOU
  35. Structured Streaming - additional slides
  36. Structured Streaming - basic concepts Data stream as an unbounded table: new data in the data stream = new rows appended to an unbounded table
  37. Structured Streaming - basic concepts
  38. Structured Streaming - WordCount example
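The WordCount example this slide refers to (the original image is not preserved here) is essentially the canonical one from the Spark Structured Streaming programming guide: read lines from a socket, split them into words, and maintain a running count per word.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("StructuredWordCount").getOrCreate()
import spark.implicits._

// Unbounded input: each line arriving on the socket is a new row
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Split lines into words and keep a running count per word
val wordCounts = lines.as[String]
  .flatMap(_.split(" "))
  .groupBy("value")
  .count()

// Complete mode: print the full result table after every trigger
wordCounts.writeStream
  .outputMode("complete")
  .format("console")
  .start()
  .awaitTermination()
```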
  39. Structured Streaming - basic terms ● Input sources: ○ File ○ Kafka ○ Socket, Rate (for testing) ● Output modes: ○ Append (default) ○ Complete ○ Update (added in Spark 2.1.1) ○ Different types of queries support different output modes ■ E.g. for non-aggregation queries, Complete mode is not supported, as it is infeasible to keep all unaggregated data in the Result Table ● Output sinks: ○ File ○ Kafka (added in Spark 2.2.0) ○ Foreach ○ Console, Memory (for debugging) ○ Different types of sinks support different output modes
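The interplay of modes and sinks can be illustrated with two hypothetical queries (names and paths are placeholders): a non-aggregation query must use Append, which is also the only mode the file sink supports, while an aggregation query can use Complete, e.g. with the memory sink for debugging.

```scala
// Non-aggregation query -> Append mode, file (Parquet) sink
events.writeStream
  .outputMode("append")                        // only new rows each trigger
  .format("parquet")                           // file sink supports Append only
  .option("path", "s3://bucket/out")
  .option("checkpointLocation", "s3://bucket/cp")
  .start()

// Aggregation query -> Complete mode, memory sink (debugging)
counts.writeStream
  .outputMode("complete")                      // full result table each trigger
  .format("memory")
  .queryName("counts_table")                   // queryable via SQL while running
  .start()
```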
  40. Fault tolerance ● The goal - end-to-end exactly-once semantics ● The means: ○ Trackable sources (i.e. offsets) ○ Checkpointing ○ Idempotent sinks
  41. Monitoring
  42. Structured Streaming in production So we started moving to Structured Streaming
      | Use case           | Previous architecture                               | Old flow                                                 | New architecture               | New flow                                                     |
      | Existing Spark app | Periodic Spark batch job                            | Read Parquet from S3 -> Transform -> Write Parquet to S3 | Stateless Structured Streaming | Read from Kafka -> Transform -> Write Parquet to S3          |
      | Existing Java app  | Periodic standalone Java process (“manual” scaling) | Read CSV -> Transform and aggregate -> Write to RDBMS    | Stateful Structured Streaming  | Read from Kafka -> Transform and aggregate -> Write to RDBMS |
      | New app            | N/A                                                 | N/A                                                      | Stateful Structured Streaming  | Read from Kafka -> Transform and aggregate -> Write to RDBMS |