
From Kafka to BigQuery - Strata Singapore



Describing the journey made by MyHeritage in streaming billions of events every day for data analysis.



  1. FROM KAFKA TO BigQuery. Ofir Sharony. A Guide For Delivering Billions of Daily Events.
  2. Why are we here? As data engineers, our goal is building GREAT data pipelines: delivering data for analysis reliably and efficiently, with the ability to scale. This session is based on this post from the MyHeritage tech blog: https://medium.com/myheritage-engineering
  3. Considerations for building data pipelines: • End2End latency • Data structure • Data partitioning • Reprocessing old data
  4. Case study: the MyHeritage data pipeline. Discover, preserve & share your FAMILY HISTORY. Reveal the secrets from your DNA.
  5. 100,000,000+ Registered Users • 250,000,000+ Web Requests Daily • Billions of change-data-capture events
  6. User behavior and A/B testing
  7. High-level architecture: change-data-capture events from databases, requests from multiple sources, and user activities flow through Apache Kafka into Google BigQuery.
  8. Apache Kafka in a nutshell • Distributed streaming platform • Highly scalable • Fault tolerant • Topics and partitions (producers write to partitioned topics on Kafka brokers; consumers read from them)
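
To make the producer/topic/partition picture concrete, here is a minimal sketch of a Java producer writing a record to a Kafka topic; the broker address, topic name, key, and payload are placeholders chosen for illustration.

      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class SimpleProducer {
          public static void main(String[] args) {
              Properties config = new Properties();
              // Broker address is a placeholder
              config.put("bootstrap.servers", "host:port");
              config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

              try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
                  // Records with the same key land in the same partition of the topic
                  producer.send(new ProducerRecord<>("my_topic", "some-key", "some-value"));
              }
          }
      }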
  9. Google BigQuery in a nutshell • Cloud data-warehouse • Pay as you go • Data sources • Streaming support • Daily partitioning feature, with two partitioning options: by processing time (when the event was observed) or by event time (when the event actually occurred)
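
To make the two partitioning options concrete, here is a small sketch (table name and timestamps are placeholders) that builds the $yyyyMMdd partition decorator used when loading or streaming data into a specific daily partition.

      import java.text.SimpleDateFormat;
      import java.util.Date;
      import java.util.TimeZone;

      public class PartitionDecorators {
          public static void main(String[] args) {
              SimpleDateFormat day = new SimpleDateFormat("yyyyMMdd");
              day.setTimeZone(TimeZone.getTimeZone("UTC"));

              // Processing time: partition by when the record is observed (now)
              String processingTimePartition = "my_table$" + day.format(new Date());

              // Event time: partition by when the event actually occurred
              long eventTimestamp = 1512086400000L; // hypothetical timestamp: 2017-12-01 UTC
              String eventTimePartition = "my_table$" + day.format(new Date(eventTimestamp));

              System.out.println(processingTimePartition); // today's partition
              System.out.println(eventTimePartition);      // my_table$20171201
          }
      }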
  10. Our Data Delivery Journey
  11. Take 1 - Batch loading to Google Cloud Storage
  12. Batch loading to Google Cloud Storage • Two steps for loading • Batch loading to GCS with Secor • Loading from GCS to BigQuery with a loading script: bq load dataset.table$20171201 gs://bucket/topic/dt=2017-12-01/*
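
The second step (loading from GCS into BigQuery) can also be driven from code instead of the bq CLI; below is a hedged sketch using the google-cloud-bigquery Java client, with the dataset, table, partition, and bucket names from the example above used as placeholders.

      import com.google.cloud.bigquery.BigQuery;
      import com.google.cloud.bigquery.BigQueryOptions;
      import com.google.cloud.bigquery.FormatOptions;
      import com.google.cloud.bigquery.Job;
      import com.google.cloud.bigquery.JobInfo;
      import com.google.cloud.bigquery.LoadJobConfiguration;
      import com.google.cloud.bigquery.TableId;

      public class LoadFromGcs {
          public static void main(String[] args) throws InterruptedException {
              BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

              // "$20171201" targets the daily partition, mirroring the bq load command above
              TableId partition = TableId.of("dataset", "table$20171201");
              LoadJobConfiguration loadConfig =
                  LoadJobConfiguration.newBuilder(partition, "gs://bucket/topic/dt=2017-12-01/*")
                      .setFormatOptions(FormatOptions.json()) // assumes newline-delimited JSON files
                      .build();

              Job job = bigquery.create(JobInfo.of(loadConfig)).waitFor();
              if (job.getStatus().getError() != null) {
                  throw new RuntimeException(job.getStatus().getError().toString());
              }
          }
      }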
  13. Tradeoffs - Batch loading with Secor • Partitioning options: by ingestion date (processing time) or by record timestamp (event time) • Concurrency and scaling • No extra cost for streaming • Data formats • End2End latency
  14. Take 2 - Streaming data via BigQuery API
  15. Streaming data via BigQuery API • No preliminary step • Comes with a cost and quota limitation • Requires a DIY Kafka consumer or a streaming framework in front of the BigQuery streaming API
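
For illustration, a minimal sketch of the streaming call itself using the google-cloud-bigquery Java client; the dataset, table, and row fields are placeholders, and in practice this call would sit inside the consumer or streaming application. Every insertAll call counts against the streaming cost and quota mentioned above.

      import com.google.cloud.bigquery.BigQuery;
      import com.google.cloud.bigquery.BigQueryOptions;
      import com.google.cloud.bigquery.InsertAllRequest;
      import com.google.cloud.bigquery.InsertAllResponse;
      import com.google.cloud.bigquery.TableId;
      import java.util.HashMap;
      import java.util.Map;

      public class StreamToBigQuery {
          public static void main(String[] args) {
              BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

              Map<String, Object> row = new HashMap<>();
              row.put("event_name", "page_view");        // placeholder fields
              row.put("event_timestamp", 1512086400000L);

              InsertAllResponse response = bigquery.insertAll(
                  InsertAllRequest.newBuilder(TableId.of("dataset", "table"))
                      .addRow(row)
                      .build());

              if (response.hasErrors()) {
                  System.err.println("Failed rows: " + response.getInsertErrors());
              }
          }
      }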
  16. Kafka Streams streaming framework • Streaming applications on top of Kafka • Simple Java application • Auto-replicated application state • Supports both processing time and event time semantics
  17. Basic Kafka Streams application

      public static void main(String[] args) {
          Properties config = new Properties();
          config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "host:port");
          config.put(StreamsConfig.APPLICATION_ID_CONFIG, "AppId");

          KStreamBuilder builder = new KStreamBuilder();
          // Consume my_topic, keep only the records we care about,
          // and push each one to BigQuery
          builder.stream(Serdes.String(), Serdes.String(), "my_topic")
                 .filter((key, value) -> value.equals("Please, take me to BigQuery"))
                 .foreach((key, value) -> deliverDataToBigQuery(key, value));

          KafkaStreams streams = new KafkaStreams(builder, config);
          streams.start();
      }
  18. Tradeoffs - Streaming with BigQuery API • Low latency • Complete control in your hands • You have to design a robust service • Handle streaming errors, record batching, etc.
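
As one example of the robustness work left to you, here is a hedged sketch of batching several records into a single insertAll request and inspecting per-row errors; the class, the table name, and the follow-up handling (re-queueing or dead-lettering) are assumptions, and the retry policy itself is left out.

      import com.google.cloud.bigquery.BigQuery;
      import com.google.cloud.bigquery.BigQueryError;
      import com.google.cloud.bigquery.InsertAllRequest;
      import com.google.cloud.bigquery.InsertAllResponse;
      import com.google.cloud.bigquery.TableId;
      import java.util.List;
      import java.util.Map;

      public class BatchedStreamer {
          private final BigQuery bigquery;
          private final TableId table;

          BatchedStreamer(BigQuery bigquery, TableId table) {
              this.bigquery = bigquery;
              this.table = table;
          }

          void insertBatch(List<Map<String, Object>> rows) {
              // Send many rows in one streaming request instead of one call per record
              InsertAllRequest.Builder request = InsertAllRequest.newBuilder(table);
              rows.forEach(request::addRow);

              InsertAllResponse response = bigquery.insertAll(request.build());
              if (response.hasErrors()) {
                  // Keyed by the index of the failed row in the request; failed rows
                  // would typically be re-queued or sent to a dead-letter topic
                  Map<Long, List<BigQueryError>> errors = response.getInsertErrors();
                  errors.forEach((index, rowErrors) ->
                      System.err.println("Row " + index + " failed: " + rowErrors));
              }
          }
      }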
  19. Take 3 - Streaming data with Kafka Connect
  20. Streaming with Kafka Connect • Deliver data in and out of Kafka • Supports many connectors out of the box • Simple connector creation - we used WePay’s BigQuery connector • You get redundancy and scaling for free
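
For illustration, a hypothetical sink-connector configuration in the spirit of WePay's BigQuery connector; the connector-specific property names vary between connector versions and should be checked against the connector's documentation, so treat everything below the framework-level keys as an assumption.

      name=bigquery-sink
      connector.class=com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
      tasks.max=3
      topics=my_topic
      # BigQuery-specific settings below are illustrative and version-dependent
      project=my-gcp-project
      datasets=.*=my_dataset
      keyfile=/path/to/service-account.json
      autoCreateTables=true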
  21. Tradeoffs - Streaming with Kafka Connect • Plug and play solution • Leverages Kafka for scaling and auto-failover • Auto creation of BigQuery tables • Partitioning by processing time • Processes data only from Kafka • Topic to table mapping
  22. Take 4 - Streaming data with Apache Beam
  23. Streaming data with Apache Beam • Unified model for batch and streaming • Portable platform • Managed Cloud Dataflow runner (the pipeline runs on a pool of Cloud Dataflow workers)
  24. Streaming to BigQuery with Beam

      String getDestination(ValueInSingleWindow<PageView> element) {
          String table = element.getValue().isBot() ? "bot_pageviews" : "user_pageviews";
          Long eventTimestamp = element.getValue().getEventTimestamp();
          String partition = new SimpleDateFormat("yyyyMMdd")
              .format(new Date(eventTimestamp));
          return String.format("%s:%s.%s$%s", project, dataset, table, partition);
      }
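
For context, a hedged sketch of how a destination function like the one above could be plugged into a Beam pipeline writing to BigQuery with dynamic destinations; the PageView type, its accessors, the TableRow field names, and the assumption that the daily-partitioned tables already exist are all placeholders rather than the talk's actual pipeline.

      import com.google.api.services.bigquery.model.TableRow;
      import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
      import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
      import org.apache.beam.sdk.values.PCollection;
      import org.apache.beam.sdk.values.ValueInSingleWindow;

      // Assumes a PCollection<PageView> already read (e.g. from Kafka) and the
      // getDestination(...) helper from the slide above.
      static void writePageViews(PCollection<PageView> pageViews) {
          pageViews.apply("WriteToBigQuery",
              BigQueryIO.<PageView>write()
                  // Route each element to its per-type, per-day partition
                  .to((ValueInSingleWindow<PageView> element) ->
                      new TableDestination(getDestination(element), null))
                  // Convert each PageView into a BigQuery row; field names are placeholders
                  .withFormatFunction(pageView -> new TableRow()
                      .set("event_timestamp", pageView.getEventTimestamp())
                      .set("is_bot", pageView.isBot()))
                  // Assumes the daily-partitioned tables already exist
                  .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
                  .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
      }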
  25. Tradeoffs - Streaming with Beam • Unified model for multiple runners • Dataflow runner advantages • Can process data from different sources • Dynamic destinations • Extra cost of running managed workers • Not a part of the Kafka ecosystem • New kid on the block
  26. Batch vs. Stream loading summary
  27. Q&A: ofir.sharony@myheritage.com
