
Building Pinterest Real-Time Ads Platform Using Kafka Streams


Building Pinterest Real-Time Ads Platform Using Kafka Streams (Liquan Pei + Boyang Chen, Pinterest) Kafka Summit SF 2018

In this talk, we share our experience building Pinterest’s real-time Ads Platform using Kafka Streams. The real-time budgeting system is the most mission-critical component of the Ads Platform, as it controls how each ad is delivered to maximize user, advertiser and Pinterest value. The system needs to handle impressions at over 50,000 queries per second (QPS), requires less than five seconds of end-to-end latency, and must recover within five minutes during outages. It also needs to scale with the fast growth of Pinterest’s ads business.

The real-time budgeting system is composed of a real-time stream-stream joiner, a real-time spend aggregator and a spend predictor. At Pinterest’s scale, we had to overcome quite a few challenges to make each component work. For example, the stream-stream joiner needs to maintain terabytes of state while supporting fast recovery, and the real-time spend aggregator needs to publish to thousands of ads servers while supporting over one million read QPS. We chose Kafka Streams because it provides millisecond-level latency, scalable event-based processing and easy-to-use APIs. In the process of building the system, we did extensive tuning of RocksDB and the Kafka producer and consumer, and contributed several improvements back to open source Apache Kafka. We are also working on adding remote checkpoints for Kafka Streams state to reduce cold-start time when adding machines to the application. We believe our experience can benefit anyone who wants to build large-scale real-time streaming solutions and deeply understand Kafka Streams.



  1. Building Pinterest Realtime Ads Platform Using Kafka Streams. Liquan Pei, Boyang Chen. Kafka Summit SF 2018
  2. Liquan Pei (liquanpei@pinterest.com), Boyang Chen (bychen@pinterest.com)
  3. Visual Discovery Engine: 250M MAU, 100B Pins created by people, 10B recommendations per day. A great platform for ads.
  4. Ads Platform
  5. Ads Platform
     ● A recommendation system
       ○ Machine learning models
     ● More than a recommendation system
       ○ Budgeting
       ○ New Ads Exploration
  6. Ads Platform
  7. Ads Platform
  8. Budgeting
  9. Budgeting
  10. Stream-Stream Windowed Join
  11. Joiner Topology (diagram: ad_insertion and user_action streams, insertion store and action store, join status output)
  12. Joiner Algorithm (diagram of the numbered join steps across the insertion store and action store)
  13. Joiner Algorithm (diagram, continued)
  14. Realtime Spend Joiner
     ● Large state: TB-scale data
       ○ 24-hour join window
     ● Window store operations
       ○ Put/Get
       ○ Commit
     ● Requirements
       ○ Sub-second latency
       ○ Fast recovery
       ○ Fast scale up/down
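
     For reference, a minimal sketch of what such a stream-stream windowed join looks like in the Kafka Streams DSL, assuming String-keyed topics named ad_insertion and user_action with default serdes configured. The value joiner and output topic are illustrative; only the 24-hour window comes from the slides.

        import java.time.Duration;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.Topology;
        import org.apache.kafka.streams.kstream.JoinWindows;
        import org.apache.kafka.streams.kstream.KStream;

        public class SpendJoinerSketch {
          // Builds the insertion/action join with a 24-hour window (Kafka 2.1+ API).
          static Topology buildTopology() {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> insertions = builder.stream("ad_insertion");
            KStream<String, String> actions = builder.stream("user_action");

            insertions
                .join(actions,
                      (insertion, action) -> insertion + "|" + action, // hypothetical joiner
                      JoinWindows.of(Duration.ofHours(24)))            // 24h join window
                .to("join_status");

            return builder.build();
          }
        }
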
  15. Window Store Internal
     ● num.rolling.segments = number of RocksDB instances.
     ● A RocksDB instance is dropped when it expires.
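
     A sketch of how such a segmented store is declared with the Kafka 2.0-era Stores API; the store name, 24-hour retention and segment count are assumptions. Retention is split across the segments, and an entire RocksDB segment is dropped once it falls outside the window.

        import java.util.concurrent.TimeUnit;
        import org.apache.kafka.common.serialization.Serdes;
        import org.apache.kafka.streams.state.StoreBuilder;
        import org.apache.kafka.streams.state.Stores;
        import org.apache.kafka.streams.state.WindowStore;

        public class ActionStoreSketch {
          static StoreBuilder<WindowStore<String, Long>> actionStore() {
            return Stores.windowStoreBuilder(
                Stores.persistentWindowStore(
                    "action-store",               // hypothetical store name
                    TimeUnit.HOURS.toMillis(24),  // retention = 24h join window
                    3,                            // segments = RocksDB instances
                    TimeUnit.HOURS.toMillis(24),  // window size
                    false),                       // retainDuplicates
                Serdes.String(),
                Serdes.Long());
          }
        }
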
  16. Window Store Operations
  17. How to achieve sub-second latency?
     ● Read/write performance
       ○ Use point queries for fast lookups
         ■ fetch(key, timeFrom, timeTo);
         ■ fetch(key, windowStartTime); [>= Kafka 2.0.0]
       ○ Increase block cache size
       ○ Reduce action state store size
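
     The difference between the two lookups, as a sketch of helpers a processor with access to the window store might use; the key and timestamps are illustrative.

        import org.apache.kafka.streams.state.WindowStore;
        import org.apache.kafka.streams.state.WindowStoreIterator;

        public class WindowLookupSketch {
          // Range scan: touches every segment overlapping [timeFrom, timeTo];
          // here we return the first matching window's value.
          static Long rangeLookup(WindowStore<String, Long> store, String adId,
                                  long timeFrom, long timeTo) {
            try (WindowStoreIterator<Long> iter = store.fetch(adId, timeFrom, timeTo)) {
              return iter.hasNext() ? iter.next().value : null;
            }
          }

          // Point query (Kafka 2.0.0+): a single get against the one segment
          // containing windowStartTime, much cheaper for the joiner's pattern.
          static Long pointLookup(WindowStore<String, Long> store, String adId,
                                  long windowStartTime) {
            return store.fetch(adId, windowStartTime);
          }
        }
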
  18. Window Store Operations
  19. Kafka Streams Commit
     ● Each commit triggers a RocksDB flush to ensure data is persistent on disk.
     ● Each RocksDB flush creates an SST file.
     ● An accumulating number of SST files triggers compaction.
     ● Tune commit.interval.ms.
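
     A sketch of the corresponding knob; the 30-second value is an assumption, illustrating the trade between flush frequency (and SST churn) and how much is replayed after a failure.

        import java.util.Properties;
        import org.apache.kafka.streams.StreamsConfig;

        public class CommitConfigSketch {
          static Properties commitTuning() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "realtime-spend-joiner");
            // Fewer commits => fewer RocksDB flushes => fewer small SST files
            // to compact, at the cost of replaying more records after a crash.
            props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 30_000);
            return props;
          }
        }
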
  20. Fast recovery
     ● State rebalance
  21. Fast recovery
     ● A rolling restart can trigger multiple rebalances.
     ● State shuffling is expensive.
     Approaches:
     ● Recover faster:
       ○ Increase max.poll.records for the restore consumer (KIP-276)
       ○ RocksDB window store batch recovery (KAFKA-7023)
     ● Single rebalance:
       ○ Wait for all members to be ready = increase session.timeout.ms
       ○ Avoid rebalances across restarts: static membership (KIP-345)
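
     The two knobs above, sketched with the KIP-276 per-consumer config prefixes (Kafka 2.0+); the concrete values are assumptions.

        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerConfig;
        import org.apache.kafka.streams.StreamsConfig;

        public class RecoveryConfigSketch {
          static Properties recoveryTuning() {
            Properties props = new Properties();
            // Recover faster: larger fetch batches for the restore consumer only.
            props.put(StreamsConfig.restoreConsumerPrefix(
                          ConsumerConfig.MAX_POLL_RECORDS_CONFIG),
                      10_000);
            // Single rebalance: give all members time to rejoin after a rolling
            // restart before partitions (and state) are shuffled.
            props.put(StreamsConfig.mainConsumerPrefix(
                          ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG),
                      60_000);
            return props;
          }
        }
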
  22. Fast scale down/up
     ● Save state in remote storage
       ○ S3
       ○ HDFS
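
     A hypothetical sketch of that idea, pairing RocksDB's checkpoint API with an S3 upload so a fresh instance can bootstrap from remote storage instead of replaying the changelog. The bucket, key layout and all surrounding wiring are assumptions; this is not a shipped Kafka Streams feature.

        import java.io.File;
        import com.amazonaws.services.s3.AmazonS3;
        import org.rocksdb.Checkpoint;
        import org.rocksdb.RocksDB;
        import org.rocksdb.RocksDBException;

        public class RemoteCheckpointSketch {
          // Snapshot the live RocksDB (hard links, cheap) and ship files to S3.
          static void checkpointToS3(RocksDB db, String checkpointDir,
                                     AmazonS3 s3, String bucket) throws RocksDBException {
            try (Checkpoint checkpoint = Checkpoint.create(db)) {
              checkpoint.createCheckpoint(checkpointDir);
            }
            File[] files = new File(checkpointDir).listFiles();
            if (files == null) {
              return;
            }
            for (File f : files) {
              s3.putObject(bucket, "joiner-state/" + f.getName(), f); // hypothetical key layout
            }
          }
        }
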
  23. Budgeting
  24. Windowed Aggregation
  25. Aggregator
     ● Utilizes the Streams DSL API
     ● Requirements
       ○ End-to-end sub-second latency, from user action to ads serving
       ○ Thousands of ads serving machines need to consume this data
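
     A minimal sketch of the DSL aggregation, assuming a String-keyed input topic of per-event spend with suitable serdes configured; the topic name and one-minute window are illustrative.

        import java.time.Duration;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.kstream.KTable;
        import org.apache.kafka.streams.kstream.TimeWindows;
        import org.apache.kafka.streams.kstream.Windowed;

        public class SpendAggregatorSketch {
          static KTable<Windowed<String>, Double> buildAggregate(StreamsBuilder builder) {
            return builder.<String, Double>stream("joined_spend_events")
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1))) // Kafka 2.1+ API
                .reduce(Double::sum); // running spend per ad per window
          }
        }
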
  26. Output to a compacted topic
     Pros:
       ○ Fast correction
       ○ Logic simplicity
     Cons:
       ○ High fanout, broker saturation
       ○ Replay can be long
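
     A sketch of how the compacted output topic could be created, so that consumers replaying it always converge to the latest spend per key. The topic name, partition count and replication factor are assumptions; in the topology the aggregate would be written with something like spendByAd.toStream().to("ads_spend").

        import java.util.Map;
        import java.util.Properties;
        import java.util.Set;
        import org.apache.kafka.clients.admin.AdminClient;
        import org.apache.kafka.clients.admin.NewTopic;

        public class CompactedTopicSketch {
          static void createSpendTopic(Properties adminProps) throws Exception {
            try (AdminClient admin = AdminClient.create(adminProps)) {
              // Log compaction keeps only the latest value per key, turning the
              // topic into a replayable partitioned key-value store.
              NewTopic spendTopic = new NewTopic("ads_spend", 32, (short) 3)
                  .configs(Map.of("cleanup.policy", "compact"));
              admin.createTopics(Set.of(spendTopic)).all().get();
            }
          }
        }
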
  27. Output to a signal topic
     Pros:
       ○ Very small volume
       ○ Logic consolidation
     Cons:
       ○ Event-based: no way to reset
       ○ Time-based: expensive batch operation
  28. Streaming in budget change
     Pros:
       ○ Unblocks signal reset without a batch update
     Cons:
       ○ Consistency guarantees
       ○ Strong ordering guarantees
  29. Budgeting Summary
     ● Low-level metrics are critical, especially in the storage layer.
     ● Large state shuffling is bad.
     ● A compacted topic works as a partitioned key-value store.
     ● Unified solution for serving stream output.
  30. New Ads Exploration
     ● When a new ad is created, the Ads Platform doesn't yet know how users engage with it on different surfaces.
     ● The faster the Ads Platform learns about the performance of a newly created ad, the better value we provide to the user.
     ● Balance between exploiting good ads and exploring new ads.
     ● Solution: add a boosting factor to new ads to increase their probability of winning the auction.
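
     One hypothetical shape of such a boost, decaying toward no boost as impressions accumulate; the functional form and constants are illustrative only, not Pinterest's formula.

        public class NewAdBoostSketch {
          // Boost is largest for brand-new ads and decays linearly to 1.0 once
          // the ad has collected 'targetImpressions' worth of engagement data.
          static double boostFactor(long pastImpressions, long targetImpressions) {
            double remaining = 1.0 - (double) pastImpressions / targetImpressions;
            return 1.0 + Math.max(0.0, remaining);
          }

          static double boostedAuctionScore(double baseScore, long pastImpressions) {
            return baseScore * boostFactor(pastImpressions, 10_000); // hypothetical target
          }
        }
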
  31. New Ads Exploration
  32. New Ads Exploration
     ● Need to compute <ad id, past X day impressions>.
     ● The result is published to S3 for serving.
     ● Backfilling is needed.
       ○ Exactly the same logic as normal processing.
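
     A sketch of the <ad id, past X day impressions> computation as a hopping windowed count, here with X = 3 days advancing daily. The topic name and window sizing are assumptions; publishing to S3 happens downstream of this table.

        import java.time.Duration;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.kstream.KTable;
        import org.apache.kafka.streams.kstream.TimeWindows;
        import org.apache.kafka.streams.kstream.Windowed;

        public class NewAdImpressionsSketch {
          static KTable<Windowed<String>, Long> impressionsByAd(StreamsBuilder builder) {
            return builder.<String, String>stream("ad_impressions")
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofDays(3))        // past X = 3 days
                                       .advanceBy(Duration.ofDays(1))) // refreshed daily
                .count();
          }
        }
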
  33. Backfilling
  34. Backfilling
  35. Backfilling
  36. Stream Processing Patterns
  37. Stream Processing Patterns
  38. Stream Processing Patterns
  39. Stream Platform
     ● Usability
       ○ Users should focus only on business logic.
       ○ Support for more state store backends.
       ○ A type system for easier code sharing.
     ● Scalability
       ○ Applications should be able to handle more QPS with more machines.
     ● Fault tolerance
       ○ Applications should recover within X minutes.
       ○ Applications should support code and state rollback.
     ● Developer velocity
       ○ The platform should provide standard ways of backfilling.
     ● Debuggability
       ○ The platform should provide standard ways of exposing debug information so that it is queryable.
  40. Contributions
     ● KIP-91: Add delivery.timeout.ms to the Kafka producer
     ● KIP-245: Replace StreamsConfig with Properties
     ● KIP-276: Add config prefix for different consumers
     ● KIP-300 (ongoing): Add windowed KTable API
     ● KIP-345 (ongoing): Reduce consumer rebalances through static membership
     ● KAFKA-6896: Export producer and consumer metrics in Kafka Streams
     ● KAFKA-7023: Move prepareForBulkLoad() call after customized RocksDBConfigSetter
     ● KAFKA-7103: Use bulk loading for RocksDBSegmentedBytesStore during init
     ● RocksDB metrics library
  41. Acknowledgements
     ● Guozhang and Matthias from Confluent
     ● Yu Yang, Zack Drach and Shawn Nguyen from Pinterest
     ● The Ads Realtime Team
  42. Citations
     ● Search ads on Pinterest: https://business.pinterest.com/en/blog/introducing-search-ads-on-pinterest
     ● Ads demo: https://bn.co/pinterest-promoted-pin-campaign/
     ● Stencils source: https://stenciltown.omnigroup.com/categories/all/
     ● RocksDB tuning: https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
     ● “Compact, delete” topic: https://issues.apache.org/jira/browse/KAFKA-4015
     ● Monitoring your Kafka Streams application: https://docs.confluent.io/current/streams/monitoring.html
     ● Join support in Kafka Streams: https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#streams-developer-guide-dsl-joins
