
Leveraging Kafka Eco-System For Redis Streams: Sripathi Krishnan

Published at RedisConf19

Published in: Technology

  1. PRESENTED BY Leveraging Kafka Eco-System for Redis Streams
  2. About Me ● Founder, RDBTools ● CTO @ HashedIn ● Long-time Redis user ● Speaks regularly at Redis conferences ● @srithedabbler ● sripathikrishnan
  3. Image Credit: Universal Travel Adapter by Travel Inspira on Amazon.com
  4. Can you use Kafka APIs and still read/write to Redis Streams?
  5. Agenda: 1 Crash course on Kafka, 2 Crash course on Redis Streams, 3 Marrying the two, 4 Demo!
  6. Crash Course in Kafka
  7. [Diagram: producers publishing to the iot-stream-live topic, consumers reading from it]
  8. [Diagram: iot-stream-live divided into partition-0, partition-1 and partition-2]
  9. [Diagram: same topic; the producer chooses the partition based on record characteristics]
  10. [Diagram: two consumer groups, anomaly-engine and billing-engine, each reading the same iot-stream-live topic]
  11. [Diagram: within a consumer group, consumer-0, consumer-1 and consumer-2 are each assigned one partition]
  12. [Diagram: the same one-partition-per-consumer assignment, shown for both consumer groups]
  13. [Diagram: after a rebalance, consumer-1 handles both partition-0 and partition-1]
  14. Crash Course in Redis Streams
  15. Copyright © 2018 HashedIn Technologies Pvt. Ltd. Redis Streams: an append-only log ordered by time. Each entry has an id of the form <unixtime>-<seq> (e.g. 123-0, 125-0, 130-0, 130-1) and holds field-value pairs such as Server: 101, CPU: 83, Memory: 90.
  16. XADD appends a new record at the end. For example, XADD server-metrics * server 201 cpu 45 memory 22 returns the id of the new entry, 133-0 (actually a value like 1506872463535-0).
  17. XRANGE fetches a range of records. For example, XRANGE server-metrics 123-0 130-0 returns the entries with ids 123-0, 125-0 and 130-0 (the range is inclusive; 130-1 falls outside it).
  18. XREAD fetches records as they arrive. For example, XREAD BLOCK 5000 STREAMS server-metrics $ returns new records if any are available, or blocks for up to 5 seconds.
  19. Marrying Kafka & Redis Streams
  20. Adapter for Kafka Producer. Convert this: public Future<RecordMetadata> send(ProducerRecord<K, V> record) into this: XADD <topic>.<partition> * _key <key bytes> _value <value bytes> ...
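A minimal sketch of that producer-side translation (an illustrative helper, not the actual redkaf code; the _key/_value field names follow the slide's convention):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: turn a Kafka-style record (topic, partition, key,
// value) into the argument list of an XADD command. The stream key is
// <topic>.<partition>, and "*" asks Redis to assign the entry id.
public class XaddCommand {
    public static List<String> build(String topic, int partition,
                                     String key, String value) {
        return Arrays.asList(
            "XADD",
            topic + "." + partition,  // 1 Kafka partition == 1 Redis stream
            "*",                      // let Redis generate the entry id
            "_key", key,
            "_value", value);
    }
}
```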
  21. 1 Kafka Partition = 1 Redis Stream. Redis Streams have no concept of partitions, so to match Kafka's API we map 1 Kafka partition to 1 Redis stream. An extra dictionary per Kafka topic stores metadata about the topic.
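The partition-to-stream mapping can be sketched as a pair of pure helpers (hypothetical names, assuming the <topic>.<partition> key convention from the previous slide):

```java
// Illustrative sketch of the 1-partition == 1-stream mapping: derive the
// Redis key for a partition, and recover topic/partition from a key on
// the consumer side.
public class StreamKeys {
    public static String streamKey(String topic, int partition) {
        return topic + "." + partition;
    }

    // Split on the LAST dot, since Kafka topic names may themselves
    // contain dots.
    public static String topicOf(String streamKey) {
        return streamKey.substring(0, streamKey.lastIndexOf('.'));
    }

    public static int partitionOf(String streamKey) {
        return Integer.parseInt(streamKey.substring(streamKey.lastIndexOf('.') + 1));
    }
}
```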
  22. Format of Records. Kafka: primary key (topic, partition, offset); fields: timestamp, headers, key (bytes), value (bytes). Redis: primary key (redis key, <timestamp>-<counter>); fields: a dictionary of key=value pairs.
  23. Mapping Offsets to Entry Ids. A Kafka offset is a monotonically increasing, sequential long, e.g. 3292. A Redis entry id is a combination of two longs, <timestamp>-<counter>, e.g. 1506872463535-3. There is no lossless way to combine two longs into one :-(
  24. ... but you can make some assumptions: (1) Kafka offsets don't have to be sequential (they must always increase, but gaps are fine). (2) Reduced-precision timestamps are probably fine (support for ~100 years instead of all eternity). (3) Supporting ~2M entries per stream per millisecond is enough. With these assumptions, we can translate one into the other without loss of data!
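Under those assumptions the packing is plain bit arithmetic. A hypothetical codec (not necessarily what redkaf itself does): 21 low bits hold the per-millisecond counter (~2M entries), leaving 42 bits of millisecond timestamp (~139 years) in a positive signed 64-bit Kafka offset. The resulting offsets are increasing but not sequential, which assumption (1) permits:

```java
// Hypothetical codec illustrating the assumptions above: pack a Redis
// entry id (millisecond timestamp + per-millisecond counter) into one
// Kafka-style long offset, and unpack it again without loss.
public class OffsetCodec {
    static final int COUNTER_BITS = 21;                     // ~2M entries/ms
    static final long COUNTER_MASK = (1L << COUNTER_BITS) - 1;

    public static long toOffset(long timestampMs, long counter) {
        return (timestampMs << COUNTER_BITS) | (counter & COUNTER_MASK);
    }

    public static long timestampOf(long offset) {
        return offset >>> COUNTER_BITS;
    }

    public static long counterOf(long offset) {
        return offset & COUNTER_MASK;
    }
}
```

Because the timestamp occupies the high bits, offset order matches entry-id order, so Kafka's "offsets always increase" contract is preserved.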
  25. Alternatively, you can generate stream ids manually. This involves maintaining a counter in Redis and using it to generate sequential ids.
  26. Adapter for Kafka Consumer. Convert this: public ConsumerRecords<K, V> poll(final Duration timeout) into this: XREAD COUNT 20 BLOCK <timeout> STREAMS <topic1>.<partition1> ... <offset1>
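A sketch of that consumer-side translation (illustrative helper only). One subtlety: XREAD takes all stream keys first and then all ids, in matching order, so the sketch uses an insertion-ordered map to keep the two halves aligned:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;

// Illustrative sketch: build the XREAD argument list for the set of
// assigned streams and their last-seen entry ids. XREAD expects every
// stream key first, then every id, in the same order.
public class XreadCommand {
    public static List<String> build(int count, long blockMs,
                                     LinkedHashMap<String, String> lastIds) {
        List<String> cmd = new ArrayList<>(Arrays.asList(
            "XREAD",
            "COUNT", Integer.toString(count),
            "BLOCK", Long.toString(blockMs),
            "STREAMS"));
        cmd.addAll(lastIds.keySet());   // <topic1>.<partition1> ...
        cmd.addAll(lastIds.values());   // <offset1> ...
        return cmd;
    }
}
```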
  27. Demo!
  28. Github Repository: https://github.com/hashedin/redkaf. Warning: Experimental! Don't try this in production, yet.
  29. Thank you!
