Adapter for Kafka Producer
public Future<RecordMetadata> send(ProducerRecord<K, V> record)
xadd <topic>.<partition> * _key <key bytes> _value <value bytes>...
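A minimal sketch of the send() side using the Jedis client (the stream naming, the _key/_value field names, and String serialization are assumptions for illustration; a real adapter would run the configured serializers and wrap the result in a Future<RecordMetadata>):

import java.util.LinkedHashMap;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntryID;

// Sketch: forward a Kafka-style (topic, partition, key, value) to XADD.
class RedisProducerAdapter {
    private final Jedis jedis;

    RedisProducerAdapter(Jedis jedis) { this.jedis = jedis; }

    StreamEntryID send(String topic, int partition, String key, String value) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("_key", key);
        fields.put("_value", value);
        // NEW_ENTRY is the '*' id: Redis assigns <timestamp>-<counter>.
        return jedis.xadd(topic + "." + partition, StreamEntryID.NEW_ENTRY, fields);
    }
}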
1 Kafka Partition = 1 Redis Stream
- Redis Streams have no concept of partitions.
- So, to match Kafka's API, we make 1 Kafka Partition == 1 Redis Stream.
- An extra dictionary per Kafka topic stores metadata about the topic (see the sketch below).
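One way that dictionary could look is a plain Redis hash per topic; the key prefix and field names below are hypothetical, not from the talk:

import redis.clients.jedis.Jedis;

// Hypothetical metadata layout: one Redis hash per topic, e.g.
//   HSET _kafka.topics.orders partitions 4
class TopicMetadata {
    static void createTopic(Jedis jedis, String topic, int partitions) {
        jedis.hset("_kafka.topics." + topic, "partitions", String.valueOf(partitions));
    }

    static int partitionCount(Jedis jedis, String topic) {
        return Integer.parseInt(jedis.hget("_kafka.topics." + topic, "partitions"));
    }
}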
Format of Records
A Kafka record is identified by (topic, partition, offset).
A Redis entry is identified by (redis key, <timestamp>-<counter>).
The body of an entry is a dictionary of key=value pairs.
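For example (hypothetical values): the Kafka record at (orders, 3, offset 42) would live in the Redis stream orders.3 as an entry with an id like 1639000000000-5 and the body {_key: <key bytes>, _value: <value bytes>}.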
Mapping Offsets to Entry Ids
A Kafka offset is a monotonically increasing, sequential long.
A Redis entry id is a combination of two longs: <timestamp>-<counter>.
There is no lossless way to combine two longs into one :-(
… but you can make some assumptions
1. Kafka offsets don't have to be sequential
(they must always increase, but gaps are fine)
2. Reduced-precision timestamps are probably fine
(support for ~100 years instead of all eternity)
3. Supporting ~2M entries per millisecond in the same stream is enough
With these assumptions, we can translate one to the other without loss of data!
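As a concrete sketch: ~42 bits of milliseconds past a custom epoch cover roughly 139 years, ~21 bits of sequence allow ~2M entries per millisecond, and 42 + 21 = 63 bits fit in one non-negative Java long. The bit split and the epoch below are illustrative choices, not prescribed by Kafka or Redis:

// Pack a Redis entry id (<millis>-<seq>) into one Kafka-style offset.
final class OffsetCodec {
    // Assumption 2: count milliseconds from a recent custom epoch;
    // 42 bits of range covers ~139 years.
    static final long CUSTOM_EPOCH_MS = 1_546_300_800_000L; // 2019-01-01T00:00:00Z
    // Assumption 3: 21 bits of sequence allows ~2M entries per millisecond.
    static final int SEQ_BITS = 21;
    static final long SEQ_MASK = (1L << SEQ_BITS) - 1;

    static long toOffset(long entryMillis, long entrySeq) {
        // 42 + 21 = 63 bits: a non-negative long that increases
        // monotonically with gaps, which is exactly assumption 1.
        return ((entryMillis - CUSTOM_EPOCH_MS) << SEQ_BITS) | (entrySeq & SEQ_MASK);
    }

    static String toEntryId(long offset) {
        long millis = (offset >>> SEQ_BITS) + CUSTOM_EPOCH_MS;
        long seq = offset & SEQ_MASK;
        return millis + "-" + seq;
    }
}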
Alternatively, you can generate stream ids manually.
This involves maintaining a counter in Redis and
using it to generate sequential ids (see the sketch below).
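A sketch of that alternative, using a small Lua script so the INCR and the XADD run atomically (key names are illustrative; it assumes all writers to the stream go through this script):

import redis.clients.jedis.Jedis;

// Sketch: generate sequential explicit entry ids (0-1, 0-2, ...) from a
// counter key, so the Kafka offset is simply the counter value.
class SequentialIdProducer {
    private static final String SCRIPT =
        "local seq = redis.call('INCR', KEYS[2]) " +
        "return redis.call('XADD', KEYS[1], '0-' .. seq, unpack(ARGV))";

    static Object send(Jedis jedis, String stream, String key, String value) {
        return jedis.eval(SCRIPT, 2, stream, stream + ".seq", "_key", key, "_value", value);
    }
}

Offsets then map 1:1 to the counter, at the cost of losing the timestamp information Redis would otherwise encode in the id.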
Adapter for Kafka Consumer
public ConsumerRecords<K, V> poll(final Duration timeout)
xread count 20 block <timeout> streams <topic1>.<partition1> …
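A minimal sketch of the poll() side with Jedis (4.x-style XReadParams; the per-stream cursor map is an assumption, and a real adapter would convert the entries into ConsumerRecords<K, V>):

import java.time.Duration;
import java.util.List;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;
import redis.clients.jedis.params.XReadParams;

// Sketch: poll() as a blocking XREAD across all assigned streams,
// resuming each stream from the last id seen.
class RedisConsumerAdapter {
    private final Jedis jedis;
    private final Map<String, StreamEntryID> lastSeen; // stream key -> last id

    RedisConsumerAdapter(Jedis jedis, Map<String, StreamEntryID> lastSeen) {
        this.jedis = jedis;
        this.lastSeen = lastSeen;
    }

    List<Map.Entry<String, List<StreamEntry>>> poll(Duration timeout) {
        List<Map.Entry<String, List<StreamEntry>>> batch = jedis.xread(
            XReadParams.xReadParams().count(20).block((int) timeout.toMillis()),
            lastSeen);
        if (batch != null) { // null means the block timed out with no entries
            for (Map.Entry<String, List<StreamEntry>> perStream : batch) {
                List<StreamEntry> entries = perStream.getValue();
                // Advance the cursor so the next poll() continues from here.
                lastSeen.put(perStream.getKey(), entries.get(entries.size() - 1).getID());
            }
        }
        return batch;
    }
}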