• Producer
• Consumer
• Broker
• Topic
• Record
• Partitions
• Replications
• Zookeeper
$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.1.properties
$ bin/kafka-topics.sh --create --topic my-kafka-topic --zookeeper localhost:2181 --partitions 3 --replication-factor 2
$ bin/kafka-console-producer.sh --broker-list localhost:9093,localhost:9094,localhost:9095 --topic my-kafka-topic
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic my-kafka-topic --from-beginning
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
$ bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
1. Fire and Forget
Producer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record);
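The producer snippets in this section all assume a pre-built `props` object. A minimal sketch of that configuration, reusing the three broker addresses from the console examples above (the exact addresses and serializer choices are assumptions, not part of the original snippets):

```java
import java.util.Properties;

public class ProducerProps {
    // Hypothetical configuration matching the three-broker setup above.
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093,localhost:9094,localhost:9095");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("bootstrap.servers"));
    }
}
```

`bootstrap.servers`, `key.serializer`, and `value.serializer` are the minimum a `KafkaProducer<String, String>` needs; everything else falls back to defaults.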
2. Synchronous Producer
Producer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
try {
    RecordMetadata metadata = producer.send(record).get();
    System.out.println("Message is sent to Partition no " + metadata.partition() + " and offset " + metadata.offset());
    System.out.println("SynchronousProducer Completed with success.");
} catch (Exception e) {
    e.printStackTrace();
    System.out.println("SynchronousProducer failed with an exception");
} finally {
    producer.close();
}
3. Asynchronous Producer
Producer<String, String> producer = new KafkaProducer<>(props);
ProducerRecord<String, String> record = new ProducerRecord<>(topicName, key, value);
producer.send(record, new MyProducerCallback());
System.out.println("AsynchronousProducer call completed");
producer.close();

class MyProducerCallback implements Callback {
    @Override
    public void onCompletion(RecordMetadata recordMetadata, Exception e) {
        if (e != null)
            System.out.println("AsynchronousProducer failed with an exception");
        else
            System.out.println("AsynchronousProducer call Success:");
    }
}
[Diagram] acks=0: the producer does not wait for any acknowledgment from the Kafka cluster (fire and forget).
[Diagram] acks=1: the producer waits for the partition leader to write the record; RecordMetadata (offset, timestamp) is returned.
[Diagram] acks=all: the producer waits for the leader and all in-sync follower replicas; RecordMetadata (offset, timestamp) is returned.
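The acknowledgment level is chosen per producer via the standard `acks` configuration property; a minimal sketch (the helper method name is made up for illustration):

```java
import java.util.Properties;

public class AckConfig {
    // Returns producer properties with the requested ack level:
    // "0" = fire-and-forget, "1" = leader ack only, "all" = leader + in-sync replicas.
    static Properties withAcks(String level) {
        Properties props = new Properties();
        props.put("acks", level);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(withAcks("all").getProperty("acks"));
    }
}
```

`acks=all` gives the strongest durability guarantee at the cost of latency; `acks=0` is the fastest but can silently lose records.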
1. Auto-commit (default)
enable.auto.commit = true
auto.commit.interval.ms = 5000 (default)
2. Manual-commit
consumer.commitSync();
consumer.commitAsync(callback);
https://github.com/LearningJournal/ApacheKafkaTutorials/blob/master/ConsumerExample/ManualConsumer.java
3. Custom offset-management
https://github.com/LearningJournal/ApacheKafkaTutorials/tree/master/ConsumerExample
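Manual commit only takes effect if auto-commit is switched off in the consumer configuration. A minimal sketch of that setup (the broker address and group name are hypothetical; the property keys are standard Kafka consumer config):

```java
import java.util.Properties;

public class ManualCommitProps {
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093");
        props.put("group.id", "my-consumer-group"); // hypothetical group name
        // Disable auto-commit so offsets are committed only via
        // consumer.commitSync() / consumer.commitAsync(callback).
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("enable.auto.commit"));
    }
}
```

With this in place, the poll loop decides when an offset is durable: `commitSync()` blocks until the commit succeeds, while `commitAsync(callback)` returns immediately and reports the result through the callback.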
Kafka Streams is a client library for building
applications and microservices, where the input
and output data are stored in Kafka clusters. It
combines the simplicity of writing and deploying
standard Java and Scala applications on the client
side with the benefits of Kafka's server-side cluster
technology.
https://kafka.apache.org/documentation/streams/
Schema Registry
It provides a RESTful interface for storing and retrieving Avro schemas. It stores a versioned history of all schemas, provides multiple compatibility settings, and allows schemas to evolve according to the configured compatibility setting.
https://github.com/LearningJournal/ApacheKafkaTutorials
Connectors
These supported connectors are packaged and available natively with Confluent Platform:
• Kafka Connect ActiveMQ Connector
• FileStream Connectors
• Kafka Connect HDFS
• Kafka Connect JDBC Connector
• Confluent Replicator
• Kafka Connect S3
• Kafka Connect Elasticsearch Connector
• Kafka Connect IBM MQ Connector
• Kafka Connect JMS Connector
KSQL
It provides an easy-to-use yet powerful interactive SQL
interface for stream processing on Kafka, without the
need to write code in a programming language such as
Java or Python. KSQL is scalable, elastic, fault-tolerant,
and real-time. It supports a wide range of streaming
operations, including data filtering, transformations,
aggregations, joins, windowing, and sessionization.
https://docs.confluent.io/current/ksql/docs/tutorials/examples.html