5. Execute Schema Registry
• From the Confluent home directory
sh bin/schema-registry-start etc/schema-registry/schema-registry.properties
• Open port 8081 in the firewall
firewall-cmd --permanent --zone=public --add-port=8081/tcp
firewall-cmd --reload
8. Configuration (1/2)
• {confluent home directory}/etc/kafka/connect-standalone.properties
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
plugin.path=/root/confluent-7.3.0/share/confluentinc-kafka-connect-jdbc-10.6.0
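The standalone run in the next step also reads etc/kafka/sink-postgres.properties, which is not shown in the deck. A minimal sketch, using the employee-sink connector name and employee topic that appear in the logs below; the connection URL, user, and password are placeholders for your own PostgreSQL instance:

```properties
name=employee-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=employee
# Placeholder connection details -- replace with your own database
connection.url=jdbc:postgresql://localhost:5432/mydb
connection.user=postgres
connection.password=postgres
# The logs show the employee table already exists; set to true to let the connector create it
auto.create=false
```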
10. Execution
• sh bin/connect-standalone etc/kafka/connect-standalone.properties etc/kafka/sink-postgres.properties
[2022-12-08 07:42:54,056] INFO [employee-sink|task-0] [Consumer clientId=connector-consumer-employee-sink-0, groupId=connect-employee-sink] Resetting offset for partition employee-0 to
position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[slave2.kopo:9092 (id: 3 rack: null)], epoch=0}}.
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)
[2022-12-08 07:47:56,035] INFO [employee-sink|task-0] Attempting to open connection #1 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)
[2022-12-08 07:47:56,456] INFO [employee-sink|task-0] Maximum table name length for database is 63 bytes (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect:130)
[2022-12-08 07:47:56,456] INFO [employee-sink|task-0] JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter:56)
[2022-12-08 07:47:56,531] INFO [employee-sink|task-0] Checking PostgreSql dialect for existence of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:586)
[2022-12-08 07:47:56,548] INFO [employee-sink|task-0] Using PostgreSql dialect TABLE "employee" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:594)
[2022-12-08 07:47:56,578] INFO [employee-sink|task-0] Checking PostgreSql dialect for type of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:880)
[2022-12-08 07:47:56,589] INFO [employee-sink|task-0] Setting metadata for table "employee" to Table{name='"employee"', type=TABLE columns=[Column{'idx', isPrimaryKey=false, allowsNull=true,
sqlType=int4}, Column{'name', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions:64)
[2022-12-08 07:51:53,759] INFO [employee-sink|task-0] [Consumer clientId=connector-consumer-employee-sink-0, groupId=connect-employee-sink] Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient:937)
11. Test
• Consumer shell ({confluent home dir}/bin)
./kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic employee
• Producer shell ({confluent home dir}/bin)
./kafka-avro-console-producer --broker-list localhost:9092 --topic employee --property value.schema='{"type":"record","name":"kafka_employee","fields":[{"name":"idx","type":"int"},{"name":"name","type":"string"}]}'
{"idx":1,"name":"hwang"}
{"idx":2,"name":"kim"}
(the sink connector then writes these records to the employee table in the DB)
14. Execution
• sh bin/connect-distributed etc/kafka/connect-distributed.properties
[2022-12-08 08:09:53,943] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:312)
[2022-12-08 08:09:53,944] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
[2022-12-08 08:09:54,419] INFO [sink-jdbc-postgre|task-0] Attempting to open connection #1 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)
[2022-12-08 08:09:55,020] INFO [sink-jdbc-postgre|task-0] Maximum table name length for database is 63 bytes (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect:130)
[2022-12-08 08:09:55,020] INFO [sink-jdbc-postgre|task-0] JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter:56)
[2022-12-08 08:09:55,142] INFO [sink-jdbc-postgre|task-0] Checking PostgreSql dialect for existence of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:586)
[2022-12-08 08:09:55,168] INFO [sink-jdbc-postgre|task-0] Using PostgreSql dialect TABLE "employee" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:594)
[2022-12-08 08:09:55,201] INFO [sink-jdbc-postgre|task-0] Checking PostgreSql dialect for type of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:880)
[2022-12-08 08:09:55,216] INFO [sink-jdbc-postgre|task-0] Setting metadata for table "employee" to Table{name='"employee"', type=TABLE columns=[Column{'idx', isPrimaryKey=false,
allowsNull=true, sqlType=int4}, Column{'name', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions:64)
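The etc/kafka/connect-distributed.properties file used above is not shown in the deck. Unlike standalone mode, distributed mode stores offsets, configs, and statuses in Kafka topics shared by the worker group rather than in a local file. A minimal sketch, reusing the Avro converter settings from the standalone config; group.id and the storage topic names are conventional defaults, not values taken from this deck:

```properties
bootstrap.servers=localhost:9092
group.id=connect-cluster
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
# Distributed mode keeps state in Kafka topics instead of a local offsets file
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
# Replication factor 1 is only suitable for a single-broker test setup
offset.storage.replication.factor=1
config.storage.replication.factor=1
status.storage.replication.factor=1
plugin.path=/root/confluent-7.3.0/share/confluentinc-kafka-connect-jdbc-10.6.0
```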
15. Check the registered connector plugins
• curl --location --request GET 'localhost:8083/connector-plugins'
[
  {"class":"io.confluent.connect.jdbc.JdbcSinkConnector","type":"sink","version":"10.6.0"},
  {"class":"io.confluent.connect.jdbc.JdbcSourceConnector","type":"source","version":"10.6.0"},
  {"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"7.3.0-ccs"},
  {"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"7.3.0-ccs"},
  {"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"7.3.0-ccs"}
]
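In distributed mode, connector configs are not passed on the command line; they are registered through the Connect REST API on port 8083. A sketch of registering the sink-jdbc-postgre connector seen in the logs above, with placeholder connection details (URL, user, password are assumptions, not values from this deck):

```shell
curl --location --request POST 'localhost:8083/connectors' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "sink-jdbc-postgre",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
      "tasks.max": "1",
      "topics": "employee",
      "connection.url": "jdbc:postgresql://localhost:5432/mydb",
      "connection.user": "postgres",
      "connection.password": "postgres",
      "auto.create": "false"
    }
  }'
```

A `GET localhost:8083/connectors` afterwards should list the new connector.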
26. Test (same as stand-alone mode)
• Consumer shell ({confluent home dir}/bin)
./kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic employee
[Diagram: Java producer (avroRecord.put("name", "test-name")) → employee topic → consumer / JDBC sink → DB]
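The Java producer hinted at in the diagram can be sketched as below. This is an illustrative sketch, not code from the deck: it assumes the kafka-clients, avro, and kafka-avro-serializer dependencies are on the classpath, and it reuses the employee schema from the console-producer test. The class name EmployeeAvroProducer is a hypothetical choice.

```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EmployeeAvroProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // KafkaAvroSerializer registers the schema with Schema Registry automatically
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // Same schema as the console-producer test
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"kafka_employee\",\"fields\":["
                + "{\"name\":\"idx\",\"type\":\"int\"},"
                + "{\"name\":\"name\",\"type\":\"string\"}]}");

        GenericRecord avroRecord = new GenericData.Record(schema);
        avroRecord.put("idx", 1);
        avroRecord.put("name", "test-name");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("employee", avroRecord));
            producer.flush();
        }
    }
}
```

Each record sent this way flows through the employee topic and is written to the employee table by the JDBC sink connector.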