Stand-Alone Mode
Standalone
• Create a table in the PostgreSQL database
CREATE TABLE employee
(
idx int,
NAME CHARACTER VARYING(300)
)
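• (Optional) Verify the table was created; a quick check from psql, assuming local access as the postgres user (adjust to your environment):
psql -U postgres -d {db name} -c "\d employee"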
Topic
• Create a topic
kafka-topics.sh --create --topic employee --bootstrap-server master:9092,slave1:9092,slave2:9092 --replication-factor 1 --partitions 1
• Check the created topic
kafka-topics.sh --bootstrap-server localhost:9092 --list
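• (Optional) Describe the topic to confirm its partition and replication settings:
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic employee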
Execute the Schema Registry
• At the Confluent home directory
sh bin/schema-registry-start etc/schema-registry/schema-registry.properties
• Open the port
firewall-cmd --permanent --zone=public --add-port=8081/tcp
firewall-cmd --reload
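• (Optional) Verify the Schema Registry is reachable; a fresh install should return an empty subject list []:
curl http://localhost:8081/subjects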
Download the connector
https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc
Unzip the connector
• Unzip the connector into
{confluent home directory}/share/
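• Example, assuming the archive downloaded from Confluent Hub is named confluentinc-kafka-connect-jdbc-10.6.0.zip and the Confluent home is /root/confluent-7.3.0 (matching the plugin.path below):
unzip confluentinc-kafka-connect-jdbc-10.6.0.zip -d /root/confluent-7.3.0/share/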
Configuration (1/2)
• {confluent home directory}/etc/kafka/connect-standalone.properties
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
plugin.path=/root/confluent-7.3.0/share/confluentinc-kafka-connect-jdbc-10.6.0
Configuration (2/2)
• {confluent home directory}/etc/kafka/sink-postgres.properties
name=employee-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics={topic name(s), e.g. topic_a,topic_b,…}
connection.url=jdbc:postgresql://{db ip}:{db port}/{db name}
connection.user={db id}
connection.password={db pwd}
insert.mode=insert
auto.create=true
table.name.format={table name}
pk.mode=none
pk.fields=none
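• For the employee example, the placeholder lines might be filled in as follows (host, port, database, and credentials are illustrative only):
topics=employee
connection.url=jdbc:postgresql://localhost:5432/postgres
connection.user=postgres
connection.password=postgres
table.name.format=employee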
Execution
• sh bin/connect-standalone etc/kafka/connect-standalone.properties etc/kafka/sink-postgres.properties
[2022-12-08 07:42:54,056] INFO [employee-sink|task-0] [Consumer clientId=connector-consumer-employee-sink-0, groupId=connect-employee-sink] Resetting offset for partition employee-0 to
position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[slave2.kopo:9092 (id: 3 rack: null)], epoch=0}}.
(org.apache.kafka.clients.consumer.internals.SubscriptionState:399)
[2022-12-08 07:47:56,035] INFO [employee-sink|task-0] Attempting to open connection #1 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)
[2022-12-08 07:47:56,456] INFO [employee-sink|task-0] Maximum table name length for database is 63 bytes (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect:130)
[2022-12-08 07:47:56,456] INFO [employee-sink|task-0] JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter:56)
[2022-12-08 07:47:56,531] INFO [employee-sink|task-0] Checking PostgreSql dialect for existence of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:586)
[2022-12-08 07:47:56,548] INFO [employee-sink|task-0] Using PostgreSql dialect TABLE "employee" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:594)
[2022-12-08 07:47:56,578] INFO [employee-sink|task-0] Checking PostgreSql dialect for type of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:880)
[2022-12-08 07:47:56,589] INFO [employee-sink|task-0] Setting metadata for table "employee" to Table{name='"employee"', type=TABLE columns=[Column{'idx', isPrimaryKey=false, allowsNull=true,
sqlType=int4}, Column{'name', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions:64)
[2022-12-08 07:51:53,759] INFO [employee-sink|task-0] [Consumer clientId=connector-consumer-employee-sink-0, groupId=connect-employee-sink] Node -1 disconnected.
(org.apache.kafka.clients.NetworkClient:937)
Test
• Consumer shell ({confluent home dir}/bin)
./kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic employee
• Producer shell ({confluent home dir}/bin)
./kafka-avro-console-producer --broker-list localhost:9092 --topic employee --property value.schema='{"type":"record","name":"kafka_employee","fields":[{"name":"idx","type":"int"},{"name":"name","type":"string"}]}'
{"idx":1,"name":"hwang"}
{"idx":2,"name":"kim"}
Distributed Mode
Configuration
• etc/kafka/connect-distributed.properties
bootstrap.servers=master:9092,slave1:9092,slave2:9092
group.id=connect-cluster
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
config.storage.topic=connect-configs
config.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1
offset.flush.interval.ms=10000
plugin.path=/root/confluent-7.3.0/share/confluentinc-kafka-connect-jdbc-10.6.0
Execution
• sh bin/connect-distributed etc/kafka/connect-distributed.properties
[2022-12-08 08:09:53,943] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:312)
[2022-12-08 08:09:53,944] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:56)
[2022-12-08 08:09:54,419] INFO [sink-jdbc-postgre|task-0] Attempting to open connection #1 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider:79)
[2022-12-08 08:09:55,020] INFO [sink-jdbc-postgre|task-0] Maximum table name length for database is 63 bytes (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect:130)
[2022-12-08 08:09:55,020] INFO [sink-jdbc-postgre|task-0] JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter:56)
[2022-12-08 08:09:55,142] INFO [sink-jdbc-postgre|task-0] Checking PostgreSql dialect for existence of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:586)
[2022-12-08 08:09:55,168] INFO [sink-jdbc-postgre|task-0] Using PostgreSql dialect TABLE "employee" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:594)
[2022-12-08 08:09:55,201] INFO [sink-jdbc-postgre|task-0] Checking PostgreSql dialect for type of TABLE "employee" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect:880)
[2022-12-08 08:09:55,216] INFO [sink-jdbc-postgre|task-0] Setting metadata for table "employee" to Table{name='"employee"', type=TABLE columns=[Column{'idx', isPrimaryKey=false,
allowsNull=true, sqlType=int4}, Column{'name', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions:64)
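• The worker's REST API used in the next steps listens on port 8083 by default; if firewalld is active (as in the Schema Registry step), open the port first:
firewall-cmd --permanent --zone=public --add-port=8083/tcp
firewall-cmd --reload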
Check a registered connector
• curl --location --request GET 'localhost:8083/connector-plugins'
[{"class":"io.confluent.connect.jdbc.JdbcSinkConnector","type":"sink","version":"10.6.0"},{"class":"io.confluent.connect.jdbc.JdbcSourceConnector","type":"source","version":"10.6.0"},{"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"7.3.0-ccs"},{"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"7.3.0-ccs"},{"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"7.3.0-ccs"}]
Register a JDBC Sink
curl --request POST 'localhost:8083/connectors' \
--header 'Content-Type: application/json' \
--data '{
  "name": "sink-jdbc-postgre",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "{topic id}",
    "connection.url": "jdbc:postgresql://{db ip}:{db port}/{db name}",
    "connection.user": "{db id}",
    "connection.password": "{db pwd}",
    "table.name.format": "{table name}",
    "insert.mode": "insert",
    "auto.create": "true",
    "pk.mode": "none",
    "pk.fields": "none"
  }
}'
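• (Optional) After registration, the status endpoint should report the connector and its task as RUNNING:
curl -s localhost:8083/connectors/sink-jdbc-postgre/status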
Query a connector
• curl -s localhost:8083/connectors/sink-jdbc-postgre
{"name":"sink-jdbc-postgre","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","table.name.format":"employee","connection.password":"{db pwd}","tasks.max":"1","topics":"employee","connection.user":"{db id}","name":"sink-jdbc-postgre","auto.create":"true","connection.url":"jdbc:postgresql://{db ip}:{db port}/{db name}","insert.mode":"insert","pk.mode":"none","pk.fields":"none"},"tasks":[{"connector":"sink-jdbc-postgre","task":0}],"type":"sink"}
[FYI] connector delete command
curl -X DELETE "http://localhost:8083/connectors/sink-jdbc-postgre"
Check the Schema Registry
• curl localhost:8081/subjects
• curl localhost:8081/subjects/employee-value/versions
• curl localhost:8081/subjects/employee-value/versions/1
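• The registered Avro schema can also be fetched via the latest alias:
curl localhost:8081/subjects/employee-value/versions/latest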
Test (same as stand-alone mode)
• Consumer shell ({confluent home dir}/bin)
./kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic employee
• Producer shell ({confluent home dir}/bin)
./kafka-avro-console-producer --broker-list localhost:9092 --topic employee --property value.schema='{"type":"record","name":"kafka_employee","fields":[{"name":"idx","type":"int"},{"name":"name","type":"string"}]}'
{"idx":1,"name":"hwang"}
{"idx":2,"name":"kim"}
Java Producer
Step1 – Maven Installation
# wget https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz -P /tmp
# tar -xzvf /tmp/apache-maven-3.6.3-bin.tar.gz -C /opt
# ln -s /opt/apache-maven-3.6.3 /opt/maven
# vi /etc/profile.d/maven.sh   (add the three export lines below)
export M2_HOME=/opt/maven
export MAVEN_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}
# source /etc/profile.d/maven.sh
# mvn --version
Step2 – pom.xml
mkdir ex_connect
cd ex_connect
vi pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.kopo.kafka</groupId>
<artifactId>kafka-example</artifactId>
<version>1.0</version>
<repositories>
<repository>
<id>confluent</id>
<url>http://packages.confluent.io/maven/</url>
</repository>
</repositories>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<confluent.version>7.3.0</confluent.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>3.3.1</version>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>${confluent.version}</version>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>common-config</artifactId>
<version>${confluent.version}</version>
</dependency>
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>
<version>1.11.1</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>31.0-jre</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.6.4</version>
</dependency>
</dependencies>
</project>
Step3 – Implementation
mkdir -p src/main/java/com/kopo/kafka
vi src/main/java/com/kopo/kafka/ConnectProducer.java
package com.kopo.kafka;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import java.util.Properties;
import java.util.Random;
import java.util.UUID;
public class ConnectProducer {
    private final static String TOPIC_NAME = "employee";
    private final static String BOOTSTRAP_SERVERS = "localhost:9092";

    public static void main(String[] args) {
        Properties configs = new Properties();
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        //configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        configs.setProperty("key.serializer", KafkaAvroSerializer.class.getName());
        configs.setProperty("value.serializer", KafkaAvroSerializer.class.getName());
        configs.setProperty("schema.registry.url", "http://192.168.56.30:8081");

        // Avro schema matching the employee topic (inner quotes must be escaped in Java source)
        String schema = "{"
                // + " \"namespace\": \"myrecord\","
                + " \"name\": \"kafka_employee\","
                + " \"type\": \"record\","
                + " \"fields\": ["
                + "   {\"name\": \"idx\", \"type\": \"int\"},"
                + "   {\"name\": \"name\", \"type\": \"string\"}"
                + " ]"
                + "}";
        Schema.Parser parser = new Schema.Parser();
        Schema avroSchema1 = parser.parse(schema);

        // generate avro generic record
        GenericRecord avroRecord = new GenericData.Record(avroSchema1);
        avroRecord.put("idx", 100);
        avroRecord.put("name", "test-name");

        KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(configs);
        ProducerRecord<String, GenericRecord> record = new ProducerRecord<>(TOPIC_NAME, avroRecord);
        producer.send(record);
        producer.flush();
        producer.close();
    }
}
Note: schema.registry.url above must point to the host where the Schema Registry is running (192.168.56.30 in this example).
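For troubleshooting, a callback can be attached to send() so delivery results are logged; a minimal sketch replacing the fire-and-forget send above (not part of the original example):
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace();   // e.g. broker or Schema Registry unreachable, schema rejected
    } else {
        System.out.printf("sent to %s-%d@%d%n", metadata.topic(), metadata.partition(), metadata.offset());
    }
});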
Step4 – Maven Install
mvn install
Step5 – Execute the Jar
java -cp target/kafka-example-1.0.jar:/root/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/root/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/root/.m2/repository/org/apache/avro/avro/1.11.1/avro-1.11.1.jar:/root/.m2/repository/io/confluent/kafka-avro-serializer/7.3.0/kafka-avro-serializer-7.3.0.jar:/root/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/root/.m2/repository/org/slf4j/slf4j-simple/1.6.4/slf4j-simple-1.6.4.jar:/root/.m2/repository/io/confluent/kafka-schema-serializer/7.3.0/kafka-schema-serializer-7.3.0.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.3/jackson-databind-2.13.3.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.13.3/jackson-core-2.13.3.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.13.3/jackson-annotations-2.13.3.jar:/root/.m2/repository/io/confluent/kafka-schema-registry-client/7.3.0/kafka-schema-registry-client-7.3.0.jar:/root/.m2/repository/com/google/guava/guava/31.0-jre/guava-31.0-jre.jar:/root/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar com.kopo.kafka.ConnectProducer
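• As an alternative to assembling the classpath by hand, the exec-maven-plugin can usually run the class straight from the project directory (the plugin is downloaded on first use, so network access is assumed):
mvn exec:java -Dexec.mainClass=com.kopo.kafka.ConnectProducer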
Test (same as stand-alone mode)
• Consumer shell ({confluent home dir}/bin)
./kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic employee
(Diagram: the record written by the Java producer, avroRecord.put("name", "test-name"), flows from the producer to the console consumer and into the employee table in the DB.)
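• (Optional) On the database side, the record written by the Java producer can be checked directly; assuming psql access:
psql -U postgres -d {db name} -c "SELECT * FROM employee WHERE name = 'test-name';"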
Reference
• https://mvnrepository.com/artifact/org.apache.avro/avro/1.11.1
• https://mvnrepository.com/artifact/com.google.guava/guava/31.0-jre
• https://mvnrepository.com/artifact/io.confluent/kafka-avro-serializer/7.3.0
• https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc
• https://github.com/datastax/kafka-examples/blob/master/producers/pom.xml
• https://itnext.io/howto-produce-avro-messages-to-kafka-ec0b770e1f54
• https://docs.confluent.io/platform/current/schema-registry/connect.html#protobuf
• https://www.youtube.com/watch?v=ZTVBfskFwYY
• https://wecandev.tistory.com/110
• https://docs.confluent.io/kafka-connectors/jdbc/current/sink-connector/overview.html#suggested-reading
• https://docs.confluent.io/kafka-connectors/self-managed/userguide.html#worker-configuration-properties-file