
I have to send records from Aurora/MySQL to MSK, and from there to the Elasticsearch Service.

Aurora --> Kafka Connect ---> AWS MSK ---> Kafka Connect ---> Elasticsearch

The records in the Aurora table look like this, and I think they will arrive in AWS MSK in this format:

"o36347-5d17-136a-9749-Oe46464",0,"NEW_CASE","WRLDCHK","o36347-5d17-136a-9749-Oe46464","<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><caseCreatedPayload><batchDetails/>","CASE",08-JUL-17 10.02.32.217000000 PM,"TIME","UTC","ON","0a348753-5d1e-17a2-9749-3345,MN4,","","0a348753-5d1e-17af-9749-FGFDGDFV","EOUHEORHOE","2454-5d17-138e-9749-setwr23424","","","",,"","",""

So in order to consume them with Elasticsearch I need to use a proper schema, which means I have to use a Schema Registry.

My questions

Question 1

For messages of the above type, how should I use the Schema Registry? Do I have to create a JSON structure for it, and if so, where do I keep it? I need more help understanding this part.

I have edited

vim /usr/local/confluent/etc/schema-registry/schema-registry.properties

It mentions ZooKeeper, but I don't have one. What is kafkastore.topic=_schema, and how do I link it to my custom schema?

Even when I start it I get this error:

Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic _schemas not present in metadata after 60000 ms.

Which I was expecting, since I haven't done anything with the schema yet.

I did install the JDBC connector, and when I start it I get the following error:

Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`

Question 2: Can I create both connectors (the JDBC one and the Elasticsearch one) on one EC2 instance? If so, do I have to start both of them in separate CLIs?

Question 3: When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties I only see property values like the following:

name=test-source-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
mode=incrementing
incrementing.column.name=id
topic.prefix=trf-aurora-fspaudit-

Can I mention the schema name and table name in the above properties file?

Based on the answer, I am updating my Kafka Connect JDBC configuration.

--------------- Starting JDBC Connect to Elasticsearch -----------------

wget http://packages.confluent.io/archive/5.2/confluent-5.2.0-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-5.2.0-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-5.2.0 /usr/local/confluent

wget https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
tar -xzf  mysql-connector-java-5.1.48.tar.gz
sudo mv mysql-connector-java-5.1.48 /usr/local/confluent/share/java/kafka-connect-jdbc

Then

vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

Then I modified the following properties:

connection.url=jdbc:mysql://fdgfgdfgrter.us-east-1.rds.amazonaws.com:3306/trf
mode=incrementing
connection.user=admin
connection.password=Welcome123
table.whitelist=PANStatementInstanceLog
schema.pattern=dbo

Finally I modified

vim /usr/local/confluent/etc/kafka/connect-standalone.properties

Here I modified the following properties:

bootstrap.servers=b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092,b-6.ertert-riskaudit.ertet.c5.kafka.us-east-1.amazonaws.com:9092,b-1.ertert-riskaudit.ertert.c5.kafka.us-east-1.amazonaws.com:9092
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/confluent/share/java

When I list the topics, I do not see any topic listed for the table name.
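(For reference, a typical way to check is with the kafka-topics tool shipped in the Confluent download, pointed at one of the MSK brokers configured above:)

# list all topics and filter for the configured topic prefix
/usr/local/confluent/bin/kafka-topics --bootstrap-server b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092 --list | grep trf-aurora-fspaudit-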

Stack trace of the error message:

[2020-01-03 07:40:57,169] ERROR Failed to create job for /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties (org.apache.kafka.connect.cli.ConnectStandalone:108)
[2020-01-03 07:40:57,169] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
        at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:116)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
        at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
        at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
        at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:113)

        curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" IPaddressOfKCnode:8083/connectors/ -d '{"name": "emp-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:mysql://IPaddressOfLocalMachine:3306/test_db?user=root&password=pwd","table.whitelist": "emp","mode": "timestamp","topic.prefix": "mysql-" } }'

2 Answers


I guess you are planning to use AVRO to transfer the data, so don't forget to specify AvroConverter as the default converter when starting the Kafka Connect workers. If you are going to use JSON, then a Schema Registry is not needed.
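For example, a minimal sketch of the converter settings in the worker properties when using Avro with a Schema Registry (the registry URL here is only an assumed local default, not something from the question):

# sketch: Avro converters that register/look up schemas in a Schema Registry
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081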

1.1 kafkastore.topic=_schema

Did you start your own Schema Registry? When you start a Schema Registry you have to specify the "schemas" topic. Basically, the Schema Registry uses this topic to store the schemas it registers, and in case of a failure it can restore them from there.

1.2 jdbc connector installed and when i start i get below error — By default, the JDBC connector works only with SQLite and PostgreSQL. If you want it to work with a MySQL database, you should add the MySQL driver to the classpath as well.
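With the install path used in the question, one way to do that is to drop the driver JAR next to the connector JARs (a sketch, assuming the Connector/J archive extracts to mysql-connector-java-5.1.48/ with the JAR at its top level):

# copy the MySQL JDBC driver so it ends up on the connector's classpath
cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48.jar /usr/local/confluent/share/java/kafka-connect-jdbc/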

2. It depends on how you deploy your Kafka Connect workers. If you go with distributed mode (recommended), then you don't really need separate CLIs. You can deploy your connectors through the Kafka Connect REST API.

3. There is another property, table.whitelist, on which you can specify your schemas and tables. For example: table.whitelist=users,products,transactions
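Applied to the table from the question, that would look roughly like this (a sketch only; for MySQL the database is already selected by connection.url, so schema.pattern can normally be left unset rather than set to dbo):

# sketch: limit the source connector to a single table
table.whitelist=PANStatementInstanceLog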

Answered 2020-01-01T11:33:36.910

Is the Schema Registry required?

No. You can enable schemas in JSON records instead; the JDBC source can create them for you based on the table information:

value.converter=org.apache.kafka...JsonConverter 
value.converter.schemas.enable=true
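The fully qualified class name abbreviated above is org.apache.kafka.connect.json.JsonConverter; a minimal sketch of both converter settings in the worker properties would be:

# sketch: JSON converters with embedded schemas for keys and values
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true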

It mentions ZooKeeper, but I don't have one. What is kafkastore.topic=_schema

If you want to use the Schema Registry, you should be using kafkastore.bootstrap.servers with the Kafka address, not ZooKeeper, so remove kafkastore.connection.url.
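A minimal schema-registry.properties along those lines might look like this (a sketch only; the broker address is one of the MSK bootstrap servers from the question):

# sketch: Schema Registry backed directly by the Kafka brokers, no ZooKeeper
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092
kafkastore.topic=_schemas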

Read the docs for descriptions of all the properties.

I haven't done anything with the schema.

That's fine. The Registry creates the schemas topic when it first starts.

Can I create two connectors on one EC2 instance?

Yes (ignoring available JVM heap space). Again, this is detailed in the Kafka Connect documentation.

With standalone mode, you pass the Connect worker configuration first, then up to N connector property files, all in one command.

With distributed mode, you use the Kafka Connect REST API.
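For example, with the paths used in the question (a sketch; connect-distributed.properties is the stock worker config shipped under etc/kafka/ in the same Confluent download):

# standalone: worker config followed by one or more connector property files in a single command
/usr/local/confluent/bin/connect-standalone /usr/local/confluent/etc/kafka/connect-standalone.properties /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

# distributed: start only the worker, then POST connector configs to its REST API (as in the curl example above)
/usr/local/confluent/bin/connect-distributed /usr/local/confluent/etc/kafka/connect-distributed.properties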

https://docs.confluent.io/current/connect/managing/configuring.html

When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

Well, first, that file is for SQLite, not MySQL/Postgres. You don't need to use the quickstart files; they are only there for reference.

Again, all the properties are well documented.

https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html#connect-jdbc

I did install the JDBC connector, and when I start it I get the following error

Here is more information on how you can debug that:

https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/


As stated before, I would personally suggest using Debezium/CDC where possible.

Debezium connector for RDS Aurora
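Loosely, a Debezium MySQL source connector configuration looks something like the sketch below (property names are from Debezium 1.x; every hostname and credential here is a placeholder, not taken from the question):

# sketch: Debezium CDC from Aurora MySQL into Kafka
name=aurora-cdc-source
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=your-aurora-endpoint.us-east-1.rds.amazonaws.com
database.port=3306
database.user=admin
database.password=********
database.server.id=184054
database.server.name=trf
database.include.list=trf
database.history.kafka.bootstrap.servers=your-msk-broker:9092
database.history.kafka.topic=schema-changes.trf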

Answered 2020-01-01T11:33:40.403