
I am trying to externalize `connection.uri` with FileConfigProvider, following this example: https://docs.confluent.io/platform/current/connect/security.html#externalizing-secrets

I send the following POST request:

POST http://localhost:8083/connectors/my-sink/config
{
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "topic",
    "database": "my-database",
    "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}",
    "config.providers": "file",
    "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider"
}
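For reference, FileConfigProvider reads the referenced file as a Java properties file, and the segment after the last colon in the placeholder selects a key in that file. A sketch of what /home/appuser/my-file.txt could contain (the hostname and credentials below are assumptions based on the compose file, not taken from the question):

```properties
# Java properties file read by FileConfigProvider;
# the placeholder ${file:/home/appuser/my-file.txt:mongo_uri} resolves to this value
mongo_uri=mongodb://root:password@mongo:27017
```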

I get the following error:

{
    "error_code": 400,
    "message": "Connector configuration is invalid and contains the following 1 error(s):\nInvalid value ${file:/home/appuser/my-file.txt:mongo_topic} for configuration connection.uri: The connection string is invalid. Connection strings must start with either 'mongodb://' or 'mongodb+srv://\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
}

It seems that configuration validation is performed before the secret value is resolved. As a result, the literal value "connection.uri": "${my-secret}" is not a valid MongoDB connection string.

Is there a way to work around this?

Source code:

/MyFolder
├── kafka-connect
│   └── Dockerfile
└── docker-compose.yml
MyFolder\docker-compose.yml:

version: "3"
services:
    zookeeper:
        image: confluentinc/cp-zookeeper:6.0.0
        container_name: zookeeper
        hostname: zookeeper
        ports:
            - "2181:2181"
        environment:
            ZOOKEEPER_CLIENT_PORT: 2181
            ZOOKEEPER_TICK_TIME: 2000
            
    kafka:
        image: confluentinc/cp-kafka:6.0.0
        container_name: kafka
        hostname: kafka
        depends_on:
            - zookeeper
        ports:
            - "29092:29092"
            - "9092:9092"
        environment:
            KAFKA_BROKER_ID: 1
            KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
            KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
            KAFKA_offsets_TOPIC_REPLICATION_FACTOR: 1
            
    kafka-connect:
        build:
            context: ./kafka-connect
            dockerfile: Dockerfile
        container_name: kafka_connect
        depends_on:
            - kafka
        ports:
            - "8083:8083"
            
    mongo:
        image: mongo
        container_name: mongo
        restart: unless-stopped
        depends_on:
            - kafka-connect
        environment:
            MONGO_INITDB_ROOT_USERNAME: root
            MONGO_INITDB_ROOT_PASSWORD: password
MyFolder\kafka-connect\Dockerfile:

FROM confluentinc/cp-kafka-connect:6.0.0

COPY ./plugins/ /usr/local/share/kafka/plugins/

ENV CONNECT_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:29092
ENV CONNECT_REST_ADVERTISED_HOST_NAME=kafka_connect
ENV CONNECT_GROUP_ID=kafka-connect-group
ENV CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect-group-config
ENV CONNECT_OFFSET_STORAGE_TOPIC=connect-group-offset
ENV CONNECT_STATUS_STORAGE_TOPIC=kafka-connect-group-status
ENV CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_KEY_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_PLUGIN_PATH=/usr/local/share/kafka/plugins

EXPOSE 8083

I start everything with docker-compose up.

The MongoSinkConnector is configured through the kafka-connect REST endpoint.
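As an aside, the Kafka Connect REST API also supports an idempotent PUT to /connectors/{name}/config, which creates the connector if it does not exist and updates it otherwise; the body is the same JSON config (shown here without the worker-level keys):

```
PUT http://localhost:8083/connectors/my-sink/config
{
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "topic",
    "database": "my-database",
    "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}"
}
```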



1 Answer


The config-provider properties apply to the Connect worker (the process started by the container), not to an individual connector:

kafka-connect:
        ...
        depends_on:
            - kafka
            - mongo 
        ports:
            - "8083:8083"
        environment:
            ... 
            CONNECT_CONFIG_PROVIDERS: file 
            CONNECT_CONFIG_PROVIDERS_FILE_CLASS: org.apache.kafka.common.config.provider.FileConfigProvider
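With the provider registered on the worker (and the worker restarted so the new environment variables take effect), the connector request body then only needs the placeholder; the config.providers keys can be dropped. A sketch:

```json
{
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "topic",
    "database": "my-database",
    "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}"
}
```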
Answered 2021-09-07T12:12:44.850