
I have a problem with my topic: although everything is up and running, it no longer registers the events happening in my MongoDB.

Every time I insert/modify a record, I no longer get any output from the kafka-console-consumer command.

Is there a way to clear Kafka's cache/offsets? The source and sink connectors are up and running, and the whole cluster is healthy. The thing is, everything works as usual, but every few weeks I see this happen again, or whenever I log in to my Mongo cloud from a different location.

The --partition 0 parameter didn't help, and neither did changing retention.ms to 1.
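For reference, topic-level retention is normally changed with kafka-configs.sh; the topic name my-topic and the broker address below are placeholders for your setup:

```shell
# Sketch, not the exact commands from the question: temporarily lower a topic's
# retention so old log segments are deleted. "my-topic" is a placeholder.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=1

# Restore a sane retention afterwards (7 days here):
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=604800000
```

Note that lowering retention.ms only deletes old topic segments; it does not touch the resume token the connector has stored in its offsets, which is presumably why this had no effect.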


I checked the status of both connectors and both report RUNNING:

curl localhost:8083/connectors | jq

curl localhost:8083/connectors/monit_people/status | jq
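As a first troubleshooting step, the connector and its tasks can also be restarted through the Connect REST API; the connector name monit_people is taken from the question, and task id 0 is an assumption:

```shell
# Restart the connector itself (this does not restart its tasks)
curl -X POST localhost:8083/connectors/monit_people/restart

# Restart an individual task (task 0 assumed here)
curl -X POST localhost:8083/connectors/monit_people/tasks/0/restart

# Re-check the status afterwards
curl localhost:8083/connectors/monit_people/status | jq
```

A restart alone will not discard the stored resume token, but it forces the connector to re-attempt resuming and surfaces the failure in the status output instead of only in the logs.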

Running docker-compose logs connect, I found:

    WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286

If the resume token is no longer available then there is the potential for data loss.
Saved resume tokens are managed by Kafka and stored with the offset data.
 
When running Connect in standalone mode offsets are configured using the:
`offset.storage.file.filename` configuration.
When running Connect in distributed mode the offsets are stored in a topic.

Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.

Resetting the offset will allow for the connector to be resume from the latest resume token. 
Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken`.
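Following the hint in that message, the offset reset would look roughly like the sketch below. The group name connect-monit_people is an assumption (list the groups first to find the real one), and note that for a source connector in distributed mode the offsets live in Connect's internal offsets topic, so this tool mainly applies to sink/consumer groups:

```shell
# List consumer groups to find the one belonging to the connector
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# Dry-run the reset first; replace --dry-run with --execute to apply it.
# The group must be inactive (stop the connector) before resetting.
# "connect-monit_people" is an assumed group name.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group connect-monit_people \
  --all-topics --reset-offsets --to-latest --dry-run
```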

1 Answer


The issue requires more practice with the Confluent Platform, so for now I rebuilt the whole environment by removing all the containers:

docker system prune -a -f --volumes

docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")

After running docker-compose up -d, everything was fine and started working again.

answered 2021-01-04 at 14:56