
We run 13 Kafka Debezium Postgres connectors on a Strimzi KafkaConnect cluster. One of them fails with `Caused by: java.lang.OutOfMemoryError: Java heap space`. We increased the JVM heap option from 2g to 4g, but it still fails with the same error.

Full log:

java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.String.<init>(String.java:207)
    at com.fasterxml.jackson.core.util.TextBuffer.setCurrentAndReturn(TextBuffer.java:696)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishAndReturnString(UTF8StreamJsonParser.java:2405)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getValueAsString(UTF8StreamJsonParser.java:312)
    at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:219)
    at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
    at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:213)
    at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
    at io.debezium.document.JacksonReader.parse(JacksonReader.java:102)
    at io.debezium.document.JacksonReader.read(JacksonReader.java:72)
    at io.debezium.connector.postgresql.connection.wal2json.NonStreamingWal2JsonMessageDecoder.processMessage(NonStreamingWal2JsonMessageDecoder.java:54)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:418)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:412)
    at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:119)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:99)
    at io.debezium.pipeline.ChangeEventSourceCoordinator$$Lambda$464/1759003957.run(Unknown Source)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

2 Answers


Try tuning the Debezium properties below:

  • Increase max.batch.size
  • Decrease max.queue.size
  • Adjust offset.flush.interval.ms to suit your application's requirements
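As a minimal sketch, the first two settings go into the connector's configuration (the connector name and values below are illustrative only; note that Debezium expects max.queue.size to remain larger than max.batch.size). offset.flush.interval.ms, by contrast, is a Kafka Connect worker property, which on Strimzi would be set under spec.config of the KafkaConnect resource rather than in the connector config:

```json
{
  "name": "my-postgres-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "max.batch.size": "4096",
    "max.queue.size": "8192"
  }
}
```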
Answered 2020-04-26T06:13:17.713

It looks like you have a very large transaction message, and parsing it fails due to the memory limit. wal2json_streaming should split the message into smaller chunks and prevent this problem.

In general, use the protobuf or pgoutput decoder if possible, as they stream messages from the database per change rather than per transaction.
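The decoder is selected via the connector's plugin.name property. A minimal sketch of switching to pgoutput (connector name is hypothetical; pgoutput requires PostgreSQL 10+ and a recent Debezium version):

```json
{
  "name": "my-postgres-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput"
  }
}
```

Alternatively, "plugin.name": "wal2json_streaming" keeps the wal2json output plugin but streams each change individually instead of buffering the entire transaction as one JSON document.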

Answered 2020-04-27T07:41:42.397