
Source: Oracle database. Target: Kafka.

Data is moved from source to target via the Oracle GoldenGate adapter for Big Data. The data moves fine, but the problem is that when I insert 5 records, they come through as a single file in the topic.

I want them split up: if I do 5 inserts, I need five separate entries in the (Kafka) topic.

Kafka handler, GoldenGate for Big Data version 12.3.1.

I insert five records at the source and get all of the inserts in Kafka, as shown below:

{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.960000","pos":"00000000030000001263","after":{"TEST_ID":2,"TEST_NAME":"Francis","TEST_NAME_AR":"Francis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"I","op_ts":"2017-10-24 08:52:01.000000","current_ts":"2017-10-24T12:52:04.961000","pos":"00000000030000001437","after":{"TEST_ID":3,"TEST_NAME":"Ashfak","TEST_NAME_AR":"Ashfak"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:55:04.000000","current_ts":"2017-10-24T12:55:07.252000","pos":"00000000030000001734","before":{"TEST_ID":null,"TEST_NAME":"Francis"},"after":{"TEST_ID":null,"TEST_NAME":"updatefrancis"}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"D","op_ts":"2017-10-24 08:56:11.000000","current_ts":"2017-10-24T12:56:14.365000","pos":"00000000030000001865","before":{"TEST_ID":2}}
{"table":"MYSCHEMATOPIC.ELASTIC_TEST","op_type":"U","op_ts":"2017-10-24 08:57:43.000000","current_ts":"2017-10-24T12:57:45.817000","pos":"00000000030000002152","before":{"TEST_ID":3},"after":{"TEST_ID":4}}

2 Answers


I would recommend using the Kafka Connect Handler, since it then registers the data's schema with the Confluent Schema Registry, making it much easier to stream onwards to targets such as Elasticsearch (using Kafka Connect).

In Kafka each record from Oracle will be one Kafka message.
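For reference, a minimal sketch of what the Kafka Connect Handler section of the replicat .props file might look like under GG for Big Data 12.3; the property names should be verified against the official documentation, and kafkaconnect.properties is a placeholder producer config file name:

# route output through the Kafka Connect handler instead of the plain Kafka handler
gg.handlerlist=kafkaconnect
gg.handler.kafkaconnect.type=kafkaconnect
# producer/converter settings (brokers, Schema Registry URL) go in this separate file
gg.handler.kafkaconnect.kafkaProducerConfigFile=kafkaconnect.properties
# one Kafka message per source operation
gg.handler.kafkaconnect.mode=op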

Answered 2017-11-01T09:07:06.337

In the .props file, set:

gg.handler.kafkahandler.mode=op

It worked!!
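For context: in tx mode the handler bundles every operation of a source transaction into one message, while op mode emits one message per operation, which is why this change yields five separate topic entries for five inserts. A minimal sketch of the surrounding kafka.props section, assuming the standard 12.3 Kafka handler property names (the producer config file name is a placeholder):

gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
# one message per operation; tx would group a whole transaction together
gg.handler.kafkahandler.mode=op
# broker addresses and serializers live in this separate producer file
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.format=json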

Answered 2017-11-01T10:18:21.433