I am building a data pipeline using Kafka. The data flow is as follows: data changes are captured in MongoDB and sent to Elasticsearch.
MongoDB
- version 3.6
- sharded cluster
Kafka
- Confluent Platform 4.1.0
- MongoDB source connector: Debezium 0.7.5
- Elasticsearch sink connector
Elasticsearch
- version 6.1.0
Since I am still testing, all the Kafka-related systems are running on a single server.
Start ZooKeeper
$ bin/zookeeper-server-start etc/kafka/zookeeper.properties
Start the Kafka broker (bootstrap server)
$ bin/kafka-server-start etc/kafka/server.properties
Start Schema Registry
$ bin/schema-registry-start etc/schema-registry/schema-registry.properties
Start the MongoDB source connector
$ bin/connect-standalone \
    etc/schema-registry/connect-avro-standalone.properties \
    etc/kafka/connect-mongo-source.properties

$ cat etc/kafka/connect-mongo-source.properties
>>>
name=mongodb-source-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=''
initial.sync.max.threads=1
tasks.max=1
mongodb.name=higee

$ cat etc/schema-registry/connect-avro-standalone.properties
>>>
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
rest.port=8083
Start the Elasticsearch sink connector
$ bin/connect-standalone \
    etc/schema-registry/connect-avro-standalone2.properties \
    etc/kafka-connect-elasticsearch/elasticsearch.properties

$ cat etc/kafka-connect-elasticsearch/elasticsearch.properties
>>>
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=higee.higee.higee
key.ignore=true
connection.url=''
type.name=kafka-connect

$ cat etc/schema-registry/connect-avro-standalone2.properties
>>>
bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
rest.port=8084
Everything above works: the source connector captures the data changes (CDC) and they reach Elasticsearch through the sink connector. The problem is that I cannot turn the string-typed message data into a structured data type. For example, let's consume the topic data after making a few changes in MongoDB.
$ bin/kafka-avro-console-consumer \
--bootstrap-server localhost:9092 \
--topic higee.higee.higee --from-beginning | jq
Then I get the following result.
"after": null,
"patch": {
"string": "{\"_id\" : {\"$oid\" : \"5ad97f982a0f383bb638ecac\"},\"name\" : \"higee\",\"salary\" : 100,\"origin\" : \"South Korea\"}"
},
"source": {
"version": {
"string": "0.7.5"
},
"name": "higee",
"rs": "172.31.50.13",
"ns": "higee",
"sec": 1524214412,
"ord": 1,
"h": {
"long": -2379508538412995600
},
"initsync": {
"boolean": false
}
},
"op": {
"string": "u"
},
"ts_ms": {
"long": 1524214412159
}
}
Then, if I check Elasticsearch, I get the following result.
{
"_index": "higee.higee.higee",
"_type": "kafka-connect",
"_id": "higee.higee.higee+0+3",
"_score": 1,
"_source": {
"after": null,
"patch": """{"_id" : {"$oid" : "5ad97f982a0f383bb638ecac"},
"name" : "higee",
"salary" : 100,
"origin" : "South Korea"}""",
"source": {
"version": "0.7.5",
"name": "higee",
"rs": "172.31.50.13",
"ns": "higee",
"sec": 1524214412,
"ord": 1,
"h": -2379508538412995600,
"initsync": false
},
"op": "u",
"ts_ms": 1524214412159
}
}
What I want to achieve is the following:
{
"_index": "higee.higee.higee",
"_type": "kafka-connect",
"_id": "higee.higee.higee+0+3",
"_score": 1,
"_source": {
"oid" : "5ad97f982a0f383bb638ecac",
"name" : "higee",
"salary" : 100,
"origin" : "South Korea"
}"
}
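In other words, the only missing step is parsing the patch string and flattening _id.$oid. As a plain-Python illustration of the mapping I have in mind (the value below is just the sample document from above, and the helper name is made up):

import json

# The "patch" field as it currently arrives in Elasticsearch: one JSON-encoded string.
patch = '{"_id" : {"$oid" : "5ad97f982a0f383bb638ecac"},"name" : "higee","salary" : 100,"origin" : "South Korea"}'

def flatten_patch(patch_str):
    """Hypothetical helper: turn the Debezium patch string into the flat
    document I would like to see in Elasticsearch."""
    doc = json.loads(patch_str)          # parse the embedded JSON string
    doc["oid"] = doc.pop("_id")["$oid"]  # lift _id.$oid up to a plain "oid" field
    return doc

print(flatten_patch(patch))
# {'name': 'higee', 'salary': 100, 'origin': 'South Korea', 'oid': '5ad97f982a0f383bb638ecac'}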
Some options I have tried, or am still considering, are listed below.
Logstash
Case 1: I don't know how to parse these characters (\u0002, \u0001).
logstash.conf
input {
    kafka {
        bootstrap_servers => ["localhost:9092"]
        topics => ["higee.higee.higee"]
        auto_offset_reset => "earliest"
        codec => json {
            charset => "UTF-8"
        }
    }
}

filter {
    json {
        source => "message"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}
Result
{ "message" => "H\u0002�\u0001{\"_id\" : \ {\"$oid\" : \"5adafc0e2a0f383bb63910a6\"}, \ \"name\" : \"higee\", \ \"salary\" : 101, \ \"origin\" : \"South Korea\"} \ \u0002\n0.7.5\nhigee \ \u0018172.31.50.13\u001Ahigee.higee2 \ ��ح\v\u0002\u0002��̗���� \u0002\u0002u\u0002�����X", "tags" => [[0] "_jsonparsefailure"] }
Case 2
logstash.conf
input {
    kafka {
        bootstrap_servers => ["localhost:9092"]
        topics => ["higee.higee.higee"]
        auto_offset_reset => "earliest"
        codec => avro {
            schema_uri => "./test.avsc"
        }
    }
}

filter {
    json {
        source => "message"
    }
}

output {
    stdout {
        codec => rubydebug
    }
}
test.avsc
{ "namespace": "example", "type": "record", "name": "Higee", "fields": [ {"name": "_id", "type": "string"}, {"name": "name", "type": "string"}, {"name": "salary", "type": "int"}, {"name": "origin", "type": "string"} ] }
Result
An unexpected error occurred! {:error=>#<NoMethodError: undefined method `type_sym' for nil:NilClass>, :backtrace=>
["/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:224:in `match_schemas'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:280:in `read_data'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:376:in `read_union'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:309:in `read_data'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:384:in `block in read_record'",
 "org/jruby/RubyArray.java:1734:in `each'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:382:in `read_record'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:310:in `read_data'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/avro-1.8.2/lib/avro/io.rb:275:in `read'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/logstash-codec-avro-3.2.3-java/lib/logstash/codecs/avro.rb:77:in `decode'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.0.2/lib/logstash/inputs/kafka.rb:254:in `block in thread_runner'",
 "/home/ec2-user/logstash-6.1.0/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.0.2/lib/logstash/inputs/kafka.rb:253:in `block in thread_runner'"]}
Python client
- Consume the topic, do some data manipulation, and produce to a different topic, so that the Elasticsearch sink connector only consumes well-formed messages from the Python-manipulated topic (see the sketch after this section).
kafka
library: cannot decode the message

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    topics='higee.higee.higee',
    auto_offset_reset='earliest'
)

for message in consumer:
    message.value.decode('utf-8')

>>> 'utf-8' codec can't decode byte 0xe4 in position 6: invalid continuation byte
confluent_kafka
not compatible with Python 3
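For completeness, this is the kind of relay I have in mind, sketched with kafka-python under the assumption that the source topic could be read as plain JSON (for example, if the Connect worker used org.apache.kafka.connect.json.JsonConverter with schemas.enable=false instead of the Avro converter); the intermediate topic name higee.flat is made up:

import json
from kafka import KafkaConsumer, KafkaProducer

# Assumes the source topic contains plain JSON envelopes, not Confluent-Avro-framed messages.
consumer = KafkaConsumer(
    'higee.higee.higee',
    bootstrap_servers='localhost:9092',
    auto_offset_reset='earliest',
    value_deserializer=lambda b: json.loads(b.decode('utf-8')),
)
producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda d: json.dumps(d).encode('utf-8'),
)

for message in consumer:
    envelope = message.value
    patch = envelope.get('patch')
    if not patch:
        continue
    doc = json.loads(patch)              # parse the embedded JSON string
    doc['oid'] = doc.pop('_id')['$oid']  # flatten _id.$oid
    # Hypothetical intermediate topic the Elasticsearch sink would consume instead.
    producer.send('higee.flat', doc)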
Any idea how to jsonify the data in Elasticsearch? The following are the sources I have searched.
Thanks in advance.
Some attempts
1) I changed my connect-mongo-source.properties file as follows, to test the transform.
$ cat etc/kafka/connect-mongo-source.properties
>>>
name=mongodb-source-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=''
initial.sync.max.threads=1
tasks.max=1
mongodb.name=higee
transforms=unwrap
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
Below is the error log I got. I am not yet used to Kafka, let alone the Debezium platform, so I could not debug this error.
ERROR WorkerSourceTask{id=mongodb-source-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.bson.json.JsonParseException: JSON reader expected a string but found '0'.
at org.bson.json.JsonReader.visitBinDataExtendedJson(JsonReader.java:904)
at org.bson.json.JsonReader.visitExtendedJSON(JsonReader.java:570)
at org.bson.json.JsonReader.readBsonType(JsonReader.java:145)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:82)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41)
at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101)
at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84)
at org.bson.BsonDocument.parse(BsonDocument.java:62)
at io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope.apply(UnwrapFromMongoDbEnvelope.java:45)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:218)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:194)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2) This time, I changed elasticsearch.properties and left connect-mongo-source.properties unchanged.
$ cat connect-mongo-source.properties
name=mongodb-source-connector
connector.class=io.debezium.connector.mongodb.MongoDbConnector
mongodb.hosts=''
initial.sync.max.threads=1
tasks.max=1
mongodb.name=higee
$ cat elasticsearch.properties
name=elasticsearch-sink
connector.class = io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=higee.higee.higee
key.ignore=true
connection.url=''
type.name=kafka-connect
transforms=unwrap
transforms.unwrap.type = io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope
I got the following error.
ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.bson.BsonInvalidOperationException: Document does not contain key $set
at org.bson.BsonDocument.throwIfKeyAbsent(BsonDocument.java:844)
at org.bson.BsonDocument.getDocument(BsonDocument.java:135)
at io.debezium.connector.mongodb.transforms.UnwrapFromMongoDbEnvelope.apply(UnwrapFromMongoDbEnvelope.java:53)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:480)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
3) I changed test.avsc and ran Logstash. I got no error message, but the result is not what I expected: the origin, salary, and name fields are all empty, even though they were given non-null values. I was even able to read the data correctly through the console consumer.
$ cat test.avsc
>>>
{
"type" : "record",
"name" : "MongoEvent",
"namespace" : "higee.higee",
"fields" : [ {
"name" : "_id",
"type" : {
"type" : "record",
"name" : "HigeeEvent",
"fields" : [ {
"name" : "$oid",
"type" : "string"
}, {
"name" : "salary",
"type" : "long"
}, {
"name" : "origin",
"type" : "string"
}, {
"name" : "name",
"type" : "string"
} ]
}
} ]
}
$ cat logstash3.conf
>>>
input {
kafka {
bootstrap_servers => ["localhost:9092"]
topics => ["higee.higee.higee"]
auto_offset_reset => "earliest"
codec => avro {
schema_uri => "./test.avsc"
}
}
}
output {
stdout {
codec => rubydebug
}
}
$ bin/logstash -f logstash3.conf
>>>
{
"@version" => "1",
"_id" => {
"salary" => 0,
"origin" => "",
"$oid" => "",
"name" => ""
},
"@timestamp" => 2018-04-25T09:39:07.962Z
}