
I'm running Filebeat to ship the logs of a Java service that runs in a container. The host runs various other containers as well, and the same Filebeat daemon collects the logs of all containers running on the host. Filebeat forwards the logs to Logstash, which dumps them into Elasticsearch.

I'm trying to use Filebeat's multiline feature to combine the lines of a Java exception into a single log entry, using the following Filebeat configuration:

filebeat:
  prospectors:
    # container logs
    -
      paths:
        - "/log/containers/*/*.log"
      document_type: containerlog
      multiline:
        pattern: "^\t|^[[:space:]]+(at|...)|^Caused by:"
        match: after

output:
  logstash:
    hosts: ["{{getv "/logstash/host"}}:{{getv "/logstash/port"}}"]

Example of a Java stack trace that should be aggregated into a single event:

This stack trace is a copy of a docker log entry (taken after running docker logs java_service):

[2016-05-25 12:39:04,744][DEBUG][action.bulk              ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}
MapperParsingException[Field name [events.created] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Nevertheless, with the Filebeat configuration shown above, each line of the stack trace still shows up as a separate event in Elasticsearch.

Any idea what I'm doing wrong? Note also that the multiline aggregation cannot happen on the Logstash side, since I need Filebeat to ship logs from multiple files.

Versions

FILEBEAT_VERSION 1.1.0


1 Answer


Stumbled upon this issue today as well.

This is what worked for me (filebeat.yml):

filebeat.prospectors:
- type: log
  # Treat indented "at ..."/"..." frames and "Caused by:" lines as
  # continuations of the previous line.
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: false
  multiline.match: after
  paths:
    - '/var/lib/docker/containers/*/*.log'
  # Decode the Docker json-file entries; multiline then applies to the
  # "log" field rather than to the raw JSON line.
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
output.elasticsearch:
  hosts: ["es-client.es-cluster:9200"]
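
The json.* settings are the important part for container logs. Docker's json-file logging driver wraps each line of container output in a JSON object, so a stack-trace line sits on disk roughly like this (a reconstructed example, not copied from a real host):

{"log":"\tat org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)\n","stream":"stdout","time":"2016-05-25T12:39:04.744Z"}

Setting json.message_key: log tells Filebeat to decode the JSON first and apply the multiline pattern to the log field, instead of to the raw JSON-wrapped line where a ^-anchored pattern can never match.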

I'm using Filebeat 6.2.2 and sending the logs directly to Elasticsearch.
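
If you want to sanity-check the pattern outside of Filebeat, here is a minimal Go sketch (Filebeat's regular-expression support is based on Go's regexp syntax); the sample lines are adapted from the stack trace in the question:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The multiline pattern from the filebeat.yml above.
	re := regexp.MustCompile(`^[[:space:]]+(at|\.{3})\b|^Caused by:`)

	// true  => with negate: false and match: after, the line is appended
	//          to the previous event
	// false => the line starts a new event
	lines := []string{
		"MapperParsingException[Field name [events.created] cannot contain '.']",
		"    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)",
		"\tat java.lang.Thread.run(Thread.java:745)",
		"Caused by: java.io.IOException", // hypothetical "Caused by:" line
		"[2016-05-25 12:39:04,744][DEBUG][action.bulk] next event",
	}
	for _, l := range lines {
		fmt.Printf("%-5v %q\n", re.MatchString(l), l)
	}
}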

answered 2018-03-20T10:28:29.040