
I have successfully set up my system for centralized logging with Elasticsearch + Logstash + Filebeat + Kibana.

I am unable to see the logs in Kibana under the filebeat template index. The problem appeared when I tried to create a Logstash filter to parse my log files properly.

I am using grok patterns, so I first created this pattern file (/opt/logstash/patterns/grok-paterns):

CUSTOMLOG %{TIMESTAMP_ISO8601:timestamp} - %{USER:auth} - %{LOGLEVEL:loglevel} - \[%{DATA:pyid}\]\[%{DATA:source}\]\[%{DATA:searchId}\] - %{GREEDYDATA:logmessage}

This is the Logstash filter (/etc/logstash/conf.d/11-log-filter.conf):

filter {
  if [type] == "log" {
    grok {
       match => { "message" => "%{CUSTOMLOG}" }
       patterns_dir => "/opt/logstash/patterns"
    }
    mutate {
       rename => [ "logmessage", "message" ]
    }

    date {
        timezone => "Europe/London"
        locale => "en"
        match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}

Apparently the parser works fine when I test it from the command line:

[root@XXXXX logstash]# bin/logstash -f test.conf 
Settings: Default pipeline workers: 4
Logstash startup completed

2016-06-03 12:55:57,718 - root - INFO - [27232][service][751282714902528] - here goes my message

{
       "message" => "here goes my message",
      "@version" => "1",
    "@timestamp" => "2016-06-03T11:55:57.718Z",
          "host" => "19598",
     "timestamp" => "2016-06-03 12:55:57,718",
          "auth" => "root",
      "loglevel" => "INFO",
          "pyid" => "27232",
        "source" => "service",
      "searchId" => "751282714902528"
}
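
For reference, the test.conf used above is just the same filter wired to a stdin input and a rubydebug stdout output, roughly like this (a sketch; the actual file may differ slightly):

input {
  stdin { type => "log" }          # paste log lines into the terminal
}

filter {
  if [type] == "log" {
    grok {
      match => { "message" => "%{CUSTOMLOG}" }
      patterns_dir => "/opt/logstash/patterns"
    }
    mutate {
      rename => [ "logmessage", "message" ]
    }
    date {
      timezone => "Europe/London"
      locale => "en"
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}

output {
  stdout { codec => rubydebug }    # pretty-prints each parsed event, as in the output above
}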

However... the logs do not show up in Kibana, and I don't even see any "_grokparsefailure" tags, so I guess the parser is working, but I cannot find the logs anywhere.

What am I doing wrong? Am I forgetting something?

Thanks in advance.

EDIT:

Input (02-beats-input.conf):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
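
For context, the Filebeat side that ships these logs is configured roughly like this (a sketch of the relevant part of filebeat.yml; the path is a placeholder). The document_type is what ends up in the [type] field that the filter's condition checks:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/*.log        # placeholder path
      document_type: log              # sets the "type" field to "log"
output:
  logstash:
    hosts: ["localhost:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]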

Output (30-elasticsearch-output.conf):

output {
 elasticsearch {
   hosts => ["localhost:9200"]
   sniffing => true
   manage_template => false
   index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
   document_type => "%{[@metadata][type]}"
 }
}
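
In case it is relevant, a quick way to check whether events are reaching Elasticsearch at all is via the standard _cat and search APIs:

# list any filebeat-* indices and their document counts
curl 'localhost:9200/_cat/indices/filebeat-*?v'

# fetch the most recent document to inspect its fields
curl 'localhost:9200/filebeat-*/_search?pretty&size=1&sort=@timestamp:desc'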
