
I have set up an ELK server with all components on the same host. However, I am trying to have each client pick up my logstash-syslog.conf from an NFS-based central location, so that I don't need a separate Logstash configuration on every client.

1) My logstash-syslog.conf file

input {
  file {
    path => [ "/var/log/messages" ]
    type => "test"
  }
}

filter {
  if [type] == "test" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if "automount" in [message] {
    elasticsearch {
      hosts => "oida-elk:9200"
      #index => "newmsglog-%{+YYYY.MM.dd}"
      index => "%{type}-%{+YYYY.MM.dd}"
      document_type => "msg"
    }
    stdout {}
  }
}

2) When I run the following on a client to ship data, it starts the pipeline thread and then gets stuck there:

[Myclient1 ~]# /home/data/logstash-6.0.0/bin/logstash -f /home/data/logstash-6.0.0/conf.d/ --path.data=/tmp/klm

3) After running the above command, it prints the following log lines and then makes no further progress:

[2018-03-05T21:20:51,014][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://oida-elk:9200/"}
[2018-03-05T21:20:51,078][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-03-05T21:20:51,085][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-03-05T21:20:51,101][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//oida-elk:9200"]}
[2018-03-05T21:20:51,297][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2ea3b180@/home/data/logstash-6.0.0/logstash-core/lib/logstash/pipeline.rb:290 run>"}
[2018-03-05T21:20:51,746][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>"main"}
[2018-03-05T21:20:51,800][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}

Please help / any suggestions would be appreciated.


1 Answer


Can you run Logstash with the -t flag to validate the configuration? Also, when using the -f flag you must pass the full path to the config file. Can you tail the input file to check whether it is actually receiving new entries? I would add that using Filebeat to read the files and forward them to Logstash is a better option than running Logstash on every client.
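As a minimal sketch of the checks above, using the paths from the question (the exact config filename under conf.d is an assumption):

```shell
# Validate the pipeline configuration without starting it:
# -t (--config.test_and_exit) parses the config and reports any syntax errors.
/home/data/logstash-6.0.0/bin/logstash -t \
  -f /home/data/logstash-6.0.0/conf.d/logstash-syslog.conf

# Confirm the input file is actually receiving new lines; by default the
# file input only picks up entries appended after Logstash starts watching.
tail -f /var/log/messages
```

Note that the file input remembers its read position in a sincedb file, so an already-read file will not be re-ingested on restart; to read an existing file from the top you can set `start_position => "beginning"` in the file input block.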

Answered 2018-03-05T18:58:11.617