
I am using OSSEC to collect logs and logstash-forwarder to ship the JSON logs to Logstash. This is my Logstash configuration.

input {
  lumberjack {
    port => 10516
    type => "lumberjack"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    codec => json
  }
}

filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}

I want to extract the host shown in parentheses in the "location" field and create a dedicated field for it, because Logstash currently sees only the OSSEC server as the source host (it is the machine forwarding the logs). Below is a sample Logstash output.

{
  "_index": "logstash-2015.09.23",
  "_type": "ossec-alerts",
  "_id": "AU_4Q1Hc5OjGfEBnRiWa",
  "_score": null,
  "_source": {
    "rule": {
      "level": 3,
      "comment": "Nginx error message.",
      "sidid": 31301
    },
    "srcip": "192.168.192.10",
    "location": "(logstash) 192.168.212.104->/var/log/nginx/error.log",
    "full_log": "2015/09/23 11:33:24 [error] 1057#0: *562 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.192.10, server: _, request: \"POST /elasticsearch/.kibana/__kibanaQueryValidator/_validate/query?explain=true&ignore_unavailable=true HTTP/1.1\", upstream: \"http://[::1]:5601/elasticsearch/.kibana/__kibanaQueryValidator/_validate/query?explain=true&ignore_unavailable=true\", host: \"192.168.212.104\", referrer: \"http://192.168.212.104/\"",
    "@version": "1",
    "@timestamp": "2015-09-23T03:33:25.588Z",
    "type": "ossec-alerts",
    "file": "/var/ossec/logs/alerts/alerts.json",
    "host": "ossec",
    "offset": "51048"
  },
  "fields": {
    "@timestamp": [
      1442979205588
    ]
  },
  "sort": [
    1442979205588
  ]
}

2 Answers


Once the json{} filter has run, you end up with a set of fields. You can then apply further filters to those fields, including grok{}, to create even more fields!
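As a rough illustration of that two-step flow (the field names and sample values come from the event shown in the question; this is a Python sketch, not Logstash's actual implementation):

```python
import json
import re

# A trimmed-down OSSEC alert line as shipped by logstash-forwarder (JSON codec).
raw = '{"location": "(logstash) 192.168.212.104->/var/log/nginx/error.log", "host": "ossec"}'

# Step 1: the json{} filter decodes the message into fields.
event = json.loads(raw)

# Step 2: a grok-like pattern pulls more fields out of one of them.
# Regex equivalent of \(%{HOST:host}\) %{IP:srcip}->%{PATH:path}
m = re.match(r'\(([\w.-]+)\) (\d{1,3}(?:\.\d{1,3}){3})->(\S+)', event["location"])
if m:
    event["host"], event["srcip"], event["path"] = m.groups()

print(event["host"])   # logstash
print(event["srcip"])  # 192.168.212.104
print(event["path"])   # /var/log/nginx/error.log
```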

answered 2015-09-23T07:57:35.287

What you need is the grok filter. You can use the grok debugger to find the best pattern. The following pattern should work for your location field:

\(%{HOST:host}\) %{IP:srcip}->%{PATH:path}

In the Logstash filter section:

grok {
    match => { "location" => "\(%{HOST:host}\) %{IP:srcip}->%{PATH:path}" }
    overwrite => [ "host", "srcip" ]
}

overwrite is necessary because you already have the fields host and srcip.
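A rough Python sketch of what overwrite changes (field names taken from the sample event; Logstash's internals differ, but the effect on the fields is the same):

```python
import re

# Existing fields on the event, as in the sample output above.
event = {"host": "ossec", "srcip": "192.168.192.10",
         "location": "(logstash) 192.168.212.104->/var/log/nginx/error.log"}

# Regex equivalent of \(%{HOST:host}\) %{IP:srcip}->%{PATH:path}
m = re.match(r'\(([\w.-]+)\) (\d{1,3}(?:\.\d{1,3}){3})->(\S+)', event["location"])
captured = dict(zip(["host", "srcip", "path"], m.groups()))

# Without overwrite, grok appends: a field that already exists becomes an array.
appended = {k: ([event[k], v] if k in event else v) for k, v in captured.items()}

# With overwrite => [ "host", "srcip" ], the captures replace the old values.
event.update(captured)

print(appended["host"])  # ['ossec', 'logstash']
print(event["host"])     # logstash
```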

answered 2015-09-23T07:59:27.283