
Trying to configure Suricata v2.0.8 with ElasticSearch (v1.5.2), Logstash (v1.4.2), and Kibana (v4.0.2) on Mac OS X 10.10.3 Yosemite.

suricata.yaml:

# Extensible Event Format (nicknamed EVE) event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
            # custom allows additional http fields to be included in eve-log
            # the example below adds three additional fields when uncommented
            #custom: [Accept-Encoding, Accept-Language, Authorization]
        - dns
        - tls:
            extended: yes     # enable this for extended logging information
        - files:
            force-magic: yes   # force logging magic on all logged files
            force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
        #- smtp
        #- flow

logstash.conf:

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    sincedb_path => ["/var/lib/logstash/"]
    codec =>   json
    type => "SuricataIDPS"
    start_position => "beginning"
  }

}

filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type']=event['fileinfo']['magic'].to_s.split(',')[0]; end;"
    }
  }

  if [src_ip]  {
    geoip {
      source => "src_ip"
      target => "geoip"
      #database => "/usr/local/opt/logstash/libexec/vendor/geoip/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip]  {
        geoip {
          source => "dest_ip"
          target => "geoip"
          #database => "/usr/local/opt/logstash/libexec/vendor/geoip/GeoLiteCity.dat"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}

output {
  elasticsearch {
    host => localhost
    #protocol => http
  }
}
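The ruby filter in this config derives fileinfo.type by taking the first comma-separated token of the libmagic description. The same transformation in plain Python, for illustration (the magic string below is a made-up example):

```python
def fileinfo_type(magic):
    # Mirror of the ruby filter above: fileinfo.type is the first
    # comma-separated token of the libmagic description string.
    return str(magic).split(',')[0]

print(fileinfo_type("PE32 executable (GUI) Intel 80386, for MS Windows"))
# PE32 executable (GUI) Intel 80386
```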

Suricata logs all events to eve.json successfully. When I open Kibana in the browser, I don't see a dashboard or any information from Suricata, so I assume Logstash is not reading from eve.json, or is not passing the data on to Elasticsearch (or both). Is there any way to check what is going on?


2 Answers


Turn on debug output in Logstash:

output {
  stdout {
    codec => rubydebug
  }
}

Also, try running queries against Elasticsearch directly (with curl) instead of through Kibana.
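For instance, listing the indices shows whether Logstash has created a logstash-YYYY.MM.dd index at all. A sketch of such a check in Python, assuming the default localhost:9200 endpoint (equivalent to `curl 'localhost:9200/_cat/indices?v'`):

```python
import urllib.request

def es_query(path, host="http://localhost:9200"):
    """Fetch a diagnostic endpoint from Elasticsearch (default port assumed)."""
    try:
        with urllib.request.urlopen(host + path, timeout=5) as resp:
            return resp.read().decode("utf-8")
    except OSError as exc:
        return "not reachable: {}".format(exc)

# A logstash-YYYY.MM.dd index should appear here if ingestion works
print(es_query("/_cat/indices?v"))
```

If the indices exist but Kibana shows nothing, the problem is on the Kibana side; if no index exists, the problem is in the Logstash pipeline.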

Answered 2015-05-24T14:13:59.547

I adapted Suricata logs to the nginx log format, and I can get geoip information in the Suricata logs that way. I do the adaptation with swatch and send the result to a log file that Filebeat is configured to read.

For example: nginx.access.referrer: ET INFO Session Traversal Utility for NAT (STUN Binding Request) [**

nginx.access.geoip.location: {"lon": -119.688, "lat": 45.8696}

Swatch reads the Suricata logs and passes them to a shell script that does the adaptation.

For example: echo "$IP - - [$nd4] \"GET $IP2:$PORT2 --- $TYPE HTTP/1.1\" 777 0 \"$CVE\" \"Mozilla/5.0 (NONE) (NONE) NONE \"" >> /var/log/suricata_mod.log
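The same adaptation can be sketched in Python; the eve.json field names (src_ip, dest_ip, dest_port, proto, alert.signature) are assumed from a typical Suricata alert record, and 777 is the sentinel status code from the echo line above:

```python
def eve_alert_to_nginx(alert):
    # Rewrite a Suricata alert as a fake nginx access-log line, mirroring
    # the echo command above; 777 and the Mozilla string are sentinels.
    return ('{src_ip} - - [{ts}] "GET {dest_ip}:{dest_port} --- {proto} '
            'HTTP/1.1" 777 0 "{signature}" "Mozilla/5.0 (NONE) (NONE) NONE"'
            ).format(
        src_ip=alert["src_ip"],
        ts=alert["timestamp"],
        dest_ip=alert["dest_ip"],
        dest_port=alert["dest_port"],
        proto=alert["proto"],
        signature=alert["alert"]["signature"],
    )

# Synthetic alert record for illustration
sample = {
    "timestamp": "24/Oct/2017:13:08:55 +0000",
    "src_ip": "10.0.0.1", "dest_ip": "10.0.0.2", "dest_port": 3478,
    "proto": "UDP",
    "alert": {"signature": "ET INFO Session Traversal Utility for NAT "
                           "(STUN Binding Request)"},
}
print(eve_alert_to_nginx(sample))
```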

Then configure filebeat.yml:

  - document_type: nginx-access
    paths:
      - /var/log/suricata_mod.log

Restart Filebeat.

Finally, configure Logstash:

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[$
      remove_field => "message"
    }

    mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
    }

    date {
      match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[nginx][access][time]"
    }

    useragent {
      source => "[nginx][access][agent]"
      target => "[nginx][access][user_agent]"
      remove_field => "[nginx][access][agent]"
    }

    geoip {
      source => "[nginx][access][remote_ip]"
      target => "[nginx][access][geoip]"
      database => "/opt/GeoLite2-City.mmdb"
    }
  }
}

output {
  elasticsearch {
    hosts => [ "xxx.xxx.xxx.xxx:9200" ]
    manage_template => false
    document_type => "%{[@metadata][type]}"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}

Then restart Logstash. Create a filebeat-* index pattern in Kibana. Done.
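For reference, the `dd/MMM/YYYY:H:m:s Z` pattern in the date filter above is the standard nginx time-local format; an equivalent parse in Python's strptime notation, illustrative only:

```python
from datetime import datetime

# "dd/MMM/YYYY:H:m:s Z" (Joda-Time, as used by the Logstash date filter)
# corresponds to this strptime format string
ts = datetime.strptime("24/Oct/2017:13:08:55 +0000", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2017-10-24T13:08:55+00:00
```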

Answered 2017-10-24T13:08:55.770