
I've been using Logstash for a while now, with great success on Apache access logs and the occasional MySQL log. I've just started using it for qmail logs, but I'd like a better way to group qmail log entries by qmail ID and to be able to track bounces and other delivery failures and statuses. I've seen some material on Postfix, but nothing for qmail.

Has anyone used Logstash with qmail like this? What does your Logstash configuration look like? What do your Kibana dashboards look like?

Any help would be appreciated.

Here are some sample qmail log lines:

Oct 15 09:26:08 imappop1-mail qmail: 1413379568.510987 new msg 33592
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.511087 info msg 33592: bytes 10820 from <SmallBusinessLoan.martin.cota-martin.cota=example1.com@example.com> qp 3740 uid 89
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.513616 starting delivery 1314142: msg 33592 to local example1.com-martin.cota@example1.com
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.513686 status: local 1/4 remote 1/120
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576361 delivery 1314142: success: did_0+0+1/
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576491 status: local 0/4 remote 1/120
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576548 end msg 33592
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.579644 new msg 33603
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.579790 info msg 33603: bytes 4370 from <loansfidelity@example2.com> qp 5037 uid 89
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.582804 starting delivery 1314143: msg 33603 to local example3.com-daniel@example3.com
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.582967 status: local 1/4 remote 1/120
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619422 delivery 1314143: success: did_0+0+1/
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619512 status: local 0/4 remote 1/120
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619561 end msg 33603
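To spell out what I mean by the "anatomy", each message walks through this sequence of events (msg 33592 above is one complete example; the angle-bracket names are just my own labels, not qmail terminology):

new msg <msg-id>
info msg <msg-id>: bytes <size> from <<sender>> qp <qp> uid <uid>
starting delivery <delivery-id>: msg <msg-id> to local|remote <recipient>
delivery <delivery-id>: success|deferral|failure: <detail>
end msg <msg-id>

The "status: local x/y remote x/y" lines are just concurrency counters. The part I care about is tying each "delivery ... success/deferral/failure" line back to its "msg" (and therefore the sender) through the delivery id and msg id.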

Ideally, I'd like to be able to track that whole anatomy. Here are the logstash-forwarder config and the Logstash input and filter I have right now:

{
  "network": {
    "servers": [ "192.168.115.61:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure",
        "/var/log/haraka.log",
        "/var/log/maillog"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
    if [type] == "syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
        }
        multiline {
            pattern => "(([^\s]+)Exception.+)|(at:.+)"
            stream_identity => "%{logsource}.%{@type}"
            what => "previous"
        }
        syslog_pri { }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}
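
What I picture adding is a second grok pass over syslog_message when syslog_program is "qmail", plus something that stitches the delivery result back onto the message. A rough, untested sketch of the direction I mean (the qmail_* field names are placeholders I invented, and the aggregate part assumes a Logstash version that ships the aggregate filter plugin and the event.get/event.set API):

filter {
  if [syslog_program] == "qmail" {
    # break the qmail payload into fields; each line type gets its own pattern
    grok {
      match => { "syslog_message" => [
        "%{NUMBER:qmail_ts} new msg %{NUMBER:qmail_msg_id}",
        "%{NUMBER:qmail_ts} info msg %{NUMBER:qmail_msg_id}: bytes %{NUMBER:qmail_bytes:int} from <%{DATA:qmail_from}> qp %{NUMBER:qmail_qp} uid %{NUMBER:qmail_uid}",
        "%{NUMBER:qmail_ts} starting delivery %{NUMBER:qmail_delivery_id}: msg %{NUMBER:qmail_msg_id} to %{WORD:qmail_dest} %{NOTSPACE:qmail_recipient}",
        "%{NUMBER:qmail_ts} delivery %{NUMBER:qmail_delivery_id}: %{WORD:qmail_result}: %{GREEDYDATA:qmail_result_detail}",
        "%{NUMBER:qmail_ts} status: local %{NUMBER:qmail_local}/%{NUMBER:qmail_local_max} remote %{NUMBER:qmail_remote}/%{NUMBER:qmail_remote_max}",
        "%{NUMBER:qmail_ts} end msg %{NUMBER:qmail_msg_id}"
      ] }
      tag_on_failure => [ "_qmail_grok_fail" ]
    }
    # remember msg id + recipient when a delivery starts, keyed on the delivery id
    # (aggregate needs a single filter worker to be reliable)
    if [qmail_recipient] {
      aggregate {
        task_id => "%{qmail_delivery_id}"
        code => "map['msg_id'] = event.get('qmail_msg_id'); map['recipient'] = event.get('qmail_recipient')"
        map_action => "create"
      }
    }
    # copy them onto the matching success/deferral/failure event and close the task
    if [qmail_result] {
      aggregate {
        task_id => "%{qmail_delivery_id}"
        code => "event.set('qmail_msg_id', map['msg_id']); event.set('qmail_recipient', map['recipient'])"
        map_action => "update"
        end_of_task => true
        timeout => 300
      }
    }
  }
}

With qmail_msg_id, qmail_recipient and qmail_result on the same event, a Kibana dashboard could then break deliveries down by result, sender and recipient. Is that roughly how others are doing it, or is there a cleaner approach?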