I am trying to parse a log file with grok. The configuration I am using lets me parse single-line events, but not multiline ones (Java stack traces).
# What I get in Kibana for a single-line event:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "mluzA57TnCpH-XBRbeg",
  "_score": null,
  "_source": {
    "message": " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.310Z",
    "path": "/root/test2.log",
    "time": "2014-01-14 11:09:35,962",
    "main": "main",
    "loglevel": "INFO",
    "class": "api.batch.ThreadPoolWorker",
    "mydata": " user.country=US"
  },
  "sort": [
    1423129101310,
    1423129101310
  ]
}
# What I get for a multiline event with a stack trace:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "9G6LsSO-aSpsas_jOw",
  "_score": null,
  "_source": {
    "message": "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.380Z",
    "path": "/root/test2.log",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1423129101380,
    1423129101380
  ]
}
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}" ]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "194.3.227.23"
  }
  # stdout { codec => rubydebug }
}
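For context on what the multiline codec above is supposed to do: with `negate => true` and `what => "previous"`, any line that does NOT match the start pattern should be folded into the previous event. Here is a rough Python sketch of that behavior (my own illustration, not Logstash code; the regex is only an approximation of `%{TIMESTAMP_ISO8601}`):

```python
import re

# Approximation of the codec's start pattern: "^ - %{TIMESTAMP_ISO8601} "
START = re.compile(r"^ - \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} ")

def merge_multiline(lines):
    """negate => true, what => "previous": a line that does NOT match
    the start pattern is appended to the previous event."""
    events = []
    for line in lines:
        if START.match(line) or not events:
            events.append(line)          # a new event begins here
        else:
            events[-1] += "\n" + line    # continuation of the previous event
    return events

lines = [
    " - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) ...",
    "java.sql.SQLException: ORA-28001: the password has expired",
    "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
]
print(len(merge_multiline(lines)))  # the three lines collapse into 1 event
```

If the codec were merging correctly, the `\tat ...` line would arrive at the grok filter as part of one big message rather than as its own event, as in the Kibana output above.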
Can anyone tell me what I am doing wrong in my configuration file? Thanks. Here is a sample from my log file:

 - 2014-01-14 11:09:36,447 [main] INFO (support.context.ContextFactory) Creating default context
 - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) using user cisuser and driver oracle.jdbc.driver.OracleDriver
java.sql.SQLException: ORA-28001: the password has expired
	at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
	at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
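For comparison, here is a rough Python equivalent of the leading part of my grok pattern (my own approximation; grok's `TIMESTAMP_ISO8601`, `JAVACLASS`, etc. are broader than these regexes), showing which fields it pulls from a single-line event:

```python
import re

# Approximation of: " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\]
#   %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}"
GROK = re.compile(
    r"^ -\s+(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
    r" \[(?P<main>\w+)\] (?P<loglevel>\w+)\s+"
    r"\((?P<class>[\w.]+)\) (?P<mydata>.*)",
    re.DOTALL,
)

line = " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US"
m = GROK.match(line)
print(m.group("class"))  # api.batch.ThreadPoolWorker
```

A bare stack-trace line such as `\tat oracle.jdbc.driver.DatabaseError.throwSqlException(...)` does not match this shape at all, which is consistent with the `_grokparsefailure` tag in the Kibana output above.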
> EDIT: here is the latest configuration I am using:
> https://gist.github.com/anonymous/9afe80ad604f9a3d3c00#file-output-L1