
When log entries share the same timestamp, some of them show up out of order. I have read through several threads on this topic but could not find a proper solution. I believe something can be done about it, though, since this issue has been reported as far back as Logstash 1.

Basically, I use Logstash to listen for incoming TCP on a port with the json_lines codec [using logstash-logback-encoder here]. At the moment the filter section is empty, and I output the data to Elasticsearch and to stdout (rubydebug codec).
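For context, the setup described above corresponds roughly to the following Logstash configuration (a sketch only; the port number and Elasticsearch host are assumptions, since the question does not name them):

```
input {
  tcp {
    port  => 4560          # assumed port
    codec => json_lines
  }
}

filter {
  # empty, as described above
}

output {
  elasticsearch { hosts => ["localhost:9200"] }  # assumed host
  stdout { codec => rubydebug }
}
```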

The console log in my IDE:

2017-Aug-30 15:15:30.191 [main] INFO  com.sbsatter.logbackLogstash.App - Testing LOG Order;
Expected Order: 1 => 10 
2017-Aug-30 15:15:30.193 [main] INFO  com.sbsatter.logbackLogstash.App - 1 
2017-Aug-30 15:15:30.194 [main] INFO  com.sbsatter.logbackLogstash.App - 2 
...
2017-Aug-30 15:15:30.195 [main] INFO  com.sbsatter.logbackLogstash.App - 9 
2017-Aug-30 15:15:30.195 [main] INFO  com.sbsatter.logbackLogstash.App - 10 
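These lines reach Logstash over TCP via logstash-logback-encoder; a minimal `logback.xml` for that wiring might look like this (a sketch, assuming the standard `LogstashTcpSocketAppender`; the destination port and appender name are assumptions):

```xml
<configuration>
  <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- assumed host:port; must match the Logstash tcp input -->
    <destination>127.0.0.1:4560</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="stash"/>
  </root>
</configuration>
```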

The terminal running Logstash correspondingly reads:

{
    "@timestamp" => 2017-08-30T09:15:30.197Z,
          "port" => 47820,
      "@version" => "1",
          "host" => "127.0.0.1",
          "time" => "2017-08-30 15:15:30.191+0600",
       "message" => "Testing LOG Order;\nExpected Order: 1 => 10"
}
{
    "@timestamp" => 2017-08-30T09:15:30.198Z,
          "port" => 47820,
      "@version" => "1",
          "host" => "127.0.0.1",
          "time" => "2017-08-30 15:15:30.193+0600",
       "message" => "1"
}
{
    "@timestamp" => 2017-08-30T09:15:30.198Z,
          "port" => 47820,
      "@version" => "1",
          "host" => "127.0.0.1",
          "time" => "2017-08-30 15:15:30.194+0600",
       "message" => "2"
}
.....
{
    "@timestamp" => 2017-08-30T09:15:30.216Z,
          "port" => 47820,
      "@version" => "1",
          "host" => "127.0.0.1",
          "time" => "2017-08-30 15:15:30.195+0600",
       "message" => "9"
}
{
    "@timestamp" => 2017-08-30T09:15:30.224Z,
          "port" => 47820,
      "@version" => "1",
          "host" => "127.0.0.1",
          "time" => "2017-08-30 15:15:30.195+0600",
       "message" => "10"
}

However, Kibana displays the following :frowning:: [screenshot: results in Kibana]

Although the difference here is not huge, with live logs the changed order makes the entries read as meaningless. How can I fix this?

Note: I have also asked this question on the Elasticsearch forum. I went through the documentation looking for anything related to this, but to no avail.

