
I have configured Elasticsearch to write its logs to a file. I've noticed that when Elasticsearch emits a DEBUG log entry, it writes the log message first and then the entire stack trace, separated by newlines.

I only want the log message to appear in my log file; I don't want to see the stack traces.

Here is a sample log entry:

[2013-10-01 09:02:10,695][DEBUG][action.bulk] [Cap 'N Hawk] [metrics-2013.10.01][2] failed to execute bulk item (index) index {[metrics-2013.10.01][metrics][XTvepSybQZaUed6h4Xupag], source[{"..."}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [deviceTelephonyID]
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:396)
    at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:599)
    at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:467)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:507)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:306)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:386)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.NumberFormatException: For input string: "NOTELEPHONY"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1241)
    at java.lang.Double.parseDouble(Double.java:540)
    at org.elasticsearch.common.xcontent.support.AbstractXContentParser.doubleValue(AbstractXContentParser.java:95)
    at org.elasticsearch.index.mapper.core.DoubleFieldMapper.innerParseCreateField(DoubleFieldMapper.java:308)
    at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:167)
    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:385)
    ... 12 more

I have tried adding:

file:
  type: dailyRollingFile
  file: ${path.logs}/es_log.log
  datePattern: "'.'yyyy-MM-dd"
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%p][%c] %m%n"
    alwaysWriteExceptions: false
    replace: 
      regex: "(\n.*)*"
      replacement: "" 

to the Elasticsearch logging.yml configuration, following:

https://logging.apache.org/log4j/2.x/manual/layouts.html

I was hoping to replace everything after the first newline in a single log entry with an empty string, leaving only:

[2013-10-01 09:02:10,695][DEBUG][action.bulk] [Cap 'N Hawk] [metrics-2013.10.01][2] failed to execute bulk item (index) index {[metrics-2013.10.01][metrics][XTvepSybQZaUed6h4Xupag], source[{"..."}]}

Unfortunately, it doesn't seem to work. Can anyone spot a problem with this approach?

This post: Log4j formatting: Is it possible to truncate stack traces?

seems to have found an alternative solution, but I'm not sure whether it can be configured with Elasticsearch...


1 Answer


To disable the printing of exceptions, you need to configure the layout correctly. The pattern to use in the layout is %xEx{none}. Just put it anywhere in the layout.
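A minimal sketch of what that would look like in the layout from the question, with %xEx{none} appended to the conversion pattern (this assumes the pattern-layout converters from the linked Log4j 2 manual are actually available in your Elasticsearch version; older releases shipped Log4j 1.2, which does not support this converter):

```yaml
file:
  type: dailyRollingFile
  file: ${path.logs}/es_log.log
  datePattern: "'.'yyyy-MM-dd"
  layout:
    type: pattern
    # %xEx{none} suppresses the appended stack trace for this appender
    conversionPattern: "[%d{ISO8601}][%p][%c] %m%xEx{none}%n"
```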

I'm not sure why the replace isn't working; my guess is that you either need to make it a multiline regex, or that the regex is applied only to the message itself, not to the exception.
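As a quick sanity check on the regex itself — illustrated here in Python, where, as in Java's regex engine, `.` does not match newlines by default — the pattern from the question does strip everything after the first newline when applied to the full multi-line string, which supports the guess that the problem is where Log4j applies the replacement, not the regex:

```python
import re

# The regex from the question: each iteration matches a newline plus
# the remainder of that line, so repetition consumes every line after
# the first one.
pattern = r"(\n.*)*"

entry = (
    "[2013-10-01 09:02:10,695][DEBUG][action.bulk] failed to execute bulk item\n"
    "org.elasticsearch.index.mapper.MapperParsingException: failed to parse\n"
    "    at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse"
)

# Only the first line of the entry survives the substitution.
print(re.sub(pattern, "", entry))
```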

That said, I don't think suppressing exceptions in the logs is a good approach.

I would either configure the system not to log these exceptions (by suppressing the output of this particular logger), or change the code to handle non-numeric input more gracefully. If you disable all exceptions, you will miss the important/real errors as well.
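For the first option, one way to suppress the output of that particular logger is through the logger section of Elasticsearch's logging.yml. The logger name here is an assumption taken from the [action.bulk] category shown in the sample entry:

```yaml
logger:
  # Raise the threshold for this noisy logger so per-item DEBUG
  # failures (and their stack traces) are not written at all.
  action.bulk: WARN
```

This keeps full stack traces for everything else, so genuine errors elsewhere still show up with their traces intact.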

Answered 2013-10-01T15:17:59.957