I have configured Elasticsearch to write its logs to a file. I've noticed that when Elasticsearch emits a DEBUG log entry, the log message is written first, followed by the entire stack trace, separated by newlines.
I only want the log message itself to appear in my log file; I don't want to see the stack trace.
Here is an example log entry:
[2013-10-01 09:02:10,695][DEBUG][action.bulk] [Cap 'N Hawk] [metrics-2013.10.01][2] failed to execute bulk item (index) index {[metrics-2013.10.01][metrics][XTvepSybQZaUed6h4Xupag], source[{"..."}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [deviceTelephonyID]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:396)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:599)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:467)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:507)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:306)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:386)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.NumberFormatException: For input string: "NOTELEPHONY"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1241)
at java.lang.Double.parseDouble(Double.java:540)
at org.elasticsearch.common.xcontent.support.AbstractXContentParser.doubleValue(AbstractXContentParser.java:95)
at org.elasticsearch.index.mapper.core.DoubleFieldMapper.innerParseCreateField(DoubleFieldMapper.java:308)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:167)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:385)
... 12 more
I tried adding:
file:
  type: dailyRollingFile
  file: ${path.logs}/es_log.log
  datePattern: "'.'yyyy-MM-dd"
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%p][%c] %m%n"
    alwaysWriteExceptions: false
    replace:
      regex: "(\n.*)*"
      replacement: ""
to the Elasticsearch logging.yml configuration, following:
https://logging.apache.org/log4j/2.x/manual/layouts.html
My expectation was that everything after the first newline within a single log entry would be replaced with an empty string, leaving only:
[2013-10-01 09:02:10,695][DEBUG][action.bulk] [Cap 'N Hawk] [metrics-2013.10.01][2] failed to execute bulk item (index) index {[metrics-2013.10.01][metrics][XTvepSybQZaUed6h4Xupag], source[{"..."}]}
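As far as I can tell, the regex itself should behave that way. Here is a minimal standalone Java sketch I put together just to check the pattern's semantics (the class name and the shortened sample entry are made up for illustration; it has nothing to do with the Elasticsearch or log4j configuration):

import java.util.regex.Pattern;

public class RegexSanityCheck {
    public static void main(String[] args) {
        // Shortened stand-in for one multi-line log entry (message followed by a stack trace).
        String entry = "[2013-10-01 09:02:10,695][DEBUG][action.bulk] failed to execute bulk item (index)\n"
                + "org.elasticsearch.index.mapper.MapperParsingException: failed to parse [deviceTelephonyID]\n"
                + "\tat org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:396)";

        // Same pattern as in the "replace" setting above. By default '.' does not match '\n',
        // so "(\n.*)*" matches from the first newline to the end of the entry.
        String stripped = Pattern.compile("(\\n.*)*").matcher(entry).replaceAll("");

        // Prints only the first line:
        // [2013-10-01 09:02:10,695][DEBUG][action.bulk] failed to execute bulk item (index)
        System.out.println(stripped);
    }
}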
Unfortunately, in the Elasticsearch logging configuration it doesn't seem to work. Can anyone see a problem with this approach?
I seem to have found an alternative solution, but I'm not sure whether it can be configured with Elasticsearch...