I have logs, and I want to grep them and send to Elasticsearch only the lines containing error="400 - Rejected by Elasticsearch" or "failed to parse field", ignoring all other logs.
Log 1:

2022-02-04 23:56:43 +0530 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: invalid_index_name_exception [reason]: 'Invalid index name [-2022.02.04], must not start with '_', '-', or '+''" location=nil

Log 2:

2022-02-03 01:42:40 +0530 [warn]: #0 dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch [error type]: mapper_parsing_exception [reason]: 'failed to parse field [log] of type [text] in document with id 'BnYRvH4BMXwCDVGBTa8Z'. Preview of field's value: '''" location=nil
My config:
fluentd.conf: |-
  <filter kubernetes.var.log.containers.fluentd**>
    @type grep
    <regexp>
      key log
      pattern /(^Rejected|^error="400|^mapper_parsing_exception)/
    </regexp>
  </filter>
  <match kubernetes.var.log.containers.fluentd**>
    @type elasticsearch
    @log_level info
    suppress_type_name true
    host "eslogging.abc.com"
    port 80
    reload_connections false
    logstash_format true
    logstash_prefix "fluentd"
    reconnect_on_error true
    request_timeout 2147483648
    retry_max_times 3
    num_threads 4
    compression_level best_compression
    compression gzip
    include_timestamp true
    utc_index false
    time_key_format "%Y-%m-%dT%H:%M:%S.%N%z"
    time_key time
    reload_on_failure true
    <buffer>
      @type file
      path /var/log/fluentd-buffers/cluster-logging-fluentd.buffer
      flush_mode interval
      retry_type exponential_backoff
      flush_thread_count 4
      flush_interval 3s
      retry_forever true
      retry_max_interval 30
      chunk_limit_size 8MB
      queue_limit_length 20
      overflow_action block
    </buffer>
  </match>
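
For reference, the pattern in the grep filter above anchors every alternative with ^, so it only matches records whose log field begins with Rejected, error="400, or mapper_parsing_exception; in the sample lines above the field begins with a timestamp, so nothing would match. A minimal sketch of a filter that keeps only records containing either target phrase, assuming the whole warning line lands in the log key, could look like this:

  <filter kubernetes.var.log.containers.fluentd**>
    @type grep
    <regexp>
      key log
      # unanchored alternation: match either phrase anywhere in the line
      pattern /(400 - Rejected by Elasticsearch|failed to parse field)/
    </regexp>
  </filter>

A single <regexp> block with an alternation acts as an OR; records whose log field matches neither phrase are dropped before they reach the <match> block. Multiple <regexp> blocks would be AND-ed together, which is why both phrases go into one pattern here.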