My ELK setup is as follows:
Kibana <-- ElasticSearch <-- Logstash <-- FileBeat (collecting logs from various sources)
This setup falls over when the inflow of messages increases. From what I have read online, people suggest putting Redis into this setup to buffer messages until ES can consume them. So I now want a setup like this:
Kibana <-- ElasticSearch <-- Logstash <-- Redis <-- FileBeat (collecting logs from various sources)
I want Redis to act as an intermediary that holds the messages so that the consumer side does not hit a bottleneck. But here the Redis dump.rdb keeps growing, and it does not shrink (no space is freed) once the messages have been consumed by Logstash. Below is my redis.conf (a quick check of the backlog is sketched right after it):

bind host
port port
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /var/run/redis.pid
loglevel notice
logfile "/tmp/redis.log"
databases 16
# RDB snapshot triggers: after 900 s if >= 1 key changed, 300 s / 10 changes, 60 s / 10000 changes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
# AOF is disabled, so persistence relies solely on the RDB snapshots above
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
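
Since appendonly is no, persistence here is RDB-only: dump.rdb is rewritten in full on every background save, so its size reflects whatever is in the dataset at snapshot time. A quick way to check whether the filebeat list is actually being drained is to watch its length. A minimal sketch in Python with the redis-py client (host, port, db, and the key name are assumptions taken from the configs in this question):

import time

import redis

# Placeholder connection details; substitute the real host/port.
r = redis.Redis(host="host", port=6379, db=0)

while True:
    # LLEN reports how many entries FileBeat has pushed that Logstash
    # has not yet popped. A number that only grows means the consumer
    # side is the bottleneck, and each new RDB snapshot grows with it.
    print("filebeat backlog:", r.llen("filebeat"))
    time.sleep(5)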

Edit: FileBeat config:

filebeat:
  prospectors:
    -
      paths:
        - logPath
      input_type: log
      tail_files: true
output:
  redis:
    host: "host"
    port: port
    save_topology: true
    index: "filebeat"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    rotateeverybytes: 10485760
  level: warning

Logstash config:

input {
  redis {
    host => "host"
    port => "port"
    type => "redis-input"
    data_type => "list"
    key => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["hosts"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Let me know if any more information is needed. TIA!!!

1 Answer

I think your problem may be related to how the messages are being stored in and retrieved from Redis.

Ideally, you should use Redis's List data structure, using LPUSH and LPOP to insert and retrieve messages respectively.
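
For illustration, here is a minimal sketch of that pattern in Python with the redis-py client (host, port, db, and the key name are assumptions mirroring the question's configs):

import redis

r = redis.Redis(host="host", port=6379, db=0)

# Producer side: push a message onto the head of the list.
r.lpush("filebeat", '{"message": "a log line"}')

# Consumer side: pop an entry back off. The pop removes it, so
# consumed messages stop occupying memory and will be absent from
# the next RDB snapshot. For strict FIFO ordering you would pair
# LPUSH with RPOP (or the blocking BRPOP) rather than LPOP.
print(r.lpop("filebeat"))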

answered 2016-06-17T05:20:14.230