On a single-node Elasticsearch + Logstash setup, we tested parsing 20 MB and 200 MB log files into Elasticsearch across three AWS instance types (Medium, Large, and XLarge).
Environment details: Medium instance (3.75 GB RAM, 1 core), storage: 4 GB SSD, 64-bit, network performance: moderate; running Logstash and Elasticsearch.
Scenario 1
**With default settings**
Result:
20 MB logfile: 23 mins (175 events/sec)
200 MB logfile: 3 hrs 3 mins (175 events/sec)
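For a rough sense of scale, 175 events/sec sustained over 3 hrs 3 mins implies the 200 MB file held on the order of 1.9 million events (a back-of-the-envelope check, assuming the rate was roughly steady):

```python
# Back-of-the-envelope: event count implied by the measured rate and duration
events_per_sec = 175                # measured rate on the Medium instance
duration_sec = 3 * 3600 + 3 * 60    # 3 hrs 3 mins
total_events = events_per_sec * duration_sec
print(f"~{total_events:,} events")  # ~1,921,500 events
```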
Added the following to settings:
Java heap size: 2 GB
bootstrap.mlockall: true
indices.fielddata.cache.size: "30%"
indices.cache.filter.size: "30%"
index.translog.flush_threshold_ops: 50000
indices.memory.index_buffer_size: 50%
# Search thread pool
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
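In case it helps, this is how the additions look in the config: the heap is set through the `ES_HEAP_SIZE` environment variable (in `/etc/default/elasticsearch` or `/etc/sysconfig/elasticsearch` for the Debian/RPM packages; adjust for your install), and the rest goes in `elasticsearch.yml` (these are the ES 1.x-era setting names used in these tests):

```yaml
# elasticsearch.yml additions (ES 1.x setting names)
bootstrap.mlockall: true
indices.fielddata.cache.size: "30%"
indices.cache.filter.size: "30%"
index.translog.flush_threshold_ops: 50000
indices.memory.index_buffer_size: 50%
# Search thread pool
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
```

Note that `bootstrap.mlockall: true` only takes effect if the process's memlock ulimit allows it (`ulimit -l unlimited`); otherwise Elasticsearch logs a warning and continues without locking memory.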
**With added settings**
Result:
20 MB logfile: 22 mins (180 events/sec)
200 MB logfile: 3 hrs 7 mins (180 events/sec)
Scenario 2
Environment details: R3 Large instance (15.25 GB RAM, 2 cores), storage: 32 GB SSD, 64-bit, network performance: moderate; running Logstash and Elasticsearch.
**With default settings**
Result:
20 MB logfile: 7 mins (750 events/sec)
200 MB logfile: 65 mins (800 events/sec)
Added the following to settings:
Java heap size: 7 GB
Other parameters same as above
**With added settings**
Result:
20 MB logfile: 7 mins (800 events/sec)
200 MB logfile: 55 mins (800 events/sec)
Scenario 3
Environment details: R3 High-Memory Extra Large (r3.xlarge) instance (30.5 GB RAM, 4 cores), storage: 32 GB SSD, 64-bit, network performance: moderate; running Logstash and Elasticsearch.
**With default settings**
Result:
20 MB logfile: 7 mins (1200 events/sec)
200 MB logfile: 34 mins (1200 events/sec)
Added the following to settings:
Java heap size: 15 GB
Other parameters same as above
**With added settings**
Result:
20 MB logfile: 7 mins (1200 events/sec)
200 MB logfile: 34 mins (1200 events/sec)
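Putting the tuned 200 MB runs side by side, total throughput grows with instance size but the per-core rate varies a lot, which is part of what makes me unsure where the bottleneck is (numbers copied from the results above):

```python
# Tuned-settings throughput for the 200 MB file, from the three scenarios above
runs = [
    ("Medium", 1, 180),     # (name, cores, events/sec)
    ("r3.large", 2, 800),
    ("r3.xlarge", 4, 1200),
]
for name, cores, eps in runs:
    print(f"{name}: {eps} events/sec total, {eps // cores} per core")
```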
I would like to know:
- What is the benchmark for this kind of performance?
- Is my performance at the benchmark or below it?
- Why do I see no difference even after increasing the Elasticsearch JVM heap?
- How can I monitor Logstash and improve its performance?
Thanks for any help on this, as I am new to Logstash and Elasticsearch.