
On running this query:

{ "start_absolute":1359695700000, "end_absolute":1422853200000, "metrics":[{"tags":{"Building_id":["100"]},"name":"meterreadings","group_by":[{"name":"time","group_count":"12","range_size":{"value":"1","unit":"MONTHS"}}],"aggregators":[{"name":"sum","align_sampling":true,"sampling":{"value":"1","unit":"Months"}}]}]}

I am getting the following response:

500 {"errors":["Too many open files"]}

In this link it is written that the size of file-max should be increased.

My file-max output is:

cat /proc/sys/fs/file-max
382994

It is already very large. Do I need to increase its limit further?
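
Note that fs.file-max is the system-wide ceiling on open file descriptors; each process additionally has its own per-process limit, which is usually far lower (often 1024 or 4096) and is typically the limit that is actually exhausted first. A quick way to check it, where <PID> stands for the KairosDB process ID:

ulimit -n
grep 'open files' /proc/<PID>/limits

If the per-process soft limit is low, raising it (for example via /etc/security/limits.conf) is usually more relevant than raising file-max.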


1 Answer


What version are you using? Are you using many group-bys in your query? You may need to restart KairosDB as a workaround.
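
If you go the restart route, a minimal sketch, assuming the stock launcher script shipped in the KairosDB distribution (the install path here is an assumption; adjust to your setup):

# stop and restart the KairosDB daemon to release the leaked descriptors
/opt/kairosdb/bin/kairosdb.sh stop
/opt/kairosdb/bin/kairosdb.sh start

This only clears the currently leaked handles; they will accumulate again until a fixed version is deployed.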

Can you check whether you have deleted (ghost) file handles (replace <PID> with the KairosDB process ID in the command line below)?

ls -l /proc/<PID>/fd | grep kairos_cache | grep -v '(delete)' | wc -l  
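
For example, a sketch assuming a single KairosDB JVM on the host (descriptors whose backing file has been removed carry a '(deleted)' suffix in /proc/<PID>/fd):

PID=$(pgrep -f kairosdb)
# cache descriptors still backed by a live file (what the command above counts)
ls -l /proc/$PID/fd | grep kairos_cache | grep -v '(delete)' | wc -l
# ghost descriptors whose cache file has already been deleted
ls -l /proc/$PID/fd | grep kairos_cache | grep '(deleted)' | wc -l

A large or steadily growing count points to the leaked handles fixed in the versions below.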

Fixes for unclosed file handles went into 0.9.5. Further fixes are pending for the next release (1.0.1).

See:
https://github.com/kairosdb/kairosdb/pull/180
https://github.com/kairosdb/kairosdb/issues/132
https://github.com/kairosdb/kairosdb/issues/175

Answered 2015-08-18T20:17:13.180