We have 60M documents in the index, hosted on a 4-node cluster.
I want to make sure the configuration is optimized for aggregations over these documents.
Here is a sample query:
select * from sources * where (sddocname contains ([{"implicitTransforms": false}]"tweet")) | all(group(n_tA_c) each(output(count() as(count))));
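For reference, one variant I am considering is capping how many groups are computed and returned; this is only a sketch on my side (the max(1000) and precision(10000) values are guesses, not something we have validated), in case it belongs on the checklist:

select * from sources * where (sddocname contains ([{"implicitTransforms": false}]"tweet")) |
    all(group(n_tA_c) max(1000) precision(10000) each(output(count() as(count))));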
The field n_tA_c holds an array of strings. Here is a sample document:
{
    "fields": {
        "add_gsOrd": 63829,
        "documentid": "id:firehose:tweet::815347045032742912",
        "foC": 467,
        "frC": 315,
        "g": 0,
        "ln": "en",
        "m": "ya just wants some fried rice",
        "mTp": 2,
        "n_c_p": [],
        "n_tA_c": [
            "fried",
            "rice"
        ],
        "n_tA_s": [],
        "n_tA_tC": [],
        "sN": "long_delaney1",
        "sT_dlC": 0,
        "sT_fC": 0,
        "sT_lAT": 0,
        "sT_qC": 0,
        "sT_r": 0.0,
        "sT_rC": 467,
        "sT_rpC": 0,
        "sT_rtC": 0,
        "sT_vC": 0,
        "sddocname": "tweet",
        "t": 1483228858608,
        "u": 377606303,
        "v": "false"
    },
    "id": "id:firehose:tweet::815347045032742912",
    "relevance": 0.0,
    "source": "content-root-cluster"
}
n_tA_c is an attribute with fast-search enabled:
field n_tA_c type array<string> {
    indexing: summary | attribute
    attribute: fast-search
}
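On the content-cluster side, the only tuning I have looked at so far is threads-per-search, so that a single heavy grouping query can use more of the 16 cores on a node. The sketch below is what I have in mind; the <tuning>/<searchnode>/<requestthreads>/<persearch> elements and the value 4 are my reading of the docs, not something we currently run:

<content id="content-root-cluster" version="1.0">
    <tuning>
        <searchnode>
            <requestthreads>
                <!-- assumption: let one query fan out across more cores per node -->
                <persearch>4</persearch>
            </requestthreads>
        </searchnode>
    </tuning>
    <!-- existing documents/nodes configuration unchanged -->
</content>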
A simple term-aggregation query does not return within 20 seconds and times out. What additional checklist should we go through to make sure this latency comes down?
$ curl 'http://localhost:8080/search/?yql=select%20*%20from%20sources%20*%20where%20(sddocname%20contains%20(%5B%7B%22implicitTransforms%22%3A%20false%7D%5D%22tweet%22))%20%7C%20all(group(n_tA_c)%20each(output(count()%20as(count))))%3B' | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   270  100   270    0     0     13      0  0:00:20  0:00:20 --:--:--    67
{
    "root": {
        "children": [
            {
                "continuation": {
                    "this": ""
                },
                "id": "group:root:0",
                "relevance": 1.0
            }
        ],
        "errors": [
            {
                "code": 12,
                "message": "Timeout while waiting for sc0.num0",
                "source": "content-root-cluster",
                "summary": "Timed out"
            }
        ],
        "fields": {
            "totalCount": 0
        },
        "id": "toplevel",
        "relevance": 1.0
    }
}
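For completeness, the request-side knobs I can think of are a longer explicit timeout, skipping ranking, and not fetching regular hits. A sketch of what I would try is below; the timeout=60s value, the built-in unranked rank profile, and hits=0 are my assumptions and have not been verified on this cluster:

$ curl 'http://localhost:8080/search/?yql=<same grouping query as above, URL-encoded>&timeout=60s&ranking=unranked&hits=0' | python -m json.tool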
The nodes are AWS i3.4xlarge boxes (16 cores, 120 GB).
I might be missing something silly.