
I am new to the ELK stack and am running elasticsearch version 1.4.4, logstash version 1.4.2, and kibana version 4. I can pull a csv file into elasticsearch with logstash and display it in kibana.

When a date from the file is displayed, the value is split apart as if the dashes it contains were delimiters (for example, a field value of 01-01-2015 shows up in kibana as three entries, 01, 01 and 2015, regardless of the display type). Kibana reports that this happens because it is an analyzed field.
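The splitting can be reproduced with the _analyze API (a minimal sketch, assuming the stock standard analyzer; nothing here is specific to this setup):

GET /_analyze?analyzer=standard&text=01-01-2015

This returns the three tokens 01, 01 and 2015, and those tokens are what the terms aggregation behind the kibana visualization buckets on.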

Kibana 4 has a feature for changing this to a non-analyzed field by entering json directly in the Visualization builder on the dashboard, so that the whole string is used instead of being split apart.

I have tried several formats, but this is the one that seems like it should work, since kibana accepts it as valid syntax:

{ "index" : "not_analyzed" }

However, when the change is applied, the dashboard does not change its structure, and kibana raises the following exception:

Visualize: Request to Elasticsearch failed: {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][0]: SearchParseException[[csvtest][0]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][0]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][1]: SearchParseException[[csvtest][1]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][1]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][2]: SearchParseException[[csvtest][2]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][2]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][3]: SearchParseException[[csvtest][3]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][3]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a 
VALUE_STRING in [2]: [index].]]; }{[ftpEMbcOTxu0Tdf0e8i-Ig][csvtest][4]: SearchParseException[[csvtest][4]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Failed to parse source [{\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"gte\":1420092000000,\"lte\":1451627999999}}}],\"must_not\":[]}}}},\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"Conn Dt\",\"size\":100,\"order\":{\"1\":\"desc\"},\"index\":\"not_analyzed\"},\"aggs\":{\"1\":{\"cardinality\":{\"field\":\"Area Cd\"}}}}}}]]]; nested: SearchParseException[[csvtest][4]: query[ConstantScore(BooleanFilter(+cache(@timestamp:[1420092000000 TO 1451627999999])))],from[-1],size[0]: Parse Failure [Unknown key for a VALUE_STRING in [2]: [index].]]; }]"}

As can be seen in it, the index: value was changed from analyzed to not_analyzed; the analyze_wildcard: true setting was also changed to false through the advanced object configuration, with the same result.


2 Answers


Try setting the date field to not_analyzed in the index mapping.

For example:

"<index name>": {
      "mappings": {
         "<Mapping type>": {
            "properties": {
               "City": {
                  "type": "string",
                  "index": "not_analyzed"
               },
               "Date": {
                  "type": "string",
                  "index": "not_analyzed"
               }
            }
         }
      }
}
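Note that the mapping of a field cannot be changed once documents have been indexed into it, so the mapping has to be in place before logstash loads the data, either by creating the index up front or by reindexing. A minimal sketch of the create-index request, assuming the index name csvtest from the error above and a placeholder type name logs, with the two fields from the failing aggregation:

PUT /csvtest
{
   "mappings": {
      "logs": {
         "properties": {
            "Conn Dt": { "type": "string", "index": "not_analyzed" },
            "Area Cd": { "type": "string", "index": "not_analyzed" }
         }
      }
   }
}

With the fields mapped as not_analyzed, the terms aggregation uses the whole 01-01-2015 value instead of its pieces, and no json override is needed in the Visualization builder.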
Answered 2016-05-18T13:06:48.640

I ran into a similar problem today, with a message like the following:

Parse Failure [Unknown key for a VALUE_STRING in [logTime]: [offset].]]; }]

I was sending a date histogram aggregation request against Elasticsearch 1.4.5 with the following payload:

// build a weekly date_histogram aggregation for each listed field
// ('body' is the Elasticsearch search request body being assembled)
['logTime'].forEach(function (field) {
    body.aggregations[field] = {
        date_histogram: {
            field: field,
            interval: 'week',
            time_zone: '+00:00',
            offset: '15h',
            min_doc_count: 0,
            extended_bounds: {
                min: 1440946800000,
                max: 1441551599999
            }
        }
    };
});

Note the use of the offset parameter in the date_histogram. This parameter was only introduced in Elasticsearch 1.5.0, so my 1.4.5 ES complained that the offset key was Unknown.

Replacing it with post_offset as follows solved the problem, although I also had to adjust the value of the time_zone parameter. As a side note, post_offset was deprecated and replaced by offset as of v1.5.

// same aggregation, rewritten for ES < 1.5: post_offset plus an adjusted time_zone
['logTime'].forEach(function (field) {
    body.aggregations[field] = {
        date_histogram: {
            field: field,
            interval: 'week',
            time_zone: '+09:00',
            post_offset: '-9h',
            min_doc_count: 0,
            extended_bounds: {
                min: 1440946800000,
                max: 1441551599999
            }
        }
    };
});
Answered 2015-09-03T05:49:05.210