
I'm trying to get user-submitted queries like "Joe Frankle" and "Joe Frankle's" to match the original text "Joe Frankle's". Right now we're indexing the field this text lives in with (Tire/Ruby format):

{ :type => 'string', :analyzer => 'snowball' }

并搜索:

query { string downcased_query, :default_operator => 'AND' }

I tried this without success:

create :settings => {
  :analysis => {
    :char_filter => {
      :remove_accents => {
        :type => "mapping",
        :mappings => ["`=>", "'=>"]
      }
    },
    :analyzer => {
      :myanalyzer => {
        :type => 'custom',
        :tokenizer => 'standard',
        :char_filter => ['remove_accents'],
        :filter => ['standard', 'lowercase', 'stop', 'snowball', 'ngram']
      }
    },
    :default => {
      :type => 'myanalyzer'
    }
  }
},

3 Answers


There are two official ways of handling possessive apostrophes:

1) Use the "possessive_english" stemmer as described in the ES docs: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-stemmer-tokenfilter.html

Example:

{
  "index" : {
    "analysis" : {
        "analyzer" : {
            "my_analyzer" : {
                "tokenizer" : "standard",
                "filter" : ["standard", "lowercase", "my_stemmer"]
            }
        },
        "filter" : {
            "my_stemmer" : {
                "type" : "stemmer",
                "name" : "possessive_english"
            }
        }
    }
  }
}

You can use other stemmers or snowball in addition to the "possessive_english" filter if you like. This should work, but note the code above is untested.
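To see what effect the "possessive_english" filter has on tokens, here is an illustrative Ruby sketch of the transformation. This is not Elasticsearch's actual implementation, just a simulation of the rule it applies: a trailing "'s" is stripped from each token (ES also handles the Unicode apostrophe variants).

```ruby
# Simulation of the possessive_english token filter (illustrative only):
# strip a trailing "'s" from each token.
def strip_possessive(token)
  token.sub(/'s\z/, '')
end

tokens = ["joe", "frankle's"]
tokens.map { |t| strip_possessive(t) }
# => ["joe", "frankle"]
```

With the possessive suffix removed at index time and at query time, both "Joe Frankle" and "Joe Frankle's" reduce to the same tokens.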

2) Use the "word_delimiter" filter:

{
  "index" : {
    "analysis" : {
        "analyzer" : {
            "my_analyzer" : {
                "tokenizer" : "standard",
                "filter" : ["standard", "lowercase", "my_word_delimiter"]
            }
        },
        "filter" : {
            "my_word_delimiter" : {
                "type" : "word_delimiter",
                "preserve_original": "true"
            }
        }
    }
  }
}

Works for me :-) ES docs: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-word-delimiter-tokenfilter.html

Both will cut off "'s".
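As a rough sketch of why option 2 works (again, not ES internals, just an illustration of the behavior): "word_delimiter" splits tokens on intra-word punctuation, and "preserve_original" keeps the unsplit token as well, so both the exact form and its parts are indexed.

```ruby
# Illustrative simulation of the word_delimiter filter with
# preserve_original: split a token on non-alphanumeric characters
# and keep the original token alongside the parts.
def word_delimiter(token, preserve_original: true)
  parts = token.split(/[^[:alnum:]]+/).reject(&:empty?)
  if preserve_original && parts != [token]
    [token] + parts
  else
    parts
  end
end

word_delimiter("frankle's")
# => ["frankle's", "frankle", "s"]
```

A query for "frankle" then matches because "frankle" is one of the indexed tokens, while "frankle's" still matches the preserved original.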

Answered 2014-05-10T19:35:15.840

I ran into a similar problem: the snowball analyzer alone didn't work for me, though I'm not sure whether it should have. Here's what I used:

properties: {
  name: {
    boost: 10,
    type:  'multi_field',
    fields: {
      name:      { type: 'string', index: 'analyzed', analyzer: 'title_analyzer' },
      untouched: { type: 'string', index: 'not_analyzed' }
    }
  }
}

analysis: {
  char_filter: {
    remove_accents: {
      type: "mapping",
      mappings: ["`=>", "'=>"]
    }
  },
  filter: {},
  analyzer: {
    title_analyzer: {
      type: 'custom',
      tokenizer: 'standard',
      char_filter: ['remove_accents'],
    }
  }
}
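The key piece here is the "mapping" char filter: the mappings ["`=>", "'=>"] rewrite the raw input before tokenization, mapping backticks and apostrophes to nothing (i.e. deleting them). The following Ruby sketch illustrates that rewrite (it is not ES code):

```ruby
# Illustrative simulation of the "mapping" char filter with
# mappings ["`=>", "'=>"]: delete backticks and apostrophes
# from the text before it reaches the tokenizer.
MAPPINGS = { "`" => "", "'" => "" }

def remove_accents(text)
  text.gsub(/[`']/) { |ch| MAPPINGS[ch] }
end

remove_accents("Joe Frankle's")
# => "Joe Frankles"
```

Note that this indexes "frankles" (apostrophe deleted), so a query for "Joe Frankle's" run through the same analyzer matches, but the bare form "frankle" will not, since no stemming filter is applied in this analyzer.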

The Analyze tool in the index admin UI is also great when working with analyzers.

Answered 2013-04-25T18:09:13.073

It looks like your query is searching the _all field, but your analyzer is only applied to a single field. To enable this for the _all field, just make snowball your default analyzer.
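For example, a settings fragment along these lines (an untested sketch in the same format as the answers above) would make snowball the default analyzer, so the _all field is analyzed with it too:

```json
{
  "index" : {
    "analysis" : {
      "analyzer" : {
        "default" : {
          "type" : "snowball"
        }
      }
    }
  }
}
```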

Answered 2013-04-25T12:45:06.743