
I'm currently developing a custom analyzer for a Mahout clustering project. Since Mahout 0.8 updated Lucene to 4.3, I can't generate tokenized document files or a SequenceFile from the book's outdated examples. The following code is my modification of the sample code from Mahout in Action; however, it gives me an IllegalStateException.

public class MyAnalyzer extends Analyzer {

private final Pattern alphabets = Pattern.compile("[a-z]+");
Version version = Version.LUCENE_43;

@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new StandardTokenizer(version, reader);
    TokenStream filter = new StandardFilter(version, source);

    filter = new LowerCaseFilter(version, filter);
    filter = new StopFilter(version, filter, StandardAnalyzer.STOP_WORDS_SET);

    CharTermAttribute termAtt = (CharTermAttribute)filter.addAttribute(CharTermAttribute.class);
    StringBuilder buf = new StringBuilder();

    try {

        filter.reset();
        while(filter.incrementToken()){
            if(termAtt.length()>10){
                continue;
            }
            String word = new String(termAtt.buffer(), 0, termAtt.length());
            Matcher matcher = alphabets.matcher(word);
            if(matcher.matches()){
                buf.append(word).append(" ");
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    source = new WhitespaceTokenizer(version, new StringReader(buf.toString()));

    return new TokenStreamComponents(source, filter);

}

}


1 Answer


It's not entirely clear why you're getting an IllegalStateException, but there are a few likely possibilities. Normally, your Analyzer builds a stack of filters on top of a Tokenizer. You do that, but then you create another Tokenizer and pass that back instead, so the filters you return have no real relationship to the Tokenizer you return. Also, the filter stack you built has already been run to exhaustion by the time you return it, so a reset() is something you could try, I suppose.

The main problem, though, is that createComponents is not a good place to implement your parsing logic. It is where you set up your Tokenizer and filter stack. It would make much more sense to implement the custom filtering logic in a filter of your own, extending TokenStream (or AttributeSource, or something similar).
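As a rough sketch of that approach (the class name AlphaOnlyFilter is made up here, and the length and pattern criteria are copied from the question's loop), a custom filter might look like this:

```java
import java.io.IOException;
import java.util.regex.Pattern;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical example, not a stock Lucene class: keeps only tokens that
// are at most 10 chars long and consist entirely of the letters a-z.
public final class AlphaOnlyFilter extends TokenFilter {
    private static final Pattern ALPHABETS = Pattern.compile("[a-z]+");
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

    public AlphaOnlyFilter(TokenStream in) {
        super(in);
    }

    @Override
    public boolean incrementToken() throws IOException {
        while (input.incrementToken()) {
            // CharTermAttribute is a CharSequence, so it can be matched directly.
            if (termAtt.length() <= 10 && ALPHABETS.matcher(termAtt).matches()) {
                return true;  // emit this token
            }
            // otherwise drop it and examine the next one
        }
        return false;  // end of stream
    }
}
```

createComponents then only wires it into the stack, e.g. filter = new AlphaOnlyFilter(filter);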

That said, I think the filtering you're looking for is mostly already implemented. Note, though, that PatternReplaceCharFilter is a CharFilter: it wraps the Reader before tokenization, so it cannot be stacked on a TokenStream. Its token-level counterpart, PatternReplaceFilter, can:

private final Pattern nonAlpha = Pattern.compile(".*[^a-z].*");

@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new StandardTokenizer(version, reader);
    TokenStream filter = new StandardFilter(version, source);
    filter = new LowerCaseFilter(version, filter);
    filter = new StopFilter(version, filter, StandardAnalyzer.STOP_WORDS_SET);
    // Blank out any token containing a non-alphabetic character.
    filter = new PatternReplaceFilter(filter, nonAlpha, "", true);
    return new TokenStreamComponents(source, filter);
}
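To sanity-check the pattern itself (plain Java, independent of Lucene):

```java
import java.util.regex.Pattern;

public class PatternCheck {
    public static void main(String[] args) {
        Pattern nonAlpha = Pattern.compile(".*[^a-z].*");
        // A token matches (and would be blanked) iff it contains any char outside a-z.
        System.out.println(nonAlpha.matcher("hello").matches());   // false: pure a-z, kept
        System.out.println(nonAlpha.matcher("hello42").matches()); // true: contains digits
        System.out.println(nonAlpha.matcher("Hello").matches());   // true: uppercase char
    }
}
```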

Or something even simpler like this might work:

@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // LowerCaseTokenizer splits on non-letters and lowercases,
    // so only alphabetic tokens come out of it in the first place.
    Tokenizer source = new LowerCaseTokenizer(version, reader);
    TokenStream filter = new StopFilter(version, source, StandardAnalyzer.STOP_WORDS_SET);
    return new TokenStreamComponents(source, filter);
}
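For completeness, whichever variant you choose, the stream must be consumed following the reset()/incrementToken()/end()/close() contract; skipping reset() is the usual source of an IllegalStateException in Lucene 4.x. A sketch (the field name and input text are placeholders):

```java
Analyzer analyzer = new MyAnalyzer();
TokenStream stream = analyzer.tokenStream("text", new StringReader("Some Input Text"));
CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
stream.reset();                  // mandatory before the first incrementToken()
while (stream.incrementToken()) {
    System.out.println(termAtt.toString());
}
stream.end();
stream.close();
```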
Answered 2013-10-16T23:51:19.930