6

I am having a problem with Lucene term vector offsets: when I analyze a field with my custom analyzer it produces invalid offsets for the term vector, but with the standard analyzer it works fine. Here is my analyzer code:

public class AttachmentNameAnalyzer extends Analyzer {
    private boolean stemmTokens;
    private String name;

    public AttachmentNameAnalyzer(boolean stemmTokens, String name) {
        super();
        this.stemmTokens    = stemmTokens;
        this.name           = name;
    }

    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream stream = new AttachmentNameTokenizer(reader);
        if (stemmTokens)
            stream = new SnowballFilter(stream, name);
        return stream;
    }

    @Override
    public TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException {
        TokenStream stream = (TokenStream) getPreviousTokenStream();

        if (stream == null) {
            stream = new AttachmentNameTokenizer(reader);
            if (stemmTokens)
                stream = new SnowballFilter(stream, name);
            setPreviousTokenStream(stream);
        } else if (stream instanceof Tokenizer) {
            ( (Tokenizer) stream ).reset(reader);
        }

        return stream;
    }
}
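
For context, the field is indexed with term vectors along these lines (a rough sketch with made-up names, not my exact indexing code):

// rough sketch of the indexing side; the field name, sample value, and Version are assumptions
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_31,
        new AttachmentNameAnalyzer(true, "English"));
IndexWriter writer = new IndexWriter(new RAMDirectory(), config);

Document doc = new Document();
doc.add(new Field("attachmentName", "some attachment-name_value.pdf",
        Field.Store.YES, Field.Index.ANALYZED,
        Field.TermVector.WITH_POSITIONS_OFFSETS)); // the offsets stored in the term vector come from the analyzer
writer.addDocument(doc);
writer.close();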

What is wrong with this? Help needed.


2 Answers

0

Which version of Lucene are you using? I am looking at the superclass code across the 3.x branch, and the behavior changes from version to version.

You may want to check the code in public final boolean incrementToken() that calculates the offset.
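
For reference, the offset bookkeeping in a hand-rolled tokenizer usually looks something like the sketch below. This is purely hypothetical, since the source of AttachmentNameTokenizer isn't posted, and it assumes Lucene 3.1+; the point is that both the per-token offsets and any per-stream counters have to be reset whenever the reader changes.

import java.io.IOException;
import java.io.Reader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

// Hypothetical sketch of a delimiter-based tokenizer; not the actual AttachmentNameTokenizer.
public final class DelimiterTokenizerSketch extends Tokenizer {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
    private int pos = 0; // characters consumed from the current reader

    public DelimiterTokenizerSketch(Reader input) {
        super(input);
    }

    private boolean isTokenChar(int c) {
        return c != ',' && c != '.' && c != '-' && c != '_' && c != ' ';
    }

    @Override
    public boolean incrementToken() throws IOException {
        clearAttributes();
        final StringBuilder buffer = new StringBuilder();
        int start = -1;
        int c;
        while ((c = input.read()) != -1) {
            pos++;
            if (isTokenChar(c)) {
                if (start == -1) {
                    start = pos - 1;   // offset of the token's first character
                }
                buffer.append((char) c);
            } else if (start != -1) {
                break;                 // a delimiter ends the current token
            }
        }
        if (start == -1) {
            return false;              // reader exhausted, no more tokens
        }
        termAtt.setEmpty().append(buffer);
        // If 'pos' (and therefore 'start') is not reset between documents,
        // offsets keep growing across fields -- exactly the symptom described above.
        offsetAtt.setOffset(correctOffset(start), correctOffset(start + buffer.length()));
        return true;
    }

    @Override
    public void reset(Reader input) throws IOException {
        super.reset(input);
        pos = 0; // forgetting this is a classic source of "shifted" offsets
    }
}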

I also see this:

/**
 * <p>
 * As of Lucene 3.1 the char based API ({@link #isTokenChar(char)} and
 * {@link #normalize(char)}) has been deprecated in favor of a Unicode 4.0
 * compatible int based API to support codepoints instead of UTF-16 code
 * units. Subclasses of {@link CharTokenizer} must not override the char based
 * methods if a {@link Version} >= 3.1 is passed to the constructor.
 * <p>
 * <p>
 * NOTE: This method will be marked <i>abstract</i> in Lucene 4.0.
 * </p>
 */

As an aside, you could rewrite your switch statement like this:

@Override
protected boolean isTokenChar(int c) {
    switch(c)
    {
        case ',': case '.':
        case '-': case '_':
        case ' ':
            return false;
        default:
            return true;
    }
}
answered 2011-06-11T15:21:10.717
0

The problem was in the analyzer, whose code I posted earlier: the token stream actually needs to be reset for every new text entry that is tokenized.

public TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException {
    TokenStream stream = (TokenStream) getPreviousTokenStream();

    if (stream == null) {
        stream = new AttachmentNameTokenizer(reader);
        if (stemmTokens)
            stream = new SnowballFilter(stream, name);
        setPreviousTokenStream(stream); // ---------------> problem was here
    } else if (stream instanceof Tokenizer) {
        ( (Tokenizer) stream ).reset(reader);
    }

    return stream;
}

Every time I set the previous token stream, the next incoming text field, which has to be tokenized on its own, would start at the end offset of the last token stream, which made the term vector offsets wrong for the new stream. Now it works fine like this:

public TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException {
    TokenStream stream = (TokenStream) getPreviousTokenStream();

    if (stream == null) {
        stream = new AttachmentNameTokenizer(reader);
        if (stemmTokens)
            stream = new SnowballFilter(stream, name);
    } else if (stream instanceof Tokenizer) {
        ( (Tokenizer) stream ).reset(reader);
    }

    return stream;
}
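
A quick way to confirm the fix is to tokenize two field values back to back with the same analyzer instance and print the offsets, which should now start from 0 for each value. This is just a small sketch assuming Lucene 3.1+; the field name and sample strings are made up:

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        // "English" is assumed to be the Snowball stemmer name the analyzer expects
        Analyzer analyzer = new AttachmentNameAnalyzer(true, "English");
        for (String text : new String[] { "first_attachment name.pdf", "second-one.doc" }) {
            TokenStream stream = analyzer.reusableTokenStream("attachmentName", new StringReader(text));
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                // with the fix, startOffset()/endOffset() restart at 0 for every field value
                System.out.println(term + " [" + offset.startOffset() + "-" + offset.endOffset() + "]");
            }
            stream.end();
            stream.close();
        }
    }
}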
answered 2011-07-04T08:00:20.347