
I am trying to use Lucene to tokenize a txt file and remove stop words from it. I have this:

public String removeStopWords(String string) throws IOException {

    Set<String> stopWords = new HashSet<String>();
    stopWords.add("a");
    stopWords.add("an");
    stopWords.add("I");
    stopWords.add("the");

    TokenStream tokenStream = new StandardTokenizer(Version.LUCENE_43, new StringReader(string));
    tokenStream = new StopFilter(Version.LUCENE_43, tokenStream, stopWords);

    StringBuilder sb = new StringBuilder();

    CharTermAttribute token = tokenStream.getAttribute(CharTermAttribute.class);
    while (tokenStream.incrementToken()) {
        if (sb.length() > 0) {
            sb.append(" ");
        }
        sb.append(token.toString());
        System.out.println(sb);
    }
    return sb.toString();
}

My main looks like this:

    String file = "..../datatest.txt";

    TestFileReader fr = new TestFileReader();
    fr.imports(file);
    System.out.println(fr.content);

    String text = fr.content;

    Stopwords stopwords = new Stopwords();
    stopwords.removeStopWords(text);
    System.out.println(stopwords.removeStopWords(text));

This gives me an error, but I can't figure out why.


3 Answers


I had the same problem. To remove stop words with Lucene you can either use its default stop set via the method EnglishAnalyzer.getDefaultStopSet(), or you can create your own custom stop word list.

The code below shows a corrected version of your removeStopWords():

public static String removeStopWords(String textFile) throws Exception {
    CharArraySet stopWords = EnglishAnalyzer.getDefaultStopSet();
    TokenStream tokenStream = new StandardTokenizer(Version.LUCENE_48, new StringReader(textFile.trim()));

    tokenStream = new StopFilter(Version.LUCENE_48, tokenStream, stopWords);
    StringBuilder sb = new StringBuilder();
    CharTermAttribute charTermAttribute = tokenStream.addAttribute(CharTermAttribute.class);
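    // reset() must be called before the first incrementToken() (TokenStream contract); it was missing in the question's code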
    tokenStream.reset();
    while (tokenStream.incrementToken()) {
        String term = charTermAttribute.toString();
        sb.append(term + " ");
    }
    return sb.toString();
}
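
A quick usage sketch of the method above (the input string is only an example; the method declares throws Exception, and since the chain has no LowerCaseFilter the case-sensitive default stop set only removes lower-case matches):

System.out.println(removeStopWords("The cat sat on the mat")); // drops the lower-case "the" and "on", keeps the capitalized "The"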

To use a custom stop word list, use the following:

//CharArraySet stopWords = EnglishAnalyzer.getDefaultStopSet(); //this is Lucene's default stop set
final List<String> stop_Words = Arrays.asList("fox", "the");
final CharArraySet stopSet = new CharArraySet(Version.LUCENE_48, stop_Words, true);
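
For example (a minimal sketch against the same Lucene 4.8 API as above), the custom stopSet is simply passed to the StopFilter in place of the default set:

TokenStream tokenStream = new StandardTokenizer(Version.LUCENE_48, new StringReader("the quick brown fox"));
tokenStream = new StopFilter(Version.LUCENE_48, tokenStream, stopSet); // "the" and "fox" are filtered out, leaving "quick" and "brown"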
answered 2014-05-16T15:54:15.777

You can try calling tokenStream.reset() before calling tokenStream.incrementToken().
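
A minimal sketch of that pattern (same TokenStream API as in the question):

tokenStream.reset();                      // required by the TokenStream contract before the first incrementToken()
while (tokenStream.incrementToken()) {
    // consume tokens here
}
tokenStream.end();
tokenStream.close();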

answered 2014-03-02T07:11:06.597

Lucene has changed since then, so the suggested answer (posted in 2014) no longer compiles. Here is a slightly modified version of @user1050755's code that works with Lucene 8.6.3 and Java 8:

final String text = "This is a short test!";
final List<String> stopWords = Arrays.asList("short","test"); //Filters both words
final CharArraySet stopSet = new CharArraySet(stopWords, true);

try {
    ArrayList<String> remaining = new ArrayList<String>();

    Analyzer analyzer = new StandardAnalyzer(stopSet); // Filters stop words in the given "stopSet"
    //Analyzer analyzer = new StandardAnalyzer(); // Only filters punctuation marks out of the box, you have to provide your own stop words!
    //Analyzer analyzer = new EnglishAnalyzer(); // Filters the default English stop words (see link below)
    //Analyzer analyzer = new EnglishAnalyzer(stopSet); // Only uses the given "stopSet" but also runs a stemmer, so the result might not look like what you expected.
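    // note: CONTENTS (used below) is assumed to be a field-name String constant defined elsewhere, e.g. "contents"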
    
    TokenStream tokenStream = analyzer.tokenStream(CONTENTS, new StringReader(text));
    CharTermAttribute term = tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.reset();

    while(tokenStream.incrementToken()) {
        System.out.print("[" + term.toString() + "] ");
        remaining.add(term.toString());
    }

    tokenStream.close();
    analyzer.close();
} catch (IOException e) {
    e.printStackTrace();
}

You can find the default stop words of the EnglishAnalyzer on the official GitHub (here).

Printed results:

  • StandardAnalyzer(stopSet): [this] [is] [a]
  • StandardAnalyzer(): [this] [is] [a] [short] [test]
  • EnglishAnalyzer(): [this] [short] [test]
  • EnglishAnalyzer(stopSet): [thi] [is] [a] (no, that's not a typo, it really does output thi!)

It is possible to combine the default stop words and your own, but in that case it is best to use a CustomAnalyzer (check out this answer).
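
If you want to stay with the analyzers shown here, a minimal sketch of combining the two sets with the same Lucene 8.x API (CharArraySet.copy plus the StandardAnalyzer constructor used above) could look like this; the extra words are only examples:

// copy the (unmodifiable) default English stop set and add custom words on top
CharArraySet combined = CharArraySet.copy(EnglishAnalyzer.getDefaultStopSet());
combined.addAll(Arrays.asList("short", "test"));
Analyzer analyzer = new StandardAnalyzer(combined); // filters both the default and the custom words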

answered 2020-10-22T15:37:03.073