
I want to do something very simple: given a string that contains pronouns, I want to resolve them.

For example, I want to turn the sentence "Mary has a little lamb. She is very cute." into "Mary has a little lamb. Mary is very cute."

I have tried Stanford CoreNLP, but I can't seem to get the parser to start. I have imported all the included jars into my Eclipse project, and I have allocated 3GB to the JVM (-Xmx3g).

The error is very awkward:

Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;

I don't understand where that L comes from, and I think it is the root of my problem... it's very strange. I tried looking into the source files, but there is no reference to it there.

The code:

import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefGraphAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.IntTuple;
import edu.stanford.nlp.util.Pair;
import edu.stanford.nlp.util.Timing;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import java.util.Properties;

public class Coref {

/**
 * @param args the command line arguments
 */
public static void main(String[] args) throws IOException, ClassNotFoundException {
    // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = "Mary has a little lamb. She is very cute."; // Add your text here!

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);

    for(CoreMap sentence: sentences) {
      // traversing the words in the current sentence
      // a CoreLabel is a CoreMap with additional token-specific methods
      for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);       
      }

      // this is the parse tree of the current sentence
      Tree tree = sentence.get(TreeAnnotation.class);
      System.out.println(tree);

      // this is the Stanford dependency graph of the current sentence
      SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    }

    // This is the coreference link graph
    // Each chain stores a set of mentions that link to each other,
    // along with a method for getting the most representative mention
    // Both sentence and token offsets start at 1!
    Map<Integer, CorefChain> graph = 
      document.get(CorefChainAnnotation.class);
    System.out.println(graph);
  }
}

Full stack trace:

Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Loading POS Model [edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger] ... Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
done [2.2 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
Adding annotator parse
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
	at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
	at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
	at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
	at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
	at edu.stanford.nlp.


1 Answer


Yes, the L is a weird Sun thing that goes back to Java 1.0.
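
In those internal JVM descriptors, L...; marks an object type, [ marks an array, and the type after the closing parenthesis is the return type. Decoded, the descriptor in your error is simply spelling out the Java signature the parse annotator is looking for (the parameter names below are only for illustration):

// (Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
// decodes to: takes a String and a String[], returns a LexicalizedParser, i.e.
public static LexicalizedParser loadModel(String parserFileOrUrl, String... extraFlags)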

LexicalizedParser.loadModel(String, String...) is a new method added to the parser, and it isn't being found at runtime. I suspect this means that another, older version of the parser is on your classpath and is being picked up instead.
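
One way to confirm that is to ask the JVM directly. Here is a small diagnostic sketch (the class name WhichParser is made up for this example): it prints which jar LexicalizedParser was actually loaded from, and whether that version has the loadModel(String, String...) overload. Run it with exactly the same classpath your Eclipse project uses; a NoSuchMethodException points at a stale jar.

import edu.stanford.nlp.parser.lexparser.LexicalizedParser;

public class WhichParser {
  public static void main(String[] args) throws Exception {
    // Which jar (or directory) did this class actually come from?
    System.out.println(LexicalizedParser.class.getProtectionDomain()
        .getCodeSource().getLocation());
    // Does this version have the new overload? Throws NoSuchMethodException if not.
    System.out.println(LexicalizedParser.class
        .getMethod("loadModel", String.class, String[].class));
  }
}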

Try this: in a shell outside of any IDE, give these commands (supplying the path to stanford-corenlp appropriately, and changing the : to ; if you are on Windows):

javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref

The parser loads and your code runs correctly for me - it just needs some print statements added so you can see what it has done :-).
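
Since the end goal was to rewrite pronouns with their antecedents, here is a rough sketch of what those print statements could grow into. It assumes a CoreNLP version where CorefChain exposes getRepresentativeMention() and getMentionsInTextualOrder() (older 2012-era jars may differ), and suggestReplacements is just a made-up helper name; treat it as a starting point rather than the canonical recipe. It can be dropped into the Coref class as-is, since the needed imports are already at the top of the file.

// Sketch: for each coreference chain, list every pronoun mention and the
// representative mention it could be replaced with.
// Call suggestReplacements(document) at the end of main(), after pipeline.annotate(document).
// Note: sentNum and startIndex in CorefMention are 1-based.
static void suggestReplacements(Annotation document) {
  List<CoreMap> sentences = document.get(SentencesAnnotation.class);
  Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
  for (CorefChain chain : graph.values()) {
    CorefChain.CorefMention representative = chain.getRepresentativeMention();
    for (CorefChain.CorefMention mention : chain.getMentionsInTextualOrder()) {
      if (mention == representative) continue;  // skip the representative itself
      List<CoreLabel> tokens =
          sentences.get(mention.sentNum - 1).get(TokensAnnotation.class);
      String pos = tokens.get(mention.startIndex - 1).get(PartOfSpeechAnnotation.class);
      if (pos.startsWith("PRP")) {  // personal or possessive pronoun
        System.out.println("Replace \"" + mention.mentionSpan + "\" in sentence "
            + mention.sentNum + " with \"" + representative.mentionSpan + "\"");
      }
    }
  }
}

For the example text, this should suggest replacing "She" in the second sentence with something like "Mary"; doing the actual string substitution is then straightforward.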

Answered 2012-05-23T18:56:07.057