
I want to parse a list of sentences with the Stanford NLP parser. My list is an ArrayList; how can I parse all of them with LexicalizedParser?

I want to get output of this form from each sentence:

Tree parse =  (Tree) lp1.apply(sentence);

3 Answers


Although one can dig through the documentation, I am going to provide the code here on SO, especially in case the links move and/or die. This particular answer uses the whole pipeline. If you are not interested in the whole pipeline, I will provide an alternative answer in just a second.

The example below is the complete way of using the Stanford pipeline. If you are not interested in coreference resolution, remove dcoref from the third line of code. So in the example below, the pipeline does the sentence splitting for you (the ssplit annotator) if you just feed it a body of text (the text variable). Have only one sentence? Well, that is OK, you can feed that in as the text variable.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import edu.stanford.nlp.dcoref.CorefChain;
    import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
    import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.pipeline.Annotation;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;
    import edu.stanford.nlp.semgraph.SemanticGraph;
    import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
    import edu.stanford.nlp.trees.Tree;
    import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
    import edu.stanford.nlp.util.CoreMap;

    // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = ... // Add your text here!

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);

    for(CoreMap sentence: sentences) {
      // traversing the words in the current sentence
      // a CoreLabel is a CoreMap with additional token-specific methods
      for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);       
      }

      // this is the parse tree of the current sentence
      Tree tree = sentence.get(TreeAnnotation.class);

      // this is the Stanford dependency graph of the current sentence
      SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    }

    // This is the coreference link graph
    // Each chain stores a set of mentions that link to each other,
    // along with a method for getting the most representative mention
    // Both sentence and token offsets start at 1!
    Map<Integer, CorefChain> graph = 
      document.get(CorefChainAnnotation.class);
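
If your input is already an ArrayList of sentence strings (as in the question), a minimal sketch is to run the same pipeline on each string and collect the parse trees; here mySentences is a hypothetical variable standing in for your list, and pipeline is the StanfordCoreNLP object configured above:

    // assumes: StanfordCoreNLP pipeline = ... (configured as above)
    //          List<String> mySentences = ... // your ArrayList of sentences
    List<Tree> trees = new ArrayList<Tree>();
    for (String s : mySentences) {
      Annotation doc = new Annotation(s);
      pipeline.annotate(doc);
      // each string may itself split into one or more sentences
      for (CoreMap sent : doc.get(SentencesAnnotation.class)) {
        trees.add(sent.get(TreeAnnotation.class));
      }
    }

Alternatively, you can join the list into one text block and let ssplit do the splitting, as described above.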
Answered 2014-01-28T14:38:04.350

As promised, if you do not want to access the full Stanford pipeline (although I believe that is the recommended approach), you can work with the LexicalizedParser class directly. In that case, you would download the latest version of the Stanford Parser (whereas the other approach uses the CoreNLP tools). Make sure that, in addition to the parser jar, you have the model file for the parser you want to work with. Example code:

LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
String sentence = "It is a fine day today";
Tree parse = lp.parse(sentence);

Note that this works for version 3.3.1 of the parser.
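
To answer the original question with this approach, a sketch that loops over a list of sentence strings (sentences is a hypothetical ArrayList<String>, and the model file is assumed to be on disk):

    LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
    List<Tree> parses = new ArrayList<Tree>();
    for (String s : sentences) {
      // parse(String) tokenizes the string and parses it as a single sentence
      parses.add(lp.parse(s));
    }

Since the parser model is loaded only once outside the loop, this stays reasonably fast even for long lists.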

Answered 2014-01-28T18:12:08.787

Actually, the Stanford NLP documentation provides a sample of how to parse sentences.

You can find the documentation here.
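
For reference, the ParserDemo that ships with the parser reads raw text, splits it into sentences with DocumentPreprocessor, and parses each one; a condensed sketch (the file path "input.txt" is illustrative):

    LexicalizedParser lp = LexicalizedParser.loadModel("englishPCFG.ser.gz");
    // DocumentPreprocessor iterates over the file sentence by sentence
    for (List<HasWord> sentence : new DocumentPreprocessor("input.txt")) {
      Tree parse = lp.apply(sentence);
      parse.pennPrint();
    }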

Answered 2012-01-12T02:15:13.877