I am new to Java and to the Stanford NLP toolkit and am trying to use them for a project. Specifically, I am trying to use the Stanford CoreNLP toolkit to annotate text (from NetBeans, not from the command line), and I am working from the code provided at http://nlp.stanford.edu/software/corenlp.shtml#Usage (Using the Stanford CoreNLP API). My question is: can anyone tell me how to get the output into a file so that I can process it further?
I have already tried printing the dependency graphs and the sentences to the console, just to see what the content looks like, and that works. Basically, what I need is to return the annotated document so that I can call it from my main class and write out a text file (if that is possible). I have been looking through the Stanford CoreNLP API, but given my lack of experience I am not sure what the best way to return this kind of information is.
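What I had in mind is roughly a method shaped like the one below; annotateText is just a placeholder name of mine, and it assumes the pipeline is built exactly as in the snippet further down:

// hypothetical helper: run the pipeline and hand the annotated document
// back to the caller (my main class); "annotateText" and the "pipeline"
// field are just my own placeholder names
public Annotation annotateText(String text) {
    Annotation document = new Annotation(text);
    pipeline.annotate(document);   // pipeline built as in the snippet below
    return document;
}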
Here is the code:
// imports needed for the snippet (package names as in the CoreNLP distribution I downloaded)
import java.util.List;
import java.util.Map;
import java.util.Properties;

import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.*;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

// creates a StanfordCoreNLP object with POS tagging, lemmatization, NER, parsing, and coreference resolution
Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// read some text in the text variable
String text = "the quick fox jumps over the lazy dog";

// create an empty Annotation just with the given text
Annotation document = new Annotation(text);

// run all Annotators on this text
pipeline.annotate(document);

// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);

for (CoreMap sentence : sentences) {
    // traversing the words in the current sentence
    // a CoreLabel is a CoreMap with additional token-specific methods
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
    }

    // this is the parse tree of the current sentence
    Tree tree = sentence.get(TreeAnnotation.class);

    // this is the Stanford dependency graph of the current sentence
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
}

// This is the coreference link graph
// Each chain stores a set of mentions that link to each other,
// along with a method for getting the most representative mention
// Both sentence and token offsets start at 1!
Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
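From browsing the API docs it looks like StanfordCoreNLP also has xmlPrint and prettyPrint methods that take an output stream or writer; if I understand them correctly, something along these lines is the kind of file output I am after (the file names are just examples, and the calls need java.io imports and IOException handling):

// rough sketch, continuing after the snippet above
pipeline.xmlPrint(document, new FileOutputStream("output.xml"));                      // XML dump of all annotations
pipeline.prettyPrint(document, new PrintWriter(new FileOutputStream("output.txt")));  // human-readable dump

Is that the intended way to do it, or is there a better approach for getting the annotations out into a file?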