
I am starting a CoreNLP server for the German models, which are downloaded as jars on the classpath, with the following command, but it does not output German tags or parses and only loads the English models:

 java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -props ./german.prop

Contents of german.prop:

annotators = tokenize, ssplit, pos, depparse, parse

tokenize.language = de

pos.model = edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger

ner.model = edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz
ner.applyNumericClassifiers = false
ner.useSUTime = false

parse.model = edu/stanford/nlp/models/lexparser/germanFactored.ser.gz
depparse.model = edu/stanford/nlp/models/parser/nndep/UD_German.gz
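
For reference, loading the same german.prop programmatically (without the server) does pick up the German models, as mentioned below. A minimal sketch, assuming the German model jars are on the classpath (the class name is just for illustration):

    import edu.stanford.nlp.pipeline.Annotation;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;

    import java.io.FileInputStream;
    import java.util.Properties;

    public class GermanPipelineCheck {
        public static void main(String[] args) throws Exception {
            // Load the same properties file that is passed to the server
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("german.prop")) {
                props.load(in);
            }

            // Building the pipeline directly loads the German models as expected
            StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

            Annotation doc = new Annotation("Meine Mutter ist aus Wuppertal");
            pipeline.annotate(doc);
            pipeline.prettyPrint(doc, System.out);
        }
    }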

The client command:

wget --post-data 'Meine Mutter ist aus Wuppertal' 'localhost:9000/?properties={"tokenize.whitespace":"true","annotators":"tokenize, ssplit, pos, depparse, parse","outputFormat":"text","tokenize.language":"de","pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger","depparse.model":"edu/stanford/nlp/models/parser/nndep/UD_German.gz","parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz"}' -O -
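
For comparison, here is a minimal sketch of the same POST request from Java, with the properties JSON URL-encoded so that quotes and spaces survive the query string (it assumes the server is running on localhost:9000; the class name is just for illustration):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class GermanServerClient {
        public static void main(String[] args) throws Exception {
            // Properties for the request; the model paths mirror german.prop above
            String propsJson = "{\"annotators\":\"tokenize, ssplit, pos, depparse, parse\","
                    + "\"tokenize.language\":\"de\","
                    + "\"pos.model\":\"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger\","
                    + "\"depparse.model\":\"edu/stanford/nlp/models/parser/nndep/UD_German.gz\","
                    + "\"parse.model\":\"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz\","
                    + "\"outputFormat\":\"text\"}";

            // URL-encode the JSON before putting it into the query string
            String url = "http://localhost:9000/?properties="
                    + URLEncoder.encode(propsJson, StandardCharsets.UTF_8.name());

            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write("Meine Mutter ist aus Wuppertal".getBytes(StandardCharsets.UTF_8));
            }

            // Print whatever the server returns
            try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
                while (sc.hasNextLine()) {
                    System.out.println(sc.nextLine());
                }
            }
        }
    }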

I get the following incorrect output (truncated here); the POS and NER tags clearly come from the English models:

 {"dep":"dep","governor":4,"governorGloss":"aus","dependent":5,"dependentGloss":"Wuppertal"}],"openie":[{"subject":"Wuppertal","subjectSpan":[4,5],"relation":"is ist aus of","relationSpan":[2,4],"object":"Meine Mutter","objectSpan":[0,2]}],"tokens":[{"index":1,"word":"Meine","originalText":"Meine","lemma":"Meine","characterOffsetBegin":1,"characterOffsetEnd":6,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":2,"word":"Mutter","originalText":"Mutter","lemma":"Mutter","characterOffsetBegin":7,"characterOffsetEnd":13,"pos":"NNP","ner":"PERSON","speaker":"PER0","before":" ","after":" "},{"index":3,"word":"ist","originalText":"ist","lemma":"ist","characterOffsetBegin":14,"characterOffsetEnd":17,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":4,"word":"aus","originalText":"aus","lemma":"aus","characterOffsetBegin":18,"characterOffsetEnd":21,"pos":"NN","ner":"O","speaker":"PER0","before":" ","after":" "},{"index":5,"word":"Wuppertal","originalText":"Wuppertal","lemma":"Wuppertal","characterOffsetBegin":22,"characterOffsetEnd":31,"pos":"NNP","ner":"LOCATI100%[==========================================================================>] 2,

In the server log I can see that it loads the English models, even though it lists the German models at startup:

pos.model=edu/stanford/nlp/models/pos-tagger/ge...
parse.model=edu/stanford/nlp/models/lexparser/ger...
tokenize.language=de
depparse.model=edu/stanford/nlp/models/parser/nndep/...
annotators=tokenize, ssplit, pos, depparse, parse
Starting server on port 9000 with timeout of 5000 milliseconds.
StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[/203.:61563] API call w/annotators tokenize,ssplit,pos,depparse
Die Katze liegt auf der Matte.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [1.5 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
PreComputed 100000, Elapsed Time: 1.396 (s)

The following question about the same error with the French models points to the same issue, but even after following it, the server case is still not fixed. I am able to get the correct output without the server by just using the edu.stanford.nlp.pipeline.StanfordCoreNLP command; it is only the server command, edu.stanford.nlp.pipeline.StanfordCoreNLPServer, that defaults to English: French dependency parsing using CoreNLP


1 Answer


There were some issues with getting foreign-language content to work with the server.

It should work if you use the latest version of the code, available on our GitHub site.

The GitHub site is here: https://github.com/stanfordnlp/CoreNLP

That link has instructions for building a jar with the most recent version of the code.

I ran this command on some sample German text and it appeared to work fine:

wget --post-data '<sample german text>' 'localhost:9000/?properties={"pipelineLanguage":"german","annotators":"tokenize,ssplit,pos,ner,parse", "parse.model":"edu/stanford/nlp/models/lexparser/germanFactored.ser.gz","tokenize.language":"de","pos.model":"edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger", "ner.model":"edu/stanford/nlp/models/ner/german.hgc_175m_600.crf.ser.gz", "ner.applyNumericClassifiers":"false", "ner.useSUTime":"false"}' -O -

I should note that the neural-network German dependency parser is completely broken at the moment; we are working on fixing it as soon as possible, so you should use the German settings I have specified in that command.

More information about the server can be found here: http://stanfordnlp.github.io/CoreNLP/corenlp-server.html

Answered 2016-09-27T12:11:09.143