I'm getting an error while training word2vec on my own vocabulary, and I can't figure out why it happens.
Code:
from gensim.models import word2vec
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = word2vec.LineSentence('test_data')
model = word2vec.Word2Vec(sentences, size=20)
model.build_vocab(sentences,update=True)
model.train(sentences)
print model.most_similar(['course'])
It throws this error:
2017-08-27 16:50:04,590 : INFO : precomputing L2-norms of word weight vectors
Traceback (most recent call last):
File "tryword2vec.py", line 23, in <module>
print model.most_similar(['course'])
File "/usr/local/lib/python2.7/dist-packages/gensim/models/word2vec.py", line 1285, in most_similar
return self.wv.most_similar(positive, negative, topn, restrict_vocab, indexer)
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 97, in most_similar
raise KeyError("word '%s' not in vocabulary" % word)
KeyError: "word 'course' not in vocabulary"
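For reference, a quick way to see which tokens the trained model actually indexed (a minimal sketch that assumes the `model` object built by the script above, on the gensim 2.x / Python 2.7 setup shown in the traceback) is to print the learned vocabulary:

# assuming `model` from the script above; in gensim 2.x the learned vocabulary lives in model.wv.vocab
print sorted(model.wv.vocab.keys())   # every token that made it into the vocabulary
print 'course' in model.wv.vocab      # False here would explain the KeyError above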
test_data contains:
B.E is a course. M.Tech is a course. I am a course. B.Tech is a course. B.A is a course. Fashion designing is a course. Multimedia is a course. Mechanical engineering is a course. Computer science is a course. Electronics is a source. Engineering is a course. MBA is a course. BBA is a course.
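Since word2vec.LineSentence simply splits each line of the file on whitespace, it may also be worth checking the raw tokens it yields from test_data; a word with punctuation attached (for example 'course.') would be stored as a different token than 'course'. A minimal check, assuming the same file path as above:

from gensim.models import word2vec

# LineSentence yields one whitespace-split token list per line of test_data
for tokens in word2vec.LineSentence('test_data'):
    print tokens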
Any help is appreciated.