The link below shows a similar question, except that he fixed it by downloading the package, and I have already downloaded the package... Resource u'tokenizers/punkt/english.pickle' not found
The strange thing is that I can run it from the terminal without any errors, but I have a JS file that makes an AJAX call to this .py file, and when that call tries to execute it, it returns the error below. I don't know why.
Full error:
Resource u'tokenizers/punkt/english.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/var/www/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- u'
Traceback (most recent call last):
File "/var/www/CSCE-470-Anime-Recommender/py/app.py", line 40, in <module>
cl = NaiveBayesClassifier(Functions.classify(UserData))
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 192, in __init__
self.train_features = [(self.extract_features(d), c) for d, c in self.train_set]
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 169, in extract_features
return self.feature_extractor(text, self.train_set)
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 81, in basic_extractor
word_features = _get_words_from_dataset(train_set)
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 63, in _get_words_from_dataset
return set(all_words)
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 62, in <genexpr>
all_words = chain.from_iterable(tokenize(words) for words, _ in dataset)
File "/usr/local/lib/python2.7/dist-packages/textblob/classifiers.py", line 59, in tokenize
return word_tokenize(words, include_punc=False)
File "/usr/local/lib/python2.7/dist-packages/textblob/tokenizers.py", line 72, in word_tokenize
for sentence in sent_tokenize(text))
File "/usr/local/lib/python2.7/dist-packages/textblob/base.py", line 64, in itokenize
return (t for t in self.tokenize(text, *args, **kwargs))
File "/usr/local/lib/python2.7/dist-packages/textblob/decorators.py", line 38, in decorated
raise MissingCorpusError()
MissingCorpusError:
Looks like you are missing some required data for this feature.
To download the necessary data, simply run
python -m textblob.download_corpora
or use the NLTK downloader to download the missing data: http://nltk.org/data.html
If this doesn't fix the problem, file an issue at https://github.com/sloria/TextBlob/issues.
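Since the script works from the terminal but not via the AJAX call, I suspect the web server runs it as a different user with a different home directory and environment, so NLTK searches different `nltk_data` directories (note the first path it tried is `/var/www/nltk_data`). A minimal diagnostic sketch like this, dropped into the top of `app.py`, would show which user, home directory, and `NLTK_DATA` setting are in effect for each run (`NLTK_DATA` is an environment variable NLTK honors when locating data):

```python
import getpass
import os

# Which user is actually executing this script? Under Apache this is
# typically www-data, not my login user.
print("user:", getpass.getuser())

# NLTK derives one of its default search paths from the home directory
# (~/nltk_data), so a different home means a different search list.
print("home:", os.path.expanduser("~"))

# NLTK also honors the NLTK_DATA environment variable; the web-server
# process may not inherit the same environment as an interactive shell.
print("NLTK_DATA:", os.environ.get("NLTK_DATA", "not set"))
```

Comparing this output between the terminal run and the AJAX-triggered run should reveal whether the punkt data simply needs to be downloaded into a directory the web-server user can read (e.g. one of the paths listed in the error above).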