def word_feats(words):
    return dict([(word, True) for word in words])

for tweet in negTweets:
    words = re.findall(r"[\w']+|[.,!?;]", tweet) #splits the tweet into words
    negwords = [(word_feats(words), 'neg')] #tag the words with feature
    negfeats.append(negwords) #add the words to the feature list
for tweet in posTweets:
    words = re.findall(r"[\w']+|[.,!?;]", tweet)
    poswords = [(word_feats(words), 'pos')]
    posfeats.append(poswords)

negcutoff = len(negfeats)*3/4 #take 3/4 of the feature sets for training
poscutoff = len(posfeats)*3/4

trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff] #assemble the train set
testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]

classifier = NaiveBayesClassifier.train(trainfeats)
print 'accuracy:', nltk.classify.util.accuracy(classifier, testfeats)
classifier.show_most_informative_features()

When I run this code I get the following error...

File "C:\Python27\lib\nltk\classify\naivebayes.py", line 191, in train

for featureset, label in labeled_featuresets:

ValueError: need more than 1 value to unpack

The error comes from the classifier = NaiveBayesClassifier.train(trainfeats) line, and I don't know why. I have done something similar before, and my trainfeats seems to be in the same format it was then... an example of that format is listed below...

[[({'me': True, 'af': True, 'this': True, 'joy': True, 'high': True, 'hookah': True, 'got': True}, 'pos' )]]

What other value does my trainfeats need in order to create the classifier?


1 Answer


@Prune's comment is correct: your labeled_featuresets should be a sequence of pairs (two-element lists or tuples): the feature dictionary and the category for each data point. Instead, each element of your trainfeats is a one-element list containing that two-element tuple. Lose the square brackets in both feature-building loops and this part should work correctly. For example,

negwords = (word_feats(words), 'neg')
negfeats.append(negwords)
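
Putting the fix together, here is a minimal sketch of the corrected loops (assuming, as in the question, that negTweets and posTweets are lists of tweet strings); each appended entry is now a (feature_dict, label) pair, which is exactly what NaiveBayesClassifier.train() unpacks:

import re
from nltk.classify import NaiveBayesClassifier

def word_feats(words):
    return dict([(word, True) for word in words])

negfeats, posfeats = [], []
for tweet in negTweets:
    words = re.findall(r"[\w']+|[.,!?;]", tweet)
    negfeats.append((word_feats(words), 'neg'))  # a pair, not [pair]
for tweet in posTweets:
    words = re.findall(r"[\w']+|[.,!?;]", tweet)
    posfeats.append((word_feats(words), 'pos'))

negcutoff = len(negfeats) * 3 // 4  # 3/4 of each class for training
poscutoff = len(posfeats) * 3 // 4
trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]
testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]

classifier = NaiveBayesClassifier.train(trainfeats)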

Two more things: consider using nltk.word_tokenize() instead of doing the tokenization yourself, and you should randomize the order of your training data, e.g. with random.shuffle(trainfeats).
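
A short sketch of both suggestions (assuming trainfeats is built as above, the punkt tokenizer data has been fetched with nltk.download('punkt'), and the tweet string is just a made-up example):

import random
from nltk import word_tokenize
from nltk.classify import NaiveBayesClassifier

words = word_tokenize("got me this hookah high af, joy!")  # handles punctuation splitting for you
random.shuffle(trainfeats)  # in-place shuffle so 'neg' and 'pos' examples are interleaved
classifier = NaiveBayesClassifier.train(trainfeats)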

Answered on 2016-11-10T20:24:40.493