
I am trying to create a topic model from a dataset. The code correctly uses NMF to generate the requested number of topics from the parsed data, but it breaks when the corpus length = 20, as shown below:

20
[u'bell', u'closed', u'day', u'drinks', u'enjoy', u'food', u'good', u'great', u'll', u'new', u'nice', u'original', u'people', u'phoenix', u'place', u'rd', u'reopened', u'terrific', u'try', u'weekly']
Traceback (most recent call last):
  File "sklearnTfidf.py", line 238, in <module>
    trainTest()
  File "sklearnTfidf.py", line 185, in trainTest
    posDic += buildDictionary(pos_reviews)
  File "sklearnTfidf.py", line 143, in buildDictionary
    sortedDict = buildTFIDF(review)
  File "sklearnTfidf.py", line 110, in buildTFIDF
    nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 551, in fit
    self.fit_transform(X, **params)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 485, in fit_transform
    W, H = self._init(X)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 395, in _init
    W, H = _initialize_nmf(X, self.n_components_)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 116, in _initialize_nmf
    x, y = U[:, j], V[j, :]
IndexError: index 1 is out of bounds for axis 1 with size 1

I am still getting familiar with the sklearn toolset, so I accept this may well be a simple oversight on my part, since most of this code was adapted from various examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Create a dictionary of words from review
def buildDictionary(review) :
    buildTFIDF(review)
    #[unrelated code]


# Extract topic models from corpus
def buildTFIDF(corpus) :
    no_topics = 5
    no_features = 100
    no_top_words = 10
    tfidf_vectorizer = TfidfVectorizer(min_df=1, max_df=1.0, max_features=no_features, stop_words='english')
    tfidf = tfidf_vectorizer.fit_transform(corpus)
    tfidf_feature_names = tfidf_vectorizer.get_feature_names()

    print tfidf.getnnz()        # sanity checking
    print tfidf_feature_names   # sanity checking
    nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)

    display_topics(nmf, tfidf_feature_names, no_top_words)
    print ''


# Prints no_top_words for each feature
def display_topics(model, feature_names, no_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print "Topic %d:" %(topic_idx)
        print " ".join([feature_names[i]
                for i in topic.argsort()[:-no_top_words - 1:-1]])

What exactly is causing this index error, and how can I correct it?


1 Answer


I suggest you take a look at this answer: https://stackoverflow.com/a/43336816/8187340. The problem is the value of your decomposition.NMF(n_components) parameter. It must be less than or equal to the number of documents in your corpus.
Example: if dtm.shape returns (6, 6030), then no_topics <= 6.
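For reference, a minimal sketch of that guard (the one-review corpus and the min() cap below are illustrative assumptions, not the asker's actual data):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Illustrative single-review corpus -- a (1, n_terms) TF-IDF matrix is
# what triggers the IndexError when n_components is larger than 1.
corpus = [u'bell closed day drinks enjoy food good great new nice place']

tfidf_vectorizer = TfidfVectorizer(min_df=1, max_df=1.0, stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(corpus)

# Cap the topic count at the number of documents so the 'nndsvd'
# initialization has enough singular vectors to draw from.
no_topics = min(5, tfidf.shape[0])

nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)
print(nmf.components_.shape)   # (no_topics, n_terms)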

answered 2017-10-18T14:16:24.620