The gensim Dictionary object has a very nice filtering feature that can remove tokens appearing in fewer than a given number of documents. However, I want to remove tokens that appear exactly once in the corpus. Does anyone know a quick and easy way to do this?

4 Answers

You should probably include some reproducible code in your question; however, I'll use the documents from a previous post. We can achieve your goal without using gensim.

from collections import defaultdict
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents]

# word frequency
d=defaultdict(int)
for lister in texts:
    for item in lister:
        d[item]+=1

# remove words that appear only once
tokens=[key for key,value in d.items() if value>1]
texts = [[word for word in document if word in tokens] for document in texts]
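The corpus-wide count above can also be written more compactly with `collections.Counter`, which does the same thing in a single pass; a minimal sketch (same stoplist, shortened document list for brevity):

```python
from collections import Counter

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time"]

stoplist = set('for a of the and to in'.split())
texts = [[word for word in doc.lower().split() if word not in stoplist]
         for doc in documents]

# Count every token across the whole corpus in one pass.
counts = Counter(token for text in texts for token in text)

# Keep only tokens that occur more than once in the corpus.
texts = [[word for word in text if counts[word] > 1] for text in texts]
print(texts)  # [['computer'], ['computer']]
```

Only "computer" occurs in both documents, so it is the only token that survives the cutoff.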


To add some information, though: the gensim tutorial also covers a more memory-friendly technique besides the approach above. I've added some print statements so you can see what happens at each step. Your specific question is answered at the DICTERATOR step; I realize the following may be overkill for your question, but if you need to do any kind of topic modeling, this information is a step in the right direction.

$ cat mycorpus.txt

Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey  

Run the following create_corpus.py:

#!/usr/bin/env python
from gensim import corpora, models, similarities

stoplist = set('for a of the and to in'.split())

class MyCorpus(object):
    def __iter__(self):
        for line in open('mycorpus.txt'):
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(line.lower().split()) 

# TOKENIZERATOR: collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
print (dictionary)
print (dictionary.token2id)

# DICTERATOR: remove stop words and words that appear only once 
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in dictionary.dfs.items() if docfreq == 1]
dictionary.filter_tokens(stop_ids + once_ids)
print (dictionary)
print (dictionary.token2id)

dictionary.compactify() # remove gaps in id sequence after words that were removed
print (dictionary)
print (dictionary.token2id)

# VECTORERATOR: map tokens frequency per doc to vectors
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
for item in corpus_memory_friendly:
    print(item)
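For reference, `doc2bow` maps each document to a sparse list of `(token_id, count)` pairs, silently skipping tokens that are not in the dictionary. Roughly, in plain Python (a sketch of the semantics, not gensim's actual implementation):

```python
from collections import Counter

def doc2bow_sketch(tokens, token2id):
    # Count only tokens present in the dictionary; unknown tokens are
    # skipped, mirroring gensim's default doc2bow behaviour.
    counts = Counter(t for t in tokens if t in token2id)
    return sorted((token2id[t], n) for t, n in counts.items())

token2id = {"human": 0, "interface": 1, "computer": 2}
print(doc2bow_sketch("human computer interface computer".split(), token2id))
# [(0, 1), (1, 1), (2, 2)]
```

Because the tokens removed at the DICTERATOR step are no longer in the dictionary, they simply vanish from the resulting vectors.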

Good luck!

answered 2014-03-31T19:45:48.607

You may want to look at the gensim Dictionary's filter_extremes method:

filter_extremes(no_below=5, no_above=0.5, keep_n=100000)

answered 2017-02-07T21:33:53.813

Found this in the gensim tutorial:

from gensim import corpora, models, similarities

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once]
        for text in texts]
print(texts)

[['human', 'interface', 'computer'],
 ['survey', 'user', 'computer', 'system', 'response', 'time'],
 ['eps', 'user', 'interface', 'system'],
 ['system', 'human', 'system', 'eps'],
 ['user', 'response', 'time'],
 ['trees'],
 ['graph', 'trees'],
 ['graph', 'minors', 'trees'],
 ['graph', 'minors', 'survey']]

Basically, it iterates over a list containing the entire corpus and collects every word that appears only once into a set of tokens. It then iterates over every word in every document and removes the word if it is in that set of once-occurring tokens.

I assume this is the best way to do it, otherwise the tutorial would mention something else. But I could be wrong.

answered 2014-03-02T02:54:17.110
def get_term_frequency(dictionary, cutoff_freq):
    """Return a list of (term, frequency) tuples after removing all tuples
       whose frequency is not greater than cutoff_freq.
       dictionary (gensim.corpora.Dictionary): corpus dictionary
       cutoff_freq (int): terms whose document frequency is not greater than this will be dropped
    """
    tf = []
    for k, v in dictionary.dfs.items():
        tf.append((str(dictionary.get(k)), v))
    return [t for t in tf if t[1] > cutoff_freq]
answered 2016-10-21T00:18:38.127