14

I'm trying to speed up my project to count word frequencies. I have 360+ text files, and I need to get the total number of words and the number of times each word from another word list appears. I know how to do this with a single text file.

>>> import nltk
>>> import re
>>> import os
>>> os.chdir(r"C:\Users\Cameron\Desktop\PDF-to-txt")
>>> filename = "1976.03.txt"
>>> textfile = open(filename, "r")
>>> inputString = textfile.read()
>>> word_list = re.split(r'\s+', inputString.lower())
>>> print 'Words in text:', len(word_list)
#spits out number of words in the textfile
>>> word_list.count('inflation')
#spits out number of times 'inflation' occurs in the textfile
>>> word_list.count('jobs')
>>> word_list.count('output')

Getting the frequencies of 'inflation', 'jobs', and 'output' one at a time is too tedious. Can I put these words into a list and get the frequency of every word in the list at the same time? Basically this, but with Python.

Example: instead of this:

>>> word_list.count('inflation')
3
>>> word_list.count('jobs')
5
>>> word_list.count('output')
1

I want to do this (I know it's not real code; that's what I'm asking for help with):

>>> list1='inflation', 'jobs', 'output'
>>> word_list.count(list1)
'inflation', 'jobs', 'output'
3, 5, 1
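
For reference, a minimal way to express this with plain list operations (a sketch, assuming the word_list built above; the term list is just an example) is a dict comprehension over the terms:

# Sketch: count several terms in the existing word_list at once.
# Note: word_list.count() rescans the list for every term; the Counter
# approach in the answers below scans the text only once.
terms = ['inflation', 'jobs', 'output']
counts = {term: word_list.count(term) for term in terms}
print(counts)  # e.g. {'inflation': 3, 'jobs': 5, 'output': 1}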

My word list will have 10-20 terms, so I need to be able to point Python at the word list to get the counts. It would also be nice if the output could be copied and pasted into an Excel spreadsheet, with the words as columns and the frequencies as rows.

Example:

inflation, jobs, output
3, 5, 1

Finally, can anyone help automate this for all of the text files? I imagine I just point Python at the folder and it does the word counts from the new list for each of the 360+ text files. Seems easy enough, but I'm a bit stuck. Any help?

Output like this would be great:

Filename1
inflation, jobs, output
3, 5, 1

Filename2
inflation, jobs, output
7, 2, 4

Filename3
inflation, jobs, output
9, 3, 5
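
To sketch the folder-automation part (assumptions: the .txt files sit in the current folder and the term list is the illustrative one above; the answers below show fuller versions):

# Sketch: count a few target terms in every .txt file of a folder.
import glob
import re
from collections import Counter

terms = ['inflation', 'jobs', 'output']   # illustrative term list
wanted = set(terms)

for path in glob.glob('*.txt'):           # assumes the files are in the current folder
    with open(path) as f:
        words = re.findall(r'\w+', f.read().lower())
    counts = Counter(w for w in words if w in wanted)
    print(path)
    print(', '.join(terms))
    print(', '.join(str(counts[t]) for t in terms))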

Thanks!


4 Answers

20

If I understand your problem, collections.Counter() already covers this.

The examples from the docs seem to match your problem.

# Tally occurrences of words in a list
from collections import Counter

cnt = Counter()
for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
    cnt[word] += 1
print cnt


# Find the ten most common words in Hamlet
import re
words = re.findall(r'\w+', open('hamlet.txt').read().lower())
Counter(words).most_common(10)

From the examples above you should be able to do:

import re
import collections
words = re.findall(r'\w+', open('1976.03.txt').read().lower())
print collections.Counter(words)

Edit: a naive approach, to show one way.

import re
from collections import Counter

wanted = set("fish chips steak".split())
cnt = Counter()
words = re.findall(r'\w+', open('1976.03.txt').read().lower())
for word in words:
    if word in wanted:
        cnt[word] += 1
print cnt
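
Since a Counter behaves like a dict, another option along the same lines (a sketch, not part of the original answer; it reuses the question's 1976.03.txt and an illustrative term list) is to count every word once and then look up only the terms you care about:

# Sketch: build the full Counter, then pick out the wanted terms.
import re
from collections import Counter

wanted = ['inflation', 'jobs', 'output']          # illustrative term list
counts = Counter(re.findall(r'\w+', open('1976.03.txt').read().lower()))
print({term: counts[term] for term in wanted})    # missing terms count as 0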
Answered 2013-02-17T13:15:07.257
5

One possible implementation (using Counter)...

I think that instead of printing the output, it would be simpler to write a csv file and import that into Excel. Have a look at http://docs.python.org/2/library/csv.html and replace print_summary (a sketch of such an action follows the code below).

import os
from collections import Counter
import glob

def word_frequency(fileobj, words):
    """Build a Counter of specified words in fileobj"""
    # initialise the counter to 0 for each word
    ct = Counter(dict((w, 0) for w in words))
    file_words = (word for line in fileobj for word in line.split())
    filtered_words = (word for word in file_words if word in words)
    ct.update(filtered_words)
    return ct


def count_words_in_dir(dirpath, words, action=None):
    """For each .txt file in a dir, count the specified words"""
    for filepath in glob.iglob(os.path.join(dirpath, '*.txt')):
        with open(filepath) as f:
            ct = word_frequency(f, words)
            if action:
                action(filepath, ct)


def print_summary(filepath, ct):
    words = sorted(ct.keys())
    counts = [str(ct[k]) for k in words]
    print('{0}\n{1}\n{2}\n\n'.format(
        filepath,
        ', '.join(words),
        ', '.join(counts)))


words = set(['inflation', 'jobs', 'output'])
count_words_in_dir('./', words, action=print_summary)
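
Following the csv suggestion above, a replacement action could look something like this (a sketch, not part of the original answer; it assumes Python 3 and a hypothetical output file counts.csv, appending one row per input file):

import csv

def csv_summary(filepath, ct, out_path='counts.csv'):
    """Append one row per file: the filename followed by the counts,
    so the result opens directly in Excel (out_path is illustrative)."""
    words = sorted(ct.keys())
    with open(out_path, 'a', newline='') as out:
        writer = csv.writer(out)
        writer.writerow([filepath] + [ct[w] for w in words])

# usage, mirroring the call above:
# count_words_in_dir('./', words, action=csv_summary)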
Answered 2013-02-17T14:12:24.810
0
import os
import codecs

path = 'C:\\Users\\user\\Desktop\\sentiment2020\\POSITIVE'

# collect every .txt file under the folder
files = []
for root, dirs, filenames in os.walk(path):
    for name in filenames:
        if '.txt' in name:
            files.append(os.path.join(root, name))

# build one {word: count} dict per file
dicts = []
for f in files:
    print(f)
    with codecs.open(f, 'r', 'utf8', errors='ignore') as file1:
        content = file1.read()
    counts = {}
    for word in content.split():
        counts[word] = counts.get(word, 0) + 1
    dicts.append(counts)
    print(counts)


#  for i in range(len(files)):
#      with codecs.open('C:\\Users\\user\\Desktop\\sentiment2020\\NEGETIVE1\\sad1%s.txt' % i, 'w', "utf8") as filehandle:
#          filehandle.write('%s\n' % dicts)
Answered 2020-04-12T03:05:45.463
0

Simple, functional code to count word frequencies in a text file:

import string

def process_file(filename):
    hist = dict()
    f = open(filename, 'r')
    for line in f:
        process_line(line, hist)
    f.close()
    return hist

def process_line(line, hist):
    # split hyphenated words into separate words
    line = line.replace('-', ' ')
    for word in line.split():
        word = word.strip(string.punctuation + string.whitespace)
        word = word.lower()
        hist[word] = hist.get(word, 0) + 1

hist = process_file('1976.03.txt')
print(hist)
Answered 2016-02-19T20:05:42.760