
I have a vocabulary file that contains words I need to find in other text documents. For each word, I need to find how many times it occurs, if at all. For example:

vocabulary.txt:

thought
await
thorough
away
red

test.txt:

I thought that if i await thorough enough, my thought would take me away.
Away I thought the thought.

In the end, I should see that there are 4 instances of thought, 1 of await, 2 of away, 1 of thorough, and 0 of red. I tried this:

for vocabLine in vocabOutFile:
    wordCounter = 0
    print >> sys.stderr, "Vocab word:", vocabLine
    for line in testFile:
        print >> sys.stderr, "Line 1 :", line
        if vocabLine.rstrip('\r\n') in line.rstrip('\r\n'):
            print >> sys.stderr, "Vocab word is in line"
            wordCounter = wordCounter + line.count(vocabLine)
            print >> sys.stderr, "Word counter", wordCounter
    testFile.seek(0, 0)

I have a strange feeling that it is not recognizing the words in the file because of the return characters in the vocab file, since during debugging I determined that it correctly counts any word that matches at the end of the string. However, with rstrip() the count is still incorrect. After all of this, I have to remove from the vocabulary the words that do not occur more than 2 times.

What am I doing wrong?

Thanks!


2 Answers


Use regex and collections.Counter:

import re
from collections import Counter
from itertools import chain

with open("voc") as v, open("test") as test:
    #create a set of words from vocabulary file
    words = set(line.strip().lower() for line in v) 

    #find words in test file using regex
    words_test = [ re.findall(r'\w+', line) for line in test ]

    #Create counter of words that are found in words set from vocab file
    counter = Counter(word.lower() for word in chain(*words_test)
                                   if word.lower() in words)
    for word in words:
        print word, counter[word]

Output

thought 4
away 2
await 1
red 0
thorough 1
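
The question also asks to drop vocabulary words that do not occur more than 2 times. A minimal sketch of that final step, reusing the words set and counter built above (the threshold of 2 is taken from the question):

# Keep only the vocabulary words that occur more than 2 times
frequent_words = set(word for word in words if counter[word] > 2)
print frequent_words   # with the sample files, only 'thought' (count 4) survives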
Answered 2013-05-29T21:51:33.723

Making a dictionary for your vocabulary is a good idea:

vocab_counter = {vocabLine.strip().lower(): 0 for vocabLine in vocabOutFile}

Then scan the testFile only once (which is more efficient), incrementing the count for each word:

import re

for line in testFile:
    # pull out every word in the line and bump its count if it is in the vocabulary
    for word in re.findall(r'\w+', line.lower()):
        if word in vocab_counter:
            vocab_counter[word] += 1
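
To complete the picture, here is a minimal sketch of printing the results and applying the same "more than 2 occurrences" filter mentioned in the question, using the vocab_counter built above:

# Print the count for every vocabulary word
for word, count in vocab_counter.items():
    print word, count

# Keep only the words that occur more than 2 times (threshold taken from the question)
vocab_counter = {word: count for word, count in vocab_counter.items() if count > 2}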
Answered 2013-05-29T21:54:04.983