
I'm working on an information retrieval task. As part of preprocessing, I want to do:

  1. Stop word removal
  2. Tokenization
  3. Stemming (Porter stemming)

Initially I skipped tokenization, and as a result I ended up with terms like these:

broker
broker'
broker,
broker.
broker/deal
broker/dealer'
broker/dealer,
broker/dealer.
broker/dealer;
broker/dealers),
broker/dealers,
broker/dealers.
brokerag
brokerage,
broker-deal
broker-dealer,
broker-dealers,
broker-dealers.
brokered.
brokers,
brokers.

So now I realize the importance of tokenization. Is there a standard algorithm for tokenizing English text? Based on string.whitespace and the common punctuation characters, I wrote:

def Tokenize(text):
    words = text.split(['.',',', '?', '!', ':', ';', '-','_', '(', ')', '[', ']', '\'', '`', '"', '/',' ','\t','\n','\x0b','\x0c','\r'])    
    return [word.strip() for word in words if word.strip() != '']
  1. I get a TypeError: coercing to Unicode: need string or buffer, list found error!
  2. How can this tokenization routine be improved?

2 Answers


There is no single perfect algorithm for tokenization, though your approach may well suffice for information retrieval purposes. (The TypeError you hit is because str.split expects a single separator string, not a list of separators.) The same splitting is easier to implement with a regular expression:

import re

def Tokenize(text):
    # Split on punctuation and whitespace, then drop the empty strings.
    words = re.split(r'[-\.,?!:;_()\[\]\'`"/\t\n\r \x0b\x0c]+', text)
    return [word.strip() for word in words if word.strip() != '']
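
Applied to a couple of the terms from the question (a quick check of the regex, not part of the original answer):

>>> Tokenize('broker/dealers), broker-dealer,')
['broker', 'dealers', 'broker', 'dealer']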

It can be improved in various ways, such as handling abbreviations properly:

>>> Tokenize('U.S.')
['U', 'S']
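
One hedged way to deal with that is to match tokens instead of splitting on delimiters, so dotted abbreviations (and hyphenated words) survive intact. The pattern below is only a sketch, not a standard tokenizer:

import re

# Sketch: keep dotted abbreviations such as "U.S." together, and keep
# internal hyphens/apostrophes inside words; everything else is ignored.
token_pattern = re.compile(r"(?:[A-Za-z]\.){2,}|[A-Za-z0-9]+(?:['-][A-Za-z0-9]+)*")

def TokenizeKeepAbbrev(text):
    return token_pattern.findall(text)

>>> TokenizeKeepAbbrev('U.S. broker-dealers, brokers.')
['U.S.', 'broker-dealers', 'brokers']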

Also watch out for how you handle the dash (-). Consider:

>>> Tokenize('A-level')
['A', 'level']

If 'A' or 'a' occurs in your stop list, this will be reduced to just 'level'.
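
To make that interaction concrete (the stop list below is just an illustrative example, not from the question):

>>> stop_words = set(['a', 'an', 'the'])
>>> [t for t in Tokenize('A-level') if t.lower() not in stop_words]
['level']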

I suggest you check out Natural Language Processing with Python, chapter 3, and the NLTK toolkit.
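
For reference, here is a minimal sketch of the whole pipeline from the question (stop word removal, tokenization, Porter stemming) using NLTK; the exact filtering choices here are assumptions, not something prescribed by NLTK:

from nltk import wordpunct_tokenize
from nltk.corpus import stopwords      # needs nltk.download('stopwords') once
from nltk.stem import PorterStemmer

def preprocess(text):
    # Tokenize, drop stop words and punctuation-only tokens, then Porter-stem.
    stop = set(stopwords.words('english'))
    stemmer = PorterStemmer()
    tokens = wordpunct_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens
            if t.isalnum() and t not in stop]

>>> preprocess('The broker-dealers were brokering.')
['broker', 'dealer', 'broker']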

Answered 2010-10-31T15:02:03.013

As larsman mentioned, NLTK has a variety of different tokenizers that accept various options. Using the default:

>>> import nltk
>>> words = nltk.wordpunct_tokenize('''
... broker
... broker'
... broker,
... broker.
... broker/deal
... broker/dealer'
... broker/dealer,
... broker/dealer.
... broker/dealer;
... broker/dealers),
... broker/dealers,
... broker/dealers.
... brokerag
... brokerage,
... broker-deal
... broker-dealer,
... broker-dealers,
... broker-dealers.
... brokered.
... brokers,
... brokers.
... ''')
['broker', 'broker', "'", 'broker', ',', 'broker', '.', 'broker', '/', 'deal', 'broker', '/', 'dealer', "'", 'broker', '/', 'dealer', ',', 'broker', '/', 'dealer', '.', 'broker', '/', 'dealer', ';', 'broker', '/', 'dealers', '),', 'broker', '/', 'dealers', ',', 'broker', '/', 'dealers', '.', 'brokerag', 'brokerage', ',', 'broker', '-', 'deal', 'broker', '-', 'dealer', ',', 'broker', '-', 'dealers', ',', 'broker', '-', 'dealers', '.', 'brokered', '.', 'brokers', ',', 'brokers', '.']
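
wordpunct_tokenize is only one of those tokenizers. If you would rather not produce punctuation tokens at all, a RegexpTokenizer that only matches word characters is one possible alternative (a sketch, not what was used above):

>>> from nltk.tokenize import RegexpTokenizer
>>> RegexpTokenizer(r'\w+').tokenize('broker/dealers), broker-dealer,')
['broker', 'dealers', 'broker', 'dealer']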

If you want to filter out list items that are only punctuation, you can do something like this:

>>> filter_chars = "',.;()-/"
>>> def is_not_punctuation(s):
        '''
        True when s contains anything other than characters from filter_chars,
        i.e. when the token is not made up purely of punctuation.
        '''
        return not set(s) <= set(filter_chars)
>>> filter(is_not_punctuation, words)

which returns:

['broker', 'broker', 'broker', 'broker', 'broker', 'deal', 'broker', 'dealer', 'broker', 'dealer', 'broker', 'dealer', 'broker', 'dealer', 'broker', 'dealers', 'broker', 'dealers', 'broker', 'dealers', 'brokerag', 'brokerage', 'broker', 'deal', 'broker', 'dealer', 'broker', 'dealers', 'broker', 'dealers', 'brokered', 'brokers', 'brokers']
Answered 2010-10-31T15:58:21.997