I have some text:
s="Imageclassificationmethodscan beroughlydividedinto two broad families of approaches:"
I want to parse it into individual words. I briefly looked at enchant and nltk, but didn't see anything that looked immediately useful. If I had time to invest in this, I'd look into writing a dynamic program that uses enchant's ability to check whether a word is English. I would have thought something like this already exists online — am I wrong?
Try this, using Biopython (pip install biopython):
from Bio import trie  # note: Bio.trie has been removed from recent Biopython releases
import string

def get_trie(dictfile='/usr/share/dict/american-english'):
    # Load every dictionary word into a trie for fast lookups.
    tr = trie.trie()
    with open(dictfile) as f:
        for line in f:
            word = line.rstrip()
            try:
                word = word.encode(encoding='ascii', errors='ignore')
                tr[word] = len(word)
                assert tr.has_key(word), "Missing %s" % word
            except UnicodeDecodeError:
                pass
    return tr

def get_trie_word(tr, s):
    # Greedily take the longest dictionary word that prefixes s.
    for end in reversed(range(len(s))):
        word = s[:end + 1]
        if tr.has_key(word):
            return word, s[end + 1:]
    return None, s

def main(s):
    tr = get_trie()
    while s:
        word, s = get_trie_word(tr, s)
        print(word)

if __name__ == '__main__':
    s = "Imageclassificationmethodscan beroughlydividedinto two broad families of approaches:"
    s = s.strip(string.punctuation)
    s = s.replace(" ", '')
    s = s.lower()
    main(s)
Running it prints:
image
classification
methods
can
be
roughly
divided
into
two
broad
families
of
approaches
There are some degenerate cases in English where this won't work. You'd need backtracking to handle those, but this should get you started.
>>> main("expertsexchange")
experts
exchange
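The backtracking can also be replaced by a short dynamic program, roughly what the question was asking for. Here is a minimal self-contained sketch (toy dictionary instead of a trie, both assumptions of mine) that prefers the segmentation with the fewest words:

```python
def segment(s, words):
    """Split s into dictionary words via dynamic programming.

    best[i] is a minimal-word-count segmentation of s[:i],
    or None if s[:i] cannot be segmented at all.
    """
    best = [None] * (len(s) + 1)
    best[0] = []
    for i in range(1, len(s) + 1):
        for j in range(i):
            if best[j] is not None and s[j:i] in words:
                cand = best[j] + [s[j:i]]
                if best[i] is None or len(cand) < len(best[i]):
                    best[i] = cand
    return best[len(s)]

# Toy dictionary; in practice load /usr/share/dict/american-english.
words = {"expert", "experts", "sex", "exchange", "change"}
print(segment("expertsexchange", words))  # ['experts', 'exchange']
```

Preferring fewer words is what keeps "expertsexchange" from coming out as expert/sex/change; a real dictionary would of course contain all five of those toy entries.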
This is a problem that comes up all the time in Asian NLP. If you have a dictionary, you can use this: http://code.google.com/p/mini-segmenter/ (disclaimer: I wrote it, hope you don't mind).
Note that the search space may be extremely large, since an English word spelled out in letters is certainly longer than a Chinese/Japanese word counted in syllabic characters.
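For comparison, the classic baseline in dictionary-based CJK segmentation is greedy longest-match ("maximal matching"); a minimal sketch, with a toy dictionary of my own as a stand-in:

```python
def max_match(s, words, max_len=20):
    """Greedy longest-match segmentation, the classic CJK baseline.

    At each position take the longest dictionary word; a character
    with no dictionary match falls through as a one-char token.
    """
    out, i = [], 0
    while i < len(s):
        for length in range(min(max_len, len(s) - i), 0, -1):
            if s[i:i + length] in words:
                out.append(s[i:i + length])
                i += length
                break
        else:
            out.append(s[i])  # no dictionary word starts here
            i += 1
    return out

words = {"image", "classification", "methods"}
print(max_match("imageclassificationmethods", words))
# ['image', 'classification', 'methods']
```

The max_len cap is what bounds the search per position; it matters much more for alphabetic English (long words) than for syllabic scripts, which is exactly the search-space point above.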