I am using NLTK for language modelling, and I used this article as the corpus in my mypet.txt file. For most trigrams I get a Kneser-Ney probability of 0.25, and I don't know why. Is this correct? Why does it happen? Here is my word_ngram.py file:
import io
import re

import nltk
from nltk.util import ngrams
from nltk.tokenize import sent_tokenize
from preprocessor import utf8_to_ascii

with io.open("mypet.txt", 'r', encoding='utf8') as utf_file:
    file_content = utf_file.read()

ascii_content = utf8_to_ascii(file_content)
sentence_tokenize_list = sent_tokenize(ascii_content)

all_tgrams = []
for sentence in sentence_tokenize_list:
    # Strip sentence-final punctuation, then tokenize.
    sentence = sentence.rstrip('.!?')
    tokens = re.findall(r"\w+(?:[-']\w+)*|'|[-.(]+|\S\w*", sentence)
    tgrams = ngrams(tokens, 3, pad_left=True, pad_right=True,
                    left_pad_symbol='<s>', right_pad_symbol='</s>')
    all_tgrams.extend(tgrams)

frequency_distribution = nltk.FreqDist(all_tgrams)
kneser_ney = nltk.KneserNeyProbDist(frequency_distribution)

for i in kneser_ney.samples():
    print("{0}: {1}".format(kneser_ney.prob(i), i))
Here is my preprocessor.py file, which handles UTF-8 characters:
# -*- coding: utf-8 -*-
import json

def utf8_to_ascii(utf8_text):
    with open("utf_to_ascii.json") as data_file:
        data = json.load(data_file)
    utf_table = data["chars"]
    for key, value in utf_table.items():
        utf8_text = utf8_text.replace(key, value)
    return utf8_text.encode('ascii')
Here is the utf_to_ascii.json file I use to replace UTF-8 characters with their ASCII equivalents:
{
    "chars": {
        "“": "",
        "”": "",
        "’": "'",
        "—": "-",
        "–": "-"
    }
}
Here is some sample output for a few trigrams:
0.25: ('side', '</s>', '</s>')
0.25: ('I', 'throw', 'a')
0.25: ('it', 'to', 'us')
0.25: ('guards', 'the', 'house')
0.0277777777778: ('<s>', 'I', 'am')
0.25: ('a', 'fire', 'broke')
0.125: ('our', 'house', 'at')
0.25: ('that', 'a', 'heap')
0.25: ('is', 'covered', 'with')
0.25: ('with', 'a', 'soft')
0.00862068965517: ('<s>', 'It', 'begins')
0.25: ('swim', '</s>', '</s>')
0.25: ('a', 'member', 'of')
0.25: ('bread', '</s>', '</s>')
0.25: ('love', '</s>', '</s>')
0.25: ('a', 'soft', 'fur')
0.25: ('body', 'is', 'covered')
0.25: ('I', 'bathe', 'it')
0.25: ('it', 'is', 'out')
0.25: ('<s>', 'A', 'thief')
0.25: ('go', 'hunting', '</s>')
0.025: ('It', 'is', 'loved')
0.25: ('it', 'a', 'loving')
0.25: ('with', 'soap', 'every-day')
0.25: ('other', 'members', 'of')
0.25: ('lying', 'there', 'was')
0.25: ('sensitive', 'to', 'sound')
0.25: ('and', 'the', 'flames')
0.25: ('kitchen', '</s>', '</s>')
0.25: ('strong', 'instinct', '</s>')
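For context on where a value like 0.25 could come from: for a trigram that was seen in training, the highest-order Kneser-Ney term is (count(w1 w2 w3) − D) / count(w1 w2), and NLTK's KneserNeyProbDist uses a default discount of D = 0.75. A minimal sketch of that arithmetic (the helper function name is my own, not part of NLTK):

```python
def kneser_ney_seen_prob(trigram_count, context_bigram_count, discount=0.75):
    """Highest-order Kneser-Ney term for a trigram seen in training:
    (count(w1 w2 w3) - D) / count(w1 w2)."""
    return (trigram_count - discount) / context_bigram_count

# A trigram seen once, whose bigram context was also seen only once:
print(kneser_ney_seen_prob(1, 1))  # -> 0.25
# The same trigram, but its bigram context appears twice:
print(kneser_ney_seen_prob(1, 2))  # -> 0.125
```

Under this reading, a small corpus where most trigrams and their contexts occur exactly once would produce 0.25 for most samples, matching the output shown.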