Summary: I want to compare ɔ̃, ɛ̃ and ɑ̃ with ɔ, ɛ and a, which are all distinct sounds, but in my text file the nasal vowels ɔ̃, ɛ̃ and ɑ̃ are written as ɔ~, ɛ~ and a~, i.e. a base letter followed by a combining tilde.
I wrote a script that steps through the characters of two words in parallel, comparing them to find the differing pair of characters between two French words that differ by only one phoneme.
The end goal is to filter a list of Anki cards so that only certain phoneme pairs are kept, because the other pairs are too easy to tell apart. Each word pair corresponds to one Anki note.
For that I need to distinguish the nasal vowels ɔ̃, ɛ̃ and ɑ̃ from the other sounds, because they are really only confused with one another.
As written, the code treats a nasalized character as the base character plus ~, i.e. as two characters. So if the only difference between two words is a final nasalized character versus the plain character, the script finds no difference at that last letter, then discovers that one word is shorter than the other (which still has the ~ left over) and throws an error when it tries to compare one more character. That is a whole "problem" in its own right, but if I could get the nasalized characters read as single units the words would have the same length and it would go away.
I don't want to replace the accented characters with unaccented ones, as some people do for comparison, because they are different sounds.
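To illustrate what the file actually contains, here is a minimal check (the string literal is just an example; my output below suggests the file stores a base letter followed by U+0303 COMBINING TILDE):

import unicodedata

for ch in "ɛ̃":
    print(hex(ord(ch)), unicodedata.name(ch))
# 0x25b LATIN SMALL LETTER OPEN E
# 0x303 COMBINING TILDE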
I already tried "normalizing" the Unicode to its "composed" form, e.g. unicodedata.normalize('NFKC', line), but it didn't change anything.
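Here is a small test of that idea (as far as I can tell, normalization can't merge these particular letters because, unlike é, the IPA nasal vowels have no precomposed codepoints; note also that normalize() returns a new string rather than modifying line in place):

import unicodedata

s = "ɛ\u0303"  # ɛ followed by a combining tilde
print(len(s))                                        # 2
print(len(unicodedata.normalize('NFC', s)))          # still 2: no precomposed ɛ̃ exists
print(len(unicodedata.normalize('NFKC', s)))         # still 2
print(len(unicodedata.normalize('NFC', "e\u0301")))  # 1: e + combining acute does compose to é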
Here is some output, including the line where it threw the error. The print statements show, for each comparison, the index, the two words, and the character from each word at that index. So the last letters on each line are what the script "thinks" the two characters are, and you can see it thinks ɛ̃ and ɛ are the same. When it does report a difference it can also pick the wrong letter pair, and getting the pair right matters because I check it against a master list of allowed pairs.
0 alyʁ alɔʁ a a # this first word is done well
1 alyʁ alɔʁ l l
2 alyʁ alɔʁ y ɔ # it doesn't continue to compare the ʁ because it found the difference
...
0 ɑ̃bisjø ɑ̃bisjɔ̃ ɑ ɑ
1 ɑ̃bisjø ɑ̃bisjɔ̃ ̃ ̃ # the tildes are compared / treated separately
2 ɑ̃bisjø ɑ̃bisjɔ̃ b b
3 ɑ̃bisjø ɑ̃bisjɔ̃ i i
4 ɑ̃bisjø ɑ̃bisjɔ̃ s s
5 ɑ̃bisjø ɑ̃bisjɔ̃ j j
6 ɑ̃bisjø ɑ̃bisjɔ̃ ø ɔ # luckily that wasn't where the difference was, this is
...
0 osi ɛ̃si o ɛ # here it should report (o, ɛ̃), not (o, ɛ)
...
0 bɛ̃ bɔ̃ b b
1 bɛ̃ bɔ̃ ɛ ɔ # an error of this type
...
0 bo ba b b
1 bo ba o a # this is working correctly
...
0 bjɛ bjɛ̃ b b
1 bjɛ bjɛ̃ j j
2 bjɛ bjɛ̃ ɛ ɛ # AND here's the money, it thinks these are the same letter, but it has also run out of characters to compare from the first word, so it throws the error below
Traceback (most recent call last):
  File "C:\Users\tchak\OneDrive\Desktop\French.py", line 42, in <module>
    letter1 = line[0][index]
IndexError: string index out of range
Here is the code:
import unicodedata

def lens(word):
    return len(word)

# open file, and a new file to write to
input_file = "./phonetics_input.txt"
output_file = "./phonetics_output.txt"

set1 = ["e", "ɛ", "œ", "ø", "ə"]
set2 = ["ø", "o", "œ", "ɔ", "ə"]
set3 = ["ə", "i", "y"]
set4 = ["u", "y", "ə"]
set5 = ["ɑ̃", "ɔ̃", "ɛ̃", "ə"]
set6 = ["a", "ə"]
vowelsets = [set1, set2, set3, set4, set5, set6]

with open(input_file, encoding="utf8") as ipf, open(output_file, "w", encoding="utf8") as opf:
    vowelpairs = []
    acceptedvowelpairs = []
    input_lines = ipf.readlines()
    print(len(input_lines))
    for line in input_lines:
        # find the word IPA transcripts
        unicodedata.normalize('NFKC', line)  # note: returns a new string; the result isn't assigned here
        line = line.split("/")
        line.sort(key=lens)
        line = line[0:2]  # the shortest two strings after splitting are the IPA words
        index = 0
        letter1 = line[0][index]
        letter2 = line[1][index]
        print(index, line[0], line[1], letter1, letter2)
        linelen = max(len(line[0]), len(line[1]))
        while letter1 == letter2:
            index += 1
            letter1 = line[0][index]  # throws the error here, technically, after printing the last characters and incrementing the index one more
            letter2 = line[1][index]
            print(index, line[0], line[1], letter1, letter2)
        vowelpairs.append((letter1, letter2))

    for i in vowelpairs:
        for vowelset in vowelsets:
            if set(i).issubset(vowelset):
                acceptedvowelpairs.append(i)

    print(len(vowelpairs))
    print(len(acceptedvowelpairs))
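For what it's worth, here is a rough sketch (not part of my script; group_graphemes is just a name I made up) of the kind of grouping I'm after, attaching each combining mark to the letter before it so the nasal vowels become single units:

import unicodedata

def group_graphemes(word):
    # attach each combining mark (e.g. U+0303 COMBINING TILDE) to the preceding base letter
    units = []
    for ch in word:
        if unicodedata.combining(ch) and units:
            units[-1] += ch
        else:
            units.append(ch)
    return units

print(group_graphemes("ɑ̃bisjɔ̃"))  # ['ɑ̃', 'b', 'i', 's', 'j', 'ɔ̃']
print(group_graphemes("bjɛ̃"))      # ['b', 'j', 'ɛ̃']

With that kind of grouping, bjɛ and bjɛ̃ would both come out as three units and the difference would be reported as (ɛ, ɛ̃), which is what I want.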