This is an optimized version of the first tokenizer I wrote, and it works reasonably well. A secondary tokenizer can parse this function's output to create more specifically classified tokens.
def tokenize(source):
    # Normalize line endings, cut off '#' comments, split statements on ';',
    # strip whitespace, and yield only the non-empty statements.
    return (token for token in (token.strip() for line
            in source.replace('\r\n', '\n').replace('\r', '\n').split('\n')
            for token in line.split('#', 1)[0].split(';')) if token)
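As a rough illustration of what I mean by a secondary tokenizer (the classify_tokens helper and the category names below are only a hypothetical sketch, not code I actually use), each statement produced by tokenize could be broken down and labelled with a small regular expression:

import re

# Hypothetical second pass: label the pieces of one statement from tokenize().
CLASSIFY = re.compile(r'''
      (?P<number>\d+)            # integer literals
    | (?P<name>[A-Za-z_]\w*)     # identifiers and keywords such as and/or
    | (?P<operator>[-+*/=()])    # single-character operators and parentheses
''', re.VERBOSE)

def classify_tokens(statement):
    # Yield (kind, text) pairs, e.g. 'a = 1 + 2' gives
    # ('name', 'a'), ('operator', '='), ('number', '1'), ...
    for match in CLASSIFY.finditer(statement):
        yield match.lastgroup, match.group()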
My question is: how can the tokenize function above be written simply using the re module? Below is my attempt, which does not work.
import re

def tokenize2(string):
    search = re.compile(r'^(.+?)(?:;(.+?))*?(?:#.+)?$', re.MULTILINE)
    for match in search.finditer(string):
        for item in match.groups():
            yield item
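For comparison, a two-step version that first strips comments with re.sub and then splits on semicolons and line breaks does seem to reproduce the output shown in the edit below, although it is not the single-pattern finditer approach I was aiming for (tokenize_re is just a placeholder name):

import re

def tokenize_re(source):
    # Drop everything from '#' to the end of each line, then split what is
    # left on ';' or line breaks and yield the stripped, non-empty pieces.
    no_comments = re.sub(r'#[^\r\n]*', '', source)
    for token in re.split(r'[;\r\n]+', no_comments):
        token = token.strip()
        if token:
            yield token

Running this on the sample program in the edit appears to give the same eleven tokens as tokenize.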
Edit: This is the kind of output I am looking for from the tokenizer. Parsing the text should be easy.
>>> def tokenize(source):
        return (token for token in (token.strip() for line
                in source.replace('\r\n', '\n').replace('\r', '\n').split('\n')
                for token in line.split('#', 1)[0].split(';')) if token)
>>> for token in tokenize('''\
a = 1 + 2; b = a - 3 # create zero in b
c = b * 4; d = 5 / c # trigger div error
e = (6 + 7) * 8
# try a boolean operation
f = 0 and 1 or 2
a; b; c; e; f'''):
        print(repr(token))
'a = 1 + 2'
'b = a - 3'
'c = b * 4'
'd = 5 / c'
'e = (6 + 7) * 8'
'f = 0 and 1 or 2'
'a'
'b'
'c'
'e'
'f'
>>>