I am using something like the following simplified script to parse snippets of Python out of a larger file:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
    print(tok)
src = tokenize.untokenize(src)
The python2.x equivalent of this code differs slightly, but uses the same idiom and works fine. Running the snippet above under python3.0, however, I get this output:
(57, 'utf-8', (0, 0), (0, 0), '')
(1, 'foo', (1, 0), (1, 3), 'foo="bar"')
(53, '=', (1, 3), (1, 4), 'foo="bar"')
(3, '"bar"', (1, 4), (1, 9), 'foo="bar"')
(0, '', (2, 0), (2, 0), '')
Traceback (most recent call last):
  File "q.py", line 13, in <module>
    src = tokenize.untokenize(src)
  File "/usr/local/lib/python3.0/tokenize.py", line 236, in untokenize
    out = ut.untokenize(iterable)
  File "/usr/local/lib/python3.0/tokenize.py", line 165, in untokenize
    self.add_whitespace(start)
  File "/usr/local/lib/python3.0/tokenize.py", line 151, in add_whitespace
    assert row <= self.prev_row
AssertionError
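For reference, the numeric token types in that dump can be mapped back to readable names with tokenize.tok_name; the numbers themselves vary between Python versions, but the names do not. A minimal sketch:

import io
import tokenize

buf = io.BytesIO(b'foo="bar"')
for tok in tokenize.tokenize(buf.readline):
    # tok_name maps the numeric type to its name; the final token
    # prints as ENDMARKER -- the "EOF" token discussed below.
    print(tokenize.tok_name[tok[0]], tok)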
I have searched for references to this error and its causes, but have been unable to find any. What am I doing wrong, and how can I correct it?
[edit]
After partisann observed that adding a newline to the source makes the error go away, I started messing with the list I was untokenizing. It seems that the EOF token causes the error if it is not immediately preceded by a newline, so removing it gets rid of the error. The following script runs without error:
import io
import tokenize
src = 'foo="bar"'
src = bytes(src.encode())
src = io.BytesIO(src)
src = list(tokenize.tokenize(src.readline))
for tok in src:
    print(tok)
src = tokenize.untokenize(src[:-1])  # drop the trailing EOF token
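For completeness, partisann's suggestion of adding a newline to the source works just as well, since the EOF token is then preceded by the NEWLINE it apparently wants. A minimal sketch of that variant, where the trailing '\n' is the only change from the original script:

import io
import tokenize

src = 'foo="bar"\n'               # trailing newline added
src = io.BytesIO(src.encode())
toks = list(tokenize.tokenize(src.readline))
# Round-trips without the AssertionError; untokenize returns bytes
# here because the token stream carries an ENCODING token.
print(tokenize.untokenize(toks))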