I want to parse a huge XML file. The records in this huge file look like the following; in general, the file looks like this:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
record_1
...
record_n
</dblp>
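For a file of this size the usual approach is lxml's etree.iterparse, which streams the document instead of building the whole tree in memory. A minimal sketch of that pattern, using a small in-memory document (with hypothetical keys) in place of the real dblp.xml, and only "end" events plus lxml's tag filter:

```python
from io import BytesIO
from lxml import etree

# A tiny stand-in for the real dblp.xml (no DOCTYPE, so no dtd_validation here).
xml = (b"<dblp>"
       b"<article key='a1'><author>Jane Doe</author><year>2001</year></article>"
       b"<article key='a2'><author>John Roe</author><year>2002</year></article>"
       b"</dblp>")

keys = []
# tag="article" makes lxml deliver only <article> end events.
for event, elem in etree.iterparse(BytesIO(xml), events=("end",), tag="article"):
    keys.append(elem.get("key"))
    elem.clear()  # drop the element's children to keep memory usage flat

print(keys)  # -> ['a1', 'a2']
```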
I wrote some code that is supposed to let me pick records out of this file.
When I let the code run (including storing into the MySQL database, it takes almost 50 minutes), I notice that there is one record with almost a million authors. That must be wrong. I even checked it by looking at the file to make sure the file contains no error: that paper has only 5 or 6 authors, so everything is fine with dblp.xml. So I assume there is a logic error in my code, but I cannot find it. Maybe someone can tell me where the error is?
The code stops at the line if len(auth) > 200.
import sys
import MySQLdb
from lxml import etree

elements = ['article', 'inproceedings', 'proceedings', 'book', 'incollection']
tags = ["author", "title", "booktitle", "year", "journal"]

def fast_iter(context, cursor):
    mydict = {}  # represents a paper with all its tags.
    auth = []    # a list of authors who have written the paper "together".
    counter = 0  # counts the papers
    for event, elem in context:
        if elem.tag in elements and event == "start":
            mydict["element"] = elem.tag
            mydict["mdate"] = elem.get("mdate")
            mydict["key"] = elem.get("key")
        elif elem.tag == "title" and elem.text != None:
            mydict["title"] = elem.text
        elif elem.tag == "booktitle" and elem.text != None:
            mydict["booktitle"] = elem.text
        elif elem.tag == "year" and elem.text != None:
            mydict["year"] = elem.text
        elif elem.tag == "journal" and elem.text != None:
            mydict["journal"] = elem.text
        elif elem.tag == "author" and elem.text != None:
            auth.append(elem.text)
        elif event == "end" and elem.tag in elements:
            counter += 1
            print counter
            #populate_database(mydict, auth, cursor)
            mydict.clear()
            auth = []
            if mydict or auth:
                sys.exit("Program aborted because auth or mydict was not deleted properly!")
        if len(auth) > 200:  # There are up to ~150 authors per paper.
            sys.exit("auth: It seems there is a paper which has too many authors!")
        if len(mydict) > 50:  # A paper can have a lot of metadata.
            sys.exit("mydict: It seems there is a paper which has too many tags.")
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
    del context

def main():
    cursor = connectToDatabase()
    cursor.execute("""SET NAMES utf8""")
    context = etree.iterparse(PATH_TO_XML, dtd_validation=True, events=("start", "end"))
    fast_iter(context, cursor)
    cursor.close()

if __name__ == '__main__':
    main()
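One property of iterparse worth keeping in mind when debugging fast_iter: with events=("start", "end") every element is delivered twice, so the elif branches for title, author, and so on can fire on both events, and the lxml documentation warns that on the "start" event the element's text and children are not guaranteed to be complete yet. A minimal demonstration of the double delivery (small in-memory document, names assumed):

```python
from io import BytesIO
from lxml import etree

xml = b"<dblp><article key='a1'><author>Jane Doe</author></article></dblp>"

seen = []
# Every element shows up once as a "start" event and once as an "end" event.
for event, elem in etree.iterparse(BytesIO(xml), events=("start", "end")):
    seen.append((event, elem.tag))

print(seen)
# -> [('start', 'dblp'), ('start', 'article'), ('start', 'author'),
#     ('end', 'author'), ('end', 'article'), ('end', 'dblp')]
```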
EDIT:
When I wrote that function I was completely misled. I had overlooked a huge error: while trying to skip some unwanted records, I messed up some wanted ones. At one point in the file I skipped almost a million records in a row, and the wanted record that followed got blown up.
With the help of John and Paul I managed to rewrite my code. It is parsing right now and it seems to do well. I will report back if some unexpected errors remain unresolved. Otherwise: thank you all for your help! I really appreciate it!
def fast_iter2(context, cursor):
    elements = set([
        'article', 'inproceedings', 'proceedings', 'book', 'incollection',
        'phdthesis', "mastersthesis", "www"
    ])
    childElements = set(["title", "booktitle", "year", "journal", "ee"])
    paper = {}    # represents a paper with all its tags.
    authors = []  # a list of authors who have written the paper "together".
    paperCounter = 0
    for event, element in context:
        tag = element.tag
        if tag in childElements:
            if element.text:
                paper[tag] = element.text
                # print tag, paper[tag]
        elif tag == "author":
            if element.text:
                authors.append(element.text)
                # print "AUTHOR:", authors[-1]
        elif tag in elements:
            paper["element"] = tag
            paper["mdate"] = element.get("mdate")
            paper["dblpkey"] = element.get("key")
            # print tag, element.get("mdate"), element.get("key"), event
            if paper["element"] in ['phdthesis', "mastersthesis", "www"]:
                pass
            else:
                populate_database(paper, authors, cursor)
            paperCounter += 1
            print paperCounter
            paper = {}
            authors = []
            # if paperCounter == 100:
            #     break
        element.clear()
        while element.getprevious() is not None:
            del element.getparent()[0]
    del context
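The crucial detail in fast_iter2 is that paper and authors are reset for every record, including the skipped phdthesis/mastersthesis/www ones; only the database call is skipped. Otherwise the authors of a run of skipped records would leak into the next wanted record, exactly as described above. A self-contained sketch of that pattern (populate_database stands replaced by a hypothetical collected list; the record keys are made up):

```python
from io import BytesIO
from lxml import etree

xml = (b"<dblp>"
       b"<www key='w1'><author>Spam One</author><author>Spam Two</author></www>"
       b"<article key='a1'><author>Jane Doe</author></article>"
       b"</dblp>")

records = ('article', 'www')
skipped = ('www',)
collected = []  # stands in for populate_database()

authors = []
for event, element in etree.iterparse(BytesIO(xml), events=("end",)):
    if element.tag == "author" and element.text:
        authors.append(element.text)
    elif element.tag in records:
        if element.tag not in skipped:
            collected.append((element.get("key"), authors))
        authors = []  # reset even when the record itself is skipped!
        element.clear()

print(collected)  # -> [('a1', ['Jane Doe'])]
```

If the `authors = []` line sat only in the non-skipped branch, "Spam One" and "Spam Two" would end up attached to article a1.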