
I want to parse a huge XML file. The records in this huge file look, for example, like the one below. In general, the file looks like this:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
    record_1
    ...
    record_n
</dblp>

I have written some code that lets me select records from this file.

If I let the code run (including storing in a MySQL database, which takes almost 50 minutes), I notice that there is one record with nearly a million authors. That must be wrong. I even checked by looking at the file itself to make sure it contains no errors. The paper in question has only 5 or 6 authors, so everything is fine with dblp.xml. I therefore assume there is a logic error in my code, but I can't figure out where it might be. Perhaps someone can tell me where the error is?

The code stops at the line "if len(auth) > 2000".

import sys
import MySQLdb
from lxml import etree


elements = ['article', 'inproceedings', 'proceedings', 'book', 'incollection']
tags = ["author", "title", "booktitle", "year", "journal"]


def fast_iter(context, cursor):
    mydict = {} # represents a paper with all its tags.
    auth = [] # a list of authors who have written the paper "together".
    counter = 0 # counts the papers

    for event, elem in context:
        if elem.tag in elements and event == "start":
            mydict["element"] = elem.tag
            mydict["mdate"] = elem.get("mdate")
            mydict["key"] = elem.get("key")

        elif elem.tag == "title" and elem.text != None:
            mydict["title"] = elem.text
        elif elem.tag == "booktitle" and elem.text != None:
            mydict["booktitle"] = elem.text
        elif elem.tag == "year" and elem.text != None:
            mydict["year"] = elem.text
        elif elem.tag == "journal" and elem.text != None:
            mydict["journal"] = elem.text
        elif elem.tag == "author" and elem.text != None:
            auth.append(elem.text)
        elif event == "end" and elem.tag in elements:
            counter += 1
            print counter
            #populate_database(mydict, auth, cursor)
            mydict.clear()
            auth = []
            if mydict or auth:
                sys.exit("Program aborted because auth or mydict was not deleted properly!")
        if len(auth) > 200: # There are up to ~150 authors per paper. 
            sys.exit("auth: It seems there is a paper which has too many authors!")
        if len(mydict) > 50: # A paper can have much metadata.
            sys.exit("mydict: It seems there is a paper which has too many tags.")

        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
    del context


def main():
    cursor = connectToDatabase()
    cursor.execute("""SET NAMES utf8""")

    context = etree.iterparse(PATH_TO_XML, dtd_validation=True, events=("start", "end"))
    fast_iter(context, cursor)

    cursor.close()


if __name__ == '__main__':
    main()
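Before running the 50-minute job against the full dump, the collecting logic can be smoke-tested on a tiny in-memory snippet. The sketch below is a trimmed re-implementation of that logic (not the question's exact code; the two-record snippet and the collect() helper are made up for the test). It uses end events only, so the element text is fully parsed when it is read, and it rebinds fresh objects per record instead of clearing shared ones:

```python
from io import BytesIO
from lxml import etree

# A tiny in-memory stand-in for dblp.xml (an assumption for this test;
# the real file is parsed from disk with dtd_validation=True).
SNIPPET = b"""<dblp>
  <article mdate="2011-01-01" key="journals/x/A">
    <author>Alice</author><author>Bob</author>
    <title>Paper One</title><year>2001</year>
  </article>
  <inproceedings mdate="2011-01-02" key="conf/y/B">
    <author>Carol</author>
    <title>Paper Two</title><year>2002</year>
  </inproceedings>
</dblp>"""

ELEMENTS = {'article', 'inproceedings', 'proceedings', 'book', 'incollection'}

def collect(context):
    papers = []
    mydict, auth = {}, []
    for event, elem in context:   # end events only: text is fully parsed
        tag = elem.tag
        if tag == "author" and elem.text:
            auth.append(elem.text)
        elif tag in ("title", "year") and elem.text:
            mydict[tag] = elem.text
        elif tag in ELEMENTS:
            mydict["key"] = elem.get("key")
            papers.append((mydict, auth))
            mydict, auth = {}, []   # fresh objects, not .clear()
    return papers

papers = collect(etree.iterparse(BytesIO(SNIPPET), events=("end",)))
for meta, authors in papers:
    print(meta["key"], authors)
```

Each printed record should carry only its own authors; if authors from the first record show up in the second, the reset logic is broken.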

EDIT:

When I wrote this function, I was completely misled. I overlooked a huge error: while trying to skip some unwanted records, I messed up some wanted records. At one point in the file I skipped almost a million records in a row, and the wanted record that followed blew up.

With the help of John and Paul I managed to rewrite my code. It is parsing right now and seems to do it well. I'll report back if some unexpected errors remain unresolved. Otherwise, thank you all for your help! I really appreciate it!

def fast_iter2(context, cursor):
    elements = set([
        'article', 'inproceedings', 'proceedings', 'book', 'incollection',
        'phdthesis', "mastersthesis", "www"
        ])
    childElements = set(["title", "booktitle", "year", "journal", "ee"])

    paper = {} # represents a paper with all its tags.
    authors = []   # a list of authors who have written the paper "together".
    paperCounter = 0
    for event, element in context:
        tag = element.tag
        if tag in childElements:
            if element.text:
                paper[tag] = element.text
                # print tag, paper[tag]
        elif tag == "author":
            if element.text:
                authors.append(element.text)
                # print "AUTHOR:", authors[-1]
        elif tag in elements:
            paper["element"] = tag
            paper["mdate"] = element.get("mdate")
            paper["dblpkey"] = element.get("key")
            # print tag, element.get("mdate"), element.get("key"), event
            if paper["element"] in ['phdthesis', "mastersthesis", "www"]:
                pass
            else:
                populate_database(paper, authors, cursor)
            paperCounter += 1
            print paperCounter
            paper = {}
            authors = []
            # if paperCounter == 100:
            #     break
            element.clear()
            while element.getprevious() is not None:
                del element.getparent()[0]
    del context
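The crucial change can be isolated in a few lines: the author list must be reset at the end of every container element, including the skipped phdthesis/mastersthesis/www records, otherwise their authors leak into the next wanted record. A minimal sketch of that invariant (illustrative snippet and variable names, not the code above):

```python
from io import BytesIO
from lxml import etree

# One skipped record (www) followed by one wanted record (article).
SNIPPET = b"""<dblp>
  <www key="homepages/x"><author>Homepage Owner</author></www>
  <article key="journals/x/A"><author>Alice</author></article>
</dblp>"""

CONTAINERS = {'article', 'inproceedings', 'proceedings', 'book',
              'incollection', 'phdthesis', 'mastersthesis', 'www'}
SKIPPED = {'phdthesis', 'mastersthesis', 'www'}

stored = []
authors = []
for event, elem in etree.iterparse(BytesIO(SNIPPET), events=("end",)):
    if elem.tag == "author" and elem.text:
        authors.append(elem.text)
    elif elem.tag in CONTAINERS:
        if elem.tag not in SKIPPED:
            stored.append((elem.get("key"), authors))
        authors = []          # reset even when the record was skipped
print(stored)
```

If the reset happened only for stored records, "Homepage Owner" would be carried over into the article's author list; with the reset on every container, only ("journals/x/A", ["Alice"]) survives.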

2 Answers


Add print statements to the code blocks where you detect the start and end of the tags in elements, to make sure you are detecting those correctly. I suspect that for some reason you are not reaching the code that clears the author list.

Try commenting out this code (or at least moving it into the "end" handling block):

    elem.clear()
    while elem.getprevious() is not None:
        del elem.getparent()[0]

Python should take care of clearing these elements for you as you iterate through the XML. The "del context" is also superfluous. Let the reference counters do the work for you here.
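If the memory-saving idiom is kept, it would go inside the end-of-record branch, along these lines (a sketch over a made-up three-record snippet; the clear/delete-previous-siblings idiom itself is taken from the question):

```python
from io import BytesIO
from lxml import etree

xml = b"<dblp><article key='a'/><article key='b'/><article key='c'/></dblp>"
for event, elem in etree.iterparse(BytesIO(xml), events=("end",)):
    if elem.tag == "article":        # the "end" handling block for a record
        # ... process the finished record here ...
        elem.clear()                              # free this record's content
        while elem.getprevious() is not None:     # drop already-cleared siblings
            del elem.getparent()[0]

root = elem  # after the loop, elem is the root element (<dblp>)
print(root.tag, len(root))
```

The first two articles get deleted as previous siblings while their successors are processed, so only the last (cleared) record remains attached to the root; the tree never grows with the file.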

answered 2011-05-17T10:34:34.117

Please eliminate one source of confusion: you don't actually say that the code you showed really does trip over one of your "count of things > 2000" tests. If it doesn't, then the problem lies in the database-update code (which you haven't shown us).

If it does trip:

(1) Reduce the limits from 2000 to reasonable values (mydict can only ever hold about 7 keys, and your own comment says there are up to ~150 authors per paper).

(2) When the trip happens, do print repr(mydict); print; print repr(auth) and analyse the contents against your file.

Also: with iterparse(), there is no guarantee that elem.text has been parsed when the "start" event happens. To save some run time, you should access elem.text only when the "end" event happens. In fact, there seems to be no reason to want "start" events at all. You also define a list tags but never use it. The opening of your function could be written much more concisely:

def fast_iter(context, cursor):
    mydict = {} # represents a paper with all its tags.
    auth = [] # a list of authors who have written the paper "together".
    counter = 0 # counts the papers
    tagset1 = set(['article', 'inproceedings', 'proceedings', 'book', 'incollection'])
    tagset2 = set(["title", "booktitle", "year", "journal"])
    for event, elem in context:
        tag = elem.tag
        if tag in tagset2:
            if elem.text:
                mydict[tag] = elem.text
        elif tag == "author":
            if elem.text:
                auth.append(elem.text)
        elif tag in tagset1:
            counter += 1
            print counter
            mydict["element"] = tag
            mydict["mdate"] = elem.get("mdate")
            mydict["dblpkey"] = elem.get("key")
            #populate_database(mydict, auth, cursor)
            mydict.clear() # Why not just do mydict = {} ??
            auth = []
            # etc etc

Don't forget to fix the call to iterparse() to remove the events arg.
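With no events argument, lxml's iterparse() defaults to end events only, so the call in the question could become etree.iterparse(PATH_TO_XML, dtd_validation=True). A minimal self-contained illustration (made-up snippet) that text is fully available by the time an "end" event fires:

```python
from io import BytesIO
from lxml import etree

xml = b"<dblp><article><title>On Parsing</title></article></dblp>"
title = None
for event, elem in etree.iterparse(BytesIO(xml)):  # default events: ("end",)
    if elem.tag == "title":
        title = elem.text   # guaranteed to be parsed at the "end" event
print(title)
```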

Further, I'm reasonably sure that the elem.clear() should be done only when the "end" event happens, and only when tag in tagset1. Read the relevant docs carefully; doing the cleanup during a "start" event could very well corrupt your tree.

answered 2011-05-17T12:37:19.033