
I have some XML:

<article>
<uselesstag></uselesstag>
<topic>oil, gas</topic>
<body>body text</body>
</article>

<article>
<uselesstag></uselesstag>
<topic>food</topic>
<body>body text</body>
</article>

<article>
<uselesstag></uselesstag>
<topic>cars</topic>
<body>body text</body>
</article>

There are many, many useless tags. I want to use BeautifulSoup to collect all the text in the body tags, together with their associated topic text, and create some new XML from it.

I'm new to Python, but I suspect that some form of

import arff
from xml.etree import ElementTree
import re
from StringIO import StringIO

from BeautifulSoup import BeautifulSoup

totstring = ""

with open('reut2-000.sgm', 'r') as inF:
    for line in inF:
        # strip characters we don't want before parsing
        string = re.sub("[^0-9a-zA-Z<>/\s=!-\"\"]+", "", line)
        totstring += string

soup = BeautifulSoup(totstring)

body = soup.find("body")  # first <body> only

for anchor in soup.findAll('body'):
    # Stick body and its topics in an associated array?
    pass

will work.

1) How do I do this? 2) Should I add a root node to the XML? Otherwise it isn't valid XML, is it?

Many thanks

EDIT:

What I want to end up with is:

<article>
<topic>oil, gas</topic>
<body>body text</body>
</article>

<article>
<topic>food</topic>
<body>body text</body>
</article>

<article>
<topic>cars</topic>
<body>body text</body>
</article>

just without all the many, many useless tags.


2 Answers


OK, here's the solution.

First, make sure you have "beautifulsoup4" installed: http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-beautiful-soup

Here is my code to get all the body and topic tags:

from bs4 import BeautifulSoup
html_doc= """
<article>
<topic>oil, gas</topic>
<body>body text</body>
</article>

<article>
<topic>food</topic>
<body>body text</body>
</article>

<article>
<topic>cars</topic>
<body>body text</body>
</article>
"""
soup = BeautifulSoup(html_doc)

# collect the text of every <body> tag and every <topic> tag
bodies = [a.get_text() for a in soup.find_all('body')]
topics = [a.get_text() for a in soup.find_all('topic')]
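
To tie this back to the question (and to question 2 about a root node), here is a minimal sketch that is not part of the original answer: it walks each <article> so every topic stays paired with its own body, and wraps the output in a single root element so the result is well-formed XML. The <articles> root name and the output filename are illustrative assumptions.

from bs4 import BeautifulSoup
from xml.etree import ElementTree

soup = BeautifulSoup(html_doc)

# a single root element makes the output well-formed XML
root = ElementTree.Element("articles")  # "articles" is a made-up name
for article in soup.find_all('article'):
    topic = article.find('topic')
    body = article.find('body')
    if topic is None or body is None:
        continue  # skip articles missing either tag
    new_article = ElementTree.SubElement(root, "article")
    ElementTree.SubElement(new_article, "topic").text = topic.get_text()
    ElementTree.SubElement(new_article, "body").text = body.get_text()

ElementTree.ElementTree(root).write("articles_clean.xml")  # filename is illustrative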
Answered 2012-05-09T15:49:14.227

Another way to remove empty XML or HTML tags is to use a recursive function that searches for empty tags and removes them with .extract(). That way you don't have to manually list which tags you want to keep, and it also cleans up nested empty tags.

from bs4 import BeautifulSoup
import re

nonwhite = re.compile(r'\S+', re.U)

html_doc1 = """
<article>
<uselesstag2>
<uselesstag1>
</uselesstag1>
</uselesstag2>
<topic>oil, gas</topic>
<body>body text</body>
</article>

<p>21.09.2009</p>
<p> </p>
<p1><img src="http://www.www.com/"></p1>
<p></p>

<!--- This article is about cars--->
<article>
<topic>cars</topic>
<body>body text</body>
</article>
"""

def nothing_inside(thing):
    # select only tags to examine, leave comments/strings alone
    try:
        # keep <img> tags that have a non-empty src
        if thing.name == 'img' and thing['src'] != '':
            return False
        # check whether any child contains non-whitespace text
        for item in thing.contents:
            if nonwhite.match(item):
                return False
        return True
    except:
        # anything that raises here (comments, bare strings, tags that
        # still contain child tags) is treated as non-empty for this pass
        return False

def scrub(thing):
    # loop as long as an empty tag exists, so tags that only become
    # empty after their children are removed get cleaned up as well
    while thing.find_all(nothing_inside, recursive=True) != []:
        for emptytag in thing.find_all(nothing_inside, recursive=True):
            emptytag.extract()
            scrub(thing)
    return thing

soup = BeautifulSoup(html_doc1)
print(scrub(soup))

Result:

<article>

<topic>oil, gas</topic>
<body>body text</body>
</article>
<p>21.09.2009</p>

<p1><img src="http://www.www.com/"/></p1>

<!--- This article is about cars--->
<article>
<topic>cars</topic>
<body>body text</body>
</article>
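
Not in the original answer, but as a follow-up sketch: once the empty tags are scrubbed, the topic/body pairs can be collected per article in the same way as in the first answer.

clean = scrub(BeautifulSoup(html_doc1))
# one (topic text, body text) tuple per remaining <article>
pairs = [(a.find('topic').get_text(), a.find('body').get_text())
         for a in clean.find_all('article')]
print(pairs)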
Answered 2012-08-16T16:51:45.373