
I've been pulling my hair out over this all day. Basically, I can't extract information from tags such as:

<REUTERS LEWISSPLIT="TRAIN">

I can't get the value of LEWISSPLIT and store it in a list.

I have the following code:

import arff
from xml.etree import ElementTree
import re
from StringIO import StringIO

import BeautifulSoup
from BeautifulSoup import BeautifulSoup

totstring=""

with open('reut2-000.sgm', 'r') as inF:
    for line in inF:
        string=re.sub("[^0-9a-zA-Z<>/\s=!-\"\"]+","", line)
    totstring+=string

soup = BeautifulSoup(totstring)

bodies = list()
topics = list()
tags = list()

for a in soup.findAll("body"):
    bodies.append(a)


for b in soup.findAll("topics"):
    topics.append(b)

for item in soup.findAll('REUTERS'):
    tags.append(item['TOPICS'])



outputstring=""

for x in range(0,len(bodies)):
    if topics[x].text=="":
        continue
    outputstring=outputstring+"<TOPICS>"+topics[x].text+"</TOPICS>\n"+"<BODY>"+bodies[x].text+"</BODY>\n"

outfile=open("output.sgm","w")
outfile.write(outputstring)

outfile.close()

print tags[0]

file.close

which I use to parse some old Reuters XML that looks a bit like this:

<!DOCTYPE lewis SYSTEM "lewis.dtd">
<REUTERS TOPICS="YES" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="5544" NEWID="1">
<DATE>26-FEB-1987 15:01:01.79</DATE>
<TOPICS><D>cocoa</D></TOPICS>
<PLACES><D>el-salvador</D><D>usa</D><D>uruguay</D></PLACES>
<PEOPLE></PEOPLE>
<ORGS></ORGS>
<EXCHANGES></EXCHANGES>
<COMPANIES></COMPANIES>
<UNKNOWN> 
&#5;&#5;&#5;C T
&#22;&#22;&#1;f0704&#31;reute
u f BC-BAHIA-COCOA-REVIEW   02-26 0105</UNKNOWN>
<TEXT>&#2;
<TITLE>BAHIA COCOA REVIEW</TITLE>
<DATELINE>    SALVADOR, Feb 26 - </DATELINE><BODY>Showers continued throughout the week in
the Bahia cocoa zone, alleviating the drought since early
January and improving prospects for the coming temporao,
although normal humidity levels have not been restored,
Comissaria Smith said in its weekly review.
&#3;</BODY></TEXT>
</REUTERS>
<REUTERS TOPICS="NO" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="5545" NEWID="2">
<DATE>26-FEB-1987 15:02:20.00</DATE>
<TOPICS></TOPICS>
<PLACES><D>usa</D></PLACES>
<PEOPLE></PEOPLE>
<ORGS></ORGS>
<EXCHANGES></EXCHANGES>
<COMPANIES></COMPANIES>
<UNKNOWN> 
&#5;&#5;&#5;F Y
&#22;&#22;&#1;f0708&#31;reute
d f BC-STANDARD-OIL-&lt;SRD>-TO   02-26 0082</UNKNOWN>
<TEXT>&#2;
<TITLE>STANDARD OIL &lt;SRD> TO FORM FINANCIAL UNIT</TITLE>
<DATELINE>    CLEVELAND, Feb 26 - </DATELINE><BODY>Standard Oil Co and BP North America
Inc said they plan to form a venture to manage the money market
borrowing and investment activities of both companies.
    BP North America is a subsidiary of British Petroleum Co
Plc &lt;BP>, which also owns a 55 pct interest in Standard Oil.
    The venture will be called BP/Standard Financial Trading
and will be operated by Standard Oil under the oversight of a
joint management committee.
&#3;</BODY></TEXT>
</REUTERS>
<REUTERS TOPICS="NO" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="5546" NEWID="3">
<DATE>26-FEB-1987 15:03:27.51</DATE>
<TOPICS></TOPICS>
<PLACES><D>usa</D></PLACES>
<PEOPLE></PEOPLE>
<ORGS></ORGS>
<EXCHANGES></EXCHANGES>
<COMPANIES></COMPANIES>
<UNKNOWN> 
&#5;&#5;&#5;F A
&#22;&#22;&#1;f0714&#31;reute
d f BC-TEXAS-COMMERCE-BANCSH   02-26 0064</UNKNOWN>
<TEXT>&#2;
<TITLE>TEXAS COMMERCE BANCSHARES &lt;TCB> FILES PLAN</TITLE>
<DATELINE>    HOUSTON, Feb 26 - </DATELINE><BODY>Texas Commerce Bancshares Inc's Texas
Commerce Bank-Houston said it filed an application with the
Comptroller of the Currency in an effort to create the largest
banking network in Harris County.
    The bank said the network would link 31 banks having
13.5 billion dlrs in assets and 7.5 billion dlrs in deposits.

 Reuter
&#3;</BODY></TEXT>
</REUTERS>

I want to strip out the special characters, extract the contents of the body and topics tags, and build new XML from them, like:

<topic>oil</topic>
<body>asdsd</body>
<topic>grain</topic>
<body>asdsdds</body>

I then want to split this data according to the value of LEWISSPLIT.

So far I have managed to do all of this except the splitting on the value of LEWISSPLIT.

That is because I cannot extract the attribute value from the <REUTERS> tag. I have tried many different techniques from this site and the official documentation, but when I run:

for item in soup.findAll('REUTERS'):
    tags.append(item['LEWISSPLIT'])

print tags[0]

all I get back is [].

Why is it so hard to extract the value of the LEWISSPLIT attribute from the <REUTERS> tag?

Thanks very much for reading.

See also: extracting tag information using BeautifulSoup and Python.


1 Answer


Joel Cornett is right:

"reuters" and "lewissplit" should be lowercase, because BeautifulSoup parses the document as HTML and normalises tag and attribute names to lowercase. The correct syntax is:

for item in soup.findAll('reuters'):
    tags.append(item['lewissplit'])
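
Not part of the original answer, but as a rough follow-up sketch of what the question was ultimately after (untested, assuming the same Python 2 / BeautifulSoup 3 setup and cleaning regex as in the question), the lowercase names make the rest straightforward: group the topics/body pairs by the lewissplit attribute and write one output file per split. The output_*.sgm file names are just placeholders:

import re
from collections import defaultdict
from BeautifulSoup import BeautifulSoup

# strip stray control characters the same way the question does
with open('reut2-000.sgm', 'r') as inF:
    cleaned = re.sub("[^0-9a-zA-Z<>/\s=!-\"\"]+", "", inF.read())

soup = BeautifulSoup(cleaned)

# accumulate output text per lewissplit value (e.g. TRAIN, TEST)
split_output = defaultdict(str)

for reuters in soup.findAll('reuters'):      # lowercase tag name
    split = reuters['lewissplit']            # lowercase attribute name
    topics = reuters.find('topics')
    body = reuters.find('body')
    if topics is None or body is None or topics.text == "":
        continue
    split_output[split] += "<TOPICS>%s</TOPICS>\n<BODY>%s</BODY>\n" % (
        topics.text, body.text)

# one output file per split value
for split, text in split_output.items():
    with open("output_%s.sgm" % split, "w") as outfile:
        outfile.write(text)

Only the names are lowercased; the attribute values keep their original case, so with the sample data above this should produce something like output_TRAIN.sgm.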
answered 2012-05-10 at 11:42:33