
I managed to isolate this part of the html file:

    <div class="item_display_label"><strong>Name of Office: </strong></div>
    <div class="item_display_field">Dacotah</div>
    <div class="item_display_label"><strong>Federal Electoral District: </strong></div>
    <div class="item_display_field">
    St. Boniface
    (Manitoba)
    </div>
    <div class="item_display_label"><strong>Dates: </strong></div>
    <div class="item_display_field">
    <table border="0" cellpadding="0" cellspacing="0" class="data_table">
    <tr>
    <th class="th_heading" valign="top">Establishment Re-openings</th>
    <th class="th_heading" valign="top">Closings</th>
    </tr>
    <tr>
    <td class="index" valign="top">1903-05-01</td>
    <td class="index" valign="top">1970-05-01</td>
    </tr>
    </table>
    </div>
    <div class="item_display_label"><strong>Postmaster Information: </strong></div>
    <div class="item_display_label"><strong>Additional Information: </strong></div>
    <div class="item_display_field">
    Closed due to Rural Mail Delivery service via Headingley, R.R. 1<br/><br/>
    Sec. 25, Twp. 10, R. 2, WPM - 1903-05-01<br/><br/>
    Sec. 34, Twp. 10, R. 2, WPM<br/><br/>
    SW 1/4 Sec. 35, Twp. 10, R. 2, WPM<br/><br/>
    </div>

by using:

    from bs4 import BeautifulSoup
    soup = BeautifulSoup(open("post2.html"))
    # dump every label/field div to post2.txt for inspection
    with open("post2.txt", "wb") as file:
        for link in soup.find_all('div', ['item_display_label', 'item_display_field']):
            file.write(str(link) + "\n")
            print link

I need to use Beautiful Soup to export the bold fields into a csv. I have tried different approaches, but with no result. The columns of the csv file should be: "Name of Office", "Federal Electoral District", "Opening", "Closing", "Info". Any clue? Thanks a lot.

Edit:

I am trying to write the csv with this:

    from bs4 import BeautifulSoup
    import csv
    soup = BeautifulSoup(open("post2.html"))
    f = csv.writer(open("post2.csv", "w"))
    f.writerow(["Name", "District", "Open", "Close", "Info"])
    for link in soup.find_all('div', ['item_display_label', 'item_display_field'].__contains__):
        print link.text.strip()
        Name = link.contents[0]
        District = link.contents[1]
        Open = link.contents[2]
        Close = link.contents[3]
        Info = link.contents[4]
        f.writerow([Name, District, Open, Close, Info])

But to start with, I only get the last field (Info).


1 Answer


Try the following:

    from bs4 import BeautifulSoup
    soup = BeautifulSoup(open("post2.html"))
    #for link in soup.find_all('div', lambda cls: cls in ['item_display_label', 'item_display_field']):
    for link in soup.find_all('div', ['item_display_label', 'item_display_field'].__contains__):
        print link.text.strip()
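To go from those divs to the csv described in the question, one option is to pair each label div with the field div that immediately follows it and pull the two dates out of the table. A minimal sketch building on the selection above, assuming the layout shown in the question: every `item_display_label` (except "Postmaster Information:", which has none) is immediately followed by its `item_display_field`, and the dates table has a single data row whose `td class="index"` cells hold the opening and closing dates:

    from bs4 import BeautifulSoup
    import csv

    soup = BeautifulSoup(open("post2.html"))

    # map each label text ("Name of Office:", ...) to the text of the field div
    # that immediately follows it; labels without a field div are skipped
    record = {}
    for label in soup.find_all('div', 'item_display_label'):
        field = label.find_next_sibling('div')
        if field is not None and 'item_display_field' in field.get('class', []):
            record[label.get_text(strip=True)] = field.get_text(' ', strip=True)

    # the opening and closing dates sit in the td class="index" cells of the table
    dates = [td.get_text(strip=True) for td in soup.find_all('td', 'index')]
    opened = dates[0] if len(dates) > 0 else ''
    closed = dates[1] if len(dates) > 1 else ''

    f = csv.writer(open("post2.csv", "w"))
    f.writerow(["Name", "District", "Open", "Close", "Info"])
    f.writerow([record.get("Name of Office:", ""),
                record.get("Federal Electoral District:", ""),
                opened,
                closed,
                record.get("Additional Information:", "")])

Pairing with `find_next_sibling` keeps the columns stable even when a label has no field div, which is why the missing "Postmaster Information:" value does not shift the other fields.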

Using lxml with xpath:

    import lxml.html

    tree = lxml.html.parse('post2.html')
    for x in tree.xpath('.//div[@class="item_display_label"]//text()|.//div[@class="item_display_field"]//text()'):
        print x.strip()
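The `|` in the XPath expression unions the two node sets, so the text nodes come back in document order (label, field, label, field, ...); from there you can pair each label with the values that follow it, much like the BeautifulSoup sketch above, to build the csv row.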
answered 2013-09-14T05:12:07.770