
I'm working on a project where I'm trying to scrape data from this Wikipedia page. I want the column with the years (which happens to be a <th>) and the fourth column, "Walt Disney Parks and Resorts".

The code:

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
bsObj = BeautifulSoup(html, "html.parser")

t = open("scrape_project.txt", "w")

year = bsObj.find("table", {"class":"wikitable"}).tr.next_sibling.next_sibling.th
money = bsObj.find("table", {"class":"wikitable"}).td.next_sibling.next_sibling.next_sibling.next_sibling

for year_data in year:
    year.sup.clear()
    print(year.get_text())

for revenue in money:
    print(money.get_text())


t.close()

Now, when I run it through the terminal, all that prints is 1991 (twice) and 2,794. I need it to print all the years and the associated revenues for Walt Disney Parks and Resorts. I'm also trying to get it to write to the file "scrape_project.txt".

Any help would be appreciated!


2 Answers

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
soup = BeautifulSoup(html, "html.parser")

t = open("scrape_project.txt", "w")

table = soup.find('table', {"class": "wikitable"})

# get all rows, skipping the header row
data = table.select("tr")[1:]

# the year is in the <th> cell that has a scope attribute;
# [:4] slices off the [Rev n] footnote markers
years = [td.select("th[scope]")[0].text[:4] for td in data]

# the Walt Disney Parks and Resorts revenue is the third <td> in each row
rec = [td.select("td")[2].text for td in data]

from pprint import pprint as pp

pp(years)
pp(rec)

That gives you the data:

['1991',
 '1992',
 '1993',
 '1994',
 '1995',
 '1996',
 '1997',
 '1998',
 '1999',
 '2000',
 '2001',
 '2002',
 '2003',
 '2004',
 '2005',
 '2006',
 '2007',
 '2008',
 '2009',
 '2010',
 '2011',
 '2012',
 '2013',
 '2014']
['2,794.0',
 '3,306',
 '3,440.7',
 '3,463.6',
 '3,959.8',
 '4,142[Rev 3]',
 '5,014',
 '5,532',
 '6,106',
 '6,803',
 '6,009',
 '6,691',
 '6,412',
 '7,750',
 '9,023',
 '9,925',
 '10,626',
 '11,504',
 '10,667',
 '10,761',
 '11,797',
 '12,920',
 '14,087',
 '15,099']

I sliced off the revision footnotes with text[:4]; if you want to keep that information, don't slice. If you also want to strip the revision marker from the money, i.e. the Rev 3 in '4,142[Rev 3]', you can use a regex:

import re

m = re.compile(r"\d+,\d+")

rec = [m.search(td.select("td")[2].text).group() for td in data]

Which gives you:

['2,794',
 '3,306',
 '3,440',
 '3,463',
 '3,959',
 '4,142',
 '5,014',
 '5,532',
 '6,106',
 '6,803',
 '6,009',
 '6,691',
 '6,412',
 '7,750',
 '9,023',
 '9,925',
 '10,626',
 '11,504',
 '10,667',
 '10,761',
 '11,797',
 '12,920',
 '14,087',
 '15,099']
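The question also asked to write the results to "scrape_project.txt", which this answer doesn't cover. A minimal sketch, assuming the years and rec lists built above (stubbed here with the first few values so it runs standalone):

```python
# Pair each year with its Parks and Resorts revenue and write one
# tab-separated line per row to the output file.
years = ["1991", "1992", "1993"]   # stand-in for the scraped years list
rec = ["2,794.0", "3,306", "3,440.7"]  # stand-in for the scraped revenue list

with open("scrape_project.txt", "w") as t:
    for year, revenue in zip(years, rec):
        t.write("{}\t{}\n".format(year, revenue))
```

Using `with` also saves you from having to call t.close() yourself.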
answered 2016-03-19T15:20:59.830

There must be a cleaner way to get in there, but this will do.

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
soup = BeautifulSoup(html, "html.parser")

table = soup.find("table", {"class":"wikitable"})

rows = table.findAll("th", {"scope": "row"})

for each in rows:
    string = each.text[:4] + ", $" + \
        each.next_sibling.next_sibling.next_sibling.next_sibling.next_sibling.next_sibling.text
    print(string)
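One cleaner alternative (my suggestion, not from the answer above): instead of chaining next_sibling six times, walk up from each <th scope="row"> to its parent <tr> and index its <td> cells directly. A sketch using a small stand-in snippet of the wikitable markup so it runs standalone:

```python
from bs4 import BeautifulSoup

# Stand-in for the Wikipedia revenue table; in the real script this
# comes from the parsed page.
html = """
<table class="wikitable">
  <tr><th>Year</th><th>Studio</th><th>Media</th><th>Parks</th></tr>
  <tr><th scope="row">1991</th><td>2,593.0</td><td>0</td><td>2,794.0</td></tr>
  <tr><th scope="row">1992</th><td>3,115</td><td>0</td><td>3,306</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

lines = []
for th in soup.find("table", {"class": "wikitable"}).find_all("th", {"scope": "row"}):
    row = th.find_parent("tr")            # the <tr> holding this year cell
    revenue = row.find_all("td")[2].text  # third <td> = Parks and Resorts
    lines.append(th.text[:4] + ", $" + revenue)

print("\n".join(lines))
```

This way adding or removing a column only means changing one index, rather than recounting a chain of siblings.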
answered 2016-03-19T03:09:48.870