
I'm trying to scrape a website that returns its data via Javascript. The code I wrote with BeautifulSoup works fine, but at random points during the scrape I get the following error:

Traceback (most recent call last):
File "scraper.py", line 48, in <module>
accessible = accessible[0].contents[0]
IndexError: list index out of range

Sometimes I can scrape 4 urls, sometimes 15, but at some point the script eventually fails and gives me the error above. I can't find any pattern behind the failures, so I'm really at a loss - what am I doing wrong?

from bs4 import BeautifulSoup
import urllib
import urllib2
import jabba_webkit as jw
import csv
import string
import re
import time

countries = csv.reader(open("countries.csv", 'rb'), delimiter=",")
database = csv.writer(open("herdict_database.csv", 'w'), delimiter=',')

basepage = "https://www.herdict.org/explore/"
session_id = "indepth;jsessionid=C1D2073B637EBAE4DE36185564156382"
ccode = "#fc=IN"
end_date = "&fed=12/31/"
start_date = "&fsd=01/01/"

year_range = range(2009, 2011)
years = [str(year) for year in year_range]

def get_number(var):
    number = re.findall("(\d+)", var)

    if len(number) > 1:
        thing = number[0] + number[1]
    else:
        thing = number[0]

    return thing

def create_link(basepage, session_id, ccode, end_date, start_date, year):
    link = basepage + session_id + ccode + end_date + year + start_date + year
    return link



for ccode, name in countries:
    for year in years:
        link = create_link(basepage, session_id, ccode, end_date, start_date, year)
        print link
        html = jw.get_page(link)
        soup = BeautifulSoup(html, "lxml")

        accessible = soup.find_all("em", class_="accessible")
        inaccessible = soup.find_all("em", class_="inaccessible")

        accessible = accessible[0].contents[0]
        inaccessible = inaccessible[0].contents[0]

        acc_num = get_number(accessible)
        inacc_num = get_number(inaccessible)

        print acc_num
        print inacc_num
        database.writerow([name]+[year]+[acc_num]+[inacc_num])

        time.sleep(2)

2 Answers


You need to add error handling to your code. When scraping lots of websites, some of them will be malformed or broken in some way, and when that happens you end up trying to operate on empty objects.

Go through the code, find every place where you assume the page is well-formed, and check for errors there.

For this specific case, I would do this:

if not inaccessible or not accessible:
    # malformed page
    continue
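
Folded into your loop, that could look something like this (a rough sketch based on the code in your question; I'm assuming jw.get_page raises an exception on fetch failures, which may not match your jabba_webkit module):

for ccode, name in countries:
    for year in years:
        link = create_link(basepage, session_id, ccode, end_date, start_date, year)
        try:
            html = jw.get_page(link)  # fetching/rendering the page can fail too
        except Exception as e:
            print 'Failed to fetch %s: %s' % (link, e)
            continue

        soup = BeautifulSoup(html, "lxml")
        accessible = soup.find_all("em", class_="accessible")
        inaccessible = soup.find_all("em", class_="inaccessible")

        if not inaccessible or not accessible:
            # malformed page - skip it instead of crashing
            print 'Skipping malformed page: %s' % link
            continue

        acc_num = get_number(accessible[0].contents[0])
        inacc_num = get_number(inaccessible[0].contents[0])
        database.writerow([name, year, acc_num, inacc_num])
        time.sleep(2)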
answered Jan 24, 2013 at 20:14

soup.find_all("em", class_="accessible") can return an empty list. You could try:

if accessible:
    accessible = accessible[0].contents[0]

Or, more generally:

if accessible and inaccessible:
    accessible = accessible[0].contents[0]
    inaccessible = inaccessible[0].contents[0]
else:
    print 'Something went wrong!'
    continue
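
If you end up doing this check in several places, you could also pull it into a small helper (first_text is just a name I made up for illustration):

def first_text(elements):
    # return the first tag's first child, or None when find_all found nothing
    if elements:
        return elements[0].contents[0]
    return None

accessible = first_text(soup.find_all("em", class_="accessible"))
inaccessible = first_text(soup.find_all("em", class_="inaccessible"))
if accessible is None or inaccessible is None:
    print 'Something went wrong!'
    continue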
answered Jan 24, 2013 at 20:11