I'm working on a project that scrapes roughly 400,000 records from tables of 20 records each. Right now my script builds a complete list of URLs for the pages, then opens each page, finds the table with BeautifulSoup, and scrapes it row by row. As it scrapes each row, it writes that row to a CSV:
import csv
import re
from bs4 import BeautifulSoup

# get_soup() and date_format() are helpers defined elsewhere in the script;
# __base__ is a path prefix defined elsewhere.

def scrape_table(url):
    soup = get_soup(url)
    table = soup.find('table', {'id': 'BigTable'})
    for row in table.find_all('tr'):
        cells = row.find_all('td')
        if len(cells) > 0:
            # Split the comma-separated name in the first cell
            name_bits = cells[0].get_text().strip().split(',')
            first_name = name_bits[0].strip()
            last_name = name_bits[1].strip()
            species = cells[1].get_text().strip()
            # Collapse non-breaking spaces and whitespace runs in the sixth cell
            bunch = re.sub(u'[\xa0\xc2\s]+', ' ', str(cells[5]), flags=re.UNICODE).strip()
            bunch_strings = list(BeautifulSoup(bunch).td.strings)
            weight = bunch_strings[1].strip()
            bunch_match = re.match(r"dob:(.*) Mother: \$(.*)", bunch_strings[2].strip())
            dob = date_format(bunch_match.groups()[0].strip())
            mother = bunch_match.groups()[1].strip()
            row_of_data = {
                'first_name': first_name,
                'last_name': last_name,
                'species': species,
                'weight': weight,
                'dob': dob,
                'mother': mother
            }
            data_order = ['first_name', 'last_name', 'dob', 'mother', 'weight', 'species']
            csv_row(row_of_data, data_order, 'elephants')
        else:
            continue

def csv_row(data, fieldorder, filename, base=__base__):
    full_path = base + filename + '.csv'
    print "writing", full_path
    with open(full_path, 'a+') as csvfile:
        linewriter = csv.DictWriter(csvfile, fieldorder, delimiter='|',
                                    quotechar='"', quoting=csv.QUOTE_MINIMAL)
        linewriter.writerow(data)
I'm wondering whether it would be more efficient to write each page of results to the CSV at once, rather than writing every row individually. Or would that use more RAM and slow down the rest of my machine? Are there other ways to make this more efficient?
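To make the first question concrete, this is roughly the per-page version I have in mind (just a sketch: csv_rows is a hypothetical replacement for csv_row, and scrape_table would collect its row dicts into a list and return them instead of writing each one immediately):

import csv

def csv_rows(rows, fieldorder, filename, base=__base__):
    # Open the output file once per page and write all of that page's
    # rows (about 20 dicts) in a single writerows() call.
    full_path = base + filename + '.csv'
    with open(full_path, 'a+') as csvfile:
        linewriter = csv.DictWriter(csvfile, fieldorder, delimiter='|',
                                    quotechar='"', quoting=csv.QUOTE_MINIMAL)
        linewriter.writerows(rows)

# e.g. csv_rows(scrape_table(url), data_order, 'elephants')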