Is there a way to speed up a web crawler by having multiple computers work through the URL list together? For example, computer A takes URLs 1-500, computer B takes URLs 501-1000, and so on. I'm looking for a way to build the fastest crawler possible using resources an ordinary person has access to.
I'm already making concurrent requests with the grequests module, which is gevent combined with requests.
The scraping doesn't need to run continuously; it runs at a set time each morning (6 AM) and is done shortly after it starts. I'm looking for something fast and on schedule.
Also, I'm looking at retail store URLs (i.e. Target, Best Buy, Newegg, etc.) and using them to check which items are available that day.
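To make the idea concrete, here is a rough sketch of how I imagine splitting the list, assuming each machine is launched with its own machine_index and a shared machine_count (both hypothetical parameters, not part of my current script):

import math

# Hypothetical sketch: each machine crawls one contiguous slice of the full URL list.
# machine_index and machine_count are assumed to be passed in per machine
# (command-line argument, config file, etc.) -- they are not in my script yet.
def slice_for_machine(url_list, machine_index, machine_count):
    """Return the chunk of url_list this machine is responsible for."""
    chunk_size = math.ceil(len(url_list) / machine_count)
    start = machine_index * chunk_size
    return url_list[start:start + chunk_size]

if __name__ == '__main__':
    # Placeholder URLs just for illustration.
    url_list = ['https://www.newegg.com/Product/Product.aspx?Item={}'.format(n) for n in range(1000)]
    # With 1000 URLs and 2 machines, machine 0 gets items 0-499 and machine 1 gets items 500-999.
    my_urls = slice_for_machine(url_list, machine_index=0, machine_count=2)
    print(len(my_urls))  # 500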
Here is the snippet that fetches these URLs in the script I'm trying to put together:
import datetime
import time

import grequests

thread_number = 20
# Product number list is a list of product numbers, too big for me to include the full list. Here are like three:
product_number_list = ['N82E16820232476', 'N82E16820233852', 'N82E16820313777']
# nnn throttles the progress prints; max(1, ...) avoids a modulo-by-zero when the list has fewer than 100 items.
nnn = max(1, int(len(product_number_list) / 100))
float_nnn = len(product_number_list) / 100
base_url = 'https://www.newegg.com/Product/Product.aspx?Item={}'

# The lines below create a list of urls, one per product number.
url_list = []
for number in product_number_list:
    url_list.append(base_url.format(number))

results = []
appended_number = 0
# Fetch the urls in batches of thread_number, retrying each batch up to ten times.
for x in range(0, len(product_number_list), thread_number):
    attempts = 0
    while attempts < 10:
        try:
            rs = (grequests.get(url, stream=False) for url in url_list[x:x + thread_number])
            reqs = grequests.map(rs, stream=False, size=20)
            append = 'yes'
            for i in reqs:
                if i.status_code != 200:
                    append = 'no'
                    print('Bad Status Code. Nothing Appended.')
                    attempts += 1
                    break
            if append == 'yes':
                appended_number += 1
                results.extend(reqs)
                break
        except Exception:
            print('Something went Wrong. Try Section Failed.')
            attempts += 1
            time.sleep(5)
    if appended_number % nnn == 0:
        now = datetime.datetime.today()
        print(str(int(20 * appended_number / float_nnn)) + '% of the way there at: ' + str(now.strftime("%I:%M:%S %p")))
    if attempts == 10:
        print('Failed ten times to get urls.')
        time.sleep(3600)

if len(results) != len(url_list):
    print('Results count is off. len(results) == "' + str(len(results)) + '". len(url_list) == "' + str(len(url_list)) + '".')
This isn't my code; it comes from these two links:
Using grequests to make thousands of get requests to sourceforge, get "Max retries exceeded with url"