Maxime Lorant's answer finally solved the problem I was having when building a spider inside my own script. It addresses the two issues I ran into:

1. It allows calling the spider twice in a row (in the simple example from the Scrapy tutorial this crashes, because you cannot start the twisted reactor twice).
2. It allows getting variables from the spider back into the script.

Only one thing: the example does not work with the versions I am using now (Scrapy 1.5.2 and Python 3.7).

After fiddling with the code a bit I got a working example that I would like to share. I also have a question, see below the script. It is a standalone script, so I have added a spider as well.
import logging
import multiprocessing as mp

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.signals import item_passed
from scrapy.utils.project import get_project_settings
from scrapy.xlib.pydispatch import dispatcher


class CrawlerWorker(mp.Process):
    """Run one crawl in a separate process, so every crawl gets a fresh
    twisted reactor, and pass the scraped items back through a queue."""
    name = "crawlerworker"

    def __init__(self, spider, result_queue):
        mp.Process.__init__(self)
        self.result_queue = result_queue
        self.items = list()
        self.spider = spider
        self.logger = logging.getLogger(self.name)

        self.settings = get_project_settings()
        self.logger.setLevel(logging.DEBUG)
        self.logger.debug("Create CrawlerProcess with settings {}".format(self.settings))
        self.crawler = CrawlerProcess(self.settings)

        # collect every scraped item via the item_passed signal
        dispatcher.connect(self._item_passed, item_passed)

    def _item_passed(self, item):
        self.logger.debug("Adding Item {} to {}".format(item, self.items))
        self.items.append(item)

    def run(self):
        self.logger.info("Start here with {}".format(self.spider.urls))
        self.crawler.crawl(self.spider, urls=self.spider.urls)
        self.crawler.start()
        self.crawler.stop()
        self.result_queue.put(self.items)


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, **kw):
        super(QuotesSpider, self).__init__(**kw)
        self.urls = kw.get("urls", [])

    def start_requests(self):
        if self.urls:
            for url in self.urls:
                yield scrapy.Request(url=url, callback=self.parse)
        else:
            self.log('Nothing to scrape. Please pass the urls')

    def parse(self, response):
        """ Count number of The's on the page """
        the_count = len(response.xpath("//body//text()").re(r"The\s"))
        self.log("found {} time 'The'".format(the_count))
        yield {response.url: the_count}


def report_items(message, item_list):
    print(message)
    if item_list:
        for cnt, item in enumerate(item_list):
            print("item {:2d}: {}".format(cnt, item))
    else:
        print("No items found")


url_list = [
    'http://quotes.toscrape.com/page/1/',
    'http://quotes.toscrape.com/page/2/',
    'http://quotes.toscrape.com/page/3/',
    'http://quotes.toscrape.com/page/4/',
]

# first crawl, in its own process
result_queue1 = mp.Queue()
crawler = CrawlerWorker(QuotesSpider(urls=url_list[:2]), result_queue1)
crawler.start()
# wait until we are done with the crawl
crawler.join()

# crawl again, in a second process
result_queue2 = mp.Queue()
crawler = CrawlerWorker(QuotesSpider(urls=url_list[2:]), result_queue2)
crawler.start()
crawler.join()

report_items("First result", result_queue1.get())
report_items("Second result", result_queue2.get())
As you can see, the code is almost identical; only a few imports changed because of changes in the Scrapy API.
One thing: I get a deprecation warning for the pydispatch import:
ScrapyDeprecationWarning: Importing from scrapy.xlib.pydispatch is deprecated and will no longer be supported in future Scrapy versions. If you just want to connect signals use the from_crawler class method, otherwise import pydispatch directly if needed. See: https://github.com/scrapy/scrapy/issues/1762
module = self._system_import(name, *args, **kwargs)
I found how to solve this here. However, I could not get it to work. Does anyone know how to apply the from_crawler class method to get rid of the deprecation warning?
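For reference, my reading of the from_crawler route the warning mentions looks roughly like the sketch below (based on the Scrapy signals documentation; item_passed appears to be just an old alias of item_scraped). What I do not see is how to get the items back into CrawlerWorker.items with this pattern, since the handler now lives on the spider instead of on the worker:

import scrapy
from scrapy import signals


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # Build the spider as usual, then connect the handler on the
        # crawler's SignalManager instead of using scrapy.xlib.pydispatch.
        spider = super(QuotesSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.item_scraped_handler,
                                signal=signals.item_scraped)
        return spider

    def item_scraped_handler(self, item, response, spider):
        # Called for every item that made it through the item pipelines.
        self.logger.debug("Scraped item {}".format(item))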