
I have a script called algorithm.py, and I would like to be able to call Scrapy spiders while it runs. The file structure is:

algorithm.py
MySpiders/

where MySpiders is a folder containing several Scrapy projects. I would like to create methods perform_spider1(), perform_spider2(), ... that I can call from algorithm.py.

How do I construct such a method?

I have managed to call one spider using the following code, but it is not wrapped in a method and it only works for a single spider. I'm a beginner in need of help!

import sys, os.path
sys.path.append('path to spider1/spider1')
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher
from spider1.spiders.spider1_spider import Spider1Spider

def stop_reactor():
    reactor.stop()

# Stop the reactor once the spider has finished
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

spider = Spider1Spider()
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here
log.msg('Reactor stopped.')

2 Answers


Just loop over your spiders and set each one up by calling configure, crawl and start, and only then call log.start() and reactor.run(). Scrapy will run the multiple spiders in the same process.

For more information, see the documentation and this thread.

Also, consider running your spiders via scrapyd.

Hope that helps.

Answered on 2013-06-08T11:04:10.110

Following alecxe's good advice, here is a possible solution.

import sys, os.path
sys.path.append('/path/ra_list/')
sys.path.append('/path/ra_event/')
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher
from ra_list.spiders.ra_list_spider import RaListSpider
from ra_event.spiders.ra_event_spider import RaEventSpider

spider_count = 0
number_of_spiders = 2

def stop_reactor_after_all_spiders():
    # Count closed spiders; stop the reactor only when every spider is done
    global spider_count
    spider_count += 1
    if spider_count == number_of_spiders:
        reactor.stop()


dispatcher.connect(stop_reactor_after_all_spiders, signal=signals.spider_closed)

def crawl_resident_advisor():

    global spider_count
    spider_count = 0

    # Set up one crawler per spider; both run in the same process
    crawler = Crawler(Settings())
    crawler.configure()
    crawler.crawl(RaListSpider())
    crawler.start()

    crawler = Crawler(Settings())
    crawler.configure()
    crawler.crawl(RaEventSpider())
    crawler.start()

    log.start()
    log.msg('Running in reactor...')
    reactor.run()  # the script will block here
    log.msg('Reactor stopped.')
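The shutdown logic here is just a counter attached to the spider_closed signal: each finished spider increments it, and the reactor is stopped once the counter reaches the number of spiders. A minimal, Scrapy-free sketch of that pattern (all names are hypothetical stand-ins, and a flag stands in for reactor.stop()):

```python
# Hypothetical stand-in for a signal dispatcher such as pydispatch.
class SignalBus:
    def __init__(self):
        self.handlers = []

    def connect(self, handler):
        self.handlers.append(handler)

    def send(self):
        # Notify every connected handler, like signals.spider_closed firing.
        for handler in self.handlers:
            handler()

spider_closed = SignalBus()
number_of_spiders = 2
state = {"closed": 0, "reactor_stopped": False}

def stop_after_all_spiders():
    # Increment the closed-spider counter; "stop the reactor" only
    # once every spider has reported in.
    state["closed"] += 1
    if state["closed"] == number_of_spiders:
        state["reactor_stopped"] = True  # stands in for reactor.stop()

spider_closed.connect(stop_after_all_spiders)

spider_closed.send()  # first spider finishes: counter is 1, keep running
spider_closed.send()  # second spider finishes: counter is 2, stop
```

The key point is that the handler must not stop the reactor on the first spider_closed signal, otherwise the remaining spiders would be killed mid-crawl.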
Answered on 2013-06-08T20:52:14.117