I am running into some odd behavior (as far as I can tell) when the callback of a SplashRequest is executed by scrapyd.
Scrapy source code
from scrapy.spiders import Spider
from scrapy import Request
import scrapy
from scrapy_splash import SplashRequest

class SiteSaveSpider(Spider):
    def __init__(self, domain='', *args, **kwargs):
        super(SiteSaveSpider, self).__init__(*args, **kwargs)
        self.start_urls = [domain]
        self.allowed_domains = [domain]

    name = "sitesavespider"

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, callback=self.parse, args={'wait': 0.5})
            print "TEST after yield"

    def parse(self, response):
        print "TEST in parse"
        with open('/some_path/test.html', 'w') as f:
            for line in response.body:
                f.write(line)
Log when run with the Scrapy crawler directly
The parse callback is executed when the spider is started with:
scrapy crawl sitesavespider -a domain="https://www.facebook.com"
...
2017-01-29 14:12:37 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:12:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
TEST after yield
2017-01-29 14:12:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
TEST in parse
2017-01-29 14:12:55 [scrapy.core.engine] INFO: Closing spider (finished)
...
scrapyd log
When the same spider is started through scrapyd, it returns right after the SplashRequest:
>>>scrapyd.schedule("feedbot","sitesavespider",domain="https://www.facebook.com")
u'f2f4e090e62d11e69da1342387f8a0c9'
cat f2f4e090e62d11e69da1342387f8a0c9.log
...
2017-01-29 14:19:34 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:19:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-29 14:19:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
2017-01-29 14:19:58 [scrapy.core.engine] INFO: Closing spider (finished)
...
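One thing I noticed while comparing the two logs: the "TEST" lines are emitted with print, which writes to stdout, while scrapyd's per-job .log file (as far as I understand) only receives what goes through Scrapy's logging system. So the parse callback may well be running under scrapyd without its print output ever reaching the log. A minimal stdlib sketch of that difference, where log_stream stands in for the job's .log file (my assumption, not confirmed against scrapyd internals):

```python
import io
import logging

# Scrapy spiders expose self.logger, a standard logging.Logger. Messages
# sent through it reach whatever handlers the process configures -- such as
# the file handler writing the scrapyd job log. print(), by contrast,
# writes to stdout, which that handler never sees.
log_stream = io.StringIO()                       # stands in for the job .log file
handler = logging.StreamHandler(log_stream)
logger = logging.getLogger("sitesavespider")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("TEST in parse")                     # lands in the "log file"
print("TEST in parse via stdout")                # does not

assert "TEST in parse" in log_stream.getvalue()
assert "stdout" not in log_stream.getvalue()
```

If that is the explanation, replacing the print statements with self.logger.info("TEST in parse") in the spider should make the lines show up in the scrapyd log as well.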
Does anyone know about this problem, or can anyone help me find my mistake?