I have a piece of code for testing Scrapy. My goal is to run Scrapy without invoking the scrapy command from the terminal, so that I can embed this code somewhere else.
The code is as follows:
from scrapy import Spider
from scrapy.selector import Selector
from scrapy.item import Item, Field
from scrapy.crawler import CrawlerProcess
import json


class JsonWriterPipeline(object):
    file = None

    def open_spider(self, spider):
        self.file = open('items.json', 'wb')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item


class StackItem(Item):
    title = Field()
    url = Field()


class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["stackoverflow.com"]
    start_urls = ["http://stackoverflow.com/questions?pagesize=50&sort=newest"]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="summary"]/h3')
        for question in questions:
            item = StackItem()
            item['title'] = question.xpath('a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath('a[@class="question-hyperlink"]/@href').extract()[0]
            yield item


if __name__ == '__main__':
    settings = dict()
    settings['USER_AGENT'] = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    settings['ITEM_PIPELINES'] = {'JsonWriterPipeline': 1}

    process = CrawlerProcess(settings=settings)

    spider = StackSpider()
    process.crawl(spider)
    process.start()
As you can see, the code is self-contained and I override two settings: USER_AGENT and ITEM_PIPELINES. However, when I set a breakpoint inside the JsonWriterPipeline class, I can see that the rest of the code runs but the breakpoint is never hit, so the custom pipeline is not being used.
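To make it concrete, this is roughly how I am checking whether the pipeline runs (the print is purely illustrative and not part of the script above); neither the print nor a debugger breakpoint inside JsonWriterPipeline.process_item is ever reached:

# Purely illustrative: how I am verifying that the pipeline is skipped.
def process_item(self, item, spider):
    print("process_item called")  # never printed when the script runs
    line = json.dumps(dict(item)) + "\n"
    self.file.write(line)
    return item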
How can I fix this?