I have a Scrapy project that loads its item pipelines but never passes items to them. Any help is appreciated.
A stripped-down version of the spider:
#imports

class MySpider(CrawlSpider):
    #RULES AND STUFF
    def parse_item(self, response):
        '''Takes HTML response and turns it into an item ready for database. I hope.
        '''
        #A LOT OF CODE
        return item
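The real rules are elided above; a generic sketch of how such rules get wired to parse_item (the domain and URL pattern here are placeholders, not my actual values) looks like this:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['example.com']         # placeholder
    start_urls = ['http://www.example.com/']  # placeholder

    rules = (
        # Follow matching links and hand each response to parse_item.
        Rule(SgmlLinkExtractor(allow=r'/items/'), callback='parse_item'),
    )

The docs warn that a CrawlSpider callback must not be named parse, since CrawlSpider uses parse internally; mine is parse_item, so that shouldn't be the issue.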
Printing out the item at this point produces the expected result, and settings.py is pretty simple:
ITEM_PIPELINES = [
    'mySpider.pipelines.MySpiderPipeline',
    'mySpider.pipelines.PipeCleaner',
    'mySpider.pipelines.DBWriter',
]
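(Note for anyone reading later: current Scrapy releases expect ITEM_PIPELINES to be a dict mapping each pipeline path to an integer order in the 0-1000 range, with lower numbers running first; the list form above was the convention at the time. On such a version the equivalent would be:)

ITEM_PIPELINES = {
    'mySpider.pipelines.MySpiderPipeline': 100,
    'mySpider.pipelines.PipeCleaner': 200,
    'mySpider.pipelines.DBWriter': 300,
}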
And the pipelines themselves look correct (imports omitted):
class MySpiderPipeline(object):
    def process_item(self, item, spider):
        print 'PIPELINE: got ', item['name']
        return item
class DBWriter(object):
    """Writes each item to a DB. I hope.
    """
    def __init__(self):
        # Twisted's async DB layer; connection parameters come from the Scrapy settings.
        self.dbpool = adbapi.ConnectionPool(
            'MySQLdb',
            host=settings['HOST'],
            port=int(settings['PORT']),
            user=settings['USER'],
            passwd=settings['PASS'],
            db=settings['BASE'],
            cursorclass=MySQLdb.cursors.DictCursor,
            charset='utf8',
            use_unicode=True,
        )
        print('init DBWriter')

    def process_item(self, item, spider):
        print 'DBWriter process_item'
        # Run the insert on a pool thread; errors surface through the errback.
        query = self.dbpool.runInteraction(self._insert, item)
        query.addErrback(self.handle_error)
        return item

    def _insert(self, tx, item):
        print 'DBWriter _insert'
        # A LOT OF UNRELATED CODE HERE
        return item
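(handle_error is referenced in process_item but omitted above; a minimal sketch of such an errback, assuming it only needs to report the failure, would be:)

    def handle_error(self, failure):
        # failure is a twisted.python.failure.Failure from the pool interaction.
        failure.printTraceback()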
class PipeCleaner(object):
    def __init__(self):
        print 'Cleaning these pipes.'

    def process_item(self, item, spider):
        print item['name'], ' is cleeeeaaaaannn!!'
        return item
When I run the spider, I get this output at startup:
Cleaning these pipes.
init DBWriter
2012-10-23 15:30:04-0400 [scrapy] DEBUG: Enabled item pipelines: MySpiderPipeline, PipeCleaner, DBWriter
Unlike the __init__ methods, which print to the screen when the crawler starts, the process_item methods never print (or process) anything. I'm praying I've forgotten something really simple.
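One thing worth checking, assuming the installed Scrapy exposes the item_scraped signal and the old pydispatch wiring: hook the signal in the spider to see whether items ever reach the engine at all (the handler name here is made up):

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

class MySpider(CrawlSpider):
    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # Fires once for every item the engine receives from a spider callback.
        dispatcher.connect(self._item_seen, signal=signals.item_scraped)

    def _item_seen(self, item):
        print 'engine received item:', item['name']

If that never prints, the items are not making it out of parse_item; if it does print, the problem sits between the engine and the pipeline manager.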