
I have a Scrapy project that uses custom middleware and a custom pipeline to check and store entries in a Postgres DB. The middleware looks a bit like this:

class ExistingLinkCheckMiddleware(object):

    def __init__(self):
        # ... open connection to database
        pass

    def process_request(self, request, spider):
        # ... before each request, check in the DB
        # that the page hasn't been scraped before
        pass

The pipeline looks similar:

class MachinelearningPipeline(object):

    def __init__(self):
        # ... open connection to database
        pass

    def process_item(self, item, spider):
        # ... save the item to the database
        pass

It works fine, but I can't find a way to cleanly close these database connections when the spider finishes, which irks me.

Does anyone know how to do that?


1 Answer


I think the best way to do it is to use Scrapy's spider_closed signal, for example:

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

class ExistingLinkCheckMiddleware(object):

    def __init__(self):
        # open connection to database

        # run self.spider_closed when the spider_closed signal fires
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider, reason):
        # close db connection
        pass

    def process_request(self, request, spider):
        # before each request, check in the DB
        # that the page hasn't been scraped before
        pass

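The same signal works for the pipeline from the question. Here is a minimal sketch of how the pieces could fit together, assuming psycopg2 as the Postgres driver; the connection string is a placeholder, not something from the original answer:

import psycopg2

from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

class MachinelearningPipeline(object):

    def __init__(self):
        # assumption: psycopg2 with a placeholder DSN; use your real credentials
        self.conn = psycopg2.connect("dbname=scraping user=scraper")
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def process_item(self, item, spider):
        # ... save the item to the database, e.g. via self.conn.cursor()
        return item

    def spider_closed(self, spider, reason):
        # the spider is done: commit pending work and close cleanly
        self.conn.commit()
        self.conn.close()

Because the handler is bound to the instance, each pipeline closes exactly the connection it opened.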
See also the Scrapy signals documentation.

Hope this helps.
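As a side note: in later Scrapy releases scrapy.xlib.pydispatch was deprecated, and the documented way to hook up signals is a from_crawler class method using the crawler's signal manager. A rough sketch of the same middleware under that API:

from scrapy import signals

class ExistingLinkCheckMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        # register the handler through the crawler's SignalManager
        crawler.signals.connect(middleware.spider_closed,
                                signal=signals.spider_closed)
        return middleware

    def spider_closed(self, spider, reason):
        # close db connection
        pass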

answered May 23, 2013 at 10:30