
I am looking for a Scrapy Spider that, instead of fetching URLs and crawling them, takes a WARC file (ideally from S3) as input and sends the contents to the parse method.

I actually need to skip the whole download phase, meaning that from the start_requests method I want to return a Response that is then sent to the parse method.

This is what I have so far:

import gzip

import warc
from scrapy import Spider
from scrapy.http import Response


class WarcSpider(Spider):

    name = "warc_spider"

    def start_requests(self):
        f = warc.WARCFile(fileobj=gzip.open("file.warc.gz"))
        for record in f:
            if record.type == "response":
                payload = record.payload.read()
                headers, body = payload.split('\r\n\r\n', 1)
                url = record['WARC-Target-URI']
                yield Response(url=url, status=200, body=body, headers=headers)


    def parse(self, response):
        #code that creates item
        pass

Any ideas on how to do that with Scrapy?


1 Answer


What you want to do is something like this:

import gzip

import warc
from scrapy import Spider
from scrapy.http import Request, Response


class DummyMdw(object):
    """Downloader middleware that short-circuits the download:
    instead of fetching the URL, it builds the Response from the
    WARC record carried in request.meta."""

    def process_request(self, request, spider):
        record = request.meta['record']
        payload = record.payload.read()
        headers, body = payload.split('\r\n\r\n', 1)
        url = record['WARC-Target-URI']
        return Response(url=url, status=200, body=body, headers=headers)


class WarcSpider(Spider):

    name = "warc_spider"

    # 'x' is the module where DummyMdw is defined; adjust the path as needed
    custom_settings = {
            'DOWNLOADER_MIDDLEWARES': {'x.DummyMdw': 1}
            }

    def start_requests(self):
        f = warc.WARCFile(fileobj=gzip.open("file.warc.gz"))
        for record in f:
            if record.type == "response":
                url = record['WARC-Target-URI']
                yield Request(url, callback=self.parse, meta={'record': record})


    def parse(self, response):
        #code that creates item
        pass
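One caveat with the middleware above: Scrapy's Response expects its headers argument to be a mapping of header names to values, while payload.split gives you the raw header block as a single string (status line included). A minimal sketch of turning that raw block into a dict (the function name split_http_payload is my own, not from the original answer):

    def split_http_payload(payload):
        """Split a raw HTTP response payload into (headers dict, body bytes).

        Drops the status line (e.g. b"HTTP/1.1 200 OK") and parses each
        "Name: value" line into a dict entry, which is the shape Scrapy's
        Response constructor expects for its headers argument.
        """
        head, body = payload.split(b"\r\n\r\n", 1)
        lines = head.split(b"\r\n")
        headers = {}
        for line in lines[1:]:  # skip the status line
            name, _, value = line.partition(b":")
            headers[name.strip()] = value.strip()
        return headers, body

With that in place, process_request would do headers, body = split_http_payload(payload) before constructing the Response.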
Answered 2014-11-27T20:21:43.750