
I'm new to scrapy. I'm writing a spider designed to check a long list of URLs for their server status codes and, where appropriate, which URLs they are redirected to. Importantly, if there is a chain of redirects, I need to know the status code and URL at each hop. I'm using response.meta['redirect_urls'] to capture the URLs, but am unsure how to capture the status codes - there doesn't seem to be a response meta key for that.

I realise I may need to write some custom middleware to expose these values, but I'm not quite clear how to log the status code for every hop, nor how to access those values from the spider. I've looked around but can't find any examples of anyone doing this. If anyone can point me in the right direction it would be much appreciated.

For example:

    items = []
    item = RedirectItem()
    item['url'] = response.url
    item['redirected_urls'] = response.meta['redirect_urls']     
    item['status_codes'] = #????
    items.append(item)

EDIT - Based on feedback from warawauk and some really proactive help from the guys on the IRC channel (freenode #scrappy), I've managed to do this. I believe it's a little hacky, so any comments on how to improve it are welcome:

(1) Disable the default middleware in your settings and add your own:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': None,
    'myproject.middlewares.CustomRedirectMiddleware': 100,
}

(2) Create your CustomRedirectMiddleware in your middlewares.py. It inherits from the main redirect middleware class and captures the redirects:

from urlparse import urljoin  # Python 2 stdlib (Scrapy 0.14 era)

from scrapy.contrib.downloadermiddleware.redirect import RedirectMiddleware
from scrapy.http import HtmlResponse
from scrapy.utils.response import get_meta_refresh


class CustomRedirectMiddleware(RedirectMiddleware):
    """Handle redirection of requests based on response status and meta-refresh html tag"""

    def process_response(self, request, response, spider):
        # Record this hop's status code before any redirect handling
        request.meta.setdefault('redirect_status', []).append(response.status)
        if 'dont_redirect' in request.meta:
            return response
        if request.method.upper() == 'HEAD':
            if response.status in [301, 302, 303, 307] and 'Location' in response.headers:
                redirected_url = urljoin(request.url, response.headers['location'])
                redirected = request.replace(url=redirected_url)

                return self._redirect(redirected, request, spider, response.status)
            else:
                return response

        if response.status in [302, 303] and 'Location' in response.headers:
            redirected_url = urljoin(request.url, response.headers['location'])
            redirected = self._redirect_request_using_get(request, redirected_url)
            return self._redirect(redirected, request, spider, response.status)

        if response.status in [301, 307] and 'Location' in response.headers:
            redirected_url = urljoin(request.url, response.headers['location'])
            redirected = request.replace(url=redirected_url)
            return self._redirect(redirected, request, spider, response.status)

        if isinstance(response, HtmlResponse):
            interval, url = get_meta_refresh(response)
            if url and interval < self.max_metarefresh_delay:
                redirected = self._redirect_request_using_get(request, url)
                return self._redirect(redirected, request, spider, 'meta refresh')


        return response

(3) You can now access the list of redirect statuses in your spider with:

response.meta['redirect_status']
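For example, the captured statuses line up hop-for-hop with redirect_urls. This is an illustrative sketch with made-up URLs and status codes, using a plain dict in place of the real response meta:

```python
# Illustrative only: a meta dict as the middleware above would leave it
# after a 301 -> 302 redirect chain (URLs and statuses are made up).
meta = {
    'redirect_urls': ['http://example.com/a', 'http://example.com/b'],
    'redirect_status': [301, 302],
}
final_url = 'http://example.com/c'

# Each hop pairs a requested URL with the status code it returned.
hops = list(zip(meta['redirect_urls'], meta['redirect_status']))
for url, status in hops:
    print(status, '->', url)
print('final:', final_url)
```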

3 Answers


I believe this is available in

response.status

See http://doc.scrapy.org/en/0.14/topics/request-response.html#scrapy.http.Response

Answered 2012-06-11T14:54:09.717

response.meta['redirect_urls'] is populated by the RedirectMiddleware. Your spider callback will never receive the responses in between, only the last one after all the redirects.

If you want to control the process, subclass RedirectMiddleware, disable the original one, and enable yours. Then you can control the redirect process, including keeping track of the response statuses.
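In settings terms that means the same swap as in the question's edit; the myproject.middlewares path below is a placeholder for wherever you put your subclass:

```python
# settings.py: disable the built-in RedirectMiddleware and register
# the custom subclass in its place (module path is a placeholder).
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': None,
    'myproject.middlewares.CustomRedirectMiddleware': 100,
}
```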

Here is the original implementation (scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware):

class RedirectMiddleware(object):
    """Handle redirection of requests based on response status and meta-refresh html tag"""

    def _redirect(self, redirected, request, spider, reason):
        ...
            redirected.meta['redirect_urls'] = request.meta.get('redirect_urls', []) + \
                [request.url]

As you can see, the _redirect method, called from different parts of process_response, is what creates meta['redirect_urls'].

And in process_response it is invoked as return self._redirect(redirected, request, spider, response.status), which means the original response is not passed to the spider.
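To see why the list grows across hops, here is a toy model of that accumulation, with plain dicts standing in for Request.meta (an assumption for illustration; the real middleware copies meta when it builds the redirected request):

```python
def redirect(request_meta, request_url):
    # Mimics what RedirectMiddleware._redirect does: the redirected
    # request's meta carries the previous redirect_urls list plus the
    # URL that was just requested.
    redirected_meta = dict(request_meta)
    redirected_meta['redirect_urls'] = request_meta.get('redirect_urls', []) + [request_url]
    return redirected_meta

meta = {}
meta = redirect(meta, 'http://example.com/a')  # first hop
meta = redirect(meta, 'http://example.com/b')  # second hop
print(meta['redirect_urls'])
```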

Answered 2012-06-12T06:12:33.340

KISS solution: I thought it better to add the strict minimum of code to capture the new redirect status field, and let RedirectMiddleware do the rest of the work:

from scrapy.contrib.downloadermiddleware.redirect import RedirectMiddleware

class CustomRedirectMiddleware(RedirectMiddleware):
    """Handle redirection of requests based on response status and meta-refresh html tag"""

    def process_response(self, request, response, spider):
        # Record this hop's status code, then let the stock middleware
        # handle the actual redirect
        request.meta.setdefault('redirect_status', []).append(response.status)
        response = super(CustomRedirectMiddleware, self).process_response(request, response, spider)
        return response

Then, in your BaseSpider subclass, you can access redirect_status with the following:

    def parse(self, response):
        item = ScrapyGoogleindexItem()
        item['redirections'] = response.meta.get('redirect_times', 0)
        item['redirect_status'] = response.meta['redirect_status']
        return item

Answered 2016-03-21T20:03:34.580