I am trying to build a Scrapy downloader middleware that changes the URL of the request object. But I can't get it to work with process_request: the downloaded page is still from the original URL. My code is as follows:
#middlewares.py
class UrlModifyMiddleware(object):
    def process_request(self, request, spider):
        original_url = request.url
        m_url = 'http://whatsmyuseragent.com/'
        request.url = m_url
        #request = request.replace(url=relay_url)
The spider's code:
#spider/test_spider.py
from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request

class TestSpider(CrawlSpider):
    name = "urltest"
    start_url = "http://www.icanhazip.com/"

    def start_requests(self):
        yield Request(self.start_url, callback=self.parse_start)

    def parse_start(self, response):
        html_page = response.body
        open('test.html', 'wb').write(html_page)
In settings.py I set:
DOWNLOADER_MIDDLEWARES = {
    'official_index.middlewares.UrlModifyMiddleware': 100,
}
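For reference, Scrapy's documented contract for `process_request` is that returning `None` continues processing the *original* request, while returning a `Request` object tells the engine to schedule that request instead. So rather than assigning to `request.url` in place, the middleware can return `request.replace(url=...)`. The sketch below follows that approach; the `url_modified` meta flag is a hypothetical guard I added so the middleware does not rewrite its own replacement request forever (a returned request passes through the middleware chain again):

```python
class UrlModifyMiddleware(object):
    # Assumed target URL, taken from the question's example.
    NEW_URL = 'http://whatsmyuseragent.com/'

    def process_request(self, request, spider):
        if request.meta.get('url_modified'):
            # Already rewritten once; returning None lets the
            # replacement request proceed to download unchanged.
            return None
        # Returning a Request from process_request makes Scrapy
        # schedule it in place of the original one.
        meta = dict(request.meta)
        meta['url_modified'] = True
        return request.replace(url=self.NEW_URL, meta=meta)
```

This is a sketch under the assumption that every request should be redirected to the same URL; in practice the new URL would likely be computed from `request.url`.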