I am trying to access cookies after making a request with Splash. Below is how I build the request.
script = """
function main(splash)
splash:init_cookies(splash.args.cookies)
assert(splash:go{
splash.args.url,
headers=splash.args.headers,
http_method=splash.args.http_method,
body=splash.args.body,
})
assert(splash:wait(0.5))
local entries = splash:history()
local last_response = entries[#entries].response
return {
url = splash:url(),
headers = last_response.headers,
http_status = last_response.status,
cookies = splash:get_cookies(),
html = splash:html(),
}
end
"""
req = SplashRequest(
url,
self.parse_page,
args={
'wait': 0.5,
'lua_source': script,
'endpoint': 'execute'
}
)
The script is an exact copy of the example in the Splash documentation.
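One thing I can at least verify is whether the Lua table comes back at all: with endpoint='execute' the response should be a SplashJsonResponse, and if I read the scrapy-splash docs correctly, the decoded JSON is exposed as response.data, so the cookies field produced by splash:get_cookies() ought to be reachable directly:

def parse_page(self, response):
    # response.data holds the decoded JSON table returned by the Lua
    # script, so this should log the list of cookie dicts that
    # splash:get_cookies() produced.
    self.logger.debug('Cookies from script: %s', response.data.get('cookies'))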
What I am trying to do is access the cookies that the web page sets. When I don't use Splash, the code below works as I expect, but with Splash it does not:
self.logger.debug('Cookies: %s', response.headers.get('Set-Cookie'))
When using Splash it returns:
2017-01-03 12:12:37 [spider] DEBUG: Cookies: None
When I don't use Splash, this code works and returns the cookies provided by the web page.
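(One caveat I'm aware of: headers in Scrapy are multi-valued, and .get() returns only a single value, so to double-check I also log every Set-Cookie header with getlist():)

# .get() returns just one value per header name; .getlist() returns
# every Set-Cookie header present on the response.
self.logger.debug('Cookies: %s', response.headers.getlist('Set-Cookie'))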
The Splash documentation shows this code as an example:
def parse_result(self, response):
    # here response.body contains result HTML;
    # response.headers are filled with headers from last
    # web page loaded to Splash;
    # cookies from all responses and from JavaScript are collected
    # and put into Set-Cookie response header, so that Scrapy
    # can remember them.
I'm not sure I'm reading this correctly, but my takeaway is that I should be able to access the cookies the same way as when I'm not using Splash.
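If I read that right, the round trip is supposed to look like the sketch below: the SplashCookiesMiddleware keeps a cookiejar, injects it into splash.args.cookies (which splash:init_cookies consumes at the top of the script), and merges the cookies field of the returned JSON back into the jar before the next request. next_url here is a hypothetical follow-up link:

def parse_page(self, response):
    # By this point the cookies returned by splash:get_cookies() should
    # be back in the middleware's cookiejar, so a follow-up request made
    # with the same script gets them re-injected via splash:init_cookies.
    next_url = response.urljoin('/next')  # hypothetical follow-up URL
    yield SplashRequest(
        next_url,
        self.parse_page,
        endpoint='execute',
        args={'lua_source': script},  # the same script shown above
    )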
Middleware settings:
# Download middlewares
DOWNLOADER_MIDDLEWARES = {
    # Use a random user agent on each request
    'crawling.middlewares.RandomUserAgentDownloaderMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    # Enable crawlera proxy
    'scrapy_crawlera.CrawleraMiddleware': 600,
    # Enable Splash to render JavaScript
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
So my question is: how do I access the cookies when making requests with Splash?