I am trying to get Scrapy to log in to a website, then go to specific pages on it and scrape information. I have the following code:
from scrapy import log
from scrapy.http import Request, FormRequest
from scrapy.contrib.spiders import Rule
from scrapy.contrib.spiders.init import InitSpider
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class DemoSpider(InitSpider):
    name = "demo"
    allowed_domains = ['example.com']
    login_page = "https://www.example.com/"
    start_urls = ["https://www.example.com/secure/example"]

    rules = (Rule(SgmlLinkExtractor(allow=r'\w+'), callback='parse_item', follow=True),)

    # Initialization
    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    # Perform login with the username and password
    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
            formdata={'name': 'user', 'password': 'password'},
            callback=self.check_login_response)

    # Check the response after logging in, make sure it went well
    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in.
        """
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        else:
            self.log('will initialize')
            self.initialized(response)

    def parse_item(self, response):
        self.log('got to the parse item page')
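Right now parse_item only logs, so I can tell whether the rule ever fires. For context, once it does fire I intend to do the actual extraction roughly like the sketch below; the XPath is just a placeholder, not anything from the real site:

from scrapy.selector import HtmlXPathSelector

# Intended replacement for parse_item inside DemoSpider (placeholder XPath).
def parse_item(self, response):
    self.log('got to the parse item page')
    hxs = HtmlXPathSelector(response)
    # Placeholder extraction; the real XPaths depend on the page layout.
    page_title = hxs.select('//title/text()').extract()
    self.log('page title: %s' % page_title)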
Every time I run the spider, it logs in and initializes, but it never matches the rules. Is there a reason for this? I have checked a number of other sites, including the documentation. Why, after initializing, does it never go through start_urls and then scrape each page?
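In case it is relevant: I am not sure how to rule out the allow pattern itself, but my assumption is that something like the following in the Scrapy shell would exercise just the link extractor (the shell session is not logged in, so this only checks the pattern against whatever page comes back, not the secure pages):

# scrapy shell https://www.example.com/secure/example
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
SgmlLinkExtractor(allow=r'\w+').extract_links(response)

If the extractor returns links there but the spider still never calls parse_item, then I assume the problem is somewhere in the InitSpider initialization rather than in the rule itself.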