
I want to use Scrapy to log into a site and then request another URL. So far so good: I installed Scrapy and wrote this script:

from scrapy import log
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest

class LoginSpider2(BaseSpider):
    name = 'github_login'
    start_urls = ['https://github.com/login']

    def parse(self, response):
        # Fill in and submit the login form found on the login page.
        return [FormRequest.from_response(
            response,
            formdata={'login': 'username', 'password': 'password'},
            callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
        else:
            self.log("Login succeeded")

After running this script I get the "Login succeeded" log. Then I added another URL, but it doesn't work. To do that, I replaced:

start_urls = ['https://github.com/login']

with:

start_urls = ['https://github.com/login', 'https://github.com/MyCompany/MyPrivateRepo']

But I got these errors:

2013-06-11 22:23:40+0200 [scrapy] DEBUG: Enabled item pipelines: 
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 4, in <module>
    execute()
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 76, in _run_print_help
    func(*a, **kw)
  File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/Library/Python/2.7/site-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/Library/Python/2.7/site-packages/scrapy/spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)

What am I doing wrong? I searched on Stack Overflow but couldn't find the right answer.

Thanks


2 Answers


Your error indicates that Scrapy cannot find the spider. Did you create it inside the project's spiders/ folder?
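One quick way to check (assuming you run it from the project root) is scrapy list, which prints the names of all the spiders Scrapy can find, so you can confirm yours is registered:

$ scrapy list
github_login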

In any case, once you get it running you will hit a second problem: the default callback for start_urls requests is self.parse, which will fail on the repo page (there is no login form there). They will also probably run in parallel, so when it hits the private repo it will get an error :P

If it works, you should keep only the login URL in start_urls and return a new Request from the after_login method. Like this:

def after_login(self, response):
    ...
    else:
        return Request('https://github.com/MyCompany/MyPrivateRepo', 
                       callback=self.parse_repo)
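Putting this together with the code from the question, a minimal sketch of the whole spider could look like the following; parse_repo is a hypothetical callback added here just for illustration:

from scrapy import log
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest, Request

class LoginSpider2(BaseSpider):
    name = 'github_login'
    # Only the login page goes here; the repo is requested after login.
    start_urls = ['https://github.com/login']

    def parse(self, response):
        # Submit the login form found on the login page.
        return [FormRequest.from_response(
            response,
            formdata={'login': 'username', 'password': 'password'},
            callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # Logged in: the session cookies are reused for this request.
        return Request('https://github.com/MyCompany/MyPrivateRepo',
                       callback=self.parse_repo)

    def parse_repo(self, response):
        # Hypothetical callback: just log how much data came back.
        self.log("Repo page fetched: %d bytes" % len(response.body))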
answered 2013-06-12T21:34:57.717

Is the spider's name attribute still set correctly? An incorrect or missing name is a common cause of this kind of error.
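For instance, the name defined in the spider has to match exactly what you pass on the command line (using the names from the question):

class LoginSpider2(BaseSpider):
    name = 'github_login'   # must match the argument to "scrapy crawl"

# run it with:
#   scrapy crawl github_login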

answered 2013-06-13T11:17:32.570