
I tried something like this:

payload = {"project": settings['BOT_NAME'],
             "spider": crawler_name,
             "start_urls": ["http://www.foo.com"]}
response = requests.post("http://192.168.1.41:6800/schedule.json",
                           data=payload)

When I check the log, I get this error:

File "/usr/lib/pymodules/python2.7/scrapy/spider.py", line 53, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/pymodules/python2.7/scrapy/http/request/__init__.py", line 26, in __init__
    self._set_url(url)
  File "/usr/lib/pymodules/python2.7/scrapy/http/request/__init__.py", line 61, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
exceptions.ValueError: Missing scheme in request url: h

It looks like only the first letter of "http://www.foo.com" is being used as request.url, and I really don't know why.

UPDATE

Maybe start_urls should be a string rather than a list with one element, so I also tried:

"start_urls": "http://www.foo.com"

"start_urls": [["http://www.foo.com"]]

but I just get the same error.
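My current guess is that the POST body gets form-encoded and scrapyd passes every spider argument through as a plain string, so the spider ends up with start_urls set to the string "http://www.foo.com"; Scrapy then iterates over it character by character, which would explain "Missing scheme in request url: h". A minimal sketch of that suspicion (plain Python, no scrapyd involved, purely illustrative):

# Hypothetical reproduction of the symptom: if start_urls arrives as a plain
# string instead of a list, iterating over it yields single characters.
start_urls = "http://www.foo.com"

for url in start_urls:
    # Scrapy would call make_requests_from_url(url) with each character,
    # so the first "URL" it sees is just "h" -- hence the error above.
    print(repr(url))  # 'h', 't', 't', 'p', ...
    break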


1 Answer


You can modify your spider to accept a url parameter and append it to start_urls in __init__:

from scrapy import Spider


class MySpider(Spider):

    start_urls = []

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # scrapyd passes extra POST parameters to the spider as keyword arguments
        self.start_urls.append(kwargs.get('url'))

    def parse(self, response):
        # do stuff
        pass

The payload would then be:

payload = {
    "project": settings['BOT_NAME'],
    "spider": crawler_name,
    "url": "http://www.foo.com"
}
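The scheduling call itself stays the same as in the question; a short sketch of posting this payload and inspecting scrapyd's JSON reply (host and port taken from the question):

import requests

response = requests.post("http://192.168.1.41:6800/schedule.json", data=payload)
# scrapyd replies with JSON, e.g. {"status": "ok", "jobid": "..."} on success
print(response.json())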
Answered 2014-08-25T08:35:22.237