
I am getting an error when trying to run an existing scrapy project on scrapyd.

I have a working scrapy project (url_finder) and, for testing purposes, a working spider (test_ip_spider_1x) that simply downloads whatismyip.com.

I installed scrapyd successfully (with apt-get) and now I want to run the spider on scrapyd. So I execute:

curl http://localhost:6800/schedule.json -d project=url_finder -d spider=test_ip_spider_1x

This returns:

{"status": "error", "message": "'url_finder'"}

This seems to indicate a problem with the project. However, when I run scrapy crawl test_ip_spider_1x, everything works fine. When I check the scrapyd log in the web interface, this is what I get:

2014-04-01 11:40:22-0400 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:40:21 +0000] "POST /schedule.json HTTP/1.1" 200 47 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
2014-04-01 11:40:58-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:40:57 +0000] "GET / HTTP/1.1" 200 747 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:41:01-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:41:00 +0000] "GET /logs/ HTTP/1.1" 200 1203 "http://localhost:6800/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:41:03-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:41:02 +0000] "GET /logs/scrapyd.log HTTP/1.1" 200 36938 "http://localhost:6800/logs/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:42:02-0400 [HTTPChannel,2,127.0.0.1] Unhandled Error
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 1730, in allContentReceived
        req.requestReceived(command, path, version)
      File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 826, in requestReceived
        self.process()
      File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 189, in process
        self.render(resrc)
      File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 238, in render
        body = resrc.render(self)
    --- <exception caught here> ---
      File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 18, in render
        return JsonResource.render(self, txrequest)
      File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
        r = resource.Resource.render(self, txrequest)
      File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
        return m(request)
      File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 37, in render_POST
        self.root.scheduler.schedule(project, spider, **args)
      File "/usr/lib/pymodules/python2.7/scrapyd/scheduler.py", line 15, in schedule
        q = self.queues[project]
    exceptions.KeyError: 'url_finder'

2014-04-01 11:42:02-0400 [HTTPChannel,2,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:42:01 +0000] "POST /schedule.json HTTP/1.1" 200 47 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"

Any ideas?


1 Answer


In order to run a project on scrapyd, you must first deploy it. This is not well explained in the online documentation (especially for first-time users). Here is a solution that worked for me:

Install scrapyd-deploy. If you are on Ubuntu or similar, you can run:

apt-get install scrapyd-deploy

In your scrapy project folder, edit scrapy.cfg and uncomment the line

 url = http://localhost:6800/

This is your deploy target: the location that scrapyd-deploy will deploy the project to. Next, check that scrapyd-deploy can see the deploy target:

scrapyd-deploy -l

This should output something similar to:

default http://localhost:6800/

Next, deploy the project (url_finder):

scrapyd-deploy default -p url_finder
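Putting the configuration steps together, the relevant parts of scrapy.cfg would look roughly like the sketch below. The settings module name (url_finder.settings) is an assumption based on the project name; use whatever scrapy startproject generated for you.

```ini
[settings]
# assumed settings module; check what your project actually uses
default = url_finder.settings

[deploy]
# the uncommented deploy target from the step above
url = http://localhost:6800/
# optional: pin the project name so you can omit -p on the command line
project = url_finder
```

With project set in the [deploy] section, plain scrapyd-deploy default should pick up the project name automatically.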

Finally, run the spider:

curl http://localhost:6800/schedule.json -d project=url_finder -d spider=test_ip_spider_1x
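If the deployment worked, the schedule call should now return an "ok" status with a job id instead of the KeyError, roughly like this (the actual jobid will be a generated hex string):

```json
{"status": "ok", "jobid": "..."}
```

You can also confirm that scrapyd now knows about the project by querying the listprojects.json endpoint, e.g. curl http://localhost:6800/listprojects.json, which should include url_finder in its projects list.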

answered Apr 4, 2014 at 14:47