
I am trying to push my scraped data to my Firebase cloud account, but I get an ImportError when I run the spider. I have tried creating a new project, and even reinstalling firebase and shub on a specific Python version, but it did not help.

The spider runs perfectly on my machine and shows no ImportErrors. Here is the error log:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/middlewares.py", line 30, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/app/__main__.egg/Terminator/spiders/IcyTermination.py", line 18, in parse
    from firebase import firebase
ImportError: No module named firebase

Any help?


1 Answer


I can't comment due to reputation, but did you create your requirements.txt?

Here is how you deploy your own dependencies to Scrapinghub.

Basically, you create a requirements.txt file in the root of your project, with one dependency per line, and add

requirements_file: requirements.txt

to your scrapinghub.yml file.
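For illustration, a minimal sketch of what the two files might look like (the project ID and the package version pin below are hypothetical; the package that provides `from firebase import firebase` is commonly `python-firebase` on PyPI, but verify against what you installed locally):

```yaml
# scrapinghub.yml (project root)
projects:
  default: 12345            # hypothetical Scrapinghub project ID
requirements_file: requirements.txt
```

`requirements.txt` itself is just one package per line, e.g. a single line such as `python-firebase==1.2` (version hypothetical). After adding both files, redeploy with `shub deploy` so the dependency is installed in the cloud environment where the spider runs.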

answered 2017-07-03T20:00:57.287