
LS,

I have installed Django-Dynamic-Scraper (DDS). I want to render JavaScript through Splash, so I installed scrapy-splash and pulled the Splash Docker image. The screenshot below shows that the Docker container is reachable.

[Screenshot: Splash Docker container]

However, when I test it through DDS, it returns the following error:

2016-10-25 17:06:00 [scrapy] INFO: Spider opened
2016-10-25 17:06:00 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-25 17:06:00 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-25 17:06:05 [scrapy] DEBUG: Crawled (200) <POST http://192.168.0.150:8050/render.html> (referer: None)
2016-10-25 17:06:06 [root] ERROR: No base objects found!
2016-10-25 17:06:06 [scrapy] INFO: Closing spider (finished)
2016-10-25 17:06:06 [scrapy] INFO: Dumping Scrapy stats:

when running:

scrapy crawl my_spider -a id=1

I have configured the DDS admin page and ticked the checkbox to render JavaScript:

[Screenshot: admin configuration]

I followed the scrapy-splash configuration:

# ----------------------------------------------------------------------
# SPLASH SETTINGS
# https://github.com/scrapy-plugins/scrapy-splash#configuration
# --------------------------------------------------------------------
SPLASH_URL = 'http://192.168.0.150:8050/'

DSCRAPER_SPLASH_ARGS = {'wait': 3}

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# This middleware is needed to support the cache_args feature;
# it saves disk space by not storing duplicate Splash arguments
# multiple times in a disk request queue.
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# If you use Scrapy HTTP cache then a custom cache storage backend is required.
# scrapy-splash provides a subclass
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
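For reference, here is my understanding of what these settings amount to, sketched as plain Python. `build_splash_request` is a hypothetical helper, not part of the scrapy-splash API; it only illustrates how the target URL and the extra arguments (here `wait`) end up in a POST to Splash's `render.html` endpoint, which matches the `POST http://192.168.0.150:8050/render.html` line in the log above:

```python
# Sketch (assumption, not library code): how the Splash endpoint and
# request payload are derived from SPLASH_URL and the extra arguments.
SPLASH_URL = 'http://192.168.0.150:8050/'
SPLASH_ARGS = {'wait': 3}  # mirrors DSCRAPER_SPLASH_ARGS above

def build_splash_request(target_url, splash_url=SPLASH_URL, args=SPLASH_ARGS):
    """Return the Splash endpoint and the JSON payload that would be POSTed."""
    endpoint = splash_url.rstrip('/') + '/render.html'
    # Splash fetches 'url', waits 'wait' seconds, then returns rendered HTML.
    payload = dict(args, url=target_url)
    return endpoint, payload

endpoint, payload = build_splash_request('http://example.com/page')
print(endpoint)  # http://192.168.0.150:8050/render.html
print(payload)   # {'wait': 3, 'url': 'http://example.com/page'}
```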

I assume that with DDS/scrapy-splash configured correctly, the required arguments are sent to the Splash Docker container for rendering. Is that right?

What am I missing? Do I need to adapt the spider with a Splash script?

