
I wrote a spider to crawl a large website. I host it on scrapehub and I am using the crawlera add-on. Without crawlera, my spider runs fine on scrapehub. As soon as I switch to the crawlera middleware, the spider exits without doing a single crawl.

I have run the spider without crawlera and it works, both on my local system and on scrapehub; the only thing I changed was enabling the middleware for crawlera. Without crawlera it runs, with crawlera it does not. I set the concurrent requests to the limit of my C10 plan:

CRAWLERA_APIKEY = '<apikey>'
CONCURRENT_REQUESTS = 10
CONCURRENT_REQUESTS_PER_DOMAIN = 10
AUTOTHROTTLE_ENABLED = False
DOWNLOAD_TIMEOUT = 600

DOWNLOADER_MIDDLEWARES = {
    #'ytscraper.middlewares.YtscraperDownloaderMiddleware': 543,
    'scrapy_crawlera.CrawleraMiddleware': 300
}


Here is the log dump:

0:  2019-02-06 05:54:34 INFO    Log opened.
1:  2019-02-06 05:54:34 INFO    [scrapy.log] Scrapy 1.5.1 started
2:  2019-02-06 05:54:34 INFO    [scrapy.utils.log] Scrapy 1.5.1 started (bot: ytscraper)
3:  2019-02-06 05:54:34 INFO    [scrapy.utils.log] Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 2.7.15 (default, Nov 16 2018, 23:19:37) - [GCC 4.9.2], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.5, Platform Linux-4.4.0-141-generic-x86_64-with-debian-8.11
4:  2019-02-06 05:54:34 INFO    [scrapy.crawler] Overridden settings: {'NEWSPIDER_MODULE': 'ytscraper.spiders', 'STATS_CLASS': 'sh_scrapy.stats.HubStorageStatsCollector', 'LOG_LEVEL': 'INFO', 'CONCURRENT_REQUESTS_PER_DOMAIN': 10, 'CONCURRENT_REQUESTS': 10, 'SPIDER_MODULES': ['ytscraper.spiders'], 'AUTOTHROTTLE_ENABLED': True, 'LOG_ENABLED': False, 'DOWNLOAD_TIMEOUT': 600, 'MEMUSAGE_LIMIT_MB': 950, 'BOT_NAME': 'ytscraper', 'TELNETCONSOLE_HOST': '0.0.0.0'}
5:  2019-02-06 05:54:34 INFO    [scrapy.middleware] Enabled extensions: More
6:  2019-02-06 05:54:34 INFO    [scrapy.middleware] Enabled downloader middlewares: Less
['sh_scrapy.diskquota.DiskQuotaDownloaderMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 u'scrapy_crawlera.CrawleraMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'sh_scrapy.middlewares.HubstorageDownloaderMiddleware']
7:  2019-02-06 05:54:34 INFO    [scrapy.middleware] Enabled spider middlewares: Less
['sh_scrapy.diskquota.DiskQuotaSpiderMiddleware',
 'sh_scrapy.middlewares.HubstorageSpiderMiddleware',
 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
8:  2019-02-06 05:54:34 INFO    [scrapy.middleware] Enabled item pipelines: More
9:  2019-02-06 05:54:34 INFO    [scrapy.core.engine] Spider opened
10: 2019-02-06 05:54:34 INFO    [scrapy.extensions.logstats] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
11: 2019-02-06 05:54:34 INFO    [root] Using crawlera at http://proxy.crawlera.com:8010 (user: 11b143d...)
12: 2019-02-06 05:54:34 INFO    [root] CrawleraMiddleware: disabling download delays on Scrapy side to optimize delays introduced by Crawlera. To avoid this behaviour you can use the CRAWLERA_PRESERVE_DELAY setting but keep in mind that this may slow down the crawl significantly
13: 2019-02-06 05:54:34 INFO    TelnetConsole starting on 6023
14: 2019-02-06 05:54:40 INFO    [scrapy.core.engine] Closing spider (finished)
15: 2019-02-06 05:54:40 INFO    [scrapy.statscollectors] Dumping Scrapy stats: More
16: 2019-02-06 05:54:40 INFO    [scrapy.core.engine] Spider closed (finished)
17: 2019-02-06 05:54:40 INFO    Main loop terminated.

Here is the log of the same spider without the crawlera middleware:

0:  2019-02-05 17:42:13 INFO    Log opened.
1:  2019-02-05 17:42:13 INFO    [scrapy.log] Scrapy 1.5.1 started
2:  2019-02-05 17:42:13 INFO    [scrapy.utils.log] Scrapy 1.5.1 started (bot: ytscraper)
3:  2019-02-05 17:42:13 INFO    [scrapy.utils.log] Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 2.7.15 (default, Nov 16 2018, 23:19:37) - [GCC 4.9.2], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.5, Platform Linux-4.4.0-135-generic-x86_64-with-debian-8.11
4:  2019-02-05 17:42:13 INFO    [scrapy.crawler] Overridden settings: {'NEWSPIDER_MODULE': 'ytscraper.spiders', 'STATS_CLASS': 'sh_scrapy.stats.HubStorageStatsCollector', 'LOG_LEVEL': 'INFO', 'CONCURRENT_REQUESTS_PER_DOMAIN': 32, 'CONCURRENT_REQUESTS': 32, 'SPIDER_MODULES': ['ytscraper.spiders'], 'AUTOTHROTTLE_ENABLED': True, 'LOG_ENABLED': False, 'DOWNLOAD_TIMEOUT': 600, 'MEMUSAGE_LIMIT_MB': 950, 'BOT_NAME': 'ytscraper', 'TELNETCONSOLE_HOST': '0.0.0.0'}
5:  2019-02-05 17:42:13 INFO    [scrapy.middleware] Enabled extensions: More
6:  2019-02-05 17:42:14 INFO    [scrapy.middleware] Enabled downloader middlewares: More
7:  2019-02-05 17:42:14 INFO    [scrapy.middleware] Enabled spider middlewares: More
8:  2019-02-05 17:42:14 INFO    [scrapy.middleware] Enabled item pipelines: More
9:  2019-02-05 17:42:14 INFO    [scrapy.core.engine] Spider opened
10: 2019-02-05 17:42:14 INFO    [scrapy.extensions.logstats] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
11: 2019-02-05 17:42:14 INFO    [root] Using crawlera at http://proxy.crawlera.com:8010 (user: 11b143d...)
12: 2019-02-05 17:42:14 INFO    [root] CrawleraMiddleware: disabling download delays on Scrapy side to optimize delays introduced by Crawlera. To avoid this behaviour you can use the CRAWLERA_PRESERVE_DELAY setting but keep in mind that this may slow down the crawl significantly
13: 2019-02-05 17:42:14 INFO    TelnetConsole starting on 6023
14: 2019-02-05 17:43:14 INFO    [scrapy.extensions.logstats] Crawled 17 pages (at 17 pages/min), scraped 16 items (at 16 items/min)
15: 2019-02-05 17:44:14 INFO    [scrapy.extensions.logstats] Crawled 35 pages (at 18 pages/min), scraped 34 items (at 18 items/min)
16: 2019-02-05 17:45:14 INFO    [scrapy.extensions.logstats] Crawled 41 pages (at 6 pages/min), scraped 40 items (at 6 items/min)
17: 2019-02-05 17:45:30 INFO    [scrapy.crawler] Received SIGTERM, shutting down gracefully. Send again to force
18: 2019-02-05 17:45:30 INFO    [scrapy.core.engine] Closing spider (shutdown)
19: 2019-02-05 17:45:38 INFO    [scrapy.statscollectors] Dumping Scrapy stats: More
20: 2019-02-05 17:45:38 INFO    [scrapy.core.engine] Spider closed (shutdown)
21: 2019-02-05 17:45:38 INFO    Main loop terminated.

I wrote a script in Python to test my crawlera connection:

import requests

response = requests.get(
    "https://www.youtube.com",
    proxies={
        "http": "http://<APIkey>:@proxy.crawlera.com:8010/",
    },
)
print(response.text)
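
A side note on that test (my observation, not part of the original question): with requests, the proxies mapping is keyed by the scheme of the target URL, so a request to an https:// URL only goes through the proxy if an "https" entry is present. A minimal sketch of a connection test that covers both schemes — verification is relaxed here only on the assumption that Crawlera intercepts TLS; installing Crawlera's CA certificate would be the cleaner option:

import requests

# Hypothetical placeholder key, same as in the question.
proxy = "http://<APIkey>:@proxy.crawlera.com:8010/"

response = requests.get(
    "https://www.youtube.com",
    proxies={
        "http": proxy,   # used for http:// URLs
        "https": proxy,  # used for https:// URLs; without this key the proxy is bypassed
    },
    verify=False,  # assumption: Crawlera re-signs TLS, so either skip verification or install its CA cert
)
print(response.status_code)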

This works, but for the life of me I cannot get the spider to work with the crawlera middleware.

I want to get the same results using crawlera, because I don't want to get banned so quickly.

Please help.


2 Answers


The data in the logs does not match the problem description. In both cases the spider used the crawlera proxy, because both logs contain this line:

INFO    [root] Using crawlera at http://proxy.crawlera.com:8010 (user: 11b143d...)

According to the scrapy_crawlera.CrawleraMiddleware source code, this means CrawleraMiddleware was enabled in both cases. I would need additional data from the logs (at least the stats, i.e. the closing lines of the log that contain the dumped Scrapy stats).

For now I have the following hypothesis:
According to the first log, you did not override the cookie settings, so CookiesMiddleware is enabled.
By default, Scrapy enables cookie handling.
Usually, websites use cookies to track visitor activity/sessions.
If a website receives requests carrying a single sessionId from multiple IPs (which is exactly what any spider does with crawlera enabled and cookies enabled), the web server can recognize the proxy usage and ban every IP involved via the unique sessionId stored in the cookie. In that case the spider stops working because of the IP ban (and other crawlera users will be unable to send requests to that site for a while).
Cookies should therefore be disabled by setting COOKIES_ENABLED to False.
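
A minimal sketch of what that would look like in the project's settings.py (my illustration of the suggestion above, not a verified fix):

# settings.py
# Disable Scrapy's cookie handling so requests routed through Crawlera's
# rotating IPs do not all carry the same session cookie.
COOKIES_ENABLED = False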

Answered 2019-02-08T23:42:06.447

You are missing CRAWLERA_ENABLED = True in your settings.

For details, see the Configuration section of the scrapy-crawlera documentation.
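
A minimal sketch of the settings from the question with that flag added (the API key stays a placeholder; 300 is the middleware priority the question already uses):

# settings.py
CRAWLERA_ENABLED = True       # the flag the answer says is missing
CRAWLERA_APIKEY = '<apikey>'  # placeholder, as in the question

DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawlera.CrawleraMiddleware': 300,
}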

Answered 2019-02-06T13:57:37.807