
I am trying to scrape http://www.apkmirror.com, but at the moment I cannot even visit the site in a browser, because it says the owner has banned my IP address (see below).

[Screenshot: page stating that the site owner has banned my IP address]

I am trying to work around this by using Privoxy and Tor, similar to what is described in http://blog.michaelyin.info/2014/02/19/scrapy-socket-proxy/.

First, I installed and started Privoxy, which listens on port 8118 by default. I added the following line to /etc/privoxy/config:

forward-socks5   /               127.0.0.1:9050 .

I also have Tor running and listening on port 9050, as verified with netstat:

kurt@kurt-ThinkPad:~$ netstat -tulnp | grep 9050
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:9050          0.0.0.0:*               LISTEN      - 

As far as I can tell with wget, this is working. For example, if I wget apkmirror.com through the proxy, I get a response:

kurt@kurt-ThinkPad:~$ wget www.apkmirror.com -e use_proxy=yes -e http_proxy=127.0.0.1:8118
--2017-04-24 11:02:32--  http://www.apkmirror.com/
Connecting to 127.0.0.1:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 185097 (181K) [text/html]
Saving to: ‘index.html.2’

index.html.2        100%[===================>] 180,76K  --.-KB/s    in 0,004s  

2017-04-24 11:02:44 (42,7 MB/s) - ‘index.html.2’ saved [185097/185097]

whereas without the proxy I get ERROR 403: Forbidden:

kurt@kurt-ThinkPad:~$ wget www.apkmirror.com
--2017-04-24 11:01:24--  http://www.apkmirror.com/
Resolving www.apkmirror.com (www.apkmirror.com)... 104.19.134.58, 104.19.136.58, 104.19.133.58, ...
Connecting to www.apkmirror.com (www.apkmirror.com)|104.19.134.58|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-04-24 11:01:24 ERROR 403: Forbidden.
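
The same comparison can be reproduced from Python. Below is a minimal sketch, assuming the requests package is installed; httpbin.org/ip is just a convenient endpoint that echoes the caller's IP and is not part of the setup above:

import requests

# Privoxy on 8118, which forwards to Tor over SOCKS5 (see /etc/privoxy/config above)
proxies = {
    "http": "http://127.0.0.1:8118",
    "https": "http://127.0.0.1:8118",
}

# httpbin.org/ip returns JSON like {"origin": "x.x.x.x"}
direct_ip = requests.get("https://httpbin.org/ip", timeout=30).json()["origin"]
proxied_ip = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30).json()["origin"]

print("direct:         ", direct_ip)
print("via Privoxy/Tor:", proxied_ip)
# If the chain is working, the two addresses should differ.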

Now for the Python code. I wrote the following (simplified) spider:

import scrapy

DEBUG = True

class TorSpider(scrapy.spiders.SitemapSpider):
    name = "tor-spider"

    sitemap_urls = ['https://www.apkmirror.com/sitemap_index.xml']
    sitemap_rules = [(r'.*-android-apk-download/$', 'parse')]

    if DEBUG:
        custom_settings = {'CLOSESPIDER_PAGECOUNT': 20}

    def parse(self, response):
        item = {'url': response.url}
        yield item

I also added the following lines to settings.py:

import os
os.environ['http_proxy'] = "http://localhost:8118"

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 1,
}
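
As an aside, HttpProxyMiddleware also respects a proxy set per request via request.meta, so an alternative to the environment variable would be a small custom downloader middleware along these lines (the class and module names here are hypothetical, not part of the project above):

# proxy_spider/middlewares.py (hypothetical module) -- attaches the proxy to every
# outgoing request instead of relying on the http_proxy environment variable.
class PrivoxyProxyMiddleware(object):

    def process_request(self, request, spider):
        # Scrapy's downloader honors request.meta['proxy'] when it is set.
        request.meta.setdefault('proxy', 'http://127.0.0.1:8118')

It would then be enabled in DOWNLOADER_MIDDLEWARES, e.g. 'proxy_spider.middlewares.PrivoxyProxyMiddleware': 100, alongside (or instead of) the environment-variable approach.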

From my reading of https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#module-scrapy.downloadermiddlewares.httpproxy, HttpProxyMiddleware should pick up the proxy if I set the http_proxy environment variable. However, if I try to crawl with the command

scrapy crawl tor-spider -o test.json

I get the following output:

2017-04-24 10:59:17 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: proxy_spider)
2017-04-24 10:59:17 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'proxy_spider.spiders', 'FEED_URI': 'test.json', 'SPIDER_MODULES': ['proxy_spider.spiders'], 'BOT_NAME': 'proxy_spider', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}

2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.closespider.CloseSpider',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Spider opened
2017-04-24 10:59:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-24 10:59:18 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-04-24 10:59:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.apkmirror.com/robots.txt> (referer: None)
2017-04-24 10:59:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.apkmirror.com/sitemap_index.xml> (referer: None)
2017-04-24 10:59:18 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://www.apkmirror.com/sitemap_index.xml>: HTTP status code is not handled or not allowed
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-24 10:59:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 519,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 3110,
 'downloader/response_count': 2,
 'downloader/response_status_count/403': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 24, 8, 59, 18, 927878),
 'log_count/DEBUG': 3,
 'log_count/INFO': 8,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 24, 8, 59, 18, 489419)}
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Spider closed (finished)

In short, despite trying to crawl anonymously through Privoxy/Tor, I still get 403 errors from the scraper. Am I doing something wrong?


1 Answer


apkmirror is using Cloudflare to protect itself (among other things) against scraping and bots.

Most likely they have blacklisted Scrapy's standard user agent. So, in addition to using a Tor IP (which, by the way, can also be blacklisted quite easily), you should set a User-Agent header that looks like a real browser:

In settings.py:

USER_AGENT = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0"

(See https://doc.scrapy.org/en/latest/topics/settings.html#user-agent for details.)
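
Putting the two pieces together, the relevant parts of settings.py would then look roughly like this (the user-agent string is just the example above and can be swapped for any current browser string):

# settings.py -- sketch combining the proxy setup from the question with a
# browser-like user agent
import os
os.environ['http_proxy'] = "http://localhost:8118"

USER_AGENT = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0"

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 1,
}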

Answered 2017-04-24T10:33:23.173