I am using Ubuntu 14.04 LTS.

I tried Polipo, but it kept refusing connections from Firefox even after I added myself as an allowedClient, and hours of research turned up no solution. So instead I installed Privoxy, and verified it works with Firefox by visiting the Tor website, which said congratulations, this browser is configured to use Tor. That confirmed I should be able to scrape Tor sites.

However, when I use Scrapy, I get an error that nobody else seems to have run into...?

2016-07-14 02:43:34 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'myProject.middlewares.RandomUserAgentMiddleware',
 'myProject.middlewares.ProxyMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-07-14 02:43:34 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-07-14 02:43:34 [scrapy] INFO: Enabled item pipelines:
['myProject.pipelines.MysqlPipeline']
2016-07-14 02:43:34 [scrapy] INFO: Spider opened
2016-07-14 02:43:34 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-07-14 02:43:34 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-07-14 02:43:34 [Tor] DEBUG: User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.3 Safari/534.53.10 <GET http://thehiddenwiki.org>
2016-07-14 02:43:34 [scrapy] ERROR: Error downloading <GET http://thehiddenwiki.org>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/usr/local/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
    result = f(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
    return handler.download_request(request, spider)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 60, in download_request
    return agent.download_request(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 259, in download_request
    agent = self._get_agent(request, timeout)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 239, in _get_agent
    _, _, proxyHost, proxyPort, proxyParams = _parse(proxy)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 37, in _parse
    return _parsed_url_args(parsed)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 20, in _parsed_url_args
    host = b(parsed.hostname)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 17, in <lambda>
    b = lambda s: to_bytes(s, encoding='ascii')
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.py", line 117, in to_bytes
    'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got NoneType

I looked up this to_bytes error, but it only led me to Scrapy's source code.

I know this code works without the proxy, because it crawls my localhost site and other websites, but obviously not Tor, since a proxy is needed to reach onion sites.

What is going on here?

middlewares.py

import random

from scrapy import log
from scrapy.conf import settings


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        ua = random.choice(settings.get('USER_AGENT_LIST'))
        if ua:
            request.headers.setdefault('User-Agent', ua)
            # just to check which user agent is being used for the request
            spider.log(
                u'User-Agent: {} {}'.format(request.headers.get('User-Agent'), request),
                level=log.DEBUG
            )

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # send every request through the proxy configured in settings
        request.meta['proxy'] = settings.get('HTTP_PROXY')

settings.py

USER_AGENT_LIST = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.36 Safari/535.7',
    'Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0) Gecko/16.0 Firefox/16.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.3 Safari/534.53.10'
]

DOWNLOADER_MIDDLEWARES = {
    'myProject.middlewares.RandomUserAgentMiddleware': 400,
    'myProject.middlewares.ProxyMiddleware': 410,
    # disable the built-in UserAgentMiddleware so the random one takes over
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

HTTP_PROXY = 'localhost:8118'

1 Answer

Internally, Scrapy uses urllib(2)'s _parse_proxy to detect proxy settings. From the urllib documentation:

The urlopen() function works transparently with proxies which do not require authentication. In a Unix or Windows environment, set the http_proxy, or ftp_proxy environment variables to a URL that identifies the proxy server before starting the Python interpreter.

% http_proxy="http://www.someproxy.com:3128"
% export http_proxy
% python
...
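
To see what that scheme requirement looks like in practice, this is roughly what _parse_proxy returns with and without a scheme (a quick interactive check; assumes Python 2's urllib2, which is where Scrapy picks the helper up):

>>> from urllib2 import _parse_proxy
>>> _parse_proxy('http://localhost:8118')
('http', None, None, 'localhost:8118')
>>> _parse_proxy('localhost:8118')  # no scheme: treated as a bare authority
(None, None, None, 'localhost:8118')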

And when using the proxy key in meta, Scrapy expects the same syntax, i.e. the value must include the scheme, e.g. 'http://localhost:8118' (see the snippet below).
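
Without the scheme, the URL parsing in Scrapy's downloader ends up with a None hostname, which is exactly what the to_bytes call in your traceback then receives. A minimal sketch of that failure mode, assuming Python 2.7's urlparse:

>>> from urlparse import urlparse
>>> urlparse('http://localhost:8118').hostname
'localhost'
>>> print urlparse('localhost:8118').hostname  # no scheme -> hostname is None
None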

This is in the docs, although somewhat hidden:

You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port.
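
Applied to the settings above, the fix should simply be adding the scheme:

HTTP_PROXY = 'http://localhost:8118'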
