I want to run this command:
scrapy shell 'https://www.python.org/'
It doesn't work. Is there a fix for this? Scrapy is installed. Here is the error:
scrapy shell 'https://www.python.org/'
2018-07-06 12:33:26 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2018-07-06 12:33:26 [scrapy] INFO: Optional features available: ssl, http11, boto
2018-07-06 12:33:26 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0}
2018-07-06 12:33:26 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, CoreStats, SpiderState
2018-07-06 12:33:26 [boto] DEBUG: Retrieving credentials from metadata server.
2018-07-06 12:33:27 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2018-07-06 12:33:27 [boto] ERROR: Unable to read instance data, giving up
2018-07-06 12:33:27 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2018-07-06 12:33:27 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2018-07-06 12:33:27 [scrapy] INFO: Enabled item pipelines:
2018-07-06 12:33:27 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-07-06 12:33:27 [scrapy] INFO: Spider opened
2018-07-06 12:33:27 [scrapy] DEBUG: Retrying <GET https://www.python.org/> (failed 1 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'tlsv1 alert protocol version'), ('SSL routines', 'ssl3_write_bytes', 'ssl handshake failure')]>]
2018-07-06 12:33:27 [scrapy] DEBUG: Retrying <GET https://www.python.org/> (failed 2 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'tlsv1 alert protocol version'), ('SSL routines', 'ssl3_write_bytes', 'ssl handshake failure')]>]
2018-07-06 12:33:27 [scrapy] DEBUG: Gave up retrying <GET https://www.python.org/> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'tlsv1 alert protocol version'), ('SSL routines', 'ssl3_write_bytes', 'ssl handshake failure')]>]
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 9, in <module>
    load_entry_point('Scrapy==1.0.3', 'console_scripts', 'scrapy')()
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/commands/shell.py", line 63, in run
    shell.start(url=url)
  File "/usr/lib/python2.7/dist-packages/scrapy/shell.py", line 44, in start
    self.fetch(url, spider)
  File "/usr/lib/python2.7/dist-packages/scrapy/shell.py", line 87, in fetch
    reactor, self._schedule, request, spider)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/threads.py", line 122, in blockingCallFromThread
    result.raiseException()
  File "<string>", line 2, in raiseException
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_read_bytes', 'tlsv1 alert protocol version'), ('SSL routines', 'ssl3_write_bytes', 'ssl handshake failure')]>]
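The 'tlsv1 alert protocol version' alert usually means the server refused an outdated TLS handshake, so it may help to check what the local Python TLS stack supports. As a minimal diagnostic sketch (standard library only, no Scrapy involved — the `tls12_supported` helper is just an illustrative name, not part of any library):

```python
import ssl

def tls12_supported():
    """Return True if this Python's ssl module exposes TLS 1.2.

    python.org rejects TLS 1.0 handshakes; the 'tlsv1 alert
    protocol version' error suggests an old OpenSSL/pyOpenSSL
    stack is attempting one.
    """
    return hasattr(ssl, "PROTOCOL_TLSv1_2")

if __name__ == "__main__":
    # Shows the linked OpenSSL build, e.g. 'OpenSSL 1.0.1f ...'
    print(ssl.OPENSSL_VERSION)
    print("TLS 1.2 available:", tls12_supported())
```

If TLS 1.2 is missing, upgrading the TLS-related packages (for example `pip install --upgrade scrapy pyopenssl twisted` — exact package set may vary with your distribution) is the likely direction, since Scrapy 1.0.3 here is running on Python 2.7 with an old system OpenSSL.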