
I installed Python 2.6.2 earlier on a Windows XP machine and ran the following code:

import urllib2
import urllib

page = urllib2.Request('http://www.python.org/fish.html')
urllib2.urlopen( page )

I get the following error:

Traceback (most recent call last):
  File "C:\Python26\test3.py", line 6, in <module>
    urllib2.urlopen( page )
  File "C:\Python26\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python26\lib\urllib2.py", line 383, in open
    response = self._open(req, data)
  File "C:\Python26\lib\urllib2.py", line 401, in _open
    '_open', req)
  File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "C:\Python26\lib\urllib2.py", line 1130, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "C:\Python26\lib\urllib2.py", line 1105, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 11001] getaddrinfo failed>

5 Answers

import urllib2
response = urllib2.urlopen('http://www.python.org/fish.html')
html = response.read()

You're doing it wrong.
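A sketch of how the two failure modes in this thread can be told apart: HTTPError (the server replied with an error status, as fish.html does) versus URLError (the request never got an answer, as with the question's getaddrinfo failure). The `fetch` helper name and the Python 2/3 import shim are my additions, not part of the original answer:

```python
try:
    from urllib2 import urlopen, HTTPError, URLError   # Python 2, as in the question
except ImportError:
    from urllib.request import urlopen                 # Python 3 equivalent
    from urllib.error import HTTPError, URLError

def fetch(url):
    """Return the page body, or None if the fetch fails."""
    try:
        response = urlopen(url)
    # HTTPError must be caught first: it is a subclass of URLError.
    except HTTPError as e:
        # The server answered, but with an error status (e.g. 404 for fish.html).
        print('HTTP error: %s' % e.code)
        return None
    except URLError as e:
        # We never reached the server: DNS failure, refused connection, ...
        print('URL error: %s' % e.reason)
        return None
    try:
        return response.read()
    finally:
        response.close()
```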

Answered 2009-08-17T20:13:51.540

Looking at the urllib2 source, at the line the traceback points to:

  File "C:\Python26\lib\urllib2.py", line 1105, in do_open
    raise URLError(err)

you'll find the following snippet:

    try:
        h.request(req.get_method(), req.get_selector(), req.data, headers)
        r = h.getresponse()
    except socket.error, err: # XXX what error?
        raise URLError(err)

So it looks like the source is a socket error, not an HTTP-protocol error. Possible causes: you are not online, you are behind a restrictive firewall, your DNS is down, ...

All of this is in addition to the fact that, as mcandre pointed out, your code is wrong.
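To confirm this diagnosis in code: the wrapped socket error is still attached to the exception as `URLError.reason`, so you can check whether it really is a resolution failure. The `diagnose` helper and the Python 2/3 import shim below are my additions, not part of the original answer:

```python
import socket

try:
    from urllib2 import urlopen, URLError        # Python 2, as in the question
except ImportError:
    from urllib.request import urlopen           # Python 3 equivalent
    from urllib.error import URLError

def diagnose(url):
    """Try to open url; return the underlying error from URLError.reason, if any."""
    try:
        urlopen(url)
        return None
    except URLError as e:
        reason = e.reason
        if isinstance(reason, socket.gaierror):
            # Name resolution failed -- this is the [Errno 11001] case.
            print('DNS failure: %s' % reason)
        return reason
```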

Answered 2009-08-18T07:09:08.723

Name resolution error.

getaddrinfo is used to resolve the hostname (python.org) in your request. If it fails, it means the name could not be resolved, because:

  1. It doesn't exist, or the record is stale (unlikely; python.org is a well-established domain)
  2. Your DNS server is down (unlikely; if you can browse other sites, you should be able to fetch that page through Python)
  3. A firewall is blocking Python or your script from accessing the Internet (most likely; Windows Firewall sometimes doesn't ask whether you want to allow an application)
  4. You live on an ancient voodoo cemetery. (unlikely; if that's the case, you should move out)
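Causes 1–3 can be checked without urllib2 at all, by calling socket.getaddrinfo directly — the same call that is failing with [Errno 11001]. A minimal sketch; the `can_resolve` helper name and port 80 are my choices for illustration:

```python
import socket

def can_resolve(host):
    """Return the addresses host resolves to, or None if resolution fails."""
    try:
        infos = socket.getaddrinfo(host, 80)
    except socket.gaierror as e:
        # This is the failure urllib2 wraps as URLError([Errno 11001] ...).
        print('resolution failed: %s' % e)
        return None
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return sorted({info[4][0] for info in infos})
```

If this fails for www.python.org while your browser can reach the site, the browser is probably going through a proxy Python doesn't know about, or a firewall is singling out Python.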
Answered 2012-11-04T19:30:39.083

Windows Vista, Python 2.6.2

That's a 404 page, right?

>>> import urllib2
>>> import urllib
>>>
>>> page = urllib2.Request('http://www.python.org/fish.html')
>>> urllib2.urlopen( page )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python26\lib\urllib2.py", line 389, in open
    response = meth(req, response)
  File "C:\Python26\lib\urllib2.py", line 502, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python26\lib\urllib2.py", line 427, in error
    return self._call_chain(*args)
  File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "C:\Python26\lib\urllib2.py", line 510, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
>>>
Answered 2009-08-17T21:28:15.617


First, I see no reason to import urllib; I've only ever seen urllib2 used to replace urllib entirely and I know of no functionality that's useful from urllib and yet is missing from urllib2.

Next, I notice that http://www.python.org/fish.html gives a 404 error for me. (That doesn't explain the backtrace/exception you're seeing; I get urllib2.HTTPError: HTTP Error 404: Not Found.)

Normally, if you just want to do a default fetch of a web page (without adding special HTTP headers, doing any sort of POST, etc.), then the following suffices:

req = urllib2.urlopen('http://www.python.org/')
html = req.read()
# and req.close() if you want to be pedantic
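The "pedantic" close can be made automatic with contextlib.closing, since in Python 2.6 the response object is not a context manager. A sketch under that assumption; the `fetch` helper name and the Python 2/3 import shim are my additions:

```python
from contextlib import closing

try:
    from urllib2 import urlopen          # Python 2, as in the answers above
except ImportError:
    from urllib.request import urlopen   # Python 3 equivalent

def fetch(url):
    # closing() guarantees response.close() runs even if read() raises.
    with closing(urlopen(url)) as response:
        return response.read()
```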
Answered 2009-08-17T21:42:05.153