
I have a program that fetches content from URLs stored in a database. I'm using BeautifulSoup and urllib2 to grab the content. When I print the results, I can see the program crashes when it hits what looks like a 403 error. How do I keep my program from crashing on 403/404 and similar errors?

Relevant output:

Traceback (most recent call last):
  File "web_content.py", line 29, in <module>
    grab_text(row) 
  File "web_content.py", line 21, in grab_text
    f = urllib2.urlopen(row)
  File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 400, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 438, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 521, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden

1 Answer


You can wrap the request in a try/except, e.g.

try:
    urllib2.urlopen(url)
except urllib2.HTTPError, e:
    print e

For some good examples and more information, see http://www.voidspace.org.uk/python/articles/urllib2.shtml#handling-exceptions
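Folded into a fetch function like the asker's grab_text, one bad row then gets logged and skipped instead of aborting the whole run. A minimal sketch, written for Python 3 where urllib2's pieces live in urllib.request and urllib.error (the grab_text name and return behaviour are assumptions based on the traceback, not the asker's actual code):

```python
# Sketch: fetch a URL, returning its body, or None if the request fails.
# HTTPError covers server-side statuses like 403/404; URLError covers
# transport problems such as DNS failures or refused connections.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def grab_text(url):
    try:
        with urlopen(url) as f:
            return f.read()
    except HTTPError as e:
        # The server answered, but with an error status (403, 404, ...).
        print("skipping %s: HTTP %d" % (url, e.code))
        return None
    except URLError as e:
        # The request never completed: DNS failure, connection refused, etc.
        print("skipping %s: %s" % (url, e.reason))
        return None
```

Note that HTTPError must be caught before URLError (or separately, as here): HTTPError is a subclass of URLError, so a bare `except URLError` listed first would swallow both.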

Answered 2012-04-12T05:30:51.300