I have an iterator that is supposed to run for a few days. I want errors to be caught and reported, and then I want the iterator to continue. Or the whole process can restart.
Here's the function:
    def get_units(self, scraper):
        units = scraper.get_units()
        i = 0
        while True:
            try:
                unit = units.next()
            except StopIteration:
                if i == 0:
                    log.error("Scraper returned 0 units", {'scraper': scraper})
                break
            except:
                traceback.print_exc()
                log.warning("Exception occurred in get_units", extra={'scraper': scraper, 'iteration': i})
            else:
                yield unit
                i += 1
Because the scraper can be one of many code variations, it can't be trusted, and I don't want to handle errors there. But when an error occurs in units.next(), the whole thing stops. I suspect this is because the iterator throws a StopIteration when one of its iterations fails.
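This is easy to confirm in isolation: once a generator body raises an exception, that generator is finished for good, and every subsequent next() call on it raises StopIteration. A minimal standalone demo (not the scraper code):

    def broken_gen():
        yield 1
        raise ValueError("simulated failure inside the generator")
        yield 2  # never reached

    g = broken_gen()
    print(next(g))  # -> 1
    try:
        next(g)     # the ValueError propagates out of the generator...
    except ValueError:
        pass
    next(g)         # ...and the generator is now dead: raises StopIteration

That matches the output below: the HTTPError escapes get_units(), the bare except logs it, and the very next units.next() call raises StopIteration, which triggers the break.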
Here's the output (only the last few lines):
[2012-11-29 14:11:12 /home/amcat/amcat/scraping/scraper.py:135 DEBUG] Scraping unit <Element div at 0x4258c710>
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article Counter-Strike: Global Offensive Update Released
Traceback (most recent call last):
File "/home/amcat/amcat/scraping/controller.py", line 101, in get_units
unit = units.next()
File "/home/amcat/amcat/scraping/scraper.py", line 114, in get_units
for unit in self._get_units():
File "/home/amcat/scraping/games/steamcommunity.py", line 90, in _get_units
app_doc = self.getdoc(url,urlencode(form))
File "/home/amcat/amcat/scraping/scraper.py", line 231, in getdoc
return self.opener.getdoc(url, encoding)
File "/home/amcat/amcat/scraping/htmltools.py", line 54, in getdoc
response = self.opener.open(url, encoding)
File "/usr/lib/python2.7/urllib2.py", line 406, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 444, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
[2012-11-29 14:11:14 /home/amcat/amcat/scraping/controller.py:110 WARNING] Exception occurred in get_units
...code ends...
So, how can I prevent the iteration from stopping when an error occurs?
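If restarting the whole job is acceptable (as mentioned at the top), one workaround is to rebuild the generator whenever it dies. A hypothetical sketch (iterate_with_restart and max_restarts are not part of the original code), with the caveat that a restart will re-yield units that were already produced unless duplicates are filtered out:

    import logging
    log = logging.getLogger(__name__)

    def iterate_with_restart(make_units, max_restarts=3):
        """Re-create the units generator from scratch if it dies mid-run.

        make_units is a zero-argument callable (e.g. scraper.get_units),
        so a fresh generator can be built for every attempt.
        """
        for attempt in range(max_restarts):
            try:
                for unit in make_units():
                    yield unit
                return  # the generator finished normally
            except Exception:
                log.warning("units generator died, restarting",
                            exc_info=True, extra={'attempt': attempt})
        log.error("giving up after %d failed runs" % max_restarts)

The consuming loop stays unchanged: for unit in iterate_with_restart(scraper.get_units): ...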
EDIT: here's the code in get_units():
    def get_units(self):
        """
        Split the scraping job into a number of 'units' that can be processed independently
        of each other.
        @return: a sequence of arbitrary objects to be passed to scrape_unit
        """
        self._initialize()
        for unit in self._get_units():
            yield unit
And here's a simplified _get_units():
    INDEX_URL = "http://www.steamcommunity.com"

    def _get_units(self):
        doc = self.getdoc(INDEX_URL)  # returns an lxml.etree document
        for a in doc.cssselect("div.discussion a"):
            link = a.get('href')
            yield link
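Since a generator cannot be resumed after it raises, the per-unit error handling has to live inside the generator itself, wrapped around whatever can fail for a single unit (in the real scraper that is the per-discussion getdoc() call visible in the traceback). A sketch of the simplified _get_units() restructured that way; the placement of the try is the point, the rest is unchanged:

    def _get_units(self):
        doc = self.getdoc(INDEX_URL)  # if this fails, there are no units at all
        for a in doc.cssselect("div.discussion a"):
            try:
                # everything that can fail for ONE unit belongs inside this try,
                # e.g. the getdoc(url, urlencode(form)) call from the traceback
                link = a.get('href')
            except Exception:
                log.warning("Skipping one unit",
                            extra={'scraper': self}, exc_info=True)
                continue
            yield link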
EDIT: follow-up question: Change every for loop in a function to do error handling automatically after each failed iteration