
Running Python 3.2.3 on Kubuntu Linux 12.10, with Requests 0.12.1 and BeautifulSoup 4.1.0, I'm hitting some web pages that break during parsing:

import bs4
import requests
from pprint import pprint

# Excerpt from inside test_page_parse() (see the traceback below).
try:
    response = requests.get('http://www.wbsonline.com/resources/employee-check-tampering-fraud/')
except Exception as error:
    return False

pprint(str(type(response)))
pprint(response)
pprint(str(type(response.content)))

soup = bs4.BeautifulSoup(response.content)

Note that hundreds of other web pages parse fine. What is it about this particular page that crashes Python, and how can I work around it? Here is the crash:

bruno:scraper$ ./test-broken-site.py
"<class 'requests.models.Response'>"
<Response [200]>
"<class 'bytes'>"
Traceback (most recent call last):
  File "./test-broken-site.py", line 146, in <module>
    main(sys.argv)
  File "./test-broken-site.py", line 138, in main
    has_adsense('http://www.wbsonline.com/resources/employee-check-tampering-fraud/')
  File "./test-broken-site.py", line 67, in test_page_parse
    soup = bs4.BeautifulSoup(response.content)
  File "/usr/lib/python3/dist-packages/bs4/__init__.py", line 172, in __init__
    self._feed()
  File "/usr/lib/python3/dist-packages/bs4/__init__.py", line 185, in _feed
    self.builder.feed(self.markup)
  File "/usr/lib/python3/dist-packages/bs4/builder/_lxml.py", line 175, in feed
    self.parser.close()
  File "parser.pxi", line 1171, in lxml.etree._FeedParser.close (src/lxml/lxml.etree.c:79886)
  File "parsertarget.pxi", line 126, in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:88932)
  File "lxml.etree.pyx", line 282, in lxml.etree._ExceptionContext._raise_if_stored (src/lxml/lxml.etree.c:7469)
  File "saxparser.pxi", line 288, in lxml.etree._handleSaxDoctype (src/lxml/lxml.etree.c:85572)
  File "parsertarget.pxi", line 84, in lxml.etree._PythonSaxParserTarget._handleSaxDoctype (src/lxml/lxml.etree.c:88469)
  File "/usr/lib/python3/dist-packages/bs4/builder/_lxml.py", line 150, in doctype
    doctype = Doctype.for_name_and_ids(name, pubid, system)
  File "/usr/lib/python3/dist-packages/bs4/element.py", line 720, in for_name_and_ids
    return Doctype(value)
  File "/usr/lib/python3/dist-packages/bs4/element.py", line 653, in __new__
    return str.__new__(cls, value, DEFAULT_OUTPUT_ENCODING)
TypeError: coercing to str: need bytes, bytearray or buffer-like object, NoneType found

Instead of bs4.BeautifulSoup(response.content) I tried bs4.BeautifulSoup(response.text). That gives the same result (the same crash on this page). What can I do to handle pages that break like this, so that I can still parse them?


1 Answer


The site in your output has this doctype:

<!DOCTYPE>

whereas a proper site should have something like:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

When the BeautifulSoup parser tries to build the doctype here:

File "/usr/lib/python3/dist-packages/bs4/element.py", line 720, in for_name_and_ids
return Doctype(value)

the value passed to Doctype is None, and the parser fails when it then tries to use that value.
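With the bs4 4.1.0 shown in your traceback, you can reproduce that final TypeError directly by calling Doctype.for_name_and_ids with all-None arguments, which is effectively what an empty doctype triggers (a minimal sketch; newer bs4 releases may handle this case differently):

from bs4.element import Doctype

# With name, pub_id and system_id all None, for_name_and_ids() ends up
# calling Doctype(None), which raises:
# TypeError: coercing to str: need bytes, bytearray or buffer-like object, NoneType found
Doctype.for_name_and_ids(None, None, None)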

One solution is to fix the doctype manually with a regular expression before handing the page to BeautifulSoup, as in the sketch below.
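A minimal sketch of that approach, assuming it is enough to rewrite the empty <!DOCTYPE> before parsing (the exact pattern and the replacement doctype here are assumptions, not taken from the page itself):

import re

import bs4
import requests

response = requests.get('http://www.wbsonline.com/resources/employee-check-tampering-fraud/')

# Replace whatever <!DOCTYPE ...> the page declares (here, an empty one)
# with a plain HTML doctype so bs4/lxml sees a non-empty doctype name.
html = re.sub(r'<!DOCTYPE[^>]*>', '<!DOCTYPE html>', response.text,
              count=1, flags=re.IGNORECASE)

soup = bs4.BeautifulSoup(html)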

answered 2013-06-16T16:05:12.057