I'm working on something that pulls URLs out of Delicious and then uses those URLs to discover related feeds.
However, some of the bookmarks in Delicious are not HTML links, which makes BS blow up. Basically, if BS fetches a link and it doesn't look like HTML, I'd like to throw the link away.
This is what I'm getting at the moment:
trillian:Documents jauderho$ ./d2o.py "green data center"
processing http://www.greenm3.com/
processing http://www.eweek.com/c/a/Green-IT/How-to-Create-an-EnergyEfficient-Green-Data-Center/?kc=rss
Traceback (most recent call last):
  File "./d2o.py", line 53, in <module>
    get_feed_links(d_links)
  File "./d2o.py", line 43, in get_feed_links
    soup = BeautifulSoup(html)
  File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1499, in __init__
    BeautifulStoneSoup.__init__(self, *args, **kwargs)
  File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1230, in __init__
    self._feed(isHTML=isHTML)
  File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1263, in _feed
    self.builder.feed(markup)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 108, in feed
    self.goahead(0)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 150, in goahead
    k = self.parse_endtag(i)
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 314, in parse_endtag
    self.error("bad end tag: %r" % (rawdata[i:j],))
  File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 115, in error
    raise HTMLParseError(message, self.getpos())
HTMLParser.HTMLParseError: bad end tag: u'</b />', at line 739, column 1
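I suppose one way to keep the loop alive would be to just trap the parser error and skip the offending link. A minimal sketch of that idea, assuming the parse happens in a small helper (the function name here is made up, not part of d2o.py):

from HTMLParser import HTMLParseError
from BeautifulSoup import BeautifulSoup

def parse_or_skip(html, link):
    # If the fetched markup can't be parsed as HTML, skip this link instead of crashing.
    try:
        return BeautifulSoup(html)
    except HTMLParseError:
        print "skipping %s: not parseable as HTML" % link
        return None

But that feels like papering over the problem; ideally I'd like to detect non-HTML content before handing it to BS at all.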
Update:
Yehiah's answer did the trick. For reference, here's some code to get the content type:
import urllib

def check_for_html(link):
    # Return the Content-Type header, e.g. 'text/html; charset=utf-8' for an HTML page.
    out = urllib.urlopen(link)
    return out.info().getheader('Content-Type')
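And here is roughly how it gets wired into the loop. The filtering step below is only a sketch, not the exact d2o.py code; it assumes d_links is the list of URLs pulled from Delicious:

from BeautifulSoup import BeautifulSoup

for link in d_links:
    # Only hand the page to BeautifulSoup when the server says it's HTML.
    content_type = check_for_html(link)
    if content_type and content_type.startswith('text/html'):
        print "processing %s" % link
        soup = BeautifulSoup(urllib.urlopen(link).read())
        # ... pull <link rel="alternate"> feed URLs out of soup here ...
    else:
        print "skipping %s (Content-Type: %s)" % (link, content_type)

One caveat: urllib.urlopen issues a full GET even when only the headers are read, so each HTML page ends up being fetched twice here. That's fine for a small script, but worth reusing the response if it ever matters.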