I did some research on this a while back and ended up implementing this approach [pdf] in Python. The final version of my implementation also did some cleanup before applying the algorithm, such as removing head/script/iframe elements, hidden elements, and so on, but this was the core of it.
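For a rough idea, that cleanup stage could be sketched like this with lxml (strip_noise is a hypothetical helper of mine, not the original code, and it only catches elements hidden via an inline display:none style):

import lxml.html

def strip_noise(html):
    """Drop elements that never contain useful article text."""
    # Structural noise: head, scripts, stylesheets, embedded frames.
    for el in html.xpath('//head | //script | //style | //iframe'):
        el.drop_tree()
    # Crude check for elements hidden with an inline style attribute.
    for el in html.xpath('//*[contains(@style, "display:none") or contains(@style, "display: none")]'):
        el.drop_tree()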
Here's a function implementing a (very) naive "link list" discriminator, which tries to remove elements with a high link-to-text ratio (i.e. navigation bars, menus, ads, etc.):
def link_list_discriminator(html, min_links=2, ratio=0.5):
    """Remove blocks with a high link to text ratio.

    These are typically navigation elements.

    Based on an algorithm described in:
    http://www.psl.cs.columbia.edu/crunch/WWWJ.pdf

    :param html: ElementTree object.
    :param min_links: Minimum number of links inside an element
        before considering a block for deletion.
    :param ratio: Ratio of link text to all text before an element
        is considered for deletion.
    """
    def collapse(strings):
        return u''.join(filter(None, (text.strip() for text in strings)))

    # FIXME: This doesn't account for top-level text...
    for el in html.xpath('//*'):
        anchor_text = el.xpath('.//a//text()')
        anchor_count = len(anchor_text)
        anchor_text = collapse(anchor_text)
        text = collapse(el.xpath('.//text()'))
        anchors = float(len(anchor_text))
        total = float(len(text))
        if anchor_count > min_links and total and anchors / total > ratio:
            el.drop_tree()
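And a minimal usage sketch, assuming the page has already been parsed with lxml.html (the file name is just a placeholder):

import lxml.html

with open('page.html') as f:
    doc = lxml.html.fromstring(f.read())

link_list_discriminator(doc)  # drop link-heavy blocks (nav bars, menus, ads)
print(lxml.html.tostring(doc, pretty_print=True, encoding='unicode'))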
It actually worked quite well on the test corpus I used, but getting it to be highly reliable required a lot of tuning.