I have a web page that I'm currently parsing with BeautifulSoup, but it's slow, so I decided to try lxml since I've read it's very fast.
Anyway, I'm struggling to get my code to iterate over the section I want; I don't know how to use lxml, and I can't find clear documentation.
Anyway, here's my code:
import urllib2
from lxml import etree

def wgetUrl(target):
    try:
        req = urllib2.Request(target)
        req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
        response = urllib2.urlopen(req)
        outtxt = response.read()
        response.close()
    except Exception:
        return ''
    return outtxt

newUrl = 'http://www.tv3.ie/3player'
data = wgetUrl(newUrl)

parser = etree.HTMLParser()
tree = etree.fromstring(data, parser)

for elem in tree.iter("div"):
    print elem.tag, elem.attrib, elem.text
This returns all the DIVs, but how do I specify that I only want to iterate over the div with id='slider1'?
div {'style': 'position: relative;', 'id': 'slider1'} None
This doesn't work:

for elem in tree.iter("slider1"):

I know this is probably a silly question, but I can't figure it out..

Thanks!
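For context, `iter()` matches element *tag names* only, so passing an id like "slider1" will never match anything; filtering on an attribute is an XPath job. A minimal, self-contained sketch (the HTML here is made-up stand-in markup, not the real tv3.ie page):

```python
from lxml import etree

# stand-in markup imitating the page structure
html = """
<html><body>
  <div id="header">ignore me</div>
  <div style="position: relative;" id="slider1">
    <div id="gridshow">content</div>
  </div>
</body></html>
"""

tree = etree.fromstring(html, etree.HTMLParser())

# "div" selects by tag; [@id='slider1'] filters on the id attribute
for elem in tree.xpath("//div[@id='slider1']"):
    print(elem.tag, elem.attrib)
```

`tree.iter("div")` would still visit all three divs here; the XPath predicate is what narrows it to the one you want.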
*EDIT*

With your help I added this code, and I now have the following output:
for elem in tree.xpath("//div[@id='slider1']//div[@id='gridshow']"):
    print elem[0].tag, elem[0].attrib, elem[0].text
    print elem[1].tag, elem[1].attrib, elem[1].text
    print elem[2].tag, elem[2].attrib, elem[2].text
    print elem[3].tag, elem[3].attrib, elem[3].text
    print elem[4].tag, elem[4].attrib, elem[4].text
Output:
a {'href': '/3player/show/392/57922/1/Tallafornia', 'title': '3player | Tallafornia, 11/01/2013. The Tallafornia crew are back, living in a beachside villa in Santa Ponsa, Majorca. As the crew settle in, the egos grow bigger than ever and cause tension'} None
h3 {} None
span {'id': 'gridcaption'} The Tallafornia crew are back, living in a beachside vill...
span {'id': 'griddate'} 11/01/2013
span {'id': 'gridduration'} 00:27:52
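As a side note, the hard-coded `elem[0]`..`elem[4]` indices can be replaced by iterating the element's children directly, which keeps working if the number of children changes. A sketch using made-up markup that mirrors the output above:

```python
from lxml import etree

# hypothetical markup mirroring the gridshow structure
html = """
<div id="slider1"><div id="gridshow">
  <a href="/3player/show/392/57922/1/Tallafornia">link</a>
  <h3>heading</h3>
  <span id="gridcaption">The Tallafornia crew are back...</span>
  <span id="griddate">11/01/2013</span>
  <span id="gridduration">00:27:52</span>
</div></div>
"""

tree = etree.fromstring(html, etree.HTMLParser())
for grid in tree.xpath("//div[@id='slider1']//div[@id='gridshow']"):
    # an Element is iterable over its child elements
    for child in grid:
        print(child.tag, child.attrib, child.text)
```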
This is all great, but I'm missing part of the a tag above. Could the parser be mishandling the code?

I'm not getting the following:
<img alt="3player | Tallafornia, 11/01/2013. The Tallafornia crew are back, living in a beachside villa in Santa Ponsa, Majorca. As the crew settle in, the egos grow bigger than ever and cause tension" src='http://content.tv3.ie/content/videos/0378/tallaforniaep2_fri11jan2013_3player_1_57922_180x102.jpg' class='shadow smallroundcorner'></img>
Any idea why it doesn't pull this out?
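One likely explanation: `elem.attrib` lists only that element's *own* attributes, and `elem.text` is only the text before its first child, so a nested `<img>` inside the `<a>` never shows up in either. You have to descend into the `<a>` explicitly. A minimal sketch with stand-in markup:

```python
from lxml import etree

# stand-in markup: the <img> is a child of the <a>, not an attribute of it
html = """
<div id="gridshow">
  <a href="/3player/show/392/57922/1/Tallafornia" title="Tallafornia">
    <img alt="Tallafornia" src="http://content.tv3.ie/thumb.jpg"
         class="shadow smallroundcorner"/>
  </a>
</div>
"""

tree = etree.fromstring(html, etree.HTMLParser())
a = tree.xpath("//div[@id='gridshow']/a")[0]
# printing a.attrib shows only href/title; select the nested img separately
img = a.xpath(".//img")[0]
print(img.get('src'), img.get('class'))
```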
Thanks again, very helpful post..