
There are a number of links (hrefs) on pages whose URL contains "alpha", and I want to collect these hrefs from 20 different pages and append them to the end of the base URL (second-to-last line of the code). The hrefs can be found in a table where the td has the classes mys-elastic mys-left, and the a is obviously the element that carries the href attribute. Any help would be greatly appreciated, as I have been working on this for about a week.

import scraperwiki
import lxml.html

for i in range(1, 11):
    # The HTML scraper for the 20 pages that list all the exhibitors
    url = 'http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page=' + str(i) + '#GotoResults'
    print url

    # Convert the HTML to an lxml object
    list_html = scraperwiki.scrape(url)
    root = lxml.html.fromstring(list_html)
    href_element = root.cssselect('td.mys-elastic mys-left a')

    for element in href_element:
        href = element.get('href')
        print href

        page_html = scraperwiki.scrape('http://ahr13.mapyourshow.com' + href)
        print page_html

2 Answers


No need to use JavaScript - it is all in the HTML:

import scraperwiki
import lxml.html

html = scraperwiki.scrape('http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page=1')

root = lxml.html.fromstring(html)
# get the links
hrefs = root.xpath('//td[@class="mys-elastic mys-left"]/a')

for href in hrefs:
   print 'http://ahr13.mapyourshow.com' + href.attrib['href'] 
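
If you would rather keep the cssselect approach from the question, note that the two class names need to be joined with dots rather than separated by a space. A minimal sketch of that variant, using the same page and the same scraperwiki/lxml setup as above:

import scraperwiki
import lxml.html

html = scraperwiki.scrape('http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page=1')
root = lxml.html.fromstring(html)

# 'td.mys-elastic.mys-left a' matches an <a> inside a <td> carrying both classes;
# 'td.mys-elastic mys-left a' would look for a <mys-left> tag and match nothing.
for a in root.cssselect('td.mys-elastic.mys-left a'):
    print 'http://ahr13.mapyourshow.com' + a.get('href')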
answered 2013-01-03T10:17:59.587
import lxml.html as lh
from itertools import chain

URL = 'http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page='
BASE = 'http://ahr13.mapyourshow.com'
# Pull the href attributes straight from the exhibitor table cells
path = '//table[2]//td[@class="mys-elastic mys-left"]//@href'

results = []
for i in range(1, 21):
    # Parse each of the 20 listing pages and collect the absolute URLs
    doc = lh.parse(URL + str(i))
    results.append(BASE + href for href in doc.xpath(path))

print list(chain(*results))
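
To then fetch each exhibitor page, as the last two lines of the question's code attempt, you can flatten the collected hrefs into a list (instead of only printing them, since the generators can only be consumed once) and feed each URL back into scraperwiki.scrape. A rough sketch continuing from the snippet above; printing the page <title> is just an illustrative placeholder:

import scraperwiki

# Flatten once and keep the URLs, because the generators in results can only be iterated a single time
exhibitor_urls = list(chain(*results))

for exhibitor_url in exhibitor_urls:
    # Download each exhibitor detail page and parse it with lxml
    page_html = scraperwiki.scrape(exhibitor_url)
    detail_root = lh.fromstring(page_html)
    print detail_root.findtext('.//title')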
answered 2013-01-02T10:25:51.530