I'm using Scrapy and I want to crawl search results from www.rentler.com. I've already searched the site for the city I'm interested in; here is the link to those search results:
https://www.rentler.com/search?Location=millcreek&MaxPrice=
All of the listings I'm interested in are on that page, and I want to step through them recursively, one by one.
Each listing appears under:
<body>/<div id="wrap">/<div class="container search-res">/<ul class="search-results"><li class="result">
Each result has an <a class="search-result-link" href="/listing/288910">.
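In a scrapy shell those hrefs come back with an XPath along these lines (my quick check, assuming the markup above; old-style shells bind hxs for you):

# scrapy shell "https://www.rentler.com/search?Location=millcreek&MaxPrice="
hxs.select('//a[@class="search-result-link"]/@href').extract()
# expect relative links like '/listing/288910'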
I know I need to create a rule for the CrawlSpider so that it picks up that href and appends it to the base URL. That way it can go into each listing page and grab the data I'm interested in.
I think I need something like this:
rules = (Rule(SgmlLinkExtractor(allow="not sure what to insert here, but this is where I think the href pattern goes"), callback='parse_item', follow=True),)
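Given the href pattern above, my guess is that the allow regex should match the listing links, something like:

rules = (
    # follow every /listing/<id> link found on the results page
    Rule(SgmlLinkExtractor(allow=(r'/listing/\d+',)),
         callback='parse_item', follow=True),
)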
Update *Thanks for the input. Here is what I have now; it seems to run but doesn't scrape:*
import re
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from KSL.items import KSLitem

class KSL(CrawlSpider):
    name = "ksl"
    allowed_domains = ["https://www.rentler.com"]
    start_urls = ["https://www.rentler.com/ksl/listing/index/?sid=17403849&nid=651&ad=452978"]
    regex_pattern = '<a href="listing/(.*?) class="search-result-link">'

    def parse_item(self, response):
        items = []
        hxs = HtmlXPathSelector(response)
        sites = re.findall(regex_pattern, "https://www.rentler.com/search?location=millcreek&MaxPrice=")
        for site in sites:
            item = KSLitem()
            item['price'] = site.select('//div[@class="price"]/text()').extract()
            item['address'] = site.select('//div[@class="address"]/text()').extract()
            item['stats'] = site.select('//ul[@class="basic-stats"]/li/div[@class="count"]/text()').extract()
            item['description'] = site.select('//div[@class="description"]/div/p/text()').extract()
            items.append(item)
        return items
Thoughts?
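For reference, here is the rule-based variant I was originally aiming for, as an untested sketch; the class name is a placeholder, and the allow regex and XPaths are the same guesses as above:

# Untested sketch of the rule-based approach; the allow regex and the
# selector paths are guesses based on the markup described above.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from KSL.items import KSLitem

class KSLRuleSpider(CrawlSpider):
    name = "ksl_rules"
    allowed_domains = ["rentler.com"]  # domain only, no scheme
    start_urls = ["https://www.rentler.com/search?Location=millcreek&MaxPrice="]

    # Follow every /listing/<id> link on the results page and hand each
    # listing page to parse_item.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/listing/\d+',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # One listing page per callback, so one item per response.
        hxs = HtmlXPathSelector(response)
        item = KSLitem()
        item['price'] = hxs.select('//div[@class="price"]/text()').extract()
        item['address'] = hxs.select('//div[@class="address"]/text()').extract()
        item['stats'] = hxs.select('//ul[@class="basic-stats"]/li/div[@class="count"]/text()').extract()
        item['description'] = hxs.select('//div[@class="description"]/div/p/text()').extract()
        return item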