I am trying to restrict Scrapy to a specific XPath location for following links. The XPath is correct (according to Chrome's XPath Helper extension), but when I run my CrawlSpider I get a syntax error on my Rule.
My spider code is:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from tutorial.items import BassItem
import logging
from scrapy.log import ScrapyFileLogObserver

logfile = open('testlog.log', 'w')
log_observer = ScrapyFileLogObserver(logfile, level=logging.DEBUG)
log_observer.start()


class BassSpider(CrawlSpider):
    name = "bass"
    allowed_domains = ["talkbass.com"]
    start_urls = ["http://www.talkbass.com/forum/f126"]

    rules = [Rule(SgmlLinkExtractor(allow=['/f126/index*']), callback='parse_item', follow=True, restrict_xpaths=('//a[starts-with(@title,"Next ")]')]

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        ads = hxs.select('//table[@id="threadslist"]/tbody/tr/td[@class="alt1"][2]/div')
        items = []
        for ad in ads:
            item = BassItem()
            item['title'] = ad.select('a/text()').extract()
            item['link'] = ad.select('a/@href').extract()
            items.append(item)
        return items
So inside the rule, the XPath '//a[starts-with(@title,"Next ")]' raises an error and I don't know why, since the XPath itself is valid. All I want is for the spider to crawl each "Next page" link. Can anyone help? If you need any other part of my code, let me know.
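If it helps, this is roughly the rule I think I should end up with. It assumes restrict_xpaths belongs on SgmlLinkExtractor rather than on Rule, and that my parentheses were unbalanced; I may be wrong about that, so treat it as a sketch of the intent rather than a confirmed fix:

    rules = [
        Rule(
            SgmlLinkExtractor(
                allow=['/f126/index*'],
                # restrict link extraction to the "Next page" anchors only
                restrict_xpaths=('//a[starts-with(@title, "Next ")]',),
            ),
            callback='parse_item',
            follow=True,
        ),
    ]

The goal is the same as described above: only the pagination links matched by that XPath should be followed, and each listing page should still be handed to parse_item.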