
Despite having set up a rule, I can't figure out why my Scrapy CrawlSpider isn't following the pagination.

However, if I change start_urls to http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/ and comment out parse_start_url, I do scrape more items for that page.

My goal is to scrape all the categories. Does anyone know what I'm doing wrong?

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from bitcointravel.items import BitcointravelItem


class BitcoinSpider(CrawlSpider):
    name = "bitcoin"
    allowed_domains = ["bitcoin.travel"]
    start_urls = [
        "http://bitcoin.travel/categories/"
    ]

    rules = (

        # Follow pagination links like ".../page/2/" and parse them with parse_items
        Rule(LinkExtractor(allow=(r'.+/page/\d+/$',), restrict_xpaths=('//a[@class="next page-numbers"]',)),
             callback='parse_items', follow=True),
    )

    def parse_start_url(self, response):
        for sel in response.xpath("//ul[@class='maincat-list']/li"):
            url = sel.xpath('a/@href').extract()[0]
            if url == 'http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/':
                # url = 'http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/'
                yield scrapy.Request(url, callback=self.parse_items)


    def parse_items(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        for sel in response.xpath("//div[@class='grido']"):
            item = BitcointravelItem()
            item['name'] = sel.xpath('a/@title').extract()
            item['website'] = sel.xpath('a/@href').extract()
            yield item

Here is the result:

{'downloader/request_bytes': 574,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 98877,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'dupefilter/filtered': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 2, 15, 13, 44, 17, 37859),
 'item_scraped_count': 24,
 'log_count/DEBUG': 28,
 'log_count/INFO': 8,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2016, 2, 15, 13, 44, 11, 250892)}
2016-02-15 14:44:17 [scrapy] INFO: Spider closed (finished)

The item count is supposed to be 55, not 24.


1 Answer


For http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/, the HTML source contains links that match your rule's pattern '.+/page/\d+/$':

<a class='page-numbers' href='http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/2/'>2</a>
<a class='page-numbers' href='http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/3/'>3</a>

The page http://bitcoin.travel/categories/, on the other hand, does not contain any such links; it mainly contains links to other category pages:

...
<li class="cat-item cat-item-227"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-coffee-tea-supplies/" title="The best Coffee &amp; Tea Supplies businesses where you can spend your bitcoins!">Coffee &amp; Tea Supplies</a>  </li>
<li class="cat-item cat-item-50"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-cupcakes/" title="The best Cupcakes businesses where you can spend your bitcoins!">Cupcakes</a>  </li>
<li class="cat-item cat-item-229"><a href="http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-distilleries/" title="The best Distilleries businesses where you can spend your bitcoins!">Distilleries</a>  </li>
...
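
You can check this directly with Python's re module (a minimal sketch; as far as I know LinkExtractor applies its allow patterns with search semantics, and the example URLs are taken from the snippets above):

import re

# The pattern from the spider's Rule
pattern = re.compile(r'.+/page/\d+/$')

# Pagination link from the listing page: matches the rule
print(bool(pattern.search('http://bitcoin.travel/listing-category/bitcoin-hotels-and-travel/page/2/')))  # True

# Category link from /categories/: does not match, so the rule never fires there
print(bool(pattern.search('http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-cupcakes/')))  # False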

If you want to scrape more, you need to add rules to crawl these category pages, for example as sketched below.
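
One way to do that (a sketch, not tested against the live site; the allow pattern '/listing-category/' is an assumption based on the URLs shown above) is to add a second Rule so that CrawlSpider follows the category links from /categories/, after which your existing pagination rule takes over on each category page:

    rules = (
        # Assumed pattern: follow links into category listing pages such as
        # http://bitcoin.travel/listing-category/bitcoin-food/bitcoin-cupcakes/
        Rule(LinkExtractor(allow=(r'/listing-category/',)),
             callback='parse_items', follow=True),

        # Existing rule: follow pagination links within a category
        Rule(LinkExtractor(allow=(r'.+/page/\d+/$',),
                           restrict_xpaths=('//a[@class="next page-numbers"]',)),
             callback='parse_items', follow=True),
    )

With both rules in place, the hard-coded URL check in parse_start_url should no longer be necessary, since CrawlSpider will discover the category links from the start page on its own.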

Answered on 2016-02-15T16:19:54.320