
How do I use Scrapy to crawl across a site? I want to extract the body text of every page matching http://www.saylor.org/site/syllabus.php?cid=NUMBER, where NUMBER runs from 1 to about 400.

I wrote this spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from syllabi.items import SyllabiItem

class SyllabiSpider(CrawlSpider):

    name = 'saylor'
    allowed_domains = ['saylor.org']
    start_urls = ['http://www.saylor.org/site/syllabus.php?cid=']
    rules = [Rule(SgmlLinkExtractor(allow=['\d+']), 'parse_syllabi')]

    def parse_syllabi(self, response):
        x = HtmlXPathSelector(response)

        syllabi = SyllabiItem()
        syllabi['url'] = response.url
        syllabi['body'] = x.select("/html/body/text()").extract()
        return syllabi

But it doesn't work. I understand that it's looking for links inside that start_url, which isn't really what I want. I want to iterate over those pages directly. Does that make sense?

Thanks for the help.


1 Answer


Try this:

from scrapy.spider import BaseSpider
from scrapy.http import Request
from syllabi.items import SyllabiItem

class SyllabiSpider(BaseSpider):
    name = 'saylor'
    allowed_domains = ['saylor.org']
    max_cid = 400

    def start_requests(self):
        # cids run from 1 to max_cid inclusive; a bare range(self.max_cid)
        # would start at 0 and stop at max_cid - 1
        for i in range(1, self.max_cid + 1):
            yield Request('http://www.saylor.org/site/syllabus.php?cid=%d' % i,
                    callback=self.parse_syllabi)

    def parse_syllabi(self, response):
        syllabi = SyllabiItem()
        syllabi['url'] = response.url
        syllabi['body'] = response.body

        return syllabi
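The key idea is overriding `start_requests` to generate one request per course id, so no `Rule` or link extractor is involved. Stripped of Scrapy, the URL generation itself can be sketched as plain Python (the helper name `syllabus_urls` is just for illustration):

```python
def syllabus_urls(max_cid):
    """Yield one syllabus URL per course id, 1 through max_cid inclusive."""
    for cid in range(1, max_cid + 1):
        yield 'http://www.saylor.org/site/syllabus.php?cid=%d' % cid

# In the spider, each of these URLs would be wrapped in a scrapy
# Request whose callback builds the SyllabiItem.
urls = list(syllabus_urls(400))
```

Because the URLs are generated up front, the spider never depends on what links happen to appear on any one page.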
Answered 2012-12-28T21:53:04.813