
This spider is supposed to loop through http://www.saylor.org/site/syllabus.php?cid=NUMBER, where NUMBER runs from 1 to 404, and scrape each page. But for some reason it skips pages in the loop — a lot of pages. For example, it skips 1 through 16. Can anyone tell me what's going on?

Here is the code:

 from scrapy.spider import BaseSpider
 from scrapy.http import Request
 from opensyllabi.items import OpensyllabiItem

 import boto

 class OpensyllabiSpider(BaseSpider):
      name = 'saylor'
      allowed_domains = ['saylor.org']
      max_cid = 405
      i = 1

      def start_requests(self):
          for self.i in range(1, self.max_cid):
              yield Request('http://www.saylor.org/site/syllabus.php?cid=%d' % self.i, callback=self.parse_Opensyllabi)

      def parse_Opensyllabi(self, response):
          Opensyllabi = OpensyllabiItem()
          Opensyllabi['url'] = response.url
          Opensyllabi['body'] = response.body

          filename = ("/root/opensyllabi/data/saylor" + '%d' % self.i)
          syllabi = open(filename, "w")
          syllabi.write(response.body)

          return Opensyllabi

1 Answer


The problem is that `self.i` is shared state on the spider instance. Scrapy downloads requests concurrently, so by the time a response reaches `parse_Opensyllabi`, the loop in `start_requests` has already moved on (or finished), and most responses are written to the same filename, overwriting each other. Pass the index along with each request via `meta` instead:

 class OpensyllabiSpider(BaseSpider):
      name = 'saylor'
      allowed_domains = ['saylor.org']
      max_cid = 405

      def start_requests(self):
          for i in range(1, self.max_cid):
              yield Request('http://www.saylor.org/site/syllabus.php?cid=%d' % i,
                    meta={'index': i},
                    callback=self.parse_Opensyllabi)

      def parse_Opensyllabi(self, response):
          Opensyllabi = OpensyllabiItem()
          Opensyllabi['url'] = response.url
          Opensyllabi['body'] = response.body

          filename = "/root/opensyllabi/data/saylor%d" % response.meta['index']
          # Use a with-block so the file is closed even if write() fails.
          with open(filename, "w") as syllabi:
              syllabi.write(response.body)

          return Opensyllabi
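The underlying bug can be reproduced without Scrapy at all. Below is a minimal sketch (the `FakeSpider` class is a hypothetical stand-in, not real Scrapy code) showing that when a generator-style loop stores its index on the instance, callbacks that run after the loop has advanced all see the final value:

```python
# Minimal sketch of the shared-state bug: URLs are formatted eagerly
# inside the loop, but callbacks run later, after self.i has changed.
class FakeSpider:
    def __init__(self):
        self.i = 1
        self.pending = []

    def start_requests(self):
        # Same pattern as the question: the loop variable is self.i.
        for self.i in range(1, 5):
            # The URL captures the current value immediately...
            self.pending.append('cid=%d' % self.i)

    def parse(self, url):
        # ...but the filename is derived from self.i at callback time.
        return ('saylor%d' % self.i, url)

spider = FakeSpider()
spider.start_requests()                    # loop finishes; self.i == 4
results = [spider.parse(u) for u in spider.pending]
print(results)
# Every "file" gets the same name, saylor4, even though the URLs differ.
```

Carrying the index inside each request (here via `meta`, or simply as a local variable bound per iteration) is what breaks this coupling, which is exactly what the accepted answer does.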
Answered 2013-01-16T09:22:36.570