
I am scraping Google Scholar author profile pages. When I try to scrape each author's titles I run into a problem: each author has more than 500 titles, which are displayed via a "Load more" button, and I have the link for the load-more pagination.

The problem is that I want to count the total number of titles an author has, but I am not getting the correct total. When I scrape only 2 authors it comes back correct, but when I scrape all the authors on a page (there are 10 authors per page), I get wrong totals.

My code is below. Where is my logic wrong?

    def parse(self, response):
        # loop over all the authors on a page
        for author_sel in response.xpath('.//div[@class="gsc_1usr"]'):
            link = author_sel.xpath(".//h3[@class='gs_ai_name']/a/@href").extract_first()
            url = response.urljoin(link)
            yield scrapy.Request(url, callback=self.parse_url_to_crawl)

    def parse_url_to_crawl(self, response):
        url = response.url
        yield scrapy.Request(url + '&cstart=0&pagesize=100', callback=self.parse_profile_content)

    def parse_profile_content(self, response):
        url = response.url
        idx = url.find("user")
        _id = url[idx+5:idx+17]
        name = response.xpath("//div[@id='gsc_prf_in']/text()").extract()[0]
        # extracts the titles on this page
        tmp = response.xpath('//tbody[@id="gsc_a_b"]/tr[@class="gsc_a_tr"]/td[@class="gsc_a_t"]/a/text()').extract()

        item = GooglescholarItem()
        n = len(tmp)
        titles = []
        if tmp:
            # read the current cstart offset out of the URL digit by digit
            offset = 0; d = 0
            idx = url.find('cstart=')
            idx += 7
            while url[idx].isdigit():
                offset = offset*10 + int(url[idx])
                idx += 1
                d += 1
            self.n += len(tmp)
            titles.append(self.n)
            self.totaltitle = titles[-1]
            logging.info('inside if URL is: %s', url[:idx-d] + str(offset+100) + '&pagesize=100')
            yield scrapy.Request(url[:idx-d] + str(offset+100) + '&pagesize=100', self.parse_profile_content)

        else:
            item = GooglescholarItem()
            item['name'] = name
            item['totaltitle'] = self.totaltitle
            self.n = 0
            self.totaltitle = 0
            yield item

Here are the results, but the total title values are wrong. Klaus-Robert Müller has 837 titles in total and Tom Mitchell has 264. See the attached screenshot for the logs. I know something is wrong with my logic.

 [
 {"name": "Carl Edward Rasmussen", "totaltitle": 1684},
 {"name": "Carlos Guestrin", "totaltitle": 365},
 {"name": "Chris Williams", "totaltitle": 1072},
 {"name": "Ruslan Salakhutdinov", "totaltitle": 208},
 {"name": "Sepp Hochreiter", "totaltitle": 399},
 {"name": "Tom Mitchell", "totaltitle": 282},
 {"name": "Johannes Brandstetter", "totaltitle": 1821},
 {"name": "Klaus-Robert Müller", "totaltitle": 549},
 {"name": "Ajith Abraham", "totaltitle": 1259},
 {"name": "Amit kumar", "totaltitle": 1127}
 ]
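The mixed-up totals are what you get when all authors share the spider's single `self.n` / `self.totaltitle` counters while Scrapy crawls their pages concurrently. A minimal plain-Python sketch (hypothetical names, no Scrapy) of how interleaved pages corrupt a shared counter:

```python
# Hypothetical sketch: two authors' pages arrive interleaved, the way
# Scrapy's concurrent requests allow. One shared counter on the spider
# mixes the authors' running totals together.

class Spider:
    def __init__(self):
        self.n = 0  # one counter shared by every in-flight author

    def parse_page(self, author, page_title_count):
        self.n += page_title_count
        return author, self.n

spider = Spider()

# Responses interleave: author A page 1, author B page 1, author A page 2.
results = [
    spider.parse_page('A', 100),  # A's running total should be 100
    spider.parse_page('B', 100),  # B's total now includes A's 100 titles
    spider.parse_page('A', 50),   # A's total now includes B's page too
]
print(results)  # [('A', 100), ('B', 200), ('A', 250)]
```

With a single shared counter, B is credited with A's titles and vice versa, which matches the inflated/deflated numbers above.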



1 Answer


I think you are overcomplicating this. I suggest using `request.meta` to carry your `offset` and the running article count:

    def parse_url_to_crawl(self, response):
        url = response.url
        # extract the user id from the profile URL (here using the same
        # fixed-width slice as in the question)
        idx = url.find('user')
        user = url[idx+5:idx+17]
        yield scrapy.Request(url + '&cstart=0&pagesize=100',
                             callback=self.parse_profile_content,
                             meta={'offset': 0, 'user': user})

    def parse_profile_content(self, response):
        offset = response.meta['offset']
        total_articles = response.meta.get('total_articles', 0)
        user = response.meta['user']
        # parse and count all articles on this page
        titles = response.xpath('//tbody[@id="gsc_a_b"]/tr[@class="gsc_a_tr"]/td[@class="gsc_a_t"]/a/text()').extract()
        total_articles += len(titles)
        if titles:  # a non-empty page may be followed by another one
            offset += 100
            yield scrapy.Request(
                'https://scholar.google.com/citations?hl=en&user={user}&cstart={offset}&pagesize=100'.format(offset=offset, user=user),
                callback=self.parse_profile_content,
                meta={'offset': offset, 'user': user, 'total_articles': total_articles})
        else:
            yield {'user': user, 'total_articles': total_articles}
Answered 2019-11-15T00:50:45.180
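As a side note, the digit-by-digit parsing of `cstart` (and the fixed-width slicing of the `user` id) in the question can be replaced with the standard library. A small sketch, using a made-up user id:

```python
from urllib.parse import urlparse, parse_qs

def user_and_offset(url):
    """Read the user id and the cstart offset from a Scholar profile URL."""
    params = parse_qs(urlparse(url).query)
    user = params['user'][0]
    # cstart is absent on the first page; treat that as offset 0
    offset = int(params.get('cstart', ['0'])[0])
    return user, offset

url = 'https://scholar.google.com/citations?hl=en&user=abc123def456&cstart=200&pagesize=100'
print(user_and_offset(url))  # ('abc123def456', 200)
```

This avoids both the hand-rolled digit loop and the assumption that the user id is always exactly 12 characters.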