
I can scrape http://www.example.com/view-books/0/new-releases with Nokogiri, but how do I scrape all of the pages? There are five of them, but since I don't know where the last page is, how do I keep going?

Here is the program I wrote:

require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'csv'

urls=Array['http://www.example.com/view-books/0/new-releases?layout=grid&_pop=flyout',
           'http://www.example.com/view-books/1/bestsellers',
           'http://www.example.com/books/pre-order?query=book&cid=1&layout=list&ref=4b116001-01a6-4f53-8da7-945b74fdb253'
      ]

@titles=Array.new
@prices=Array.new
@descriptions=Array.new
@page=Array.new

urls.each do |url|
  doc=Nokogiri::HTML(open(url))
  puts doc.at_css("title").text

  doc.css('.fk-inf-scroll-item').each do |item|
   @prices << item.at_css(".final-price").text
   @titles << item.at_css(".fk-srch-title-text").text
   @descriptions << item.at_css(".fk-item-specs-section").text
   @page << item.at_css(".fk-inf-pageno").text rescue nil
  end

  (0..@prices.length - 1).each do |index|
    puts "title: #{@titles[index]}"
    puts "price: #{@prices[index]}"
    puts "description: #{@descriptions[index]}"
  #  puts "pageno. : #{@page[index]}"
    puts ""
  end
end

CSV.open("result.csv", "wb") do |row|
  row << ["title", "price", "description","pageno"]
  (0..@prices.length - 1).each do |index|
    row << [@titles[index], @prices[index], @descriptions[index],@page[index]]
  end
end

As you can see, I've hardcoded the URLs. How would you suggest I scrape the entire books category? I've been trying Anemone, but I can't get it to work.


2 Answers


If you inspect what actually happens when more results are loaded, you'll see that the site is really fetching JSON with an offset parameter.

So you can get the five pages like this:

http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start=0
http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start=20
http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start=40
http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start=60
http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start=80

Basically, you keep incrementing inf-start and fetching results until you get a result-set smaller than 20, which should be the last page.
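That loop can be sketched in Ruby. This is untested against the live site, and `fetch_page` is a hypothetical stand-in for the actual HTTP request plus JSON parsing; only the offset arithmetic mirrors what the page does:

```ruby
PAGE_SIZE = 20

# Collect items by stepping the inf-start offset in PAGE_SIZE increments.
# fetch_page is a callable that takes an offset and returns that page's
# array of items (in the real case: fetch the URL, parse the JSON).
def collect_all_items(fetch_page)
  items  = []
  offset = 0
  loop do
    batch = fetch_page.call(offset)
    items.concat(batch)
    break if batch.size < PAGE_SIZE # a short page means we've hit the end
    offset += PAGE_SIZE
  end
  items
end

# Example with a fake fetcher simulating 88 total results:
fake = ->(offset) { (offset...[offset + PAGE_SIZE, 88].min).to_a }
puts collect_all_items(fake).size # => 88
```

With 88 results the fetcher is called at offsets 0, 20, 40, 60, and 80; the last batch has only 8 items, which stops the loop.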

answered 2012-09-04T08:37:34.110

Here is an untested code sample that does what yours does, just written a bit more concisely:

require 'nokogiri'
require 'open-uri'
require 'csv'

urls = %w[
  http://www.flipkart.com/view-books/0/new-releases?layout=grid&_pop=flyout
  http://www.flipkart.com/view-books/1/bestsellers
  http://www.flipkart.com/books/pre-order?query=book&cid=1&layout=list&ref=4b116001-01a6-4f53-8da7-945b74fdb253
]

CSV.open('result.csv', 'wb') do |row|

  row << ['title', 'price', 'description', 'pageno']

  urls.each do |url|

    doc = Nokogiri::HTML(open(url))
    puts doc.at_css('title').text

    doc.css('.fk-inf-scroll-item').each do |item|

      page = {
        titles:       item.at_css('.fk-srch-title-text').text,
        prices:       item.at_css('.final-price').text,
        descriptions: item.at_css('.fk-item-specs-section').text,
        pageno:       (item.at_css('.fk-inf-pageno').text rescue nil), # parens: a bare rescue modifier isn't valid inside a hash literal
      }

      page.each do |k, v|
        puts '%s: %s' % [k.to_s, v]
      end

      row << page.values
    end
  end
end

There is some useful data in the page you can use to determine how many records you need to retrieve:

var config = {container: "#search_results", page_size: 20, counterSelector: ".fk-item-count", totalResults: 88, "startParamName" : "inf-start", "startFrom": 20};

To access those values, use something like:

doc.at('script[type="text/javascript+fk-onload"]').text =~ /page_size: (\d+).+totalResults: (\d+).+"startFrom": (\d+)/
page_size, total_results, start_from = $1, $2, $3
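Once `page_size` and `totalResults` are known, generating every offset URL is simple arithmetic. A small sketch (the values are hardcoded here to stand in for the regex captures, which come back as strings and would need `.to_i`):

```ruby
# Values as they would be extracted from the page's config script
page_size     = 20
total_results = 88

base = 'http://www.flipkart.com/view-books/0/new-releases?response-type=json&inf-start='

# One URL per page: offsets 0, 20, 40, 60, 80 cover 88 results.
urls = 0.step(total_results - 1, page_size).map { |offset| "#{base}#{offset}" }
puts urls
```

Feeding each of those URLs through the scraping loop above retrieves the whole category without hardcoding page counts.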
answered 2012-09-05T06:18:03.333