I recently started learning about web crawlers, and I built a sample crawler with Ruby, Anemone, and MongoDB for storage. I'm testing the crawler on a large public site that potentially has billions of links.
crawler.rb is indexing the correct information, but when I check memory usage in Activity Monitor it shows the memory growing constantly. I have only run the crawler for about 6-7 hours, and memory is at 1.38GB for mongod and 1.37GB for the Ruby process. It seems to be growing by roughly 100MB every hour.
It looks like I might have a memory leak? Is there a more optimal way to achieve the same crawl without the memory escalating out of control, so that it can run for longer?
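To narrow down whether the growth is on the Ruby side or elsewhere, I'm thinking of sampling the process RSS and live object count from a background thread while the crawl runs. This is only a rough sketch (the 5-minute interval and log format are placeholders I made up), assuming MRI on macOS/Linux where ps -o rss= reports kilobytes:

# memory_probe: start this thread before Anemone.crawl.
Thread.new do
  loop do
    rss_kb = `ps -o rss= -p #{Process.pid}`.to_i   # resident set size in KB
    counts = ObjectSpace.count_objects             # core API, no extra require
    live   = counts[:TOTAL] - counts[:FREE]        # rough live-object count
    puts format('[mem] rss=%.1fMB live_objects=%d', rss_kb / 1024.0, live)
    sleep 300                                      # sample every 5 minutes
  end
end

If live_objects keeps climbing alongside RSS, the leak is likely retained Ruby objects rather than something in mongod.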
# Sample web_crawler.rb with Anemone, MongoDB and Ruby.
require 'anemone'
# Monkey-patch Anemone::Page so the serialized hash no longer includes
# the page's body, keeping the documents stored in MongoDB small.
module Anemone
  class Page
    # Serialize only the lightweight fields.
    def to_hash
      { 'url'     => @url.to_s,
        'links'   => links.map(&:to_s),
        'code'    => @code,
        'visited' => @visited,
        'depth'   => @depth,
        'referer' => @referer.to_s,
        'fetched' => @fetched }
    end

    # Rebuild a Page from the stored hash.
    def self.from_hash(hash)
      page = self.new(URI(hash['url']))
      { '@links'   => hash['links'].map { |link| URI(link) },
        '@code'    => hash['code'].to_i,
        '@visited' => hash['visited'],
        '@depth'   => hash['depth'].to_i,
        '@referer' => hash['referer'],
        '@fetched' => hash['fetched']
      }.each do |var, value|
        page.instance_variable_set(var, value)
      end
      page
    end
  end
end
Anemone.crawl("http://www.example.com/",
              :discard_page_bodies => true,
              :threads => 1,
              :obey_robots_txt => true,
              :user_agent => "Example - Web Crawler",
              :large_scale_crawl => true) do |anemone|
  anemone.storage = Anemone::Storage.MongoDB
  # Only follow links whose URL matches /example/.
  anemone.focus_crawl do |page|
    page.links.delete_if { |link| (link.to_s =~ /example/).nil? }
  end
  # Only process pages whose URL matches /example/.
  anemone.on_pages_like(/example/) do |page|
    regex = /some type of regex/
    example = page.doc.css('#example_div').inner_html.gsub(regex, '') rescue next

    # Append the extracted text to the output file.
    unless example.nil? || example.empty?
      File.open('example.txt', 'a') { |f| f.puts example }
    end

    # Release the parsed Nokogiri document so it can be garbage-collected.
    page.discard_doc!
  end
end
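If the retained objects turn out to be on the Ruby side, one variant I'm considering (again only a sketch; the 500-page interval and the shared handle are arbitrary choices of mine, not Anemone requirements) opens example.txt once for the whole crawl instead of once per page, and forces a GC pass every few hundred pages:

# Sketch: same crawl, single shared file handle, periodic GC pass.
pages_seen = 0

File.open('example.txt', 'a') do |out|
  Anemone.crawl("http://www.example.com/", :discard_page_bodies => true) do |anemone|
    anemone.storage = Anemone::Storage.MongoDB

    anemone.on_pages_like(/example/) do |page|
      text = page.doc.css('#example_div').inner_html rescue nil
      out.puts(text) if text && !text.empty?
      page.discard_doc!                       # drop the parsed Nokogiri doc

      pages_seen += 1
      GC.start if (pages_seen % 500).zero?    # only helps if objects are collectable
    end
  end
end

GC.start is just a band-aid if something is still holding references, but it should at least make the growth curve easier to read.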