I'm trying to switch an existing crawler from EventMachine to Celluloid. To get some hands-on experience with Celluloid, I generated a bunch of static files, 150 kB each, on a Linux box served by nginx.
The code at the bottom should do the job, but there is one issue with it that I don't understand: with a pool size of 50, the code should spawn at most 50 threads, yet it spawns 180. If I increase the pool size to 100, 330 threads are spawned. What is going wrong there?
A simple copy-and-paste of the code should work on any box, so any hints are welcome :)
#!/usr/bin/env jruby

require 'celluloid'
require 'open-uri'

URLS = *(1..1000)
@@requests = 0
@@responses = 0
@@total_size = 0

class Crawler
  include Celluloid

  def fetch(id)
    uri = URI("http://data.asconix.com/#{id}")
    puts "Request ##{@@requests += 1} -> #{uri}"
    begin
      req = open(uri).read
    rescue Exception => e
      puts e
    end
  end
end

URLS.each_slice(50).map do |idset|
  pool = Crawler.pool(size: 50)
  crawlers = idset.to_a.map do |id|
    begin
      pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end
  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end
  pool.terminate
  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end

$stdout.print "Requests total: #{@@requests}\n"
$stdout.print "Responses total: #{@@responses}\n"
$stdout.print "Size total: #{@@total_size} bytes\n"
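To make the thread numbers comparable, here is the sketch I use to measure the peak thread count. It is plain stdlib Ruby (the 50 worker threads and the `jobs` queue are my stand-ins for the Celluloid pool, and the `sleep` stands in for the HTTP fetch), so it runs the same everywhere and gives a baseline of what "50 workers" should look like:

```ruby
require 'thread'

# A sampler thread records the peak size of Thread.list while the
# batch runs, so we see the actual maximum rather than trusting the
# configured pool size.
peak = 0
sampler = Thread.new do
  loop do
    n = Thread.list.size
    peak = n if n > peak
    sleep 0.01
  end
end

jobs = Queue.new
(1..1000).each { |id| jobs << id }

workers = Array.new(50) do
  Thread.new do
    until jobs.empty?
      begin
        id = jobs.pop(true)   # non-blocking pop
      rescue ThreadError
        break                 # queue drained by another worker
      end
      sleep 0.001             # stand-in for the HTTP fetch
    end
  end
end
workers.each(&:join)
sampler.kill

puts "Peak thread count: #{peak}"  # main + sampler + up to 50 workers
```

With plain threads the peak never exceeds main + sampler + 50 workers, which is the behavior I expected from the Celluloid pool as well.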
By the way, the same problem occurs when I define the pool outside the each_slice loop:
....
@pool = Crawler.pool(size: 50)

URLS.each_slice(50).map do |idset|
  crawlers = idset.to_a.map do |id|
    begin
      @pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end
  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end
  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end
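For comparison, the same slice-and-collect pattern written with bare threads (Thread#value playing the role of Celluloid's Future#value, and a trivial computation standing in for `open(uri).read`) creates exactly one thread per id in the current slice and never more:

```ruby
responses = 0
(1..1000).each_slice(50) do |idset|
  # One thread per id in this slice; Thread#value blocks until the
  # thread finishes, just like Future#value blocks on the result.
  futures = idset.map do |id|
    Thread.new(id) do |i|
      i.to_s.bytesize   # stand-in for open(uri).read.size
    end
  end
  futures.each do |f|
    f.value
    responses += 1
  end
end
puts "Responses total: #{responses}"  # prints "Responses total: 1000"
```

Here each batch is fully joined before the next slice starts, so at most 50 worker threads exist at any time, which is exactly what I expected the pooled version to do.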