Here is my test:
require 'csv'
require 'benchmark'

small_csv_file = "test_data_small_50k.csv"
large_csv_file = "test_data_large_20m.csv"

Benchmark.bmbm do |x|
  x.report("Small: CSV #parse") do
    CSV.parse(File.open(small_csv_file), headers: true) do |row|
      row
    end
  end

  x.report("Small: CSV #foreach") do
    CSV.foreach(small_csv_file, headers: true) do |row|
      row
    end
  end

  x.report("Large: CSV #parse") do
    CSV.parse(File.open(large_csv_file), headers: true) do |row|
      row
    end
  end

  x.report("Large: CSV #foreach") do
    CSV.foreach(large_csv_file, headers: true) do |row|
      row
    end
  end
end
Rehearsal -------------------------------------------------------
Small: CSV #parse 0.950000 0.000000 0.950000 ( 0.952493)
Small: CSV #foreach 0.950000 0.000000 0.950000 ( 0.953514)
Large: CSV #parse 659.000000 2.120000 661.120000 (661.280070)
Large: CSV #foreach 648.240000 1.800000 650.040000 (650.062963)
------------------------------------------- total: 1313.060000sec
user system total real
Small: CSV #parse 1.000000 0.000000 1.000000 ( 1.143246)
Small: CSV #foreach 0.990000 0.000000 0.990000 ( 0.984285)
Large: CSV #parse 646.380000 1.890000 648.270000 (648.286247)
Large: CSV #foreach 651.010000 1.840000 652.850000 (652.874320)
The benchmark was run on a MacBook Pro with 8 GB of memory. The results show that the performance of CSV#parse and CSV#foreach is statistically equivalent.
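If you want to reproduce the numbers, you can generate an input file of the same size yourself. A minimal sketch is below; the actual column layout of test_data_small_50k.csv is not shown in this answer, so the fields used here are assumptions.

require 'csv'

# Hypothetical generator for the small test file -- the real columns of
# test_data_small_50k.csv are not given in this answer, so these are assumed.
CSV.open("test_data_small_50k.csv", "w") do |csv|
  csv << %w[id name value]            # assumed header row
  50_000.times do |i|
    csv << [i, "name_#{i}", rand(1000)]
  end
end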
With the headers option removed (small file only):
require 'csv'
require 'benchmark'

small_csv_file = "test_data_small_50k.csv"

Benchmark.bmbm do |x|
  x.report("Small: CSV #parse") do
    CSV.parse(File.open(small_csv_file)) do |row|
      row
    end
  end

  x.report("Small: CSV #foreach") do
    CSV.foreach(small_csv_file) do |row|
      row
    end
  end
end
Rehearsal -------------------------------------------------------
Small: CSV #parse 0.590000 0.010000 0.600000 ( 0.597775)
Small: CSV #foreach 0.620000 0.000000 0.620000 ( 0.621950)
---------------------------------------------- total: 1.220000sec
user system total real
Small: CSV #parse 0.590000 0.000000 0.590000 ( 0.597594)
Small: CSV #foreach 0.610000 0.000000 0.610000 ( 0.604537)
Notes:
large_csv_file has a different structure than small_csv_file, so comparing results across the two files (i.e., rows per second) would be inaccurate.
small_csv_file has 50,000 records.
large_csv_file has 1,000,000 records.
Setting the headers option to true significantly degrades performance because a hash is built for every field in each row (see the HeadersConverters section: http://www.ruby-doc.org/stdlib-2.0.0/libdoc/csv/rdoc/CSV.html).
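To illustrate the last point, here is a small sketch (using inline sample data, not the test files above): without headers each yielded row is a plain Array of strings, while with headers: true each row is a CSV::Row built from header/field pairs, which is where the extra per-row cost comes from.

require 'csv'

# Sample data for illustration only -- not the benchmark test files.
csv_data = "name,age\nAlice,30\nBob,25\n"

# Without headers: rows are plain Arrays.
CSV.parse(csv_data) { |row| p row }      # ["name", "age"], ["Alice", "30"], ...

# With headers: true, rows are CSV::Row objects keyed by header name.
CSV.parse(csv_data, headers: true) do |row|
  p row.class       # CSV::Row
  p row["name"]     # lookup by header name
end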