
I have an array of arrays of hashes.

items = 
[{ "item_9": 152 }, { "item_2": 139 }, { "item_13": 138 }, { "item_72": 137 }, { "item_125": 140 }, { "item_10": 144 }]
[{ "item_9": 152 }, { "item_2": 139 }, { "item_13": 138 }, { "item_72": 137 }, { "item_125": 140 }, { "item_10": 146 }]
[{ "item_9": 152 }, { "item_2": 139 }, { "item_13": 138 }, { "item_72": 137 }, { "item_125": 140 }, { "item_10": 147 }]
[{ "item_9": 152 }, { "item_2": 139 }, { "item_13": 138 }, { "item_72": 137 }, { "item_125": 140 }, { "item_10": 148 }]
[{ "item_9": 152 }, { "item_2": 139 }, { "item_13": 138 }, { "item_72": 137 }, { "item_125": 140 }, { "item_10": 153 }]
.
.
.
[{ "item_9": 152 }, { "item_2": 145 }, { "item_13": 150 }, { "item_72": 154 }, { "item_125": 141 }, { "item_10": 144 }]
[{ "item_9": 152 }, { "item_2": 145 }, { "item_13": 150 }, { "item_72": 154 }, { "item_125": 141 }, { "item_10": 146 }]
[{ "item_9": 152 }, { "item_2": 145 }, { "item_13": 150 }, { "item_72": 154 }, { "item_125": 141 }, { "item_10": 147 }]
[{ "item_9": 152 }, { "item_2": 145 }, { "item_13": 150 }, { "item_72": 154 }, { "item_125": 141 }, { "item_10": 148 }]
[{ "item_9": 152 }, { "item_2": 145 }, { "item_13": 150 }, { "item_72": 154 }, { "item_125": 141 }, { "item_10": 153 }]

What I want to do is turn it into an array of hashes...

items =
{"item_9"=>152, "item_2"=>145, "item_13"=>150, "item_72"=>154, "item_125"=>141, "item_10"=>146}
{"item_9"=>152, "item_2"=>145, "item_13"=>150, "item_72"=>154, "item_125"=>141, "item_10"=>147}
{"item_9"=>152, "item_2"=>145, "item_13"=>150, "item_72"=>154, "item_125"=>141, "item_10"=>148}
{"item_9"=>152, "item_2"=>145, "item_13"=>150, "item_72"=>154, "item_125"=>141, "item_10"=>153}

I believe I can use...

items.map! { |item| item.reduce({}, :merge) }

However, its performance is not great. At least, not good enough when you have 140 million records. Is there a better way?

3 Answers

Maybe a bit longer, but it works faster:

require 'benchmark'

items = [
  [{ item_9: 152 }, { item_2: 139 }, { item_13: 138 }, { item_72: 137 }, { item_125: 140 }, { item_10: 146 }],
  [{ item_9: 152 }, { item_2: 139 }, { item_13: 138 }, { item_72: 137 }, { item_125: 140 }, { item_10: 147 }],
  [{ item_9: 152 }, { item_2: 139 }, { item_13: 138 }, { item_72: 137 }, { item_125: 140 }, { item_10: 148 }],
  [{ item_9: 152 }, { item_2: 139 }, { item_13: 138 }, { item_72: 137 }, { item_125: 140 }, { item_10: 153 }],
  [{ item_9: 152 }, { item_2: 145 }, { item_13: 150 }, { item_72: 154 }, { item_125: 141 }, { item_10: 144 }],
  [{ item_9: 152 }, { item_2: 145 }, { item_13: 150 }, { item_72: 154 }, { item_125: 141 }, { item_10: 146 }],
  [{ item_9: 152 }, { item_2: 145 }, { item_13: 150 }, { item_72: 154 }, { item_125: 141 }, { item_10: 147 }],
]

n = 100_000
Benchmark.bm do |b|
  b.report do
    n.times do |i|
      items.map { |item| item.reduce({}, :merge) }
    end
  end
  b.report do
    n.times do |i|
      # the winner
      items.map { |item| item.reduce({}, :update) }
    end
  end
  b.report do
    n.times do |i|
      items.map { |item| item.inject({}) { |acc, h| acc.update(h) } }
    end
  end
end

As @tokland suggested, item.reduce({}, :update) is faster:

   user     system      total        real
6.300000   0.080000   6.380000 (  6.386180)
1.840000   0.020000   1.860000 (  1.860073)
2.220000   0.020000   2.240000 (  2.237294)

Thanks @tokland

Answered 2012-11-08T20:38:25.967

Since performance is a concern, it may be time to reach for plain for loops and yield, and to look for interesting facts about your data, if there are any. For example, your data seems to contain many duplicates. Is that a pattern or a coincidence?
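Along those lines, a hand-rolled loop might look like this (a sketch with made-up sample rows, not benchmarked here): one accumulator per row, updated in place, with no intermediate hashes.

```ruby
items = [
  [{ "item_9" => 152 }, { "item_2" => 139 }, { "item_10" => 146 }],
  [{ "item_9" => 152 }, { "item_2" => 145 }, { "item_10" => 147 }]
]

merged = items.map do |row|
  acc = {}                        # one accumulator per row
  row.each { |h| acc.update(h) }  # merge in place
  acc
end
# merged is an array of flat hashes, one per input row
```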

Answered 2012-11-08T20:29:53.143

If you are sure you have a two-level array (no other arrays inside the pairs) and exactly two items in each pair, this is faster and shorter:

array = [['A', 'a'], ['B', 'b'], ['C', 'c']]
hash = Hash[*array.flatten]

For arrays nested more than two levels deep, this gives wrong results or even an error (for some inputs).

array = [['A', 'a'], ['B', 'b'], ['C', ['a', 'b', 'c']]]
hash = Hash[*array.flatten]
# => {"A"=>"a", "B"=>"b", "C"=>"a", "b"=>"c"}

However, if you are running Ruby 1.8.7 or later, you can pass an argument to Array#flatten and have it flatten only one level:

# on Ruby 1.8.7+
hash = Hash[*array.flatten(1)]
# => {"A"=>"a", "B"=>"b", "C"=>["a", "b", "c"]}
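To apply the same Hash[] idea to the question's data, note that each element there is a one-key hash rather than a two-element array, so the hashes need to be turned into key/value pairs first. On Ruby 1.9.2+ that could look like this (a sketch):

```ruby
row = [{ "item_9" => 152 }, { "item_2" => 139 }, { "item_10" => 146 }]

# Convert each single-key hash to a [key, value] pair, then build one hash.
hash = Hash[row.flat_map(&:to_a)]
# => {"item_9"=>152, "item_2"=>139, "item_10"=>146}
```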
Answered 2015-09-30T03:50:51.160