
How would I apply an aggregation function (e.g. sum() or max()) to the bins in a vector?

That is, if I have:

  1. a vector x of values, of length N
  2. a vector b of bin labels, of length N

such that b indicates which bin each value in x belongs to. For every possible value in b, I want to apply the aggregation function func() to all the values of x that fall into that bin.

>> x = [1,2,3,4,5,6]
>> b = ["a","b","a","a","c","c"]    

The output should be two vectors (say the aggregation function is the product):

>>(labels, y) = apply_to_bins(values = x, bins = b, func = prod)

labels = ["a","b","c"]
y = [12, 2, 30]

I would like to do this as elegantly as possible in numpy (or just plain Python), since obviously I could just for-loop over it.
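
For reference, the obvious loop I could write looks something like this (the question is about a more elegant or vectorized replacement for it):

def apply_to_bins_loop(values, bins, func):
    # collect the values belonging to each bin, then aggregate each group
    groups = {}
    for value, label in zip(values, bins):
        groups.setdefault(label, []).append(value)
    labels = sorted(groups)
    return labels, [func(groups[label]) for label in labels]

# (labels, y) = apply_to_bins_loop(x, b, prod)   # -> ['a', 'b', 'c'], [12, 2, 30]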


6 Answers


You can do this with pandas groupby:

import numpy as np
import pandas as pd

def with_pandas_groupby(func, x, b):
    grouped = pd.Series(x).groupby(b)
    return grouped.agg(func)

Using the OP's example:

>>> x = [1,2,3,4,5,6]
>>> b = ["a","b","a","a","c","c"]
>>> with_pandas_groupby(np.prod, x, b)
a    12
b     2
c    30
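
The result is a pandas Series whose index holds the bin labels; if you want the two separate vectors from the question, you can pull them out of the Series, for example (a small sketch):

>>> agg = with_pandas_groupby(np.prod, x, b)
>>> labels, y = list(agg.index), list(agg.values)   # labels -> ['a', 'b', 'c'], y -> [12, 2, 30]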

I was just interested in the speed, so I compared with_pandas_groupby with some of the functions given in senderle's answer:

  • apply_to_bins_groupby

     3 levels,      100 values: 175 us per loop
     3 levels,     1000 values: 1.16 ms per loop
     3 levels,  1000000 values: 1.21 s per loop
    
    10 levels,      100 values: 304 us per loop
    10 levels,     1000 values: 1.32 ms per loop
    10 levels,  1000000 values: 1.23 s per loop
    
    26 levels,      100 values: 554 us per loop
    26 levels,     1000 values: 1.59 ms per loop
    26 levels,  1000000 values: 1.27 s per loop
    
  • apply_to_bins3

     3 levels,      100 values: 136 us per loop
     3 levels,     1000 values: 259 us per loop
     3 levels,  1000000 values: 205 ms per loop
    
    10 levels,      100 values: 297 us per loop
    10 levels,     1000 values: 447 us per loop
    10 levels,  1000000 values: 262 ms per loop
    
    26 levels,      100 values: 617 us per loop
    26 levels,     1000 values: 795 us per loop
    26 levels,  1000000 values: 299 ms per loop
    
  • with_pandas_groupby

     3 levels,      100 values: 365 us per loop
     3 levels,     1000 values: 443 us per loop
     3 levels,  1000000 values: 89.4 ms per loop
    
    10 levels,      100 values: 369 us per loop
    10 levels,     1000 values: 453 us per loop
    10 levels,  1000000 values: 88.8 ms per loop
    
    26 levels,      100 values: 382 us per loop
    26 levels,     1000 values: 466 us per loop
    26 levels,  1000000 values: 89.9 ms per loop
    

So pandas is the fastest for large numbers of items. On top of that, the number of levels (bins) has no big influence on the computation time. (Note that the timings start from numpy arrays, so the creation of the pandas.Series is included.)
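
To make that note concrete, here is roughly what each timed pandas call covers, shown on the small example from the question (a sketch, not the benchmark code itself):

import numpy as np
import pandas as pd

x = np.array([1, 2, 3, 4, 5, 6])
b = np.array(["a", "b", "a", "a", "c", "c"])

s = pd.Series(x)                 # construction from the numpy array -- included in the timing
grouped = s.groupby(b)           # grouping by the label array
result = grouped.agg(np.prod)    # the aggregation itself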

I generated the data like this:

def gen_data(nlevels, size):
    choices = 'abcdefghijklmnopqrstuvwxyz'
    levels = np.asarray([l for l in choices[:nlevels]])
    index = np.random.random_integers(0, levels.size - 1, size)
    b = levels[index]
    x = np.arange(1, size + 1)
    return x, b

and then ran the benchmark in ipython like this (with function_to_time standing in for each of the functions being compared):

In [174]: for nlevels in (3, 10, 26):
   .....:     for size in (100, 1000, 10e5):
   .....:         x, b = gen_data(nlevels, size)
   .....:         print '%2d levels, ' % nlevels, '%7d values:' % size,
   .....:         %timeit function_to_time(np.prod, x, b)
   .....:     print
Answered 2012-07-28T10:18:40.670

If you're going to be doing this sort of thing, I'd strongly suggest using the Pandas package. It has a nice groupby() method that you can call on a DataFrame or Series, which makes this kind of thing easy.

Example:


In [450]: lst = [1, 2, 3, 1, 2, 3]
In [451]: s = Series([1, 2, 3, 10, 20, 30], lst)
In [452]: grouped = s.groupby(level=0)
In [455]: grouped.sum()
Out[455]: 
1    11
2    22
3    33
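
The same idea applied to the arrays from the question, putting the bin labels into the index so that level=0 groups by them (a quick sketch):

from pandas import Series
import numpy as np

x = [1, 2, 3, 4, 5, 6]
b = ["a", "b", "a", "a", "c", "c"]

s = Series(x, index=b)          # bin labels become the index
result = s.groupby(level=0).agg(np.prod)
# result is a Series: a -> 12, b -> 2, c -> 30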

Answered 2012-07-26T13:43:25.763

There are a couple of interesting solutions that don't depend on groupby. The first is very simple:

def apply_to_bins(func, values, bins):
    return zip(*((bin, func(values[bins == bin])) for bin in set(bins)))

This uses "fancy indexing" instead of grouping, and performs reasonably well for small inputs; a list-comprehension-based variation does a bit better (see timings below).

def apply_to_bins2(func, values, bins):
    bin_names = sorted(set(bins))
    return bin_names, [func(values[bins == bin]) for bin in bin_names]

These have the advantage of being quite readable. Both also perform better than groupby for small inputs, but they become much slower for large inputs, especially when there are many bins; their performance is O(n_items * n_bins). A different numpy-based approach is slower for small inputs, but much faster for large ones, especially large inputs with lots of bins:

import numpy

def apply_to_bins3(func, values, bins):
    bins_argsort = bins.argsort()
    values = values[bins_argsort]
    bins = bins[bins_argsort]
    group_indices = (bins[1:] != bins[:-1]).nonzero()[0] + 1
    groups = numpy.split(values, group_indices)
    return numpy.unique(bins), [func(g) for g in groups]
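
To see what the sort-and-split approach is doing, here is a rough trace of its intermediate steps on the arrays from the question (the values in the comments are just for illustration):

import numpy

x = numpy.array([1, 2, 3, 4, 5, 6])
b = numpy.array(["a", "b", "a", "a", "c", "c"])

order = b.argsort()                                       # [0, 2, 3, 1, 4, 5]
sorted_values, sorted_bins = x[order], b[order]           # [1, 3, 4, 2, 5, 6], ['a' 'a' 'a' 'b' 'c' 'c']
boundaries = (sorted_bins[1:] != sorted_bins[:-1]).nonzero()[0] + 1   # [3, 4]
groups = numpy.split(sorted_values, boundaries)           # [1, 3, 4], [2], [5, 6]
labels, y = numpy.unique(sorted_bins), [numpy.prod(g) for g in groups]
# labels -> ['a', 'b', 'c'], y -> [12, 2, 30]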

Some tests. First for small inputs:

>>> def apply_to_bins_groupby(func, x, b):
...         return zip(*[(k, np.product(x[list(v)]))
...                  for k, v in groupby(np.argsort(b), key=lambda i: b[i])])
... 
>>> x = numpy.array([1, 2, 3, 4, 5, 6])
>>> b = numpy.array(['a', 'b', 'a', 'a', 'c', 'c'])
>>> 
>>> %timeit apply_to_bins(numpy.prod, x, b)
10000 loops, best of 3: 31.9 us per loop
>>> %timeit apply_to_bins2(numpy.prod, x, b)
10000 loops, best of 3: 29.6 us per loop
>>> %timeit apply_to_bins3(numpy.prod, x, b)
10000 loops, best of 3: 122 us per loop
>>> %timeit apply_to_bins_groupby(numpy.prod, x, b)
10000 loops, best of 3: 67.9 us per loop

apply_to_bins3 doesn't fare too well here, but it's still less than an order of magnitude slower than the fastest. It does better when n_items gets larger:

>>> x = numpy.arange(1, 100000)
>>> b_names = numpy.array(['a', 'b', 'c', 'd'])
>>> b = b_names[numpy.random.random_integers(0, 3, 99999)]
>>> 
>>> %timeit apply_to_bins(numpy.prod, x, b)
10 loops, best of 3: 27.8 ms per loop
>>> %timeit apply_to_bins2(numpy.prod, x, b)
10 loops, best of 3: 27 ms per loop
>>> %timeit apply_to_bins3(numpy.prod, x, b)
100 loops, best of 3: 13.7 ms per loop
>>> %timeit apply_to_bins_groupby(numpy.prod, x, b)
10 loops, best of 3: 124 ms per loop

And when n_bins goes up, the first two approaches take too long to be worth showing here -- around five seconds. apply_to_bins3 is the clear winner:

>>> x = numpy.arange(1, 100000)
>>> from itertools import product
>>> bn_product = product(['a', 'b', 'c', 'd', 'e'], repeat=5)
>>> b_names = numpy.array(list(''.join(s) for s in bn_product))
>>> b = b_names[numpy.random.random_integers(0, len(b_names) - 1, 99999)]
>>> 
>>> %timeit apply_to_bins3(numpy.prod, x, b)
10 loops, best of 3: 109 ms per loop
>>> %timeit apply_to_bins_groupby(numpy.prod, x, b)
1 loops, best of 3: 205 ms per loop

Overall, groupby is probably fine in most cases, but it's unlikely to scale well, as this thread suggests. Using the pure(er) numpy approach is slower for small inputs, but only by a bit; the tradeoff is a good one.

Answered 2012-07-26T19:58:21.543
import itertools as it
import operator as op

def apply_to_bins(values, bins, func):
    pairs = sorted(zip(bins, values), key=op.itemgetter(0))
    return {k: func(x[1] for x in v)
            for k, v in it.groupby(pairs, key=op.itemgetter(0))}

x = [1,2,3,4,5,6]
b = ["a","b","a","a","c","c"]   

print apply_to_bins(x, b, sum) # returns {'a': 8, 'b': 2, 'c': 11}
print apply_to_bins(x, b, max) # returns {'a': 4, 'b': 2, 'c': 6}
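
And if you want the two vectors in the form the question asks for, the dict can be unpacked like this, for example:

result = apply_to_bins(x, b, sum)
labels = sorted(result)
y = [result[k] for k in labels]
# labels -> ['a', 'b', 'c'], y -> [8, 2, 11]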
Answered 2012-07-26T11:51:20.863
>>> from itertools import groupby
>>> import numpy as np
>>> x = np.array([1, 2, 3, 4, 5, 6])
>>> b = ["a", "b", "a", "a", "c", "c"]
>>> zip(*[(k, np.product(x[list(v)]))
...       for k, v in groupby(np.argsort(b), key=lambda i: b[i])])
[('a', 'b', 'c'), (12, 2, 30)]

Or, step by step:

>>> np.argsort(b)
array([0, 2, 3, 1, 4, 5])

This is the list of indices into b (or x), sorted by the keys in b.

>>> [(k, list(v)) for k, v in groupby(np.argsort(b), key=lambda i: b[i])]
[('a', [0, 2, 3]), ('b', [1]), ('c', [4, 5])]

The indices grouped by key in b.

>>> [(k, x[list(v)]) for k, v in groupby(np.argsort(b), key=lambda i: b[i])]
[('a', array([1, 3, 4])), ('b', array([2])), ('c', array([5, 6]))]

Using the indices to pick the right elements out of x.

>>> [(k, np.product(x[list(v)]))
...  for k, v in groupby(np.argsort(b), key=lambda i: b[i])]
[('a', 12), ('b', 2), ('c', 30)]

Applying np.product.

So, putting it all together:

def apply_to_bins(values, bins, op):
    grouped = groupby(np.argsort(bins), key=lambda i: bins[i])
    applied = [(bin, op(values[list(indices)])) for bin, indices in grouped]
    return zip(*applied)
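
Called on the arrays from the question, this should reproduce the result of the one-liner at the top:

>>> apply_to_bins(x, b, np.product)
[('a', 'b', 'c'), (12, 2, 30)]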
Answered 2012-07-26T11:58:34.983

In the special case where the aggregation function func can be expressed as a sum, bincount seems faster than pandas. For example, when func is the product, it can be expressed as a sum of logarithms, and we can do:

import numpy as np
import numpy.random as nr

x = np.arange( 1000000 )
b = nr.randint( 0, 100, 1000000 )

def apply_to_bincount( values, bins ) :
    logy = np.bincount( bins, weights=np.log( values ) )
    return np.arange(len(logy)), np.exp( logy )

%%timeit
apply_to_bincount( x, b )
10 loops, best of 3: 16.9 ms per loop

%%timeit
with_pandas_groupby( np.prod, x, b )
10 loops, best of 3: 36.2 ms per loop
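
Note that bincount needs integer bin labels, whereas the question uses strings; if the aggregation really is a plain sum, one way to handle that is to map the labels to integer codes first, e.g. with np.unique(..., return_inverse=True) (a small sketch):

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
b = np.array(["a", "b", "a", "a", "c", "c"])

labels, codes = np.unique(b, return_inverse=True)   # string labels -> integer codes 0..n_bins-1
y = np.bincount(codes, weights=x)                   # per-bin sums
# labels -> ['a', 'b', 'c'], y -> [8., 2., 11.]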
Answered 2014-06-21T17:35:28.010