
I'm trying to compute the cross products of many 3x1 vector pairs as quickly as possible. This

import numpy as np

n = 10000
a = np.random.rand(n, 3)
b = np.random.rand(n, 3)
np.cross(a, b)

gives the correct answer, but motivated by the answers to similar questions, I thought einsum would get me somewhere. I found that both

eijk = np.zeros((3, 3, 3))
eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1
eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1

np.einsum('ijk,aj,ak->ai', eijk, a, b)
np.einsum('iak,ak->ai', np.einsum('ijk,aj->iak', eijk, a), b)

compute the cross product, but their performance is disappointing: both approaches are much slower than np.cross:

%timeit np.cross(a, b)
1000 loops, best of 3: 628 µs per loop
%timeit np.einsum('ijk,aj,ak->ai', eijk, a, b)
100 loops, best of 3: 9.02 ms per loop
%timeit np.einsum('iak,ak->ai', np.einsum('ijk,aj->iak', eijk, a), b)
100 loops, best of 3: 10.6 ms per loop
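(As a sanity check, not part of the original timings, both einsum formulations can be verified against np.cross with np.allclose:)

```python
import numpy as np

# Levi-Civita tensor, as defined above
eijk = np.zeros((3, 3, 3))
eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1
eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1

n = 10000
a = np.random.rand(n, 3)
b = np.random.rand(n, 3)

ref = np.cross(a, b)
# c_i = eps_ijk * a_j * b_k; both variants should match np.cross
assert np.allclose(np.einsum('ijk,aj,ak->ai', eijk, a, b), ref)
assert np.allclose(np.einsum('iak,ak->ai', np.einsum('ijk,aj->iak', eijk, a), b), ref)
```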

Any ideas on how to improve the einsums?


2 Answers


einsum() performs more multiplications than cross(), and in the latest NumPy versions cross() doesn't create many temporary arrays, so einsum() can't be faster than cross().

Here is the old code of cross:

x = a[1]*b[2] - a[2]*b[1]
y = a[2]*b[0] - a[0]*b[2]
z = a[0]*b[1] - a[1]*b[0]

Here is the new code of cross:

multiply(a1, b2, out=cp0)
tmp = array(a2 * b1)
cp0 -= tmp
multiply(a2, b0, out=cp1)
multiply(a0, b2, out=tmp)
cp1 -= tmp
multiply(a0, b1, out=cp2)
multiply(a1, b0, out=tmp)
cp2 -= tmp
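The buffer-reuse strategy of the new code can be sketched in plain NumPy; the function name fast_cross below is mine, not NumPy's, and this is only an illustration of the idea, not NumPy's actual implementation:

```python
import numpy as np

def fast_cross(a, b, out=None):
    """Row-wise cross product of (n, 3) arrays, reusing one temporary
    buffer as in the newer np.cross code path."""
    a0, a1, a2 = a[:, 0], a[:, 1], a[:, 2]
    b0, b1, b2 = b[:, 0], b[:, 1], b[:, 2]
    if out is None:
        out = np.empty_like(a)
    cp0, cp1, cp2 = out[:, 0], out[:, 1], out[:, 2]
    tmp = np.empty_like(a0)            # single reusable temporary
    np.multiply(a1, b2, out=cp0)       # x = a1*b2 - a2*b1
    np.multiply(a2, b1, out=tmp)
    cp0 -= tmp
    np.multiply(a2, b0, out=cp1)       # y = a2*b0 - a0*b2
    np.multiply(a0, b2, out=tmp)
    cp1 -= tmp
    np.multiply(a0, b1, out=cp2)       # z = a0*b1 - a1*b0
    np.multiply(a1, b0, out=tmp)
    cp2 -= tmp
    return out

a = np.random.rand(10000, 3)
b = np.random.rand(10000, 3)
assert np.allclose(fast_cross(a, b), np.cross(a, b))
```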

To speed it up further, you need Cython or Numba.

Answered 2016-09-23T14:48:00.207

You can use matrix-multiplication with np.tensordot to lose one of the dimensions at the first level and then use np.einsum to lose the other dimension, like so -

np.einsum('aik,ak->ai',np.tensordot(a,eijk,axes=([1],[1])),b)

Alternatively, we can perform a broadcasted elementwise multiplication between a and b using np.einsum and then lose the two dimensions in one go with np.tensordot, like so -

np.tensordot(np.einsum('aj,ak->ajk', a, b),eijk,axes=([1,2],[1,2]))

We could also perform the elementwise multiplication by introducing new axes with a[...,None]*b[:,None], but it seems to slow things down.
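(A quick correctness check, not in the original answer: both tensordot-based expressions can be verified against np.cross, assuming eijk is the Levi-Civita tensor defined in the question.)

```python
import numpy as np

# Levi-Civita tensor from the question
eijk = np.zeros((3, 3, 3))
eijk[0, 1, 2] = eijk[1, 2, 0] = eijk[2, 0, 1] = 1
eijk[0, 2, 1] = eijk[2, 1, 0] = eijk[1, 0, 2] = -1

a = np.random.rand(100, 3)
b = np.random.rand(100, 3)
ref = np.cross(a, b)

# contract j first, then k
out1 = np.einsum('aik,ak->ai', np.tensordot(a, eijk, axes=([1], [1])), b)
# outer product a_j * b_k first, then contract j and k together
out2 = np.tensordot(np.einsum('aj,ak->ajk', a, b), eijk, axes=([1, 2], [1, 2]))

assert np.allclose(out1, ref)
assert np.allclose(out2, ref)
```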


Though these approaches are a good improvement over the proposed ones based solely on np.einsum, they fail to beat np.cross.

Runtime test -

In [26]: # Setup input arrays
    ...: n = 10000
    ...: a = np.random.rand(n, 3)
    ...: b = np.random.rand(n, 3)
    ...: 

In [27]: # Time already posted approaches
    ...: %timeit np.cross(a, b)
    ...: %timeit np.einsum('ijk,aj,ak->ai', eijk, a, b)
    ...: %timeit np.einsum('iak,ak->ai', np.einsum('ijk,aj->iak', eijk, a), b)
    ...: 
1000 loops, best of 3: 298 µs per loop
100 loops, best of 3: 5.29 ms per loop
100 loops, best of 3: 9 ms per loop

In [28]: %timeit np.einsum('aik,ak->ai',np.tensordot(a,eijk,axes=([1],[1])),b)
1000 loops, best of 3: 838 µs per loop

In [30]: %timeit np.tensordot(np.einsum('aj,ak->ajk',a,b),eijk,axes=([1,2],[1,2]))
1000 loops, best of 3: 882 µs per loop

Answered 2016-09-23T14:23:50.370