What is the best way to perform array operations when some repeated indices are summed over while others are not? It seems that I may have to use einsum for these operations, but it would be better if there were a tensordot alternative with a flag for dimensions that are aligned but not summed.

Does anyone know of a fast numerical routine (maybe in lapack?) that behaves like tensordot, except that some axes can be aligned without being summed over?
==
Here is some example code to show the kind of array operation I need. The operation I need is performed by method_sum, method_einsum, and method_matmul. A similar operation that also sums over the matching j-axis is performed by method2_einsum and method2_tensordot.

Comparing the timings, it seems that tensordot should be able to beat einsum on the first problem. However, it has no way to align axes without summing over them.
import numpy as np

# Shapes of arrays
I = 200
J = 50
K = 200
L = 100

a = np.ones((I, J, L))
b = np.ones((J, K, L))

# The desired product has a sum over the l-axis

## Use broadcasting to multiply and sum over the last dimension
def method_sum(a, b):
    "Multiply arrays and sum over last dimension."
    c = (a[:, :, None, :] * b[None, :, :, :]).sum(-1)
    return c

## Use einsum to multiply arrays and sum over the l-axis
def method_einsum(a, b):
    "Multiply arrays and sum over last dimension."
    c = np.einsum('ijl,jkl->ijk', a, b)
    return c

## Use matmul to multiply arrays and sum over one of the axes
def method_matmul(a, b):
    "Multiply arrays using the new matmul operation."
    c = np.matmul(a[:, :, None, None, :],
                  b[None, :, :, :, None])[:, :, :, 0, 0]
    return c

# Compare einsum vs tensordot on summation over j and l

## Einsum takes about the same amount of time as when j is not summed over
def method2_einsum(a, b):
    "Multiply arrays and sum over the j- and l-axes."
    c = np.einsum('ijl,jkl->ik', a, b)
    return c

## Tensordot can do this faster, but it always sums over the aligned axes
def method2_tensordot(a, b):
    "Multiply and sum over all overlapping dimensions."
    c = np.tensordot(a, b, axes=[(1, 2), (0, 2)])
    return c
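As a quick sanity check (a self-contained sketch on smaller shapes, separate from the timing comparison), the three variants of the first operation should agree element-wise, and likewise the two variants of the second:

```python
import numpy as np

I, J, K, L = 20, 5, 20, 10  # smaller shapes for a fast check
a = np.random.rand(I, J, L)
b = np.random.rand(J, K, L)

# First operation: sum over l only, j stays aligned.
c_sum = (a[:, :, None, :] * b[None, :, :, :]).sum(-1)
c_einsum = np.einsum('ijl,jkl->ijk', a, b)
c_matmul = np.matmul(a[:, :, None, None, :],
                     b[None, :, :, :, None])[:, :, :, 0, 0]
assert np.allclose(c_sum, c_einsum)
assert np.allclose(c_sum, c_matmul)

# Second operation: additionally sums over j.
c2_einsum = np.einsum('ijl,jkl->ik', a, b)
c2_tensordot = np.tensordot(a, b, axes=[(1, 2), (0, 2)])
assert np.allclose(c2_einsum, c2_tensordot)

# The second result is just the first result summed over its j-axis.
assert np.allclose(c2_einsum, c_einsum.sum(1))
```

The last assertion makes the relationship between the two problems explicit: method2 is the method result reduced over j.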
Below are some timings of the various routines on my computer. For method2, tensordot can beat einsum because it uses multiple cores. I would like to achieve performance similar to tensordot for the computation in which the J and L axes are aligned but only the L axis is summed over.
Time for method_sum:
1 loops, best of 3: 744 ms per loop
Time for method_einsum:
10 loops, best of 3: 95.1 ms per loop
Time for method_matmul:
10 loops, best of 3: 93.8 ms per loop
Time for method2_einsum:
10 loops, best of 3: 90.4 ms per loop
Time for method2_tensordot:
100 loops, best of 3: 10.9 ms per loop
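One way I can sketch to get BLAS-backed, tensordot-like execution while keeping the j-axis aligned but unsummed is to move j into the batch dimension of matmul, so each j-slice becomes an ordinary (I, L) @ (L, K) matrix product (a sketch; whether it actually beats einsum here depends on the NumPy/BLAS build and must be timed):

```python
import numpy as np

I, J, K, L = 200, 50, 200, 100
a = np.ones((I, J, L))
b = np.ones((J, K, L))

def method_batched_matmul(a, b):
    "Treat j as a batch axis: for each j, compute an (I, L) @ (L, K) product."
    A = a.transpose(1, 0, 2)      # (J, I, L)
    B = b.transpose(0, 2, 1)      # (J, L, K)
    C = np.matmul(A, B)           # (J, I, K): one gemm per j-slice
    return C.transpose(1, 0, 2)   # back to the (I, J, K) layout used above

c = method_batched_matmul(a, b)
print(c.shape)  # (200, 50, 200)
```

This computes the same contraction as method_einsum; the final transpose only restores the (I, J, K) axis order and returns a view, not a copy.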