
I am trying to use NumbaPro's cuda extension to multiply large array matrices. What I ultimately want is to multiply a matrix of size NxN by a diagonal matrix that is fed in as a 1D array (thus a.dot(numpy.diagflat(b)), which I have found to be equivalent to a * b). However, I am getting an assertion error that provides no information.

I can only avoid this assertion error if I multiply two 1D arrays, but that is not what I want to do.

from numbapro import vectorize, cuda
from numba import f4,f8
import numpy as np

def generate_input(n):
    import numpy as np
    A = np.array(np.random.sample((n,n)))
    B = np.array(np.random.sample(n) + 10)
    return A, B

def product(a, b):
    return a * b

def main():
    cu_product = vectorize([f4(f4, f4), f8(f8, f8)], target='gpu')(product)

    N = 1000

    A, B = generate_input(N)
    D = np.empty(A.shape)

    stream = cuda.stream()

    with stream.auto_synchronize():
        dA = cuda.to_device(A, stream)
        dB = cuda.to_device(B, stream)
        dD = cuda.to_device(D, stream, copy=False)
        cu_product(dA, dB, out=dD, stream=stream)
        dD.to_host(stream)

if __name__ == '__main__':
    main()

This is what my terminal spits out:

Traceback (most recent call last):
  File "cuda_vectorize.py", line 32, in <module>
    main()
  File "cuda_vectorize.py", line 28, in main
    cu_product(dA, dB, out=dD, stream=stream)
  File "/opt/anaconda1anaconda2anaconda3/lib/python2.7/site-packages/numbapro/_cudadispatch.py", line 109, in __call__
  File "/opt/anaconda1anaconda2anaconda3/lib/python2.7/site-packages/numbapro/_cudadispatch.py", line 191, in _arguments_requirement
AssertionError

2 Answers


The problem is that you are using vectorize on a function that takes non-scalar arguments. The idea behind NumbaPro's vectorize is that it takes a scalar function as input and generates a function that applies the scalar operation in parallel across all elements of a vector. See the NumbaPro documentation.
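
NumbaPro has since been folded into the open-source numba package. As an aside that is not part of the original answer, here is a minimal sketch of the same scalar-function idea with modern numba, where B is explicitly tiled to A's shape so that both inputs to the generated ufunc match:

from numba import vectorize, float32, float64
import numpy as np

# A scalar-in, scalar-out function: exactly the kind of function vectorize expects.
@vectorize([float32(float32, float32), float64(float64, float64)], target='cuda')
def scalar_product(a, b):
    return a * b

A = np.random.sample((1000, 1000))
B = np.random.sample(1000) + 10
# Tile B to A's shape so both arguments have identical shapes.
D = scalar_product(A, np.broadcast_to(B, A.shape).copy())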

Your function takes a matrix and a vector, which are definitely not scalars. [Edit] You can do what you want on the GPU using NumbaPro's cuBLAS wrapper, or by writing your own simple kernel function, and here is an example that demonstrates both. Note that this will require NumbaPro 0.12.2 or later (just released as of this edit).

from numbapro import jit, cuda
from numba import float32
import numbapro.cudalib.cublas as cublas
import numpy as np
from timeit import default_timer as timer

def generate_input(n):
    A = np.array(np.random.sample((n,n)), dtype=np.float32)
    B = np.array(np.random.sample(n), dtype=A.dtype)
    return A, B

@cuda.jit(argtypes=[float32[:,:], float32[:,:], float32[:]])
def diagproduct(c, a, b):
    # Grid-stride loops: each thread starts at its grid coordinates and
    # strides by the total number of threads in each dimension.
    startX, startY = cuda.grid(2)
    gridX = cuda.gridDim.x * cuda.blockDim.x
    gridY = cuda.gridDim.y * cuda.blockDim.y
    height, width = c.shape

    for y in range(startY, height, gridY):
        for x in range(startX, width, gridX):
            # Scaling column x of A by B[x] is equivalent to A.dot(diag(B)).
            c[y, x] = a[y, x] * b[x]

def main():

    N = 1000

    A, B = generate_input(N)
    D = np.empty(A.shape, dtype=A.dtype)
    E = np.zeros(A.shape, dtype=A.dtype)
    F = np.empty(A.shape, dtype=A.dtype)

    start = timer()
    E = np.dot(A, np.diag(B))
    numpy_time = timer() - start

    blas = cublas.api.Blas()

    start = timer()
    blas.gemm('N', 'N', N, N, N, 1.0, np.diag(B), A, 0.0, D)
    cublas_time = timer() - start

    diff = np.abs(D-E)
    print("Maximum CUBLAS error %f" % np.max(diff))

    blockdim = (32, 8)
    griddim  = (16, 16)

    start = timer()
    dA = cuda.to_device(A)
    dB = cuda.to_device(B)
    dF = cuda.to_device(F, copy=False)
    diagproduct[griddim, blockdim](dF, dA, dB)
    dF.to_host()
    cuda_time = timer() - start   

    diff = np.abs(F-E)
    print("Maximum CUDA error %f" % np.max(diff))

    print("Numpy took    %f seconds" % numpy_time)
    print("CUBLAS took   %f seconds, %0.2fx speedup" % (cublas_time, numpy_time / cublas_time)) 
    print("CUDA JIT took %f seconds, %0.2fx speedup" % (cuda_time, numpy_time / cuda_time))

if __name__ == '__main__':
    main()
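
A note on the launch configuration (my addition, not from the original answer): the hard-coded griddim of (16, 16) is only safe because the kernel uses grid-stride loops, so any grid size covers the whole matrix. A common alternative is to derive the grid size from the array shape with a ceiling division:

blockdim = (32, 8)
# Round up so that every element is covered by at least one thread.
griddim = ((N + blockdim[0] - 1) // blockdim[0],
           (N + blockdim[1] - 1) // blockdim[1])
diagproduct[griddim, blockdim](dF, dA, dB)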

The kernel is significantly faster because SGEMM performs a full matrix-matrix multiply, which is O(n^3), and it requires expanding the diagonal into a full matrix first. The diagproduct kernel is smarter: it does just one multiply per matrix element and never expands the diagonal into a full matrix, so it is O(n^2); at N=1000 that is roughly 2x10^9 floating-point operations for SGEMM versus 10^6 multiplies for the kernel. Here are the results for N=1000 on my NVIDIA Tesla K20c GPU:

Maximum CUBLAS error 0.000000
Maximum CUDA error 0.000000
Numpy took    0.024535 seconds
CUBLAS took   0.010345 seconds, 2.37x speedup
CUDA JIT took 0.004857 seconds, 5.05x speedup

The timings include all copies to and from the GPU, which is a significant bottleneck for small matrices. If we set N to 10,000 and run again, we get a much bigger speedup:

Maximum CUBLAS error 0.000000
Maximum CUDA error 0.000000
Numpy took    7.245677 seconds
CUBLAS took   1.371524 seconds, 5.28x speedup
CUDA JIT took 0.264598 seconds, 27.38x speedup

For very small matrices, however, CUBLAS SGEMM has an optimized path, so it comes closer to the CUDA kernel's performance. Here, N=100:

Maximum CUBLAS error 0.000000
Maximum CUDA error 0.000000
Numpy took    0.006876 seconds
CUBLAS took   0.001425 seconds, 4.83x speedup
CUDA JIT took 0.001313 seconds, 5.24x speedup
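
To isolate the kernel cost from the copy cost mentioned above, one can time only the launch. This is a sketch assuming the current numba.cuda API, where cuda.synchronize() ensures the kernel has finished before the clock stops:

from numba import cuda
from timeit import default_timer as timer

dA = cuda.to_device(A)          # host-to-device copies happen outside the timed region
dB = cuda.to_device(B)
dF = cuda.device_array_like(A)  # the output stays on the device

start = timer()
diagproduct[(16, 16), (32, 8)](dF, dA, dB)
cuda.synchronize()              # kernel launches are asynchronous; wait for completion
kernel_time = timer() - start
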
answered 2013-06-18

Just to bounce off all these considerations: I also wanted to run some matrix computations on CUDA, but then heard about the numpy.einsum function. It turns out that einsum is remarkably fast. For this case, here is the code; it can be applied to many other kinds of computation as well.

G = np.einsum('ij,j->ij', A, B)
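
Here 'ij,j->ij' multiplies each column j of A by B[j], which is exactly what the broadcasting and diagonal forms from the question compute; a quick check, using the same kind of A and B:

import numpy as np

A = np.random.sample((1000, 1000))
B = np.random.sample(1000) + 10
G = np.einsum('ij,j->ij', A, B)
assert np.allclose(G, A * B)                  # broadcasting form
assert np.allclose(G, A.dot(np.diagflat(B)))  # explicit diagonal form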

In terms of speed, here are the results for N = 10000:

Numpy took    8.387756 seconds
CUDA JIT took 0.218394 seconds, 38.41x speedup
EINSUM took 0.131751 seconds, 63.66x speedup
answered 2014-10-09