I am teaching myself CUDA using PyCUDA. In this exercise, I want to send a simple array of 1024 floats to the GPU and store it in shared memory. As specified in the arguments below, I run this kernel on just one block with 1024 threads.
import pycuda.driver as cuda
from pycuda.compiler import SourceModule
import pycuda.autoinit
import numpy as np
import matplotlib.pyplot as plt
arrayOfFloats = np.float64(np.random.sample(1024))
mod = SourceModule("""
__global__ void myVeryFirstKernel(float* arrayOfFloats) {
    extern __shared__ float sharedData[];
    // Copy data to shared memory.
    sharedData[threadIdx.x] = arrayOfFloats[threadIdx.x];
}
""")
func = mod.get_function('myVeryFirstKernel')
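# Launch on a single block of 1024 threads (grid of 1x1), as described above.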
func(cuda.InOut(arrayOfFloats), block=(1024, 1, 1), grid=(1, 1))
print str(arrayOfFloats)
Strangely, I get this error:
[dfaux@harbinger CUDA_tutorials]$ python sharedMemoryExercise.py
Traceback (most recent call last):
File "sharedMemoryExercise.py", line 17, in <module>
func(cuda.InOut(arrayOfFloats), block=(1024, 1, 1), grid=(1, 1))
File "/software/linux/x86_64/epd-7.3-1-pycuda/lib/python2.7/site-packages/pycuda-2012.1-py2.7-linux-x86_64.egg/pycuda/driver.py", line 377, in function_call
Context.synchronize()
pycuda._driver.LaunchError: cuCtxSynchronize failed: launch failed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: launch failed
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuModuleUnload failed: launch failed
I have tried to debug this error by changing the type of the elements sent to the GPU (for example, using float32 instead of float64). I have also tried changing my block and grid sizes, to no avail.
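For reference, the float32 variant I tried looked roughly like this (a minimal sketch; only the dtype line changes, the kernel and launch call stay the same):

arrayOfFloats = np.float32(np.random.sample(1024))  # 4-byte floats instead of the 8-byte float64 above
func(cuda.InOut(arrayOfFloats), block=(1024, 1, 1), grid=(1, 1))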
What is wrong? What is a dead context? Any suggestions or ideas are appreciated.