This doesn't give the GPU much work to do beyond single additions; the array has to be fairly large before you'll see any benefit. Anyway:

I work in C++ and am not familiar with C# or CUDAfy, but the logic should be easy to port. The kernel that stores the sum of every pair of elements in a matrix is:
template<typename T>
__global__ void sum_combinations_of_array( const T* arr, const size_t len, T* dest )
{
    const int tx = blockIdx.x*blockDim.x+threadIdx.x;
    const int ty = blockIdx.y*blockDim.y+threadIdx.y;
    if( tx < len && ty < len && tx < ty ) {
        dest[tx*len+ty] = arr[tx]+arr[ty];
    }
}
You just use a 2D grid of threads to decide which pair of array elements to add (the thread indices simply take the place of i and j in your code). arr must have at least len elements, and dest must have at least len*len. The host code to set all of this up and run it would look something like:
const int len = 1000;
int* arr;
cudaMalloc( &arr, len*sizeof(int) );
int* matrix;
cudaMalloc( &matrix, len*len*sizeof(int) );
// cudaMallocPitch could also be used here, but then you'd
// have to pay attention to the pitch
cudaMemset( matrix, 0, len*len*sizeof(int) );
// copy host array to arr with cudaMemcpy
// ...
const int numThreads = ???; // depends on your hardware
dim3 grid( len, (len+numThreads-1)/numThreads ), threads( 1, numThreads );
sum_combinations_of_array<int><<<grid,threads>>>( arr, len, matrix );
cudaDeviceSynchronize(); // wait for completion
// copy device matrix to host with cudaMemcpy (or cudaMemcpy2D)
// remember any element with i >= j will be 0
// ...
cudaFree( arr );
cudaFree( matrix );
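If you want to check the matrix you copy back, it's handy to have a CPU reference that computes the same strict upper triangle with the same indexing. This is a minimal sketch (the function name sum_combinations_reference is mine, not part of the original code):

```cpp
#include <cstddef>
#include <vector>

// CPU reference for the kernel above: dest[i*len+j] = arr[i]+arr[j]
// for i < j, everything else left at 0.
template<typename T>
std::vector<T> sum_combinations_reference( const std::vector<T>& arr )
{
    const std::size_t len = arr.size();
    std::vector<T> dest( len*len, T(0) );
    for( std::size_t i = 0; i < len; ++i )
        for( std::size_t j = i+1; j < len; ++j )
            dest[i*len+j] = arr[i]+arr[j];
    return dest;
}
```

Comparing this against the device result element-by-element will catch indexing mistakes in the grid/block configuration quickly.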