
When I increase the unrolling in my kernel from 8 loop iterations to 9, it breaks with an out of resources error.

I read How do I diagnose a CUDA launch failure due to being out of resources? Mismatched parameters and overuse of registers can be a problem, but that doesn't seem to be the case here.

My kernel computes the distances between n points and m centroids, and selects the closest centroid for each point. It works for 8 dimensions but not for 9. When I set dimensions=9 for the distance calculation and uncomment the two lines, I get a pycuda._driver.LaunchError: cuLaunchGrid failed: launch out of resources.

What do you think could cause this behavior? What other issues can cause an out of resources error?

I use a Quadro FX 580. Here is a minimal(ish) example. For the unrolling in the real code I use templates.

import numpy as np
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit


## preference
np.random.seed(20)
points = 512
dimensions = 8
nclusters = 1

## init data
data = np.random.randn(points,dimensions).astype(np.float32)
clusters = data[:nclusters]

## init cuda
kernel_code = """

      // the kernel definition 
    __device__ __constant__ float centroids[16384];

    __global__ void kmeans_kernel(float *idata,float *g_centroids,
    int * cluster, float *min_dist, int numClusters, int numDim) {
    int valindex = blockIdx.x * blockDim.x + threadIdx.x ;
    float increased_distance,distance, minDistance;
    minDistance = 10000000 ;
    int nearestCentroid = 0;
    for(int k=0;k<numClusters;k++){
      distance = 0.0;
      increased_distance = idata[valindex*numDim] -centroids[k*numDim];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+1] -centroids[k*numDim+1];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+2] -centroids[k*numDim+2];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+3] -centroids[k*numDim+3];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+4] -centroids[k*numDim+4];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+5] -centroids[k*numDim+5];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+6] -centroids[k*numDim+6];
      distance = distance +(increased_distance * increased_distance);
      increased_distance =  idata[valindex*numDim+7] -centroids[k*numDim+7];
      distance = distance +(increased_distance * increased_distance);
      //increased_distance =  idata[valindex*numDim+8] -centroids[k*numDim+8];
      //distance = distance +(increased_distance * increased_distance);

      if(distance <minDistance) {
        minDistance = distance ;
        nearestCentroid = k;
        } 
      }
      cluster[valindex]=nearestCentroid;
      min_dist[valindex]=sqrt(minDistance);
    } 
 """
mod = compiler.SourceModule(kernel_code)
centroids_adrs = mod.get_global('centroids')[0]    
kmeans_kernel = mod.get_function("kmeans_kernel")
clusters_gpu = gpuarray.to_gpu(clusters)
cluster = gpuarray.zeros(points, dtype=np.int32)
min_dist = gpuarray.zeros(points, dtype=np.float32)

driver.memcpy_htod(centroids_adrs,clusters)

distortion = gpuarray.zeros(points, dtype=np.float32)
block_size= 512

## start kernel
kmeans_kernel(
    driver.In(data),driver.In(clusters),cluster,min_dist,
    np.int32(nclusters),np.int32(dimensions),
    grid = (points // block_size, 1),
    block = (block_size, 1, 1),
)
print(cluster)
print(min_dist)

1 Answer


You are running out of registers because your block_size (512) is too large.

ptxas reports that your kernel uses 16 registers with the lines commented out:

$ nvcc test.cu -Xptxas --verbose
ptxas info    : Compiling entry function '_Z13kmeans_kernelPfS_PiS_ii' for 'sm_10'
ptxas info    : Used 16 registers, 24+16 bytes smem, 65536 bytes cmem[0]

Uncommenting those lines increases register usage to 17 and produces an error at runtime:

$ nvcc test.cu -run -Xptxas --verbose
ptxas info    : Compiling entry function '_Z13kmeans_kernelPfS_PiS_ii' for 'sm_10'
ptxas info    : Used 17 registers, 24+16 bytes smem, 65536 bytes cmem[0]
error: too many resources requested for launch

The number of physical registers each thread of your kernel uses limits the size of the blocks you can launch at runtime. An SM 1.0 device has 8K registers available to a thread block. We can compare that against your kernel's register demands: 17 * 512 = 8704 > 8K. At 16 registers, your original commented kernel just squeaks by: 16 * 512 = 8192 == 8K.
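The arithmetic is easy to sanity-check by hand. A minimal sketch (the 8192-register-per-block limit is the sm_10 figure used above; other architectures have larger register files):

```python
# Back-of-the-envelope check: does a launch fit in the register file?
REGS_PER_BLOCK = 8192  # registers available per thread block on an SM 1.0 device

def fits(regs_per_thread, block_size):
    """True if block_size threads, each using regs_per_thread registers,
    fit within one block's register budget."""
    return regs_per_thread * block_size <= REGS_PER_BLOCK

print(fits(16, 512))  # 16 * 512 = 8192 == 8K -> True
print(fits(17, 512))  # 17 * 512 = 8704  > 8K -> False
```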

When no architecture is specified, nvcc compiles kernels for an SM 1.0 device by default. PyCUDA may work the same way.

To solve your problem, you could either decrease the block_size (to, say, 256) or find a way to configure PyCUDA to compile your kernel for an SM 2.0 device. An SM 2.0 device such as the Quadro FX 580 provides 32K registers, more than enough for your original block_size of 512.
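For the latter, one possibility in PyCUDA is the `arch` keyword of `compiler.SourceModule`. A sketch of the compile step, not tested on your setup (`--ptxas-options=-v` additionally makes ptxas print the register count at build time, matching the nvcc output above):

```python
from pycuda import compiler

# kernel_code is the kernel source string from the question.
mod = compiler.SourceModule(
    kernel_code,
    arch="sm_20",                    # target an SM 2.0 device instead of the sm_10 default
    options=["--ptxas-options=-v"],  # have ptxas report 'Used N registers' at compile time
)
```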

answered 2011-10-01T02:16:33.397