
I'm new to CUDA C and I'm trying to parallelize the following code from the slave_sort function, which, as you can see, already works in parallel with POSIX threads. I have the following structures:

typedef struct prefix_node {
   long densities[MAX_RADIX];
   long ranks[MAX_RADIX];
   char pad[PAGE_SIZE];
} prefix_node;

struct global_memory {
   long Index;                             /* process ID */
   struct prefix_node prefix_tree[2 * MAX_PROCESSORS];
} *global;

void slave_sort(){
         .
         .
         .
long *rank_me_mynum;
struct prefix_node* n;
struct prefix_node* r;
struct prefix_node* l;
         .
         .
MyNum = global->Index;
global->Index++;
n = &(global->prefix_tree[MyNum]);
     for (i = 0; i < radix; i++) {
        n->densities[i] = key_density[i];
        n->ranks[i] = rank_me_mynum[i];
     }
     offset = MyNum;
     level = number_of_processors >> 1;
     base = number_of_processors;
     while ((offset & 0x1) != 0) {
       offset >>= 1;
       r = n;
       l = n - 1;
       index = base + offset;
       n = &(global->prefix_tree[index]);
       if (offset != (level - 1)) {
         for (i = 0; i < radix; i++) {
           n->densities[i] = r->densities[i] + l->densities[i];
           n->ranks[i] = r->ranks[i] + l->ranks[i];
         }
       } else {
         for (i = 0; i < radix; i++) {
           n->densities[i] = r->densities[i] + l->densities[i];
         }
       }
       base += level;
       level >>= 1;
}

MyNum is the processor number, and once the code is moved into a kernel I want MyNum to be represented by blockIdx.x. The problem is that I'm confused about the structures: I don't know how to pass them to the kernel. Can anyone help me?

Is the code below correct?

__global__ void testkernel(prefix_node *prefix_tree, long *dev_rank_me_mynum, long *key_density,long radix)

int i = threadIdx.x + blockIdx.x*blockDimx.x;
prefix_node *n;
prefix_node *l;
prefix_node *r;
long offset;
     .
     .
     .
n = &prefix_tree[blockIdx.x];
if((i%numthreads) == 0){
    for(int j=0; j<radix; j++){
        n->densities[j] = key_density[j + radix*blockIdx.x];
        n->ranks[i] = dev_rank_me_mynum[j + radix*blockIdx.x];
    }
    .
    .
    .
 } 


int main(...){

    long *dev_rank_me_mynum;
    long *key_density;
    prefix_node *prefix_tree;
    long radix = 1024;

    cudaMalloc((void**)&dev_rank_me_mynum, radix*numblocks*sizeof(long));
    cudaMalloc((void**)&key_density, radix*numblocks*sizeof(long));
    cudaMalloc((void**)&prefix_tree, numblocks*sizeof(prefix_node));

    testkernel<<<numblocks,numthreads>>>(prefix_tree,dev_rank_me_mynum,key_density,radix);
}

1 Answer


The host API code you posted in your edit looks fine. The prefix_node structure contains only statically declared arrays, so a single cudaMalloc call is all that is needed to allocate memory for the kernel to use. The way you are passing prefix_tree to the kernel is also fine.
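
For reference, a minimal sketch of that host-side pattern (assuming the prefix_node definition and testkernel from your question; numblocks and numthreads here are illustrative placeholders, not values from your code) might look like this:

// Sketch only: allocate the array of prefix_node on the device, pass the raw
// device pointer to the kernel, then copy the populated nodes back to the host.
prefix_node *d_prefix_tree = 0;
cudaMalloc((void**)&d_prefix_tree, numblocks * sizeof(prefix_node));

testkernel<<<numblocks, numthreads>>>(d_prefix_tree, dev_rank_me_mynum,
                                      key_density, radix);

prefix_node *h_prefix_tree = (prefix_node*)malloc(numblocks * sizeof(prefix_node));
cudaMemcpy(h_prefix_tree, d_prefix_tree, numblocks * sizeof(prefix_node),
           cudaMemcpyDeviceToHost);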

The kernel code, which is incomplete and contains some obvious typos, is another matter. It looks as though your intention is to have only a single thread in each block operate on prefix_tree. That will be very inefficient and will use only a tiny fraction of the GPU's capacity. For example, why do this:

prefix_node *n = &prefix_tree[blockIdx.x];
if((i%numthreads) == 0){
    for(int j=0; j<radix; j++){
        n->densities[j] = key_density[j + radix*blockIdx.x];
        n->ranks[j] = dev_rank_me_mynum[j + radix*blockIdx.x];
    }
    .
    .
    .
} 

when you could do this instead:

prefix_node *n = &prefix_tree[blockIdx.x];
for(int j=threadIdx.x; j<radix; j+=blockDim.x){
    n->densities[j] = key_density[j + radix*blockIdx.x];
    n->ranks[j] = dev_rank_me_mynum[j + radix*blockIdx.x];
}

which coalesces the memory reads and uses as many of the threads in the block as possible, rather than just one, and so should be many times faster. Given that, perhaps you should rethink the strategy of directly translating the serial C code you posted into a kernel...
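
The same idea carries over to the tree-combining loop in slave_sort. As a sketch only (assuming the parent, left and right node indices are computed on the host with the same offset/base/level arithmetic as the pthreads version; combine_nodes is not part of your code), each thread can sum one radix bucket of the two children:

// Sketch only: merge two child nodes of the prefix tree into their parent,
// one radix bucket per thread, using a grid-stride loop over radix.
__global__ void combine_nodes(prefix_node *prefix_tree, int parent,
                              int left, int right, long radix)
{
    prefix_node *n = &prefix_tree[parent];
    prefix_node *l = &prefix_tree[left];
    prefix_node *r = &prefix_tree[right];

    for (long j = threadIdx.x + (long)blockIdx.x * blockDim.x; j < radix;
         j += (long)gridDim.x * blockDim.x) {
        n->densities[j] = l->densities[j] + r->densities[j];
        n->ranks[j]     = l->ranks[j]     + r->ranks[j];
    }
}

The host would then launch this once per combine step, so every level of the tree gets the full width of the GPU rather than a single serial loop.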

answered 2012-06-04T09:18:05.933