I want to generate some decision trees on CUDA. Here is the pseudocode (the code is very rough, it is only meant to show what I am doing):
class Node
{
public :
Node* father;
Node** sons;
int countSons;
__device__ __host__ Node(Node* father)
{
this->father = father;
sons = NULL;
countSons = 0;
}
};
__global__ void GenerateSons(Node** fathers, int* countFathers, Node** sons, int* countSons)
{
int Thread_Index = (blockDim.x * blockIdx.x) + threadIdx.x;
if(Thread_Index < *(countFathers))
{
Node* Thread_Father = fathers[Thread_Index];
Node** Thread_Sons;
int Thread_countSons;
//Now we are creating new sons for our Thread_Father
/*
* Generating Thread_Sons for Thread_Father;
*/
Thread_Father->sons = Thread_Sons;
Thread_Father->countSons = Thread_countSons;
//Wait for others
/*I added here __syncthreads because I want to count all generated sons
by threads
*/
*(countSons) += Thread_countSons;
__syncthreads();
//Get all generated sons from whole Block and copy to sons
if(threadIdx.x == 0)
{
sons = new Node*[*(countSons)];
}
/*I added here __syncthreads because I want to allocated array for sons
*/
__syncthreads();
int Thread_Offset;
/*
* Get correct offset for actual thread (one possible way is sketched right after the kernel)
*/
for(int i = 0; i < Thread_countSons; i++)
sons[Thread_Offset + i] = Thread_Sons[i];
}
}
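I have not filled in how Thread_Offset and the total number of sons are obtained. What I have in mind is roughly the pattern below (only a sketch; ReserveSlots, out and perThread are illustrative names): atomicAdd adds a thread's count to the shared counter and returns the previous value, which can serve directly as that thread's offset, with no race on the counter.

__global__ void ReserveSlots(int* counter, int* out, int perThread)
{
    int tid = (blockDim.x * blockIdx.x) + threadIdx.x;
    // atomicAdd returns the old value of *counter, so each thread gets the
    // contiguous, race-free range [offset, offset + perThread)
    int offset = atomicAdd(counter, perThread);
    for (int i = 0; i < perThread; i++)
        out[offset + i] = tid; // in my real kernel: sons[offset + i] = Thread_Sons[i];
}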
int main()
{
Node* root = new Node(NULL); // the root has no father
//transfer root to kernel by cudaMalloc and cudaMemcpy
Node* root_d = root->transfer();
Node** fathers_d;
/*
* prepare array with the father root and copy it to the device
*/
int *countFathers, *countSons;
/*
* prepare device int pointers for the kernel and set countFathers to 1
*/
for(int i = 0; i < LevelTree; i++)
{
Node** sons = NULL;
int threadsPerBlock = 256;
int blocksPerGrid = (*(countFathers) /* count of fathers, read back to the host */ + threadsPerBlock - 1) / threadsPerBlock;
GenerateSons<<<blocksPerGrid , threadsPerBlock >>>(fathers_d, countFathers, sons, countSons);
//Wait for end of kernel call
cudaDeviceSynchronize();
//replace
fathers_d = sons;
countFathers = countSons;
}
}
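The "prepare" comments above hide the host-side bookkeeping. What I actually do is roughly the following (a sketch with illustrative names, error checking omitted):

int one = 1, zero = 0, hostCountFathers = 1;
int *countFathers, *countSons;
cudaMalloc(&countFathers, sizeof(int));
cudaMalloc(&countSons, sizeof(int));
cudaMemcpy(countFathers, &one, sizeof(int), cudaMemcpyHostToDevice); // the root is the only father
for (int i = 0; i < LevelTree; i++)
{
    cudaMemcpy(countSons, &zero, sizeof(int), cudaMemcpyHostToDevice);                // reset the counter for this level
    cudaMemcpy(&hostCountFathers, countFathers, sizeof(int), cudaMemcpyDeviceToHost); // needed on the host to size the grid
    int threadsPerBlock = 256;
    int blocksPerGrid = (hostCountFathers + threadsPerBlock - 1) / threadsPerBlock;
    // ... launch GenerateSons and swap fathers/sons and the two counters as above ...
}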
So this works for 5 levels (I am generating a decision tree for checkers), but at level 6 I get errors: somewhere in the kernel code, malloc returns NULL. To me that means some threads in the block cannot allocate any more memory. I am fairly sure that at the end of each kernel call I delete every object I no longer need. I think I am missing some fact about how memory is used in CUDA: if I create objects in a thread's local memory and the kernel finishes, then on the second launch of the kernel I can still see the nodes created by the first call.
So my question is: where are the Node objects from the first kernel call stored? Are they stored in the local memory of the threads in the block? If that is true, am I shrinking that thread's local memory space with every call of my kernel function?
I am using a GT 555M with compute capability 2.1, CUDA SDK 5.0, and Visual Studio 2010 Premium with NSight 3.0.
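One assumption I am making: as far as I understand, in-kernel new/malloc come from the device malloc heap, which defaults to only 8 MB, so maybe that is what I am running out of. A sketch of how I think I could check and raise that limit (done before launching any kernel that uses in-kernel allocation; the 256 MB figure is just an example):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t heapSize = 0;
    cudaDeviceGetLimit(&heapSize, cudaLimitMallocHeapSize);            // current in-kernel heap size
    printf("device malloc heap: %llu bytes\n", (unsigned long long)heapSize);
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 256u * 1024u * 1024u); // e.g. 256 MB
    return 0;
}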