I wrote some code that multiplies two vectors of N elements each and returns a product vector of the same length, using CUDA 5.0. I vary the value of N just to see how the GPU compares to the CPU. I can go up to 2000000000 elements, but when I move to 3000000000 I get these warnings:
vecmul.cu(52): warning: floating-point value does not fit in required integral type
vecmul.cu(52): warning: floating-point value does not fit in required integral type
vecmul.cu: In function `_Z6vecmulPiS_S_':
vecmul.cu:15: warning: comparison is always false due to limited range of data type
vecmul.cu: In function `int main()':
vecmul.cu:40: warning: comparison is always true due to limited range of data type
Here is my code:
// Multiplying 2 Arrays
#include <stdio.h>
#include <fstream>
#define N (3000000000)
//const int threadsPerBlock = 256;
// Declare the multiply kernel for the device
__global__ void vecmul(int *a, int *b, int *c)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid >= N) { return; } // (LINE 15)
    c[tid] = a[tid] * b[tid];
}
int main(void)
{
    // Allocate Memory on Host
    int *a_h = new int[N];
    int *b_h = new int[N];
    int *c_h = new int[N];
    // Allocate Memory on GPU
    int *a_d;
    int *b_d;
    int *c_d;
    cudaMalloc((void**)&a_d, N*sizeof(int));
    cudaMalloc((void**)&b_d, N*sizeof(int));
    cudaMalloc((void**)&c_d, N*sizeof(int));
    // Initialize Host Arrays
    for (int i = 0; i < N; i++) // (LINE 40)
    {
        a_h[i] = i;
        b_h[i] = (i+1);
    }
    // Copy Data from Host to Device
    cudaMemcpy(a_d, a_h, N*sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(b_d, b_h, N*sizeof(int), cudaMemcpyHostToDevice);
    // Run Kernel
    int blocks = int(N - 0.5)/256 + 1; // (LINE 52)
    vecmul<<<blocks,256>>>(a_d, b_d, c_d);
    // Copy Data from Device to Host
    cudaMemcpy(c_h, c_d, N*sizeof(int), cudaMemcpyDeviceToHost);
    // Free Device Memory
    cudaFree(a_d);
    cudaFree(b_d);
    cudaFree(c_d);
    // Free Host Memory (allocated with new[], so use delete[])
    delete[] a_h;
    delete[] b_h;
    delete[] c_h;
    return 0;
}
Is this because the number of blocks is not enough for this array size? Any suggestions would be welcome, since I am a beginner in CUDA. I am running this on an NVIDIA Quadro 2000.
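For reference on the block-count part of the question, a minimal standalone sketch like the one below (assuming the same 256 threads per block; this is not part of the program above) would print how many blocks such a launch asks for next to the grid limits the CUDA runtime reports via cudaGetDeviceProperties:

// Standalone sketch: blocks needed at 256 threads/block vs. device grid limits.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);      // query device 0

    long long n = 3000000000LL;             // element count being tested
    long long blocks = (n + 255) / 256;     // blocks needed at 256 threads per block

    printf("blocks requested      : %lld\n", blocks);
    printf("max grid size (x dim) : %d\n", prop.maxGridSize[0]);
    printf("max threads per block : %d\n", prop.maxThreadsPerBlock);
    return 0;
}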