I am developing a multi-GPU accelerated flow solver. At the moment I am trying to implement communication hiding. That means that while data is being exchanged, the GPU computes the part of the mesh that is not involved in the communication, and computes the rest of the mesh once the communication is done.
I am trying to address this with one stream (computeStream) for the long-running kernel (fluxKernel) and one stream (communicationStream) for the different phases of the communication. The computeStream has a very low priority, so that kernels on the communicationStream can interleave with the fluxKernel even though it uses all resources.
These are the streams I am using:
int priority_high, priority_low;
cudaDeviceGetStreamPriorityRange(&priority_low , &priority_high ) ;
cudaStreamCreateWithPriority (&communicationStream, cudaStreamNonBlocking, priority_high );
cudaStreamCreateWithPriority (&computeStream , cudaStreamNonBlocking, priority_low );
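For reference: on devices that do not support stream priorities, cudaDeviceGetStreamPriorityRange reports the same value for both limits, in which case the two streams effectively end up with equal priority. A minimal sanity check might look like this (a sketch; the warning text is mine and not part of my actual code):
// Sketch: if the reported priority range is empty, the requested priorities have no effect.
if( priority_low == priority_high )
    std::cerr << "Warning: this device does not support stream priorities.\n";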
The desired concurrency pattern looks like this:
Before sending the data via MPI, I need to synchronize the communicationStream to make sure that the data has been completely downloaded before it is sent.
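As an aside, this host-side wait could equally be expressed by recording an event right after the download and blocking on that event instead of synchronizing the whole stream; at this point the two are functionally equivalent. A minimal sketch (the event name and the buffer/size placeholders are mine):
// Sketch: the host waits until the device-to-host copy on communicationStream has
// finished; after that the host buffer can safely be handed to MPI_Isend.
cudaEvent_t downloadDone;
cudaEventCreateWithFlags( &downloadDone, cudaEventDisableTiming );
cudaMemcpyAsync( sendBufferHost, sendBufferDevice, numBytes, cudaMemcpyDeviceToHost, communicationStream );
cudaEventRecord( downloadDone, communicationStream );
cudaEventSynchronize( downloadDone );   // host blocks here before calling MPI_Isend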
In the following listing I show the structure of what I am currently doing. First, I launch the fluxKernel for the main part of the mesh on the computeStream. Then I launch a sendKernel that collects the data which should be sent to the second GPU, and download it to the host (due to hardware limitations I cannot use CUDA-aware MPI). The data is then sent non-blocking with MPI_Isend, followed by a blocking receive (MPI_Recv). When the data has been received, the procedure is done backwards: the data is uploaded to the device and then processed with a recvKernel. Finally, the fluxKernel is called for the remaining part of the mesh on the communicationStream.
Note that before and after the shown code, kernels run on the default stream.
{ ... } // Preparations
// Start main part of computation on first stream
fluxKernel<<< ..., ..., 0, computeStream >>>( /* main Part */ );
// Prepare send data
sendKernel<<< ..., ..., 0, communicationStream >>>( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyDeviceToHost, communicationStream );
cudaStreamSynchronize( communicationStream );
// MPI Communication
MPI_Isend( ... );
MPI_Recv ( ... );
// Use received data
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyHostToDevice, communicationStream );
recvKernel<<< ..., ..., 0, communicationStream >>>( ... );
fluxKernel<<< ..., ..., 0, communicationStream >>>( /* remaining Part */ );
{ ... } // Rest of the Computations
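As a side note on the profiling below: the individual phases could be made easier to spot in the Visual Profiler timeline by wrapping them in NVTX ranges. A minimal structural sketch, mirroring the listing above (the range names are mine; requires nvToolsExt.h and linking against nvToolsExt):
#include <nvToolsExt.h>
// Sketch: labeled ranges show up as named bars in the profiler timeline.
nvtxRangePushA( "pack + download" );
sendKernel<<< ..., ..., 0, communicationStream >>>( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyDeviceToHost, communicationStream );
cudaStreamSynchronize( communicationStream );
nvtxRangePop();
nvtxRangePushA( "MPI exchange" );
MPI_Isend( ... );
MPI_Recv ( ... );
nvtxRangePop();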
I used nvprof and the Visual Profiler to see whether the streams actually execute concurrently. This is the result:
I observe that the sendKernel (purple), the upload, the MPI communication, and the download run concurrently with the fluxKernel. However, the recvKernel (red) only starts after the other stream has finished. Turning the synchronization on does not solve the problem:
For my real application I do not have just one communication, but several. I have also tested this with two communications. The procedure is:
sendKernel<<< ..., ..., 0, communicationStream >>>( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyDeviceToHost, communicationStream );
cudaStreamSynchronize( communicationStream );
MPI_Isend( ... );
sendKernel<<< ..., ..., 0, communicationStream >>>( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyDeviceToHost, communicationStream );
cudaStreamSynchronize( communicationStream );
MPI_Isend( ... );
MPI_Recv ( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyHostToDevice, communicationStream );
recvKernel<<< ..., ..., 0, communicationStream >>>( ... );
MPI_Recv ( ... );
cudaMemcpyAsync ( ..., ..., ..., cudaMemcpyHostToDevice, communicationStream );
recvKernel<<< ..., ..., 0, communicationStream >>>( ... );
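The MPI arguments are elided here as well; with explicit request handles, the two non-blocking sends would eventually be completed with a single MPI_Waitall, for example like this (a sketch; all buffer, count, rank, and tag names are placeholders of mine):
// Sketch: keep the two send requests and complete them once both receives are done.
MPI_Request sendRequests[2];
MPI_Isend( sendBufferHost0, sendCount0, MPI_FLOAT, neighborRank0, tag0, MPI_COMM_WORLD, &sendRequests[0] );
MPI_Isend( sendBufferHost1, sendCount1, MPI_FLOAT, neighborRank1, tag1, MPI_COMM_WORLD, &sendRequests[1] );
// ... blocking receives, uploads, and recvKernel launches as in the listing above ...
MPI_Waitall( 2, sendRequests, MPI_STATUSES_IGNORE );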
The result is similar to the case with one communication (above), in that the second kernel call (this time a sendKernel) is delayed until the kernel on the computeStream has finished.
Hence, the overall observation is that the second kernel call is delayed, independent of which kernel it is.
Can you explain why the GPU synchronizes in this way, or how I can get the second kernel on the communicationStream to run concurrently with the computeStream?
Thank you very much.
Edit 1: Complete rework of the problem
Minimal reproducible example
I built a minimal reproducible example. At the end, the code prints the int data to the terminal. The correct last value would be 32778 (= (32*1024-1) + 1 + 10). At the beginning, I added an option integer to test 3 different options:
- 0: intended version with synchronization before the CPU modifies the data
- 1: same as 0, but without the synchronization
- 2: memcpys on a dedicated stream and no synchronization
#include <iostream>
#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
const int option = 0;
const int numberOfEntities = 2 * 1024 * 1024;
const int smallNumberOfEntities = 32 * 1024;
__global__ void longKernel(float* dataDeviceIn, float* dataDeviceOut, int numberOfEntities)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if(index >= numberOfEntities) return;

    float tmp = dataDeviceIn[index];
#pragma unroll
    for( int i = 0; i < 2000; i++ ) tmp += 1.0;
    dataDeviceOut[index] = tmp;
}

__global__ void smallKernel_1( int* smallDeviceData, int numberOfEntities )
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if(index >= numberOfEntities) return;
    smallDeviceData[index] = index;
}

__global__ void smallKernel_2( int* smallDeviceData, int numberOfEntities )
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if(index >= numberOfEntities) return;
    int value = smallDeviceData[index];
    value += 10;
    smallDeviceData[index] = value;
}

int main(int argc, char **argv)
{
    cudaSetDevice(0);

    float* dataDeviceIn;
    float* dataDeviceOut;
    cudaMalloc( &dataDeviceIn , sizeof(float) * numberOfEntities );
    cudaMalloc( &dataDeviceOut, sizeof(float) * numberOfEntities );

    int* smallDataDevice;
    int* smallDataHost;
    cudaMalloc    ( &smallDataDevice, sizeof(int) * smallNumberOfEntities );
    cudaMallocHost( &smallDataHost  , sizeof(int) * smallNumberOfEntities );

    cudaStream_t streamLong;
    cudaStream_t streamSmall;
    cudaStream_t streamCopy;

    int priority_high, priority_low;
    cudaDeviceGetStreamPriorityRange( &priority_low, &priority_high );
    cudaStreamCreateWithPriority( &streamLong , cudaStreamNonBlocking, priority_low  );
    cudaStreamCreateWithPriority( &streamSmall, cudaStreamNonBlocking, priority_high );
    cudaStreamCreateWithPriority( &streamCopy , cudaStreamNonBlocking, priority_high );

    //////////////////////////////////////////////////////////////////////////

    longKernel <<< numberOfEntities / 32, 32, 0, streamLong >>> (dataDeviceIn, dataDeviceOut, numberOfEntities);

    //////////////////////////////////////////////////////////////////////////

    smallKernel_1 <<< smallNumberOfEntities / 32, 32, 0, streamSmall >>> (smallDataDevice, smallNumberOfEntities);

    if( option <= 1 ) cudaMemcpyAsync( smallDataHost, smallDataDevice, sizeof(int) * smallNumberOfEntities, cudaMemcpyDeviceToHost, streamSmall );
    if( option == 2 ) cudaMemcpyAsync( smallDataHost, smallDataDevice, sizeof(int) * smallNumberOfEntities, cudaMemcpyDeviceToHost, streamCopy  );

    if( option == 0 ) cudaStreamSynchronize( streamSmall );

    // some CPU modification of data
    for( int i = 0; i < smallNumberOfEntities; i++ ) smallDataHost[i] += 1;

    if( option <= 1 ) cudaMemcpyAsync( smallDataDevice, smallDataHost, sizeof(int) * smallNumberOfEntities, cudaMemcpyHostToDevice, streamSmall );
    if( option == 2 ) cudaMemcpyAsync( smallDataDevice, smallDataHost, sizeof(int) * smallNumberOfEntities, cudaMemcpyHostToDevice, streamCopy  );

    smallKernel_2 <<< smallNumberOfEntities / 32, 32, 0, streamSmall >>> (smallDataDevice, smallNumberOfEntities);

    //////////////////////////////////////////////////////////////////////////

    cudaDeviceSynchronize();

    cudaMemcpy( smallDataHost, smallDataDevice, sizeof(int) * smallNumberOfEntities, cudaMemcpyDeviceToHost );
    for( int i = 0; i < smallNumberOfEntities; i++ ) std::cout << smallDataHost[i] << "\n";

    return 0;
}
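Instead of scanning the printed values by eye, the expected result (index + 1 + 10) could also be verified directly on the host; a small sketch that could replace the print loop (not part of the original example):
// Sketch: every entry should hold its index, incremented once by the CPU (+1)
// and once by smallKernel_2 (+10).
bool correct = true;
for( int i = 0; i < smallNumberOfEntities; i++ )
    if( smallDataHost[i] != i + 11 ) { correct = false; break; }
std::cout << ( correct ? "PASSED" : "FAILED" ) << std::endl;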
With this code I see the same behavior as described above:
Option 2 (completely wrong results, all 10, download before smallKernel_1)