
I have two tasks. Each performs a copy-to-device (D) operation and then runs a kernel (R). The tasks have different kernel runtimes: R1 takes about 5 times longer than R2 (R1 ≈ 17 ms, R2 ≈ 3.5 ms). The task kernels are wait kernels, and I allow them to run concurrently. Each copy operation takes 7 ms.

I have a GeForce GT 555M, CUDA 4.1 and Fedora 16.

I record the start and stop of each task's D and R operations with cudaEventRecord. I use two streams per task and synchronize between them with cudaStreamWaitEvent: one stream is used for the task's D op, the other for its R op. My goal is to overlap D2 with R1. I measure the overall time of task1 and task2 together to check whether this overlap is achieved.
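(For clarity, here is a minimal sketch of this two-stream pattern for a single task; the names copyStream, kernelStream and evCopyDone are just illustrative, and error checking is omitted.)

#include <cuda_runtime.h>

__global__ void someKernel(float *d) { d[0] += 1.0f; }

int main()
{
    size_t size = 1024 * sizeof(float);
    float *h_buf, *d_buf;
    cudaMallocHost((void**)&h_buf, size);   // pinned host memory so the async copy can overlap
    cudaMalloc((void**)&d_buf, size);

    cudaStream_t copyStream, kernelStream;  // one stream for D, one for R
    cudaStreamCreate(&copyStream);
    cudaStreamCreate(&kernelStream);

    cudaEvent_t evCopyDone;
    cudaEventCreate(&evCopyDone);

    // D: copy to device on copyStream, then record "copy finished"
    cudaMemcpyAsync(d_buf, h_buf, size, cudaMemcpyHostToDevice, copyStream);
    cudaEventRecord(evCopyDone, copyStream);

    // R: kernelStream waits only for the copy event, then runs the kernel
    cudaStreamWaitEvent(kernelStream, evCopyDone, 0);
    someKernel<<<1, 1, 0, kernelStream>>>(d_buf);

    cudaDeviceSynchronize();
    return 0;
}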

I have two scenarios. In Scenario1, "start R1" is placed before the kernels and "start R2" is placed between the kernels. In Scenario2, both "start R1" and "start R2" are placed before the kernels.

For the pseudocode given below, Scenario1 and Scenario2 do not behave the same: Scenario1 succeeds in overlapping D2 with R1, while Scenario2 fails to do so. So my question is: when R2 is shorter than R1, why do we have to place "start R2" between the kernels (rather than before them) in order to overlap D2 with R1? (Note that I have also tested the case where R1 is shorter than R2. In that case, placing "start R2" before or between the kernels makes no difference; in both cases we can overlap D2 with R1, and once D2 has finished we can also run R1 and R2 concurrently.)

Here is the pseudocode for Scenario1 and Scenario2 (I use stream1 and stream3 for task1, and stream2 and stream4 for task2):

Scenario1 (succeeds):

start overall

start D1 on stream1
D1 on stream1
stop D1 on stream1

start D2 on stream2
D2 on stream2
stop D2 on stream2

start R1 on stream3

R1 on stream3 //longer

start R2 on stream4 // start R2 is in between kernels

R2 on stream4 //shorter

stop R2 on stream4
stop R1 on stream3

stop overall

Scenario2 (fails):

start overall

start D1 on stream1
D1 on stream1
stop D1 on stream1

start D2 on stream2
D2 on stream2
stop D2 on stream2

start R1 on stream3

start R2 on stream4 // start R2 is before kernels

R1 on stream3 //longer

R2 on stream4 //shorter

stop R2 on stream4
stop R1 on stream3

stop overall 

The overall times of the scenarios are as follows:

Scenario1 = 24.109312 ms

Scenario2 = 31.194496 ms

The expected overall runtime for these scenarios is D1 + R1 = 7 + 17 = 24 ms (we can overlap D2 with R1 while also running R1 and R2 concurrently). Scenario1 achieves this runtime, but Scenario2 does not, because Scenario2 cannot overlap D2 with R1. (D2 takes 7 ms, which is why Scenario2's runtime is 24 + 7 = 31 ms.)

I am also attaching the CUDA code below:

#include <stdio.h>
#include <cuda_runtime.h>
#include <sys/time.h>

// Busy-wait kernel: spins until roughly time_clocks GPU clock ticks have elapsed
__global__ void wait_k(long time_clocks)
{ 
    long start_clock = clock();

    long clock_offset = 0;

    while( clock_offset < time_clocks) {
        clock_offset = clock() - start_clock;
    }
}


void shorterR2_D2_R1_Overlap()
{
float *h_A;
float *d_A, *d_C;
float *h_A2;
float *d_A2, *d_C2;


int N = 10000000;
size_t size = N * sizeof(float); 

cudaMallocHost((void**) &h_A, size);
cudaMallocHost((void**) &h_A2, size);

// Allocate vector in device memory
cudaMalloc((void**)&d_A, size);
cudaMalloc((void**)&d_C, size);
cudaMalloc((void**)&d_A2, size);
cudaMalloc((void**)&d_C2, size);


for (int i = 0; i<N; ++i)
{
h_A[i] = 1;
h_A2[i] = 5;
}

cudaStream_t csStream1, csStream2, csStream3, csStream4;

cudaStreamCreate(&csStream1);
cudaStreamCreate(&csStream2);
cudaStreamCreate(&csStream3);
cudaStreamCreate(&csStream4);

//allocate vars for dummy copy 
float* h_pfDummy;
float* d_pfDummy;
size_t iMemSz = 10 * sizeof(float);
cudaMallocHost((void**) &h_pfDummy, iMemSz);
cudaMalloc((void**)&d_pfDummy, iMemSz);

cudaMemcpyAsync(d_pfDummy, h_pfDummy, iMemSz, cudaMemcpyHostToDevice, csStream1);
cudaMemcpyAsync(d_pfDummy, h_pfDummy, iMemSz, cudaMemcpyHostToDevice, csStream2);

//delete vars of dummy copy 
cudaFree(d_pfDummy);
cudaFreeHost(h_pfDummy);

long time_clocks = 20000000; 
long div = 5;

cudaEvent_t ceEvStart, ceEvStop; 
cudaEventCreate( &ceEvStart );
cudaEventCreate( &ceEvStop );

//diff stream time events
cudaEvent_t ceEvStartCpyDev1, ceEvStopCpyDev1, ceEvStartKer1, ceEvStopKer1;
cudaEventCreate( &ceEvStartCpyDev1 );
cudaEventCreate( &ceEvStopCpyDev1 );
cudaEventCreate( &ceEvStartKer1 );
cudaEventCreate( &ceEvStopKer1 );
cudaEvent_t ceEvStartCpyDev2, ceEvStopCpyDev2, ceEvStartKer2, ceEvStopKer2; 
cudaEventCreate( &ceEvStartCpyDev2 );
cudaEventCreate( &ceEvStopCpyDev2 );
cudaEventCreate( &ceEvStartKer2 );
cudaEventCreate( &ceEvStopKer2 );

//Scenario1: put start R1 before kernels and start R2 between kernels
cudaDeviceSynchronize();

cudaEventRecord(ceEvStart, 0);

cudaEventRecord(ceEvStartCpyDev1, csStream1);
cudaMemcpyAsync(d_A, h_A, size, cudaMemcpyHostToDevice, csStream1);
cudaEventRecord(ceEvStopCpyDev1, csStream1);

cudaEventRecord(ceEvStartCpyDev2, csStream2);
cudaMemcpyAsync(d_A2, h_A2, size, cudaMemcpyHostToDevice, csStream2);
cudaEventRecord(ceEvStopCpyDev2, csStream2);

//insert runker1 start event before concurrent kernels
cudaStreamWaitEvent(csStream3, ceEvStopCpyDev1, 0);
cudaEventRecord(ceEvStartKer1, csStream3); 

wait_k<<<1,1,0,csStream3>>>(time_clocks);

//insert runker2 start event between concurrent kernels
cudaStreamWaitEvent(csStream4, ceEvStopCpyDev2, 0);
cudaEventRecord(ceEvStartKer2, csStream4); 

wait_k<<<1,1,0,csStream4>>>(time_clocks/div);

cudaEventRecord(ceEvStopKer2, csStream4);
cudaEventRecord(ceEvStopKer1, csStream3);

cudaEventRecord(ceEvStop, 0);
cudaDeviceSynchronize();

float fTim1;
cudaEventElapsedTime( &fTim1, ceEvStart, ceEvStop);
printf("Scenario1 overall runtime = %10f\n", fTim1);

//Scenario2: put start R1 and start R2 before kernels
cudaDeviceSynchronize();

cudaEventRecord(ceEvStart, 0);

cudaEventRecord(ceEvStartCpyDev1, csStream1);
cudaMemcpyAsync(d_A, h_A, size, cudaMemcpyHostToDevice, csStream1);
cudaEventRecord(ceEvStopCpyDev1, csStream1);

cudaEventRecord(ceEvStartCpyDev2, csStream2);
cudaMemcpyAsync(d_A2, h_A2, size, cudaMemcpyHostToDevice, csStream2);
cudaEventRecord(ceEvStopCpyDev2, csStream2);

//insert runker1 start event before concurrent kernels
cudaStreamWaitEvent(csStream3, ceEvStopCpyDev1, 0);
cudaEventRecord(ceEvStartKer1, csStream3); 

//insert runker2 start event before concurrent kernels
cudaStreamWaitEvent(csStream4, ceEvStopCpyDev2, 0);
cudaEventRecord(ceEvStartKer2, csStream4); 

wait_k<<<1,1,0,csStream3>>>(time_clocks);

wait_k<<<1,1,0,csStream4>>>(time_clocks/div);

cudaEventRecord(ceEvStopKer2, csStream4);
cudaEventRecord(ceEvStopKer1, csStream3);

cudaEventRecord(ceEvStop, 0);
cudaDeviceSynchronize();

float fTim2;
cudaEventElapsedTime( &fTim2, ceEvStart, ceEvStop);
printf("Scenario2 overall runtime = %10f\n", fTim2);

}

int main()
{
 shorterR2_D2_R1_Overlap();
}

Thank you very much for your help!


1 Answer


Devices of compute capability 1.0 - 3.0 have a single push buffer for submitting work to the GPU. Work is submitted in the order of the CUDA API calls. In Scenario 2 the push buffer cannot execute commands beyond cudaStreamWaitEvent(csStream4, ceEvStopCpyDev2, 0) until ceEvStopCpyDev2 completes. Because the launch of R1 on csStream3 is issued after that call, it cannot be dispatched until D2 finishes, which is why D2 and R1 do not overlap.

The presentation CUDA C/C++ Streams and Concurrency (pdf | video) contains more information on this topic. The "Stream Scheduling" slide contains more details on the issue you have observed.
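For example (an annotated re-ordering of your code above, not taken from the presentation): on such devices the launch of R1 must reach the push buffer before any command that waits on ceEvStopCpyDev2, otherwise the launch itself stalls behind D2.

// Scenario 2 issue order: the csStream4 wait is queued before the R1 launch,
// so the push buffer stalls at that wait and R1 cannot be dispatched until D2 completes.
cudaStreamWaitEvent(csStream3, ceEvStopCpyDev1, 0);
cudaEventRecord(ceEvStartKer1, csStream3);
cudaStreamWaitEvent(csStream4, ceEvStopCpyDev2, 0);   // push buffer blocks here
cudaEventRecord(ceEvStartKer2, csStream4);
wait_k<<<1,1,0,csStream3>>>(time_clocks);             // R1 issued only after D2 finishes
wait_k<<<1,1,0,csStream4>>>(time_clocks/div);

// Scenario 1 issue order: the R1 launch is queued before anything that waits on
// ceEvStopCpyDev2, so R1 can start as soon as D1 is done and overlaps with D2.
cudaStreamWaitEvent(csStream3, ceEvStopCpyDev1, 0);
cudaEventRecord(ceEvStartKer1, csStream3);
wait_k<<<1,1,0,csStream3>>>(time_clocks);             // R1 issued right after D1
cudaStreamWaitEvent(csStream4, ceEvStopCpyDev2, 0);
cudaEventRecord(ceEvStartKer2, csStream4);
wait_k<<<1,1,0,csStream4>>>(time_clocks/div);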

answered 2012-08-21T04:34:21.830