
I am working on a project to parallelize the simulated annealing algorithm used for placement in the VPR (Versatile Place and Route) tool.

Essentially, I need to convert one of the many C files the tool uses into CUDA C. I need one whole section of code to run in parallel on multiple cores, and each core needs to work on its own separate copy of the data, so I think I will have to copy the data from host to device memory.
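
For concreteness, here is a rough sketch of the kind of host-to-device replication I have in mind. The placement_state_t struct, its fields, alloc_device_copies, and num_workers are all illustrative names for this sketch, not taken from VPR, and error checking is omitted:

/* Hypothetical sketch: replicate the placement state once per worker so
 * that each annealing instance on the device gets its own private copy. */
#include <cuda_runtime.h>

typedef struct {
    float cost, bb_cost, timing_cost, delay_cost;   /* illustrative fields */
} placement_state_t;

placement_state_t *alloc_device_copies(const placement_state_t *host_state,
                                        int num_workers)
{
    placement_state_t *d_states = NULL;
    cudaMalloc((void **)&d_states, num_workers * sizeof(*d_states));

    /* Copy the same initial state into every worker's slot. */
    for (int i = 0; i < num_workers; i++)
        cudaMemcpy(&d_states[i], host_state, sizeof(*host_state),
                   cudaMemcpyHostToDevice);

    return d_states;
}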

Is it possible to do the whole thing without modifying the code line by line?

As Janisz suggested, I am attaching the part of the code I am interested in.

while (exit_crit(t, cost, annealing_sched) == 0)
{
   // Starting here, I require this part to run on different cores,
   // not the entire while loop.
   av_cost = 0.;   // These variables should be a local copy for each core.
   av_bb_cost = 0.;
   av_delay_cost = 0.;
   av_timing_cost = 0.;
   sum_of_squares = 0.;
   success_sum = 0;
   inner_crit_iter_count = 1;

   for (inner_iter = 0; inner_iter < move_lim; inner_iter++) {
      // This function try_swap also has to run on different cores and must
      // operate on a local copy of the data, i.e. each core works entirely
      // on its own data. The functions it calls have the same requirement.
      if (try_swap(t, &cost, &bb_cost, &timing_cost,
                   rlim, pins_on_block, placer_opts.place_cost_type,
                   old_region_occ_x, old_region_occ_y, placer_opts.num_regions,
                   fixed_pins, placer_opts.place_algorithm,
                   placer_opts.timing_tradeoff, inverse_prev_bb_cost,
                   inverse_prev_timing_cost, &delay_cost) == 1) {
         success_sum++;
         av_cost += cost;
         av_bb_cost += bb_cost;
         av_timing_cost += timing_cost;
         av_delay_cost += delay_cost;
         sum_of_squares += cost * cost;
      }

#ifdef VERBOSE
      printf("t = %g  cost = %g   bb_cost = %g timing_cost = %g move = %d dmax = %g\n",
             t, cost, bb_cost, timing_cost, inner_iter, d_max);
      if (fabs(bb_cost - comp_bb_cost(CHECK, placer_opts.place_cost_type,
                                      placer_opts.num_regions)) > bb_cost * ERROR_TOL)
         exit(1);
#endif
   }

   moves_since_cost_recompute += move_lim;
   if (moves_since_cost_recompute > MAX_MOVES_BEFORE_RECOMPUTE) {
      new_bb_cost = recompute_bb_cost(placer_opts.place_cost_type,
                                      placer_opts.num_regions);
      if (fabs(new_bb_cost - bb_cost) > bb_cost * ERROR_TOL) {
         printf("Error in try_place:  new_bb_cost = %g, old bb_cost = %g.\n",
                new_bb_cost, bb_cost);
         exit(1);
      }
      bb_cost = new_bb_cost;

      if (placer_opts.place_algorithm == BOUNDING_BOX_PLACE) {
         cost = new_bb_cost;
      }
      moves_since_cost_recompute = 0;
   }

   tot_iter += move_lim;
   success_rat = ((float) success_sum) / move_lim;
   if (success_sum == 0) {
      av_cost = cost;
      av_bb_cost = bb_cost;
      av_timing_cost = timing_cost;
      av_delay_cost = delay_cost;
   }
   else {
      av_cost /= success_sum;
      av_bb_cost /= success_sum;
      av_timing_cost /= success_sum;
      av_delay_cost /= success_sum;
   }
   std_dev = get_std_dev(success_sum, sum_of_squares, av_cost);

#ifndef SPEC
   printf("%11.5g  %10.6g %11.6g  %11.6g  %11.6g %11.6g %11.4g %9.4g %8.3g  %7.4g  %7.4g  %10d  ",
          t, av_cost, av_bb_cost, av_timing_cost, av_delay_cost, place_delay_value,
          d_max, success_rat, std_dev, rlim, crit_exponent, tot_iter);
#endif
   // The while loop continues, but everything up to here is what needs to
   // run on different cores.

To summarize, the code given here, together with the functions it calls, must run on multiple cores simultaneously; that is, the code is executed many times, each run on a separate core.
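
To illustrate what that might look like on the GPU side, here is a hedged kernel sketch in which each thread runs the inner loop on its own placement_state_t slot (the struct is repeated so the snippet stands alone). try_swap_device() is only a stub standing in for a real device-side port of try_swap(), which would be the bulk of the work; none of these names come from VPR:

// Hypothetical kernel sketch: one independent annealing inner loop per thread.
typedef struct {
    float cost, bb_cost, timing_cost, delay_cost;   /* illustrative fields */
} placement_state_t;

// Placeholder for a real device-side port of try_swap(); always rejects moves.
__device__ int try_swap_device(placement_state_t *s, float t, float rlim)
{
    (void)s; (void)t; (void)rlim;
    return 0;
}

__global__ void anneal_inner_loop(placement_state_t *states, float *av_cost_out,
                                  int *success_out, int num_workers,
                                  int move_lim, float t, float rlim)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= num_workers)
        return;

    placement_state_t *my = &states[tid];   /* this thread's private copy */
    float av_cost = 0.f;
    int success_sum = 0;

    for (int inner_iter = 0; inner_iter < move_lim; inner_iter++) {
        if (try_swap_device(my, t, rlim) == 1) {
            success_sum++;
            av_cost += my->cost;
        }
    }

    /* Per-thread statistics are written back for the host to reduce. */
    av_cost_out[tid] = (success_sum > 0) ? av_cost / success_sum : my->cost;
    success_out[tid] = success_sum;
}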


2 Answers


If you would rather not change the code line by line, you could try OpenACC.

OpenACC lets you easily parallelize legacy scientific and technical Fortran and C code through compiler directives, without modifying or adapting the underlying code itself. You only need to identify the regions of code you want to accelerate and insert compiler directives; the compiler then does the work of mapping the original sequential computation onto the parallel accelerator.

I have no personal experience with it, but from some of the conference talks I have attended, the ease of parallelization tends to come with a performance trade-off.
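
As a rough illustration of the directive style the quote describes (the function, the array names, and the toy cost update are invented for this sketch, not taken from VPR):

// Hedged OpenACC sketch: the loop body stays ordinary C; only the directive
// asks the compiler to run the independent chains in parallel.
void run_independent_chains(float *chain_cost, int n_chains, int move_lim)
{
    #pragma acc parallel loop copy(chain_cost[0:n_chains])
    for (int c = 0; c < n_chains; c++) {
        float cost = chain_cost[c];      /* each chain's private state */
        for (int i = 0; i < move_lim; i++) {
            cost *= 0.999f;              /* stand-in for a real annealing move */
        }
        chain_cost[c] = cost;
    }
}

Built with an OpenACC-capable compiler (for example, pgcc -acc), the same source still compiles as plain C when the directive is ignored.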

Answered 2013-06-30T17:02:39.963

Each core needs to work on its own separate copy of the data, so I think I will have to copy the data from host to device memory.

Yes, you will. If it is a "small" matrix, it may fit in the read-only portion of your target CUDA (or OpenCL) device, which can give a significant performance benefit. If not, your target CUDA device probably still has faster memory access than your existing target.
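
As a hedged illustration of that read-only idea, assuming the shared table is small enough for CUDA constant memory (roughly 64 KB total; the names and size here are invented):

#include <cuda_runtime.h>

#define N_ENTRIES 1024                        /* must fit within constant memory */
__constant__ float d_cost_table[N_ENTRIES];   /* read-only on the device side    */

/* Upload the host-side table once; every thread can then read it cheaply. */
void upload_cost_table(const float *h_cost_table)
{
    cudaMemcpyToSymbol(d_cost_table, h_cost_table, N_ENTRIES * sizeof(float));
}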

Is it possible to do the whole thing without modifying the code line by line?

For the most part, yes. If you take one or more of the principal axes of your iterative approach and instead have the body of a single loop use some clever indexing to load its inputs and/or store its results, that is where most of the challenge of the port lies. It will depend on the complexity of the code being ported, but if the algorithm is simple enough it should not be a huge challenge.
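
A minimal sketch of the kind of indexing transformation meant here, assuming the per-core data has been flattened into one array holding n_chains independent copies (all names invented for illustration):

// Each thread derives its (chain, slot) position from its global index and
// only loads from / stores to its own chain's private region.
__global__ void annealing_step(const float *in, float *out,
                               int n_chains, int state_size)
{
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid >= n_chains * state_size)
        return;

    int chain = gid / state_size;    /* which independent copy          */
    int slot  = gid % state_size;    /* which element inside that copy  */

    /* The real per-chain move/update logic would go here; a plain copy
     * is shown only to make the indexing explicit. */
    out[chain * state_size + slot] = in[chain * state_size + slot];
}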

Answered 2013-07-01T04:42:56.670