
I am trying to scatter values among processes belonging to a hypercube group (quicksort project). Depending on the number of processes, I either create a new communicator that excludes the excess processes, or I duplicate MPI_COMM_WORLD if it happens to fit a hypercube exactly (power of 2).

In both cases, processes other than 0 receive their data, but:

- in the first case, process 0 raises a segmentation fault 11;
- in the second case, nothing faults, but the values received by process 0 are gibberish.

Note: if I try a regular MPI_Scatter, everything works fine.

//Input
vector<int> LoadFromFile();

int d;                      //dimension of hypercube
int p;                      //active processes
int idle;                   //idle processes 
vector<int> values;         //values loaded
int arraySize;              //number of total values to distribute

int main(int argc, char* argv[])
{       
int mpiWorldRank;
int mpiWorldSize;

int mpiRank; 
int mpiSize;

MPI_Init(&argc, &argv);

MPI_Comm_rank(MPI_COMM_WORLD, &mpiWorldRank);
MPI_Comm_size(MPI_COMM_WORLD, &mpiWorldSize);
MPI_Comm MPI_COMM_HYPERCUBE;

d = log2(mpiWorldSize);     
p = pow(2, d);                  //Number of processes belonging to the hypercube
idle = mpiWorldSize - p;        //number of processes in excess
int toExclude[idle];            //array of idle processes to exclude from communicator
int sendCounts[p];              //array of values sizes to be sent to processes

//
int i = 0;
while (i < idle)
{
    toExclude[i] = mpiWorldSize - 1 - i;
    ++i;
}

//CREATING HYPERCUBE GROUP: Group of size of power of 2 -----------------
MPI_Group world_group;
MPI_Comm_group(MPI_COMM_WORLD, &world_group);

// Remove excessive processors if any from communicator
if (idle > 0)
{
    MPI_Group newGroup;     
    MPI_Group_excl(world_group, 1, toExclude, &newGroup);
    MPI_Comm_create(MPI_COMM_WORLD, newGroup, &MPI_COMM_HYPERCUBE);
    //Abort any processor not part of the hypercube.    
    if (mpiWorldRank > p)
    {
        cout << "aborting: " << mpiWorldRank <<endl;
        MPI_Finalize();
        return 0;
    }   
}   
else 
{
    MPI_Comm_dup(MPI_COMM_WORLD, &MPI_COMM_HYPERCUBE);
}

MPI_Comm_rank(MPI_COMM_HYPERCUBE, &mpiRank);
MPI_Comm_size(MPI_COMM_HYPERCUBE, &mpiSize);
//END OF: CREATING HYPERCUBE GROUP --------------------------

if (mpiRank == 0)
{
    //STEP1: Read input
    values = LoadFromFile();
    arraySize = values.size();
}

//Transforming input vector into an array
int valuesArray[values.size()];
if(mpiRank == 0)
{
    copy(values.begin(), values.end(), valuesArray);
}

//Broadcast input size to all processes
MPI_Bcast(&arraySize, 1, MPI_INT, 0, MPI_COMM_HYPERCUBE);

//MPI_Scatterv: determining size of arrays to be received and displacement
int nmin = arraySize / p;
int remainingData = arraySize % p;
int displs[p];
int recvCount;

int k = 0;
for (i=0; i<p; i++)
{
    sendCounts[i] = i < remainingData
        ? nmin+1
        : nmin;
    displs[i] = k;
    k += sendCounts[i];
}

recvCount = sendCounts[mpiRank];
int recvValues[recvCount];

//Following MPI_Scatter works well:     
// MPI_Scatter(&valuesArray, 13, MPI_INT, recvValues , 13, MPI_INT, 0, MPI_COMM_HYPERCUBE);

MPI_Scatterv(&valuesArray, sendCounts, displs, MPI_INT, recvValues , recvCount, MPI_INT, 0, MPI_COMM_HYPERCUBE);

int j = 0;
while (j < recvCount)
{
    cout << "rank " << mpiRank << " received: " << recvValues[j] << endl;
    ++j;
}   

MPI_Finalize();
return 0;
}

1 Answer


First of all, you are giving the wrong arguments to MPI_Group_excl:

MPI_Group_excl(world_group, 1, toExclude, &newGroup);
//                          ^

The second argument specifies the number of entries in the exclusion list, so it should be equal to idle. Since you are excluding only a single rank, the resulting group has mpiWorldSize-1 ranks, and MPI_Scatterv therefore expects that both sendCounts[] and displs[] have that many elements. Only p of them are properly initialised; the rest are random, and therefore MPI_Scatterv crashes in the root.

The other error is in the code that aborts the idle processes: it should read if (mpiWorldRank >= p).

I would suggest that you replace the whole exclusion code with a single call to MPI_Comm_split:

MPI_Comm comm_hypercube;
int colour = mpiWorldRank >= p ? MPI_UNDEFINED : 0;

MPI_Comm_split(MPI_COMM_WORLD, colour, mpiWorldRank, &comm_hypercube);
if (comm_hypercube == MPI_COMM_NULL)
{
   MPI_Finalize();
   return 0;
}

When no process supplies MPI_UNDEFINED as its colour, the call is equivalent to MPI_Comm_dup.

Note that you should refrain from giving your own identifiers names starting with MPI_, as those could clash with symbols from the MPI implementation.

An additional note: std::vector<T> uses contiguous storage, so you don't need to copy the elements into a regular array; just provide the address of the first element in the call to MPI_Scatter(v):

MPI_Scatterv(&values[0], ...);
answered Oct 30, 2014 at 13:49