I have a one-to-all broadcast routine for a hypercube, written with MPI:
one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype)
{
  MPI_Status status;
  int mask, partner;
  int mask2 = ((1 << n) - 1) ^ (1 << n-1);

  for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1)
  {
    if (rank & mask2 == 0)
    {
      partner = rank ^ mask;
      if (rank & mask)
        MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status);
      else
        MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD);
    }
  }
}
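To make the intent clearer: what I am trying to implement is the standard hypercube broadcast from rank 0, where at each step every process that already holds the data forwards it across one dimension of the cube. Below is a small standalone sketch of that intent (not the MPI routine itself) that prints the sender/receiver pairs I expect at each step for n = 3:

#include <stdio.h>

/* Standalone sketch (not the MPI routine above): prints the sender -> receiver
   pairs expected at each step of the broadcast for n = 3 (8 processes). */
int main(void)
{
  int n = 3;
  int mask, mask2 = ((1 << n) - 1) ^ (1 << (n - 1));
  int rank;

  for (mask = 1 << (n - 1); mask; mask >>= 1, mask2 >>= 1)
  {
    printf("step mask=%d:", mask);
    for (rank = 0; rank < (1 << n); rank++)
    {
      /* a process takes part in this step if its low bits (mask2) are all zero;
         of each participating pair, the one without the mask bit is the sender */
      if ((rank & mask2) == 0 && !(rank & mask))
        printf("  %d -> %d", rank, rank ^ mask);
    }
    printf("\n");
  }
  return 0;
}

Running this prints 0 -> 4, then 0 -> 2 and 4 -> 6, then 0 -> 1, 2 -> 3, 4 -> 5 and 6 -> 7, which is the pattern I expect the MPI routine to follow.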
I call it from main like this:
int main( int argc, char **argv )
{
  int n, rank;

  MPI_Init (&argc, &argv);
  MPI_Comm_size (MPI_COMM_WORLD, &n);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  one2allbcast(floor(log(n) / log (2)), rank, "message", sizeof(message), MPI_CHAR);

  MPI_Finalize();
  return 0;
}
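For reference, on 8 nodes MPI_Comm_size sets n to 8, so the first argument should evaluate to floor(log(8)/log(2)) = 3 and the routine should perform three steps (masks 4, 2 and 1), assuming the number of processes is a power of two.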
Compiling and running it on 8 nodes, I get a series of errors reporting that processes 1, 3, 5 and 7 stop before receiving any data:
MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD)
Rank (1, MPI_COMM_WORLD): Call stack within LAM:
Rank (1, MPI_COMM_WORLD): - MPI_Recv()
Rank (1, MPI_COMM_WORLD): - main()
MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD)
Rank (3, MPI_COMM_WORLD): Call stack within LAM:
Rank (3, MPI_COMM_WORLD): - MPI_Recv()
Rank (3, MPI_COMM_WORLD): - main()
MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD)
Rank (5, MPI_COMM_WORLD): Call stack within LAM:
Rank (5, MPI_COMM_WORLD): - MPI_Recv()
Rank (5, MPI_COMM_WORLD): - main()
MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD)
Rank (7, MPI_COMM_WORLD): Call stack within LAM:
Rank (7, MPI_COMM_WORLD): - MPI_Recv()
Rank (7, MPI_COMM_WORLD): - main()
Where am I going wrong?