I am trying to find the sum of all the given numbers in an array. I have to split the array into equal-sized parts, send one part to each process, and compute a partial sum there; later each process sends its computed sum back to the root process to get the final answer. I know I can use MPI_Scatter for this. But my question is: what if my list does not divide evenly? For example, I have an array of 13 elements and 3 processes. By default, MPI_Scatter divides the array among the 3 processes and leaves the last element out, so it effectively computes the sum of only 12 elements. The output when I use only MPI_Scatter:
myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
results from all processors_= 66
size= 13
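(To spell out the arithmetic: the full array holds the values 0 through 12, which sum to 78, but with count = 4 per process only 3 × 4 = 12 elements are scattered, so the element with value 12 never reaches any process and the result is 66. A minimal standalone sketch of that integer division, just to illustrate the numbers above:)

    #include <stdio.h>

    int main(void) {
        int size = 13, numnodes = 3;
        int count = size / numnodes;      /* integer division: 4 per process */
        int covered = count * numnodes;   /* 12 elements actually scattered */
        int leftover = size - covered;    /* 1 element (value 12) left behind */
        printf("covered=%d leftover=%d\n", covered, leftover);
        return 0;
    }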
So I plan to use MPI_Scatter together with MPI_Send: the root can take the leftover last element, send it with MPI_Send, and the last process can receive it and add it to its sum. But I ran into a problem (a minimal sketch of the handshake I am aiming for is at the end of this post). My code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

/* globals */
int numnodes, myid, mpi_err;
int last_core;
int n;
int last_elements[];
#define mpi_root 0
/* end globals */

void init_it(int *argc, char ***argv);

void init_it(int *argc, char ***argv) {
    mpi_err = MPI_Init(argc, argv);
    mpi_err = MPI_Comm_size(MPI_COMM_WORLD, &numnodes);
    mpi_err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
}

int main(int argc, char *argv[]) {
    int *myray, *send_ray, *back_ray;
    int count;
    int size, mysize, i, k, j, total;
    MPI_Status status;

    init_it(&argc, &argv);

    /* each processor will get count elements from the root */
    count = 4;
    myray = (int*)malloc(count * sizeof(int));
    size = (count * numnodes) + 1;
    send_ray = (int*)malloc(size * sizeof(int));
    back_ray = (int*)malloc(numnodes * sizeof(int));
    last_core = numnodes - 1;

    /* create the data to be sent on the root */
    if (myid == mpi_root) {
        for (i = 0; i < size; i++) {
            send_ray[i] = i;
        }
    }

    /* send different data to each processor */
    mpi_err = MPI_Scatter(send_ray, count, MPI_INT,
                          myray, count, MPI_INT,
                          mpi_root, MPI_COMM_WORLD);

    if (myid == mpi_root) {
        n = 1;
        memcpy(last_elements, &send_ray[size - n], n * sizeof(int));
        //Send the last numbers to the last core through send command
        MPI_Send(last_elements, n, MPI_INT, last_core, 99, MPI_COMM_WORLD);
    }

    /* each processor does a local sum */
    total = 0;
    for (i = 0; i < count; i++)
        total = total + myray[i];
    //total = total + send_ray[size-1];
    printf("myid= %d total= %d\n", myid, total);

    if (myid == last_core) {
        printf("Last core\n");
        MPI_Recv(last_elements, n, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
    }

    /* send the local sums back to the root */
    mpi_err = MPI_Gather(&total, 1, MPI_INT,
                         back_ray, 1, MPI_INT,
                         mpi_root, MPI_COMM_WORLD);

    /* the root prints the global sum */
    if (myid == mpi_root) {
        total = 0;
        for (i = 0; i < numnodes; i++)
            total = total + back_ray[i];
        printf("results from all processors_= %d \n", total);
        printf("size= %d \n ", size);
    }

    mpi_err = MPI_Finalize();
}
Output:
myid= 0 total= 6
myid= 1 total= 22
myid= 2 total= 38
Last core
[ubuntu:11884] *** An error occurred in MPI_Recv
[ubuntu:11884] *** on communicator MPI_COMM_WORLD
[ubuntu:11884] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:11884] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpiexec has exited due to process rank 2 with PID 11884 on
node ubuntu exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
I know I am doing something wrong here. I would really appreciate it if you could point me in the right direction.
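For reference, here is the send/receive handshake I am trying to achieve, reduced to a minimal sketch. This is only an illustration under my own assumptions: the compile-time constant N stands in for the number of leftover elements (1 in my 13-element example), the value 12 stands in for the leftover element itself, and it needs at least 2 ranks:

    #include <stdio.h>
    #include <mpi.h>

    /* minimal sketch: the root hands the leftover element to the last
       rank; both sides use the same compile-time count N, so sender and
       receiver agree on the message length */
    enum { N = 1 };

    int main(int argc, char *argv[]) {
        int numnodes, myid;
        int leftover[N];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &numnodes);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        if (numnodes > 1) {
            if (myid == 0) {
                leftover[0] = 12;  /* stand-in for the 13th element */
                MPI_Send(leftover, N, MPI_INT, numnodes - 1, 99, MPI_COMM_WORLD);
            }
            if (myid == numnodes - 1) {
                MPI_Recv(leftover, N, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
                printf("last core received %d\n", leftover[0]);
            }
        }
        MPI_Finalize();
        return 0;
    }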