I have a working wavefront program that uses MPJ Express. For an n x m matrix there are n processes, and each process is assigned one row. Each process does the following:
for column = 0 to matrix_width do:
    1) x = get the value of this column from the row above (the rank - 1 process)
    2) y = get the value to the left of us (our row, column - 1)
    3) add (x + y) to our current column value
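
In other words, each cell is updated as

    matrix[row][column] = matrix[row][column] + matrix[row - 1][column] + matrix[row][column - 1]

where any value outside the matrix (row - 1 < 0 or column - 1 < 0) counts as 0, so the values propagate through the matrix as a wavefront.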
So in the master process I declare an array of n x m elements, and each slave process should only have to allocate an array of length m. But as my solution stands, every process has to allocate an array of n x m elements for the Scatter operation to work; otherwise I get a null pointer exception (if I assign it null) or an out-of-bounds exception (if I instantiate it with new int[1]). I'm sure there must be a solution to this, because otherwise every process would need as much memory as the root. I think I need something like allocatable arrays in C.
The important part is the section marked "MASTER" below. Normally I would pull that initialization into the if (rank == 0) test and, in the else branch, initialize the array with null (so that no memory is allocated), but that does not work.
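
Roughly, this is what I would like to write instead (just a sketch of the idea; as described above it fails with a null pointer exception when matrix stays null, or with an out-of-bounds exception when I use new int[1]):

    int[] matrix = null; // ideally only rank 0 allocates the full n x m array
    if (rank == 0)
    {
        matrix = new int[size * size];
        matrix[0] = 1;
    }
    // else: matrix stays null (null pointer) or becomes new int[1] (out of bounds)
    int[] row = new int[size];
    MPI.COMM_WORLD.Scatter(matrix, rank * size, size, MPI.INT, row, 0, size, MPI.INT, 0);

The full program: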
package be.ac.vub.ir.mpi;

import mpi.MPI;

// Execute: mpjrun.sh -np 2 -jar parsym-java.jar

/**
 * Parallel wavefront computation over an n x m matrix.
 */
public class WaveFront
{
    // Default program parameters
    final static int size = 4;
    private static int rank;
    private static int world_size;

    private static void log(String message)
    {
        if (rank == 0)
            System.out.println(message);
    }

    ////////////////////////////////////////////////////////////////////////////
    //// MAIN //////////////////////////////////////////////////////////////////
    ////////////////////////////////////////////////////////////////////////////
    public static void main(String[] args) throws InterruptedException
    {
        // MPI variables
        int[] matrix;        // matrix stored at process 0
        int[] row;           // each process keeps its row
        int[] receiveBuffer; // to receive a value from ``row - 1''
        int[] sendBuffer;    // to send a value to ``row + 1''

        /////////////////
        /// INIT ////////
        /////////////////
        MPI.Init(args);
        rank = MPI.COMM_WORLD.Rank();
        world_size = MPI.COMM_WORLD.Size();

        /////////////////
        /// ALL PCS /////
        /////////////////
        // initialize data structures
        receiveBuffer = new int[1];
        sendBuffer = new int[1];
        row = new int[size];

        /////////////////
        /// MASTER //////
        /////////////////
        matrix = new int[size * size];
        if (rank == 0)
        {
            // Initialize matrix
            for (int idx = 0; idx < size * size; idx++)
                matrix[idx] = 0;
            matrix[0] = 1;
            receiveBuffer[0] = 0;
        }

        /////////////////
        /// PROGRAM /////
        /////////////////
        // distribute the rows of the matrix to the appropriate processes
        int startOfRow = rank * size;
        MPI.COMM_WORLD.Scatter(matrix, startOfRow, size, MPI.INT, row, 0, size, MPI.INT, 0);

        // For each column each process will calculate its new value.
        for (int col_idx = 0; col_idx < size; col_idx++)
        {
            // Get Y from the row above us (rank - 1).
            if (rank > 0)
                MPI.COMM_WORLD.Recv(receiveBuffer, 0, 1, MPI.INT, rank - 1, 0);
            // Get the X value (left of the current column).
            int x = col_idx == 0 ? 0 : row[col_idx - 1];
            // Assign the new Z value.
            row[col_idx] = row[col_idx] + x + receiveBuffer[0];
            // Pass this value on to the row below us (rank + 1).
            sendBuffer[0] = row[col_idx];
            if (rank + 1 < size)
                MPI.COMM_WORLD.Send(sendBuffer, 0, 1, MPI.INT, rank + 1, 0);
        }

        // At this point each process should be done, so we call gather.
        MPI.COMM_WORLD.Gather(row, 0, size, MPI.INT, matrix, startOfRow, size, MPI.INT, 0);

        // Let the master show the result.
        if (rank == 0)
            for (int row_idx = 0; row_idx < size; ++row_idx)
            {
                for (int col_idx = 0; col_idx < size; ++col_idx)
                    System.out.print(matrix[size * row_idx + col_idx] + " ");
                System.out.println();
            }

        MPI.Finalize(); // Don't forget!!
    }
}