I have the following MWE that uses comm.Scatterv and comm.Gatherv to distribute a 4D array across a given number of cores (size):
import numpy as np
from mpi4py import MPI
import matplotlib.pyplot as plt

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

if rank == 0:
    test = np.random.rand(411, 48, 52, 40)  # Create array of random numbers
    outputData = np.zeros(np.shape(test))
    split = np.array_split(test, size, axis=0)  # Split input array by the number of available cores

    split_sizes = []
    for i in range(0, len(split), 1):
        split_sizes = np.append(split_sizes, len(split[i]))
    displacements = np.insert(np.cumsum(split_sizes), 0, 0)[0:-1]

    plt.imshow(test[0, 0, :, :])
    plt.show()

else:
    # Create variables on other cores
    split_sizes = None
    displacements = None
    split = None
    test = None
    outputData = None

# Broadcast variables to other cores
test = comm.bcast(test, root=0)
split = comm.bcast(split, root=0)
split_sizes = comm.bcast(split_sizes, root=0)
displacements = comm.bcast(displacements, root=0)

output_chunk = np.zeros(np.shape(split[rank]))  # Create array to receive subset of data on each core, where rank specifies the core
print("Rank %d with output_chunk shape %s" % (rank, output_chunk.shape))

comm.Scatterv([test, split_sizes, displacements, MPI.DOUBLE], output_chunk, root=0)  # Scatter data from test across cores and receive in output_chunk

output = output_chunk
plt.imshow(output_chunk[0, 0, :, :])
plt.show()
print("Output shape %s for rank %d" % (output.shape, rank))

comm.Barrier()
comm.Gatherv(output, [outputData, split_sizes, displacements, MPI.DOUBLE], root=0)  # Gather output data together

if rank == 0:
    print("Final data shape %s" % (outputData.shape,))
    plt.imshow(outputData[0, 0, :, :])
    plt.show()
This creates a 4D array of random numbers and should, in principle, divide it across size cores before recombining it.
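To illustrate the split I have in mind, here is a standalone sketch of what np.array_split produces along axis 0 (assuming, say, size = 4; any other rank count behaves analogously):

import numpy as np

test = np.random.rand(411, 48, 52, 40)
split = np.array_split(test, 4, axis=0)      # 4 stands in for size here
print([chunk.shape for chunk in split])
# [(103, 48, 52, 40), (103, 48, 52, 40), (103, 48, 52, 40), (102, 48, 52, 40)]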
核心。我希望Scatterv
根据向量split_sizes
和中的起始整数和位移沿轴 0(长度 411)划分displacements
。Gatherv
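For the same assumed size = 4, the bookkeeping arrays built in the MWE come out as follows, and the comment shows the slice I expect each rank to end up with:

import numpy as np

split_sizes = np.array([103., 103., 103., 102.])               # len(split[i]) for each chunk
displacements = np.insert(np.cumsum(split_sizes), 0, 0)[0:-1]  # [  0., 103., 206., 309.]
# Expectation: rank r receives test[int(displacements[r]) : int(displacements[r] + split_sizes[r])]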
However, when recombining with Gatherv I get the error mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated, and the plot of output_chunk on each core shows that most of the input data has been lost, so it appears that no split is happening along the first axis.
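To rule out a plotting problem, I can add a check directly after the Scatterv call; since split is broadcast to every rank in the MWE, each rank can compare what it received with the chunk it should have received:

# Per-rank sanity check, placed directly after comm.Scatterv(...) in the MWE above
expected_chunk = split[rank]
received_ok = np.array_equal(output_chunk, expected_chunk)
print("Rank %d received the expected chunk: %s" % (rank, received_ok))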
My question is: why does the split not happen along the first axis, how can I tell which axis the split happens along, and is it possible to change/specify that axis?