I'm running an MPI application on a cluster with 4 nodes, each with 64 cores. The application performs an all-to-all communication pattern.

Executing the application as follows works fine:

$: mpirun -npernode 36 ./app

Adding one further process per node makes the application crash:

$: mpirun -npernode 37 ./app

--------------------------------------------------------------------------
A process failed to create a queue pair. This usually means either
the device has run out of queue pairs (too many connections) or
there are insufficient resources available to allocate a queue pair
(out of memory). The latter can happen if either 1) insufficient
memory is available, or 2) no more physical memory can be registered
with the device.

For more information on memory registration see the Open MPI FAQs at:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages

Local host:             laser045
Local device:           qib0
Queue pair type:        Reliable connected (RC)
--------------------------------------------------------------------------
[laser045:15359] *** An error occurred in MPI_Issend
[laser045:15359] *** on communicator MPI_COMM_WORLD
[laser045:15359] *** MPI_ERR_OTHER: known error not in list
[laser045:15359] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[laser040:49950] [[53382,0],0]->[[53382,1],30] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 163]
[laser040:49950] [[53382,0],0]->[[53382,1],21] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 154]
--------------------------------------------------------------------------
mpirun has exited due to process rank 128 with PID 15358 on
node laser045 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[laser040:49950] 4 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / ibv_create_qp failed
[laser040:49950] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[laser040:49950] 4 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal

EDIT: added some source code for the all-to-all communication pattern:

// Send data to all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
        continue;
    }

    MPI_Request request;
    MPI_Issend(&data, dataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &request);
    requests.push_back(request);
}

// Recv data from all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
       continue;
    }

    MPI_Status status;
    MPI_Recv(&recvData, recvDataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
}

// Finish communication operations
for(MPI_Request &r: requests){
    MPI_Status status;
    MPI_Wait(&r, &status);
}
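For a sense of scale, the resource pressure can be estimated. Assuming (this is an assumption, not stated in the error output) that the openib BTL creates one reliably connected (RC) queue pair per (local rank, remote rank) pair and that on-node traffic goes over shared memory, the QP count per host channel adapter grows quadratically with the ranks per node:

```python
# Rough estimate of RC queue pairs (QPs) per host channel adapter.
# Assumption: one RC QP per (local rank, remote rank) pair; on-node
# traffic uses the shared-memory BTL and needs no QPs.

def qps_per_hca(nodes: int, ranks_per_node: int) -> int:
    remote_ranks = (nodes - 1) * ranks_per_node
    return ranks_per_node * remote_ranks

for npernode in (36, 37):
    print(npernode, qps_per_hca(4, npernode))
# 36 ranks/node -> 3888 QPs; 37 ranks/node -> 4107 QPs. One extra rank
# per node adds a few hundred QPs, which can cross a device limit.
```

The exact limit depends on the HCA and driver settings, but the quadratic growth explains why a single extra process per node can tip the device over the edge.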

Is there something I can do as a cluster user, or is there any advice I could pass on to the cluster administrator?


2 Answers


The mca_oob_tcp_msg_send_handler error lines probably indicate that the node corresponding to a receiving rank died (e.g. ran out of memory or received a SIGSEGV):

http://www.open-mpi.org/faq/?category=tcp#tcp-connection-errors

The OOB (out-of-band) framework in Open MPI is used for control messages, not for the application's own messages. Those typically go through a byte transfer layer (BTL) such as self, sm, vader, openib (InfiniBand), and so on.
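As an illustration (component names vary between Open MPI versions; `vader` replaced `sm` in later releases, and `./app` stands in for your binary), the BTLs used for application traffic can be pinned explicitly on the command line:

```shell
# Restrict application messages to loopback, shared memory, and InfiniBand.
# Check the available components for your installation with ompi_info.
mpirun --mca btl self,sm,openib -npernode 36 ./app
```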

The output of `ompi_info -a` is useful in that regard.

Finally, the question does not specify that the InfiniBand hardware vendor is Mellanox, so the XRC option may not work (for instance, Intel/QLogic InfiniBand does not support it).

answered 2014-10-31T14:49:09.827

The error is related to the buffer sizes of the MPI message queues, as described here:

http://www.open-mpi.org/faq/?category=openfabrics#ib-xrc

The following environment setting solved my problem:

$ export OMPI_MCA_btl_openib_receive_queues="P,128,256,192,128:S,65536,256,192,128"
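For reference, the same setting can also be passed directly to mpirun via an MCA parameter (`./app` below is a placeholder for the application binary). In the queue specification, `P` denotes a per-peer receive queue and `S` a shared receive queue (SRQ); the first number in each spec is the buffer size in bytes. SRQs consume far fewer resources per remote peer, which is why this setting relieves the queue-pair pressure:

```shell
# Equivalent to the export above, passed as an MCA parameter.
mpirun --mca btl_openib_receive_queues "P,128,256,192,128:S,65536,256,192,128" \
       -npernode 37 ./app
```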

answered 2014-10-27T13:06:14.253