
I'm having trouble running MPI over InfiniBand through Slurm, either as an SBATCH job or with srun.

Open MPI is installed, and if I launch the following test program (called hello) with mpirun -n 30 ./hello, it works.

// compilation: mpicc -o helloMPI helloMPI.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myrank, nproc;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    printf("hello from rank %d of %d\n", myrank, nproc);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

So:

user@master:~/hello$ mpicc -o hello hello.c
user@master:~/hello$ mpirun -n 30 ./hello
--------------------------------------------------------------------------
[[5627,1],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: usNIC
  Host: master

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
hello from rank 25 of 30
hello from rank 1 of 30
hello from rank 6 of 30
[...]
hello from rank 17 of 30

When I try to launch it through Slurm, I get segmentation faults like this:

user@master:~/hello$ srun -n 20 ./hello
[node05:01937] *** Process received signal ***
[node05:01937] Signal: Segmentation fault (11)
[node05:01937] Signal code: Address not mapped (1)
[node05:01937] Failing at address: 0x30
[node05:01937] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7fcf6bf7ecb0]
[node05:01937] [ 1] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x244c6)[0x7fcf679b64c6]
[node05:01937] [ 2] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x254cb)[0x7fcf679b74cb]
[node05:01937] [ 3] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(ompi_btl_openib_connect_base_select_for_local_port+0xb1)[0x7fcf679b2141]
[node05:01937] [ 4] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x10ad0)[0x7fcf679a2ad0]
[node05:01937] [ 5] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_btl_base_select+0x114)[0x7fcf6c209b34]
[node05:01937] [ 6] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x12)[0x7fcf67bca652]
[node05:01937] [ 7] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_bml_base_init+0x69)[0x7fcf6c209359]
[node05:01937] [ 8] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_pml_ob1.so(+0x5975)[0x7fcf65d1b975]
[node05:01937] [ 9] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_pml_base_select+0x35c)[0x7fcf6c21a0bc]
[node05:01937] [10] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(ompi_mpi_init+0x4ed)[0x7fcf6c1cb89d]
[node05:01937] [11] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(MPI_Init+0x16b)[0x7fcf6c1eb56b]
[node05:01937] [12] /home/user/hello/./hello[0x400826]
[node05:01937] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fcf6bbd076d]
[node05:01937] [14] /home/user/hello/./hello[0x400749]
[node05:01937] *** End of error message ***
[node05:01938] *** Process received signal ***
[node05:01938] Signal: Segmentation fault (11)
[node05:01938] Signal code: Address not mapped (1)
[node05:01938] Failing at address: 0x30
[node05:01940] *** Process received signal ***
[node05:01940] Signal: Segmentation fault (11)
[node05:01940] Signal code: Address not mapped (1)
[node05:01940] Failing at address: 0x30
[node05:01938] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f68b2e10cb0]
[node05:01938] [ 1] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x244c6)[0x7f68ae8484c6]
[node05:01938] [ 2] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x254cb)[0x7f68ae8494cb]
[node05:01940] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f8af1d82cb0]
[node05:01940] [ 1] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x244c6)[0x7f8aed7ba4c6]
[node05:01940] [ 2] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x254cb)[0x7f8aed7bb4cb]
[node05:01940] [ 3] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(ompi_btl_openib_connect_base_select_for_local_port+0xb1)[0x7f8aed7b6141]
[node05:01940] [ 4] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x10ad0)[0x7f8aed7a6ad0]
[node05:01938] [ 3] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(ompi_btl_openib_connect_base_select_for_local_port+0xb1)[0x7f68ae844141]
[node05:01938] [ 4] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_btl_openib.so(+0x10ad0)[0x7f68ae834ad0]
[node05:01938] [ 5] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_btl_base_select+0x114)[0x7f68b309bb34]
[node05:01938] [ 6] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x12)[0x7f68aea5c652]
[node05:01940] [ 5] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_btl_base_select+0x114)[0x7f8af200db34]
[node05:01940] [ 6] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_bml_r2.so(mca_bml_r2_component_init+0x12)[0x7f8aed9ce652]
[node05:01938] [ 7] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_bml_base_init+0x69)[0x7f68b309b359]
[node05:01938] [ 8] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_pml_ob1.so(+0x5975)[0x7f68acbad975]
[node05:01940] [ 7] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_bml_base_init+0x69)[0x7f8af200d359]
[node05:01940] [ 8] /opt/cluster/spool/openMPI/1.8/gcc/lib/openmpi/mca_pml_ob1.so(+0x5975)[0x7f8aebb1f975]
[node05:01940] [ 9] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_pml_base_select+0x35c)[0x7f8af201e0bc]
[node05:01938] [ 9] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(mca_pml_base_select+0x35c)[0x7f68b30ac0bc]
[node05:01938] [10] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(ompi_mpi_init+0x4ed)[0x7f68b305d89d]
[node05:01940] [10] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(ompi_mpi_init+0x4ed)[0x7f8af1fcf89d]
[node05:01938] [11] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(MPI_Init+0x16b)[0x7f68b307d56b]
[node05:01938] [12] /home/user/hello/./hello[0x400826]
[node05:01940] [11] /opt/cluster/spool/openMPI/1.8/gcc/lib/libmpi.so.1(MPI_Init+0x16b)[0x7f8af1fef56b]
[node05:01940] [12] /home/user/hello/./hello[0x400826]
[node05:01938] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f68b2a6276d]
[node05:01938] [14] /home/user/hello/./hello[0x400749]
[node05:01938] *** End of error message ***
[node05:01940] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f8af19d476d]
[node05:01940] [14] /home/user/hello/./hello[0x400749]
[node05:01940] *** End of error message ***
[node05:01939] *** Process received signal ***
[node05:01939] Signal: Segmentation fault (11)
[node05:01939] Signal code: Address not mapped (1)
[node05:01939] Failing at address: 0x30
[...] etc.

Does anyone know what the problem is? I built Open MPI with Slurm support and installed the same versions of the compilers and libraries; in fact, all the libraries live on an NFS share that is mounted on every node.
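As a generic sanity check (not specific to this cluster), ompi_info can show whether the Slurm components were actually compiled into this Open MPI build:

# If Slurm support is built in, this should list components such as
# "MCA plm: slurm" and "MCA ras: slurm"; no output suggests the build
# was configured without Slurm support.
ompi_info | grep -i slurm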

Comments:

It should be using InfiniBand, since it is installed, but when I launch Open MPI with mpirun I notice:

[[5627,1],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: usNIC
  Host: cluster

I guess that means "not running over InfiniBand". I have installed the InfiniBand drivers and set up IP over InfiniBand, and Slurm is configured to use the InfiniBand IPs: is that the correct configuration?
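A couple of generic checks can tell whether the HCA is up and whether Open MPI really uses the openib BTL (hello is just the test program above; openib,self,sm is the usual Open MPI 1.8 BTL list, not a value taken from this cluster's configuration):

# Check that the Mellanox HCA and its ports are in the Active state
ibstat
# Confirm that the openib BTL component exists in this Open MPI install
ompi_info | grep openib
# Restrict Open MPI to InfiniBand + shared memory + self, so it fails loudly
# instead of silently falling back to TCP
mpirun -n 30 --mca btl openib,self,sm ./hello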

Thanks in advance, best regards.

Edit:

I just tried compiling it with MPICH2 instead of Open MPI, and it works with Slurm. So the problem is probably related to Open MPI rather than to the Slurm configuration?

Edit 2: I have actually found that with Open MPI 1.6.5 (instead of 1.8) and an SBATCH command instead of srun, my script runs (i.e. it prints the process number, rank and host name), but it shows warnings related to the OpenFabrics device and to the allocation of registered memory:

The OpenFabrics (openib) BTL failed to initialize while trying to
allocate some locked memory.  This typically can indicate that the
memlock limits are set too low.  For most HPC installations, the
memlock limits should be set to "unlimited".  The failure occured
here:

  Local host:    node05
  OMPI source:   btl_openib_component.c:1216
  Function:      ompi_free_list_init_ex_new()
  Device:        mlx4_0
  Memlock limit: 65536

You may need to consult with your system administrator to get this
problem fixed.  This FAQ entry on the Open MPI web site may also be
helpful:

    http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   node05
  Local device: mlx4_0
--------------------------------------------------------------------------
Hello world from process 025 out of 048, processor name node06
Hello world from process 030 out of 048, processor name node06
Hello world from process 032 out of 048, processor name node06
Hello world from process 046 out of 048, processor name node07
Hello world from process 031 out of 048, processor name node06
Hello world from process 041 out of 048, processor name node07
Hello world from process 034 out of 048, processor name node06
Hello world from process 044 out of 048, processor name node07
Hello world from process 033 out of 048, processor name node06
Hello world from process 045 out of 048, processor name node07
Hello world from process 026 out of 048, processor name node06
Hello world from process 043 out of 048, processor name node07
Hello world from process 024 out of 048, processor name node06
Hello world from process 038 out of 048, processor name node07
Hello world from process 014 out of 048, processor name node05
Hello world from process 027 out of 048, processor name node06
Hello world from process 036 out of 048, processor name node07
Hello world from process 019 out of 048, processor name node05
Hello world from process 028 out of 048, processor name node06
Hello world from process 040 out of 048, processor name node07
Hello world from process 023 out of 048, processor name node05
Hello world from process 042 out of 048, processor name node07
Hello world from process 018 out of 048, processor name node05
Hello world from process 039 out of 048, processor name node07
Hello world from process 021 out of 048, processor name node05
Hello world from process 047 out of 048, processor name node07
Hello world from process 037 out of 048, processor name node07
Hello world from process 015 out of 048, processor name node05
Hello world from process 035 out of 048, processor name node06
Hello world from process 020 out of 048, processor name node05
Hello world from process 029 out of 048, processor name node06
Hello world from process 016 out of 048, processor name node05
Hello world from process 017 out of 048, processor name node05
Hello world from process 022 out of 048, processor name node05
Hello world from process 012 out of 048, processor name node05
Hello world from process 013 out of 048, processor name node05
Hello world from process 000 out of 048, processor name node04
Hello world from process 001 out of 048, processor name node04
Hello world from process 002 out of 048, processor name node04
Hello world from process 003 out of 048, processor name node04
Hello world from process 006 out of 048, processor name node04
Hello world from process 009 out of 048, processor name node04
Hello world from process 011 out of 048, processor name node04
Hello world from process 004 out of 048, processor name node04
Hello world from process 007 out of 048, processor name node04
Hello world from process 008 out of 048, processor name node04
Hello world from process 010 out of 048, processor name node04
Hello world from process 005 out of 048, processor name node04
[node04:04390] 47 more processes have sent help message help-mpi-btl-openib.txt / init-fail-no-mem
[node04:04390] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[node04:04390] 47 more processes have sent help message help-mpi-btl-openib.txt / error in device init

What I take from this is that (a) v1.6.5 has better error handling, and (b) I have to configure Open MPI and/or the InfiniBand drivers with a larger registered-memory limit. I saw this page, and apparently I only need to change things on the Open MPI side? I still have to test it...
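For the memlock part, the fix described in the linked FAQ is applied on the nodes and in Slurm rather than inside Open MPI. A minimal sketch, assuming PAM limits are in use on the compute nodes and that you can edit slurm.conf ("unlimited" is the FAQ's general recommendation, not a value from this cluster):

# /etc/security/limits.conf on every compute node: allow unlimited locked memory
*  soft  memlock  unlimited
*  hard  memlock  unlimited

# slurm.conf: keep slurmd from clamping the memlock limit of job steps,
# then restart slurmd on all nodes
PropagateResourceLimitsExcept=MEMLOCK

# Verify from inside a job step
srun -n 1 bash -c 'ulimit -l'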


2 Answers


Two things: to "srun ... mpi_app", you need to do some special things in OMPI. See http://www.open-mpi.org/faq/?category=slurm for how to run Open MPI jobs under SLURM.
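In practice that usually comes down to one of two launch paths; the sketch below is generic (the --with-pmi prefix is an assumption about where Slurm's PMI headers and libraries are installed, and --mpi=pmi2 requires a Slurm built with the pmi2 plugin):

# Option 1: let mpirun do the launching inside a Slurm allocation
salloc -n 20 mpirun ./hello

# Option 2: build Open MPI against Slurm's PMI so srun can start the ranks directly
./configure --with-slurm --with-pmi=/usr   # PMI install prefix is an assumption
make && make install
srun -n 20 --mpi=pmi2 ./hello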

The usnic message looks like a legitimate bug report that you should send to the Open MPI users mailing list:

http://www.open-mpi.org/community/lists/ompi.php

In particular, I'd like to see some details to understand why you are getting the warning message about usNIC (I'm guessing you are not running on a Cisco UCS platform with usNIC installed, but you shouldn't be seeing this message if you have IB installed).

Answered 2014-04-19T15:24:01.490
1. My solution: upgrade to Slurm 14.03.2-1 and OpenMPI 1.8.1.

2. Strangely enough, I ran into this problem (segfault in the openib BTL) on some nodes after a reorganization of the InfiniBand network. I was using Slurm 2.6.9 and OpenMPI 1.8.

On a rack with Dell/AMD Opteron/Mellanox hardware it segfaulted (and it had been working before the network reorganization).

A rack with HP/Intel/Mellanox hardware kept working before and after the reorganization.

It may be related to the InfiniBand topology.
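If the topology is suspect, the standard InfiniBand diagnostic tools give a quick view of the fabric; this is a generic check rather than something from the original answer:

# Dump the fabric topology as seen from this node
ibnetdiscover
# Show link state, width and speed for every port in the fabric
iblinkinfo
# Run a basic fabric health check (requires Mellanox OFED's ibdiagnet)
ibdiagnet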

Answered 2014-05-06T16:56:43.610