
I followed the official documentation for a quick Ceph deployment, and I keep hitting the same error at the step where the OSDs are activated:

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

The command does not work and always produces the same log output:

[2016-01-29 14:19:54,024][ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
[2016-01-29 14:19:54,032][ceph_deploy.cli][INFO  ] Invoked (1.5.30): /usr/bin/ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
[2016-01-29 14:19:54,033][ceph_deploy.cli][INFO  ] ceph-deploy options:
[2016-01-29 14:19:54,033][ceph_deploy.cli][INFO  ]  username                      : None
[2016-01-29 14:19:54,034][ceph_deploy.cli][INFO  ]  verbose                       : False
[2016-01-29 14:19:54,035][ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[2016-01-29 14:19:54,036][ceph_deploy.cli][INFO  ]  subcommand                    : activate
[2016-01-29 14:19:54,037][ceph_deploy.cli][INFO  ]  quiet                         : False
[2016-01-29 14:19:54,038][ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f866bc90368>
[2016-01-29 14:19:54,040][ceph_deploy.cli][INFO  ]  cluster                       : ceph
[2016-01-29 14:19:54,041][ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f866bee75f0>
[2016-01-29 14:19:54,042][ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[2016-01-29 14:19:54,043][ceph_deploy.cli][INFO  ]  default_release               : False
[2016-01-29 14:19:54,044][ceph_deploy.cli][INFO  ]  disk                          : [('node2', '/var/local/osd0', None), ('node3', '/var/local/osd1', None)]
[2016-01-29 14:19:54,058][ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/var/local/osd0: node3:/var/local/osd1:
[2016-01-29 14:19:56,498][node2][DEBUG ] connection detected need for sudo
[2016-01-29 14:19:58,497][node2][DEBUG ] connected to host: node2 
[2016-01-29 14:19:58,516][node2][DEBUG ] detect platform information from remote host
[2016-01-29 14:19:58,601][node2][DEBUG ] detect machine type
[2016-01-29 14:19:58,609][node2][DEBUG ] find the location of an executable
[2016-01-29 14:19:58,613][ceph_deploy.osd][INFO  ] Distro info: debian 8.3 jessie
[2016-01-29 14:19:58,615][ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
[2016-01-29 14:19:58,617][ceph_deploy.osd][DEBUG ] will use init type: systemd
[2016-01-29 14:19:58,622][node2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
[2016-01-29 14:19:58,816][node2][WARNING] DEBUG:ceph-disk:Cluster uuid is eacfd426-58a3-44e8-a6f0-636a6b23e89e
[2016-01-29 14:19:58,818][node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[2016-01-29 14:19:59,401][node2][WARNING] Traceback (most recent call last):
[2016-01-29 14:19:59,403][node2][WARNING]   File "/usr/sbin/ceph-disk", line 3576, in <module>
[2016-01-29 14:19:59,405][node2][WARNING]     main(sys.argv[1:])
[2016-01-29 14:19:59,406][node2][WARNING]   File "/usr/sbin/ceph-disk", line 3530, in main
[2016-01-29 14:19:59,407][node2][WARNING]     args.func(args)
[2016-01-29 14:19:59,409][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2432, in main_activate
[2016-01-29 14:19:59,410][node2][WARNING]     init=args.mark_init,
[2016-01-29 14:19:59,412][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2258, in activate_dir
[2016-01-29 14:19:59,413][node2][WARNING]     (osd_id, cluster) = activate(path, activate_key_template, init)
[2016-01-29 14:19:59,415][node2][WARNING]   File "/usr/sbin/ceph-disk", line 2331, in activate
[2016-01-29 14:19:59,416][node2][WARNING]     raise Error('No cluster conf found in ' + SYSCONFDIR + ' with fsid %s' % ceph_fsid)
[2016-01-29 14:19:59,418][node2][WARNING] __main__.Error: Error: No cluster conf found in /etc/ceph with fsid eacfd426-58a3-44e8-a6f0-636a6b23e89e
[2016-01-29 14:19:59,443][node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[2016-01-29 14:19:59,445][ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
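
From the traceback, the failure appears to come down to an fsid mismatch: ceph-disk reads the cluster fsid recorded in the prepared OSD directory and cannot find a conf file in /etc/ceph with a matching fsid. A minimal check to compare the two values, using the paths from the log above (run on node2):

cat /var/local/osd0/ceph_fsid     # cluster fsid written by `ceph-deploy osd prepare`
grep fsid /etc/ceph/ceph.conf     # should contain the same UUID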

I am working on Debian 8.3. I completed all the preceding steps before the OSD activation. I mounted a 10 GB ext4 partition at /var/local/osd0 on node2 and at /var/local/osd1 on node3. After the osd prepare command some files appeared in those directories, but the osd activate command still does not work.
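
For completeness, the sequence before activation looked roughly like this; the device name /dev/sdb1 is a placeholder for the actual partition on each node:

# on node2 (node3 is analogous with /var/local/osd1):
sudo mkdir -p /var/local/osd0
sudo mount /dev/sdb1 /var/local/osd0
# from the admin node:
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1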

Can anyone help me?


1 Answer


This happened because all of my nodes had the same disk identifier. After I changed the identifier with fdisk, my cluster started working.
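
A minimal sketch of that fix, assuming an MBR (DOS) partition table and /dev/sdb as a placeholder device name; repeat on every node that shares the duplicate identifier:

sudo fdisk /dev/sdb
# inside fdisk:
#   x    enter expert mode
#   i    change the disk identifier (enter a new, unique value)
#   r    return to the main menu
#   w    write the change and exit
# verify the new identifier afterwards:
sudo fdisk -l /dev/sdb | grep -i identifier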

answered 2016-02-03T11:00:53.583