
`ceph status` does not respond if any servers are listed in /etc/ntp.conf
I have 3 ceph nodes on CentOS 7 with this /etc/ntp.conf:

driftfile /var/lib/ntp/drift

restrict 0.0.0.0 mask 0.0.0.0

server 0.ua.pool.ntp.org iburst
server 1.ua.pool.ntp.org iburst
server 2.ua.pool.ntp.org iburst
server 3.ua.pool.ntp.org iburst

includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

and with this /etc/rc.local:

touch /var/lock/subsys/local
/sbin/iptables-restore < /etc/sysconfig/iptables
/sbin/ntpd -gq
/sbin/hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service
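One thing worth knowing about the `ntpd -gq` line in rc.local: `-g` permits an arbitrarily large initial clock step, and `-q` tells ntpd to set the clock once and exit, but it blocks until it reaches the servers in ntp.conf. If those servers are unreachable at boot (DNS not up yet, firewall rules, etc.), it can block rc.local for a long time. A hedged sketch of a bounded variant, using the coreutils `timeout` command (the 30-second limit is an arbitrary choice of mine, not from the original post):

```shell
# Bound the one-shot sync so rc.local can never hang on it indefinitely.
timeout 30 /sbin/ntpd -gq || echo "initial NTP sync failed or timed out" >&2
```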

If I comment out the servers in /etc/ntp.conf:

#server 0.ua.pool.ntp.org iburst
#server 1.ua.pool.ntp.org iburst
#server 2.ua.pool.ntp.org iburst
#server 3.ua.pool.ntp.org iburst

then ceph becomes responsive, but with this output:

health HEALTH_WARN
 clock skew detected on mon.node2, mon.node3
 Monitor clock skew detected
...

systemctl status ntpd.service shows that the service is active and running.

I really can't understand why ceph becomes unresponsive when servers are listed in ntp.conf.
Please help me.


1 Answer


I don't know why, but the problem was caused by the `ntpd -gq` line in /etc/rc.local. That command updates the system time from the servers listed in ntp.conf and then exits.
I still can't figure out why ceph fails after this command, but when I changed it to:

 ntpdate 0.ua.pool.ntp.org

Ceph started working.
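Putting the fix from this answer into the rc.local from the question, the boot script might look like the following sketch (the ntpdate path and the removal of the `systemctl enable` line are my assumptions, not from the answer; `enable` only needs to be run once, not on every boot):

```shell
#!/bin/sh
# /etc/rc.local — sketch incorporating the fix above (assumes ntpdate is installed).
touch /var/lock/subsys/local
/sbin/iptables-restore < /etc/sysconfig/iptables
# One-shot clock step before the ceph monitors start; unlike `ntpd -gq`,
# ntpdate exits promptly even when some pool servers are slow to answer.
/usr/sbin/ntpdate 0.ua.pool.ntp.org
/sbin/hwclock --systohc
# Keep the daemon running afterwards for continuous sync.
systemctl start ntpd.service
```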

Answered 2016-02-04T07:30:11.540