
I have set up a 3-node Kubernetes cluster on 3 VPSes and installed Rook/Ceph.

When I run

kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph bash
ceph status

I get the following result

osd: 0 osds: 0 up, 0 in

I tried

ceph device ls

and the result is

DEVICE  HOST:DEV  DAEMONS  LIFE EXPECTANCY

ceph osd status returns no output.

This is the yaml file I used:

https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
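For reference, the storage section of that example manifest essentially uses the defaults sketched below (paraphrased; useAllNodes, useAllDevices and deviceFilter are fields of the Rook CephCluster CRD). With these settings Rook only creates OSDs on devices it detects as empty, i.e. without partitions or a filesystem:

storage:
  useAllNodes: true
  useAllDevices: true
  # deviceFilter: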

When I use the following command

sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision

the result is

2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"

My disk partitions:

root@node1: lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /

What am I doing wrong here?


2 Answers


After installing and tearing the cluster down several times for testing, I ran into a similar issue where no OSDs showed up in ceph status.

I solved it by running

dd if=/dev/zero of=/dev/sdX bs=1M status=progress

to completely wipe any leftover information from such a raw block disk.
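For completeness, Rook's cleanup documentation recommends a more thorough wipe along these lines (a sketch; /dev/sdX is a placeholder for the disk you intend to give to Ceph, and sgdisk/wipefs need to be available on the host):

# clear GPT/MBR partition tables
sgdisk --zap-all /dev/sdX

# remove any remaining filesystem or LVM signatures
wipefs --all /dev/sdX

# zero the beginning of the disk to clear old bluestore/LVM headers
dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct,dsync

When reinstalling after a teardown, the dataDirHostPath directory (/var/lib/rook by default) usually also has to be deleted on every node, otherwise leftover state from the previous cluster can keep new OSDs from being created.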

Answered 2021-06-17T09:06:52.523

I think that in order to get Rook Ceph to work I should attach an additional raw volume to my nodes, because it does not allow using a partition already mounted on the main disk (see the sketch after the lsblk output below).

Right now it looks like this

root@node1:~/marketing-automation-agency# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /
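Once an additional empty disk is attached (assuming it shows up as sdb; the device name here is just a placeholder), the storage section of cluster.yaml can be pointed at it explicitly, for example:

storage:
  useAllNodes: true
  useAllDevices: false
  deviceFilter: "^sdb"

With useAllDevices set to false and a deviceFilter, the osd-prepare pods only consider devices matching the filter, instead of skipping everything as they did with sda1/sda2 above.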
Answered 2021-05-10T12:46:34.413