I am deploying pgo on Kubernetes by following this guide: https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/, after having tried it successfully on minikube. After running the commands
pgo create cluster -n pgo hippo
kubectl get pods -n pgo
I get the following output:
NAME                                         READY   STATUS      RESTARTS   AGE
hippo-backrest-shared-repo-8ddd75f69-f4jfj   0/1     Pending     0          3s
pgo-deploy-tzw2v                             0/1     Completed   0          17m
postgres-operator-797bcb5d6-mjwxq            4/4     Running     0          15m
If I run:
kubectl describe pods hippo-backrest-shared-repo-8ddd75f69-f4jfj -n pgo
Name:           hippo-backrest-shared-repo-8ddd75f69-f4jfj
Namespace:      pgo
Priority:       0
Node:           <none>
Labels:         name=hippo-backrest-shared-repo
                pg-cluster=hippo
                pg-pod-anti-affinity=preferred
                pgo-backrest-repo=true
                pod-template-hash=8ddd75f69
                service-name=hippo-backrest-shared-repo
                vendor=crunchydata
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/hippo-backrest-shared-repo-8ddd75f69
Containers:
  database:
    Image:      registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest-repo:centos8-13.1-4.6.0
    Port:       2022/TCP
    Host Port:  0/TCP
    Requests:
      memory:  48Mi
    Environment:
      MODE:                        pgbackrest-repo
      PGBACKREST_STANZA:           db
      PGBACKREST_DB_PATH:          /pgdata/hippo
      PGBACKREST_REPO1_PATH:       /backrestrepo/hippo-backrest-shared-repo
      PGBACKREST_PG1_PORT:         5432
      PGBACKREST_LOG_PATH:         /tmp
      PGBACKREST_PG1_SOCKET_PATH:  /tmp
      PGBACKREST_DB_HOST:          hippo
    Mounts:
      /backrestrepo from backrestrepo (rw)
      /etc/pgbackrest/conf.d from pgbackrest-config (rw)
      /sshd from sshd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  sshd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hippo-backrest-repo-config
    Optional:    false
  backrestrepo:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hippo-pgbr-repo
    ReadOnly:   false
  pgbackrest-config:
    Type:                Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:       hippo-config-backrest
    ConfigMapOptional:   0xc00090a479
    SecretName:          hippo-config-backrest
    SecretOptionalName:  0xc00090a47a
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  43s (x4 over 2m44s)  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I have also created a default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
ubuntu@master-node:~$ kubectl get storageclass
NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/no-provisioner   Delete          Immediate           false                  84m
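Since kubernetes.io/no-provisioner does no dynamic provisioning at all, my understanding is that every Pending claim needs a manually pre-created PersistentVolume whose storageClassName matches the class the claim requests. This is a minimal sketch of what I think such a PV would look like; the name hippo-pv, the 10Gi size, the path /mnt/disks/hippo and the node name master-node are placeholders from my own setup, not values taken from the guide:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hippo-pv                      # placeholder name
spec:
  capacity:
    storage: 10Gi                     # placeholder size, must be >= what the PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard          # must match the class the PVC asks for
  local:
    path: /mnt/disks/hippo            # pre-created directory/disk on the node (placeholder)
  nodeAffinity:                       # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master-node         # placeholder node name
A second PV of the same shape would presumably be needed for the hippo-pgbr-repo claim, since each PVC binds to exactly one PV.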
PVC:
ubuntu@master-node:~$ kubectl get pvc -n pgo
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
hippo             Pending                                      standard       12m
hippo-pgbr-repo   Pending                                      standard       12m
ubuntu@master-node:~$ kubectl describe pvc hippo -n pgo
Name:          hippo
Namespace:     pgo
StorageClass:  standard
Status:        Pending
Volume:
Labels:        pg-cluster=hippo
               pgremove=true
               vendor=crunchydata
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  3m54s (x42 over 14m)  persistentvolume-controller  no volume plugin matched name: kubernetes.io/no-provisioner
ubuntu@master-node:~$ kubectl describe pvc hippo-pgbr-repo -n pgo
Name:          hippo-pgbr-repo
Namespace:     pgo
StorageClass:  standard
Status:        Pending
Volume:
Labels:        pg-cluster=hippo
               pgremove=true
               vendor=crunchydata
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       hippo-backrest-shared-repo-8ddd75f69-f4jfj
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  4m22s (x42 over 14m)  persistentvolume-controller  no volume plugin matched name: kubernetes.io/no-provisioner
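In case it is relevant, this is how I would double-check which storage class the operator actually puts on the claims it creates (the PVCs were created by pgo itself, not by me):
kubectl get pvc hippo -n pgo -o jsonpath='{.spec.storageClassName}'
kubectl get pvc hippo-pgbr-repo -n pgo -o jsonpath='{.spec.storageClassName}'
Both should print standard, matching the STORAGECLASS column in the listing above, if I am reading that output correctly.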
How can I fix this and create the correct PersistentVolumeClaims for pgo to use? I can provide further clarification or information if needed. Thank you.
EDIT:
ubuntu@master-node:~$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
local-volume-provisioner-s6cnm   1/1     Running   0          3m21s
ubuntu@master-node:~$ kubectl get pv -n pgo
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
task-pv-pgo   100Gi      RWO            Delete           Available           local-storage            3m44s
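If I am reading this output correctly, the new PV landed in the local-storage class while both claims still request standard, so they can never bind to it. One option I am considering (not sure whether it is the right fix) is to re-point the still-Available PV at the standard class, roughly:
kubectl patch pv task-pv-pgo -p '{"spec":{"storageClassName":"standard"}}'
The alternative would be to make the StorageClass and the PVs agree on a single class name from the start; either way, one Available PV can only satisfy one of the two Pending claims, so a second PV would still be needed.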