I am currently trying to reproduce this tutorial on Minikube:

http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html

I updated the configuration files to use a hostPath as persistent storage on the minikube node.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: myclaim
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
    - metadata:
        name: myclaim

These are the results:

kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM             REASON    AGE
pv0001                                     1Gi        RWO           Retain          Available                               17s
pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           Delete          Bound       default/myclaim             11s

kubectl get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
myclaim   Bound     pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3   1Gi        RWO           14s

kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   10.0.0.1     <none>        443/TCP     3d
mongo        None         <none>        27017/TCP   53s

kubectl get pod
No resources found.


kubectl describe service mongo
Name:           mongo
Namespace:      default
Labels:         name=mongo
Selector:       role=mongo
Type:           ClusterIP
IP:         None
Port:           <unset> 27017/TCP
Endpoints:      <none>
Session Affinity:   None
No events.


kubectl get statefulsets
NAME      DESIRED   CURRENT   AGE
mongo     3         0         4h


kubectl describe statefulsets mongo
Name:           mongo
Namespace:      default
Image(s):       mongo,cvallance/mongo-k8s-sidecar
Selector:       environment=test,role=mongo
Labels:         environment=test,role=mongo
Replicas:       0 current / 3 desired
Annotations:        <none>
CreationTimestamp:  Thu, 30 Mar 2017 18:23:56 +0200
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type      Reason          Message
  --------- --------    -----   ----            -------------   ----      ------          -------
  1s        1s          4       {statefulset }                  Warning   FailedCreate    pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s        1s          4       {statefulset }                  Warning   FailedCreate    pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  1s        0s          4       {statefulset }                  Warning   FailedCreate    pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]


kubectl get ev | grep mongo
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-1, error: PersistentVolumeClaim "myclaim-mongo-1" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
29s        1m          15        mongo      StatefulSet               Warning   FailedCreate              {statefulset }          pvc: myclaim-mongo-2, error: PersistentVolumeClaim "myclaim-mongo-2" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

kubectl describe pvc myclaim
Name:       myclaim
Namespace:  default
StorageClass:   standard
Status:     Bound
Volume:     pvc-134a6c0f-1565-11e7-9cf1-080027f4d8c3
Labels:     <none>
Capacity:   1Gi
Access Modes:   RWO
No events.

minikube version: v0.17.1

The Service does not seem to find any Pods, which makes debugging with kubectl logs complicated. Is there something wrong with the way I am creating the persistent volume on the node?

Thanks a lot!

1 Answer

TL;DR

In the case described in the question, the problem is that the Pods of the StatefulSet are not starting at all, so the Service has no targets. The reason they are not starting is:

Warning FailedCreate pvc: myclaim-mongo-0, error: PersistentVolumeClaim "myclaim-mongo-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

And since by default the volume is defined as required, the Pods will not start without it. So edit the StatefulSet's volumeClaimTemplates to have:

volumeClaimTemplates:
- metadata:
    name: myclaim
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi

(There is no need to create the PersistentVolumeClaim manually.)
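Note that volumeClaimTemplates cannot be modified on an existing StatefulSet, so the simplest way to apply the fix is to delete the StatefulSet and recreate it from the corrected manifest. This is a sketch, assuming the manifest is saved as mongo-statefulset.yaml; deleting the StatefulSet does not delete any PVCs it already created:

kubectl delete statefulset mongo
kubectl apply -f mongo-statefulset.yaml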

More general solution

If you cannot connect to a Service, try this command:

kubectl describe service myservicename

If you see something like this in the output:

Endpoints:      <none>

That means there are no targets (usually Pods) running, or the targets are not ready yet. To find out which is the case, do:

kubectl describe endpoints myservicename

This will list all the endpoints, ready or not. If the endpoints are not ready, investigate the readinessProbe in the Pods (a sketch of such a probe follows below). If no endpoints exist at all, try to find out why by looking at the messages (Events section) of the StatefulSet (Deployment, ReplicaSet, ReplicationController, etc.) itself:

kubectl describe statefulset mystatefulsetname

The same information is also available if you do:

kubectl get ev | grep something
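If the Pods do exist but never become ready, check their readinessProbe. As a point of reference only, a minimal probe for the mongo container from the question could look like this (the probe command and timings are assumptions, not part of the original answer):

readinessProbe:
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
  initialDelaySeconds: 10
  periodSeconds: 10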

If you have made sure they are running and ready, then the labels on the Pods and on the Service do not match.
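To verify that quickly, compare the Service's selector with the labels actually present on the Pods (myservicename and role=mongo below are placeholders taken from the examples in this answer):

kubectl describe service myservicename | grep Selector
kubectl get pods --show-labels
kubectl get pods -l role=mongo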

Answered 2017-03-31T09:40:45.077