Cluster: 1 master, 2 workers

I am deploying a StatefulSet with 3 replicas that uses local-volume PVs (kubernetes.io/no-provisioner storageClass). 2 PVs were created, one for each of the two worker nodes.

Expected: the pods would be scheduled across both workers and share the same volume.

Result: all 3 stateful pods were created on a single worker node. YAML:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-1
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node1 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-2
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node2

---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: test-headless
    port: 8000
  clusterIP: None
  selector:
    app: test
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  labels:
    app: test
spec:
  ports:
  - name: test
    port: 8000
    protocol: TCP
    nodePort: 30063
  type: NodePort
  selector:
    app: test

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-stateful
spec:
  selector:
    matchLabels:
      app: test
  serviceName: test
  replicas: 3
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: container-1
        image: <Image-name>
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8000
        volumeMounts:
        - name: localvolume 
          mountPath: /tmp/
      volumes:
      - name: localvolume
        persistentVolumeClaim:
          claimName: example-local-claim
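
For what it's worth, the placement can be checked with kubectl; the output below is illustrative (columns abridged), using the pod and node names from the manifests above:

kubectl get pods -l app=test -o wide
# NAME              READY   STATUS    NODE
# test-stateful-0   1/1     Running   worker-node1
# test-stateful-1   1/1     Running   worker-node1
# test-stateful-2   1/1     Running   worker-node1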

1 Answer

This happens because Kubernetes by itself does not care about distribution. It does have a mechanism for expressing a desired distribution, called Pod Affinity (here, Pod Anti-Affinity, to keep the pods apart). To distribute the pods across all workers, you can use Pod Anti-Affinity. Furthermore, you can use a soft rule instead of a hard one (I explained the difference here), which is not strict and still allows all pods to be scheduled. For example, the StatefulSet would look like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3 
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: app-name
        image: k8s.gcr.io/super-app:0.8
        ports:
        - containerPort: 21
          name: web

With the required (hard) anti-affinity above, each pod must be scheduled on a worker that is not already running a pod of this app; if there are not enough workers, the remaining pods stay Pending. The soft variant instead tries to place each pod on a new worker but falls back to a node where a pod already exists (a sketch follows below).
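
For reference, a minimal sketch of the soft variant, assuming the same app: my-app labels; only the affinity stanza changes from the manifest above:

      affinity:
        podAntiAffinity:
          # soft rule: prefer spreading across nodes, but never block scheduling
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - my-app
              topologyKey: kubernetes.io/hostname

Because this is only a scheduling preference, a pod that cannot be spread is still scheduled onto an occupied node instead of staying Pending.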

answered 2018-06-12T11:53:04.687