
I created a Cloud Filestore instance on standard GCP and put the cluster running Kubernetes in the same VPC. Following this guide, https://cloud.google.com/filestore/docs/accessing-fileshares, I tried to access the file share instance to use it as persistent storage for my deployment. My deployment is a web application called Apache OFBiz, a suite of business tools used mainly for accounting. Its demo and documentation are available online, as it is open source.

To test whether data survives when a pod is deleted, I exposed the deployment on a public IP, attached a domain I own to that IP, and created a user in the application. The user was created, but when I then deleted the pod on the cluster using Cloud Shell and the pod was recreated, I visited the webapp and it no longer had the user; it was back to its default state. I am not sure what went wrong: the access to the Filestore instance, or the storing and retrieving of data on it. As a note, the webapp has an embedded Apache Derby database. I suppose my question is also whether the guide is sufficient, whether I have to do anything else to make this work, and whether there is anything else I should look at.

Here is my deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2021-03-19T21:08:27Z"
  generation: 2
  labels:
    app: ofbizvpn
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"ofbizvpn"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2021-03-19T21:08:27Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"ofbizvpn"}:
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"ofbiz/data"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:volumes:
              .: {}
              k:{"name":"mypvc"}:
                .: {}
                f:name: {}
                f:persistentVolumeClaim:
                  .: {}
                  f:claimName: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2021-03-19T22:11:44Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-19T23:19:35Z"
  name: ofbizvpn
  namespace: default
  resourceVersion: "3004167"
  selfLink: /apis/apps/v1/namespaces/default/deployments/ofbizvpn
  uid: b2e10550-eabe-47fb-8f51-4e9e89f7e8ea
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ofbizvpn
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ofbizvpn
    spec:
      containers:
      - image: gcr.io/lithe-joy-306319/ofbizvpn
        imagePullPolicy: Always
        name: ofbizvpn
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: ofbiz/data
          name: mypvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: fileserver-claim
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-03-19T21:08:28Z"
    lastUpdateTime: "2021-03-19T22:11:53Z"
    message: ReplicaSet "ofbizvpn-6d458f54cf" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-03-19T23:19:35Z"
    lastUpdateTime: "2021-03-19T23:19:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Here is my persistent volume YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /fileshare1
    server: 10.249.37.194

And here is my persistent volume claim YAML:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver
  resources:
    requests:
      storage: 10Gi
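
To narrow down whether data is actually reaching the Filestore share, one option is a throwaway Pod that mounts the same claim, so its contents can be inspected directly with kubectl exec. This is only a sketch; the Pod name and mount path below are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: fileshare-debug   # illustrative name
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]   # keep the container alive so it can be exec'd into
    volumeMounts:
    - name: fileshare
      mountPath: /mnt/fileshare   # inspect this directory to see what the app wrote
  volumes:
  - name: fileshare
    persistentVolumeClaim:
      claimName: fileserver-claim   # the existing claim from above

Then, for example, kubectl exec fileshare-debug -- ls -la /mnt/fileshare shows whether the application is writing anything to the share at all.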

1 Answer


If you want data persistence, why not use a StatefulSet instead of a Deployment? A StatefulSet is the better fit here.

A Deployment is basically meant for stateless applications, and a StatefulSet for stateful applications. A Deployment does not maintain pod identity, so when a pod is recreated it does not get the previous pod's identity; it gets a new name and identity.

A StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. See the k8s documentation.

An example StatefulSet YAML from the k8s documentation:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

In the above example (a sketch adapting it to the question's setup follows after the list):

  • The headless Service named nginx is used to control the network domain.

  • The StatefulSet named web has a spec indicating that 3 replicas of the nginx container will be launched in unique Pods.

  • The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner.
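
Applied to the question's setup, a minimal sketch of the same pattern might look like the following. It reuses the existing Filestore-backed claim fileserver-claim instead of volumeClaimTemplates (the share already exists), and the absolute mount path /ofbiz/data is an assumption; it should point to wherever the OFBiz image actually keeps its Derby data:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ofbizvpn
spec:
  selector:
    matchLabels:
      app: ofbizvpn
  serviceName: "ofbizvpn"   # requires a matching headless Service
  replicas: 1
  template:
    metadata:
      labels:
        app: ofbizvpn
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: ofbizvpn
        image: gcr.io/lithe-joy-306319/ofbizvpn
        volumeMounts:
        - name: mypvc
          mountPath: /ofbiz/data   # assumption: absolute path where OFBiz stores its Derby data
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: fileserver-claim   # existing Filestore-backed claim from the question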

answered 2021-03-20 at 09:19