
I have just started setting up Kubernetes with 2 nodes (master and minion) on 2 private cloud servers. I have installed it, done the basic configuration, and have it running some simple pods/services from the master to the minion.

My question is:

How can I use persistent storage with a pod when not running on Google Cloud?

For my first test I ran a Ghost blog pod, but if I tear the pod down the changes are lost. I tried adding a volume to the pod, but I can't actually find any documentation on how to do this when not on GCE.

My attempt:

apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
  manifest:
    version: v1beta1
    id: ghost
    containers:
      - name: ghost
        image: ghost
        volumeMounts:
          - name: ghost-persistent-storage
            mountPath: /var/lib/ghost
        ports:
          - hostPort: 8080
            containerPort: 2368
    volumes:
      - name: ghost-persistent-storage
        source:
          emptyDir: {}

Found this: Persistent Installation of MySQL and WordPress on Kubernetes

I can't figure out how to add storage (NFS?) to my test install.


3 Answers


In the new API (v1beta3) we have added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you run into any problems!
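For example, an NFS volume could replace the `emptyDir` in the Ghost pod from the question. A minimal sketch, shown against the stable v1 API rather than the question's v1beta1 manifest; the NFS server address and export path are placeholders to replace with your own:

```yaml
# Hypothetical Ghost pod backed by an NFS volume instead of emptyDir.
# Data written to /var/lib/ghost survives pod restarts because it lives
# on the external NFS server, not on the node.
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost
      ports:
        - hostPort: 8080
          containerPort: 2368
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: 10.0.0.5        # assumption: address of your NFS server
        path: /exports/ghost    # assumption: directory exported by that server
```

The export must already exist and be mountable from every node that can schedule the pod.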

Answered 2015-04-25T19:07:52.663

NFS example: https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs

GlusterFS example: https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs

Hope that helps!

Answered 2015-04-25T20:49:03.097

You can try the https://github.com/suquant/glusterd solution.

GlusterFS servers inside a Kubernetes cluster

The idea is simple: a cluster manager listens to the Kubernetes API and writes each pod's "metadata.name" and pod IP address into /etc/hosts.
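Concretely, once the two pods defined below are running, the manager maintains entries like these in each pod's /etc/hosts (the IP addresses here are purely illustrative):

```
10.244.1.12  gluster1
10.244.2.15  gluster2
```

This is what later allows `gluster peer probe gluster2` to resolve the peer by its pod name.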

1. Create the pods

gluster1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: gluster1
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st01
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}

gluster2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: gluster2
  namespace: mynamespace
  labels:
    component: glusterfs-storage
spec:
  nodeSelector:
    host: st02
  containers:
    - name: glusterfs-server
      image: suquant/glusterd:3.6.kube
      imagePullPolicy: Always
      command:
        - /kubernetes-glusterd
      args:
        - --namespace
        - mynamespace
        - --labels
        - component=glusterfs-storage
      ports:
        - containerPort: 24007
        - containerPort: 24008
        - containerPort: 49152
        - containerPort: 38465
        - containerPort: 38466
        - containerPort: 38467
        - containerPort: 2049
        - containerPort: 111
        - containerPort: 111
          protocol: UDP
      volumeMounts:
        - name: brick
          mountPath: /mnt/brick
        - name: fuse
          mountPath: /dev/fuse
        - name: data
          mountPath: /var/lib/glusterd
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN
            - MKNOD
  volumes:
    - name: brick
      hostPath:
        path: /opt/var/lib/brick1
    - name: fuse
      hostPath:
        path: /dev/fuse
    - name: data
      emptyDir: {}

2. Run the pods

kubectl create -f gluster1.yaml
kubectl create -f gluster2.yaml

3. Manage the GlusterFS servers

kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer probe gluster2"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster peer status"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume create media replica 2 transport tcp,rdma gluster1:/mnt/brick gluster2:/mnt/brick force"
kubectl --namespace=mynamespace exec -ti gluster1 -- sh -c "gluster volume start media"

4. Usage

gluster-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: glusterfs-storage
  namespace: mynamespace
spec:
  ports:
    - name: glusterfs-api
      port: 24007
      targetPort: 24007
    - name: glusterfs-infiniband
      port: 24008
      targetPort: 24008
    - name: glusterfs-brick0
      port: 49152
      targetPort: 49152
    - name: glusterfs-nfs-0
      port: 38465
      targetPort: 38465
    - name: glusterfs-nfs-1
      port: 38466
      targetPort: 38466
    - name: glusterfs-nfs-2
      port: 38467
      targetPort: 38467
    - name: nfs-portmap
      port: 111
      targetPort: 111
    - name: nfs-portmap-udp
      port: 111
      targetPort: 111
      protocol: UDP
    - name: nfs
      port: 2049
      targetPort: 2049
  selector:
    component: glusterfs-storage

Run the service

kubectl create -f gluster-svc.yaml

After that, you can mount the volume over NFS anywhere in the cluster via the hostname "glusterfs-storage.mynamespace".
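A client pod can then consume the replicated "media" volume created above through the built-in NFS volume type, pointing at that service hostname. A minimal sketch; the pod name, image, and mount path are assumptions for illustration:

```yaml
# Hypothetical consumer pod mounting the gluster "media" volume over NFS
# via the glusterfs-storage service defined in gluster-svc.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: media-consumer
  namespace: mynamespace
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: media
          mountPath: /media    # files here are replicated across gluster1/gluster2
  volumes:
    - name: media
      nfs:
        server: glusterfs-storage.mynamespace  # resolved by cluster DNS
        path: /media                           # gluster volume name as NFS export
```

Note that the nodes performing the mount need a working NFS client, and DNS resolution of service names from the node depends on your cluster's DNS setup.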

Answered 2016-07-17T19:26:46.320