
Has anyone recently deployed a k8s application after standing up a cluster via devstack / Magnum?

Using devstack (latest) I have successfully deployed a K8s cluster on OpenStack. This is on a single bare-metal server running Ubuntu 18.04.

openstack coe cluster template create k8s-cluster-template \
                           --image fedora-atomic-27 \
                           --keypair testkey \
                           --external-network public \
                           --dns-nameserver 8.8.8.8 \
                           --flavor m1.small \
                           --docker-volume-size 5 \
                           --network-driver flannel \
                           --coe kubernetes \
                           --volume-driver cinder

openstack coe cluster create k8s-cluster \
                      --cluster-template k8s-cluster-template \
                      --master-count 1 \
                      --node-count 1
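
For anyone reproducing this, the cluster state can be confirmed with the coe commands before moving on (a quick sketch using the same Magnum CLI as above):

openstack coe cluster list
openstack coe cluster show k8s-cluster    # status should reach CREATE_COMPLETE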

While exercising the cluster I ran into a provisioning problem. I am trying to work out where I went wrong and would like to know whether anyone else has seen problems with dynamic provisioning of Cinder volumes on Magnum k8s clusters.

K8s version:

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

The provisioning problem: first, no default storage class is created in Kubernetes. When I deployed something simple with helm (stable/mariadb), the persistent volume claim was never bound. It turns out this is a known Magnum issue with a fix pending.

I created a default one with kubectl:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
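
For completeness, this is roughly how the class gets applied and checked (standard-sc.yaml is just an assumed file name for the manifest above):

kubectl apply -f standard-sc.yaml
kubectl get storageclass    # "standard" should be listed and marked (default)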

After that, the PVC was still stuck in Pending, but running describe on it showed an error:

  Type     Reason              Age                From                         Message
  ----     ------              ----               ----                         -------
  Warning  ProvisioningFailed  55s (x26 over 6m)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory

Looking at the kube-controller-manager process, it is not being passed the cloud-provider or cloud-config command-line arguments:

kube      3111  1.8  4.2 141340 86392 ?        Ssl  Sep19   1:18 /usr/local/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --leader-elect=true --service-account-private-key-file=/etc/kubernetes/certs/service_account_private.key --root-ca-file=/etc/kubernetes/certs/ca.crt

even though those arguments are written to /etc/kubernetes/controller-manager by magnum/heat/cloud-init:

###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--leader-elect=true  --service-account-private-key-file=/etc/kubernetes/certs/service_account_private.key --root-ca-file=/etc/kubernetes/certs/ca.crt --cloud-config=/etc/kubernetes/kube_openstack_config --cloud-provider=openstack"

From the cloud-init output log and "atomic containers list" I can see that the controller manager is started from a docker image. It turns out the image is run via a /usr/bin/kube-controller-manager.sh script. Looking at the image rootfs, this script strips out the --cloud-config / --cloud-provider arguments:

ARGS=$(echo $ARGS | sed s/--cloud-provider=openstack//)
ARGS=$(echo $ARGS | sed s#--cloud-config=/etc/kubernetes/kube_openstack_config##)

Any idea why the image does this?

To make progress I commented out the two sed lines and restarted. I could then verify that the processes had the expected arguments, and the log files showed they were picked up (with complaints that they are deprecated).
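
For reference, the workaround amounts to something like the following (a sketch; it assumes the script is edited in place on the master node and that the controller manager is restarted via its systemd unit, which may differ depending on the image):

# in /usr/bin/kube-controller-manager.sh, comment out the two sed lines:
#ARGS=$(echo $ARGS | sed s/--cloud-provider=openstack//)
#ARGS=$(echo $ARGS | sed s#--cloud-config=/etc/kubernetes/kube_openstack_config##)

sudo systemctl restart kube-controller-manager
ps aux | grep kube-controller-manager    # confirm --cloud-provider/--cloud-config are now present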

Now when I try to install MariaDB via helm, I get an error that the volume creation call failed with a 400:

  Type     Reason              Age              From                         Message
  ----     ------              ----             ----                         -------
  Warning  ProvisioningFailed  9s (x7 over 1m)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": failed to create a 8 GB volume: Invalid request due to incorrect syntax or missing required parameters.

In /var/log/syslog Cinder is complaining, but it provides no additional information:

Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mPOST http://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mPOST http://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;32mDEBUG cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mAction: 'create', calling method: create, body: {"volume":{"availability_zone":"nova","metadata":{"kubernetes.io/created-for/pv/name":"pvc-687269c1-bcf6-11e8-bf16-fa163e3354e2","kubernetes.io/created-for/pvc/name":"data-fantastic-yak-mariadb-master-0","kubernetes.io/created-for/pvc/namespace":"default"},"name":"kubernetes-dynamic-pvc-687269c1-bcf6-11e8-bf16-fa163e3354e2","size":8}}#033[00m #033[00;33m{{(pid=32491) _process_stack /opt/stack/cinder/cinder/api/openstack/wsgi.py:870}}#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;32mDEBUG cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;32m] #033[01;35m#033[00;32mAction: 'create', calling method: create, body: {"volume":{"availability_zone":"nova","metadata":{"kubernetes.io/created-for/pv/name":"pvc-68e9c7c9-bcf6-11e8-bf16-fa163e3354e2","kubernetes.io/created-for/pvc/name":"data-fantastic-yak-mariadb-slave-0","kubernetes.io/created-for/pvc/namespace":"default"},"name":"kubernetes-dynamic-pvc-68e9c7c9-bcf6-11e8-bf16-fa163e3354e2","size":8}}#033[00m #033[00;33m{{(pid=32490) _process_stack /opt/stack/cinder/cinder/api/openstack/wsgi.py:870}}#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-cc10f012-a824-4f05-9aa4-d871603842dc #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mhttp://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes returned with HTTP 400#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: [pid: 32490|app: 0|req: 205/414] 172.24.4.10 () {64 vars in 1329 bytes} [Thu Sep 20 10:31:36 2018] POST /volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes => generated 494 bytes in 7 msecs (HTTP/1.1 400) 5 headers in 230 bytes (2 switches on core 0)
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: #033[00;36mINFO cinder.api.openstack.wsgi [#033[01;36mNone req-7d95ad99-015b-4c59-8072-6e800abbf01f #033[00;36mdemo admin#033[00;36m] #033[01;35m#033[00;36mhttp://192.168.7.172/volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes returned with HTTP 400#033[00m#033[00m
Sep 20 10:31:36 vantiq-dell-02 devstack@c-api.service[32488]: [pid: 32491|app: 0|req: 210/415] 172.24.4.10 () {64 vars in 1329 bytes} [Thu Sep 20 10:31:36 2018] POST /volume/v2/9b400f82c32b43068779637a00d3ea5e/volumes => generated 495 bytes in 7 msecs (HTTP/1.1 400) 5 headers in 230 bytes (2 switches on core 0)
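
To narrow down whether Cinder itself rejects an 8 GB request, an equivalent create can be attempted directly with the CLI (a sketch using the same demo project the cluster runs in; the volume name is arbitrary):

openstack volume create --size 8 --availability-zone nova k8s-test-volume
openstack volume list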

For reference, here is the volume configuration for the master MariaDB pod:

      volumes:
        - name: config
          configMap:
            name: joking-opossum-mariadb-master
        - name: custom-init-scripts
          configMap:
            name: joking-opossum-mariadb-master-init-scripts
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: "mariadb"
          chart: mariadb-4.4.2
          component: "master"
          release: "joking-opossum"
          heritage: "Tiller"
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"

Any insight into this error would be greatly appreciated.


1 Answer


The problem appears to be a bug in the interaction between Kubernetes and Cinder in the latest devstack code (as of 19 Sep 2018). I backed off and deployed using the stable/queens branch, and both problems (missing command-line arguments / PVC not binding) went away. I was able to successfully deploy MariaDB to a 2-node cluster created through Magnum.
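
For anyone wanting to reproduce the redeploy, it was essentially a fresh devstack run pinned to the queens branch; roughly (a sketch assuming a stock devstack checkout with the magnum plugin enabled in local.conf):

cd ~/devstack
git checkout stable/queens
# in local.conf, point the plugin at the same branch, e.g.:
#   enable_plugin magnum https://git.openstack.org/openstack/magnum stable/queens
./unstack.sh
./stack.sh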

Answered 2018-09-20T22:59:03.643