
After a clean install of a 3-node Kubernetes cluster (2 master nodes and 3 worker nodes), the master nodes should also be assigned as worker nodes.

After a successful install, the nodes ended up with the roles below. As shown, the master nodes are missing the worker (node) role.

$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   12d   v1.18.5
node2   Ready    master   12d   v1.18.5
node3   Ready    <none>   12d   v1.18.5

inventory/mycluster/hosts.yaml

all:
  hosts:
    node1:
      ansible_host: 10.1.10.110
      ip: 10.1.10.110
      access_ip: 10.1.10.110
    node2:
      ansible_host: 10.1.10.111
      ip: 10.1.10.111
      access_ip: 10.1.10.111
    node3:
      ansible_host: 10.1.10.112
      ip: 10.1.10.112
      access_ip: 10.1.10.112
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
    vault:
      hosts:
        node1:
        node2:
        node3:

Network plugin: flannel

Command used to invoke Ansible:

ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
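If the inventory is edited later (for example, after adding a host to the kube-node group), the same playbook can be rerun. As a sketch, `--limit` is a standard ansible-playbook flag for restricting a run to specific hosts; the host list below is illustrative, and Kubespray's documentation should be checked before limiting cluster.yml, since some plays gather facts from all hosts:

```shell
# Re-run the Kubespray playbook after editing the inventory.
# --limit restricts execution to the listed hosts (illustrative host list).
ansible-playbook -i inventory/mycluster/hosts.yaml --become --limit node1,node2 cluster.yml
```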

How can I make the master nodes also work as worker nodes?

Output of kubectl describe node node1:

kubectl describe node node1
Name:               node1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"a6:bb:9e:2a:7e:a8"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.1.10.110
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 01 Jul 2020 09:26:15 -0700
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Jul 2020 06:39:58 -0700
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 10 Jul 2020 12:51:05 -0700   Fri, 10 Jul 2020 12:51:05 -0700   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 14 Jul 2020 06:40:02 -0700   Fri, 03 Jul 2020 15:00:26 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 14 Jul 2020 06:40:02 -0700   Fri, 03 Jul 2020 15:00:26 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 14 Jul 2020 06:40:02 -0700   Fri, 03 Jul 2020 15:00:26 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 14 Jul 2020 06:40:02 -0700   Mon, 06 Jul 2020 10:45:01 -0700   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.1.10.110
  Hostname:    node1
Capacity:
  cpu:                8
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32599596Ki
  pods:               110
Allocatable:
  cpu:                7800m
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             31997196Ki
  pods:               110
System Info:
  Machine ID:                 c8690497b9704d2d975c33155c9fa69e
  System UUID:                00000000-0000-0000-0000-AC1F6B96768A
  Boot ID:                    5e3eabe0-7732-4e6d-b25d-7eeec347d6c6
  Kernel Version:             3.10.0-1127.13.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.12
  Kubelet Version:            v1.18.5
  Kube-Proxy Version:         v1.18.5
PodCIDR:                      10.233.64.0/24
PodCIDRs:                     10.233.64.0/24
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
  default                     httpd-deployment-598596ddfc-n56jq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 coredns-dff8fc7d-lb6bh                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d17h
  kube-system                 kube-apiserver-node1                           250m (3%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system                 kube-controller-manager-node1                  200m (2%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system                 kube-flannel-px8cj                             150m (1%)     300m (3%)   64M (0%)         500M (1%)      3d17h
  kube-system                 kube-proxy-6spl2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d17h
  kube-system                 kube-scheduler-node1                           100m (1%)     0 (0%)      0 (0%)           0 (0%)         12d
  kube-system                 kubernetes-metrics-scraper-54fbb4d595-28vvc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d20h
  kube-system                 nodelocaldns-rxs4f                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests       Limits
  --------           --------       ------
  cpu                900m (11%)     300m (3%)
  memory             205860Ki (0%)  856515840 (2%)
  ephemeral-storage  0 (0%)         0 (0%)
  hugepages-1Gi      0 (0%)         0 (0%)
  hugepages-2Mi      0 (0%)         0 (0%)
Events:              <none>

1 Answer


How can I make the master nodes also work as worker nodes?

Remove the NoSchedule taint from the master nodes with the following commands:

kubectl taint node node1 node-role.kubernetes.io/master:NoSchedule-
kubectl taint node node2 node-role.kubernetes.io/master:NoSchedule-

After this, node1 and node2 will behave like worker nodes, and Pods can be scheduled on them.
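To double-check, and to make the masters also show the worker role in `kubectl get nodes`, something like the following should work. The ROLES column is derived from `node-role.kubernetes.io/*` labels, so adding the `node-role.kubernetes.io/node=` label (the one Kubespray applies to workers) is cosmetic but makes the roles visible; treat these commands as a sketch for this particular cluster:

```shell
# Confirm the NoSchedule taint is gone from the masters
kubectl describe node node1 | grep -i taints
kubectl describe node node2 | grep -i taints

# Optionally add the worker role label so ROLES shows both roles
kubectl label node node1 node-role.kubernetes.io/node=
kubectl label node node2 node-role.kubernetes.io/node=

# ROLES should now read "master,node" for node1 and node2
kubectl get nodes
```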

answered 2020-07-14 at 13:11