
Edit: I have been working on making sure IP traffic can flow from Docker to Kubernetes (see https://fedoramagazine.org/docker-and-fedora-32/), and then capturing my journalctl errors. The log is too large to post on Stack Overflow, so here it is: https://creedence.mchambersradio.com/journalctl_kubelet.txt
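
For reference, the kind of firewalld change that article walks through looks roughly like this (a sketch rather than my exact commands; docker0 as the bridge interface name is an assumption):

# Sketch: let traffic on the Docker bridge through and masquerade outbound traffic
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload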

I am trying to run a basic Kubernetes cluster on Fedora 33 VMs. I have three nodes in total that I plan to put into one cluster for some experimentation. I installed a basic Fedora Server 33, removed the zram swap, and installed kubernetes and kubeadm. I have opened all of the recommended ports for Kubernetes in firewalld, and I have set up an SELinux policy so that Kubernetes has the access it needs (SELinux is in permissive mode until I get Kubernetes running; after that I will verify it also works in enforcing mode). A rough sketch of that setup is below.
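
A sketch of the firewalld/SELinux setup (port numbers are the control-plane ports recommended in the kubeadm install docs of that era; treat this as an outline rather than my exact history):

# Control-plane ports recommended for kubeadm
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250-10252/tcp
sudo firewall-cmd --reload

# Permissive SELinux until the cluster works; enforcing will be re-tested afterwards
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config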

I am fairly sure I am doing something silly that is spelled out in the manual, but I just have not found it. I will not be upset if the answer is a link to the manual, but if possible, could you point me to the right section? Thanks.

When I run

sudo kubeadm init --config kubeadm-config.yaml

I get

W0214 15:02:30.550625   14702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8node1.kube.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.50.1 172.16.52.2 172.16.52.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8node1.kube.local localhost] and IPs [172.16.52.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8node1.kube.local localhost] and IPs [172.16.52.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0214 15:02:36.332363   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0214 15:02:36.340457   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0214 15:02:36.341549   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Here is the config YAML file:

[k8admin@k8node1 ~]$ cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: xb11me.fn9fxtdpg5gxvyso
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.52.2
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8node1.kube.local
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.52.2
  - 127.0.0.1
  extraArgs:
    audit-log-maxage: "2"
    audit-log-path: /etc/kubernetes/audit/kube-apiserver-audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
    authorization-mode: Node,RBAC
    feature-gates: TTLAfterFinished=true
  extraVolumes:
  - hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit-policy
    pathType: File
  - hostPath: /var/log/kubernetes/audit
    mountPath: /etc/kubernetes/audit
    name: audit-volume
    pathType: DirectoryOrCreate
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    feature-gates: TTLAfterFinished=true
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.52.0/27
  serviceSubnet: 172.16.50.0/27
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
    feature-gates: TTLAfterFinished=true
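
As the preflight output above suggests, the same config file can be passed to the image pre-pull step; this doubles as a quick check that kubeadm can parse the YAML (a sketch, it does not validate every field):

# Pre-pull the control-plane images using the same config file
sudo kubeadm config images pull --config kubeadm-config.yaml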

Output of systemctl status:

[k8admin@k8node1 ~]$ sudo systemctl status kubelet
[sudo] password for k8admin: 
● kubelet.service - Kubernetes Kubelet Server
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Sun 2021-02-14 15:13:56 CST; 1s ago
       Docs: https://kubernetes.io/docs/concepts/overview/components/#kubelet
             https://kubernetes.io/docs/reference/generated/kubelet/
    Process: 23862 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS (code=>
   Main PID: 23862 (code=exited, status=255/EXCEPTION)
        CPU: 462ms

Feb 14 15:13:56 k8node1.kube.local systemd[1]: kubelet.service: Failed with result 'exit-code'.

And the output of journalctl:

Feb 14 15:15:39 k8node1.kube.local systemd[1]: Started Kubernetes Kubelet Server.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 9970.
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. >
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config f>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config >
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.242407   25384 server.go:417] Version: v1.18.2
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.242896   25384 plugins.go:100] No cloud provider specified.
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.255867   25384 server.go:615] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system conta>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.256032   25384 server.go:622] failed to get the container runtime's cgroup: failed to get container name for docker p>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344674   25384 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344880   25384 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344911   25384 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroup>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345044   25384 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345053   25384 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345058   25384 container_manager_linux.go:306] Creating device plugin manager: true
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345161   25384 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345176   25384 client.go:92] Start docker client with request timeout=2m0s
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.354211   25384 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling ba>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.354255   25384 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.377617   25384 docker_service.go:253] Docker cri networking managed by cni
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.389851   25384 docker_service.go:258] Docker Info: &{ID:KV2D:3HQS:5ENS:ISC6:TJ36:ZZMR:NRFF:74ZF:TWF5:C77P:Y35C:J7AH C>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.389959   25384 docker_service.go:271] Setting cgroupDriver to systemd
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411574   25384 remote_runtime.go:59] parsed scheme: ""
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411595   25384 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411634   25384 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411646   25384 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411713   25384 remote_image.go:50] parsed scheme: ""
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411723   25384 remote_image.go:50] scheme "" not registered, fallback to default scheme
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411737   25384 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411743   25384 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411764   25384 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411783   25384 kubelet.go:317] Watching apiserver
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.428553   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430406   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430608   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430522   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.431237   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.435296   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:41 k8node1.kube.local kubelet[25384]: E0214 15:15:41.261836   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:41 k8node1.kube.local kubelet[25384]: E0214 15:15:41.981080   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:42 k8node1.kube.local kubelet[25384]: E0214 15:15:42.485722   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:44 k8node1.kube.local kubelet[25384]: E0214 15:15:44.757167   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.741740   25384 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chai>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.757726   25384 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.13, apiVersion: 1.40.0
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.758340   25384 server.go:1125] Started kubelet
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.758492   25384 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.759568   25384 event.go:269] Unable to write event: 'Post "https://172.16.52.2:6443/api/v1/namespaces/default/events">
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.761982   25384 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.762596   25384 server.go:145] Starting to listen on 0.0.0.0:10250
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.763392   25384 server.go:393] Adding debug handlers to kubelet server.
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.768934   25384 volume_manager.go:265] Starting Kubelet Volume Manager
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.769572   25384 desired_state_of_world_populator.go:139] Desired state populator starts to run
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.769733   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get "https:>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.769989   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get "https:>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.771286   25384 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://172.16>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.808541   25384 status_manager.go:158] Starting to sync pod status with apiserver
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.808620   25384 kubelet.go:1821] Starting kubelet main sync loop.
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.808709   25384 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.810015   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.810866   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: W0214 15:15:45.847142   25384 container.go:526] Failed to update stats for container "/": failed to parse memory.usage_in_bytes - op>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.868960   25384 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.878661   25384 kubelet.go:2267] node "k8node1.kube.local" not found
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.906578   25384 kubelet_node_status.go:70] Attempting to register node k8node1.kube.local
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.906975   25384 kubelet_node_status.go:92] Unable to register node "k8node1.kube.local" with API server: Post "https:/>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.909125   25384 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed >
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.971755   25384 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "https://172.16>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.979138   25384 kubelet.go:2267] node "k8node1.kube.local" not found
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.980502   25384 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009365   25384 cpu_manager.go:184] [cpumanager] starting with none policy
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009387   25384 cpu_manager.go:185] [cpumanager] reconciling every 10s
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009429   25384 state_mem.go:36] [cpumanager] initializing new in-memory state store
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009622   25384 state_mem.go:88] [cpumanager] updated default cpuset: ""
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009639   25384 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009681   25384 policy_none.go:43] [cpumanager] none policy: Start
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: F0214 15:15:46.009703   25384 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: unable to find data in me>
Feb 14 15:15:46 k8node1.kube.local systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ An ExecStart= process belonging to unit kubelet.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 255.
Feb 14 15:15:46 k8node1.kube.local systemd[1]: kubelet.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
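
The fatal line above is truncated, but it is the point where the kubelet exits. Since the log also shows the cgroup driver being set to systemd, two checks that may be useful for anyone comparing notes (both commands should be available on a stock Fedora 33 / Docker install):

# Which cgroup driver Docker reports (the kubelet log above configures "systemd")
docker info --format '{{.CgroupDriver}}'

# Whether the host runs cgroups v1 (prints "tmpfs") or v2 (prints "cgroup2fs")
stat -fc %T /sys/fs/cgroup/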

This is also my first post on Stack Overflow, although I have been referring to this site for years and have even cited it in academic papers.


2 Answers


As a fix, try turning off swap:

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab
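
Optionally, confirm swap is really off before rebooting, for example:

$ swapon --show   # no output means no active swap
$ free -h         # the Swap line should show 0B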

Then reboot your VM and run:

$ kubeadm reset
$ kubeadm init --ignore-preflight-errors all

See: kubeadm-timeout, kubeadm-swapoff

Answered on 2021-02-15T08:46:03.020

Try uninstalling the service responsible for creating the swap:

sudo dnf remove zram-generator-defaults

This should resolve your issue.
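
After removing the package and rebooting (or running sudo swapoff -a), you can confirm the zram swap device is gone, for example:

zramctl           # should list no zram devices
cat /proc/swaps   # should show no swap entries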

Answered on 2021-08-01T15:42:22.167