I am using microk8s on Ubuntu.

I am trying to run a simple hello world program, but I get an error when the pod is created:

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy

This is the deployment.yaml file I am trying to apply:

apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
          - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
          - containerPort: 50051
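
I apply it with microk8s kubectl (assuming the manifest is saved as deployment.yaml):

microk8s kubectl apply -f deployment.yaml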

This is what I get when I describe the pod:

Events:
  Type     Reason             Age                From                   Message
  ----     ------             ----               ----                   -------
  Normal   Scheduled          31s                default-scheduler      Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
  Normal   Started            30s                kubelet, azeem-ubuntu  Started container python-grpc-hello
  Normal   Pulled             30s                kubelet, azeem-ubuntu  Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
  Normal   Created            30s                kubelet, azeem-ubuntu  Created container python-grpc-hello
  Normal   Pulled             12s (x3 over 31s)  kubelet, azeem-ubuntu  Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
  Normal   Created            12s (x3 over 31s)  kubelet, azeem-ubuntu  Created container esp
  Normal   Started            12s (x3 over 30s)  kubelet, azeem-ubuntu  Started container esp
  Warning  MissingClusterDNS  8s (x10 over 31s)  kubelet, azeem-ubuntu  pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
  Warning  BackOff            8s (x2 over 23s)   kubelet, azeem-ubuntu  Back-off restarting failed container

I have searched a lot and found some answers, but none of them worked for me. I also created kube-dns for this, but I don't know why it still doesn't work. The kube-dns pod is running in the kube-system namespace:

NAME                       READY   STATUS    RESTARTS   AGE
kube-dns-6dbd676f7-dfbjq   3/3     Running   0          22m
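
To check whether the kubelet itself has a cluster DNS IP configured, the kubelet arguments can be inspected on the node (a sketch assuming the default microk8s snap layout; the path may differ between versions):

grep cluster-dns /var/snap/microk8s/current/args/kubelet
# expected something like: --cluster-dns=10.152.183.10
# if the flag is absent, the MissingClusterDNS warning keeps appearing even though the kube-dns pods are Running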

This is what I applied to create kube-dns:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
# Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. So that the Addon Manager does not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only set up the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Please let me know what I am missing.

1 Answer

You haven't specified how you deployed kube-dns, but with microk8s the recommendation is to use CoreDNS. You should not deploy kube-dns or CoreDNS yourself; instead, enable DNS with the command microk8s enable dns, which deploys CoreDNS and configures the cluster DNS for the kubelet.
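
For example (a minimal sketch; exact labels and available subcommands may vary with the microk8s and kubectl versions):

microk8s enable dns
# verify that CoreDNS is running
microk8s kubectl get pods -n kube-system -l k8s-app=kube-dns
# recreate the workload so the pods pick up the cluster DNS setting
microk8s kubectl rollout restart deployment/grpc-hello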

answered 2020-01-01T07:28:32.890