
I am trying to mount an NFS share (outside of the k8s cluster) in my container via DNS lookup. My config is below:

apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: service-a
    image: dockerregistry:5000/centOSservice-a
    command: ["/bin/bash"]
    args: ["/etc/init.d/jboss","start"]
    volumeMounts:
      - name: service-a-vol
        mountPath: /myservice/por/data
  volumes:
    - name: service-a-vol
      nfs:
        server: nfs.service.domain
        path: "/myservice/data"
  restartPolicy: OnFailure 

nslookup of nfs.service.domain works fine from my container. This is achieved via a StubDomain. However, when creating the container it fails to resolve the NFS server. Error:

Warning  FailedMount  <invalid>  kubelet, worker-node-1  MountVolume.SetUp failed for volume "service-a-vol" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol --scope -- mount -t nfs nfs.service.domain:/myservice/data /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol
Output: Running scope as unit run-27293.scope.
mount.nfs: Failed to resolve server nfs.service.domain: Name or service not known
mount.nfs: Operation already in progress

If I modify server: nfs.service.domain to server: 10.10.1.11 it works fine! So to summarise:

  1. DNS resolution of the service works fine
  2. Mounting via DNS resolution does not
  3. Mounting via specific IP address works
  4. I have tried Headless Service instead of StubDomain but the same issue exists
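
For reference, the Headless Service variant mentioned in point 4 would have looked roughly like the sketch below. The service name and port are assumptions; the IP is the one from the question. A headless Service plus a manually managed Endpoints object lets cluster DNS resolve an external host, though as described above the node (not the pod) performs the mount.

```yaml
# Hypothetical sketch of the Headless Service + Endpoints approach.
# Names and port are assumptions; the IP is from the question.
apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  clusterIP: None        # headless: DNS returns the endpoint IPs directly
  ports:
    - port: 2049
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nfs-service      # must match the Service name
subsets:
  - addresses:
      - ip: 10.10.1.11   # the external NFS server
    ports:
      - port: 2049
```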

Any help much appreciated

Update 1: If I add an entry 10.10.1.11 nfs.service.domain to the /etc/hosts file of the worker/master nodes, then my configuration above with server: nfs.service.domain works. This is obviously not a desired workaround...
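
The workaround in Update 1 can be scripted per node; a minimal sketch, assuming the IP and hostname from the question (HOSTS_FILE is parameterised here so the snippet can be tried safely instead of writing to /etc/hosts, which would require root):

```shell
# Hypothetical sketch: append the NFS host mapping if not already present.
# In practice, point HOSTS_FILE at /etc/hosts on every master/worker node.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
touch "$HOSTS_FILE"
# grep guard keeps the operation idempotent across repeated runs
grep -q 'nfs\.service\.domain' "$HOSTS_FILE" || \
  echo '10.10.1.11 nfs.service.domain' >> "$HOSTS_FILE"
```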


4 Answers


As @Giorgio Cerruti pointed out, and as referenced in this GitHub ticket, this is not currently possible because the node needs to be able to resolve the DNS entry and it cannot resolve kube-dns. Two possible solutions are:

  1. Update /etc/hosts on every Kubernetes node so it resolves the NFS endpoint (as per the update above). This is a crude solution.
  2. A more robust fix, which works for this NFS service and any other remote service in the same domain (as the NFS), is to add the remote DNS server to the Kubernetes nodes' resolv.conf:

    search someolddomain.org service.domain xx.xxx.xx
    nameserver 10.10.0.12
    nameserver 192.168.20.22
    nameserver 8.8.4.4

Answered 2018-03-15T20:35:55.307

Try it without "cluster.local", using just the NFS service name.

Answered 2021-11-25T05:49:40.547

Try using the full service name, like this: "[service-name].[service-namespace].svc.cluster.local".

Answered 2022-02-13T04:28:08.123

I am using the full service name and it is working fine for me, like this:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  labels:
    app: alpine
spec:
  containers:
    - name: alpine
      image: "alpine:latest"
      imagePullPolicy: "Always"
      command: [ "tail", "-f", "/dev/null" ]
      resources:
        limits:
          cpu: 100m
          memory: 100Mi
      volumeMounts:
        - mountPath: /nfs
          name: nfs-vol
  volumes:
    - name: nfs-vol
      nfs:
        path: /exports
        server: nfs-server-svc.nfs-test.svc.cluster.local
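
The fully qualified name nfs-server-svc.nfs-test.svc.cluster.local above implies a Service named nfs-server-svc in the nfs-test namespace in front of the NFS server pod. A minimal sketch of such a Service (the selector label is an assumption not present in the answer):

```yaml
# Hypothetical Service backing the DNS name used in the mount above.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server-svc
  namespace: nfs-test
spec:
  selector:
    app: nfs-server      # assumed label on the NFS server pod
  ports:
    - name: nfs
      port: 2049
      protocol: TCP
```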

Before that, I was trying to use just the "plain" service name, and it did not work.

I am on GKE, version "v1.20.10-gke.1600".

You can see more details here.

Thanks.

Answered 2021-11-24T22:57:08.683