
I happened to be checking the permissions of my pod's ServiceAccount and its bindings. Out of curiosity, I wanted to verify whether a token created from that service account, bound to a specific ClusterRole through a ClusterRoleBinding, is read-only. I tried a few things from the Kubernetes documentation, and I'm afraid my understanding is somewhat off. Please help me understand this correctly.

I used the existing ClusterRole view and associated it with my service account my-sa through a ClusterRoleBinding.

APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

SECRET_NAME=$(kubectl get serviceaccount my-sa -o jsonpath='{.secrets[0].name}')

TOKEN=$(kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 --decode)
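Before calling the API directly, the expected permissions can be sanity-checked with `kubectl auth can-i` while impersonating the service account. This is a sketch that assumes `my-sa` lives in the `default` namespace; adjust the namespace if yours differs.

```shell
# With only the "view" ClusterRole bound, a write verb is expected to print "no".
kubectl auth can-i create pods \
  --as=system:serviceaccount:default:my-sa -n test

# A read verb should print "yes" under the "view" role.
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:my-sa -n test
```

If the first command prints "yes", RBAC is not being evaluated the way the binding suggests, which narrows the problem down before any curl calls are involved.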

Then I used the TOKEN to talk to the APISERVER and create a pod in the test namespace:

curl -X POST $APISERVER/api/v1/namespaces/test/pods\?fieldManager=kubectl-run \
-d '{"kind":"Pod",
     "apiVersion":"v1",
     "metadata":
       { "name":"nginx",
         "creationTimestamp":null,
         "labels":{"run":"nginx"}
       },
      "spec": 
          { "containers":
            [ 
              { "name":"nginx",
                "image":"nginx",
                "resources":{}
              } 
            ],
         "restartPolicy":"Always",
         "dnsPolicy":"ClusterFirst"
     },
    "status":{}
  }' --header "Content-Type: application/json" --header "Authorization: Bearer $TOKEN" --header "Accept: application/json, */*" --insecure
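For comparison, a read-only request, which the view ClusterRole does grant, can be issued with the same token. This is a sketch using the same `$APISERVER` and `$TOKEN` variables as above:

```shell
# GET /pods is covered by the "get list watch" verbs of the "view" role,
# so this call should succeed whether or not the POST above is rejected.
curl -s $APISERVER/api/v1/namespaces/test/pods \
  --header "Authorization: Bearer $TOKEN" \
  --header "Accept: application/json" --insecure
```

If both the GET and the POST succeed, the token is effectively unrestricted rather than read-only.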

What happened

  1. This successfully created the pod, and the token could in fact create anything.
  2. I even tried the default ServiceAccount from the default namespace; the behavior was the same.

My understanding:

  1. This token is associated with the ClusterRole view, so it should not allow the token to create any resources.
  2. Isn't this a vulnerability? What if I could obtain a ServiceAccount token from client-side code and use it from outside the cluster?

My setup: Docker Desktop Kubernetes.

Please help me understand what is incorrect here. I cannot figure out the scope of ServiceAccount tokens. I tried googling, but could not find an answer; probably I do not have the right phrase to search for.

Edit 1:

Output of describing the ClusterRole view:

Name:         view
Labels:       kubernetes.io/bootstrapping=rbac-defaults
              rbac.authorization.k8s.io/aggregate-to-edit=true
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                    Non-Resource URLs  Resource Names  Verbs
  ---------                                    -----------------  --------------  -----
  bindings                                     []                 []              [get list watch]
  configmaps                                   []                 []              [get list watch]
  endpoints                                    []                 []              [get list watch]
  events                                       []                 []              [get list watch]
  limitranges                                  []                 []              [get list watch]
  namespaces/status                            []                 []              [get list watch]
  namespaces                                   []                 []              [get list watch]
  persistentvolumeclaims/status                []                 []              [get list watch]
  persistentvolumeclaims                       []                 []              [get list watch]
  pods/log                                     []                 []              [get list watch]
  pods/status                                  []                 []              [get list watch]
  pods                                         []                 []              [get list watch]
  replicationcontrollers/scale                 []                 []              [get list watch]
  replicationcontrollers/status                []                 []              [get list watch]
  replicationcontrollers                       []                 []              [get list watch]
  resourcequotas/status                        []                 []              [get list watch]
  resourcequotas                               []                 []              [get list watch]
  serviceaccounts                              []                 []              [get list watch]
  services/status                              []                 []              [get list watch]
  services                                     []                 []              [get list watch]
  controllerrevisions.apps                     []                 []              [get list watch]
  daemonsets.apps/status                       []                 []              [get list watch]
  daemonsets.apps                              []                 []              [get list watch]
  deployments.apps/scale                       []                 []              [get list watch]
  deployments.apps/status                      []                 []              [get list watch]
  deployments.apps                             []                 []              [get list watch]
  replicasets.apps/scale                       []                 []              [get list watch]
  replicasets.apps/status                      []                 []              [get list watch]
  replicasets.apps                             []                 []              [get list watch]
  statefulsets.apps/scale                      []                 []              [get list watch]
  statefulsets.apps/status                     []                 []              [get list watch]
  statefulsets.apps                            []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling/status  []                 []              [get list watch]
  horizontalpodautoscalers.autoscaling         []                 []              [get list watch]
  cronjobs.batch/status                        []                 []              [get list watch]
  cronjobs.batch                               []                 []              [get list watch]
  jobs.batch/status                            []                 []              [get list watch]
  jobs.batch                                   []                 []              [get list watch]
  daemonsets.extensions/status                 []                 []              [get list watch]
  daemonsets.extensions                        []                 []              [get list watch]
  deployments.extensions/scale                 []                 []              [get list watch]
  deployments.extensions/status                []                 []              [get list watch]
  deployments.extensions                       []                 []              [get list watch]
  ingresses.extensions/status                  []                 []              [get list watch]
  ingresses.extensions                         []                 []              [get list watch]
  networkpolicies.extensions                   []                 []              [get list watch]
  replicasets.extensions/scale                 []                 []              [get list watch]
  replicasets.extensions/status                []                 []              [get list watch]
  replicasets.extensions                       []                 []              [get list watch]
  replicationcontrollers.extensions/scale      []                 []              [get list watch]
  ingresses.networking.k8s.io/status           []                 []              [get list watch]
  ingresses.networking.k8s.io                  []                 []              [get list watch]
  networkpolicies.networking.k8s.io            []                 []              [get list watch]
  poddisruptionbudgets.policy/status           []                 []              [get list watch]
  poddisruptionbudgets.policy                  []                 []              [get list watch]

1 Answer


Summary of the issue, from Neil Cresswell's article on portainer.io:

By default, Docker Desktop and its embedded Kubernetes offering do not enforce any RBAC rules. It lets you create RBAC rules, but it does not enforce them.

All service accounts automatically receive the cluster-admin role by default.

The article says this is easily fixed by running kubectl delete clusterrolebinding docker-for-desktop-binding, after which RBAC rules will be enforced.
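After deleting that binding, the earlier permission check can be repeated to confirm that RBAC is now actually enforced. A sketch, again assuming `my-sa` lives in the `default` namespace:

```shell
# Remove the permissive binding that grants cluster-admin to all service accounts.
kubectl delete clusterrolebinding docker-for-desktop-binding

# This should now print "no": the "view" role does not allow pod creation.
kubectl auth can-i create pods \
  --as=system:serviceaccount:default:my-sa -n test
```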

You can also patch this in your own deployment by re-creating the binding scoped only to kube-system service accounts, so system components keep working while other service accounts lose the blanket cluster-admin grant:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: docker-for-desktop-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
EOF
answered 2021-12-23T23:43:30.983