
Hi everyone, I'm trying to run the Spark Pi example on my k8s cluster. I have installed the Spark operator, pulled the image, and run the following command:

kubectl apply -f ./spark-pi.yaml

The documentation is here.

When I check the logs of the driver pod, it gives:

pkg/mod/k8s.io/client-go@v0.19.6/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:namespace:spark-operator-spark" cannot list resource "pods" in API group "" at the cluster scope

When I check the logs of the operator pod, it gives:

pkg/mod/k8s.io/client-go@v0.19.6/tools/cache/reflector.go:156: Failed to watch *v1.Pod: failed to list *v1.Pod: Unauthorized

Here is the rbac.yaml file I used for the ClusterRole and ClusterRoleBinding (the same file as in the original Helm chart): https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/charts/spark-operator-chart/templates/rbac.yaml
Is there any way to fix this?


1 Answer


Before installing the operator, you need to set up:

- a ServiceAccount
- a RoleBinding
- a namespace for the Spark applications (optional but highly recommended)
- a namespace for the Spark operator (optional but highly recommended)

See the example below:

apiVersion: v1
kind: Namespace
metadata:
  name: spark-operator
---
apiVersion: v1
kind: Namespace
metadata:
  name: spark-apps
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: spark-apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-operator-role
  namespace: spark-apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: spark
  namespace: spark-apps

Taken from https://gist.github.com/dzlab/b546a450a9e8cfa5c8c3ff0a7c9ff091#file-spark-operator-yaml
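In addition, the SparkApplication manifest itself has to reference that service account, otherwise the driver falls back to the default one and you get the same "pods is forbidden" error. A minimal sketch of the relevant part of spark-pi.yaml, assuming the ServiceAccount and namespace names from the example above (field names are from the spark-on-k8s-operator v1beta2 CRD; the rest of the spec is omitted):

```yaml
# Fragment of spark-pi.yaml — only the fields relevant to RBAC are shown.
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: spark-apps      # must match the namespace covered by the binding
spec:
  driver:
    serviceAccount: spark    # the ServiceAccount created above
```

After applying the manifests, you can check that the binding took effect with `kubectl auth can-i list pods --as=system:serviceaccount:spark-apps:spark`, which should answer yes.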

answered 2022-01-20T16:55:27.363