
Just to clarify: can the spark-submit --keytab/--principal options and the --proxy-user option coexist?

We are required to submit jobs as the real business user, but that user has no principal in the Hadoop KDC.

Whenever I use a proxy user together with a Kerberos principal, I hit the exception below.
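
The invocation is roughly of this shape (a sketch; the keytab path, realm, proxy user, and jar name are placeholders, with the principal's short name matching the atlas user seen in the log):

    spark-submit \
      --master yarn \
      --keytab /path/to/atlas.keytab \
      --principal atlas@REALM \
      --proxy-user business_user \
      --class org.sandbox.Main \
      app.jar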

17/02/09 13:51:43 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 379 for atlas on 10.12.118.92:8020
Exception in thread "main" java.io.IOException: java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:888)
        at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:8
        at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2243)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1293)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1288)
        at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1328)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.first(RDD.scala:1327)
        at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:269)
        at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:265)
        at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:242)
        at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:74)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:171)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:44)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
        at org.sandbox.Main$.main(Main.scala:39)
        at org.sandbox.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
        at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:161)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
        at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:870)
        ... 57 more
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: Forbidden
        at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:274)
        at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
        at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:214)

  1. If the proxy-user and principal options cannot coexist, is there documentation for this anywhere?
  2. What is the real use case for the proxy-user option in a kerberized Hadoop environment?

3 Answers


With spark-submit, the --keytab/--principal options and the --proxy-user option cannot be used at the same time.

If they are passed together, spark-submit fails with the following error:

Error: Only one of --proxy-user or --principal can be provided.
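
A quick way to reproduce the check (a sketch; keytab, realm, proxy user, and jar name are placeholders):

    spark-submit --keytab /path/to/user.keytab --principal user@REALM --proxy-user joe app.jar
    # exits immediately with: Error: Only one of --proxy-user or --principal can be provided.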


answered 2018-03-14T06:18:45.583

I was able to use --proxy-user, --principal, and --keytab together with spark-submit. The exception above was caused by the delegation token request being rejected by the Ranger KMS.

So I added the following entries under "Custom kms-site" to make it work:

hadoop.kms.proxyuser.xxx.users=*
hadoop.kms.proxyuser.xxx.hosts=*
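
For reference, outside of Ambari's "Custom kms-site" the same settings would live in kms-site.xml; a sketch (xxx stands for the short name of the principal doing the impersonation, and * is maximally permissive, so narrow it for production):

    <property>
      <name>hadoop.kms.proxyuser.xxx.users</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.kms.proxyuser.xxx.hosts</name>
      <value>*</value>
    </property>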
answered 2017-02-14T06:00:01.153

1) --proxy-user and --principal cannot both be passed to spark-submit. However, you can authenticate as the Kerberos user first and then launch the Spark job under the proxy user:

    kinit -kt USER.keytab USER && spark-submit --proxy-user PROXY-USER *

Note that this will not work if you use Spark together with Hive. Also make sure hadoop.proxyuser.USER.{hosts,groups} is configured correctly (see the sketch after this item).
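
A sketch of those proxy-user entries as they would appear in core-site.xml (USER is the Kerberos identity you kinit as; * allows impersonation from any host and of any group, so scope it down in production):

    <property>
      <name>hadoop.proxyuser.USER.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.USER.groups</name>
      <value>*</value>
    </property>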

2) A superuser named "super" wants to submit a job and access HDFS on behalf of user joe. The superuser has Kerberos credentials, but user joe has none. The tasks must run as user joe, and any file access on the NameNode must also be performed as user joe. The requirement is that user joe can connect to the NameNode or JobTracker over a connection authenticated with super's Kerberos credentials. In other words, super is impersonating user joe.
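The same impersonation is available programmatically through Hadoop's UserGroupInformation API; a minimal sketch (it assumes super has already obtained Kerberos credentials, e.g. via kinit, and the HDFS path is a placeholder):

    import java.security.PrivilegedExceptionAction
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.hadoop.security.UserGroupInformation

    object ProxyUserSketch {
      def main(args: Array[String]): Unit = {
        // super's credentials come from the current login context (kinit / keytab)
        val superUgi = UserGroupInformation.getLoginUser
        // joe needs no credentials of his own; super vouches for him
        val joeUgi = UserGroupInformation.createProxyUser("joe", superUgi)
        joeUgi.doAs(new PrivilegedExceptionAction[Unit] {
          override def run(): Unit = {
            // everything in here runs as joe, including NameNode file access
            val fs = FileSystem.get(new Configuration())
            fs.listStatus(new Path("/user/joe")).foreach(s => println(s.getPath))
          }
        })
      }
    }
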

answered 2017-02-09T14:26:03.067