
I am inserting into a Hive table from a Spark DataFrame. Even though the application is submitted as user "myuser", some of the hive-staging part files are created under the username "mapr". As a result, the final write to the Hive table fails with "access denied" while renaming the staging files. Command:

resultDf.write.mode("append").insertInto(insTable)
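
For context, a minimal sketch of the assumed driver setup around this call (Spark 1.6 API; the app name, table name, and source of resultDf are hypothetical placeholders, not from the question):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Spark 1.6-style setup; Hive support comes from HiveContext
val sc = new SparkContext(new SparkConf().setAppName("RunKeying"))
val sqlContext = new HiveContext(sc)

val insTable = "da_mydb.da_primary"                     // hypothetical target table
val resultDf = sqlContext.table("da_mydb.source_tbl")   // hypothetical source DataFrame

// The failing call: appends resultDf's rows into the existing Hive table
resultDf.write.mode("append").insertInto(insTable)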

Error:

Exception in thread "main" org.apache.hadoop.security.AccessControlException: User myuser(user id 2547) has been denied access to rename /ded/data/db/da_mydb.db/managed/da_primary/.hive-staging_hive_2017-12-27_13-25-22_586_3120774356819313410-1/-ext-10000/_temporary/0/task_201712271325_0080_m_000000/part-00000 to /ded/data/db/da_mydb.db/managed/da_primary/.hive-staging_hive_2017-12-27_13-25-22_586_3120774356819313410-1/-ext-10000/part-00000
    at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1112)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:461)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:475)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:392)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:364)
    at org.apache.hadoop.mapred.FileOutputCommitter.commitJob(FileOutputCommitter.java:136)
    at org.apache.spark.sql.hive.SparkHiveWriterContainer.commitJob(hiveWriterContainers.scala:108)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.saveAsHiveFile(InsertIntoHiveTable.scala:85)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:201)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:127)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:276)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
    at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:189)
    at org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:166)
    at com.iri.suppChain.RunKeying$.execXForm(RunKeying.scala:74)
    at com.iri.suppChain.RunKeying$$anonfun$1.apply(RunKeying.scala:36)
    at com.iri.suppChain.RunKeying$$anonfun$1.apply(RunKeying.scala:36)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at com.iri.suppChain.RunKeying$delayedInit$body.apply(RunKeying.scala:36)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)

Environment details:

  • Spark 1.6.1
  • MapR distribution

1 Answer


Try the following and report back:

// Expose the DataFrame to SQL (Spark 1.6 API), then insert via HiveQL
resultDf.registerTempTable("results_tbl")
sqlContext.sql(s"INSERT INTO TABLE $insTable SELECT * FROM results_tbl")
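
The idea is to push the insert through the HiveContext SQL path rather than the DataFrameWriter API. Note that insTable is a Scala variable in the question's code, so its value is interpolated into the SQL string above rather than written literally.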
answered 2017-12-29 17:21