
I have the following problem. I will try to provide as much detail as possible, but if I have missed anything that could be useful here, please don't hesitate to ask.

# spark-defaults:
spark.sql.warehouse.dir = /mnt/data
spark.hadoop.fs.permissions.umask-mode = 007

# hive-site:
<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>true</value>
  <description></description>
</property>
<property>
  <name>hive.metastore.execute.setugi</name>
  <value>true</value>
  <description></description>
</property>

I have tried different combinations of the two hive-site settings above, but the problem persists.

The Spark cluster (standalone: master/workers/shuffle/thrift/history) runs as the spark user (a service account), which is a member of the spark group. There is no HDFS; the file system is distributed and POSIX-compliant (think of it as a commercial HDFS) and is mounted via NFS v3. The Hive metastore lives in PostgreSQL 10.

The Spark warehouse is here:

# ls -l /mnt
drwxrws--- 22 spark spark users 10240 Aug  9 09:31 data

# umask
0007
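As a quick sanity check of what that umask should produce (illustrative, any Linux shell with GNU coreutils): with umask 0007, newly created files come out group-readable/writable and closed to others.

```shell
# With umask 0007, a new file gets 0666 & ~0007 = 0660 (rw-rw----),
# i.e. fully accessible to the owning group, invisible to others.
umask 0007
tmp=$(mktemp -d)
touch "$tmp/example"
stat -c '%a' "$tmp/example"   # prints 660
rm -rf "$tmp"
```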

I run a PySpark process as user_1, who is also a member of the spark group. The process creates a database, creates a table, and writes data into the table.

The process fails with the following exception:

18/08/09 09:31:42 ERROR FileFormatWriter: Aborting job null.
java.io.IOException: Failed to rename DeprecatedRawLocalFileStatus{path=file:/mnt/data/new.db/new_table/_temporary/0/task_20180809093142_0002_m_000000/part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet; isDirectory=false; length=39330; replication=1; blocksize=33554432; modification_time=1533821502000; access_time=0; owner=; group=; permission=rw-rw-rw-; isSymlink=false} to file:/mnt/data/new.db/new_table/part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:415)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:428)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:362)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:334)
    at org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:47)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:166)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:213)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
18/08/09 09:31:42 WARN FileUtil: Failed to delete file or dir [/mnt/data/new.db/new_table/_temporary/0/task_20180809093142_0002_m_000000/.part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet.crc]: it still exists.
18/08/09 09:31:42 WARN FileUtil: Failed to delete file or dir [/mnt/data/new.db/new_table/_temporary/0/task_20180809093142_0002_m_000000/part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet]: it still exists.

So the job can neither rename nor delete the files/directories.

Directory structure:

# ls -lR new.db/
new.db/:
total 4
drwxrws--- 3 user_1 spark users 1024 Aug  9 09:31 new_table

new.db/new_table:
total 48
-rw-rw---- 1 user_1 spark users 39330 Aug  9 09:31 part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet
drwxrws--- 3 user_1 spark users   512 Aug  9 09:31 _temporary

new.db/new_table/_temporary:
total 4
drwxrws--- 3 user_1 spark users 512 Aug  9 09:31 0

new.db/new_table/_temporary/0:
total 4
drwxr-sr-x 2 spark spark users 1024 Aug  9 09:31 task_20180809093142_0002_m_000000

new.db/new_table/_temporary/0/task_20180809093142_0002_m_000000:
total 44
-rw-rw---- 1 spark spark users 39330 Aug  9 09:31 part-00000-55f3fe5c-51c2-4a0f-9f0c-dc673f9967b3-c000.snappy.parquet

As you can see, the directories up to and including _temporary/0 are owned by user_1, but the task_ directories inside _temporary/0 are owned by the spark user. Moreover, those task_ directories were created with a umask of 022, not the desired 007.
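The observed modes follow directly from the umask arithmetic: a directory is created as 0o777 & ~umask and a regular file as 0o666 & ~umask (plus the inherited setgid bit on this file system).

```python
# A directory is created with mode 0o777 & ~umask,
# a regular file with mode 0o666 & ~umask.
def resulting_mode(base: int, umask: int) -> int:
    return base & ~umask

# umask 022 (what the workers used): directories come out 755 (drwxr-xr-x),
# matching the task_ directories above.
print(oct(resulting_mode(0o777, 0o022)))  # 0o755
# umask 007 (what was intended): directories come out 770 (drwxrwx---).
print(oct(resulting_mode(0o777, 0o007)))  # 0o770
```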

If I could force the spark user that creates these task_ directories to actually use the correct umask, the problem would be solved.
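One way to confirm which umask a running worker actually has (on Linux 4.7+, /proc exposes it; the pgrep pattern below assumes a standard standalone Worker JVM and may need adjusting for your deployment):

```shell
# Inspect the effective umask of a running Spark standalone worker.
# A child process inherits its parent's umask, so this shows what the
# task_ directories will be created with.
pid=$(pgrep -f 'org.apache.spark.deploy.worker.Worker' | head -n1)
grep Umask "/proc/$pid/status"
```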

Any pointers and suggestions are appreciated.


1 Answer


The problem is caused by a wrong umask in the Worker processes. The workers are launched by $SPARK_HOME/sbin/slaves.sh over ssh as a command, and in that case .bashrc settings are not applied (non-interactive session). The simplest solution is to set umask 002 in $SPARK_HOME/conf/spark-env.sh, because that is a plain shell script sourced by all Spark processes.
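A minimal sketch of that fix (the umask value is the one from this answer; 007 would equally keep files group-writable and match the warehouse setup in the question):

```shell
# $SPARK_HOME/conf/spark-env.sh
# spark-env.sh is sourced by every Spark daemon started via sbin/slaves.sh,
# unlike ~/.bashrc in a non-interactive ssh session, so the umask set here
# reaches the worker JVMs and the task directories they create.
umask 002
```

Restart the workers after changing the file so the new umask takes effect.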

Answered 2018-12-08T20:17:58.860