
I am trying to save my Spark DataFrame (from a Zeppelin notebook running on EMR) to the Glue Catalog in the same AWS account. saveAsTable() works without any problem when I don't use the bucketBy() method. When I do use it, I get an UnknownHostException.

That hostname does not belong to my EMR cluster. When I change the database name, a different hostname is reported.

My questions are: where is that hostname configured? What is it used for? And why does bucketBy need it?

Thanks for your help. Averell

spark.sql("use my_database_1")
my_df.write.partitionBy("dt").mode("overwrite").bucketBy(10, "id").option("path","s3://my-bucket/").saveAsTable("my_table")
java.lang.IllegalArgumentException: java.net.UnknownHostException: ip-10-10-10-71.ourdc.local
  at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
  at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:351)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:285)
  at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2859)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2896)
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2878)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:392)
  at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:496)
  at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$createDataSourceTable(HiveExternalCatalog.scala:399)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:263)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:236)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:236)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:236)
  at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createTable(ExternalCatalogWithListener.scala:94)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:324)
  at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:185)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
  at org.apache.spark.sql.DataFrameWriter.createTable(DataFrameWriter.scala:474)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:453)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:409)
  ... 47 elided
Caused by: java.net.UnknownHostException: ip-10-10-10-71.ourdc.local
  ... 87 more


1 Answer


My question actually contained two separate problems:

  1. Where does the hostname come from?
  2. Why does the problem only show up when using bucketBy?

For problem (1): our Glue database had been created with spark.sql("create database mydb"). This creates a Glue database whose location is set to an HDFS path, which by default contains the EMR master's IP address. 10.10.10.71 was the IP address of our old EMR cluster (already terminated).
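One way to avoid this from the start (a sketch, not part of the original answer; the database and bucket names are illustrative) is to pass an explicit S3 location when creating the database, so Glue never records an HDFS path tied to a particular EMR master:

```python
# Hypothetical preventive step: create the Glue database with an explicit
# S3 location instead of letting it default to HDFS on the EMR master.
ddl = (
    "CREATE DATABASE IF NOT EXISTS my_database_1 "
    "LOCATION 's3://my-bucket/my_database_1/'"
)
# Inside the Zeppelin/Spark session you would then run:
# spark.sql(ddl)
print(ddl)
```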

For problem (2): it seems that when doing bucketBy and sortBy, Spark needs some temporary space before writing to the final destination. The location of that temporary space is the database's default location, with the full path <db_location>/<table_name>-__PLACEHOLDER__.
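That staging path can be illustrated with a small helper (hypothetical, mirroring the pattern described above rather than Spark's actual internals):

```python
def bucketed_staging_path(db_location: str, table_name: str) -> str:
    """Hypothetical helper showing where Spark stages a bucketed table:
    a '<table_name>-__PLACEHOLDER__' child of the database location.
    If db_location is a stale HDFS URI (e.g. pointing at a terminated
    EMR master), resolving it fails with UnknownHostException before
    any data reaches the S3 path given to option("path", ...)."""
    return f"{db_location.rstrip('/')}/{table_name}-__PLACEHOLDER__"

print(bucketed_staging_path(
    "hdfs://ip-10-10-10-71.ourdc.local:8020/user/hive/warehouse/my_database_1.db",
    "my_table",
))
```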

The fix: for (1), the location of the database needs to be updated in Glue. For (2), nothing needs to (or can) be done.
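Fix (1) could be applied programmatically with a boto3 sketch along these lines (the names and S3 path are assumptions; the commented-out calls need AWS credentials and Glue permissions to actually run):

```python
# import boto3  # uncomment to actually call AWS Glue

def make_database_input(name: str, s3_location: str) -> dict:
    # Builds the DatabaseInput payload for glue.update_database().
    # Note: update_database replaces the database definition, so any
    # Description or Parameters you want to keep must be included here.
    return {"Name": name, "LocationUri": s3_location}

db_input = make_database_input("my_database_1", "s3://my-bucket/my_database_1/")

# glue = boto3.client("glue")
# glue.update_database(Name="my_database_1", DatabaseInput=db_input)
```

The same change can presumably be made in the Glue console or with the AWS CLI's `aws glue update-database` command.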

answered 2019-08-30T06:55:46.600