
After I deleted a Google Cloud Storage directory through the Google Cloud Console (the directory had been generated by an earlier Spark (ver 1.3.1) job), re-running the job always fails because the job still sees the directory as existing; yet I cannot find the directory with gsutil.

Is this a bug, or am I missing something? Thanks!

The error I get:

java.lang.RuntimeException: path gs://<my_bucket>/job_dir1/output_1.parquet already exists.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:112)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1196)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:995)
at com.xxx.Job1$.execute(Job1.scala:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
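For reference, the call that fails is the saveAsParquetFile step in the trace. A minimal sketch of that kind of Spark 1.3.1 write (the input path and variable names are illustrative; only the output path is taken from the error message):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: the existing SparkContext
// illustrative input; the real job builds its DataFrame elsewhere
val df = sqlContext.parquetFile("gs://<my_bucket>/job_dir1/input.parquet")
// Spark 1.3.x refuses to write if the target path is reported as already existing,
// so a stale "exists" answer from the GCS connector makes this call throw.
df.saveAsParquetFile("gs://<my_bucket>/job_dir1/output_1.parquet")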

1 Answer


It sounds like you may be hitting a known bug with the NFS list-consistency cache: https://github.com/GoogleCloudPlatform/bigdata-interop/issues/5

It has been fixed in the latest release; if you upgrade by deploying a new cluster with bdutil-1.3.1 (announced here: https://groups.google.com/forum/#!topic/gcp-hadoop-announce/vstNuV0LpDc), the problem should be resolved. If you need to upgrade in place, you can try downloading the latest gcs-connector-1.4.1 jarfile onto your master and worker nodes, replacing the old jarfile under /home/hadoop/hadoop-install/lib/gcs-connector-*.jar, and then restarting the Spark daemons:

sudo sudo -u hadoop /home/hadoop/spark-install/sbin/stop-all.sh
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/start-all.sh
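If you do the in-place swap, a quick way to double-check which connector jar the restarted Spark processes actually loaded is plain reflection from spark-shell (a sketch; the class name is the connector's GoogleHadoopFileSystem, and the manifest version may be absent depending on how the jar was built):

val cls = Class.forName("com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
// location of the jar that ended up on the classpath
println(cls.getProtectionDomain.getCodeSource.getLocation)
// Implementation-Version from the jar manifest, if present
println(Option(cls.getPackage.getImplementationVersion).getOrElse("unknown"))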
answered 2015-07-13T20:55:03.993