
I am working with ADLS Gen2 from a Databricks notebook, trying to process files using an "abfss" path. I can read Parquet files just fine, but when I try to load an XML file I get a configuration-not-found error: Configuration property xxx.dfs.core.windows.net not found.

I have not tried mounting the storage; I am trying to understand whether this is a known limitation for XML files, since reading Parquet files works fine.

This is my XML library configuration: com.databricks:spark-xml_2.11:0.9.0
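For context, on Databricks this coordinate is typically attached to the cluster through the Libraries UI (Maven). Outside Databricks, a hypothetical local session for reproducing the issue could attach the same coordinate via spark.jars.packages:

from pyspark.sql import SparkSession

# Hypothetical local repro session; on Databricks the library is attached
# to the cluster instead of being configured in code.
spark = (SparkSession.builder
         .appName("spark-xml-repro")
         .config("spark.jars.packages", "com.databricks:spark-xml_2.11:0.9.0")
         .getOrCreate())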

I tried a few things based on other articles, but I still get the same error:

  • Added a new secret scope to see whether it was a scope issue in the Databricks workspace.
  • Tried adding the configuration spark.conf.set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx==") and then reading the file as shown below:
df = (spark.read.format("xml")
      .option("rootTag", "BookArticle")
      .option("inferSchema", "true")
      .option("error_bad_lines", True)
      .option("mode", "DROPMALFORMED")
      .load(abfsssourcename))   # abfsssourcename is the abfss path of the source file

Exception Details: Py4JJavaError: An error occurred while calling o1113.load. 
Configuration property xxxx.dfs.core.windows.net not found.
    at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:392)
    at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1008)
    at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:151)
    at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:106)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:500)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:469)
    at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1281)
    at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1269)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:820)
    at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:1269)
    at com.databricks.spark.xml.util.XmlFile$.withCharset(XmlFile.scala:46)
    at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:71)
    at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:71)
    at com.databricks.spark.xml.XmlRelation$$anonfun$1.apply(XmlRelation.scala:43)
    at com.databricks.spark.xml.XmlRelation$$anonfun$1.apply(XmlRelation.scala:42)
    at scala.Option.getOrElse(Option.scala:121)
    at com.databricks.spark.xml.XmlRelation.<init>(XmlRelation.scala:41)
    at com.databricks.spark.xml.XmlRelation$.apply(XmlRelation.scala:29)
    at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:74)
    at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:52)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:311)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:214)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

1 Answer


I summarize the solution as below.

The package com.databricks:spark-xml appears to use the RDD API to read XML files. When we access Azure Data Lake Storage Gen2 via the RDD API, the configuration set with spark.conf.set(...) is not visible. So we should update the code to set the key on the Hadoop configuration instead: spark._jsc.hadoopConfiguration().set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx=="). For more details, please refer to here.
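A minimal end-to-end sketch of that fix, assuming placeholder values for the storage account, secret scope, container, and file path (and assuming the XML rows are also tagged BookArticle):

# Placeholder account name and key -- replace with your own values.
storage_account = "xxxxx"
account_key = dbutils.secrets.get("myscope", "adls-key")   # hypothetical secret scope/key

# Set the key on the Hadoop configuration so RDD-based readers such as
# spark-xml can authenticate; spark.conf.set alone is not picked up by them.
spark._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.{}.dfs.core.windows.net".format(storage_account),
    account_key)

abfsssourcename = "abfss://mycontainer@{}.dfs.core.windows.net/data/books.xml".format(storage_account)

df = (spark.read.format("xml")
      .option("rootTag", "BookArticle")
      .option("rowTag", "BookArticle")   # rowTag is assumed; adjust to your XML structure
      .load(abfsssourcename))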

In addition, you can also mount Azure Data Lake Storage Gen2 as a file system in Azure Databricks.
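A hedged sketch of the mount approach, assuming a service principal that has been granted access to the storage account; the client id, tenant id, secret scope, container, and mount point below are all placeholders:

# OAuth configuration for ABFS using a service principal (placeholder values).
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("myscope", "sp-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Mount the container once; subsequent reads go through the mount point and
# do not need any per-account key configuration.
dbutils.fs.mount(
    source="abfss://mycontainer@xxxxx.dfs.core.windows.net/",
    mount_point="/mnt/adls",
    extra_configs=configs)

df = spark.read.format("xml").option("rootTag", "BookArticle").load("/mnt/adls/data/books.xml")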

answered 2020-08-16T12:37:45.473