
I have set up Databricks Connect so that I can develop locally and get the IntelliJ goodies, while harnessing the power of a large Spark cluster on Azure Databricks.

When I want to read from or write to Azure Data Lake, e.g. spark.read.csv("abfss://blah.csv"), I get the following:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: abfss
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2586)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:618)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:467)
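
For reference, here is a minimal sketch of the kind of code I'm running (the storage account, container, and file name below are placeholders, not my real ones):

    import org.apache.spark.sql.SparkSession

    object AbfssReadExample {
      def main(args: Array[String]): Unit = {
        // With Databricks Connect, getOrCreate() attaches to the remote
        // cluster configured via `databricks-connect configure`.
        val spark = SparkSession.builder().getOrCreate()

        // Placeholder account/container/path; this read is where the
        // "No FileSystem for scheme: abfss" IOException is thrown.
        val df = spark.read
          .option("header", "true")
          .csv("abfss://mycontainer@myaccount.dfs.core.windows.net/blah.csv")

        df.show()
      }
    }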

I was under the impression that, since the code is executed remotely, referencing Azure Data Lake from local code would not be a problem. Apparently I was wrong.

Has anyone found a way around this?


1 Answer


The cause of the problem was that I wanted to have the Spark sources available and still be able to execute workloads on Databricks. Unfortunately, the databricks-connect jars do not contain the sources, so I had to import those into the project manually. That is where the friction comes from; as it says in the docs:

... If this is not possible, make sure that the JARs you add are at the front of the classpath. In particular, they must be ahead of any other installed version of Spark (otherwise you will either use one of those other Spark versions and run locally ...

So that's what I did.

[Screenshot: the databricks-connect jars moved to the top of the module's dependency list in IntelliJ]
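
If the project were built with sbt rather than wired up through IntelliJ's module settings, a rough equivalent (my assumption, not what I actually did) would be to add the databricks-connect jar directory, which `databricks-connect get-jar-dir` prints, as unmanaged jars; in sbt's default classpath ordering these come before managed dependencies:

    // build.sbt sketch -- the path is hypothetical; get the real one from
    // `databricks-connect get-jar-dir`. Unmanaged jars precede managed
    // dependencies on sbt's default classpath, mirroring the reorder above.
    Compile / unmanagedJars ++= {
      val jarDir = file("/path/to/databricks-connect/jars")
      (jarDir ** "*.jar").classpath
    }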

Now I can have my cake and eat it too!

The only downside is that whenever I add a new dependency, I have to reorder the classpath again.
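
One extra sanity check after a reorder (my own addition, not part of the original workflow): print which jar the Spark classes are actually loaded from, and make sure it points at the databricks-connect jar directory:

    // Prints the jar that SparkSession was loaded from; if the classpath
    // ordering is right, this should be a jar under the databricks-connect
    // jar directory rather than a locally installed Spark distribution.
    val sparkJarLocation = classOf[org.apache.spark.sql.SparkSession]
      .getProtectionDomain.getCodeSource.getLocation
    println(s"SparkSession loaded from: $sparkJarLocation")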

Answered 2020-03-02T09:44:33.423