
I want to create a project using Akka and Spark. I have added the Akka dependency along with a few other dependencies. Will any of these dependencies have an impact on using Spark?

I have the following sbt file:

    dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.8.7"
    dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.7"
    dependencyOverrides += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.8.7"

    lazy val commonSettings = Seq(
      organization := "com.bitool.analytics",
      scalaVersion := "2.11.12",
      libraryDependencies ++= Seq(
        "org.scala-lang.modules" %% "scala-async" % "0.9.6",
        "com.softwaremill.macwire" %% "macros" % "2.3.0",
        "com.softwaremill.macwire" %% "macrosakka" % "2.3.0",
        "com.typesafe.akka" %% "akka-http" % "10.0.6",
        "io.swagger" % "swagger-jaxrs" % "1.5.19",
        "com.github.swagger-akka-http" %% "swagger-akka-http" % "0.9.1",
        "io.circe" %% "circe-generic" % "0.8.0",
        "io.circe" %% "circe-literal" % "0.8.0",
        "io.circe" %% "circe-parser" % "0.8.0",
        "io.circe" %% "circe-optics" % "0.8.0",
        "org.scalafx" %% "scalafx" % "8.0.144-R12",
        "org.scalafx" %% "scalafxml-core-sfx8" % "0.4",
        "org.apache.spark" %% "spark-core" % "2.3.0",
        "org.apache.spark" %% "spark-sql" % "2.3.0",
        "org.apache.spark" %% "spark-hive" % "2.3.0",
        "org.scala-lang" % "scala-xml" % "2.11.0-M4",
        "mysql" % "mysql-connector-java" % "6.0.5"
      )
    )

    lazy val root = (project in file(".")).
      settings(commonSettings: _*).
      settings(
        name := "BITOOL-1.0"
      )

    ivyScala := ivyScala.value map {
      _.copy(overrideScalaVersion = true)
    }

    fork in run := true
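Regarding whether these dependencies affect Spark: the part most likely to interact with Spark is the Jackson override, since Spark pulls in its own Jackson artifacts. A quick, hedged way to confirm which Jackson version actually ends up on the runtime classpath (using only the standard Jackson API) is:

    import com.fasterxml.jackson.databind.ObjectMapper

    // Print the Jackson version actually resolved onto the classpath, so the
    // dependencyOverrides above can be compared against Spark's transitive version.
    println(new ObjectMapper().version())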

Below is my Spark code:

    import java.io.File

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    private val warehouseLocation = new File("spark-warehouse").getAbsolutePath

    val conf = new SparkConf()
    conf.setMaster("local[4]")
    conf.setAppName("Bitool")
    conf.set("spark.sql.warehouse.dir", warehouseLocation)

    val SPARK = SparkSession
      .builder().config(conf).enableHiveSupport()
      .getOrCreate()
    val SPARK_CONTEXT = SPARK.sparkContext

When I try to run this, it creates the metastore_db folder but does not create the spark-warehouse folder.


1 Answer


This directory is not created by getOrCreate. You can check this in the Spark source code: getOrCreate delegates its work to SparkSession.getOrCreate, which is just a setter. All of the internal tests, as well as CliSuite, initialize the directory ahead of time with a snippet like:

    val warehousePath = Utils.createTempDir()
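Utils.createTempDir() is Spark-internal, so it is not meant to be called from application code; if you just want the directory to exist before any table is written, a minimal sketch (reusing the warehouseLocation value from the question) is to create it yourself:

    import java.nio.file.{Files, Paths}

    // Pre-create the configured warehouse directory ourselves; at this point Spark
    // only records spark.sql.warehouse.dir and does not create the folder.
    Files.createDirectories(Paths.get(warehouseLocation))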

In actual user code, however, you have to perform at least one data-modifying operation before the warehouse directory materializes. Try running something like the following after your code and then check the warehouse directory on your hard drive again:

    import SPARK.implicits._
    import SPARK.sql

    sql("DROP TABLE IF EXISTS test")
    sql("CREATE TABLE IF NOT EXISTS test (key INT, value STRING) USING hive")
answered 2021-02-13T15:50:09.317