
I am trying to build the spark-cassandra connector, following this link:

http://www.planetcassandra.org/blog/kindling-an-introduction-to-spark-with-cassandra/

The link further instructs you to download the connector from git and build it with sbt. However, when I try to run the command ./sbt/sbt assembly, it fails with the following errors:

    Launching sbt from sbt/sbt-launch-0.13.8.jar
    [info] Loading project definition from /home/naresh/Desktop/spark-cassandra-connector/project
    Using releases: https://oss.sonatype.org/service/local/staging/deploy/maven2 for releases
    Using snapshots: https://oss.sonatype.org/content/repositories/snapshots for snapshots

      Scala: 2.10.5 [To build against Scala 2.11 use '-Dscala-2.11=true']
      Scala Binary: 2.10
      Java: target=1.7 user=1.7.0_79

    [info] Set current project to root (in build file:/home/naresh/Desktop/spark-cassandra-connector/)
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [info] Compiling 140 Scala sources and 1 Java source to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/classes...
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:48: not found: value processTableIdentifier
    [error]     val id = processTableIdentifier(tableIdentifier).reverse.lift
    [error]              ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:134: value toSeq is not a member of org.apache.spark.sql.catalyst.TableIdentifier
    [error]     cachedDataSourceTables.refresh(tableIdent.toSeq)
    [error]                                               ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraSQLContext.scala:94: not found: value BroadcastNestedLoopJoin
    [error]       BroadcastNestedLoopJoin
    [error]       ^
    [error] three errors found
    [info] Compiling 11 Scala sources to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/target/scala-2.10/classes...
    [warn] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/src/main/scala/com/datastax/spark/connector/embedded/SparkTemplate.scala:69: value actorSystem in class SparkEnv is deprecated: Actor system is no longer supported as of 1.4.0
    [warn]   def actorSystem: ActorSystem = SparkEnv.get.actorSystem
    [warn]                                               ^
    [warn] one warning found
    [error] (spark-cassandra-connector/compile:compileIncremental) Compilation failed
    [error] Total time: 27 s, completed 4 Nov, 2015 12:34:33 PM

1 Answer


This worked for me: run mvn -DskipTests clean package

  • You can find the Spark build command in the README.md file in your Spark directory.
  • Before running that command, you need to configure Maven to use more memory than usual by setting MAVEN_OPTS: export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m" (both steps are shown together in the sketch below).
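For reference, a minimal sketch of those two steps run back to back, assuming you execute them from the root of your checkout and that the memory values quoted above are enough for your machine:

    # Give Maven extra heap, PermGen and code-cache space, as recommended above
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

    # Build the project, skipping the test phase
    mvn -DskipTests clean package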
Answered 2015-11-04T13:54:22.047