
I created an IBM BigInsights service instance with a 5-node Hadoop cluster (including Apache Spark). I am trying to use SparkR to connect to a Cloudant database, fetch some data, and do some processing.

I started the SparkR shell (terminal) and ran the following code:

sparkR.stop()
# Creating SparkContext and connecting to the Cloudant DB
sc <- sparkR.init(sparkEnv = list("cloudant.host"="<<cloudant-host-name>>", "cloudant.username"="<<cloudant-user-name>>", "cloudant.password"="<<cloudant-password>>", "jsonstore.rdd.schemaSampleSize"="-1"))

# Database to be connected to extract the data
database <- "testdata"
# Creating Spark SQL Context
sqlContext <- sparkRSQL.init(sc)
# Creating DataFrame for the "testdata" Cloudant DB
testDataDF <- read.df(sqlContext, database, header='true', source = "com.cloudant.spark",inferSchema='true')

I get the following error message:

16/08/05 19:00:27 ERROR RBackendHandler: loadDF on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  java.lang.ClassNotFoundException: Failed to find data source: com.cloudant.spark. Please find packages at http://spark-packages.org
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
        at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:160)
        at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBacke

How do I install the spark-cloudant connector on IBM BigInsights and resolve this issue? Any help would be greatly appreciated.


1 Answer


You need to pass the name of the package to sparkR.init:

sc <- sparkR.init(sparkPackages="com.databricks:spark-csv_2.11:1.0.3")

See here:

https://spark.apache.org/docs/1.6.0/sparkr.html#from-data-sources

The spark-cloudant package is here:

https://spark-packages.org/package/cloudant-labs/spark-cloudant

For a 4.2 cluster, I think you need:

cloudant-labs:spark-cloudant:1.6.4-s_2.10
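
Putting the two together, here is a minimal sketch of the question's code with the package added (untested; it assumes a BigInsights 4.2 cluster running Spark 1.6's SparkR API, and the same `<<...>>` placeholders that you must replace with your real Cloudant credentials):

```r
library(SparkR)

sparkR.stop()
# Pass the spark-cloudant package so com.cloudant.spark can be resolved,
# alongside the Cloudant connection settings from the question
sc <- sparkR.init(
  sparkPackages = "cloudant-labs:spark-cloudant:1.6.4-s_2.10",
  sparkEnv = list(
    "cloudant.host"     = "<<cloudant-host-name>>",
    "cloudant.username" = "<<cloudant-user-name>>",
    "cloudant.password" = "<<cloudant-password>>",
    "jsonstore.rdd.schemaSampleSize" = "-1"))

sqlContext <- sparkRSQL.init(sc)
# Load the "testdata" Cloudant database as a DataFrame
testDataDF <- read.df(sqlContext, "testdata", source = "com.cloudant.spark")
```

Spark downloads the package from spark-packages.org at startup, so the driver node needs outbound network access on the first run.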
answered 2016-12-10T19:14:59.480