
I'm a Shark newbie, though I do have some experience with Spark. Every attempt to retrieve data from Shark hangs.

As a preliminary step, let's make sure Spark itself is working properly:

spark>
val tf = sc.textFile("hdfs://10.213.39.125:8020/hadoop/example/20417.txt")

 val c = tf.count 
..
14/04/10 19:44:34 INFO SparkContext: Job finished: count at <console>:14, took 0.161135127 s
c: Long = 12761

I have double-checked that shark-env.sh points to the correct Spark installation.

Now let's go into Shark and try (a) reading the same file and (b) reading a Shark table.

(a)

shark>
       val tf = sc.textFile("hdfs://10.213.39.125:8020/hadoop/example/20417.txt")                          
tf: org.apache.spark.rdd.RDD[String] = MappedRDD[4] at textFile at <console>:17

scala>  val c2 = tf.count      
(waits for minutes .. finally I hit Ctrl-C)


(b)

shark>
sc.makeRDD("select * from dual")
res1: org.apache.spark.rdd.RDD[Char] = ParallelCollectionRDD[2] at makeRDD at <console>:18

scala> res1.collect                                                                                        

(Once again: waits for minutes .. finally I hit Ctrl-C)

java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:62)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:313)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:725)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:744)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:758)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:772)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:560)
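A note on what's happening in (b): makeRDD just parallelizes a local Scala collection, so passing it the query string produces an RDD over the string's characters (hence the RDD[Char] above); no SQL is ever executed. If I understand the Shark API correctly, the shell's sc is a SharkContext and queries should go through sql2rdd, roughly like this (a sketch, not verified on this cluster):

shark> val rows = sc.sql2rdd("select * from dual")   // should return a TableRDD produced by Hive/Shark
shark> rows.count                                    // runs the query as a Spark job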

More details

Here are the relevant parts of shark-env.sh:

export SPARK_MEM=2g

# (Required) Set the master program's memory
export SHARK_MASTER_MEM=1g

# (Required) Point to your Scala installation.
export SCALA_HOME="/usr/local/scala-2.9.3"

# (Required) Point to the patched Hive binary distribution
export HIVE_HOME="/home/guest/shark-0.8.0-bin-hadoop1/hive-0.9.0-shark-0.8.0-bin"

# For running Shark in distributed mode, set the following:
export HADOOP_HOME="/usr/local/hadoop"
export SPARK_HOME="/home/guest/spark-0.8.0"
export MASTER="spark://swlab-r03-16L:17087"

From the Shark shell, let's make sure we are talking to the same Spark server:

scala> sc.sparkHome
res0: String = /home/guest/spark-0.8.0

scala> sc.isLocal                                                                                          
res1: Boolean = false

scala> sc.master
res2: String = spark://swlab-r03-16L:17087

1 Answer


It looks like a Hive metastore configuration problem. The metastore parameters live under Shark-hive-/conf/hive-site.xml.
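For reference, a minimal sketch of what the metastore section of hive-site.xml typically looks like; the host, database name, and credentials below are placeholders, and this assumes a MySQL-backed metastore rather than the default local Derby one:

<!-- hypothetical metastore settings; adjust host/db/user/password for your site -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive_password</value>
</property>

If Shark and the standalone Hive client read different hive-site.xml files, Shark may be pointed at an empty or unreachable metastore even though Hive itself works.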

answered 2014-05-16T03:13:38.257