I'm new to Spark and I'm building a small sample application, a Spark file-streaming app. What I want is to read an entire file at once, rather than line by line (which I believe is what textFileStream does).
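To illustrate, here is a minimal sketch of the line-oriented behavior I am trying to get away from (the master URL and directory path are just placeholders):

import org.apache.spark.streaming.{Seconds, StreamingContext}

object LineStreamSketch {
  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "Line Streaming Sketch", Seconds(5))
    // textFileStream yields a DStream[String] in which each element is ONE LINE
    // of a newly arrived file, not the file as a whole.
    val lines = ssc.textFileStream("/tmp/streaming-input")
    lines.print()
    ssc.start()
  }
}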
Here is the code:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import scalax.io._

object SampleXML {
  def main(args: Array[String]) {
    val logFile = "/home/akhld/mobi/spark-streaming/logs/sample.xml"
    val ssc = new StreamingContext(
      "spark://localhost:7077",
      "XML Streaming Job",
      Seconds(5),
      "/home/akhld/mobi/spark-streaming/spark-0.8.0-incubating",
      List("target/scala-2.9.3/simple-project_2.9.3-1.0.jar"))

    val lines = ssc.fileStream[LongWritable, Text, TextInputFormat]("/home/akhld/mobi/spark-streaming/logs/")
    lines.print()
    lines.foreachRDD(rdd => {
      rdd.count() // returns the per-batch record count (count() itself prints nothing)
    })
    ssc.start()
  }
}
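For completeness, my understanding is that fileStream with TextInputFormat produces (key, value) pairs, where the key is the byte offset of the line and the value is the line text, so to get plain strings I would map over the values, along these lines (sketch only, same placeholder path as above):

// Each record is (byte offset within the file, one line of text); keep the text only.
val pairs = ssc.fileStream[LongWritable, Text, TextInputFormat]("/home/akhld/mobi/spark-streaming/logs/")
val lineStrings = pairs.map { case (_, text) => text.toString }
lineStrings.print()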
This code fails with an exception saying:
[error] /home/akhld/mobi/spark-streaming/samples/samplexml/src/main/scala/SampleXML.scala:31: value foreachRDD is not a member of org.apache.spark.streaming.DStream[(org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text)]
[error] ssc.fileStream[LongWritable, Text, TextInputFormat]("/home/akhld/mobi/spark-streaming/logs/").foreachRDD(rdd => {
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 3 s, completed Feb 3, 2014 7:32:57 PM
If this is not the right way to display a file's contents in a stream, please help me with an example. I have searched a lot but could not find a proper way to stream files.
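For reference, this is roughly the whole-file behavior I am after, expressed in plain batch terms (sketch only; as far as I can tell, sc.wholeTextFiles only exists in Spark releases newer than the 0.8.0-incubating build I am running, and the path is a placeholder):

// Read each file in the directory as a single (path, fullContents) record
// instead of one record per line.
val files = sc.wholeTextFiles("/home/akhld/mobi/spark-streaming/logs/")
files.foreach { case (path, contents) =>
  println("--- " + path + " ---")
  println(contents)
}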