Issue Description:

Spark Version: 1.6.2
Execution: Spark-shell (REPL) master = local[2] (tried local[*])

example.json is as below:

{"name":"D2" ,"lovesPandas":"Y"}
{"name":"D3" ,"lovesPandas":"Y"}
{"name":"D4" ,"lovesPandas":"Y"}
{"name":"D5" ,"lovesPandas":"Y"} 

Code executed in spark-shell local mode:

import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._ 
import org.apache.spark.streaming.kafka._
import org.apache.spark.sql._
import org.json4s._
import org.json4s.jackson.JsonMethods._
import _root_.kafka.serializer.StringDecoder
import _root_.kafka.serializer.Decoder
import _root_.kafka.utils.VerifiableProperties
import org.apache.hadoop.hbase._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext

val ssc = new StreamingContext(sc, Seconds(2))
val messages = ssc.textFileStream("C:\\pdtemp\\test\\example.json")

messages.print()

I also tried saveAsTextFiles, but it does not save any files either.

This does not work (no output is shown). I also tried reading a stream from Kafka in spark-shell.

I also tried the following, which does not work either:

messages.foreachRDD(rdd => rdd.foreach(print))

I also tried parsing with a schema and converting to a DataFrame, but nothing seems to have any effect.

Plain (non-streaming) JSON parsing works: I can print regular RDD/DataFrame contents to the console in spark-shell.
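To show what I mean by the parsing itself being fine: the lines in example.json are one JSON object per line, and even a naive stand-alone extraction pulls the fields out correctly. (The `extractField` helper below is just a hypothetical illustration for this post; the real job uses json4s/jackson `parse`.)

```scala
// Illustration only: naive field extraction for the fixed-shape lines
// in example.json. The actual job parses with json4s (JsonMethods.parse).
object JsonLineCheck {
  // Pull the string value of a given key out of a one-line JSON object
  // such as {"name":"D2" ,"lovesPandas":"Y"}
  def extractField(line: String, key: String): Option[String] = {
    val pattern = ("\"" + key + "\"\\s*:\\s*\"([^\"]*)\"").r
    pattern.findFirstMatchIn(line).map(_.group(1))
  }

  def main(args: Array[String]): Unit = {
    val lines = Seq(
      """{"name":"D2" ,"lovesPandas":"Y"}""",
      """{"name":"D3" ,"lovesPandas":"Y"}"""
    )
    // Prints e.g. Some(D2) loves pandas: Some(Y)
    lines.foreach { l =>
      println(s"${extractField(l, "name")} loves pandas: ${extractField(l, "lovesPandas")}")
    }
  }
}
```

So the data format itself is not the problem; only the streaming path produces no output.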

Can anyone help?

