I am trying to create a dataframe from the JSON in a DStream, but the code below does not seem to get the dataframe right -
import sys
import json

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.sql import SQLContext


def getSqlContextInstance(sparkContext):
    if ('sqlContextSingletonInstance' not in globals()):
        globals()['sqlContextSingletonInstance'] = SQLContext(sparkContext)
    return globals()['sqlContextSingletonInstance']


if __name__ == "__main__":
    if len(sys.argv) != 3:
        raise IOError("Invalid usage; the correct format is:\nquadrant_count.py <hostname> <port>")

    # Initialize a SparkContext with a name
    spc = SparkContext(appName="jsonread")
    sqlContext = SQLContext(spc)

    # Create a StreamingContext with a batch interval of 2 seconds
    stc = StreamingContext(spc, 2)
    # Checkpointing feature
    stc.checkpoint("checkpoint")

    # Creating a DStream to connect to hostname:port (like localhost:9999)
    lines = stc.socketTextStream(sys.argv[1], int(sys.argv[2]))
    lines.pprint()
    parsed = lines.map(lambda x: json.loads(x))

    def process(time, rdd):
        print("========= %s =========" % str(time))
        try:
            # Get the singleton instance of SQLContext
            sqlContext = getSqlContextInstance(rdd.context)
            # Convert RDD[String] to RDD[Row] to DataFrame
            rowRdd = rdd.map(lambda w: Row(word=w))
            wordsDataFrame = sqlContext.createDataFrame(rowRdd)
            # Register as table
            wordsDataFrame.registerTempTable("mytable")
            testDataFrame = sqlContext.sql("select summary from mytable")
            print(testDataFrame.show())
            print(testDataFrame.printSchema())
        except:
            pass

    parsed.foreachRDD(process)

    stc.start()
    # Wait for the computation to terminate
    stc.awaitTermination()
There is no error, and while the script runs it does successfully read the JSON from the streaming context, but it does not print the values in summary or the dataframe schema.
Sample JSON I am trying to read -
{"reviewerID": "A2IBPI20UZIR0U", "asin": "1384719342", "reviewerName": "cassandra tu \"yeah, well, like, you...", "helpful": [0, 0], "reviewText": "Not much to write about here, but it does exactly what it's supposed to. Filters out the pop sounds. Now my recordings are much more crisp. It is one of the lowest-priced pop filters on Amazon, so might as well buy it; they honestly work the same despite the pricing,", "overall": 5.0, "summary": "good", "unixReviewTime": 1393545600, "reviewTime": "02 28, 2014"}
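As a sanity check outside of Spark entirely, the sample line parses fine with plain `json.loads`, and the `summary` field is reachable on the resulting dict - so the `parsed = lines.map(lambda x: json.loads(x))` step itself should be producing dicts with that key (the sample below is trimmed to a few of the fields shown above):

```python
import json

# A trimmed version of the sample record above, as one line of text
# (the same shape each line arriving on the socket would have)
sample = ('{"reviewerID": "A2IBPI20UZIR0U", "asin": "1384719342", '
          '"overall": 5.0, "summary": "good", "unixReviewTime": 1393545600}')

# Same call the DStream map applies to every line
record = json.loads(sample)

print(record["summary"])        # the field the SQL query selects
print(sorted(record.keys()))    # the column names a DataFrame built from this dict would see
```

This prints `good` and the five key names, which suggests the problem is downstream of parsing - in how the parsed dict is turned into a `Row` / DataFrame - rather than in reading the JSON.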
I am absolutely new to Spark Streaming and started working on a pet project by reading the documentation. Any help and guidance is greatly appreciated.