I can't open the GitHub link https://github.com/apache/incubator-spark/tree/master/examples/src/main/scala/org/apache/spark/streaming/examples.
However, you can use the following code, which worked for me.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter._
import org.apache.spark.streaming.flume._
/**
 * A Spark Streaming application that receives tweets matching certain
 * keywords from the Twitter data source and finds the popular hashtags.
 *
 * Arguments: <consumerKey> <consumerSecret> <accessToken> <accessTokenSecret> <keyword_1> ... <keyword_n>
 * <consumerKey>       - Twitter consumer key
 * <consumerSecret>    - Twitter consumer secret
 * <accessToken>       - Twitter access token
 * <accessTokenSecret> - Twitter access token secret
 * <keyword_1>         - The keyword to filter tweets
 * <keyword_n>         - Any number of keywords to filter tweets
 *
 * More discussion at stdatalabs.blogspot.com
 *
 * @author Sachin Thirumala
 */
object SparkPopularHashTags {
  val conf = new SparkConf().setMaster("local[4]").setAppName("Spark Streaming - PopularHashTags")
  val sc = new SparkContext(conf)

  def main(args: Array[String]) {
    sc.setLogLevel("WARN")

    val Array(consumerKey, consumerSecret, accessToken, accessTokenSecret) = args.take(4)
    val filters = args.takeRight(args.length - 4)

    // Set the system properties so that the Twitter4j library used by the Twitter stream
    // can use them to generate OAuth credentials
    System.setProperty("twitter4j.oauth.consumerKey", consumerKey)
    System.setProperty("twitter4j.oauth.consumerSecret", consumerSecret)
    System.setProperty("twitter4j.oauth.accessToken", accessToken)
    System.setProperty("twitter4j.oauth.accessTokenSecret", accessTokenSecret)

    // Create a StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(sc, Seconds(5))

    // Pass the filter keywords as arguments
    // val stream = FlumeUtils.createStream(ssc, args(0), args(1).toInt)
    val stream = TwitterUtils.createStream(ssc, None, filters)

    // Split the tweet text on spaces and extract hashtags
    val hashTags = stream.flatMap(status => status.getText.split(" ").filter(_.startsWith("#")))

    // Get the top hashtags over the previous 60-second window
    val topCounts60 = hashTags.map((_, 1)).reduceByKeyAndWindow(_ + _, Seconds(60))
      .map { case (topic, count) => (count, topic) }
      .transform(_.sortByKey(false))

    // Get the top hashtags over the previous 10-second window
    val topCounts10 = hashTags.map((_, 1)).reduceByKeyAndWindow(_ + _, Seconds(10))
      .map { case (topic, count) => (count, topic) }
      .transform(_.sortByKey(false))

    // Print tweets in the current DStream
    stream.print()

    // Print popular hashtags
    topCounts60.foreachRDD(rdd => {
      val topList = rdd.take(10)
      println("\nPopular topics in last 60 seconds (%s total):".format(rdd.count()))
      topList.foreach { case (count, tag) => println("%s (%s tweets)".format(tag, count)) }
    })
    topCounts10.foreachRDD(rdd => {
      val topList = rdd.take(10)
      println("\nPopular topics in last 10 seconds (%s total):".format(rdd.count()))
      topList.foreach { case (count, tag) => println("%s (%s tweets)".format(tag, count)) }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
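To build this, the spark-streaming-twitter connector has to be on the classpath in addition to spark-core and spark-streaming. Here is a minimal build.sbt sketch; the version numbers are assumptions, so align them with the Spark and Scala versions you actually run:

// build.sbt -- version numbers are assumptions, match them to your Spark installation
name := "spark-popular-hashtags"

scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"              % "1.6.3",
  "org.apache.spark" %% "spark-streaming"         % "1.6.3",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.6.3",
  "org.apache.spark" %% "spark-streaming-flume"   % "1.6.3" // only needed if you uncomment the FlumeUtils line
)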
Explanation:
setMaster("local[4]")
- Make sure the master is set to local mode with at least 2 threads, because one thread is used to receive the incoming stream and the other is needed to process it.
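In other words, with local[1] the receiver would occupy the only core and no batches would ever be processed. A minimal sketch of the smallest master setting that still works (the example above uses local[4] to leave extra cores for processing):

// Hypothetical minimum: one core for the Twitter receiver, one for batch processing.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("Spark Streaming - PopularHashTags")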
We compute the popular hashtags with the following code:
val topCounts60 = hashTags.map((_, 1)).reduceByKeyAndWindow(_ + _, Seconds(60))
  .map { case (topic, count) => (count, topic) }
  .transform(_.sortByKey(false))
The snippet above does a word count of the hashtags seen over the previous 60/10 seconds using reduceByKeyAndWindow and sorts them in descending order.
reduceByKeyAndWindow is used when we have to apply a transformation on data accumulated over the previous stream intervals.
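As a side note, for longer windows Spark Streaming also provides an overload of reduceByKeyAndWindow that takes an inverse reduce function, so each window is updated incrementally (add the batch entering the window, subtract the one leaving it) instead of being recomputed from scratch. A minimal sketch; the checkpoint directory is an assumption, and checkpointing is required for this variant:

// Incremental window counts: requires checkpointing so the windowed state can be recovered.
ssc.checkpoint("/tmp/spark-checkpoint") // directory is an assumption

val topCounts60Incremental = hashTags.map((_, 1))
  .reduceByKeyAndWindow(
    (a: Int, b: Int) => a + b, // add counts from the batch entering the window
    (a: Int, b: Int) => a - b, // subtract counts from the batch leaving the window
    Seconds(60), Seconds(5))
  .map { case (topic, count) => (count, topic) }
  .transform(_.sortByKey(false))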
Execute the code by passing the four Twitter OAuth tokens as arguments:
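For instance (a sketch only; the jar name and path are placeholders for whatever your build produces, and bigdata is just a sample keyword):

spark-submit --class SparkPopularHashTags --master local[4] \
  target/scala-2.10/spark-popular-hashtags_2.10-1.0.jar \
  <consumerKey> <consumerSecret> <accessToken> <accessTokenSecret> bigdata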
You should see the popular hashtags printed every 10/60 seconds.
You can check out similar projects on integrating Spark Streaming and Storm with Flume and Kafka at the links below:
Spark Streaming:
Spark Streaming part 1: Real time twitter sentiment analysis
http://stdatalabs.blogspot.in/2016/09/spark-streaming-part-1-real-time.html
Spark Streaming part 2: Real time twitter sentiment analysis using Flume
http://stdatalabs.blogspot.in/2016/09/spark-streaming-part-2-real-time_10.html
Spark Streaming part 3: Real time twitter sentiment analysis using kafka
http://stdatalabs.blogspot.in/2016/09/spark-streaming-part-3-real-time.html
Data guarantees in Spark Streaming with kafka integration
http://stdatalabs.blogspot.in/2016/10/data-guarantees-in-spark-streaming-with.html
Storm:
Realtime stream processing using Apache Storm - Part 1
http://stdatalabs.blogspot.in/2016/09/realtime-stream-processing-using-apache.html
Realtime stream processing using Apache Storm and Kafka - Part 2
http://stdatalabs.blogspot.in/2016/10/real-time-stream-processing-using.html