I have an RDD like this: byUserHour: org.apache.spark.rdd.RDD[(String, String, Int)]
I would like to build a sparse matrix of the data so I can compute the median, mean, etc. The RDD contains row_id, column_id and value. I have two arrays of the row_id and column_id strings for lookup.
Here is my attempt:
import breeze.linalg._

// BCnUsers and broadcastTimes are broadcast variables; userids is a local
// array of row_id strings. The builder is created on the driver but then
// mutated inside foreach, which runs on the executors.
val builder = new CSCMatrix.Builder[Int](rows = BCnUsers.value.toInt, cols = broadcastTimes.value.size)
byUserHour.foreach { x =>
  val row = userids.indexOf(x._1)
  val col = broadcastTimes.value.indexOf(x._2)
  builder.add(row, col, x._3)
}
builder.result()
Here is my error:
14/06/10 16:39:34 INFO DAGScheduler: Failed to run foreach at <console>:38
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: breeze.linalg.CSCMatrix$Builder$mcI$sp
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:770)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:713)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1176)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
My dataset is very large, so I would like to do this in a distributed fashion if possible. Any help would be appreciated.
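For the mean at least, a minimal closure-safe sketch (perColumn and columnMeans are names invented here; it assumes the Int values fit in a Long sum without overflow): instead of mutating a driver-side builder inside foreach, aggregate a (count, sum) pair per column with reduceByKey, which stays fully distributed:

// (column_id, (count, sum)) per cell, then combine the pairs per column.
val perColumn = byUserHour
  .map { case (_, col, value) => (col, (1L, value.toLong)) }
  .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))

// Per-column mean. The median would need the full per-column distribution
// (e.g. groupByKey plus a sort), which is a much heavier shuffle.
val columnMeans = perColumn.mapValues { case (count, sum) => sum.toDouble / count }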
Progress update:
CSCMatrix is not meant for Spark. However, there is RowMatrix, which extends DistributedMatrix. RowMatrix does have a method, computeColumnSummaryStatistics(), that should compute some of the statistics I am looking for. I know MLlib grows every day, so I will watch for updates, but in the meantime I will try to make an RDD[Vector] to feed a RowMatrix (a sketch of that conversion is below). Note that RowMatrix is experimental and represents a row-oriented distributed matrix with no meaningful row indices.
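A minimal sketch of that conversion, assuming broadcastTimes.value is the ordered, distinct collection of column_id strings and that each (row_id, column_id) pair occurs at most once; colIndex, nCols, vectors, mat and stats are names invented here:

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// column_id -> column index (assumption: ordered and distinct).
val colIndex = broadcastTimes.value.zipWithIndex.toMap
val nCols = colIndex.size

// One sparse vector per user. The user key is dropped after grouping,
// which is fine because RowMatrix has no meaningful row indices anyway.
val vectors: org.apache.spark.rdd.RDD[Vector] = byUserHour
  .map { case (user, hour, value) => (user, (colIndex(hour), value.toDouble)) }
  .groupByKey()  // assumes each (user, hour) appears once
  .map { case (_, cells) =>
    val (indices, values) = cells.toSeq.sortBy(_._1).unzip
    Vectors.sparse(nCols, indices.toArray, values.toArray)
  }

val mat = new RowMatrix(vectors)
val stats = mat.computeColumnSummaryStatistics()
println(stats.mean)  // per-column means; variance, min, max etc. are also available

Note that the column summary covers the mean and variance but not the median, so the median would still need a separate pass.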