I want to process a big text file, "mydata.txt", with Spark (the real file is about 30GB). Its record delimiter is "\|" followed by "\n". Since the default record delimiter when loading a file (via sc.textFile) is "\n", I set the "textinputformat.record.delimiter" property of org.apache.hadoop.conf.Configuration to "\|\n" to specify the record delimiter. The data looks like this:
AAAAA_|BBBBB_|CCCCC\|
DDDDD_|EEEEE_|FFFFFFFFFFFF\|
GGGGG_|HHHHH_|IIIII\|
JJJJJ_|KKKKK_|LLLLLLLLLLL\|
MMMM_|NNNNN_|OOOOO\|
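Each record is supposed to split into three fields on the "_|" separator. As a quick standalone check of the splitting regex (my own sketch, not part of the Spark session):

// standalone check of the field-separator regex (illustration only)
val FIELD_SEP = "_\\|"                  // regex matching the literal "_|"
val sample = "AAAAA_|BBBBB_|CCCCC"
println(sample.split(FIELD_SEP).toList) // List(AAAAA, BBBBB, CCCCC)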
Then I executed the following code in spark-shell:
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.io.Text
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
val LINE_DELIMITER = "\\|\n"   // record delimiter: the literal characters "\|" followed by a newline
val FIELD_SEP = "_\\|"         // field separator, used as a regex matching the literal "_|"
val conf = new Configuration
conf.set("textinputformat.record.delimiter", LINE_DELIMITER)
// load the file with the custom record delimiter, keeping only the record text
val raw_data = sc.newAPIHadoopFile("mydata.txt", classOf[TextInputFormat], classOf[LongWritable], classOf[Text], conf).map(_._2.toString)
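If the delimiter is honored, the reader should strip "\|\n" from each value, so I would expect the first records to come back like this (my expectation based on the sample above, not verified output):

raw_data.take(2).foreach(println)
// expected:
// AAAAA_|BBBBB_|CCCCC
// DDDDD_|EEEEE_|FFFFFFFFFFFF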
So far, so good. However:
scala> val data = raw_data.filter(x => x.split(FIELD_SEP).size >= 3)
data: org.apache.spark.rdd.RDD[String] = FilteredRDD[4] at filter at <console>:22
scala> data.collect
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.hadoop.conf.Configuration
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1049)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:772)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:715)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:699)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1203)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
scala> data.foreach(println)
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.hadoop.conf.Configuration
...
Why can't I operate on the RDD "data", while everything works fine with sc.textFile("mydata.txt")? And how can I fix it?
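For context, my current guess is that the spark-shell REPL wraps conf and FIELD_SEP in the same generated object, so the filter closure drags the non-serializable Configuration along with it. A sketch of the workaround I plan to try, assuming that guess is right (marking the configuration @transient so it is not serialized with the task):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// @transient should keep the REPL wrapper from serializing conf with the closure
@transient val conf = new Configuration
conf.set("textinputformat.record.delimiter", "\\|\n")

val raw_data = sc.newAPIHadoopFile("mydata.txt", classOf[TextInputFormat],
  classOf[LongWritable], classOf[Text], conf).map(_._2.toString)

If the problem really is closure capture, pulling FIELD_SEP into a local val inside the filter function would be another thing to try.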