So I'm pretty new to functional programming, Spark, and Scala, so forgive me if this is obvious... but basically I have a list of files on HDFS that match certain criteria, i.e. something like this:
val fileList = List(
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000140_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=03/000258_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=05/000270_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000297_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=30/000300_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000362_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=29/000365_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000397_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=15/000436_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=16/000447_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000529_0",
"hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=17/000585_0" )
I now need to build an RDD out of this list to work with... my idea was to use a recursive union... basically a function something like:
def dostuff(line: String): org.apache.spark.rdd.RDD[String] = {
  // read one HDFS path into an RDD of its lines
  sc.textFile(line)
}
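As I understand it, `++` on an RDD is just an alias for `union`, so two loaded files can be glued together lazily. A minimal sketch of that idea, where `pathA` and `pathB` are placeholder path strings and `sc` is the SparkContext from the shell:

  // union of two text files; nothing is actually read until an action runs
  val pair: org.apache.spark.rdd.RDD[String] =
    sc.textFile(pathA) ++ sc.textFile(pathB)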
and then just apply it via map:

val RDD_list = fileList.map(l => dostuff(l))
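That leaves me with a List of RDDs that still has to be collapsed into a single RDD. A sketch of the two ways I can see to do that, both using standard SparkContext/RDD calls (`allLines` is just a name I made up):

  import org.apache.spark.rdd.RDD

  // fold the per-file RDDs together with union (the "recursive union" idea);
  // assumes RDD_list is non-empty, since reduce throws on an empty list
  val allLines: RDD[String] = RDD_list.reduce(_ ++ _)

  // equivalent, but builds a single UnionRDD instead of a long chain of unions
  val allLines2: RDD[String] = sc.union(RDD_list)

Alternatively, since sc.textFile accepts a comma-separated list of paths, the intermediate RDDs could be skipped entirely:

  val allLines3: RDD[String] = sc.textFile(fileList.mkString(","))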