I have an RDD[(Long, String)] of S3 paths (bucket + key) together with their sizes. I want to partition it so that each partition gets paths whose sizes sum to roughly the same total. That way, when I read the contents of those paths, every partition has about the same amount of data to process. I wrote this custom partitioner for that purpose:

import org.apache.spark.Partitioner
import scala.collection.mutable.PriorityQueue

class S3Partitioner(partitions: Int, val totalSize: Long) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")
  require(totalSize >= 0, s"Total size ($totalSize) cannot be negative.")

  // One (partition index, remaining capacity) entry per partition,
  // each starting with an equal share of the total size.
  val pq = PriorityQueue[(Int, Long)]()
  (0 until partitions).foreach { partition =>
    pq.enqueue((partition, totalSize / partitions))
  }

  // Take the partition at the head of the queue, charge the key (an object
  // size) against its remaining capacity, and put it back.
  def getPartition(key: Any): Int = key match {
    case k: Long =>
      val (partition, capacityLeft) = pq.dequeue
      pq.enqueue((partition, capacityLeft - k))
      partition
    case _ => 0
  }

  def numPartitions: Int = partitions

  override def equals(other: Any): Boolean = other match {
    case p: S3Partitioner =>
      p.totalSize == totalSize && p.numPartitions == numPartitions
    case _ => false
  }

  override def hashCode: Int = {
    (972 * numPartitions.hashCode) ^ (792 * totalSize.hashCode)
  }
}

The partitioner should perform best when it is given an RDD whose keys (the sizes) are sorted in descending order (a sketch of that pre-sort follows the usage snippet below). When I tried to use it, code that previously worked started failing with this error:

Cause: java.io.NotSerializableException: scala.collection.mutable.PriorityQueue$ResizableArrayAccess

This is how I use it:

val pathsWithSize: RDD[(Long, String)] = ...
val totalSize = pathsWithSize.map(_._1).reduce(_ + _)

// Wrapping the pair RDD in PairRDDFunctions exposes partitionBy; each
// partition then reads the contents of its assigned S3 paths.
new PairRDDFunctions(pathsWithSize)
  .partitionBy(new S3Partitioner(5 * sc.defaultParallelism, totalSize))
  .mapPartitions { iter =>
    iter.map { case (_, path) => readS3(path) }
  }

I'm not sure how to fix this. Any help would be appreciated.
