I cannot process a graph with 230M edges. I cloned apache.spark, built it, and tried it on a cluster.
I use a Spark standalone cluster:
-5 machines (each has 12 cores/32GB RAM)
-'spark.executor.memory' == 25g
-'spark.driver.memory' == 3g
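For reference, one common way to apply such settings (an assumption here; they can also be passed per job on the command line) is conf/spark-defaults.conf on the cluster:

# conf/spark-defaults.conf -- default settings for submitted jobs
spark.executor.memory   25g
spark.driver.memory     3g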
The graph has 231359027 edges and its file is 4,524,716,369 bytes. The graph is represented in text format:
sourceVertexId destinationVertexId
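For example, the first lines of such a file might look like this (hypothetical vertex ids; GraphLoader skips lines starting with #):

# one edge per line: source destination
1 2
1 3
2 3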
My code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Graph, GraphLoader, PartitionStrategy}
import org.apache.spark.storage.StorageLevel

object Canonical {
  def main(args: Array[String]): Unit = {
    val numberOfArguments = 3
    require(args.length == numberOfArguments,
      s"""Wrong argument number. Should be $numberOfArguments.
         |Usage: <path_to_graph> <partitioner_name> <minEdgePartitions>""".stripMargin)
    val nameOfGraph = args(0).substring(args(0).lastIndexOf("/") + 1)
    val partitionerName = args(1)
    val minEdgePartitions = args(2).toInt
    val sc = new SparkContext(new SparkConf()
      .setSparkHome(System.getenv("SPARK_HOME"))
      .setAppName(s" partitioning | $nameOfGraph | $partitionerName | $minEdgePartitions parts ")
      .setJars(SparkContext.jarOfClass(this.getClass).toList))
    // Load the edge list, allowing edges and vertices to spill to disk.
    var graph: Graph[Int, Int] = GraphLoader.edgeListFile(sc, args(0),
      canonicalOrientation = false,
      edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
      vertexStorageLevel = StorageLevel.MEMORY_AND_DISK,
      minEdgePartitions = minEdgePartitions)
    // Repartition the edges with the chosen strategy.
    graph = graph.partitionBy(PartitionStrategy.fromString(partitionerName))
    // Count by collecting everything to the driver (this is what fails, see below).
    println(graph.edges.collect.length)
    println(graph.vertices.collect.length)
  }
}
After I run it, I get many java.lang.OutOfMemoryError: Java heap space
errors, and of course I get no result. Is the problem in my code, or in the cluster configuration? It works fine for relatively small graphs, but for this graph it has never worked. (And I don't think 230M edges is too much data.)
Thanks for any advice!
SOLVED
I had not allocated enough memory for the driver program. I changed the cluster configuration to:
-4 workers (each has 12 cores/32GB RAM)
-1 master with driver program (12 cores/32GB RAM)
-'spark.executor.memory' == 25g
-'spark.driver.memory' == 25g
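The same values can also be passed per job with spark-submit (a sketch; the master URL, class, jar, and program arguments below are placeholders):

spark-submit --master spark://master:7077 \
  --executor-memory 25g --driver-memory 25g \
  --class Canonical graph-job.jar /path/to/graph RandomVertexCut 144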
Also, collecting all the vertices and edges just to count them is not a good idea: collect() materializes the entire RDD in the driver heap. It is much easier to do graph.vertices.count
and graph.edges.count
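To see why collect() fails here, a rough estimate (assuming ~40 bytes per Edge[Int] object on the JVM: two Long vertex ids, one Int attribute, plus object header and padding): 231,359,027 × 40 B ≈ 9 GB, all of which collect() tries to materialize in the driver heap. count() instead runs on the executors and returns only a Long. A minimal sketch of the corrected counting code (same loading call as above):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.storage.StorageLevel

object CountGraph {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("count graph"))
    val graph = GraphLoader.edgeListFile(sc, args(0),
      edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
      vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)
    // count() is computed on the executors; only a single Long travels
    // back to the driver, so driver memory is no longer a bottleneck.
    println(s"edges: ${graph.edges.count}")
    println(s"vertices: ${graph.vertices.count}")
    sc.stop()
  }
}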