
I am building user recommendations based on collaborative filtering. For this I use org.apache.spark.mllib.recommendation._ . The job works and returns results on a small dataset, but when I run it on roughly 7 GB of input it fails with a Spark managed memory leak.

ALS model configuration (a training sketch follows the list):

  • Rank: 50
  • Iterations: 15
  • Lambda: 0.25
  • Alpha: 300
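For reference, here is a minimal sketch of how a model with these parameters might be trained. Since alpha is set, I assume the implicit-feedback variant; the trainModel name and the ratings RDD are placeholders, not the exact code of my job:

import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
import org.apache.spark.rdd.RDD

// ratings is assumed to be an RDD[Rating] built elsewhere from the ~7 GB input.
def trainModel(ratings: RDD[Rating]): MatrixFactorizationModel = {
  // alpha only applies to implicit feedback, hence trainImplicit:
  // rank = 50, iterations = 15, lambda = 0.25, alpha = 300
  ALS.trainImplicit(ratings, 50, 15, 0.25, 300.0)
}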

Spark cluster details:

  • Nodes: 9
  • Memory: 64 GB per node
  • Cores: 4 per node

Run configuration:

--num-executors 16 --executor-cores 2 --executor-memory 32G
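For context, the full submit command looks roughly like the following; the driver class and jar names below are placeholders, only the resource flags are the real values:

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.RecommendJob \
  --num-executors 16 \
  --executor-cores 2 \
  --executor-memory 32G \
  recommend-job.jar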

Code:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// One Row per user: (userId, top-K recommended product ids)
def generateTopKProductRecommendations(topK: Int = 20): RDD[Row] = {
  model.recommendProductsForUsers(topK)
    .map { r => Row(r._1, r._2.map(x => x.product).toSeq) }
}
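Downstream, the returned RDD[Row] could be consumed roughly like this; the schema, the sqlContext, and the output path are placeholders for illustration, not part of the failing job:

import org.apache.spark.sql.types.{ArrayType, IntegerType, StructField, StructType}

// sqlContext is assumed to be an existing SQLContext.
val schema = StructType(Seq(
  StructField("user", IntegerType, nullable = false),
  StructField("products", ArrayType(IntegerType), nullable = false)))
val recommendations = sqlContext.createDataFrame(generateTopKProductRecommendations(20), schema)
recommendations.write.parquet("/tmp/top_k_recommendations")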

It fails even when I simplify the mapping for debugging:

model.recommendProductsForUsers(topK)
      .map{ r => Row(r._1)}

YARN logs:

16/11/08 06:33:48 ERROR executor.Executor: Managed memory leak detected; size = 67108864 bytes, TID = 8796
16/11/08 06:33:48 ERROR executor.Executor: Managed memory leak detected; size = 67108864 bytes, TID = 8791
16/11/08 06:33:48 ERROR storage.ShuffleBlockFetcherIterator: Failed to get block(s) from xxx.xxx.xxx.xxx:42340
java.io.IOException: Failed to connect to xxx.xxx.xxx.xxx:42340
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:193)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)......

16/11/08 06:33:48 ERROR storage.ShuffleBlockFetcherIterator: Failed to get block(s) from xxx.xxx.xxx.xxx:42340
java.io.IOException: Failed to connect to xxx.xxx.xxx.xxx:42340
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:193)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:88)
