
Suppose I have the following graph:

scala> v.show()
+---+---------------+
| id|downstreamEdges|
+---+---------------+
|CCC|           null|
|BBB|           null|
|QQQ|           null|
|DDD|           null|
|FFF|           null|
|EEE|           null|
|AAA|           null|
|GGG|           null|
+---+---------------+


scala> e.show()
+---+---+---+
| iD|src|dst|
+---+---+---+
|  1|CCC|AAA| 
|  2|CCC|BBB| 
...
+---+---+---+

I want to run an aggregation that collects all of the messages sent from the destination vertices to the source vertices (not just the sum, first, last, etc.). So the command I'd like to run is something like:

g.aggregateMessages.sendToSrc(AM.edge("id")).agg(all(AM.msg).as("downstreamEdges")).show()

except that the function all doesn't exist (as far as I know). The output would be something like:

+---+---------------+
| id|downstreamEdges|
+---+---------------+
|CCC|         [1, 2]|
... 
+---+---------------+

I can use the above with first or last in place of the (non-existent) all, but they will only give me

+---+---------------+
| id|downstreamEdges|
+---+---------------+
|CCC|              1|
... 
+---+---------------+

或者

+---+---------------+
| id|downstreamEdges|
+---+---------------+
|CCC|              2|
... 
+---+---------------+

respectively. How can I keep all of the entries? (There could be many, not just 1 and 2, but 1, 2, 23, 45, and so on.) Thanks.


2 Answers


I solved a similar problem by using the aggregate function collect_set():

 agg = gx.aggregateMessages(
            f.collect_set(AM.msg).alias("aggMess"),
            sendToSrc=AM.edge("id"),
            sendToDst=None)

An alternative that keeps duplicates would be collect_list().
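For readers without a Spark session handy, the difference between the two collectors can be illustrated with a plain-Python sketch of the grouping semantics (the edge data below is hypothetical, mirroring the e DataFrame in the question; no GraphFrames API is involved):

```python
from collections import defaultdict

# Hypothetical edges, shaped like the e DataFrame in the question,
# with one duplicate message to show the collect_set/collect_list difference.
edges = [
    {"iD": 1, "src": "CCC", "dst": "AAA"},
    {"iD": 2, "src": "CCC", "dst": "BBB"},
    {"iD": 2, "src": "CCC", "dst": "BBB"},  # duplicate message
    {"iD": 3, "src": "BBB", "dst": "AAA"},
]

def aggregate_to_src(edges, keep_duplicates=False):
    """Group each edge's id by its src vertex, like sendToSrc=AM.edge("id")
    followed by collect_list (keep_duplicates=True) or collect_set."""
    buckets = defaultdict(list)
    for e in edges:
        buckets[e["src"]].append(e["iD"])
    if keep_duplicates:
        return dict(buckets)  # collect_list-like: duplicates preserved
    # collect_set-like: duplicates dropped (sorted here for a stable result)
    return {k: sorted(set(v)) for k, v in buckets.items()}

print(aggregate_to_src(edges))                        # set semantics
print(aggregate_to_src(edges, keep_duplicates=True))  # list semantics
```

Note that collect_set, like the sketch's set branch, gives no ordering guarantee; the sort here is only for readability.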

Answered 2019-11-25T16:56:50.870

I adapted this answer to come up with the following:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._
import org.graphframes.lib.AggregateMessages

class KeepAllString extends UserDefinedAggregateFunction {
  private val AM = AggregateMessages

  override def inputSchema: org.apache.spark.sql.types.StructType =
    StructType(StructField("value", StringType) :: Nil)

  // These are the internal fields you keep for computing your aggregate.
  override def bufferSchema: StructType = StructType(
    StructField("ids", ArrayType(StringType, containsNull = true), nullable = true) :: Nil
  )

  // This is the output type of your aggregation function.
  override def dataType: DataType = ArrayType(StringType, containsNull = true)

  override def deterministic: Boolean = true

  // This is the initial value for your buffer schema.
  override def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = Seq[String]()


  // This is how to update your buffer schema given an input.
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    buffer(0) = buffer.getAs[Seq[String]](0) ++ Seq(input.getAs[String](0))

  // This is how to merge two objects with the bufferSchema type.
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getAs[Seq[String]](0) ++ buffer2.getAs[Seq[String]](0)

  // This is where you output the final value, given the final value of your bufferSchema.
  override def evaluate(buffer: Row): Any = buffer.getAs[Seq[String]](0)
}

Then my all method above is just: val all = new KeepAllString()

But how can I make this generic, so that for BigDecimal, Timestamp, etc. I could do something like:

val allTimestamp = new KeepAll[Timestamp]()

?
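One possible direction (a sketch only, not a verified implementation): since UserDefinedAggregateFunction works with untyped Rows, the element type only appears in the schemas, so the class can take a Spark DataType as a constructor argument instead of a JVM type parameter. The class name KeepAll and the usage line below are hypothetical.

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Sketch: the element type is passed in as a Spark DataType, so one class
// covers StringType, TimestampType, DecimalType, etc.
class KeepAll(elementType: DataType) extends UserDefinedAggregateFunction {
  override def inputSchema: StructType =
    StructType(StructField("value", elementType) :: Nil)

  override def bufferSchema: StructType = StructType(
    StructField("values", ArrayType(elementType, containsNull = true), nullable = true) :: Nil
  )

  override def dataType: DataType = ArrayType(elementType, containsNull = true)

  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = Seq[Any]()

  // Values are handled untyped (Any), so no JVM type parameter is needed.
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    buffer(0) = buffer.getAs[Seq[Any]](0) ++ Seq(input.get(0))

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1(0) = buffer1.getAs[Seq[Any]](0) ++ buffer2.getAs[Seq[Any]](0)

  override def evaluate(buffer: Row): Any = buffer.getAs[Seq[Any]](0)
}

// Hypothetical usage:
val allTimestamp = new KeepAll(TimestampType)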

Answered 2018-04-07T02:23:30.597