I want to construct a distance matrix from the values in a PySpark DataFrame. What I have now is

+----+-------------+
| id | list        |
+----+-------------+
| 1  | [a, b, ...] |
+----+-------------+
| 2  | [c, d, ...] |
+----+-------------+
| 3  | [e, f, ...] |
+----+-------------+

I want to use my own distance function and do something like

for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        dist = calculate_distance(features[i], features[j])
        add_row_to_distance_df([ids[i], ids[j], dist])

Edit: the expected output is

+-----+-----+-----------------------------+
| id1 | id2 | dist                        |
+-----+-----+-----------------------------+
| 1   | 2   | d([a, b, ...], [c, d, ...]) |
+-----+-----+-----------------------------+
| 1   | 3   | d([a, b, ...], [e, f, ...]) |
+-----+-----+-----------------------------+
| 2   | 3   | d([c, d, ...], [e, f, ...]) |
+-----+-----+-----------------------------+

How can I do this?

1 Answer

You can use cartesian() and then filter() down to just the upper triangle of pairs, e.g.:

In []:
def calculate_distance(a, b):
    return f'd({a}, {b})'  # placeholder distance; f-strings need Python 3.6+

rdd = sc.parallelize([(1, ['a', 'b', 'c']), (2, ['c', 'd', 'e']), (3, ['e', 'f', 'g'])])

(rdd.cartesian(rdd)
 .filter(lambda x: x[0][0] < x[1][0])
 .map(lambda x: (x[0][0], x[1][0], calculate_distance(x[0][1], x[1][1])))
 .collect())

Out[]:
[(1, 2, "d(['a', 'b', 'c'], ['c', 'd', 'e'])"),
 (1, 3, "d(['a', 'b', 'c'], ['e', 'f', 'g'])"),
 (2, 3, "d(['c', 'd', 'e'], ['e', 'f', 'g'])")]
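For reference, the same "each unordered pair exactly once" logic (the `x[0][0] < x[1][0]` filter above) can be expressed in plain Python with `itertools.combinations` — a minimal sketch with no Spark required, assuming the rows fit in local memory:

```python
from itertools import combinations

def calculate_distance(a, b):
    # Placeholder distance; substitute your own metric.
    return f'd({a}, {b})'

rows = [(1, ['a', 'b', 'c']), (2, ['c', 'd', 'e']), (3, ['e', 'f', 'g'])]

# combinations() yields each unordered pair once, matching the i < j filter.
pairs = [(i, j, calculate_distance(fi, fj))
         for (i, fi), (j, fj) in combinations(rows, 2)]
```

This produces the same three `(id1, id2, dist)` tuples as the RDD version and can be a useful sanity check before distributing the computation.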
answered 2018-04-27T04:12:08.800