I have a PySpark dataframe like this:
+--------+-------------+--------------+-----------------------+
|material|purchase_date|mkt_prc_usd_lb|min_mkt_prc_over_1month|
+--------+-------------+--------------+-----------------------+
| Copper| 2019-01-09| 2.6945| 2.6838|
| Copper| 2019-01-23| 2.6838| 2.6838|
| Zinc| 2019-01-23| 1.1829| 1.1829|
| Zinc| 2019-06-26| 1.1918| 1.1918|
|Aluminum| 2019-01-02| 0.8363| 0.8342|
|Aluminum| 2019-01-09| 0.8342| 0.8342|
|Aluminum| 2019-01-23| 0.8555| 0.8342|
|Aluminum| 2019-04-03| 0.8461| 0.8461|
+--------+-------------+--------------+-----------------------+
The last column, "min_mkt_prc_over_1month", is the minimum "mkt_prc_usd_lb" (column 3) for a material over one month, i.e. over a (-15 days, +15 days) range window partitioned by material and ordered by purchase_date:

The code is:
from pyspark.sql.window import Window
from pyspark.sql.functions import col

days = lambda i: i * 86400  # rangeBetween operates on the epoch-second long values

w2 = (Window()
      .partitionBy("material")
      .orderBy(col("purchase_date").cast("timestamp").cast("long"))
      .rangeBetween(-days(15), days(15)))
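
For reference, the column is then filled in by applying that window (assuming the dataframe above is called df):

from pyspark.sql import functions as F

df = df.withColumn("min_mkt_prc_over_1month",
                   F.min("mkt_prc_usd_lb").over(w2))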
Now, I want to find the "purchase_date" on which the price is/will be at that minimum.

Expected output (from the first two rows):
+--------+-------------+--------------+-----------------------+------------------+
|material|purchase_date|mkt_prc_usd_lb|min_mkt_prc_over_1month|date_of_min_price |
+--------+-------------+--------------+-----------------------+------------------+
| Copper| 2019-01-09| 2.6945| 2.6838| 2019-01-23|
| Copper| 2019-01-23| 2.6838| 2.6838| 2019-01-23|
+--------+-------------+--------------+-----------------------+------------------+
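
One idea I'm considering (an untested sketch, reusing w2 above and assuming the dataframe is called df) is to take the window minimum of a (price, date) struct, so the date travels along with the minimum price, and then extract the date field:

from pyspark.sql import functions as F

# min over a struct compares fields left to right, so the first field (the price)
# decides which row wins and its purchase_date comes along with it
df = df.withColumn(
    "date_of_min_price",
    F.min(F.struct("mkt_prc_usd_lb", "purchase_date")).over(w2).getField("purchase_date"),
)

Is that struct trick the right way to do this, or is there a cleaner approach?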