
Using the pyspark (2.1) dataset below, how can I use a window function to count how many times the current record's day of week appeared in the past 28 days?

Example dataframe:

from pyspark.sql import functions as F
df = sqlContext.createDataFrame([
    ("a", "1", "2018-01-01 12:01:01", "Monday"),
    ("a", "13", "2018-01-01 14:01:01", "Monday"),
    ("a", "22", "2018-01-02 22:01:01", "Tuesday"),
    ("a", "43", "2018-01-08 01:01:01", "Monday"),
    ("a", "43", "2018-01-09 01:01:01", "Tuesday"),
    ("a", "74", "2018-01-10 12:01:01", "Wednesday"),
    ("a", "95", "2018-01-15 06:01:01", "Monday"),
], ["person_id", "other_id", "timestamp", "dow"])


df.withColumn("dow_count",`some window function`)

A possible window:

from pyspark.sql import Window
from pyspark.sql import functions as F
Days_28 = (86400 * 28)
window = Window.partitionBy("person_id").orderBy('timestamp').rangeBetween(-Days_28, -1)
## I know this next line is wrong
df.withColumn("dow_count",F.sum(F.when(Current_day=windowed_day,1).otherwise(0)).over(window))

Example output:

df.show()

+---------+--------+-------------------+---------+---------+
|person_id|other_id|          timestamp|      dow|dow_count|
+---------+--------+-------------------+---------+---------+
|        a|       1|2018-01-01 12:01:01|   Monday|0        |
|        a|      13|2018-01-01 14:01:01|   Monday|1        |
|        a|      22|2018-01-02 22:01:01|  Tuesday|0        |
|        a|      43|2018-01-08 01:01:01|   Monday|2        |
|        a|      43|2018-01-09 01:01:01|  Tuesday|1        |
|        a|      74|2018-01-10 12:01:01|Wednesday|0        |
|        a|      95|2018-01-15 06:01:01|   Monday|3        |
+---------+--------+-------------------+---------+---------+

2 Answers


Use F.row_number() with the window partitioned by (person_id, dow); with your logic, rangeBetween() should be replaced by where():

from datetime import timedelta, datetime
from pyspark.sql import Window
from pyspark.sql import functions as F

N_days = 28
end = datetime.combine(datetime.today(), datetime.min.time())  # midnight today
start = end - timedelta(days=N_days)                           # 28 days back

window = Window.partitionBy("person_id", "dow").orderBy('timestamp')

df.where((df.timestamp < end) & (df.timestamp >= start)) \
  .withColumn('dow_count', F.row_number().over(window) - 1) \
  .show()
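
Worth noting: this filters to a single 28-day span measured back from today and then uses row_number() - 1 to count the earlier rows with the same day of week inside that span. It is not a rolling lookback per row, so it reproduces the expected dow_count only when every row of interest falls inside that one window.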
answered 2018-06-07T02:34:25.243

I figured it out and thought I would share.

First, create a unix timestamp and cast it to long. Then partition by person and day of week. Finally, use the count function over the window.

from pyspark.sql import Window
from pyspark.sql import functions as F

# unix timestamp in seconds, as a long, so it can be used with rangeBetween()
df = df.withColumn('unix_ts', df.timestamp.cast('timestamp').cast('long'))

# previous 28 days, excluding the current row
w = Window.partitionBy('person_id', 'dow').orderBy('unix_ts').rangeBetween(-86400 * 28, -1)
df = df.withColumn('dow_count', F.count('unix_ts').over(w))
df.sort(df.unix_ts).show()
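
The cast to a long of unix seconds matters: rangeBetween() with numeric bounds needs a numeric ordering column, and (-86400 * 28, -1) then reads as "from 28 days before the current row's timestamp up to one second before it", which keeps the current row itself out of the count.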

Bonus: how to create the actual day of week from the timestamp.

df = df.withColumn("DayOfWeek",F.date_format(df.timestamp, 'EEEE'))

I could not have done this without jxc's hint and this stackoverflow post.

answered 2018-06-07T17:49:18.007