
I want the last observation of each stock at the end of every minute. My high-frequency dataframe looks like this:

+-----+--------+-------+----------+----------+----------+
|stock| date   | hour  |  minute  |  second  |  price   |
+-----+--------+-------+----------+----------+----------+
| VOD | 01-02  |  10   |   13     |   11     |  85.35   |
| VOD | 01-02  |  10   |   13     |   12     |  85.75   |
| VOD | 01-02  |  10   |   14     |   09     |  84.35   |
| VOD | 01-02  |  10   |   14     |   16     |  82.85   |
| VOD | 01-02  |  10   |   14     |   26     |  85.65   |
| VOD | 01-02  |  10   |   15     |   07     |  84.35   |
| ... |  ...   |  ...  |   ...    |   ...    |   ...    |
| ABC | 01-02  |  11   |   13     |   11     |  25.35   |
| ABC | 01-02  |  11   |   13     |   15     |  25.39   |
| ABC | 01-02  |  11   |   13     |   19     |  25.26   |
+-----+--------+-------+----------+----------+----------+

The desired output should look like:

+-----+--------+-------+--------+-------+
|stock| date   | hour  | minute | price |
+-----+--------+-------+--------+-------+
| VOD | 01-02  |  10   |  13    | 85.75 |
| VOD | 01-02  |  10   |  14    | 85.65 |
| VOD | 01-02  |  10   |  15    | 84.35 |
| VOD | 01-02  |  10   |  16    | 85.75 |
| ... |  ...   |  ...  |  ...   |  ...  |
| ABC | 01-02  |  11   |  13    | 25.26 |
+-----+--------+-------+--------+-------+

I know I probably have to use the partitionBy and orderBy syntax to get this result, but the two confuse me. I am familiar with the groupby function in SQL, and I would like to know which of them is more similar to groupby. Can anyone help?


2 Answers


We can create the new frame using a window function partitioned by 'stock', 'date', 'hour', 'minute'.

  • For this case, we order each partition by the second column in descending order.

  • Then we select only the first row from each window frame.

Example:

df.show()
#+-----+-----+----+------+------+-----+
#|stock| date|hour|minute|second|price|
#+-----+-----+----+------+------+-----+
#|  VOD|01-02|  10|    13|    11|85.35|
#|  VOD|01-02|  10|    13|    12|85.75|
#|  VOD|01-02|  10|    14|    09|84.35|
#|  VOD|01-02|  10|    14|    16|82.85|
#|  VOD|01-02|  10|    14|    26|85.65|
#+-----+-----+----+------+------+-----+

from pyspark.sql.window import Window
from pyspark.sql.functions import col, desc, row_number

w = Window.partitionBy('stock', 'date', 'hour', 'minute').orderBy(desc('second'))

#add a row number within each (stock, date, hour, minute) window
df.withColumn("rn",row_number().over(w)).show()

#+-----+-----+----+------+------+-----+---+
#|stock| date|hour|minute|second|price| rn|
#+-----+-----+----+------+------+-----+---+
#|  VOD|01-02|  10|    13|    12|85.75|  1|
#|  VOD|01-02|  10|    13|    11|85.35|  2|
#|  VOD|01-02|  10|    14|    26|85.65|  1|
#|  VOD|01-02|  10|    14|    16|82.85|  2|
#|  VOD|01-02|  10|    14|    09|84.35|  3|
#+-----+-----+----+------+------+-----+---+

#then select only the first row as we are ordering descending.
df.withColumn("rn",row_number().over(w)).filter(col("rn") == 1).drop("second","rn").show()
#+-----+-----+----+------+-----+
#|stock| date|hour|minute|price|
#+-----+-----+----+------+-----+
#|  VOD|01-02|  10|    13|85.75|
#|  VOD|01-02|  10|    14|85.65|
#+-----+-----+----+------+-----+
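
To answer the groupby comparison directly: partitionBy plays the role of SQL's GROUP BY (it defines the groups), while orderBy only sorts the rows inside each group. If you want something that is literally a groupBy, here is a minimal sketch of an equivalent aggregation; it relies on max over a struct comparing fields left to right, so the row with the largest second wins and carries its price along (this assumes second stays zero-padded, so string order matches numeric order):

from pyspark.sql import functions as F

last_per_minute = (
    df.groupBy('stock', 'date', 'hour', 'minute')
      # max of a struct is taken field by field, so this picks the
      # row with the largest 'second' and keeps its 'price'
      .agg(F.max(F.struct('second', 'price')).alias('last'))
      .select('stock', 'date', 'hour', 'minute',
              F.col('last.price').alias('price'))
)
last_per_minute.show()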
answered 2020-03-20 19:11:24

After some trial and error, it looks like I have a solution: create a column holding the cumulative price within each minute, then select the row where that cumulative price is largest (i.e. the last observation of the minute).

from pyspark.sql.window import Window
from pyspark.sql import functions as psf

# cumulative price within each (stock, date, hour, minute) group, ordered by second
w1 = Window.partitionBy('stock', 'date', 'hour', 'minute').orderBy('second')

# create a column named subgroup holding the cumulative value of price
df1 = df[['stock', 'date', 'hour', 'minute', 'second', 'price']] \
    .withColumn('subgroup', psf.sum('price').over(w1))
df1.orderBy(['stock', 'date', 'hour', 'minute', 'second']).show()

# the last observation of each minute is the row where the cumulative
# price reaches its maximum (assuming prices are positive)
w = Window.partitionBy('stock', 'date', 'hour', 'minute')
df3 = df1.withColumn('max', psf.max('subgroup').over(w)) \
    .where(psf.col('subgroup') == psf.col('max')).drop('max')

df3 = df3.orderBy(['stock', 'date', 'hour', 'minute', 'second']).drop('subgroup')
df3 = df3.withColumnRenamed('price', 'lastprice')   # rename
df3.show()
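
With the sample rows from the question, this should give something like the following (the second column is still kept here; add .drop('second') to match the desired output exactly):

#+-----+-----+----+------+------+---------+
#|stock| date|hour|minute|second|lastprice|
#+-----+-----+----+------+------+---------+
#|  VOD|01-02|  10|    13|    12|    85.75|
#|  VOD|01-02|  10|    14|    26|    85.65|
#|  VOD|01-02|  10|    15|    07|    84.35|
#+-----+-----+----+------+------+---------+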
answered 2020-03-20 18:50:45