So I read a CSV file with a schema:
from pyspark.sql.types import StructType, StructField, StringType

mySchema = StructType([StructField("StartTime", StringType(), True),
                       StructField("EndTime", StringType(), True)])
data = spark.read.load('/mnt/Experiments/Bilal/myData.csv', format='csv',
                       header='false', schema=mySchema)
data.show(truncate=False)
and I get:
+---------------------------+---------------------------+
|StartTime |EndTime |
+---------------------------+---------------------------+
|2018-12-24T03:03:31.8088926|2018-12-24T03:07:35.2802489|
|2018-12-24T03:13:25.7756662|2018-12-24T03:18:10.1018656|
|2018-12-24T03:23:32.9391784|2018-12-24T03:27:57.2195314|
|2018-12-24T03:33:31.0793551|2018-12-24T03:37:04.6395942|
|2018-12-24T03:43:54.1638926|2018-12-24T03:46:38.1188857|
+---------------------------+---------------------------+
Now, when I convert these columns from StringType to TimestampType using:
from pyspark.sql.functions import to_timestamp

data = data.withColumn('StartTime', to_timestamp('StartTime', "yyyy-MM-dd'T'HH:mm:ss.SSSSSS"))
data = data.withColumn('EndTime', to_timestamp('EndTime', "yyyy-MM-dd'T'HH:mm:ss.SSSSSS"))
I get only null values:
+---------+-------+
|StartTime|EndTime|
+---------+-------+
|null |null |
|null |null |
|null |null |
|null |null |
|null |null |
+---------+-------+
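One thing worth noting: the fractional seconds in the data have seven digits (e.g. .8088926), while the pattern declares six with SSSSSS. A minimal workaround sketch, assuming that mismatch is what produces the nulls: truncate the fraction to milliseconds before parsing. The regex and the .SSS pattern here are my own choice, not part of the original code.

from pyspark.sql.functions import regexp_replace, to_timestamp

# Keep only the first three fractional digits, e.g.
# 2018-12-24T03:03:31.8088926 -> 2018-12-24T03:03:31.808
data = data.withColumn('StartTime',
    to_timestamp(regexp_replace('StartTime', r'(\.\d{3})\d+$', '$1'),
                 "yyyy-MM-dd'T'HH:mm:ss.SSS"))
data = data.withColumn('EndTime',
    to_timestamp(regexp_replace('EndTime', r'(\.\d{3})\d+$', '$1'),
                 "yyyy-MM-dd'T'HH:mm:ss.SSS"))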