I have a bunch of parquet files created like this:
import dask.dataframe as dd

dd.to_parquet(df, 'dir/of/parquet', partition_on=['month', 'day'], engine='fastparquet')
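For reference, partition_on produces a hive-style directory layout, so the partition values end up in the path names rather than inside the data files. A toy sketch (hypothetical data and paths) of what that layout looks like:

import pandas as pd
import dask.dataframe as dd

# Tiny frame containing the two partition columns; everything here is made up.
toy = dd.from_pandas(
    pd.DataFrame({'month': [1, 1, 2], 'day': [1, 2, 1], 'value': [10, 20, 30]}),
    npartitions=1)
dd.to_parquet(toy, 'toy_parquet', partition_on=['month', 'day'], engine='fastparquet')

# The resulting tree looks roughly like:
#   toy_parquet/_metadata
#   toy_parquet/month=1/day=1/part.0.parquet
#   toy_parquet/month=1/day=2/part.0.parquet
#   toy_parquet/month=2/day=1/part.0.parquet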
and I used to read a subset of the columns like this:
import numpy as np

raw_data_view = dd.read_parquet(
    'data/raw_data_fast_par.par',
    columns=['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method',
             'http_req_header_host', 'http_req_header_referer',
             'http_req_header_useragent', 'http_req_secondleveldomain'],
    engine='fastparquet',
    filters=[('@timestamp', '>=', np.datetime64(start)),
             ('@timestamp', '<', np.datetime64(end))])
This worked fine before updating to dask 2.2.0 and the latest fastparquet. Now I get these messages. When executing the read command:
RuntimeWarning: Multiple sorted columns found, cannot autodetect index
RuntimeWarning,
and when calling compute:
ValueError: The columns in the computed data do not match the columns in the provided metadata
Expected: ['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method', 'http_req_header_host', 'http_req_header_referer', 'http_req_header_useragent', 'http_req_secondleveldomain']
Actual: ['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method', 'http_req_header_host', 'http_req_header_referer', 'http_req_header_useragent', 'http_req_secondleveldomain', 'month', 'day']
Has something changed?
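In the meantime I can avoid the error like this; a sketch of the workaround, assuming the partition columns ('month', 'day') are now always appended to the result, and assuming that passing an explicit index skips the autodetection that triggers the RuntimeWarning:

cols = ['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method',
        'http_req_header_host', 'http_req_header_referer',
        'http_req_header_useragent', 'http_req_secondleveldomain']
raw_data_view = dd.read_parquet(
    'data/raw_data_fast_par.par',
    # Request the partition columns explicitly so the declared metadata
    # matches what compute() actually returns.
    columns=cols + ['month', 'day'],
    engine='fastparquet',
    # index=False skips index autodetection entirely.
    index=False,
    filters=[('@timestamp', '>=', np.datetime64(start)),
             ('@timestamp', '<', np.datetime64(end))])
# Drop the partition columns again since I don't actually need them.
raw_data_view = raw_data_view.drop(columns=['month', 'day'])

But I'd still like to understand whether this behavior change is intentional.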