I am reading a CSV file with pandas.read_csv, and it auto-detects the schema, something like:
Column1: string
Column2: string
Column3: string
Column4: int64
Column5: double
Column6: double
__index_level_0__: int64
I then try to write it out as a Parquet table with pyarrow.parquet.write_table. However, I want the new Parquet file to use the following schema:
Column1: string
Column2: string
Column3: string
Column4: string
Column5: string
Column6: string
__index_level_0__: int64
But I get an error saying "Table schema does not match schema used to create file". Here is the snippet I use to convert the CSV file to Parquet, borrowed from here:
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

csv_file = 'C:/input.csv'
parquet_file = 'C:/output.parquet'
chunksize = 100_000

csv_stream = pd.read_csv(csv_file, sep=',', chunksize=chunksize, low_memory=False, encoding="ISO-8859-1")

for i, chunk in enumerate(csv_stream):
    print("Chunk", i)
    if i == 0:
        # Guess the schema of the CSV file from the first chunk
        # parquet_schema = pa.Table.from_pandas(df=chunk).schema
        parquet_schema = pa.schema([
            ('c1', pa.string()),
            ('c2', pa.string()),
            ('c3', pa.string()),
            ('c4', pa.string()),
            ('c5', pa.string()),
            ('c6', pa.string())
        ])
        # Open a Parquet file for writing
        parquet_writer = pq.ParquetWriter(parquet_file, parquet_schema, compression='snappy')
    # Write CSV chunk to the parquet file
    table = pa.Table.from_pandas(chunk, schema=parquet_schema)
    parquet_writer.write_table(table)

parquet_writer.close()