I am trying to upload a pandas.DataFrame to Google BigQuery using the pandas.DataFrame.to_gbq() function documented here. The problem is that to_gbq() takes 2.3 minutes, while uploading directly to Google Cloud Storage takes less than a minute. I am planning to upload a bunch of dataframes (~32), each of a similar size, so I want to know which is the faster alternative.
This is the script I am using:
dataframe.to_gbq('my_dataset.my_table',
                 'my_project_id',
                 chunksize=None,  # I have tried with several chunk sizes, it runs faster when it's one big chunk (at least for me)
                 if_exists='append',
                 verbose=False
                 )
dataframe.to_csv(str(month) + '_file.csv')  # the file size is 37.3 MB, this takes almost 2 seconds
# manually upload the file into GCS GUI
print(dataframe.shape)
(363364, 21)
My question is: which is faster?

1. Uploading the Dataframe using the pandas.DataFrame.to_gbq() function
2. Saving the Dataframe as a CSV and then uploading it as a file to BigQuery using the Python API
3. Saving the Dataframe as a CSV, then uploading the file to Google Cloud Storage using this procedure and then reading it from BigQuery (a rough sketch of this is shown after the list)
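For reference, a minimal sketch of Alternative 3, assuming the google-cloud-storage and google-cloud-bigquery client libraries, a pre-existing bucket, and placeholder bucket/dataset/table names:

from google.cloud import bigquery, storage

def load_via_gcs(dataset_id, table_id, bucket_name, source_file_name):
    # Upload the local CSV to Google Cloud Storage.
    storage_client = storage.Client()
    blob = storage_client.bucket(bucket_name).blob(source_file_name)
    blob.upload_from_filename(source_file_name)

    # Load the file from GCS into BigQuery via a load job.
    bigquery_client = bigquery.Client()
    table_ref = bigquery_client.dataset(dataset_id).table(table_id)
    job_config = bigquery.LoadJobConfig()
    job_config.source_format = bigquery.SourceFormat.CSV
    job_config.autodetect = True
    uri = 'gs://{}/{}'.format(bucket_name, source_file_name)
    job = bigquery_client.load_table_from_uri(uri, table_ref, job_config=job_config)
    job.result()  # waits for the load job to complete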
Update:

Alternative 1 seems to be faster than Alternative 2 (using pd.DataFrame.to_csv() and load_data_from_file(); Alternative 2 took 17.9 seconds more on average, over 3 runs):
from google.cloud import bigquery

def load_data_from_file(dataset_id, table_id, source_file_name):
    bigquery_client = bigquery.Client()
    dataset_ref = bigquery_client.dataset(dataset_id)
    table_ref = dataset_ref.table(table_id)

    with open(source_file_name, 'rb') as source_file:
        # This example uses CSV, but you can use other formats.
        # See https://cloud.google.com/bigquery/loading-data
        job_config = bigquery.LoadJobConfig()
        job_config.source_format = bigquery.SourceFormat.CSV
        job_config.autodetect = True
        job = bigquery_client.load_table_from_file(
            source_file, table_ref, job_config=job_config)

    job.result()  # Waits for job to complete

    print('Loaded {} rows into {}:{}.'.format(
        job.output_rows, dataset_id, table_id))
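For completeness, a minimal timing sketch along these lines can be used to compare the first two alternatives (3 runs each, as in the numbers above); the project, dataset and table names are placeholders, and dataframe and load_data_from_file() are assumed to be defined as above:

import time

def time_alternatives(dataframe, n_runs=3):
    # Alternative 1: pandas.DataFrame.to_gbq()
    start = time.time()
    for _ in range(n_runs):
        dataframe.to_gbq('my_dataset.my_table', 'my_project_id', if_exists='append')
    print('to_gbq():       {:.1f} s per run'.format((time.time() - start) / n_runs))

    # Alternative 2: to_csv() followed by a load job from the local file
    start = time.time()
    for _ in range(n_runs):
        dataframe.to_csv('tmp_file.csv', index=False)
        load_data_from_file('my_dataset', 'my_table', 'tmp_file.csv')
    print('CSV + load job: {:.1f} s per run'.format((time.time() - start) / n_runs))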