I'm trying to do some data exploration on a dataset I have. The table I'm importing is 11 million rows. Here are the script and the output:

# Load the BigQuery client library (query_exec comes from bigrquery)
library(bigrquery)

# Creating a variable for our BQ project space
project_id <- 'project space'

# Query
Step1 <- 
"
   insertquery
"

# Executing the query from the variable above
Step1_df <- query_exec(Step1, project = project_id, use_legacy_sql = FALSE, max_pages = Inf, page_size = 99000)

Error:

Error in curl::curl_fetch_memory(url, handle = handle) : 
  Operation was aborted by an application callback

Can I use a different BigQuery library? I'd also like to speed up the transfer time.
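
For reference, here is a minimal sketch of what I understand the newer DBI-style interface in bigrquery (>= 1.0) to look like: bq_project_query() runs the query server-side and bq_table_download() pulls the result down in pages, replacing the deprecated query_exec(). The project ID and SQL placeholder are carried over from my script above; I haven't confirmed this avoids the curl error.

# Sketch using bigrquery's newer API in place of the deprecated query_exec();
# argument availability (e.g. page_size) can vary across bigrquery versions.
library(bigrquery)

project_id <- 'project space'   # placeholder, same as above

sql <- "
   insertquery
"

# Run the query server-side; the result is stored in a temporary BigQuery table
tb <- bq_project_query(project_id, sql)

# Download the result set into a data frame; bq_table_download() fetches
# pages concurrently, which is usually faster for multi-million-row results
Step1_df <- bq_table_download(tb, page_size = 99000)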
