
I am trying to import a file (json.txt) from Cloud Storage into BigQuery via the API, and the job raises an error. The same import works fine with no errors through the web UI (I even set maxBadRecords=0). Can someone tell me what I am doing wrong here? Is the code wrong, or do I need to change some setting somewhere in BigQuery?

The file is a plain-text UTF-8 file with the contents below; I followed the documentation on BigQuery and JSON imports.

{"person_id":225,"person_name":"John","object_id":1}
{"person_id":226,"person_name":"John","object_id":1}
{"person_id":227,"person_name":"John","object_id":null}
{"person_id":229,"person_name":"John","object_id":1}

The import job raises the following error for every row: "Value cannot be converted to expected type."

    {
     "reason": "invalid",
     "location": "Line:15 / Field:1",
     "message": "Value cannot be converted to expected type."
    },
    {
     "reason": "invalid",
     "location": "Line:16 / Field:1",
     "message": "Value cannot be converted to expected type."
    },
    {
     "reason": "invalid",
     "location": "Line:17 / Field:1",
     "message": "Value cannot be converted to expected type."
    },
    {
     "reason": "invalid",
     "location": "Line:18 / Field:1",
     "message": "Value cannot be converted to expected type."
    },
    {
     "reason": "invalid",
     "message": "Too many errors encountered. Limit is: 10."
    }
   ]
  },
  "statistics": {
   "creationTime": "1384484132723",
   "startTime": "1384484142972",
   "endTime": "1384484182520",
   "load": {
    "inputFiles": "1",
    "inputFileBytes": "960",
    "outputRows": "0",
    "outputBytes": "0"
   }
  }
 }

The file can be accessed here: http://www.sendspace.com/file/7q0o37

My code and the schema are as follows:

def insert_and_import_table_in_dataset(tar_file, table, dataset=DATASET)
  # Build the load-job request body; @project_id, @client and @bigquery
  # are assumed to be set up elsewhere (legacy google-api-client gem).
  config = {
    'configuration' => {
      'load' => {
        'sourceUris' => ["gs://test-bucket/#{tar_file}"],
        'schema' => {
          'fields' => [
            { 'name' => 'person_id',   'type' => 'INTEGER', 'mode' => 'nullable' },
            { 'name' => 'person_name', 'type' => 'STRING',  'mode' => 'nullable' },
            { 'name' => 'object_id',   'type' => 'INTEGER', 'mode' => 'nullable' }
          ]
        },
        'destinationTable' => {
          'projectId' => @project_id.to_s,
          'datasetId' => dataset,
          'tableId' => table
        },
        'sourceFormat' => 'NEWLINE_DELIMITED_JSON',
        'createDisposition' => 'CREATE_IF_NEEDED',
        'maxBadRecords' => 10
      }
    }
  }

  result = @client.execute(
    :api_method => @bigquery.jobs.insert,
    :parameters => {
      # 'uploadType' => 'resumable',
      :projectId => @project_id.to_s,
      :datasetId => dataset
    },
    :body_object => config
  )

  # upload = result.resumable_upload
  # @client.execute(upload) if upload.resumable?

  puts result.response.body
  json = JSON.parse(result.response.body)

  # Poll until the load job reports state DONE.
  while true
    job_status = get_job_status(json['jobReference']['jobId'])
    if job_status['status']['state'] == 'DONE'
      puts "DONE"
      return true
    else
      puts job_status['status']['state']
      puts job_status
      sleep 5
    end
  end
end
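
For reference, get_job_status is not shown in the post; a minimal sketch of what it presumably looks like, assuming the same @client and @bigquery objects and the jobs.get method of the same gem:

    def get_job_status(job_id)
      # Fetch the job resource and return it as a parsed hash.
      result = @client.execute(
        :api_method => @bigquery.jobs.get,
        :parameters => {
          'projectId' => @project_id.to_s,
          'jobId' => job_id
        }
      )
      JSON.parse(result.response.body)
    end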

Can someone tell me what I am doing wrong? What do I need to fix, and where?

Also, at some point in the future I would like to import compressed files. Is ".tar.gz" okay for that, or do I need to make it just ".gz"?

Thanks in advance for all your help. I appreciate it.


1 Answer


A lot of people (including me) have run into this same problem: you are importing a JSON file without specifying an import format, so it defaults to CSV. Parsed as CSV, a line like {"person_id":225,... cannot be converted to the INTEGER type of the first schema column, which would explain why every row fails on Field:1.

If you set configuration.load.sourceFormat to NEWLINE_DELIMITED_JSON, you should be good to go.
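
In the request body, that corresponds to a single key (a minimal sketch reusing the question's hash shape; the bucket, project, dataset and table names are placeholders):

    config = {
      'configuration' => {
        'load' => {
          'sourceUris' => ['gs://test-bucket/json.txt'],
          # Without this key the load job assumes CSV input.
          'sourceFormat' => 'NEWLINE_DELIMITED_JSON',
          'destinationTable' => {
            'projectId' => project_id,
            'datasetId' => dataset,
            'tableId'   => table
          }
        }
      }
    }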

There is a bug on our side that makes this harder than it should be, or at least makes it hard to detect when a file is the wrong type, but I will raise the priority on it.

Answered 2013-11-15T22:57:08.827