
I am trying to load a local CSV file into BigQuery using the Java API.

But I could not get it to work.

If you can see what is wrong in the code below, please let me know...

        TableSchema schema = new TableSchema();
        ArrayList<TableFieldSchema> fields = new ArrayList<TableFieldSchema>();
        fields.add(new TableFieldSchema().setName("nn").setType("String"));
        fields.add(new TableFieldSchema().setName("gg").setType("String"));
        fields.add(new TableFieldSchema().setName("uu").setType("String"));
        schema.setFields(fields);

        TableReference destTable = new TableReference();
        destTable.setProjectId(projectId);
        destTable.setDatasetId(datasetId);
        destTable.setTableId("testUploads_fromJava");

        FileContent content = new FileContent("application/octet-stream", new File(csv));

        Job job = new Job();
        JobConfiguration config = new JobConfiguration();
        JobConfigurationLoad configLoad = new JobConfigurationLoad();

        configLoad.setSchema(schema);
        configLoad.setDestinationTable(destTable);

        config.setLoad(configLoad);
        job.setConfiguration(config);

        Insert insert = bigquery.jobs().insert(projectId, job, content);
        insert.setProjectId(projectId);
        JobReference jobRef = insert.execute().getJobReference();

The error occurs on the line "JobReference jobRef = insert.execute().getJobReference();".

Here is the stack trace:

java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3004)
at java.net.URI.<init>(URI.java:577)
at com.google.api.client.http.GenericUrl.<init>(GenericUrl.java:100)
at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:269)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:408)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:328)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:449)
at bigquery.GettingBigQueryResult.loadLocalCSVtoBQ(GettingBigQueryResult.java:117)
at main.GetBQData.main(GetBQData.java:70)

Thank you.


1 Answer


@greeness Thank you for the advice.

My mistake was in how I set up the schema. I changed the code so that the schema definition is loaded from a JSON file.
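For reference, schema.json is just a JSON array of field definitions that parseArrayAndClose turns into TableFieldSchema objects. With the same fields as in my question it would look something like this (the field names here are only an example):

    [
      { "name": "nn", "type": "STRING" },
      { "name": "gg", "type": "STRING" },
      { "name": "uu", "type": "STRING" }
    ]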

The corrected code is below.

        TableSchema schema = new TableSchema();
        schema.setFields(new ArrayList<TableFieldSchema>());
        JacksonFactory JACKSON = new JacksonFactory();
        JACKSON.createJsonParser(new FileInputStream("schema.json"))
        .parseArrayAndClose(schema.getFields(), TableFieldSchema.class, null);
        schema.setFactory(JACKSON);

        TableReference destTable = new TableReference();
        destTable.setProjectId(projectId);
        destTable.setDatasetId(datasetId);
        destTable.setTableId(tableId);

        FileContent content = new FileContent("application/octet-stream", new File(csv));

        Job job = new Job();
        JobConfiguration config = new JobConfiguration();
        JobConfigurationLoad configLoad = new JobConfigurationLoad();

        configLoad.setSchema(schema);
        configLoad.setDestinationTable(destTable);

        configLoad.setEncoding("UTF-8");
        configLoad.setCreateDisposition("CREATE_IF_NEEDED");

        config.setLoad(configLoad);
        job.setConfiguration(config);

        Insert insert = bigquery.jobs().insert(projectId, job, content);
        insert.setProjectId(projectId);
        JobReference jobRef = insert.execute().getJobReference();
        String jobId = jobRef.getJobId();
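To check that the load actually finished, I poll the job status with the job id. This is only a rough sketch (it assumes the same bigquery client and projectId as above, and that the calling method handles InterruptedException):

    // Poll the load job until BigQuery reports it as DONE.
    Job polled;
    do {
        Thread.sleep(1000);  // wait between status checks
        polled = bigquery.jobs().get(projectId, jobId).execute();
    } while (!"DONE".equals(polled.getStatus().getState()));

    // A DONE job can still have failed; errorResult is set in that case.
    if (polled.getStatus().getErrorResult() != null) {
        System.out.println("Load failed: " + polled.getStatus().getErrorResult().getMessage());
    } else {
        System.out.println("Load finished successfully.");
    }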

Thank you.

Answered on 2013-02-21T08:42:12.033