
I am creating some new ETL tasks for our data pipeline. We currently have several hundred tasks loading data from various S3 buckets.

So it would go like this:

create or replace file format csv_etl
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  field_optionally_enclosed_by = '"';

create or replace stage ETL_STAGE url='s3://bucketname/'
  file_format = csv_etl;

copy into db.schema.table
  from @ETL_STAGE/Usage
  pattern='/.*[.]csv'
  on_error = 'continue';

However, whenever I use this, my file format is not only not escaping the enclosed double quotes, it is not even skipping the header, so I get this:

+-------------------+----------+----------------+---------------------+-------------------+
| "Usage Task Name" |  "Value" |  "etl_uuid"    | "etl_deviceServer"  |  "etl_timestamp"  |
| "taskname"        |  "0"     | "adfasdfasdf"  | "hostserverip"      | "2020-04-06 2124" |
+-------------------+----------+----------------+---------------------+-------------------+

Pretty perplexed by this, as I am 99% certain the formatting options are correct here.

1 Answer


Run the command below, this time including the file_format option; this applies the file format while loading the file:

  copy into db.schema.table
  from @ETL_STAGE/Usage
  pattern='/.*[.]csv'
  on_error = 'continue'
  file_format = csv_etl;
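
To confirm the format is actually being picked up before running the full load, you can also query the staged files directly with the format applied. This is a minimal sketch assuming the stage and file format names from the question, and the five columns shown in the output above:

  -- Preview the staged CSVs with csv_etl applied; the header row should be
  -- skipped and the enclosing double quotes stripped from each field.
  select t.$1, t.$2, t.$3, t.$4, t.$5
  from @ETL_STAGE/Usage (file_format => 'csv_etl') t
  limit 10;

If the preview still shows the header row and quoted values, the problem is in the file format definition itself rather than the COPY statement.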
answered 2020-04-08T10:28:48.303