
I am using Sqoop v1.4.2 to run incremental imports as saved jobs. The jobs are:
--create job_1 -- import --connect <CONNECT_STRING> --username <UNAME> --password <PASSWORD> -m <MAPPER#> --split-by <COLUMN> --target-dir <TARGET_DIR> --table <TABLE> --check-column <COLUMN> --incremental append --last-value 1

Notes:

  1. The incremental type is append
  2. Job creation succeeds
  3. The job executes successfully multiple times
  4. The newly imported rows are visible in HDFS
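
For reference, a minimal sketch of how this saved-job workflow is typically driven through the sqoop job tool (the job name and placeholders are taken from the question; everything else about the surrounding setup is an assumption):

    # Create the saved job; note the space between "--" and "import"
    sqoop job --create job_1 -- import --connect <CONNECT_STRING> \
        --username <UNAME> --password <PASSWORD> -m <MAPPER#> \
        --split-by <COLUMN> --target-dir <TARGET_DIR> --table <TABLE> \
        --check-column <COLUMN> --incremental append --last-value 1

    # Execute it; after each successful run Sqoop updates the stored
    # --last-value, so the next run imports only newer rows
    sqoop job --exec job_1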

--create job_2 -- import --connect <CONNECT_STRING> --username <UNAME> --password <PASSWORD> -m <MAPPER#> --split-by <COLUMN> --target-dir <TARGET_DIR> --table <TABLE> --check-column <COLUMN> --incremental lastmodified --last-value 1981-01-01

Notes:

  1. The incremental type is lastmodified
  2. Job creation succeeds; the table name differs from the one used in job_1
  3. The job executes successfully only the first time
  4. The rows imported by that first execution are visible in HDFS
  5. Subsequent executions fail with the following error:

    ERROR security.UserGroupInformation: PriviledgedActionException as:<MY_UNIX_USER>(auth:SIMPLE) cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory <TARGET_DIR_AS_SPECIFIED_IN_job_2> already exists
    ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory <TARGET_DIR_AS_SPECIFIED_IN_job_2> already exists
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:872)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:476)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:465)
        at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:108)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
        at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:228)
        at org.apache.sqoop.tool.JobTool.run(JobTool.java:283)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
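
What happens here: with --incremental lastmodified and no --append, the second execution tries to write its output into the same <TARGET_DIR> again, and the MapReduce output check rejects a directory that already exists. The state the saved job has recorded, including the last-value from the first run, can be inspected before re-running; a small sketch using the standard sqoop job tool:

    # List all saved jobs in the metastore
    sqoop job --list

    # Show the stored parameters for job_2, including the
    # incremental last-value recorded after the first execution
    sqoop job --show job_2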
    

1 Answer


If you want to execute job_2 again and again, you need to use --incremental lastmodified together with --append:

sqoop job --create job_2 -- import --connect <CONNECT_STRING> --username <UNAME> \
    --password <PASSWORD> --table <TABLE> --incremental lastmodified --append \
    --check-column <COLUMN> --last-value "2017-11-05 02:43:43" \
    --target-dir <TARGET_DIR> -m 1
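
With --append, each execution writes new part files under <TARGET_DIR> instead of failing because the directory already exists. A quick way to verify, assuming the same paths as in the question:

    # Execute the saved job; repeated runs should now succeed
    sqoop job --exec job_2

    # Check that new part files were appended in HDFS
    hadoop fs -ls <TARGET_DIR>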
Answered 2017-11-04T22:33:20.490