
I would like to know whether hadoop distcp can be used to copy multiple files from S3 to HDFS in one go. It seems to work only for individual files with absolute paths. I want to copy an entire directory, or use a wildcard.

See: Hadoop DistCp using wildcards?

I am aware of s3distcp, but for simplicity I would prefer to use distcp.

Here is my attempt at copying a directory from S3 to HDFS:

[root@ip-10-147-167-56 ~]# /root/ephemeral-hdfs/bin/hadoop distcp s3n://<key>:<secret>@mybucket/dir hdfs:///input/
13/05/23 19:58:27 INFO tools.DistCp: srcPaths=[s3n://<key>:<secret>@mybucket/dir]
13/05/23 19:58:27 INFO tools.DistCp: destPath=hdfs:/input
13/05/23 19:58:29 INFO tools.DistCp: sourcePathsCount=4
13/05/23 19:58:29 INFO tools.DistCp: filesToCopyCount=3
13/05/23 19:58:29 INFO tools.DistCp: bytesToCopyCount=87.0
13/05/23 19:58:29 INFO mapred.JobClient: Running job: job_201305231521_0005
13/05/23 19:58:30 INFO mapred.JobClient:  map 0% reduce 0%
13/05/23 19:58:45 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_0, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:58:55 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_1, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:04 INFO mapred.JobClient: Task Id : attempt_201305231521_0005_m_000000_2, Status : FAILED
java.lang.NullPointerException
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.close(NativeS3FileSystem.java:106)
    at java.io.BufferedInputStream.close(BufferedInputStream.java:468)
    at java.io.FilterInputStream.close(FilterInputStream.java:172)
    at org.apache.hadoop.tools.DistCp.checkAndClose(DistCp.java:1386)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.copy(DistCp.java:434)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:547)
    at org.apache.hadoop.tools.DistCp$CopyFilesMapper.map(DistCp.java:314)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/05/23 19:59:18 INFO mapred.JobClient: Job complete: job_201305231521_0005
13/05/23 19:59:18 INFO mapred.JobClient: Counters: 6
13/05/23 19:59:18 INFO mapred.JobClient:   Job Counters 
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38319
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/05/23 19:59:18 INFO mapred.JobClient:     Launched map tasks=4
13/05/23 19:59:18 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/05/23 19:59:18 INFO mapred.JobClient:     Failed map tasks=1
13/05/23 19:59:18 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201305231521_0005_m_000000
With failures, global counters are inaccurate; consider running with -i
Copy failed: java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
    at org.apache.hadoop.tools.DistCp.copy(DistCp.java:667)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)

1 Answer


You cannot use wildcards in an s3n:// address.

However, it is possible to copy an entire directory from S3 to HDFS. The reason for the NullPointerException in this case is that the HDFS destination folder already exists.

Fix: delete the HDFS destination folder: ./hadoop fs -rmr /input/
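Put together with the command from the question, the fix looks like this (a sketch only; the paths match the question's setup, and `<key>`/`<secret>` remain AWS-credential placeholders):

```shell
# Remove the pre-existing HDFS destination so DistCp can create it itself.
# -rmr is the recursive-delete form used by the Hadoop 1.x shell.
/root/ephemeral-hdfs/bin/hadoop fs -rmr /input/

# Re-run the directory copy from S3 to HDFS.
/root/ephemeral-hdfs/bin/hadoop distcp \
    s3n://<key>:<secret>@mybucket/dir hdfs:///input/
```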

Note 1: I also tried passing -update and -overwrite, but I still got the NPE.

Note 2: https://hadoop.apache.org/docs/r1.2.1/distcp.html shows how to copy multiple explicit files.
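Per that guide, DistCp accepts several source URIs followed by a single destination. A sketch, where `file1` and `file2` are hypothetical object names in the bucket:

```shell
# Copy two explicitly named S3 objects into one HDFS directory.
# <key>/<secret> are AWS-credential placeholders; file1/file2 are
# hypothetical object names used only for illustration.
hadoop distcp \
    s3n://<key>:<secret>@mybucket/dir/file1 \
    s3n://<key>:<secret>@mybucket/dir/file2 \
    hdfs:///input/
```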

Answered 2013-05-23T21:20:07.527