
Using HDP-2.5 on Ubuntu-14.04, I run this command:

$ ./kite-dataset csv-import ./test.csv  test_schema

to try to import raw CSV data into Hive using KiteSdk ver. 1-1-0, and I get the following IOError:

1 job failure(s) occurred: org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/444e6fc4-10e2-407d-afaf-723c408a6d... ID=1 (1/1)(1): java.io.FileNotFoundException: File file:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
    at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:125)
    at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:468)
    at org.apache.hadoop.fs.FilterFs.resolvePath(FilterFs.java:158)
    at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2195)
    at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2191)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2191)
    at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:603)
    at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:457)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:329)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:204)
    at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:238)
    at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:112)
    at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:55)
    at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:83)
    at java.lang.Thread.run(Thread.java:745)

I have already checked that the file "hdfs:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz" exists, but I have not been able to figure out how to resolve this error for quite a while.
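
For reference, the check was along these lines (assuming the stock HDFS CLI on the sandbox; the path is the one from the stack trace, minus the file: scheme):

$ hdfs dfs -ls /hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz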

Any help is greatly appreciated.


2 Answers


I think you are getting this error because you are using Kite SDK version 1.1.0. I ran into a similar error when doing a csv-import, and there was no such error once I switched to Kite SDK version 1.0.0.

I suggest you switch to Kite SDK version 1.0.0.

Also, there have been no new Kite SDK releases after 1.1.0, and even that release dates back to June 2015.
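
Roughly, switching versions would look like the following, assuming the CLI was installed as the self-executing binary jar from Maven Central (the URL pattern here is my assumption, based on the usual Kite install instructions, not something stated in the question):

$ curl -L http://central.maven.org/maven2/org/kitesdk/kite-tools/1.0.0/kite-tools-1.0.0-binary.jar -o kite-dataset
$ chmod +x kite-dataset
$ ./kite-dataset csv-import ./test.csv test_schema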

Answered on 2017-05-05T11:41:29.450

I ran into the same error, and I worked around it by creating /hdp/apps/2.5.0.0-1245/mapreduce and then running: cp /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.5.0.0-1245/mapreduce (see the commands spelled out below).
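
Spelled out, the workaround was roughly the following (the mkdir -p is my reconstruction of the "creating the directory" step; the cp is the command above):

$ mkdir -p /hdp/apps/2.5.0.0-1245/mapreduce
$ cp /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.5.0.0-1245/mapreduce/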

That then produced a new error: org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/413a41a2-8813-4056-9433-3c5e073d80... ID=1 (1/1)(1): java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/crunch-283520469/p1/REDUCE

I am still trying to resolve the problem.

Answered on 2016-10-17T21:39:44.570