
I am submitting jobs programmatically using YarnClient. The cluster I am running against is kerberized.

Normal map reduce jobs submitted via "yarn jar examples.jar wordcount ..." work.

The job I am trying to submit programmatically does not. I get this error:

14/09/04 21:14:29 ERROR client.ClientService: Error happened during application submission: Application application_1409863263326_0002 failed 2 times due to AM Container for appattempt_1409863263326_0002_000002 exited with exitCode: -1000 due to: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "yarn-c1-n1.clouddev.snaplogic.com/10.184.28.108"; destination host is: "yarn-c1-cdh.clouddev.snaplogic.com":8020; . Failing this attempt.. Failing the application.
14/09/04 21:14:29 ERROR client.YClient: Application submission failed

The code looks something like this:

ClientContext context = createContextFrom(args);
YarnConfiguration configuration = new YarnConfiguration();
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(configuration);
ClientService client = new ClientService(context, yarnClient, new InstallManager(FileSystem.get(configuration)));
LOG.info(Messages.RUNNING_CLIENT_SERVICE);
boolean result = client.execute();

I had thought that maybe adding something to the effect of:

yarnClient.getRMDelegationToken(new Text(InetAddress.getLocalHost().getHostAddress()));

might alleviate my pain, but that doesn't seem to help either. Any help would be greatly appreciated.


3 Answers


Alright, well after hours and hours and hours we have this figured out. For all following generations of coders, forever plagued by hadoop's lack of documentation:

You must grab the tokens from the UserGroupInformation object with a call to getCredentials(). Then you must set those tokens on the ContainerLaunchContext.
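For what it's worth, here is a minimal sketch of that idea. It assumes the Kerberos login has already happened; fs, renewer and amContainer are placeholder names and are not from the original code:

Credentials credentials = UserGroupInformation.getCurrentUser().getCredentials();
// Also collect an HDFS delegation token so the AM container can talk to the NameNode.
fs.addDelegationTokens(renewer, credentials);

// Serialize every collected token and hand the buffer to the AM's launch context.
DataOutputBuffer dob = new DataOutputBuffer();
credentials.writeTokenStorageToStream(dob);
ByteBuffer tokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());

ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
amContainer.setTokens(tokens);

The classes involved are org.apache.hadoop.security.Credentials, org.apache.hadoop.io.DataOutputBuffer, org.apache.hadoop.yarn.api.records.ContainerLaunchContext and org.apache.hadoop.yarn.util.Records.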

Answered 2014-09-16T17:24:44.767

You will also get this error if, in any of your hdfs paths, you use the actual name node instead of the logical URI of the HA nameservice.

This is because, if it finds a name node URI instead of the logical URI, it will create a non-HA file system, which will try to use a simple UGI instead of the Kerberos UGI.
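For example (a hedged illustration only; the nameservice name "mycluster" and the host name below are made up, not taken from this answer):

// Logical HA URI: resolved through the failover proxy configured under dfs.nameservices in hdfs-site.xml
Path viaNameservice = new Path("hdfs://mycluster/user/foo/input");

// Concrete NameNode host: bypasses the HA configuration, which is the situation this answer warns about
Path viaNameNode = new Path("hdfs://namenode1.example.com:8020/user/foo/input");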

Answered 2017-07-31T19:29:25.707

I got the same error with incompatible versions of the Hadoop artifacts.

Working example:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public static final String CONF_CORE_SITE = "/etc/hadoop/conf/core-site.xml";
public static final String CONF_HDFS_SITE = "/etc/hadoop/conf/hdfs-site.xml";

/**
 * Build a Configuration from the cluster's core-site.xml and hdfs-site.xml
 * under /etc/hadoop/conf.
 */
private static Configuration getHdfsConfiguration() throws IOException {
    Configuration configuration = new Configuration();

    configuration.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
    configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());

    File hadoopCoreConfig = new File(CONF_CORE_SITE);
    File hadoopHdfsConfig = new File(CONF_HDFS_SITE);

    if (! hadoopCoreConfig.exists() || ! hadoopHdfsConfig.exists()) {
        throw new FileNotFoundException("Files core-site.xml or hdfs-site.xml are not found. Check /etc/hadoop/conf/ path.");
    }

    configuration.addResource(new Path(hadoopCoreConfig.toURI()));
    configuration.addResource(new Path(hadoopHdfsConfig.toURI()));

    //Use existing security context created by $ kinit
    UserGroupInformation.setConfiguration(configuration);
    UserGroupInformation.loginUserFromSubject(null);
    return configuration;
}
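A possible usage sketch of the helper above (the "/tmp" path is only a sanity-check example and is not part of the original answer):

Configuration conf = getHdfsConfiguration();
FileSystem fs = FileSystem.get(conf);            // authenticates with the ticket created by kinit
System.out.println(fs.exists(new Path("/tmp"))); // quick check that the kerberized cluster is reachable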

pom.xml

<properties>
  <hadoop.version>2.6.0</hadoop.version>
  <hadoop.release>cdh5.14.2</hadoop.release>
</properties>


<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>${hadoop.version}-mr1-${hadoop.release}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}-${hadoop.release}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}-${hadoop.release}</version>
</dependency>
Answered 2018-07-05T12:08:45.353