24

I have only found how to set a property: hadoop dfsadmin -D xx=yy

But how do I find the value of a specific property xx from the command line?

4 Answers

49

You can dump the Hadoop configuration by running:

$ hadoop org.apache.hadoop.conf.Configuration
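As far as I can tell, that command just invokes Configuration's main method, which writes the merged configuration out as XML, so you can pipe the output through grep to find one property. Below is a minimal programmatic sketch of the same dump (illustrative only; assumes the Hadoop client jars are on the classpath, and DumpConfig is just a made-up name):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Illustrative stand-alone dumper; not part of the original answer.
public class DumpConfig {
    public static void main(String[] args) throws IOException {
        // new Configuration() loads core-default.xml and core-site.xml from the classpath
        Configuration conf = new Configuration();
        // write the merged key/value pairs as XML, same output as the CLI dump
        conf.writeXml(System.out);
    }
}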
Answered 2014-11-18T13:21:10.960
7

To get a specific key from the configuration:

hdfs getconf -confKey [key]

hdfs getconf -confKey dfs.replication

https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#getconf
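If you need the same lookup from Java rather than the shell, Configuration.get is the programmatic counterpart. A minimal sketch (assuming hadoop-common is on the classpath; hdfs-site.xml is added explicitly so HDFS-specific keys are visible):

import org.apache.hadoop.conf.Configuration;

// Illustrative lookup; mirrors hdfs getconf -confKey dfs.replication.
public class GetConfKey {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // pull in the HDFS settings on top of the core-site defaults
        conf.addResource("hdfs-site.xml");
        // prints the configured value, or null if the key is unset
        System.out.println(conf.get("dfs.replication"));
    }
}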

Answered 2019-11-19T12:57:04.843
6

You can use GenericOptionsParser to load Hadoop's settings into a Configuration object and iterate over its properties. Here is an example that demonstrates this approach via the Configured utility class.

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ConfigPrinter extends Configured implements Tool {
    static {
        // by default core-site.xml is already added
        // loading "hdfs-site.xml" from classpath
        Configuration.addDefaultResource("hdfs-site.xml");
        Configuration.addDefaultResource("mapred-site.xml");
    }

    @Override
    public int run(String[] strings) throws Exception {
        Configuration config =  this.getConf();
        for (Map.Entry<String, String> entry : config) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new ConfigPrinter(), args);
    }
}
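One design note on the example above: because ToolRunner routes the arguments through GenericOptionsParser before run() is called, any -D key=value generic option passed on the command line is already applied to the Configuration that getConf() returns, so overrides show up in the dump as well. A minimal sketch of that parsing step used on its own (illustrative, not from the original answer; DumpWithOverrides is a made-up name):

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class DumpWithOverrides {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // applies generic options such as -D key=value directly onto conf
        new GenericOptionsParser(conf, args);
        // Configuration is iterable over its key/value pairs
        for (Map.Entry<String, String> entry : conf) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}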
Answered 2012-10-08T09:59:25.960
0

Using the excellent ConfigPrinter program from another answer, I went ahead and created a compilable, executable version. For the code, see https://github.com/tdunning/config-print

To use it, change the hadoop version in the pom.xml file to whatever version you are using. Then do this (you will need git, maven, and java installed):

$ git clone https://github.com/tdunning/config-print.git
Cloning into 'config-print'...
remote: Enumerating objects: 13, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 13 (delta 1), reused 12 (delta 0), pack-reused 0
Unpacking objects: 100% (13/13), done.
$ cd config-print/
$ mvn -q package
$ ./target/config-printer | sort
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
dfs.ha.fencing.ssh.connect-timeout = 30000
file.blocksize = 67108864
file.bytes-per-checksum = 512
file.client-write-packet-size = 65536
file.replication = 1
file.stream-buffer-size = 4096
fs.AbstractFileSystem.file.impl = org.apache.hadoop.fs.local.LocalFs
fs.AbstractFileSystem.ftp.impl = org.apache.hadoop.fs.ftp.FtpFs
fs.AbstractFileSystem.har.impl = org.apache.hadoop.fs.HarFs
fs.AbstractFileSystem.hdfs.impl = org.apache.hadoop.fs.Hdfs
fs.AbstractFileSystem.viewfs.impl = org.apache.hadoop.fs.viewfs.ViewFs
... lots of stuff deleted ...
s3native.stream-buffer-size = 4096
s3.replication = 3
s3.stream-buffer-size = 4096
tfile.fs.input.buffer.size = 262144
tfile.fs.output.buffer.size = 262144
tfile.io.chunk.size = 1048576
$
Answered 2021-06-28T19:33:22.777