I've been trying to set up a CDH4 installation of Hadoop. I have 12 machines, labelled hadoop01 - hadoop12, and the name node, job tracker, and all the data nodes started up fine. I'm able to view dfshealth.jsp and see that it has found all the data nodes.

However, whenever I try to start the secondary name node, it throws an exception:

Starting Hadoop secondarynamenode:                         [  OK  ]
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-hadoop02.dev.terapeak.com.out
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:324)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:312)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:305)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:222)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:186)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:578)

Here is my hdfs-site.xml file on the secondary name node:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>10.100.20.168:50070</value>
    <description>
      The address and the base port on which the dfs NameNode Web UI will listen.
      If the port is 0, the server will start on a free port.
    </description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.check.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>40000</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.num.checkpoints.retained</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.restart.recover</name>
    <value>true</value>
  </property>
</configuration>

Something seems wrong with the value given for dfs.namenode.http-address, but I'm not sure what. Should it start with http:// or hdfs://? I tried opening 10.100.20.168:50070 in lynx and it displays a page. Any ideas?

2 Answers

It looks like I was missing the core-site.xml configuration on the secondary name node. I added it, and the process started up fine.

core-site.xml:

<configuration>
 <property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.100.20.168/</value>
 </property>
</configuration>
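
As a quick sanity check (a sketch, assuming the hadoop-hdfs client scripts are on the PATH of the secondary name node host), hdfs getconf prints the configuration value the daemons will actually see, so it shows whether the new core-site.xml is being picked up:

# Print the effective fs.defaultFS as the secondary name node resolves it.
# Before adding core-site.xml this printed file:///; afterwards it should
# print hdfs://10.100.20.168/.
hdfs getconf -confKey fs.defaultFS

Note that since the URI has no port, clients will assume the default NameNode RPC port (8020), so the name node needs to be listening there.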
answered 2012-12-06T23:55:10.517

If you are running a single-node cluster, make sure you have set the HADOOP_PREFIX variable correctly, as described at this link: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

I was facing the same problem as you, and it was fixed by setting this variable.
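
In case it helps, this is roughly what that looks like (a sketch; /usr/local/hadoop is a placeholder, substitute your actual Hadoop installation directory):

# Placeholder path; point this at your real Hadoop installation.
export HADOOP_PREFIX=/usr/local/hadoop
# Verify the variable is set before starting the daemons.
echo $HADOOP_PREFIX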

answered 2014-08-26T08:52:21.073