
I am trying to set up a multi-node cluster with two computers, following Michael Noll's Hadoop tutorial.

When I try to bring up HDFS with start-dfs.sh, it throws a NullPointerException:

hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$ 

I have no idea what is causing this. Please help me figure out the problem. I am a newbie to this topic, so please keep your answer as non-technical as possible. :)

Let me know if you need more information.


5 Answers

master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)

It looks like your secondary namenode cannot connect to the primary namenode, which is absolutely necessary for the whole system because it handles checkpointing. So my guess is that something is wrong with your network configuration, including:

  • ${HADOOP_HOME}/conf/core-site.xml, which should contain something like:

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/app/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
    
        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
            <description>The name of the default file system.  A URI whose
            scheme and authority determine the FileSystem implementation.  The
            uri's scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class.  The uri's authority is used to
            determine the host, port, etc. for a filesystem.</description>
        </property>
    </configuration>
    
  • and /etc/hosts. This file is a real slippery slope; be careful with the IP aliases, because each one should match the actual hostname of the machine at that IP (see the quick checks after this list).

        127.0.0.1   localhost
        127.0.1.1   zac
    
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
    
        192.168.1.153 master     #pay attention to these two!!!
        192.168.99.146 slave1
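
A quick way to sanity-check both files (a minimal sketch; master, slave1, and port 54310 are taken from the snippets above, and passwordless SSH is assumed to be set up as in the tutorial):

    # Verify that the hostnames resolve to the LAN addresses,
    # not to 127.0.1.1 (a common Ubuntu pitfall that breaks Hadoop RPC).
    getent hosts master slave1

    # start-dfs.sh needs passwordless SSH from the master to every
    # slave; this should print the slave's hostname without a prompt.
    ssh slave1 hostname

    # Once the namenode is up, its RPC port from core-site.xml
    # should be reachable from the slave.
    ssh slave1 "nc -zv master 54310"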
    
answered 2012-03-30T02:14:36.797

Apparently the default values are not correct, so you have to add them yourself as described in that article.

It worked for me.
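
In this case the value that must not be left at its default is most likely fs.default.name: with an empty configuration it falls back to file:///, which has no host or port, and NetUtils.createSocketAddr then fails with exactly this NullPointerException. A quick check (a sketch; the paths are the asker's, and "slave" is the hostname from the logs):

    # Confirm the property is set explicitly in core-site.xml
    # on the master...
    grep -A 1 "fs.default.name" /home/hadoop/project/hadoop-0.20.2/conf/core-site.xml

    # ...and that the same file exists on every slave as well.
    ssh slave 'grep -A 1 "fs.default.name" /home/hadoop/project/hadoop-0.20.2/conf/core-site.xml'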

answered 2014-04-17T05:47:23.327

It looks like you did not install Hadoop on the datanode (slave) at all, or you installed it in the wrong path. In your case the correct path should be /home/hadoop/project/hadoop-0.20.2/.
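
One way to verify that (a sketch; it assumes passwordless SSH to the slave is already configured, as the tutorial requires):

    # This is the file the startup log could not find on the slave:
    ssh slave "ls -l /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh"

    # If it is missing, mirror the master's installation to the
    # identical path; Hadoop's scripts expect the same layout everywhere.
    rsync -avz /home/hadoop/project/hadoop-0.20.2/ \
        slave:/home/hadoop/project/hadoop-0.20.2/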

answered 2015-12-09T15:06:08.600

You may have configured the wrong user directory or something similar; it looks like the scripts are searching for your files in the wrong directory.
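
One concrete thing to check (a sketch, assuming the stock 0.20.2 layout): conf/slaves lists the hosts that start-dfs.sh logs into, and it expects to find Hadoop there under the same path it was launched from on the master.

    # Hosts listed here are where start-dfs.sh will ssh to:
    cat /home/hadoop/project/hadoop-0.20.2/conf/slaves

    # Confirm each of those accounts really has that directory.
    ssh slave "ls -d /home/hadoop/project/hadoop-0.20.2"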

answered 2012-03-16T19:39:03.023

It seems your bash scripts do not have execute permission, or do not exist at all:

slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
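
Both possibilities are easy to rule out (a sketch using the path from the log):

    # Check for existence and the execute bit in one step:
    ssh slave 'test -x /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh && echo OK || echo "missing or not executable"'

    # If the file exists but lost its execute bit, restore it:
    ssh slave "chmod +x /home/hadoop/project/hadoop-0.20.2/bin/*.sh"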

answered 2011-03-30T19:08:29.387