
I have installed Hadoop on a Linux cluster. When I try to start the servers with the command $ bin/start-all.sh, I get the following errors:

mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
/home/spuri2/spring_2012/Hadoop/hadoop/hadoop-1.0.2/bin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-spuri2-namenode.pid: Permission denied
head: cannot open `/var/log/hadoop/spuri2/hadoop-spuri2-namenode-gpu02.cluster.out' for reading: No such file or directory
localhost: /home/spuri2/.bashrc: line 10: /act/Modules/3.2.6/init/bash: No such file or directory
localhost: mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
localhost: chown: cannot access `/var/log/hadoop/spuri2': No such file or directory

I have pointed the log directory setting in conf/hadoop-env.sh at a /tmp directory, and I have also set "hadoop.tmp.dir" in core-site.xml to a /tmp/ directory. I do not have write access to /var/log, yet the Hadoop daemons are still trying to write to /var/log and failing.
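For reference, the line I changed in conf/hadoop-env.sh looks roughly like this (the exact /tmp subdirectory is just the one I picked):

# conf/hadoop-env.sh -- where I pointed the daemon log files
export HADOOP_LOG_DIR=/tmp/hadoop-logs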

I would like to know why this is happening.


3 Answers


You have to set this directory in the core-site.xml file, not in hadoop-env.sh:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/Directory_hadoop_user_have_permission/temp/${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
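Once the value points at a directory your user owns, restart the daemons so the new setting is picked up (the path below is only a placeholder):

mkdir -p /home/your_user/hadoop/tmp   # any directory your user can write to
bin/stop-all.sh                       # stop anything that partially started
bin/start-all.sh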
Answered 2012-08-04T01:25:25.067

In short, I ran into this problem because there were multiple Hadoop installations on the university cluster. The installation done as root was interfering with my local Hadoop installation.

The Hadoop daemons could not start because they were unable to write to certain files that required root permissions, while I was running Hadoop as a regular user. The problem arose because our university sysadmin had installed Hadoop as root, so when I started my local Hadoop installation, the root installation's configuration files took precedence over my local configuration files. It took a long time to figure this out, but after the root-installed Hadoop was removed, the problem was resolved.
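If you suspect the same thing, a few quick checks along these lines can show whose installation and configuration the scripts are actually picking up (the directories listed are just the root-owned paths from the error output, not anything specific to your setup):

which hadoop                             # is the hadoop on your PATH yours, or the root-installed one?
echo $HADOOP_CONF_DIR                    # if this is set system-wide, it overrides your local conf/ directory
ls -ld /var/log/hadoop /var/run/hadoop   # root-owned directories like these were what my daemons kept hitting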

Answered 2012-11-26T01:29:24.743

I ran into the same error. If you have already added the required properties under the configuration tags, then before running anything switch to the user that owns the Hadoop directory with su - username, and then try executing start-all.sh, as in the example below.
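For example (hduser and the install path are only placeholders for whatever account and directory own your Hadoop installation):

ls -ld /usr/local/hadoop   # check which user owns the installation
su - hduser                # switch to that user
cd /usr/local/hadoop
bin/start-all.sh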

Also, make sure you have added the necessary entries between the configuration tags, as described in this tutorial:

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Answered 2012-12-10T05:15:39.990