I suspect your Hadoop cluster isn't set up correctly. Please follow these steps:
Step 1: Start by setting up .bashrc:
vi $HOME/.bashrc
Put the following lines at the end of the file (change the Hadoop home to yours):
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    # Quote "$1" so HDFS paths containing spaces still work
    hadoop fs -cat "$1" | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
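After saving the file, reload it so the new variables take effect in your current shell (paths as configured above):

```shell
# Pick up the new environment without opening a new terminal
source $HOME/.bashrc

# Quick sanity check: these should print the paths you configured
echo $HADOOP_HOME
echo $JAVA_HOME
```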
Step 2: Edit hadoop-env.sh as follows:
# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
Step 3: Now create a directory and set the required ownership and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
Step 4: Edit core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>
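Each `<property>` pair goes inside the file's `<configuration>` element; a complete core-site.xml would look roughly like this (the same wrapper applies to mapred-site.xml and hdfs-site.xml below):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```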
Step 5: Edit mapred-site.xml:
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
</property>
Step 6: Edit hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Finally, format your HDFS (you only need to do this the first time you set up a Hadoop cluster):
$ /usr/local/hadoop/bin/hadoop namenode -format
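Once formatting succeeds, you can start the daemons and check that they came up. A minimal sketch, assuming the same /usr/local/hadoop install and the Hadoop 1.x script layout:

```shell
HADOOP_HOME=/usr/local/hadoop

# Start all HDFS and MapReduce daemons (Hadoop 1.x layout assumed)
if [ -x "$HADOOP_HOME/bin/start-all.sh" ]; then
  "$HADOOP_HOME/bin/start-all.sh"
  # jps lists running JVMs; once everything is up you should see
  # NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
  jps
else
  echo "Hadoop not found at $HADOOP_HOME" >&2
fi
```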
Hope this helps.