I have three physical nodes. On each node, I start a Docker container with this command:
docker run -v /home/user/.ssh:/root/.ssh --privileged \
  -p 5050:5050 -p 5051:5051 -p 5052:5052 -p 2181:2181 -p 8089:8081 \
  -p 6123:6123 -p 8084:8080 -p 50090:50090 -p 50070:50070 \
  -p 9000:9000 -p 2888:2888 -p 3888:3888 -p 4041:4040 -p 8020:8020 \
  -p 8485:8485 -p 7078:7077 -p 52222:22 -e WEAVE_CIDR=10.32.0.3/12 \
  -e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins \
  -e LIBPROCESS_IP=10.32.0.3 \
  -e "MESOS_RESOURCES=ports*:[11000-11999]" \
  -ti hadoop_marathon_mesos_flink_2 /bin/bash
I configured Hadoop as follows.
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://mycluster</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://10.32.0.1:8485;10.32.0.2:8485;10.32.0.3:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/tmp/hadoop/dfs/jn</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
<description>Logical name for this new
nameservice</description>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
<description>Unique identifiers for each NameNode in the
nameservice</description>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>10.32.0.1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>10.32.0.2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>10.32.0.1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>10.32.0.2:50070</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>10.32.0.1:2181,10.32.0.2:2181,10.32.0.3:2181</value>
</property>
</configuration>
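For context, this is the bring-up order I understand QJM-based HA to require before formatting (a sketch; the `hdfs --daemon` syntax assumes Hadoop 3.x — on 2.x the equivalent is `hadoop-daemon.sh start journalnode`):

```shell
# JournalNodes must already be running on all three nodes BEFORE the format.

# 1. On each of the three containers:
hdfs --daemon start journalnode

# 2. On the first NameNode (nn1) only:
hdfs namenode -format
hdfs --daemon start namenode

# 3. On the second NameNode (nn2), sync from nn1 instead of formatting:
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

# 4. For automatic failover, format the ZK state and start the ZKFCs:
hdfs zkfc -formatZK
hdfs --daemon start zkfc
```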
The problem is that when I format the namenode:
hadoop namenode -format
it fails. I get this error:
2019-05-06 06:35:09,969 INFO ipc.Client: Retrying connect to server: 10.32.0.2/10.32.0.2:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-05-06 06:35:09,969 INFO ipc.Client: Retrying connect to server: 10.32.0.3/10.32.0.3:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-05-06 06:35:09,987 ERROR namenode.NameNode: Failed to start namenode. org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not check if JNs are ready for formatting. 1 exceptions thrown:
10.32.0.1:8485: Call From 50c5244de4cd/10.32.0.1 to 50c5244de4cd:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I have already published the ports Hadoop needs, but I still get connection refused.
Can anyone tell me what is wrong with my configuration?
Thanks in advance.
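To rule out plain networking problems, a reachability probe along these lines, run from inside one of the containers, shows whether the JournalNode ports answer at all (a sketch using bash's built-in /dev/tcp redirection, assuming `timeout` from coreutils is available):

```shell
# Can this container open a TCP connection to each JournalNode on 8485?
check_port() {
  # exit 0 if a TCP connection to host $1, port $2 opens within 2 seconds
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

for host in 10.32.0.1 10.32.0.2 10.32.0.3; do
  if check_port "$host" 8485; then
    echo "journalnode $host:8485 reachable"
  else
    echo "journalnode $host:8485 NOT reachable"
  fi
done
```

If a port shows as NOT reachable even though it is published, the daemon is probably not listening yet (or is bound to a different interface inside the container).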