
I have set up a four-node Hadoop cluster. In the Hadoop web UI I can see that all datanodes and the namenode are up and running. But when I run `select count(*) from table_name;` in Hive, the query gets stuck.

hive> select count(*) from test_hive2;
Query ID = dssbp_20160804124833_ff269da1-6b91-4e46-a1df-460603a5cb98
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>

The error I keep hitting in the datanode NodeManager logs and the Hive logs is:

2016-08-04 12:33:31,474 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode1/172.18.128.24:6005. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

Things I have checked:

1. I can telnet from the datanodes to the namenode.
2. hadoop put and get commands work.
3. I can create tables in Hive and load data into them.
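Since the NodeManager log above shows repeated connection retries to a specific host and port, that reachability check can be scripted as well. A minimal sketch (`check_port` is a hypothetical helper, not part of Hadoop; the host and port are just the values from the retry log):

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: test the address from the retry log, from a datanode:
# check_port("namenode1", 6005)
```

Running this from each datanode quickly shows whether the problem is network-level (firewall, wrong /etc/hosts entry) or a service that is simply not listening.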

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.128.24   namenode1 mycluster
172.18.128.25  namenode2
172.18.128.26  datanode1
172.18.128.27  datanode2

If anyone can suggest a possible solution, it would be a great help.

Regards, Ranjan


1 Answer


I was able to resolve this. The problem was with the ResourceManager: from the datanodes, it was not possible to connect to it on 172.18.128.24:6005.
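For reference, the address the NodeManagers use to reach the ResourceManager comes from yarn-site.xml on each node, and a mismatch there (or a ResourceManager bound to the wrong interface) is a common cause of this kind of retry loop. A sketch of the relevant properties, assuming the hostnames and the non-default port 6005 from the log above rather than the asker's actual configuration:

```xml
<!-- yarn-site.xml: must be consistent on the ResourceManager and all NodeManagers -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>namenode1</value>
</property>
<property>
  <!-- client/ApplicationMaster RPC address; defaults to port 8032 -->
  <name>yarn.resourcemanager.address</name>
  <value>namenode1:6005</value>
</property>
```

After changing these, restart the ResourceManager and NodeManagers so the new address takes effect.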

answered 2016-08-08T12:26:36.850