
I am trying to install Hypertable on Hadoop following the official documentation. First I deployed CDH4 in pseudo-distributed mode on a CentOS 6.5 32-bit node.

Then I installed Hypertable on Hadoop following the official Hypertable documentation.

When I run

cap start -f Capfile.cluster

I get a "DFS Broker did not come up" error:

 * executing `start'
 ** transaction: start
  * executing `start_servers'
  * executing `start_hyperspace'
  * executing "/opt/hypertable/current/bin/start-hyperspace.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg"
    servers: ["master"]
    [master] executing command
 ** [out :: master] Started Hyperspace
    command finished in 6543ms
  * executing `start_master'
  * executing "/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh"
    servers: ["master"]
    [master] executing command
 ** [out :: master] DFS broker: available file descriptors: 65536
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] Waiting for DFS Broker (hadoop) (localhost:38030) to come up...
 ** [out :: master] ERROR: DFS Broker (hadoop) did not come up
    command finished in 129114ms
failed: "sh -c '/opt/hypertable/current/bin/start-dfsbroker.sh hadoop --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-master.sh --config=/opt/hypertable/0.9.7.16/conf/dev-hypertable.cfg &&\\\n /opt/hypertable/current/bin/start-monitoring.sh'" on master

I checked DfsBroker.hadoop.log under /opt/hypertable/0.9.7.16 and found this:

/opt/hypertable/current/bin/jrun: line 113: exec: java: not found

But my JAVA_HOME is set, and java runs fine when I test it:

java -version

I also tried running jrun by itself, and it did not complain "exec: java: not found".
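One possible explanation (an assumption on my part, not confirmed from the logs): Capistrano runs its commands through a non-interactive sh -c over SSH, as the failed: line above shows, so a PATH that is only exported in an interactive profile may not be visible to jrun when it is started this way. A minimal check, assuming the node is named master as in the Capfile:

# check what PATH a non-interactive remote shell actually sees
# (single quotes delay expansion until the remote side)
ssh master 'echo $PATH; command -v java'

# if java is not found there, one possible fix is to export it in a file
# that non-interactive shells also read, e.g. near the top of ~/.bashrc:
export JAVA_HOME=/usr/java/default   # hypothetical JDK path, adjust to your install
export PATH="$JAVA_HOME/bin:$PATH"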

I found similar questions after googling, but I have already tried every solution I could find, for example:

/opt/hypertable/current/bin/set-hadoop-distro.sh cdh4

which just outputs

Hypertable successfully configured for Hadoop cdh4

So I would appreciate it if anyone could give me a hint about this problem.


1 Answer


Before starting the cluster, you have to run:

cap fhsize -f Capfile.cluster

Then you can check that everything has been set up correctly:

ls -laF /opt/hypertable/current/lib/java/*.jar

and checking the Java version through jrun should also work:

/opt/hypertable/current/bin/jrun -version

See the Quick Start guide for more information.
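Putting these steps together, one possible sequence for getting past the "did not come up" error could look like this (a sketch using the same paths and Capfile as in the question; the exact order is an assumption, not taken from the official docs):

# set up the FHS-compliant directory layout first
cap fhsize -f Capfile.cluster
# the jars under lib/java should now be present
ls -laF /opt/hypertable/current/lib/java/*.jar
# jrun should report the Java version without "exec: java: not found"
/opt/hypertable/current/bin/jrun -version
# then try starting the cluster again
cap start -f Capfile.cluster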

Answered 2014-10-10T09:41:23.473