I ran the following commands from Cygwin, following https://cloud.google.com/hadoop/setting-up-a-hadoop-cluster:
gsutil.cmd mb -p [projectname] gs://[bucketname]
./bdutil -p [projectname] -n 2 -b [bucketname] -e hadoop2_env.sh generate_config configuration.sh
./bdutil -e configuration.sh deploy
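In case it helps with diagnosis, here is a sketch of how I can check whether the three VMs were actually created before the SSH step ([projectname] is the same placeholder as above):
gcloud.cmd compute instances list --project=[projectname]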
After the deploy step, I got the following error:
.
.
.
Node 'hadoop-w-0' did not become ssh-able after 10 attempts
Node 'hadoop-w-1' did not become ssh-able after 10 attempts
Node 'hadoop-m' did not become ssh-able after 10 attempts
Command failed: wait ${SUBPROC} on line 308.
Exit code of failed command: 1
Detailed debug info available in file: /tmp/bdutil-20150120-103601-mDh/debuginfo.txt
The log in debuginfo.txt is as follows:
******************* Exit codes and VM logs *******************
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-0 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-1 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-2 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a
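For reference, this is the same SSH probe from debuginfo.txt that I can re-run by hand, with verbosity raised to debug so the underlying SSH error becomes visible (hadoop-m chosen arbitrarily; [projectname] is a placeholder):
gcloud.cmd --project=[projectname] --verbosity=debug compute ssh hadoop-m --command="exit 0" --zone=us-central1-a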
Could you help me figure out what is going wrong? Thanks a lot.