
I ran the following commands on Cygwin, following https://cloud.google.com/hadoop/setting-up-a-hadoop-cluster:

gsutil.cmd mb -p [projectname] gs://[bucketname]      
./bdutil -p [projectname] -n 2 -b [bucketname] -e hadoop2_env.sh      
generate_config configuration.sh   
./bdutil -e configuration.sh deploy  

After deploying, I got the following error:

Node 'hadoop-w-0' did not become ssh-able after 10 attempts  
Node 'hadoop-w-1' did not become ssh-able after 10 attempts  
Node 'hadoop-m' did not become ssh-able after 10 attempts  

Command failed: wait ${SUBPROC} on line 308.

Exit code of failed command: 1

Detailed debug info is available in the file: /tmp/bdutil-20150120-103601-mDh/debuginfo.txt

The logs in debuginfo.txt are as follows:

******************* Exit codes and VM logs *******************
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-0 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a    
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-1 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a   
Tue, Jan 20, 2015 10:18:09 AM: Exited 1 : gcloud.cmd --project=[projectname] --quiet --verbosity=info compute ssh hadoop-w-2 --command=exit 0 --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-a  

Can you help me resolve this issue? Thanks a lot.


1 Answer


You may need to look at the console output for your Hadoop instances: in the Developers Console, go to Compute Engine > VM Instances > INSTANCE_NAME and scroll down to View Console Output.

Additionally, you can run:

$ gcloud compute instances get-serial-port-output INSTANCE_NAME

  • This should give you a better picture of what is going on behind the scenes when the instances boot (check whether the SSH daemon has started and on which port, etc.).
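
As a rough sketch, you can also reproduce the exact probe that bdutil runs and check the network path yourself. The project name and zone below are placeholders taken from the question, so substitute your own values; this assumes the standard `gcloud compute` commands are available:

```shell
# Manually repeat the SSH probe bdutil performs against one worker.
# [projectname] and the zone are assumptions copied from the question.
gcloud compute ssh hadoop-w-0 \
    --project=[projectname] \
    --zone=us-central1-a \
    --command="exit 0"

# If that hangs or fails, confirm a firewall rule on the network
# permits inbound TCP port 22 (SSH) from your address:
gcloud compute firewall-rules list
```

If the manual `ssh` probe succeeds only after the instances have been up for a while, the nodes were likely still booting when bdutil gave up, and re-running the deploy may suffice.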
Answered 2015-01-27T14:52:14.910