
I'm trying to deploy EKS following the official guide: https://learn.hashicorp.com/terraform/kubernetes/provision-aks-cluster

The deployment succeeds, and I added the helm/redis chart to it. Now when I run terraform apply, it gets stuck while refreshing state:

module.eks.aws_iam_instance_profile.workers[0]: Refreshing state... [id=cluster1234]
module.vpc.aws_route.private_nat_gateway[0]: Refreshing state... [id=r-rtb-1234]
module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]: Refreshing state... [id=sgrule-1234]
module.eks.aws_security_group_rule.workers_ingress_cluster[0]: Refreshing state... [id=sgrule-1234]
module.eks.aws_security_group_rule.workers_egress_internet[0]: Refreshing state... [id=sgrule-1234]
module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]: Refreshing state... [id=sgrule-1234]
module.eks.aws_security_group_rule.workers_ingress_self[0]: Refreshing state... [id=sgrule-1234]
module.eks.aws_launch_configuration.workers[0]: Refreshing state... [id=cluster-worker-group-1234]
module.eks.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
module.eks.data.null_data_source.node_groups[0]: Refreshing state...
module.eks.random_pet.workers[0]: Refreshing state... [id=diverse-vervet]
module.eks.aws_autoscaling_group.workers[0]: Refreshing state... [id=cluster-worker-group-1234]

I've tried leaving it alone for a few hours, waiting a few more, and even deleting everything and redeploying, but it still hangs, so it seems like a bug or something?
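For reference, the redis chart is wired in through the helm provider and a helm_release, roughly like the sketch below; the data source names, repository, release name, namespace and provider configuration are illustrative, not the exact config from my repo:

# Hypothetical wiring of the helm provider against the EKS cluster
# (data source names and attributes are assumptions)
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

# Hypothetical redis release; chart repository and namespace are assumptions
resource "helm_release" "redis" {
  name       = "redis"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"
  namespace  = "infra"
}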

Event log during terraform apply:

$> kubectl -n infra get events --sort-by='{.lastTimestamp}'
LAST SEEN   TYPE      REASON      OBJECT                       MESSAGE
58m         Normal    Pulled      pod/redis-master-0   Container image "docker.io/oliver006/redis_exporter:v1.0.3" already present on machine
28m         Warning   Unhealthy   pod/redis-slave-0    Readiness probe failed: 
Could not connect to Redis at redis-master-0.redis-headless.infra.svc.cluster.local:6379: Name or service not known
13m   Warning   Unhealthy   pod/redis-slave-0   Readiness probe failed: 
Could not connect to Redis at redis-master-0.redis-headless.infra.svc.cluster.local:6379: Name or service not known
3m31s     Warning   BackOff   pod/redis-slave-0      Back-off restarting failed container
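The failing replica and the headless service it is trying to resolve can be inspected with standard kubectl commands (the namespace, pod and service names are taken from the events above):

# Why does the readiness probe keep failing?
kubectl -n infra describe pod redis-slave-0
kubectl -n infra logs redis-slave-0 --previous

# Does the headless service actually have an endpoint for the master?
kubectl -n infra get endpoints redis-headless
kubectl -n infra get pod redis-master-0 -o wide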

After setting:

export TF_LOG=TRACE

and running terraform apply again, I found this:

2020/05/18 01:10:43 [TRACE] dag/walk: vertex "provider.helm (close)" is waiting for "helm_release.prom-operator"
2020/05/18 01:10:46 [TRACE] dag/walk: vertex "root" is waiting for "provider.helm (close)"
2020/05/18 01:10:48 [TRACE] dag/walk: vertex "provider.helm (close)" is waiting for "helm_release.prom-operator"
2020/05/18 01:10:51 [TRACE] dag/walk: vertex "root" is waiting for "provider.helm (close)"
2020/05/18 01:10:53 [TRACE] dag/walk: vertex "provider.helm (close)" is waiting for "helm_release.prom-operator"
2020/05/18 01:10:56 [TRACE] dag/walk: vertex "root" is waiting for "provider.helm (close)"
2020/05/18 01:10:58 [TRACE] dag/walk: vertex "provider.helm (close)" is waiting for "helm_release.prom-operator"
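The only thing the TRACE output is waiting on is helm_release.prom-operator, so the state of that release can be checked directly with Helm (the release name comes from the log above; the namespace and Helm 3 syntax are assumptions):

# What state does Helm think the release is in?
helm status prom-operator -n infra
helm history prom-operator -n infra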

I can't figure out what's wrong with the Prometheus release now, or how all of this is related.

Why does the apply get stuck at module.eks.aws_autoscaling_group.workers[0]: Refreshing state?


1 Answer


I'm still trying to get the cluster deployed properly with tf, but so far the problem above has gone away. After running terraform apply with export TF_LOG=TRACE, I found which chart was stuck and helm delete'd it, which solved the problem. Good luck debugging!
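Roughly along these lines (the release name is the one from the TRACE log; the namespace and Helm 3 flags are assumptions):

# Find the release that is stuck in a pending/failed state
helm ls --all --all-namespaces

# Remove it, then re-run terraform apply
helm delete prom-operator -n infra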

answered 2020-05-18T09:36:38.623