
I'm migrating to AWS SSO for CLI access, and so far it works for everything except kubectl. While troubleshooting it, I followed several guides, which means I've ended up with some cargo-cult configuration, and I'm clearly missing something in my mental model.

aws sts get-caller-identity
{
    "UserId": "<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/<my username>"
}

kubectl get pods

An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts:::assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/ is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam:::role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87

Interestingly, it seems to be trying to assume the very role it is already using, but I can't figure out how to fix it.

~/.aws/config (subset - I have other profiles, but they aren't relevant here)

[default]
region = us-east-2
output = json

[profile default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json

~/.kube/config (clusters removed)

apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --role
      - arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
      command: aws
      env: null

aws-auth mapRoles snippet

- rolearn: arn:aws:iam::<redacted>:role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
  username: "devread:{{SessionName}}"
  groups:
    - view

What obvious thing am I missing? I've looked at other stackoverflow posts with similar problems, but none of them had the arn:aws:sts:::assumed-role -> arn:aws:iam:::role path.


1 Answer


.aws/config has a subtle error: [profile default] is meaningless, so those two blocks should be merged into a single [default] section. Only non-default profiles should include the word profile in the section name.

[default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json

[profile rw]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadWrite
region = us-east-2
sso_region = us-east-2
output = json
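With the two blocks merged, a quick sanity check is to log in via SSO and confirm each profile resolves to the expected role (the profile names here match the config above; your account details will differ):

```shell
# Refresh the SSO session, then check which role each profile maps to.
aws sso login --profile default

# The Arn in the output should end in .../AWSReservedSSO_DeveloperReadonly_.../<username>
aws sts get-caller-identity

# And the rw profile should resolve to the ReadWrite role.
aws sts get-caller-identity --profile rw
```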

I also changed .kube/config to get the token based on the profile rather than naming the role explicitly. This fixed the AssumeRole failure, since it now just uses the existing role.

apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --profile
      - default
      command: aws
      env: null

I can now run kubectl config use-context ro, or the other contexts I've defined (omitted for brevity).
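To debug this kind of setup without going through kubectl, you can run the same command the exec plugin runs (flags taken from the kubeconfig above); it should print an ExecCredential JSON document containing a bearer token, with no sts:AssumeRole call involved:

```shell
# Exactly what kubectl's exec plugin invokes for the "ro" user.
# Success here (a JSON blob with "kind": "ExecCredential" and a token)
# means kubectl authentication should work too.
aws --region us-east-2 eks get-token --cluster-name foo --profile default
```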

On a related note, I had some trouble with older terraform versions, since the s3 backend couldn't handle sso. aws-vault solved that problem for me.
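The aws-vault workaround is roughly this (a sketch; the profile name default is from the config above, and the terraform subcommands are just examples): aws-vault materializes short-lived credentials from the SSO-backed profile and exports them as ordinary environment variables, which the older s3 backend can consume.

```shell
# Run terraform inside a subprocess that has plain
# AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY/AWS_SESSION_TOKEN set,
# so the s3 backend never needs to understand SSO itself.
aws-vault exec default -- terraform init
aws-vault exec default -- terraform plan
```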

answered 2021-12-07T17:12:54.120