We're starting to use Docker/CoreOS as our infrastructure. We're deploying on EC2. The CoreOS cluster is managed by an auto-scaling group, so new hosts come and go. Plus, there are a lot of them. I'm trying to find a way to distribute a secret (a private RSA key or a shared secret for a symmetric cipher) to all hosts so I can use that to securely distribute things like database credentials, AWS Access Keys for certain services, etc.

I'd like to obey "the principle of least privilege". Specifically, if I have 2 apps in 2 different containers running on the same host, each should only have access to the secrets that app needs. For example, app A might have access to the MySQL credentials, and app B might have access to AWS Access Keys for Dynamo, but A can't access Dynamo and B can't access MySQL.

If I had a secret on each server then this wouldn't be hard. I could use a tool like Crypt to read encrypted configuration data out of etcd and then use volume maps to selectively make credentials available to individual containers.
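
To make that concrete, here's roughly what I have in mind once each host has its own keyring; the crypt flags, etcd endpoint, and paths are from memory, so treat this as illustrative rather than exact:

# fetch each app's encrypted config out of etcd and drop it into a per-app directory
crypt get -endpoint="http://127.0.0.1:4001" -secret-keyring /etc/keys/secring.gpg /secrets/appA > /opt/secrets/appA/env
crypt get -endpoint="http://127.0.0.1:4001" -secret-keyring /etc/keys/secring.gpg /secrets/appB > /opt/secrets/appB/env
# each container only sees its own directory, mounted read-only
docker run -d -v /opt/secrets/appA:/secrets:ro app-a-image
docker run -d -v /opt/secrets/appB:/secrets:ro app-b-image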

The question is: how the heck do I get the keys onto the hosts safely?

Here's some things I've considered and why they don't work:

  • Use AWS roles to grant each host access to an encrypted S3 bucket. The hosts can then read a shared secret from there. But this doesn't work because S3 has a REST API, Docker doesn't limit the network access containers have, and the role applies to the whole host. Thus, any container on that host can read the key out of S3, then read all the values out of etcd (which also has an unrestricted REST API) and decrypt them.
  • In my CloudFormation template I can have a parameter for a secret key. This then gets embedded in the UserData and distributed to all hosts. Unfortunately, any container can retrieve the key via the metadata service REST API (see the sketch after this list).
  • Use fleet to submit a global unit to all the hosts and have that unit copy the keys. However, containers can access fleet via its REST API and do a "fleetctl cat" to see the key.
  • Put a secret key in a container in a private repo. That can then be distributed to all hosts as a global unit and an app in that container can copy the key out to a volume mount. However, I assume that given the credentials to the private repo somebody could download the container with standard network tools and extract the key (albeit with some effort). The problem then becomes how to distribute the .dockercfg with the credentials for the private repo securely which, I think, gets us right back where we started.
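
To make the exposure concrete, here's the sort of thing any container on any host can already do, because the metadata service is just another HTTP endpoint (the role name at the end is made up):

# from inside an arbitrary container on the host
curl -s http://169.254.169.254/latest/user-data
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-coreos-role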

Basically, it seems like the core problem is that everything has a REST API and I don't know of a way to prevent containers from accessing certain network resources.

Ideas?

1 Answer

If you're willing to keep a secret in the AMI, you can use the Crypt solution you mentioned. I implemented something similar as follows:

  1. Generate a public/private key pair (see the openssl sketch after this list)
  2. Bake the private key into the AMI used for the auto-scaling group
  3. Encrypt a bootstrap script, including the secrets, with the public key
  4. Base64-encode the encrypted bootstrap script
  5. Embed the encoded text in a wrapper script that decrypts it with the private key, and use that wrapper as the user data for the AWS launch configuration.
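
For step 1, note that openssl smime encrypts against a certificate rather than a bare public key, so I generate a throwaway self-signed cert along with the private key; something like this (paths, lifetime, and subject are just examples):

# one-time: create the private key (baked into the AMI) and the cert used for encryption
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 -subj "/CN=bootstrap" -keyout /tmp/secret.key -out /tmp/public.key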

For example, the bootstrap script might look like this:

# each app only gets the secret it needs, passed in via its environment
db="mysql://username:password@somehost:3306/somedb"
apikey="some_api_secret_key"
docker run --name first_container -e db="$db" -d my-image my-command
docker run --name second_container -e apikey="$apikey" -d my-other-image my-other-command
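
Once the host has run that, a quick way to sanity-check that each container only received its own secret (using the container names from the example above):

docker inspect -f '{{.Config.Env}}' first_container
docker inspect -f '{{.Config.Env}}' second_container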

To encrypt, use openssl with smime to get around the size limitations of rsautl. Assuming the bootstrap script is in /tmp/bootstrap.txt, it can be encrypted and encoded like this:

$ openssl smime -encrypt -aes256 -binary -outform DER -in /tmp/bootstrap.txt /tmp/public.key | openssl base64 -e > /tmp/encrypted.b64
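
Before baking anything into an AMI, it's worth checking that the blob round-trips cleanly; decoding and decrypting it locally with the private key should print the original script:

$ openssl base64 -d < /tmp/encrypted.b64 | openssl smime -decrypt -inform DER -binary -inkey /tmp/secret.key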

The wrapper script that becomes the user data might look like this:

#!/bin/bash -x
# log everything this script does for later debugging
exec >> /tmp/userdata.log 2>&1

# write the base64-encoded, encrypted bootstrap script to disk
cat << END > /tmp/bootstrap.dat
<contents of /tmp/encrypted.b64>
END
# decode and decrypt it with the private key baked into the AMI, then run it
decrypted_blob=$(cat /tmp/bootstrap.dat | openssl base64 -d | openssl smime -decrypt -inform DER -binary -inkey /path/to/secret.key)
eval "${decrypted_blob}"
rm /tmp/bootstrap.dat

Now, if a container accesses the EC2 metadata service it will see the user-data script, but all it contains is the encrypted blob. The private key lives on the host, and the containers have no access to it (in theory, anyway).

Also note that user data is limited to 16 KB, so the script plus its encrypted payload must come in under that.
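
For completeness, this is roughly how the wrapper ends up as the user data of a launch configuration; the names, AMI id, and instance type below are placeholders, and recent versions of the AWS CLI should handle the base64 encoding of the file for you:

# check the size first, then register the launch configuration
wc -c < /tmp/wrapper.sh
aws autoscaling create-launch-configuration --launch-configuration-name coreos-with-secrets --image-id ami-12345678 --instance-type m3.medium --user-data file:///tmp/wrapper.sh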

answered 2015-02-25T17:42:40.370