I want to set up a network policy in a Kubernetes cluster that gives me fine-grained, generic access control between modules.

I have prepared a Kubernetes setup consisting of two manifests:

  1. A 2-container nginx pod, with 2 ports listening and returning some generic content, one on port 80 and the other on port 81
  2. Three console pods carrying 2 on/off labels: "allow80" and "allow81". If "allow80" is present, a console pod can reach the dual nginx through the service entry point on port 80. The same applies to "allow81" and port 81.

The three console pods are:

  1. console-full - access to both port 80 and port 81, [allow80, allow81]
  2. console-partial - port 80 - on, port 81 - off, [allow80]
  3. console-no-access - both 80 and 81 restricted, []

The test setup. It will create all the necessary components in the "net-policy-test" namespace.

To create:

kubectl apply -f net_policy_test.yaml

To clean up:

kubectl delete -f net_policy_test.yaml
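
To verify that everything came up after applying, an optional quick look at the namespace:

kubectl get all -n net-policy-test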

apiVersion: v1
kind: Namespace
metadata:
  name: net-policy-test
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx1
  namespace: net-policy-test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>nginx, instance1</title>
    </head>
    <body>
      <h1>nginx, instance 1, port 80</h1>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx2
  namespace: net-policy-test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>nginx, instance2</title>
    </head>
    <body>
      <h1>nginx, instance 2, port 81</h1>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf1
  namespace: net-policy-test
data:
  default.conf: |
    server {
        listen       80;
        server_name  localhost;


        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf2
  namespace: net-policy-test
data:
  default.conf: |
    server {
        listen       81;
        server_name  localhost;


        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dual-nginx
  namespace: net-policy-test
  labels:
    app: dual-nginx
    environment: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dual-nginx
  template:
    metadata:
      labels:
        app: dual-nginx
        name: dual-nginx
    spec:
      containers:
      - image: nginx
        name: nginx1
        ports:
        - name: http1
          containerPort: 80
        volumeMounts:
          - name: html1
            mountPath: /usr/share/nginx/html
          - name: config1
            mountPath: /etc/nginx/conf.d
      - image: nginx
        name: nginx2
        ports:
        - name: http2
          containerPort: 81
        volumeMounts:
          - name: html2
            mountPath: /usr/share/nginx/html
          - name: config2
            mountPath: /etc/nginx/conf.d
      volumes:
        - name: html1
          configMap:
            name: nginx1
        - name: html2
          configMap:
            name: nginx2
        - name: config1
          configMap:
            name: nginx-conf1
        - name: config2
          configMap:
            name: nginx-conf2

---
apiVersion: v1
kind: Service
metadata:
  name: dual-nginx
  namespace: net-policy-test
spec:
  selector:
    app: dual-nginx
  ports:
  - name: web1
    port: 80
    targetPort: http1
  - name: web2
    port: 81
    targetPort: http2
---
# this console deployment will have full access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-full
  namespace: net-policy-test
  labels:
    app: console-full
    environment: test
    nginx-access: full
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-full
  template:
    metadata:
      labels:
        app: console-full
        name: console-full
        allow80: "true"
        allow81: "true"
    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]

---
# this console deployment will have partial access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-partial
  namespace: net-policy-test
  labels:
    app: console-partial
    environment: test
    nginx-access: partial
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-partial
  template:
    metadata:
      labels:
        app: console-partial
        name: console-partial
        allow80: "true"

    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]
---
# this console deployment will have no access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-no-access
  namespace: net-policy-test
  labels:
    app: console-no-access
    environment: test
    nginx-access: none
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-no-access
  template:
    metadata:
      labels:
        app: console-no-access
        name: console-no-access
    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]

The policies. Again, to apply:

kubectl apply -f policies.yaml

To clean up:

kubectl delete -f policies.yaml


kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx-restrict80
  namespace: net-policy-test
spec:
  podSelector:
    matchLabels:
      app: "dual-nginx"
  policyTypes:
    - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            allow80: "true"
    ports:
      - protocol: TCP
        port: 80
  - from:
      - podSelector:
          matchLabels:
            allow81: "true"
    ports:
      - protocol: TCP
        port: 81

If I leave only one "from" condition, for a single port, it works as expected: I do or do not have access to that port depending on whether the corresponding label, allow80 or allow81, is present.

With both conditions present, the partial pod can access both port 80 and port 81:

  1. Switch to the right namespace:
kubectl config set-context --current --namespace=net-policy-test
  2. Check the labels:
kubectl get pods -l allow80
NAME                               READY   STATUS    RESTARTS   AGE
console-full-78d5499959-p5kbb      1/1     Running   1          4h14m
console-partial-6679745d79-kbs5w   1/1     Running   1          4h14m

kubectl get pods -l allow81
NAME                            READY   STATUS    RESTARTS   AGE
console-full-78d5499959-p5kbb   1/1     Running   1          4h14m
  3. Check access from the pod "console-partial-...", which should reach port 80 but not port 81:
kubectl exec -ti console-partial-6679745d79-kbs5w curl http://dual-nginx:80
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance1</title>
</head>
<body>
  <h1>nginx, instance 1, port 80</h1>
</body>
</html>

kubectl exec -ti console-partial-6679745d79-kbs5w curl http://dual-nginx:81
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance2</title>
</head>
<body>
  <h1>nginx, instance 2, port 81</h1>
</body>
</html>

The partial-access pod can reach both port 80 and port 81.

The pod without labels (console-no-access-...) cannot reach either port, as expected.

It is similar to what is described in this presentation: YouTube, "Securing Cluster Networking with Network Policies" - Ahmet Balkan, Google. So, having at least one of the flags, "allow80" or "allow81", gives access to everything. How come?
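
For what it's worth, my reading of the NetworkPolicy spec is that each entry in the ingress list combines its "from" peers and its "ports" with AND, while separate entries are ORed, so the policy above should already mean "port 80 only with allow80, port 81 only with allow81". A variant I considered but have not tried yet splits it into two single-port policies (the names below are made up; the selectors are the same as above), which should be equivalent since policies are additive:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  # hypothetical name; selectors identical to the combined policy above
  name: nginx-restrict80-only
  namespace: net-policy-test
spec:
  podSelector:
    matchLabels:
      app: "dual-nginx"
  policyTypes:
    - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            allow80: "true"
    ports:
      - protocol: TCP
        port: 80
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  # hypothetical name; selectors identical to the combined policy above
  name: nginx-restrict81-only
  namespace: net-policy-test
spec:
  podSelector:
    matchLabels:
      app: "dual-nginx"
  policyTypes:
    - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            allow81: "true"
    ports:
      - protocol: TCP
        port: 81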

Now, the questions:

  1. Is this the expected behaviour?
  2. How do I build this kind of simple flag-based access control, with the goal of automating it or handing it over to administrators, who could easily mass-produce such policies?

2 Answers


TL;DR: On my cluster it works exactly the way you want it to.

Now a bit of explanation and an example.

I created a cluster on GKE and enabled network policy with the following command:

gcloud beta container clusters create test1 --enable-network-policy --zone us-central1-a

Then I copied your exact deployment yaml and network policy yaml, without any changes, and deployed them.

$ kubectl apply -f policy-test.yaml
namespace/net-policy-test created
configmap/nginx1 created
configmap/nginx2 created
configmap/nginx-conf1 created
configmap/nginx-conf2 created
deployment.apps/dual-nginx created
service/dual-nginx created
deployment.apps/console-full created
deployment.apps/console-partial created
deployment.apps/console-no-access created

$ kubectl apply -f policy.yaml
networkpolicy.networking.k8s.io/nginx-restrict80 configured

The network policy you wrote works exactly as you intended.

console-partial can reach nginx only on port 80, and console-no-access cannot reach nginx at all.

I think this is because GKE uses Calico as its CNI:

Google Container Engine (GKE) also provides beta support for network policy, using the Calico networking plugin.
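
As a side note, the provider the cluster reports for network policy can be read back with gcloud (cluster name and zone assumed from the create command above; the field path is taken from the GKE cluster resource):

gcloud container clusters describe test1 --zone us-central1-a \
  --format="value(networkPolicy.provider)"   # should print CALICO when the Calico provider is enabled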

You are running with --network-policy azure, that is, the Azure CNI network policy. I cannot test this on AKS, but you could try changing it to calico. It is explained here, in "Create an AKS cluster and enable network policy":

  • Create an AKS cluster in the defined virtual network and enable network policy.
  • Use the azure network policy option. To use Calico as the network policy option instead, use the --network-policy calico parameter.

As for automating the flags, maybe this will work for you.

You can check the labels here:

$ kubectl describe pods console-partial | grep -A3 Labels
Labels:             allow80=true
                    app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

Then I started editing the labels, using kubectl label.

I removed the label allow80="true":

$ kubectl label pods console-partial-6c6dc7d94f-v8k5q allow80-
pod/console-partial-6c6dc7d94f-v8k5q labeled
$ kubectl describe pods console-partial | grep -A3 Labels
Labels:             app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

And added the label allow81=true:

kubectl label pods console-partial-6c6dc7d94f-v8k5q "allow81=true"
pod/console-partial-6c6dc7d94f-v8k5q labeled

$ kubectl describe pods console-partial | grep -3 Labels
Labels:             allow81=true
                    app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

You can see from the test that the policy works as you want:

$ kubectl exec -it console-partial-6c6dc7d94f-v8k5q curl http://dual-nginx:81
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance2</title>
</head>
<body>
  <h1>nginx, instance 2, port 81</h1>
</body>
</html>
$ kubectl exec -it console-partial-6c6dc7d94f-v8k5q curl http://dual-nginx:80
^Ccommand terminated with exit code 130
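
A small aside: the blocked call just hangs until it is interrupted, because the policy silently drops the packets rather than rejecting them. curl's --max-time flag bounds the wait, for example:

kubectl exec -it console-partial-6c6dc7d94f-v8k5q -- curl --max-time 5 http://dual-nginx:80   # fails fast instead of hanging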

I hope this helps.

answered 2019-09-20T11:01:45.243

I tried re-creating the cluster in Azure, with the Calico network policy and the Azure CNI, and it started working correctly for Linux -> Linux communication:

network_plugin="azure" && \
network_policy="calico"
az aks create ... \
  --network-plugin ${network_plugin} \
  --network-policy ${network_policy}

Now, about the case where a Windows container sits on the client side: with the policy enabled, neither port can be reached from the Windows shell when testing against the Linux containers. But that is the start of another story, I guess.

answered 2019-09-23T22:53:36.770