
I'm trying to get to grips with linkerd in Kubernetes. I'm running the linkerd daemonset example from their website on my local minikube.

Everything is deployed in the production namespace. When I try

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

nothing happens. Where did I go wrong in my setup?
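(As a quick sanity check, it helps to see what that jsonpath actually expands to before suspecting linkerd itself. A minimal debugging sketch, using the same namespace and service names as above:)

# print the proxy address the curl above would use; on minikube the
# loadBalancer ingress list is typically empty, so this may print just ":4140"
echo "$(kubectl --namespace=production get svc l5d \
  -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4140"

# inspect the service itself; EXTERNAL-IP will likely show <pending>
kubectl --namespace=production get svc l5d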

My Linkerd YAML:

# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

Here's the deployment for my apiservice:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"

And here's the service:

kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080

In my Node application I'm using global-tunnel:

const globalTunnel = require('global-tunnel');

const server = app.listen(port);
server.on('listening', function(){

  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });

  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});
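(To sanity-check the proxy path the deployment sets up via http_proxy=$(NODE_NAME):4140, one can curl through it from inside the apiserver pod. A rough sketch, assuming the image ships with curl and using the labels and namespace from the manifests above:)

# find the apiserver pod, then curl through the node-local linkerd proxy from inside it
POD=$(kubectl --namespace=production get pod -l app=apiserver -o jsonpath='{.items[0].metadata.name}')
# single quotes so $NODE_NAME expands inside the container, where the deployment sets it
kubectl --namespace=production exec "$POD" -c apiserver -- \
  sh -c 'curl -s -x "http://$NODE_NAME:4140" http://apiserver/readinezs'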

2 Answers


Where are you running your curl command?

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d -- I'd expect you won't see an external IP.

I think you'll need to modify the service definition, or create an additional explicit external service that exposes a ClusterIP, in order to receive ingress traffic.
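(On minikube in particular, a LoadBalancer service never receives an ingress IP, so the jsonpath in the question expands to an empty string. Two possible ways to reach the l5d service anyway, sketched with the names from the question:)

# minikube routes to services through node ports; this prints one URL per service port
minikube service l5d --namespace=production --url

# or assemble the proxy address by hand from the node IP and the assigned node port
NODE_PORT=$(kubectl --namespace=production get svc l5d -o jsonpath='{.spec.ports[0].nodePort}')
http_proxy=$(minikube ip):$NODE_PORT curl -s http://apiserver/readinezs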

answered 2017-04-09T20:22:16.213

Deploying two identical Node applications and having them send requests to each other works. Strangely, though, the requests don't show up in the linkerd dashboard.
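(If requests really were flowing through linkerd they should appear in its admin UI; one way to check is to port-forward to a linkerd pod's admin port, which the config above sets to 9990. A minimal sketch, assuming the l5d daemonset from the question:)

# grab one of the l5d pods and forward its admin port locally
POD=$(kubectl --namespace=production get pod -l app=l5d -o jsonpath='{.items[0].metadata.name}')
kubectl --namespace=production port-forward "$POD" 9990:9990
# then open http://localhost:9990 in a browser to see the linkerd dashboard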

answered 2017-04-09T15:21:50.023