
I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers from the same load balancer, with one Kibana 3 container exposing on port 80.

I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:

kibana:
  image: rancher/load-balancer-service
  ports:
  - 5602:5602
  - 5603:5603
  - 5604:5604
  links:
  - kibana3:kibana3
  - kibana4-logging:kibana4-logging
  - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601

Everything works as expected, but I get sporadic 503s. When I go into the container and look at haproxy.cfg, I see:

frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
        bind *:5603
        mode http

        default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        timeout check 2000
        option httpchk GET /status HTTP/1.1
        server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
        server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
        server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601

The IPs listed are those of all three Kibana containers. The first server has a health check attached, but the others do not (kibana3 and kibana4.1 don't have a status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, yet all three are listed. I assume this is at least partly responsible for the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
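
For comparison, after my manual edit the 5603 backend contained just the single kibana4-logging target, something like this (hand-edited version, not Rancher-generated output):

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        timeout check 2000
        option httpchk GET /status HTTP/1.1
        # only the service actually mapped to port 5603 remains
        server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3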

Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?


2 Answers


I posted on the Rancher forums, as Rancher Labs suggested on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358

Someone from Rancher posted a link to a GitHub issue that is similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475

In short, the load balancer will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed with my config does indeed work, even if it's a little inelegant.

labels:
  # Create a rule that forces all traffic to redis on port 3000 to have a hostname of bogus.com
  # This prevents any traffic on port 3000 from being directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api on port 6379 to have a hostname of bogus.com
  # This prevents any traffic on port 6379 from being directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379

(^^ copied from the Rancher GitHub issue, not my workaround)
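
Adapted to the kibana config from the question, the same trick would look roughly like the sketch below (not my exact labels; the comma-separated rule syntax and bogus.com placeholder hostname should be verified against the Rancher load balancer docs for your version):

labels:
  # Normal port-mapping rules, plus a dummy-hostname rule for each of the
  # other ports, so cross-port round-robin can never select this service
  io.rancher.loadbalancer.target.kibana3: 5602=80,bogus.com:5603,bogus.com:5604
  io.rancher.loadbalancer.target.kibana4-logging: 5603=5601,bogus.com:5602,bogus.com:5604
  io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601,bogus.com:5602,bogus.com:5603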

I'll look into how easy it would be to route by port and raise a PR/GitHub issue, as I think this is a valid use case for the LB.

answered 2016-04-09T09:44:26.753

Make sure you are using the port originally exposed on the docker container. For some reason, HAProxy won't work if you bind it to a different port. If you're using a container from DockerHub that uses a port already taken on your system, you may have to rebuild that docker container to use a different port, routing it through a proxy like nginx.
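
For example, a minimal nginx config that re-exposes a service on an alternative port (a generic sketch; the port numbers here are hypothetical, not from the question):

server {
    listen 8080;                       # the alternative port you want to expose
    location / {
        proxy_pass http://127.0.0.1:5601;  # the container's original application port
        proxy_set_header Host $host;
    }
}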

answered 2016-10-14T06:24:10.957