I have deployed ECK on my Kubernetes cluster (all Vagrant VMs). The cluster has the following configuration:
NAME       STATUS   ROLES                  AGE   VERSION
kmaster1   Ready    control-plane,master   27d   v1.21.1
kworker1   Ready    <none>                 27d   v1.21.1
kworker2   Ready    <none>                 27d   v1.21.1
I have also set up a load balancer with HAProxy. The load balancer config is as follows (I created my own private cert):
frontend http_front
    bind *:80
    stats uri /haproxy?stats
    default_backend http_back

frontend https_front
    bind *:443 ssl crt /etc/ssl/private/mydomain.pem
    stats uri /haproxy?stats
    default_backend https_back

backend http_back
    balance roundrobin
    server kworker1 172.16.16.201:31953
    server kworker2 172.16.16.202:31953

backend https_back
    balance roundrobin
    server kworker1 172.16.16.201:31503 check-ssl ssl verify none
    server kworker2 172.16.16.202:31503 check-ssl ssl verify none
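For reference, this is roughly how I test the HTTPS NodePort directly from the load balancer VM (a sketch; the --resolve mapping is only there so curl presents the expected hostname for SNI):

# hit the ingress controller's HTTPS NodePort on kworker1 directly,
# ignoring the self-signed certificate
curl -vk --resolve elastic.kubekluster.com:31503:172.16.16.201 \
    https://elastic.kubekluster.com:31503/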
I have also deployed an NGINX ingress controller; 31953 is the controller's HTTP NodePort and 31503 is its HTTPS NodePort:
nginx-ingress nginx-ingress-controller-service NodePort 10.103.189.197 <none> 80:31953/TCP,443:31503/TCP 8d app=nginx-ingress
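(That line comes from something like the following; the namespace matches my setup.)

# list services in all namespaces, including the selector column
kubectl get svc -A -o wide | grep nginx-ingress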
I am trying to make the Kibana dashboard available outside the cluster over HTTPS. It works fine and I can access it within the cluster; however, I am unable to access it via the load balancer.
Kibana Pod:
default quickstart-kb-f74c666b9-nnn27 1/1 Running 4 27d 192.168.41.145 kworker1 <none> <none>
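For completeness, this is roughly how I verify access from within the cluster (a sketch based on the ECK quickstart; Kibana serves a self-signed certificate by default, hence -k):

# forward the Kibana service locally and hit it over HTTPS
kubectl port-forward service/quickstart-kb-http 5601
curl -vk https://localhost:5601/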
I have mapped the load balancer IP to the hostname in /etc/hosts:
172.16.16.100 elastic.kubekluster.com
Any request to https://elastic.kubekluster.com results in the following error (logs from the NGINX ingress controller pod):
10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: "GET / HTTP/1.1", upstream: "http://192.168.41.145:5601/", host: "elastic.kubekluster.com"
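(The controller logs above were captured roughly like this; <controller-pod> is a placeholder for whatever kubectl get pods -n nginx-ingress shows.)

# follow the ingress controller logs
kubectl logs -n nginx-ingress <controller-pod> -f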
The HAProxy logs are as follows:
Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
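(The HAProxy lines are from the load balancer VM, assuming HAProxy runs as a systemd service and logs via the default rsyslog setup.)

# on the load balancer VM
journalctl -u haproxy -f
# or, with the default rsyslog config:
tail -f /var/log/haproxy.log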
The Ingress is as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubekluster-elastic-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600s"
    nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
  tls:
    - hosts:
        - elastic.kubekluster.com
  rules:
    - host: elastic.kubekluster.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: quickstart-kb-http
                port:
                  number: 5601
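To rule out a missing backend, I also check that the Service and port referenced in the Ingress exist (names taken from the ECK quickstart and the manifest above):

# confirm the Kibana service exposes port 5601
kubectl get svc quickstart-kb-http -o wide
# confirm the Ingress resolves its backend and reports no obvious errors
kubectl describe ingress kubekluster-elastic-ingress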
I think the request is not reaching the Kibana pod, because I don't see any logs in the pod. I also don't understand why the ingress controller is forwarding the request to the upstream as HTTP (the log above shows upstream: "http://192.168.41.145:5601/") instead of HTTPS, despite the backend-protocol annotation. Could you please point out any issues with my configuration?
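One more check I can run (a sketch, assuming curl exists in the controller image and that ECK's default self-signed HTTPS is enabled on Kibana) is to call the Kibana service directly from inside the ingress controller pod, to confirm whether the backend really expects HTTPS:

# exec into the controller pod (<controller-pod> is a placeholder) and
# talk to Kibana over HTTPS, ignoring the self-signed certificate
kubectl exec -n nginx-ingress -it <controller-pod> -- \
  curl -vk https://quickstart-kb-http.default.svc.cluster.local:5601/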