
Apologies for the length, but keeping this short would mean missing out on some important details of my problem.

I have a legacy Java application that works in an active/standby mode in a clustered environment to expose certain RESTful web services via a predefined port.

If there are two nodes in my app cluster, at any point in time only one is in Active mode and the other in Passive mode, and requests are always served by the node whose app instance is Active. 'Active' and 'Passive' are just roles; the app itself runs on both nodes. The Active and Passive instances communicate with each other through this same predetermined port.

Suppose I have a two-node cluster with one instance of my application running on each node: one instance starts out Active and the other Passive. If the Active node goes down for some reason, the app instance on the other node detects this through a heartbeat mechanism, takes over, and becomes the new Active. When the old Active comes back up, it detects that the other instance now holds the Active role, so it goes into Passive mode.

The application provides its RESTful web services on the same endpoint IP regardless of which node is Active by using a cluster IP that piggybacks on the active instance: the cluster IP fails over to whichever node is running the app in Active mode.

I am trying to containerize this app and run it in a Kubernetes cluster for scale and ease of deployment. I have containerized it and can deploy it as a pod in a Kubernetes cluster.

To reproduce the Active/Passive roles here, I run two instances of this pod, each pinned to a separate Kubernetes node using node affinity (each node is labeled either active or passive, and the pod definitions pin to these labels), and I cluster them using my app's own clustering mechanism, so that one instance is active and the other passive.
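
(For reference, the nodetype labels used by the affinity rules below would be applied with something like the following; the node names are placeholders:)

kubectl label nodes <node-1> nodetype=active
kubectl label nodes <node-2> nodetype=passive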

I expose the REST service externally using Kubernetes Service semantics, via a NodePort on the master node.

Here's my YAML file content:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp-service
spec:
  type: NodePort
  ports:
    - port: 8443
      nodePort: 30403
  selector:
    app: myapp

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - active
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: active-pv-claim
      containers:
      - name: active
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
          privileged: true
        ports:
        - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - passive
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: active-pv-claim
      containers:
      - name: passive
        image: myapp:latest
        imagePullPolicy: Never
        securityContext:
          privileged: true
        ports:
        - containerPort: 8443
        volumeMounts:
        - mountPath: "/myapptmp"
          name: task-pv-storage

Everything seems to work fine, except that since both pods expose the web service on the same port, the Kubernetes Service routes incoming requests to either of the two pods at random. Because my REST endpoints only work on the Active node, requests through the Service succeed only when they happen to be routed to the pod in the Active role; whenever the Service routes a request to the pod in the Passive role, the service is inaccessible.

How do I make the Kubernetes Service always route requests to the pod whose app instance is in the Active role? Is this doable in Kubernetes, or am I aiming for too much?

Thank you for your time!


2 Answers


You can use a readiness probe in combination with an elector container. The election will always elect one master from the election pool, and if you make sure only that pod is marked as ready... only that pod will receive traffic.
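
Here is a rough sketch of that pattern as a container-list fragment for your pod template. It assumes the leader-elector sidecar image from the old Kubernetes contrib examples (gcr.io/google_containers/leader-elector), which runs an election through the Kubernetes API and reports the current leader as JSON on a local HTTP port; the image tag, flags, JSON shape, and the availability of wget in your app image are all assumptions to verify:

      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8443
        # Mark this pod Ready only while it is the elected leader, so the
        # Service's endpoint list contains just the active instance.
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            # The elector sidecar serves {"name":"<leader-pod-name>"} on :4040;
            # compare that to this pod's hostname (assumes wget is in the image).
            - wget -qO- http://localhost:4040 | grep -q "\"name\":\"$HOSTNAME\""
          initialDelaySeconds: 10
          periodSeconds: 5
      - name: elector
        # Leader-election sidecar; image and flags follow the kubernetes
        # contrib leader-election example and may need updating.
        image: gcr.io/google_containers/leader-elector:0.5
        args:
        - --election=myapp
        - --http=0.0.0.0:4040
        ports:
        - containerPort: 4040

With this in place both deployments can keep the same app: myapp label; the Service only sends traffic to the pod that is currently Ready, i.e. the leader. On RBAC-enabled clusters the pods would also need a service account permitted to manage the election's lock object.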

answered Nov 15, 2017 at 10:55

One way to achieve this is to add labels to the pods marking them as active and standby, and then select the active pod in your Service. This will send traffic only to the pod labeled active; see the sketch after the links below.

https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#service-and-replicationcontroller

You can find another example in this documentation:

https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
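
As a concrete sketch of this suggestion: give the pods a distinguishing label (the role key below is a made-up example) and include it in the Service selector, so only the active pod matches:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 8443
    nodePort: 30403
  selector:
    app: myapp
    role: active   # hypothetical label; only the pod labeled active matches

On failover, something (a script, an operator, or a hook in the app's own heartbeat logic) would then have to move the label, e.g. kubectl label pod <new-active-pod> role=active --overwrite, and change it on the old active pod. Note that labels set in a Deployment's pod template are re-applied to replacement pods, so flipping the label is an ongoing operational step, not one-time setup.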

answered Nov 14, 2017 at 23:37