
I am trying to use an input transformer together with my model. Here is the deployment YAML:

apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: two-testing
spec:
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - name: transformer
          image: transformer_image
          ports:
            - containerPort: 7100
              name: http
              protocol: TCP
        - name: model
          image: model_image
          ports:
            - containerPort: 7200
              name: http
              protocol: TCP

    graph:
      name: transformer
      type: TRANSFORMER
      children:
        - name: model
          type: MODEL
          children: []
      endpoint:
        service_port: 7300
    name: model
    replicas: 1
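
For reference, I applied this manifest and then checked the resources Seldon created; the file name below is just what I use locally:

# apply the manifest above (file name is mine)
kubectl apply -f seldon-deployment.yaml
# list the pods and services created for the deployment
kubectl get pods,svc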

The endpoint I am sending requests to is 'http://0.0.0.0:3000/api/v1.0/predictions', where 3000 is the local port I forwarded. However, the request hits the model container directly, so I get an error. Can someone tell me which endpoint I should send the request to, so that it goes through the input transformer first and then reaches the model?
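
For completeness, this is roughly how I am sending the request right now; the pod name, target port, and request payload below are placeholders for my actual values:

# forward local port 3000 to a port on the pod (target is a placeholder)
kubectl port-forward pod/&lt;pod-name&gt; 3000:&lt;container-port&gt;
# send a prediction request through the forwarded port
curl -X POST http://0.0.0.0:3000/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{"data": {"ndarray": [[1.0, 2.0]]}}'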
