
Using the basic gRPC client from the TensorFlow Serving examples to get predictions from a model running in Docker, I get the following response:

        status = StatusCode.UNAVAILABLE
        details = "OS Error"
        debug_error_string = "{"created":"@1580748231.250387313",
            "description":"Error received from peer",
            "file":"src/core/lib/surface/call.cc",
            "file_line":1017,"grpc_message":"OS Error","grpc_status":14}"

Here is what my client currently looks like:

    import grpc
    import tensorflow as tf
    import cv2

    from tensorflow_serving.apis import predict_pb2
    from tensorflow_serving.apis import prediction_service_pb2_grpc


    def main():
        # Load the image as a NumPy array (BGR) with OpenCV.
        data = cv2.imread('/home/matt/Downloads/cat.jpg')

        # Open an insecure gRPC channel to TensorFlow Serving's gRPC port.
        channel = grpc.insecure_channel('localhost:8500')
        stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

        # Build a request against the model's default serving signature.
        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'model'
        request.model_spec.signature_name = 'serving_default'

        # Flatten the image into a [1, N] tensor for the 'image_bytes' input.
        request.inputs['image_bytes'].CopyFrom(
            tf.make_tensor_proto(data, shape=[1, data.size]))
        result = stub.Predict(request, 10.0)  # 10 secs timeout
        print(result)

    if __name__ == '__main__':
        main()

Thanks in advance for your help :)


1 Answer


Providing the solution here, even though it appeared in the comments section, for the benefit of the community.

The solution is that, before executing the client file, we need to bring up the TensorFlow Model Server by running a Docker container with the command given below:

docker run -t --rm -p 8500:8500 -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &

Besides invoking the TensorFlow Model Server, this command:

  1. maps the model's local path to the model's path inside the serving container, and
  2. publishes the ports used to communicate with the TensorFlow Model Server (port 8500 is exposed for gRPC and port 8501 for the REST API; the client in the question talks gRPC, so port 8500 must be published as in the command above). A quick way to verify the server is reachable is sketched below.
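
Once the container is up, it can help to verify that both endpoints are actually reachable before running the client. This is a minimal sketch, assuming the container was started with the port mappings above and that the requests package is installed; grpc.channel_ready_future blocks until the channel connects or the timeout expires, and TensorFlow Serving's REST model-status endpoint reports whether the model has loaded:

    import grpc
    import requests

    # Wait up to 10 seconds for the gRPC endpoint (port 8500) to accept
    # connections; StatusCode.UNAVAILABLE from Predict usually means this
    # never succeeds.
    channel = grpc.insecure_channel('localhost:8500')
    try:
        grpc.channel_ready_future(channel).result(timeout=10)
        print('gRPC endpoint on port 8500 is reachable')
    except grpc.FutureTimeoutError:
        print('gRPC endpoint unreachable -- is the container running '
              'with -p 8500:8500?')

    # Query the REST model-status endpoint (port 8501); a state of
    # AVAILABLE in the response means the model is loaded and serving.
    status = requests.get('http://localhost:8501/v1/models/half_plus_two')
    print(status.json())

If the readiness check times out, the StatusCode.UNAVAILABLE / "OS Error" response from the question is expected, since the stub has nothing to connect to.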