
I am currently trying to serve a trained "textsum" model with TensorFlow Serving. I am using TF 0.11, and from what I have read it appears that export_meta_graph is called automatically when the model is saved, which is what produces the exported ckpt and ckpt.meta files.
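For context, those two files are what tf.train.Saver.save() writes: the checkpoint holding the variable values plus the MetaGraphDef produced via export_meta_graph. Below is only a minimal sketch of that save step, assuming the TF 0.11-era API; the dummy variable stands in for the real textsum graph, and the path and global step simply mirror the filenames above:

```python
import tensorflow as tf

# Stand-in for the real textsum graph; in practice seq2seq_attention.py builds
# the full model and manages its own Saver.
dummy = tf.Variable(tf.zeros([1]), name='stand_in_variable')

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())  # TF 0.11-era initializer
    # save() writes the variable values to model.ckpt-<global_step> and, via
    # export_meta_graph, the MetaGraphDef to model.ckpt-<global_step>.meta.
    saver.save(sess, 'tf_models/textsum/log_root/model.ckpt', global_step=230381)
```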

Under the textsum/log_root directory I have several files. One is model.ckpt-230381 and another is model.ckpt-230381.meta.

So, as I understand it, this is the location I should be able to point at when setting up the served model. I issued the following commands:

bazel build //tensorflow_serving/model_servers:tensorflow_model_server

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=model  --model_base_path=tf_models/textsum/log_root/

After running the above, I get the following message:

W tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:204] No versions of servable model found under base path tf_models/textsum/log_root/

After running inspect_checkpoint on the checkpoint file, I see:

> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
> I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
> seq2seq/output_projection/w (DT_FLOAT) [256,335906]
> seq2seq/output_projection/v (DT_FLOAT) [335906]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder3/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Bias (DT_FLOAT) [128]
> seq2seq/decoder/attention_decoder/AttnW_0 (DT_FLOAT) [1,1,512,512]
> seq2seq/decoder/attention_decoder/AttnV_0 (DT_FLOAT) [512]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/decoder/attention_decoder/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder1/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> global_step (DT_INT32) []
> seq2seq/encoder1/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Bias (DT_FLOAT) [256]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Matrix (DT_FLOAT) [512,512]
> seq2seq/decoder/attention_decoder/Attention_0/Linear/Bias (DT_FLOAT) [512]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/Linear/Matrix (DT_FLOAT) [640,128]
> seq2seq/decoder/attention_decoder/AttnOutputProjection/Linear/Matrix (DT_FLOAT) [768,256]
> seq2seq/embedding/embedding (DT_FLOAT) [335906,128]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder3/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder0/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [384,1024]
> seq2seq/encoder0/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/decoder/attention_decoder/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/B (DT_FLOAT) [1024]
> seq2seq/encoder2/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder1/BiRNN/FW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
> seq2seq/encoder2/BiRNN/BW/LSTMCell/W_0 (DT_FLOAT) [768,1024]
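
For reference, the same variable-name/shape listing can also be produced directly from Python; a small sketch, assuming tf.train.NewCheckpointReader is available in this TF build and using the checkpoint path from above:

```python
import tensorflow as tf

# Open the checkpoint directly and print each variable name and shape;
# the output corresponds to the inspect_checkpoint listing above.
reader = tf.train.NewCheckpointReader('tf_models/textsum/log_root/model.ckpt-230381')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)
```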

Am I misunderstanding what needs to happen for the export? Any ideas as to why the model is not being found?


1 Answer


Although I am still working on exporting the textsum model for TensorFlow Serving, it seems my problem was that I assumed the files the model saves above were the same files that get created when the model is exported. Based on the answer I received on git, that does not appear to be the case, and I actually have to run an export on the model itself. Once that is done, TF Serving should be able to see the model.
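For anyone hitting the same wall: with TF 0.11-era Serving, the model server does not read raw Saver checkpoints; it scans --model_base_path for numbered version directories written by an explicit export step, which at that time typically meant the contrib session_bundle Exporter. What follows is only a rough sketch under that assumption; the tensor names ('articles:0', 'summaries:0') are illustrative placeholders, not the real textsum node names.

```python
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the trained weights.
    saver = tf.train.import_meta_graph(
        'tf_models/textsum/log_root/model.ckpt-230381.meta')
    saver.restore(sess, 'tf_models/textsum/log_root/model.ckpt-230381')

    # Hypothetical tensor names -- replace with the actual textsum input
    # (article) and output (summary) tensors from the restored graph.
    article_t = sess.graph.get_tensor_by_name('articles:0')
    summary_t = sess.graph.get_tensor_by_name('summaries:0')

    model_exporter = exporter.Exporter(saver)
    model_exporter.init(
        sess.graph.as_graph_def(),
        named_graph_signatures={
            'inputs': exporter.generic_signature({'articles': article_t}),
            'outputs': exporter.generic_signature({'summaries': summary_t})})
    # Writes a numbered version directory (e.g. 00000001/) under the export
    # path, which is the layout tensorflow_model_server scans for.
    model_exporter.export('tf_models/textsum/export', tf.constant(1), sess)
```

If the export lands in tf_models/textsum/export, then --model_base_path would point at that directory rather than at log_root.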

Answered 2016-12-06T01:50:09.273