
I am trying to set up the decode functionality of textsum with TensorFlow Serving, but I can't quite work out from the MNIST tutorial what is strictly necessary. Has anyone come across any other tutorials on setting up a model for TensorFlow Serving, ideally something closer to textsum? Any help or direction would be great. Thanks!

Ultimately, I am trying to export the decode functionality from a model trained via "train" in seq2seq_attention.py: https://github.com/tensorflow/models/blob/master/textsum/seq2seq_attention.py

While comparing the following two files to work out what I need to do for the textsum model above, I'm having a hard time understanding what should be assigned to default_graph_signature, the input tensor, classes_tensor, and so on. I realize these probably don't map directly onto the textsum model, and that is exactly what I'd like to clarify; if I could see some other model being exported for TensorFlow Serving, it would probably make more sense.

Compared: https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/examples/tutorials/mnist/mnist_softmax.py

https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_export.py
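
For reference, from what I can tell the signature setup in mnist_export.py boils down to roughly the following (paraphrased, not a verbatim copy); x, y, values and prediction_classes are tensors from the MNIST graph, and those are exactly the pieces I don't know how to translate to textsum:

# Paraphrase of the signature setup in mnist_export.py (not verbatim).
# x is the image placeholder, y the softmax score tensor.
values, indices = tf.nn.top_k(y, 10)
prediction_classes = ...  # string class labels looked up from the top_k indices

model_exporter = exporter.Exporter(saver)
model_exporter.init(
    sess.graph.as_graph_def(),
    init_op=init_op,
    # default signature: classify an image into one of the 10 digit classes
    default_graph_signature=exporter.classification_signature(
        input_tensor=x,
        classes_tensor=prediction_classes,
        scores_tensor=values),
    # named signatures: plain tensor maps for the raw input and output
    named_graph_signatures={
        'inputs': exporter.generic_signature({'images': x}),
        'outputs': exporter.generic_signature({'scores': y})})
model_exporter.export(export_path, tf.constant(FLAGS.export_version), sess)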

---------- EDIT ----------

Below is what I have so far, but I'm running into a couple of issues. I'm trying to set up the serving functionality for textsum eval/decode. First, when Saver(sharded=True) is constructed, I get an error saying "No variables to save". Beyond that, I also don't understand what I should assign to the "classification_signature" and "named_graph_signature" variables in order to export the results of textsum decoding.

Any help on what I'm missing here would be appreciated... I'm sure it's quite a bit.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

tf.app.flags.DEFINE_string("export_dir", "exports/textsum",
                           "Directory where to export textsum model.")

tf.app.flags.DEFINE_string('checkpoint_dir', 'log_root',
                            "Directory where to read training checkpoints.")
tf.app.flags.DEFINE_integer('export_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_bool("use_checkpoint_v2", False,
                     "If true, write v2 checkpoint files.")
FLAGS = tf.app.flags.FLAGS

def Export():
    try:
        saver = tf.train.Saver(sharded=True)
        with tf.Session() as sess:
            # Restore variables from training checkpoints.
            ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                print('Successfully loaded model from %s at step=%s.' %
                    (ckpt.model_checkpoint_path, global_step))
            else:
                print('No checkpoint file found at %s' % FLAGS.checkpoint_dir)
                return

            # Export model
            print('Exporting trained model to %s' % FLAGS.export_dir)
            init_op = tf.group(tf.initialize_all_tables(), name='init_op')
            model_exporter = exporter.Exporter(saver)

            classification_signature = <-- Unsure what should be assigned here

            named_graph_signature = <-- Unsure what should be assigned here

            model_exporter.init(
                init_op=init_op,
                default_graph_signature=classification_signature,
                named_graph_signatures=named_graph_signature)

            model_exporter.export(FLAGS.export_dir, tf.constant(global_step), sess)
            print('Successfully exported model to %s' % FLAGS.export_dir)
    except:
        err = sys.exc_info()
        print ('Unexpected error:', err[0], ' - ', err[1])
        pass


def main(_):
    Export()

if __name__ == "__main__":
    tf.app.run()
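
My best guess for the two placeholders above, adapting the generic_signature pattern from the MNIST example, would be something like the following. The tensor names ('articles:0', 'decoded:0') are pure guesses on my part; I don't actually know what the input and output tensors of the textsum decode graph are called, and I'm not sure a classification_signature even makes sense for a seq2seq decode:

# Guessing here -- these tensor names are placeholders, not the real ones
# from the textsum graph.
article_tensor = sess.graph.get_tensor_by_name('articles:0')  # encoder input (guess)
abstract_tensor = sess.graph.get_tensor_by_name('decoded:0')  # decoder output (guess)

# Plain input/output mapping instead of a classification signature,
# since decoding a summary isn't really a classification problem.
classification_signature = exporter.generic_signature({
    'articles': article_tensor,
    'abstracts': abstract_tensor})

named_graph_signature = {
    'inputs': exporter.generic_signature({'articles': article_tensor}),
    'outputs': exporter.generic_signature({'abstracts': abstract_tensor})}

Even with that, the Saver(sharded=True) call still fails with "No variables to save", presumably because nothing builds the textsum graph in this session before the Saver is constructed; that is the other part I can't work out.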