
I am using the tf.estimator API to predict punctuation. I trained the model with TFRecords and tf.train.shuffle_batch. Now I want to make predictions. I can feed static NumPy data in just fine via tf.constant returned from the input_fn.

However, I am working with sequence data, and I need to feed one example at a time, with the next input depending on the previous output. I also want to be able to process data fed in via HTTP requests.

Every time estimator.predict is called, it reloads the checkpoint and recreates the entire graph. This is slow and expensive. So I need a way to dynamically feed data to the input_fn.
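For reference, the per-example pattern I'm trying to avoid looks roughly like this (a sketch; get_numpy_data stands in for my data pipeline):

def predict_one(estimator, x):
    def input_fn():
        return tf.constant(x)  # pins the single example into a fresh graph
    # each call rebuilds the whole graph and restores the checkpoint
    return next(estimator.predict(input_fn=input_fn))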

My current attempt looks roughly like this:

feature_input = tf.placeholder(tf.int32, shape=[1, MAX_SUBSEQUENCE_LEN])
q = tf.FIFOQueue(1, tf.int32, shapes=[[1, MAX_SUBSEQUENCE_LEN]])
enqueue_op = q.enqueue(feature_input)

def input_fn():
    return q.dequeue()

estimator = tf.estimator.Estimator(model_fn, model_dir=model_file)
predictor = estimator.predict(input_fn=input_fn)
sess = tf.Session()
output = None

while True:
    x = get_numpy_data(output)
    if x is None:
        break
    sess.run(enqueue_op, {feature_input: x})
    output = next(predictor)
    save_to_file(output)

sess.close()

However, I get the following error:

ValueError: Input graph and Layer graph are not the same: Tensor("EmbedSequence/embedding_lookup:0", shape=(1, 200, 128), dtype=float32) is not from the passed-in graph.

How can I asynchronously insert data into an existing graph through an input_fn, so that I get one prediction at a time?


1 Answer


It turns out the main problem is that all tensors need to be created inside the input_fn, or they don't get added to the same graph. I needed to run an enqueue operation, but it was impossible to access anything returned from the input function.
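A minimal sketch of the underlying issue (illustrative only, not from my model): Estimator.predict builds its own tf.Graph internally, so any tensor created beforehand belongs to a different graph:

import tensorflow as tf

outside = tf.placeholder(tf.int32)  # created in the current default graph

# estimator.predict does roughly this internally: build a brand-new graph
with tf.Graph().as_default() as prediction_graph:
    inside = tf.constant(0)
    print(outside.graph is prediction_graph)  # False -> "not from the passed-in graph"
    print(inside.graph is prediction_graph)   # True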

I ended up subclassing the Estimator class and creating a custom predict function that lets me dynamically add data to the prediction queue and return the results:

# async_estimator.py

import six
import tensorflow as tf
from tensorflow.python.estimator.estimator import Estimator
from tensorflow.python.estimator.estimator import _check_hooks_type
from tensorflow.python.estimator import model_fn as model_fn_lib
from tensorflow.python.framework import ops
from tensorflow.python.framework import random_seed
from tensorflow.python.training import saver
from tensorflow.python.training import training


class AsyncEstimator(Estimator):

    def async_predictor(self,
                dtype,
                shape=None,
                predict_keys=None,
                hooks=None,
                checkpoint_path=None):
        """Returns a tuple of functions: first runs predicitons on the model, second cleans up
        Args:
          dtype: the dtype of the input
          shape: the shape of the input placeholder (optional)
          predict_keys: list of `str`, name of the keys to predict. It is used if
            the `EstimatorSpec.predictions` is a `dict`. If `predict_keys` is used
            then rest of the predictions will be filtered from the dictionary. If
            `None`, returns all.
          hooks: List of `SessionRunHook` subclass instances. Used for callbacks
            inside the prediction call.
          checkpoint_path: Path of a specific checkpoint to predict. If `None`, the
            latest checkpoint in `model_dir` is used.
        Returns:
          (predict, finish): tuple of functions

            predict: runs a single prediction and returns the results
                Args:
                    x: NumPy array of input
                Returns:
                    Evaluated value of the prediction

            finish: closes the session, allowing the program to exit

        Raises:
          ValueError: Could not find a trained model in model_dir.
          ValueError: if batch length of predictions are not same.
          ValueError: If there is a conflict between `predict_keys` and
            `predictions`. For example if `predict_keys` is not `None` but
            `EstimatorSpec.predictions` is not a `dict`.
        """
        hooks = _check_hooks_type(hooks)
        # Check that model has been trained.
        if not checkpoint_path:
            checkpoint_path = saver.latest_checkpoint(self._model_dir)
        if not checkpoint_path:
            raise ValueError('Could not find trained model in model_dir: {}.'.format(
                self._model_dir))

        with ops.Graph().as_default() as g:
            random_seed.set_random_seed(self._config.tf_random_seed)
            training.create_global_step(g)
            # Create the placeholder, queue, and model input inside this graph,
            # so every tensor lives in the same graph as the model (the fix for
            # the ValueError in the question).
            input_placeholder = tf.placeholder(dtype=dtype, shape=shape)
            queue = tf.FIFOQueue(1, dtype, shapes=shape)
            enqueue_op = queue.enqueue(input_placeholder)
            features = queue.dequeue()
            estimator_spec = self._call_model_fn(features, None,
                                                 model_fn_lib.ModeKeys.PREDICT)
            predictions = self._extract_keys(estimator_spec.predictions, predict_keys)
            # The MonitoredSession restores the checkpoint once and is then
            # reused across predict() calls, avoiding the per-call reload.
            mon_sess = training.MonitoredSession(
                    session_creator=training.ChiefSessionCreator(
                        checkpoint_filename_with_path=checkpoint_path,
                        scaffold=estimator_spec.scaffold,
                        config=self._session_config),
                    hooks=hooks)

            def predict(x):
                if mon_sess.should_stop():
                    raise StopIteration
                mon_sess.run(enqueue_op, {input_placeholder: x})
                preds_evaluated = mon_sess.run(predictions)
                if not isinstance(predictions, dict):
                    return preds_evaluated
                else:
                    preds = []
                    for i in range(self._extract_batch_length(preds_evaluated)):
                        preds.append({
                            key: value[i]
                            for key, value in six.iteritems(preds_evaluated)
                        })
                    return preds

            def finish():
                mon_sess.close()

            return predict, finish

And here is the rough code that uses it:

import tensorflow as tf
from async_estimator import AsyncEstimator


def doPrediction(model_fn, model_dir, max_seq_length):
    estimator = AsyncEstimator(model_fn, model_dir=model_dir)
    predict, finish = estimator.async_predictor(dtype=tf.int32, shape=(1, max_seq_length))
    output = None

    while True:
        # my input is dependent on the previous output
        x = get_numpy_data(output)
        if x is None:
            break
        output = predict(x)
        save_to_disk(output)

    finish()
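Since the original goal also included handling data fed in via HTTP requests, the predict/finish pair can be wrapped in a request handler. Below is a rough sketch, assuming Flask; the /punctuate route, the JSON payload format, and the helper wiring are made up for illustration:

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

from async_estimator import AsyncEstimator

app = Flask(__name__)
# model_fn, model_dir, and MAX_SUBSEQUENCE_LEN come from the model code above
estimator = AsyncEstimator(model_fn, model_dir=model_dir)
predict, finish = estimator.async_predictor(dtype=tf.int32,
                                            shape=(1, MAX_SUBSEQUENCE_LEN))

@app.route('/punctuate', methods=['POST'])
def punctuate():
    # hypothetical payload: {"ids": [...]}, one subsequence of token ids
    # padded to MAX_SUBSEQUENCE_LEN
    ids = np.array(request.get_json()['ids'], dtype=np.int32).reshape(1, -1)
    output = predict(ids)
    return jsonify({'prediction': np.asarray(output).tolist()})

if __name__ == '__main__':
    app.run()  # one process keeps one loaded graph across many requests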

Note: this is a simple solution that fits my needs; it may need to be modified for other cases. It works on TensorFlow 1.2.1.

Hopefully TF will officially adopt something like this to make serving dynamic predictions with Estimator easier.

Answered 2017-07-15T20:33:10.277