
I want to run image style transfer inside a for loop. The problem is the following: every iteration takes longer than the previous one. Why is that? I just read another thread where someone suggested using a placeholder for the content image, but I am already doing that and it does not change the behavior. The code below is adapted from this repository: https://github.com/hwalsuklee/tensorflow-fast-style-transfer

Here is the relevant code from my program:

sess = tf.Session(config=soft_config)

value = 1
args = parse_args()

for st in mnist_list[:]:

    if args is None:
        exit()

    # load content image
    content_image = utils.load_image(pfad_liste + "\\" + st, max_size=args.max_size)
    transformer = style_transfer_tester.StyleTransferTester(session=sess,
                                                            model_path=args.style_model,
                                                            content_image=content_image,
                                                            )

    value = value + 1

    # execute the graph
    start_time = time.time()
    output_image = transformer.test()
    end_time = time.time()
    print('EXECUTION TIME for ALL  image : %f sec' % (1.*float(end_time - start_time)))

    out_string = "D:\\DeepLearning\\tensorflow-fast-style-transfer\\images\\02_results\\" + str(value) + "_resultNEU.jpg"
    utils.save_image(output_image, out_string)

    tf.get_variable_scope().reuse_variables()

The class I am calling in the code above is defined here:

import tensorflow as tf
import transform

class StyleTransferTester:

    def __init__(self, session, content_image, model_path):
        # session
        self.sess = session

        # input images
        self.x0 = content_image

        # input model
        self.model_path = model_path

        # image transform network
        self.transform = transform.Transform()

        # build graph for style transfer
        self._build_graph()

    def _build_graph(self):

        # graph input
        self.x = tf.placeholder(tf.float32, shape=self.x0.shape, name='input')
        self.xi = tf.expand_dims(self.x, 0) # add one dim for batch

        # result image from transform-net
        self.y_hat = self.transform.net(self.xi/255.0)
        self.y_hat = tf.squeeze(self.y_hat) # remove one dim for batch
        self.y_hat = tf.clip_by_value(self.y_hat, 0., 255.)

        self.sess.run(tf.global_variables_initializer())

        # load pre-trained model
        saver = tf.train.Saver()
        saver.restore(self.sess, self.model_path)

    def test(self):

        # initialize parameters
        #self.sess.run(tf.global_variables_initializer())

        # load pre-trained model
        #saver = tf.train.Saver()
        #saver.restore(self.sess, self.model_path)

        # get transformed image
        output = self.sess.run(self.y_hat, feed_dict={self.x: self.x0})

        return output

The console output looks like this:

EXECUTION TIME for ALL  image : 3.297000 sec
EXECUTION TIME for ALL  image : 0.450000 sec
EXECUTION TIME for ALL  image : 0.474000 sec
EXECUTION TIME for ALL  image : 0.507000 sec
EXECUTION TIME for ALL  image : 0.524000 sec
EXECUTION TIME for ALL  image : 0.533000 sec
EXECUTION TIME for ALL  image : 0.559000 sec
EXECUTION TIME for ALL  image : 0.555000 sec
EXECUTION TIME for ALL  image : 0.570000 sec
EXECUTION TIME for ALL  image : 0.609000 sec
EXECUTION TIME for ALL  image : 0.623000 sec
EXECUTION TIME for ALL  image : 0.645000 sec
EXECUTION TIME for ALL  image : 0.667000 sec
EXECUTION TIME for ALL  image : 0.663000 sec
EXECUTION TIME for ALL  image : 0.746000 sec
EXECUTION TIME for ALL  image : 0.720000 sec
EXECUTION TIME for ALL  image : 0.733000 sec

I know this is a hard question that goes quite "deep" into the details of TensorFlow.


1 Answer


I assume that everything after for st in mnist_list[:]: in your first code block is meant to be inside the loop body (i.e. indented in your actual script).

If that is the case, then your problem is most likely caused by re-instantiating the transformer on every iteration of the loop: transformer = style_transfer_tester.StyleTransferTester(...). That way you repeatedly call the StyleTransferTester constructor, which calls the _build_graph method, which in turn creates new objects (such as the placeholder) and new operations (the network ops) that get added to the existing graph.
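One quick way to confirm this (a diagnostic sketch I am adding here, not part of your original code) is to count the operations in the default graph at the end of each iteration; if the number keeps climbing, new ops are being appended on every pass through the loop:

# Diagnostic sketch: print how many ops the default graph contains.
# A growing count means the graph is being extended on every iteration.
num_ops = len(tf.get_default_graph().get_operations())
print('ops in graph: %d' % num_ops)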

As a result, your graph keeps growing, and so does the overall execution time. One possible solution is to create the StyleTransferTester object once (outside the loop) and then only update the content_image on each iteration.
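A minimal sketch of that restructuring could look like the following. It assumes that every image in mnist_list has the same dimensions, because _build_graph creates the input placeholder with a fixed shape taken from the first content image; if the sizes differ, the placeholder shape would need to be relaxed instead.

# Build the graph and restore the model weights once, before the loop.
# The first image only serves to fix the placeholder shape.
first_image = utils.load_image(pfad_liste + "\\" + mnist_list[0], max_size=args.max_size)
transformer = style_transfer_tester.StyleTransferTester(session=sess,
                                                        model_path=args.style_model,
                                                        content_image=first_image)

for value, st in enumerate(mnist_list, start=1):
    # Only swap the content image; the graph and the restored weights are reused.
    transformer.x0 = utils.load_image(pfad_liste + "\\" + st, max_size=args.max_size)

    start_time = time.time()
    output_image = transformer.test()
    end_time = time.time()
    print('EXECUTION TIME for image %d : %f sec' % (value, end_time - start_time))

    out_string = "D:\\DeepLearning\\tensorflow-fast-style-transfer\\images\\02_results\\" + str(value) + "_resultNEU.jpg"
    utils.save_image(output_image, out_string)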

answered 2020-05-28T12:22:22.237