
Case 1

Framework: Tensorflow 2.5.0, Intel-Tensorflow 2.5.0

Environment: Google Colab

I have a model that was successfully quantized by LPOT, and I want to run inference on it without using the LPOT API, so I wrote the following inference code:

import tensorflow as tf

# model, input_tensor_name, output_tensor_name, x and y are defined earlier
with tf.compat.v1.Session() as sess:
    # Load the LPOT-quantized SavedModel into the session's graph
    tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
    output = sess.graph.get_tensor_by_name(output_tensor_name)
    predictions = sess.run(output, {input_tensor_name: x})
    mse = tf.reduce_mean(tf.keras.losses.mean_squared_error(y, predictions))
    print(mse.eval())

When I run the line predictions = sess.run(output, {input_tensor_name: x}), the following error is raised:

---------------------------------------------------------------------------
InternalError                             Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1374     try:
-> 1375       return fn(*args)
   1376     except errors.OpError as e:

7 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1359       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1360                                       target_list, run_metadata)
   1361 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1452                                             fetch_list, target_list,
-> 1453                                             run_metadata)
   1454 

InternalError: Missing 0-th output from {{node model/layer_1/Conv2D_eightbit_requantize}}

During handling of the above exception, another exception occurred:

InternalError                             Traceback (most recent call last)
<ipython-input-6-2bddd853d111> in <module>()
      2     tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
      3     output = sess.graph.get_tensor_by_name(output_tensor_name)
----> 4     predictions = sess.run(output, {input_tensor_name: x[:64]}) # 64, 257, 60, 1
      5     mse = tf.reduce_mean(tf.keras.losses.mean_squared_error(y[:64], predictions))
      6     print(mse.eval())

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    966     try:
    967       result = self._run(None, fetches, feed_dict, options_ptr,
--> 968                          run_metadata_ptr)
    969       if run_metadata:
    970         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1189     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1190       results = self._do_run(handle, final_targets, final_fetches,
-> 1191                              feed_dict_tensor, options, run_metadata)
   1192     else:
   1193       results = []

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1367     if handle is None:
   1368       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1369                            run_metadata)
   1370     else:
   1371       return self._do_call(_prun_fn, handle, feeds, fetches)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1392                     '\nsession_config.graph_options.rewrite_options.'
   1393                     'disable_meta_optimizer = True')
-> 1394       raise type(e)(node_def, op, message)
   1395 
   1396   def _extend_graph(self):

InternalError: Missing 0-th output from node model/layer_1/Conv2D_eightbit_requantize (defined at <ipython-input-6-2bddd853d111>:2) 

This error occurs whether or not Intel-Tensorflow==2.5.0 is installed, and it is not resolved by explicitly setting os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'.
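
For completeness, this is how I set the variable. As far as I can tell, such flags are only read once, when the TensorFlow runtime initializes, so the assignment has to happen before the first import:

import os

# Set before the first `import tensorflow`; as far as I can tell the
# flag is only read once, when the TensorFlow runtime initializes.
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'

import tensorflow as tf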

On the other hand, when I run the same code in VS Code (Python 3.6.8 64-bit, base: Conda), it returns the same error message as in Case 2.

Case 2

Framework: Tensorflow 2.4.0, Intel-Tensorflow 2.4.0

Environment: Google Colab

This case runs fine and prints the MSE loss of the predictions. However, when I uninstall Intel-Tensorflow 2.4.0 and run with only the official Tensorflow, the same line as in Case 1 (predictions = sess.run(output, {input_tensor_name: x})) raises:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1374     try:
-> 1375       return fn(*args)
   1376     except errors.OpError as e:

7 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1357       # Ensure any changes to the graph are reflected in the runtime.
-> 1358       self._extend_graph()
   1359       return self._call_tf_sessionrun(options, feed_dict, fetch_list,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _extend_graph(self)
   1397     with self._graph._session_run_lock():  # pylint: disable=protected-access
-> 1398       tf_session.ExtendSession(self._session)
   1399 

InvalidArgumentError: No OpKernel was registered to support Op 'QuantizedMatMulWithBiasAndDequantize' used by {{node model/dense/Tensordot/MatMul_eightbit_requantize}} with these attrs: [input_quant_mode="MIN_FIRST", T1=DT_QUINT8, Toutput=DT_FLOAT, T2=DT_QINT8, Tbias=DT_QINT32, transpose_a=false, transpose_b=false]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

     [[model/dense/Tensordot/MatMul_eightbit_requantize]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-6-2bddd853d111> in <module>()
      2     tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
      3     output = sess.graph.get_tensor_by_name(output_tensor_name)
----> 4     predictions = sess.run(output, {input_tensor_name: x[:64]}) # 64, 257, 60, 1
      5     mse = tf.reduce_mean(tf.keras.losses.mean_squared_error(y[:64], predictions))
      6     print(mse.eval())

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    966     try:
    967       result = self._run(None, fetches, feed_dict, options_ptr,
--> 968                          run_metadata_ptr)
    969       if run_metadata:
    970         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1189     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1190       results = self._do_run(handle, final_targets, final_fetches,
-> 1191                              feed_dict_tensor, options, run_metadata)
   1192     else:
   1193       results = []

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1367     if handle is None:
   1368       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1369                            run_metadata)
   1370     else:
   1371       return self._do_call(_prun_fn, handle, feeds, fetches)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1392                     '\nsession_config.graph_options.rewrite_options.'
   1393                     'disable_meta_optimizer = True')
-> 1394       raise type(e)(node_def, op, message)
   1395 
   1396   def _extend_graph(self):

InvalidArgumentError: No OpKernel was registered to support Op 'QuantizedMatMulWithBiasAndDequantize' used by node model/dense/Tensordot/MatMul_eightbit_requantize (defined at <ipython-input-6-2bddd853d111>:2)  with these attrs: [input_quant_mode="MIN_FIRST", T1=DT_QUINT8, Toutput=DT_FLOAT, T2=DT_QINT8, Tbias=DT_QINT32, transpose_a=false, transpose_b=false]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

     [[model/dense/Tensordot/MatMul_eightbit_requantize]]

The error persists even when os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1' is explicitly set.
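
For reference, switching from the Intel build back to the official build in Colab looked roughly like this (a sketch; the Colab runtime has to be restarted after changing packages):

# Colab cell; restart the runtime after these commands finish.
!pip uninstall -y intel-tensorflow
!pip install tensorflow==2.4.0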

Conclusion

I believe both cases are caused by the same type of error, namely No OpKernel was registered to support Op ... In other words, the quantized graph contains ops (such as QuantizedMatMulWithBiasAndDequantize) for which the running TensorFlow build has no registered kernel.

I was told that with the official Tensorflow v2.5 installed and the environment variable TF_ENABLE_ONEDNN_OPTS=1 set (reference), the quantized model should run with oneDNN support. But this does not seem to be the case in either v2.4 or v2.5.

My question is: how can I get an official Tensorflow 2.5 environment with oneDNN support, without installing Intel-Tensorflow? Or alternatively, why doesn't Intel-Tensorflow 2.5 work? Thanks.


1 Answer


LPOT is released as part of the Intel® AI Analytics Toolkit and works together with Intel Optimized TensorFlow. LPOT can run on any Intel CPU to quantize AI models. With Intel Optimized TensorFlow 2.5.0, the environment variable TF_ENABLE_MKL_NATIVE_FORMAT=0 needs to be set before running LPOT quantization or deploying the quantized model.
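
For example (a minimal sketch; like other TensorFlow flags, the variable should be in the environment before TensorFlow is first imported):

import os

# Needed by Intel Optimized TensorFlow 2.5.0 when deploying
# LPOT-quantized models; set before TensorFlow initializes.
os.environ['TF_ENABLE_MKL_NATIVE_FORMAT'] = '0'

import tensorflow as tf
# ...then load the SavedModel and run inference as in the question.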

Please refer to the documentation for more information.

Could you check whether you quantized the model in Tensorflow 2.4 and are running inference on Tensorflow 2.5? One plausible explanation for a model that runs in Tensorflow 2.4 but not in Tensorflow 2.5 is that the operators supported in Tensorflow 2.5 may not support a model created in Tensorflow 2.4.
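
To help with that check, here is a sketch (reusing the model variable from the question) that lists the quantization-related op types the SavedModel actually contains; each of these needs a kernel registered in the installed TensorFlow build:

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
    # Collect every distinct quantization-related op type in the graph.
    quant_ops = {op.type for op in sess.graph.get_operations()
                 if 'quantiz' in op.type.lower()}
    for op_type in sorted(quant_ops):
        print(op_type)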

answered 2021-07-29T06:29:28.523