
I am trying to simply run a .pb TensorFlow 2 model with XLA. However, I get the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
    Stacktrace:
        Node: __inference_predict_function_3130, function: 
        Node: IteratorGetNext, function: __inference_predict_function_3130
 [Op:__inference_predict_function_3130]

The error is independent of the model and also occurs when I apply a model directly after training it. I suspect I am either doing something fundamentally wrong, or TF2 does not properly support XLA. The same code runs fine without XLA. Does anyone know how to solve this?

I am using Python 3.8 and TF 2.4.1 from Anaconda on Ubuntu 18.04. My code:

import tensorflow as tf
import numpy as np
import h5py

model_path_compile = 'model_Input/pbFolder'
data_inference_mat = 'model_Input/data_inference/XXXX.MAT'

# Load the 'polar' dataset from the HDF5-based .mat file and scale it.
with h5py.File(data_inference_mat, 'r') as dataset:
    try:
        image_set = dataset['polar'][()].astype(np.uint16).T
        image = image_set.astype(np.float32)
        image /= 16384
    except KeyError:
        print('-----------------------ERROR--------------')

x = np.expand_dims(image, axis=0)  # add a batch dimension
model_compile = tf.keras.models.load_model(model_path_compile)
with tf.device("device:XLA_CPU:0"):
    y_pred = model_compile.predict(x)
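
One thing I noticed: the first line of the full error below says "Not creating XLA devices, tf_xla_enable_xla_devices not set". As far as I understand, in TF 2.4 the XLA_CPU/XLA_GPU devices are only registered when that flag is set before TensorFlow initializes, along these lines (the flag name comes from the log line itself; setting it was not part of my original script):

import os
# Must be set before TensorFlow is first initialized; otherwise TF 2.4
# does not create the XLA_CPU / XLA_GPU devices at all.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_enable_xla_devices"
import tensorflow as tf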

The full error:

2021-07-19 16:09:02.521211: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-07-19 16:09:02.521416: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-19 16:09:02.522638: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2021-07-19 16:09:03.357078: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2021-07-19 16:09:03.378059: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2400000000 Hz
Traceback (most recent call last):
  File "/media/ric/DATA/Software_Workspaces/MasterThesisWS/AI_HW_deploy/XLA/Tf2ToXLA_v2/TF2_RunModel.py", line 24, in <module>
    y_pred = model_compile.predict(x)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1629, in predict
    tmp_batch_outputs = self.predict_function(iterator)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 894, in _call
    return self._concrete_stateful_fn._call_flat(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
    outputs = execute.execute(
  File "/home/ric/anaconda3/envs/TfToXLA/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node __inference_predict_function_3130}} = __inference_predict_function_3130[_XlaMustCompile=true, config_proto="\n\007\n\003CPU\020\001\n\007\n\003GPU\020\0002\002J\0008\001\202\001\000", executor_type=""](dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, dummy_input, ...).
Uncompilable nodes:
IteratorGetNext: unsupported op: No registered 'IteratorGetNext' OpKernel for XLA_CPU_JIT devices compatible with node {{node IteratorGetNext}}
    Stacktrace:
        Node: __inference_predict_function_3130, function: 
        Node: IteratorGetNext, function: __inference_predict_function_3130
 [Op:__inference_predict_function_3130]

1 Answer


After several days of work and trying various approaches, I finally found a workaround that suits my purpose.

Since I only need the LLVM IR of a single execution of the model, I can use an alternative TensorFlow function, model.predict_step. It runs only once and therefore does not use IteratorGetNext, which avoids the original error. A sketch is shown below.
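
A minimal sketch of the workaround, reusing model_compile and x from the script in the question (and assuming a single-input model; the exact call is illustrative):

with tf.device("device:XLA_CPU:0"):
    # predict_step runs the forward pass once on a single batch, without
    # building the tf.data pipeline (and its IteratorGetNext op) that
    # model.predict uses internally.
    y_pred = model_compile.predict_step(tf.convert_to_tensor(x))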

answered 2021-07-20T12:27:25.150