
Using the example at https://www.tensorflow.org/tutorials/structured_data/preprocessing_layers, I created a model with my own data. I want to save it in TensorFlow Lite format. I saved it as a SavedModel, but when converting it I ran into a lot of errors. The last error I got:

WARNING:tensorflow:AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function canonicalize_signatures.<locals>.signature_wrapper at 0x7f4f61cd0560> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: closure mismatch, requested ('signature_function', 'signature_key'), but source function had ()
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28290>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60> and will run it as-is.
Cause: could not parse the source code of <function _trace_resource_initializers.<locals>._wrap_obj_initializer.<locals>.<lambda> at 0x7f4f61d28e60>: no matching AST found
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
INFO:tensorflow:Assets written to: /tmp/test_saved_model/assets
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    212       model body, the input/output will be quantized as well.
--> 213     inference_type: Data type for the activations. The default value is int8.
    214     enable_numeric_verify: Experimental. Subject to change. Bool indicating

4 frames
Exception: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.AddV2 {device = ""}
    tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
    tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}


During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    214     enable_numeric_verify: Experimental. Subject to change. Bool indicating
    215       whether to add NumericVerify ops into the debug mode quantized model.
--> 216 
    217   Returns:
    218     Quantized model in serialized form (e.g. a TFLITE model) with floating-point

ConverterError: <unknown>:0: error: loc("integer_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc("string_lookup_1_index_table"): 'tf.MutableHashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(callsite(callsite("model/string_lookup_1/string_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_3/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/integer_lookup_1/integer_lookup_1_index_table_lookup_table_find/LookupTableFindV2@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/add@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.AddV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/mul@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.Mul' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: loc(callsite(callsite("model/category_encoding_2/bincount/DenseBincount@__inference__wrapped_model_9475" at "StatefulPartitionedCall@__inference_signature_wrapper_10110") at "StatefulPartitionedCall")): 'tf.DenseBincount' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.AddV2 {device = ""}
    tf.DenseBincount {T = f32, Tidx = i64, binary_output = true, device = ""}
    tf.Mul {device = ""}Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.LookupTableFindV2 {device = "/job:localhost/replica:0/task:0/device:CPU:0"}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "table_704", use_node_name_sharing = false, value_dtype = i64}
    tf.MutableHashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "table_615", use_node_name_sharing = false, value_dtype = i64}

The code:


import pathlib

import tensorflow as tf

# Save the model into a temp directory.
export_dir = "/tmp/test_saved_model"
tf.saved_model.save(model, export_dir)

# Convert the model to TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Save the converted model.
tflite_model_file = pathlib.Path('/tmp/save_model_tflite.tflite')
tflite_model_file.write_bytes(tflite_model)

What is the cause of this error? My goal is to embed this model in a React Native app. Thank you.


1 Answer


Looking at your trace, it seems you have some HashTable ops. You need to set converter.allow_custom_ops = True to convert this model.

export_dir = "/content/test_saved_model"


tf.saved_model.save(model, export_dir)
# Convert the model into TF Lite.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)

converter.allow_custom_ops = True

tflite_model = converter.convert()

#save model 
tflite_model_files = pathlib.Path('/content/save_model_tflite.tflite')
tflite_model_files.write_bytes(tflite_model)
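Note that your trace also lists ops the flex runtime can handle (tf.AddV2, tf.DenseBincount, tf.Mul), via the -emit-select-tf-ops flag it mentions. If allow_custom_ops alone is not enough, here is a minimal sketch combining it with Select TF ops; it assumes the same `model` variable and paths from your question, and the output filename is just an example:

import pathlib

import tensorflow as tf

# Sketch: assumes `model` is the trained Keras model from the question.
export_dir = "/content/test_saved_model"
tf.saved_model.save(model, export_dir)

converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
# Let the converter fall back to TensorFlow kernels (the flex delegate)
# for ops such as tf.DenseBincount that have no TFLite builtin.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # selected TensorFlow ops via flex
]
# The hash-table ops are still reported as custom, so keep this enabled.
converter.allow_custom_ops = True

tflite_model = converter.convert()
pathlib.Path('/content/save_model_tflite_flex.tflite').write_bytes(tflite_model)

Keep in mind that a model converted with SELECT_TF_OPS needs the Select TF ops runtime linked into the app at inference time, which increases binary size.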
answered 2021-03-12T20:07:48.797