I trained a model by following TensorFlow's image retraining guide (https://www.tensorflow.org/hub/tutorials/image_retraining). I then tried to convert the resulting .pb model with tensorflowjs_converter, but I get an error about the meta graph.
My environment is Ubuntu 18.04 with tensorflow-gpu (https://www.tensorflow.org/install/gpu) and the latest version of tensorflowjs_converter (1.0.1).
Command executed to train the model:
python retrain.py --image_dir ./flower_photos --saved_model_dir=/tmp/saved_models/$(date +%s)/
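As a sanity check (my own addition, not part of the tutorial), the exported SavedModel can be loaded back in a TF 1.x-style session to confirm retrain.py wrote a usable export. A minimal sketch, assuming the default 'serve' tag and the timestamped directory from above:

import tensorflow as tf

# Reload the SavedModel that retrain.py exported and list its signatures.
# The 'serve' tag is an assumption (it is the usual default for serving exports).
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.compat.v1.saved_model.loader.load(
        sess, ['serve'], '/tmp/saved_models/1555066703')
    print(list(meta_graph.signature_def.keys()))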
Command executed to convert the model:
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model /tmp/saved_models/1555066703 /tmp/web_models
2019-04-12 15:45:06.797479: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-04-12 15:45:06.818525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2592000000 Hz
2019-04-12 15:45:06.819292: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55637fb624e0 executing computations on platform Host. Devices:
2019-04-12 15:45:06.819327: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2019-04-12 15:45:10.845000: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1364] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
WARNING: Logging before flag parsing goes to stderr.
W0412 15:45:11.737798 139737592477504 meta_graph.py:447] Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
W0412 15:45:11.738872 139737592477504 meta_graph.py:447] Issue encountered when serializing model_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
2019-04-12 15:45:11.743861: I tensorflow/core/grappler/devices.cc:61] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-04-12 15:45:11.743944: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-04-12 15:45:11.762060: E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node final_retrain_ops/weights/final_weights/Assign doesn't exist in graph
Traceback (most recent call last):
File "/home/davide/.local/bin/tensorflowjs_converter", line 11, in <module>
sys.exit(main())
File "/home/davide/.local/lib/python2.7/site-packages/tensorflowjs/converters/converter.py", line 358, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "/home/davide/.local/lib/python2.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 271, in convert_tf_saved_model
concrete_func)
File "/home/davide/.local/lib/python2.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 99, in convert_variables_to_constants_v2
graph_def = _run_inline_graph_optimization(func)
File "/home/davide/.local/lib/python2.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 57, in _run_inline_graph_optimization
return tf_optimizer.OptimizeGraph(config, meta_graph)
File "/home/davide/.local/lib/python2.7/site-packages/tensorflow/python/grappler/tf_optimizer.py", line 43, in OptimizeGraph
verbose, graph_id, status)
File "/home/davide/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 548, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.
I was expecting a tfjs model, but got the output above instead.
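For reference, the same conversion should also be reachable from Python rather than the CLI, which may help narrow down where the meta graph import fails. A minimal sketch, assuming convert_tf_saved_model accepts the SavedModel directory and the output directory as its first two arguments (the module and function names come from the traceback above; the exact argument order is my assumption):

from tensorflowjs.converters import tf_saved_model_conversion_v2

# Call the converter's Python entry point directly on the same directories
# used with the CLI above. Argument order is assumed, based on the traceback.
tf_saved_model_conversion_v2.convert_tf_saved_model(
    '/tmp/saved_models/1555066703',
    '/tmp/web_models')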