I am trying to load a GPT-2 model in a Node.js project. I believe this can be done with the tfjs library, so I am trying to convert the GPT-2 model into a tfjs model. Following the suggestion in this answer, I exported the GPT-2 model as a SavedModel.
!python3 -m pip install -q git+https://github.com/huggingface/transformers.git
!python3 -m pip install tensorflow tensorflowjs
Then I ran the following code to export the model as a SavedModel (a directory containing a saved_model.pb file).
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import tensorflowjs
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
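# model.save() with a plain directory path (no .h5 extension) writes the TensorFlow SavedModel format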
model.save("./test_gpt2")
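To sanity-check what the export actually contains, the SavedModel's serving signatures can be inspected like this (a quick sketch; I am assuming the default "serving_default" signature that Keras' model.save() normally creates):
import tensorflow as tf

# Reload the exported SavedModel and list its serving signatures
loaded = tf.saved_model.load("./test_gpt2")
print(list(loaded.signatures.keys()))  # expecting something like ['serving_default']

# Inspect the input/output tensor specs of the default signature
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
print(infer.structured_outputs)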
Then I ran this command to convert the SavedModel into tfjs-compatible files.
!tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='gpt2' \
--saved_model_tags=serve \
/content/test_gpt2 \
/content/test_gpt2_web_model
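For reference, the same conversion can also be attempted through the converter's Python API instead of the CLI (a sketch only; I am assuming, based on the traceback below, that convert_tf_saved_model takes the SavedModel directory and the output directory as its first two arguments):
from tensorflowjs.converters.tf_saved_model_conversion_v2 import convert_tf_saved_model

# Assumed call: SavedModel directory in, tfjs graph-model artifacts out
convert_tf_saved_model(
    "/content/test_gpt2",
    "/content/test_gpt2_web_model",
    saved_model_tags="serve",
)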
Running the tensorflowjs_converter command fails with this error:
2020-07-08 16:36:11.455383: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-07-08 16:36:11.459979: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2300000000 Hz
2020-07-08 16:36:11.460216: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e5b100 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-08 16:36:11.460284: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-07-08 16:36:18.337463: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-07-08 16:36:18.337631: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-07-08 16:36:18.536301: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-07-08 16:36:18.536373: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: Graph size after: 163 nodes (0), 175 edges (0), time = 43.871ms.
2020-07-08 16:36:18.536384: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: Graph size after: 163 nodes (0), 175 edges (0), time = 50.779ms.
2020-07-08 16:36:18.536393: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: __inference__wrapped_model_24863
2020-07-08 16:36:18.536402: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: function_optimizer did nothing. time = 0.004ms.
2020-07-08 16:36:18.536411: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799] function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
File "/usr/local/bin/tensorflowjs_converter", line 8, in <module>
sys.exit(pip_main())
File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 735, in pip_main
main([' '.join(sys.argv[1:])])
File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 739, in main
convert(argv[0].split(' '))
File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 681, in convert
control_flow_v2=args.control_flow_v2)
File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 494, in convert_tf_saved_model
weight_shard_size_bytes=weight_shard_size_bytes)
File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 143, in optimize_graph
', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
StatefulPartitionedCall
It says StatefulPartitionedCall is not supported. Is there a way to work around this?