I am trying to reproduce the results of this repo:
https://github.com/huggingface/transfer-learning-conv-ai
To do that, I am following the basic example that is not based on Docker:
git clone https://github.com/huggingface/transfer-learning-conv-ai
cd transfer-learning-conv-ai
pip install -r requirements.txt
python -m spacy download en
Then I tried:
python3 interact.py --model models/
and I got this error:
np_resource = np.dtype([("resource", np.ubyte, 1)])
usage: interact.py [-h] [--dataset_path DATASET_PATH]
[--dataset_cache DATASET_CACHE] [--model {openai-gpt,gpt2}]
[--model_checkpoint MODEL_CHECKPOINT]
[--max_history MAX_HISTORY] [--device DEVICE] [--no_sample]
[--max_length MAX_LENGTH] [--min_length MIN_LENGTH]
[--seed SEED] [--temperature TEMPERATURE] [--top_k TOP_K]
[--top_p TOP_P]
interact.py: error: argument --model: invalid choice: 'models/' (choose from 'openai-gpt', 'gpt2')
The first thing I noticed is that there is no "models" directory at all, so I created one and tried again, and got the same error.
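If I am reading the usage output above correctly, --model only accepts a model family name ('openai-gpt' or 'gpt2'), not a path, so 'models/' is rejected regardless of whether the directory exists. Something like the following argparse setup would produce exactly that message (a minimal sketch of my own, not the repo's actual code; defaults and help strings are assumptions):

from argparse import ArgumentParser

# Minimal sketch reproducing the "invalid choice" error above; the option names
# are copied from the usage text, everything else is assumed.
parser = ArgumentParser()
parser.add_argument("--model", type=str, default="openai-gpt",
                    choices=["openai-gpt", "gpt2"],
                    help="Model family, not a filesystem path")
parser.add_argument("--model_checkpoint", type=str, default="",
                    help="Path or URL of a fine-tuned checkpoint")

# parser.parse_args(["--model", "models/"]) exits with:
#   error: argument --model: invalid choice: 'models/' (choose from 'openai-gpt', 'gpt2')
args = parser.parse_args(["--model", "gpt2", "--model_checkpoint", "./models"])
print(args)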
The second thing I tried was to download the model from where the repo specifies:
We make a pretrained and fine-tuned model available on our S3 here
From that link I tried:
wget https://s3.amazonaws.com/models.huggingface.co/transfer-learning-chatbot/finetuned_chatbot_gpt.tar.gz
and extracted the files both into the main directory and into the models directory, then tried again.
That third attempt failed with the same error.
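For reference, the extraction step I did is roughly equivalent to this (a sketch using Python's tarfile; whether the archive unpacks to loose files or to a top-level folder is an assumption on my part):

import tarfile

# Unpack the downloaded checkpoint into its own folder; the target folder name
# "models" is just the directory I created above.
with tarfile.open("finetuned_chatbot_gpt.tar.gz", "r:gz") as archive:
    print(archive.getnames())          # see what the archive actually contains
    archive.extractall(path="models")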
This is the current structure of my working directory:
Dockerfile config.json interact.py pytorch_model.bin train.py
LICENCE convai_evaluation.py merges.txt requirements.txt utils.py
README.md example_entry.py model_training_args.bin special_tokens.txt vocab.json
__pycache__ finetuned_chatbot_gpt.tar.gz models test_special_tokens.py
EDIT
I tried kimbo's suggestion:
python3 interact.py --model gpt2
Now I am getting this error:
File "interact.py", line 154, in <module>
run()
File "interact.py", line 114, in run
raise ValueError("Interacting with GPT2 requires passing a finetuned model_checkpoint")
ValueError: Interacting with GPT2 requires passing a finetuned model_checkpoint
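From that traceback my guess is that run() bails out when --model gpt2 is used without --model_checkpoint, roughly like the sketch below (my reconstruction from the error message and the usage text; the control flow and the name validate_checkpoint are assumptions, not the repo's actual code):

# Sketch of the check the traceback points to (interact.py, run()).
def validate_checkpoint(model: str, model_checkpoint: str) -> str:
    if model_checkpoint == "":
        if model == "gpt2":
            # There is no default fine-tuned GPT-2 checkpoint, so a path is required,
            # e.g.  python3 interact.py --model gpt2 --model_checkpoint ./models
            raise ValueError("Interacting with GPT2 requires passing a finetuned model_checkpoint")
        # for openai-gpt the script apparently falls back to a downloaded checkpoint
        model_checkpoint = "downloaded-default-checkpoint"  # placeholder
    return model_checkpoint

validate_checkpoint("gpt2", "")  # raises the ValueError shown above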
I also tried running:
python3 interact.py
With that I do not get any error, but it seems to be stuck at this point:
INFO:/home/lramirez/transfer-learning-conv-ai/utils.py:Download dataset from https://s3.amazonaws.com/datasets.huggingface.co/personachat/personachat_self_original.json
INFO:/home/lramirez/transfer-learning-conv-ai/utils.py:Tokenize and encode the dataset
It has been sitting there for about 30 minutes now.
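My assumption about what the "Tokenize and encode the dataset" step is doing is sketched below (the recursive walk and the function name are guesses on my part), which would explain why it can take this long on CPU: every string in the full personachat JSON gets encoded one by one.

# Rough sketch of what I assume "Tokenize and encode the dataset" does in utils.py:
# walk the whole personachat JSON and encode every string with the tokenizer.
def encode_dataset(obj, tokenizer):
    if isinstance(obj, str):
        return tokenizer.convert_tokens_to_ids(tokenizer.tokenize(obj))
    if isinstance(obj, dict):
        return {key: encode_dataset(value, tokenizer) for key, value in obj.items()}
    if isinstance(obj, list):
        return [encode_dataset(item, tokenizer) for item in obj]
    return obj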