Below is the code that configures TrainingArguments from the Hugging Face transformers library to fine-tune a GPT-2 language model.
from transformers import (
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

training_args = TrainingArguments(
    output_dir="./gpt2-language-model",  # the output directory
    num_train_epochs=100,                # number of training epochs
    per_device_train_batch_size=8,       # batch size for training
    per_device_eval_batch_size=8,        # batch size for evaluation
    save_steps=100,                      # a checkpoint is saved every 100 steps
    warmup_steps=500,                    # number of warmup steps for the learning rate scheduler
    prediction_loss_only=True,
    metric_for_best_model="eval_loss",
    load_best_model_at_end=True,
    evaluation_strategy="epoch",
    learning_rate=0.00004,               # learning rate
)

# stop if eval_loss has not improved for 3 consecutive evaluations
early_stop_callback = EarlyStoppingCallback(early_stopping_patience=3)

trainer = Trainer(
    model=gpt2_model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    callbacks=[early_stop_callback],
)
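For context, the objects passed to Trainer above (gpt2_model, data_collator, train_dataset, test_dataset) were created roughly as follows; the file paths and block size here are placeholders rather than my exact values:

from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TextDataset,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
gpt2_model = GPT2LMHeadModel.from_pretrained("gpt2")

# Placeholder corpus files; block_size controls the length of each training example.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
test_dataset = TextDataset(tokenizer=tokenizer, file_path="test.txt", block_size=128)

# mlm=False gives the standard causal language-modeling objective used by GPT-2.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)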
The number of epochs is 100, the learning_rate is 0.00004, and the early-stopping patience is 3.
The model ran for 5 of the 100 epochs, and I noticed that the change in the loss value was negligible. The latest checkpoint was saved as checkpoint-latest.
Can I now change the learning_rate from 0.00004 to, say, 0.01 and resume training from the latest saved checkpoint, checkpoint-latest? Would doing so be effective, or should I start training from scratch with the new learning_rate value?
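In other words, what I have in mind is roughly the following sketch (the checkpoint path is a placeholder for the most recent checkpoint directory under output_dir); part of my question is whether the optimizer/scheduler state restored from the checkpoint will actually respect the new learning_rate:

# Change the learning rate on the existing arguments; everything else stays the same.
training_args.learning_rate = 0.01

trainer = Trainer(
    model=gpt2_model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    callbacks=[early_stop_callback],
)

# resume_from_checkpoint reloads the model weights as well as the saved
# optimizer and scheduler state from the given checkpoint directory.
trainer.train(resume_from_checkpoint="./gpt2-language-model/checkpoint-latest")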