mt5 fine-tuning does not use the GPU (volatile GPU util 0%)
Hi, I am trying to fine-tune an mt5-based model for ko-en translation. I believe CUDA is set up correctly (`torch.cuda.is_available()` returns True), but during training the GPU sits idle, except for a brief period at the start while the dataset is being fetched.
I would like to use the GPU efficiently and would appreciate advice on fine-tuning a translation model. Here are my code and training environment.
import logging
import pandas as pd
from simpletransformers.t5 import T5Model, T5Args
import torch
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
train_df = pd.read_csv("data/enko_train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("data/enko_eval.tsv", sep="\t").astype(str)
train_df["prefix"] = ""
eval_df["prefix"] = ""
model_args = T5Args()
model_args.max_seq_length = 96
model_args.train_batch_size = 64
model_args.eval_batch_size = 32
model_args.num_train_epochs = 10
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 1000
model_args.use_multiprocessing = False
model_args.fp16 = True
model_args.save_steps = 1000
model_args.save_eval_checkpoints = True
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.preprocess_inputs = False
model_args.num_return_sequences = 1
model_args.wandb_project = "MT5 Korean-English Translation"
print("Is cuda available?", torch.cuda.is_available())
model = T5Model("mt5", "google/mt5-base", cuda_device=0, args=model_args)
# Train the model
model.train_model(train_df, eval_data=eval_df)
# Optional: Evaluate the model. We'll test it properly anyway.
results = model.eval_model(eval_df, verbose=True)
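Before digging into the trainer itself, a quick sanity check can rule out a silent CPU fallback. This is a minimal sketch in plain torch (independent of simpletransformers), verifying that a batch-shaped tensor and a layer's parameters actually land on the GPU; the shapes mirror my batch size and max_seq_length but are otherwise arbitrary:

```python
import torch

# Pick the GPU if available, otherwise fall back to CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# A dummy batch, moved to the target device the same way a trainer would.
batch = torch.randn(64, 96).to(device)
print(batch.device)  # shows cuda:0 only when the GPU is actually in use

# The model's parameters must live on the same device as the batch,
# otherwise every step pays a host<->device copy (or runs on CPU entirely).
layer = torch.nn.Linear(96, 96).to(device)
print(next(layer.parameters()).device)
```

If both prints show `cpu` even though `torch.cuda.is_available()` is True, the problem is in how the model/data are being placed, not in the CUDA install.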
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
GPU 0 = Quadro RTX 6000
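For monitoring, instead of eyeballing the `volatile GPU util` column, a polling query can log whether utilization and memory use ever rise during training (standard `nvidia-smi` query options; requires the NVIDIA driver to be installed):

```shell
# Poll GPU utilization and memory usage once per second while training runs.
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```

If memory usage is high but utilization stays near 0%, the model is on the GPU and the bottleneck is likely CPU-side data preparation rather than device placement.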