
I am working on a sarcasm dataset, and my model is set up as follows:

First I tokenize my input text:

import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer

PRETRAINED_MODEL_NAME = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)

# device was used below but never defined; define it here
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

MAX_LEN = 100

Then I define the class for my dataset:

class SentimentDataset(Dataset):
    def __init__(self,dataframe):
        self.dataframe = dataframe

    def __len__(self):
        return len(self.dataframe)
    
    def __getitem__(self, idx):
        df = self.dataframe.iloc[idx]

        text = [df["comment"]]
        label = [df["label"]]

        data_t = tokenizer(text,max_length = MAX_LEN, return_tensors="pt", padding="max_length", truncation=True)
        label_t = torch.LongTensor(label)

        return {
             "input_ids":data_t["input_ids"].squeeze().to(device),
             "label": label_t.squeeze().to(device),
        }

Then I create an object of my class for the training set and set the other parameters:

train_dataset = SentimentDataset(train_df)
BATCH_SIZE = 32
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
from transformers import AutoModelForSequenceClassification, AutoConfig

# For loading the model structure and pretrained weights:
model = AutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_NAME).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-5)
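The commented-out `scheduler.step()` in the training loop below suggests a learning-rate scheduler was planned but never defined. A minimal sketch using `get_linear_schedule_with_warmup` from transformers follows; the warm-up length and step count are illustrative assumptions, and a dummy parameter stands in for `model.parameters()` so the snippet is self-contained:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Dummy parameter in place of model.parameters(); in the real script you
# would pass the same optimizer defined above.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=2e-5)

num_training_steps = 100  # in the real loop: EPOCHS * len(train_dataloader)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,  # illustrative warm-up length
    num_training_steps=num_training_steps,
)

lrs = []
for _ in range(num_training_steps):
    optimizer.step()      # step the optimizer first, then the scheduler
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])
# The LR ramps up to 2e-5 over the first 10 steps, then decays linearly to 0.
```

Whether to call `scheduler.step()` per batch (as here) or per epoch depends on how `num_training_steps` is counted; per-batch stepping matches the `num_training_steps` computation in the comment.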

Then I train on my data using the DataLoader:

import numpy as np
from sklearn.metrics import classification_report

EPOCHS = 5
for epoch in range(EPOCHS):
    print("\n******************\n epoch=",epoch)
    i = 0
    logits_list = []
    labels_list = []
    for batch in train_dataloader:
        i += 1
        optimizer.zero_grad()
        output_model = model(input_ids = batch["input_ids"], labels = batch["label"])
        loss = output_model.loss
        logits = output_model.logits
        logits_list.append(logits.cpu().detach().numpy())
        labels_list.append(batch["label"].cpu().detach().numpy())
        loss.backward()
        optimizer.step()
        if i % 50 == 0:
            print("training loss:", loss.item())
    # scheduler.step()
    logits_list = np.concatenate(logits_list, axis=0)
    labels_list = np.concatenate(labels_list, axis=0)
    preds = np.argmax(logits_list, axis=1)
    print(classification_report(labels_list, preds))

My question is: how can I change the number of self-attention layers and the number of multi-head attention heads in my model?

1 Answer
The short answer is: you can't.

You are using a pretrained model:

model = AutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_NAME).to(device)

You can't easily change a pretrained model. It is possible to modify one, but it is definitely not straightforward. You can either download a different pretrained model, or train whatever architecture you like from scratch (which may cost far too much time and compute). The only thing you can change easily is the model's "depth": you can drop some of the transformer blocks.
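Both options can be sketched with the Hugging Face transformers API. This builds a `RobertaConfig` directly (no download), so the layer/head counts shown are illustrative choices, and the resulting model has random weights that would need training from scratch:

```python
from transformers import RobertaConfig, RobertaForSequenceClassification

# Option 1: a custom architecture, trained from scratch. The config lets
# you pick the number of self-attention layers and attention heads; the
# weights are randomly initialized, not pretrained.
config = RobertaConfig(
    num_hidden_layers=4,    # instead of roberta-base's 12
    num_attention_heads=6,  # must divide hidden_size evenly (768 / 6 = 128)
    hidden_size=768,
    num_labels=2,
)
model = RobertaForSequenceClassification(config)
print(len(model.roberta.encoder.layer))  # 4

# Option 2: reduce a model's "depth" by dropping transformer blocks.
# Slicing the encoder's ModuleList keeps the weights of the remaining
# layers; on a pretrained model this preserves their pretrained weights.
model.roberta.encoder.layer = model.roberta.encoder.layer[:2]
model.config.num_hidden_layers = 2
print(len(model.roberta.encoder.layer))  # 2
```

Note that the head count cannot be changed on an already-pretrained checkpoint, because the attention weight matrices are shaped and trained for the original number of heads; only the depth trick transfers pretrained weights.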

Answered 2021-11-26T15:23:25.343