I have:

from transformers import XLNetTokenizer, XLNetForQuestionAnswering
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnswering.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode("What is my name?", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]  # with start/end positions supplied, the first output is the loss

print(outputs)
print(loss)

According to the documentation, this produces the following output:

(tensor(2.3008, grad_fn=<DivBackward0>),)
tensor(2.3008, grad_fn=<DivBackward0>)

But what I'd like, if possible, is an actual answer.
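For reference, if start_positions and end_positions are dropped, the model returns raw span-prediction scores instead of a loss. Below is a minimal sketch of decoding those scores into a text span, using the XLNetForQuestionAnsweringSimple head and a made-up context sentence (both assumptions on my part). Note that xlnet-base-cased has no fine-tuned question-answering head, so the predicted span is essentially random:

from transformers import XLNetTokenizer, XLNetForQuestionAnsweringSimple
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnsweringSimple.from_pretrained('xlnet-base-cased')  # QA head is randomly initialised

# Encode the question together with a made-up context passage (illustration only)
input_ids = torch.tensor(
    tokenizer.encode("What is my name?", "My name is Sarah.", add_special_tokens=True)
).unsqueeze(0)

outputs = model(input_ids)
start_logits, end_logits = outputs[0], outputs[1]

# Take the most likely start/end token indices and decode that span back to text
start = torch.argmax(start_logits, dim=1).item()
end = torch.argmax(end_logits, dim=1).item()
print(tokenizer.decode(input_ids[0, start:end + 1].tolist()))

Because the QA head is untrained, the printed span will usually be nonsense (or empty if the predicted end precedes the start), which is why a checkpoint fine-tuned for question answering is needed to get a real answer.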


1 Answer


Thanks to Joe Davison for providing the answer on Twitter:

from transformers import pipeline

qa = pipeline('question-answering')
response = qa(context='I like to eat apples, but hate bananas.',
              question='What do I like?')

print(response)

which gives the following response:

{'score': 0.282511100858045, 'start': 31, 'end': 38, 'answer': 'bananas.'}

Not quite right, but at least the score is low.
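For comparison, the pipeline is essentially doing span prediction under the hood. Here is a minimal sketch of the same idea without the pipeline wrapper, assuming the SQuAD-fine-tuned checkpoint bert-large-uncased-whole-word-masking-finetuned-squad and simple argmax decoding (both my assumptions, not part of the original answer):

from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

# A checkpoint fine-tuned on SQuAD, so its question-answering head gives meaningful spans
tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
model = AutoModelForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')

question = 'What do I like?'
context = 'I like to eat apples, but hate bananas.'

inputs = tokenizer.encode_plus(question, context, return_tensors='pt')
outputs = model(**inputs)
start_logits, end_logits = outputs[0], outputs[1]

# Decode the highest-scoring start/end token span back into text
start = torch.argmax(start_logits, dim=1).item()
end = torch.argmax(end_logits, dim=1).item()
answer = tokenizer.decode(inputs['input_ids'][0, start:end + 1].tolist())
print(answer)

The pipeline adds extra logic on top of this (scoring valid start/end combinations rather than taking independent argmaxes, and mapping tokens back to character offsets), which is where its score, start and end fields come from.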

Answered 2020-02-17T13:50:02.673