Model:
mrm8488/spanbert-finetuned-squadv1
SpanBERT, created by Facebook Research, fine-tuned on SQuAD 1.1 for the question-answering downstream task.
SpanBERT is a pre-training method designed to better represent and predict spans of text. See the paper:
SpanBERT: Improving Pre-training by Representing and Predicting Spans
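For intuition, here is a toy sketch of the core idea of span masking: instead of masking isolated tokens, a contiguous run of tokens is masked together. This is only an illustration (the function name and parameters are made up here); the actual SpanBERT procedure masks ~15% of tokens across many spans and adds a span-boundary objective.

```python
import random
import numpy as np

def mask_one_span(tokens, mask_token="[MASK]", max_span_len=10, p=0.2):
    """Toy illustration of span masking: hide a contiguous run of tokens."""
    # Span length sampled from a geometric distribution, clipped at max_span_len,
    # mirroring the sampling scheme described in the SpanBERT paper.
    span_len = min(np.random.geometric(p), max_span_len, len(tokens))
    start = random.randrange(len(tokens) - span_len + 1)
    return tokens[:start] + [mask_token] * span_len + tokens[start + span_len:]

print(mask_one_span("the quick brown fox jumps over the lazy dog".split()))
```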
SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles.
| Dataset | Split | # samples |
|---|---|---|
| SQuAD1.1 | train | 87.7k |
| SQuAD1.1 | eval | 10.6k |
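For reference, a minimal sketch of inspecting these splits with the Hugging Face `datasets` library (assuming the canonical `squad` dataset id on the Hub, which corresponds to SQuAD 1.1):

```python
from datasets import load_dataset

# "squad" on the Hub is SQuAD 1.1; the eval split is named "validation".
squad = load_dataset("squad")

print(squad["train"].num_rows)        # training examples (~87.7k)
print(squad["validation"].num_rows)   # evaluation examples (~10.6k)
print(squad["train"][0]["question"])  # peek at one question
```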
The model was fine-tuned on a Tesla P100 GPU with 25 GB of RAM. The fine-tuning script can be found here
| Metric | Value |
|---|---|
| EM | 85.49 |
| F1 | 91.98 |
```json
{
  "exact": 85.49668874172185,
  "f1": 91.9845699540379,
  "total": 10570,
  "HasAns_exact": 85.49668874172185,
  "HasAns_f1": 91.9845699540379,
  "HasAns_total": 10570,
  "best_exact": 85.49668874172185,
  "best_exact_thresh": 0.0,
  "best_f1": 91.9845699540379,
  "best_f1_thresh": 0.0
}
```
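These are the numbers reported by the official SQuAD 1.1 evaluation. As a hedged sketch, the same EM/F1 metrics can be reproduced with the `evaluate` library's `squad` metric (the example id and answers below are illustrative):

```python
import evaluate

# The "squad" metric implements the official SQuAD 1.1 exact-match/F1 scoring.
squad_metric = evaluate.load("squad")

predictions = [{"id": "q1", "prediction_text": "Manuel Romero"}]
references = [{"id": "q1", "answers": {"text": ["Manuel Romero"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```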
| Model | EM | F1 score |
|---|---|---|
| SpanBERT official repo | - | 92.4* |
| spanbert-finetuned-squadv1 | 85.49 | 91.98 |
Fast usage with pipelines:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="mrm8488/spanbert-finetuned-squadv1",
    tokenizer="mrm8488/spanbert-finetuned-squadv1"
)

qa_pipeline({
    'context': "Manuel Romero has been working hard on the repository huggingface/transformers lately",
    'question': "Who has been working hard on huggingface/transformers lately?"
})
```
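The pipeline returns the predicted answer span together with its score; for this example the expected answer is "Manuel Romero". For lower-level control, here is a hedged sketch using the standard transformers auto classes (not specific to this card):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "mrm8488/spanbert-finetuned-squadv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Who has been working hard on huggingface/transformers lately?"
context = "Manuel Romero has been working hard on the repository huggingface/transformers lately"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))  # expected: "Manuel Romero"
```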
Created by Manuel Romero/@mrm8488 | LinkedIn
Made with ♥ in Spain