Model: racai/distilbert-base-romanian-uncased

This repository contains the uncased Romanian DistilBERT (named Distil-RoBERT-base in the paper). The teacher model used for distillation is readerbench/RoBERT-base.

The model was introduced in this paper (arXiv:2112.12650). The accompanying code can be found here.
```python
from transformers import AutoTokenizer, AutoModel

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")

# tokenize a test sentence
input_ids = tokenizer.encode("aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")

# run the tokens through the model
outputs = model(input_ids)
print(outputs)
```
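The call above returns the raw hidden states. If a single sentence vector is needed, one common approach (not prescribed by this card, so treat it as an assumption) is to mean-pool the last hidden state over the token dimension:

```python
import torch

# mean-pool the last hidden state into one sentence embedding
# (pooling strategy is an assumption, not specified by the model card)
with torch.no_grad():
    outputs = model(input_ids)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, 768)
print(sentence_embedding.shape)
```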
The model is 35% smaller than its teacher, RoBERT-base:
| Model | Size (MB) | Params (Millions) |
|---|---|---|
| RoBERT-base | 441 | 114 |
| distilbert-base-romanian-uncased | 282 | 72 |
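The parameter count in the table can be checked directly from the loaded model; a quick sketch using the `model` object from the example above:

```python
# count the parameters of the distilled model loaded earlier
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")  # roughly 72M for the uncased DistilBERT
```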
We evaluated the model on 5 Romanian tasks, comparing it with its teacher, RoBERT-base:
| Model | UPOS | XPOS | NER | SAPN | SAR | DI | STS |
|---|---|---|---|---|---|---|---|
| RoBERT-base | 98.02 | 97.15 | 85.14 | 98.30 | 79.40 | 96.07 | 81.18 |
| distilbert-base-romanian-uncased | 97.12 | 95.79 | 83.11 | 98.01 | 79.58 | 96.11 | 79.80 |
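The card does not include fine-tuning code for these tasks. As an illustration only, a token-level task such as UPOS or NER could start from a classification head like the one below; the label count and the rest of the training setup are assumptions, not taken from the paper:

```python
from transformers import AutoModelForTokenClassification

# hypothetical starting point for a tagging task (e.g. UPOS with 17 universal tags);
# num_labels and the fine-tuning procedure are assumptions, not part of this card
tagger = AutoModelForTokenClassification.from_pretrained(
    "racai/distilbert-base-romanian-uncased",
    num_labels=17,
)
```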
Citation:

```bibtex
@article{avram2021distilling,
  title={Distilling the Knowledge of Romanian BERTs Using Multiple Teachers},
  author={Andrei-Marius Avram and Darius Catrina and Dumitru-Clementin Cercel and Mihai Dascălu and Traian Rebedea and Vasile Păiş and Dan Tufiş},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.12650}
}
```