Pretrained BERT Medium Language Model for Arabic
If you use this model in your work, please cite this paper:
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
The arabic-bert-medium model was pretrained on roughly 8.2 billion words: the Arabic portion of the OSCAR corpus (filtered from Common Crawl) and a recent dump of Arabic Wikipedia, together with other Arabic resources, totaling about 95 GB of text.
You can use this model by installing torch or tensorflow together with the Huggingface transformers library. You can initialize it like this:
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the pretrained tokenizer and masked-LM model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-medium-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-medium-arabic")
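
As a quick sanity check, here is a minimal sketch of masked-word prediction with this checkpoint using the transformers fill-mask pipeline; the Arabic example sentence is an arbitrary illustration, not from the model card:

from transformers import pipeline

# Build a fill-mask pipeline on top of the same checkpoint.
fill_mask = pipeline("fill-mask", model="asafaya/bert-medium-arabic")

# BERT-style models use the [MASK] token; the sentence below is an
# arbitrary example, roughly "Arabic is a [MASK] language".
for prediction in fill_mask("العربية لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))

Each prediction is a candidate token for the masked position together with its score.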
For more details about model performance or any other questions, please refer to Arabic-BERT.
Thanks to Google for providing free TPUs for the training, and thanks to Huggingface for hosting this model on their servers 😊