mBERTu

A multilingual model for Maltese, pre-trained on Korpus Malti v4.0 using multilingual BERT as the initial checkpoint.
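
As a minimal usage sketch, the model can be loaded with the Hugging Face `transformers` library for masked-token prediction. The model identifier `MLRS/mBERTu` is an assumption not stated in this card; substitute the actual Hub ID if it differs.

```python
# Minimal sketch: masked-token prediction with mBERTu via transformers.
# Assumes the model is published on the Hugging Face Hub as "MLRS/mBERTu"
# (this identifier is an assumption, not stated in this card).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MLRS/mBERTu")

# Maltese example: "Malta is a [MASK] in the Mediterranean."
# BERT-style models use the [MASK] token for the blank.
predictions = unmasker("Malta hija [MASK] fil-Mediterran.")
for p in predictions:
    print(p["token_str"], p["score"])
```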

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available at https://mlrs.research.um.edu.mt/.

Citation

This work was first presented in Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese. Cite it as follows:

@inproceedings{BERTu,
    title = "Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese",
    author = "Micallef, Kurt  and
              Gatt, Albert  and
              Tanti, Marc  and
              van der Plas, Lonneke  and
              Borg, Claudia",
    booktitle = "Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing",
    month = jul,
    year = "2022",
    address = "Hybrid",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.deeplo-1.10",
    doi = "10.18653/v1/2022.deeplo-1.10",
    pages = "90--101",
}