Model:
lexlms/legal-roberta-base
This model is a version of RoBERTa base ( https://huggingface.co/roberta-base ) further pre-trained on the LeXFiles corpus ( https://huggingface.co/datasets/lexlms/lexfiles ).
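Since this is a standard RoBERTa-style masked language model, it can be queried directly with the transformers fill-mask pipeline. The sketch below is a minimal illustration; the example sentence is invented for demonstration and does not come from the LeXFiles evaluation suite:

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub via the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="lexlms/legal-roberta-base")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for pred in fill_mask("The court granted the motion for summary <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```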
LexLM (Base/Large) are our newly released RoBERTa models. We followed a series of best practices in language model development.
The model was trained on the LeXFiles corpus ( https://huggingface.co/datasets/lexlms/lexfiles ). For evaluation results, please refer to our work, "LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development" (Chalkidis* et al., 2023).
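For reference, the corpus can be inspected with the datasets library. This is only a sketch: LeXFiles is organized into sub-corpora, and both the configuration name "eu_legislation" and the "text" field below are assumptions, not taken from the dataset card:

```python
from itertools import islice
from datasets import load_dataset

# "eu_legislation" is a hypothetical sub-corpus name; consult the dataset
# card for the actual configuration names.
corpus = load_dataset("lexlms/lexfiles", "eu_legislation", streaming=True)

# Stream a few documents without downloading the full corpus; the "text"
# field name is likewise an assumption.
for doc in islice(corpus["train"], 3):
    print(doc["text"][:300])
```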
The following training results were logged during pre-training:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.0389 | 0.05 | 50000 | 0.9802 |
| 0.9685 | 0.1 | 100000 | 0.9021 |
| 0.9337 | 0.15 | 150000 | 0.8752 |
| 0.9106 | 0.2 | 200000 | 0.8558 |
| 0.8981 | 0.25 | 250000 | 0.8512 |
| 0.8813 | 1.03 | 300000 | 0.8203 |
| 0.8899 | 1.08 | 350000 | 0.8286 |
| 0.8581 | 1.13 | 400000 | 0.8148 |
| 0.856 | 1.18 | 450000 | 0.8141 |
| 0.8527 | 1.23 | 500000 | 0.8034 |
| 0.8345 | 2.02 | 550000 | 0.7763 |
| 0.8342 | 2.07 | 600000 | 0.7862 |
| 0.8147 | 2.12 | 650000 | 0.7842 |
| 0.8369 | 2.17 | 700000 | 0.7766 |
| 0.814 | 2.22 | 750000 | 0.7737 |
| 0.8046 | 2.27 | 800000 | 0.7692 |
| 0.7941 | 3.05 | 850000 | 0.7538 |
| 0.7956 | 3.1 | 900000 | 0.7562 |
| 0.8068 | 3.15 | 950000 | 0.7512 |
| 0.8066 | 3.2 | 1000000 | 0.7516 |
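Validation loss falls from 0.9802 at 50k steps to 0.7516 at 1M steps. Assuming the reported values are per-token cross-entropy in nats (the usual convention for masked-LM training), exp(loss) converts them into a rough pseudo-perplexity, as in this small sketch:

```python
import math

# Convert masked-LM validation loss (per-token cross-entropy, in nats)
# into pseudo-perplexity: ppl = exp(loss). Values from the table above.
for step, loss in [(50_000, 0.9802), (1_000_000, 0.7516)]:
    print(f"step {step:>9,}: loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")
```

That is, pseudo-perplexity improves from about 2.66 down to about 2.12 over the course of continued pre-training.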
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
    title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
    author = "Chalkidis*, Ilias and
      Garneau*, Nicolas and
      Goanta, Catalina and
      Katz, Daniel Martin and
      Søgaard, Anders",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.07507",
}