Model:
ltg/norbert3-large
This is the official release of a new generation of NorBERT language models. For details about the model, please read the paper NorBench — A Benchmark for Norwegian Language Models.
The model currently has to be loaded with a custom wrapper from modeling_norbert.py, so you should pass trust_remote_code=True when loading it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert3-large")
model = AutoModelForMaskedLM.from_pretrained("ltg/norbert3-large", trust_remote_code=True)

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("Nå ønsker de seg en[MASK] bolig.", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)

# should output: '[CLS] Nå ønsker de seg en ny bolig.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
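The `torch.where` call in the snippet above merges the model's predictions back into the input: positions equal to `mask_id` take the argmax of the logits, while all other positions keep their original token ids. A minimal sketch with dummy tensors (hypothetical toy ids, no model download required) illustrates the mechanics:

```python
import torch

# Toy example: token id 4 plays the role of [MASK].
mask_id = 4
input_ids = torch.tensor([[1, 2, 4, 3]])  # shape (batch=1, seq_len=4)

# Fake logits over a vocabulary of 10 tokens; the "model" strongly
# predicts token 7 at the masked position (index 2).
logits = torch.zeros(1, 4, 10)
logits[0, 2, 7] = 5.0

# Keep original ids everywhere except masked positions, which take argmax.
output_ids = torch.where(input_ids == mask_id, logits.argmax(-1), input_ids)
print(output_ids.tolist())  # [[1, 2, 7, 3]]
```

The same pattern scales to real logits of shape (batch, seq_len, vocab_size), as produced by the model call in the example above.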
The following classes are currently implemented: AutoModel, AutoModelForMaskedLM, AutoModelForSequenceClassification, AutoModelForTokenClassification, AutoModelForQuestionAnswering, and AutoModelForMultipleChoice.
```bibtex
@inproceedings{samuel-etal-2023-norbench,
    title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
    author = "Samuel, David  and
      Kutuzov, Andrey  and
      Touileb, Samia  and
      Velldal, Erik  and
      {\O}vrelid, Lilja  and
      R{\o}nningstad, Egil  and
      Sigdel, Elina  and
      Palatkina, Anna",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.61",
    pages = "618--633",
    abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.",
}
```