Model: albert-xxlarge-v2
ALBERT XXLarge v2 is a model pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. Like all ALBERT models, it is uncased: it makes no difference between english and English.
Disclaimer: the team releasing ALBERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): the model randomly masks part of the input tokens and must predict the masked words from the bidirectional context of the whole sentence.
- Sentence Ordering Prediction (SOP): a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs, as sketched below.
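A minimal sketch of this feature-based setup (the toy sentences, the labels and the scikit-learn classifier are illustrative assumptions, not part of the original card):

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained('albert-xxlarge-v2')

# Toy labeled sentences (illustrative only)
texts = ["I loved this movie.", "This was a waste of time."]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, return_tensors='pt')
    # Use the pooled [CLS] representation as a fixed feature vector per sentence
    features = model(**encoded).pooler_output

clf = LogisticRegression().fit(features.numpy(), labels)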
ALBERT is particular in that it shares its layers across its Transformer: all layers therefore have the same weights. Using repeated layers results in a small memory footprint, but the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, since the forward pass still goes through the same number of (repeated) layers.
This is version 2 of the xxlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training time. It has better results on nearly all downstream tasks.
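One way to see this sharing is to inspect the model configuration; a small sketch (attribute names follow the transformers AlbertConfig API, and the printed values are expectations rather than guaranteed output):

from transformers import AlbertConfig

config = AlbertConfig.from_pretrained('albert-xxlarge-v2')
# The forward pass runs through num_hidden_layers layers, but they all
# reuse the parameters of a single layer group, keeping the model small.
print(config.num_hidden_layers)  # expected: 12
print(config.num_hidden_groups)  # expected: 1 (one shared set of weights)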
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions of the task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering; a fine-tuning sketch follows below. For tasks such as text generation, you should look at models like GPT2.
You can use this model directly with a pipeline for masked language modeling:
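As an illustration, a sequence-classification fine-tuning setup could start like the following sketch (num_labels=2 and the example sentence are assumptions made for the example):

import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
# The classification head on top of the pretrained encoder is freshly
# initialized and is meant to be trained on your labeled downstream data.
model = AlbertForSequenceClassification.from_pretrained('albert-xxlarge-v2', num_labels=2)

inputs = tokenizer("This is a positive example.", return_tensors='pt')
outputs = model(**inputs, labels=torch.tensor([1]))
print(outputs.loss)  # cross-entropy loss to minimize during fine-tuning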
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
   {
      "sequence":"[CLS] hello i'm a modeling model.[SEP]",
      "score":0.05816134437918663,
      "token":12807,
      "token_str":"▁modeling"
   },
   {
      "sequence":"[CLS] hello i'm a modelling model.[SEP]",
      "score":0.03748830780386925,
      "token":23089,
      "token_str":"▁modelling"
   },
   {
      "sequence":"[CLS] hello i'm a model model.[SEP]",
      "score":0.033725276589393616,
      "token":1061,
      "token_str":"▁model"
   },
   {
      "sequence":"[CLS] hello i'm a runway model.[SEP]",
      "score":0.017313428223133087,
      "token":8014,
      "token_str":"▁runway"
   },
   {
      "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
      "score":0.014405295252799988,
      "token":29104,
      "token_str":"▁lingerie"
   }
]
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
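The call returns a standard transformers model output; for example, the per-token hidden states can then be read like this (a usage note rather than part of the original example):

# Final hidden states for every token, shape (batch_size, sequence_length, hidden_size)
last_hidden_state = output.last_hidden_state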
and in TensorFlow:
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
Even if the training data used for this model could be characterized as fairly neutral, the model can produce biased predictions:
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
   {
      "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
      "score":0.029577180743217468,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the man worked as a janitor.[SEP]",
      "score":0.028865724802017212,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
      "score":0.02581118606030941,
      "token":29024,
      "token_str":"▁shoemaker"
   },
   {
      "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
      "score":0.01849772222340107,
      "token":21238,
      "token_str":"▁blacksmith"
   },
   {
      "sequence":"[CLS] the man worked as a lawyer.[SEP]",
      "score":0.01820771023631096,
      "token":3672,
      "token_str":"▁lawyer"
   }
]
>>> unmasker("The woman worked as a [MASK].")
[
   {
      "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
      "score":0.04604868218302727,
      "token":25331,
      "token_str":"▁receptionist"
   },
   {
      "sequence":"[CLS] the woman worked as a janitor.[SEP]",
      "score":0.028220869600772858,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
      "score":0.0261906236410141,
      "token":23386,
      "token_str":"▁paramedic"
   },
   {
      "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
      "score":0.024797942489385605,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the woman worked as a waitress.[SEP]",
      "score":0.024124596267938614,
      "token":13678,
      "token_str":"▁waitress"
   }
]
This bias will also affect all fine-tuned versions of this model.
The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables and headers).
The texts are lowercased and tokenized using SentencePiece with a vocabulary size of 30,000. The inputs of the model are then of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
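This pair format can be checked with the tokenizer itself; a small sketch (the exact decoded string may vary slightly with the tokenizer version):

from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
encoded = tokenizer("Sentence A", "Sentence B")
# Lowercased, SentencePiece-tokenized, wrapped as [CLS] ... [SEP] ... [SEP]
print(tokenizer.decode(encoded["input_ids"]))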
The ALBERT training procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by [MASK].
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
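In the transformers library, the same 15% masking with the 80/10/10 split is implemented by the generic DataCollatorForLanguageModeling; a minimal sketch (the example sentence is illustrative, and the original ALBERT pretraining code may differ in details such as n-gram masking):

from transformers import AlbertTokenizer, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
# mlm_probability=0.15 selects 15% of tokens; of those, 80% become [MASK],
# 10% become a random token and 10% are left unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer("The quick brown fox jumps over the lazy dog.")])
print(batch["input_ids"])  # some positions replaced by the [MASK] id
print(batch["labels"])     # original ids at masked positions, -100 elsewhere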
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|---|---|---|---|---|---|---|
| V2 | | | | | | |
| ALBERT-base | 82.3 | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9 | 66.8 |
| ALBERT-large | 85.7 | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9 | 75.2 |
| ALBERT-xlarge | 87.9 | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4 | 80.7 |
| ALBERT-xxlarge | 90.9 | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8 | 86.8 |
| V1 | | | | | | |
| ALBERT-base | 80.1 | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3 | 64.0 |
| ALBERT-large | 82.4 | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7 | 68.5 |
| ALBERT-xlarge | 85.5 | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4 | 74.8 |
| ALBERT-xxlarge | 91.0 | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9 | 86.5 |
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}