Model:
NbAiLab/nb-sbert-base
NB-SBERT-BASE is a model trained on a machine translated version of the MNLI dataset, starting from a pre-trained base model.
The model maps sentences and paragraphs to a 768-dimensional dense vector space. This vector can be used for tasks like clustering and semantic search. Below are some examples of how to use the model. The simplest approach is to measure the cosine distance between two sentences directly. Sentences that are close in meaning will have a small cosine distance and a similarity close to 1. The model is trained so that similar sentences in different languages should also be close to each other. Ideally, an English-Norwegian sentence pair should have a high similarity.
As mentioned above, using the sentence-transformers library makes working with these models very convenient:
pip install -U sentence-transformers
Then you can use the model like this:
from sentence_transformers import SentenceTransformer, util

sentences = ["This is a Norwegian boy", "Dette er en norsk gutt"]

model = SentenceTransformer('NbAiLab/nb-sbert-base')
embeddings = model.encode(sentences)
print(embeddings)

# Compute cosine-similarities with sentence transformers
cosine_scores = util.cos_sim(embeddings[0], embeddings[1])
print(cosine_scores)

# Compute cosine-similarities with SciPy
from scipy import spatial
scipy_cosine_scores = 1 - spatial.distance.cosine(embeddings[0], embeddings[1])
print(scipy_cosine_scores)

# Both should give 0.8250 in the example above.
Even without the sentence-transformers library, you can still use the model. First, pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ["This is a Norwegian boy", "Dette er en norsk gutt"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NbAiLab/nb-sbert-base')
model = AutoModel.from_pretrained('NbAiLab/nb-sbert-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(embeddings)

# Compute cosine-similarities with SciPy
from scipy import spatial
scipy_cosine_scores = 1 - spatial.distance.cosine(embeddings[0], embeddings[1])
print(scipy_cosine_scores)

# This should give 0.8250 in the example above.
SetFit is a method that uses sentence transformers to address one of the main problems all NLP researchers face: too few labeled training examples. 'nb-sbert-base' can be plugged directly into the SetFit library. Please see this tutorial for details on how to use this technique.
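As a rough sketch (not taken from the tutorial itself), this is how nb-sbert-base could be dropped into SetFit, assuming the setfit library's SetFitModel / SetFitTrainer API and a small, invented labeled dataset with "text" and "label" columns; newer setfit versions expose the same flow through a Trainer class:

from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot training data with "text" and "label" columns
train_dataset = Dataset.from_dict({
    "text": ["Dette er en norsk gutt", "Dette er en svensk jente", "A red house", "A blue car"],
    "label": [0, 0, 1, 1],
})

# Use nb-sbert-base as the sentence-transformer body of the SetFit model
model = SetFitModel.from_pretrained("NbAiLab/nb-sbert-base")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
)
trainer.train()

# Predict labels for new sentences
print(model.predict(["Et rødt hus", "Dette er en norsk gutt"]))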
The model can be used to extract keywords from text. The basic technique is to find the words that are most similar to the document. There are various frameworks for doing this. An easy way is to use the KeyBERT library. This example shows how it can be done.
pip install keybert
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer

sentence_model = SentenceTransformer("NbAiLab/nb-sbert-base")
kw_model = KeyBERT(model=sentence_model)

doc = """
De første nasjonale bibliotek har sin opprinnelse i kongelige samlinger eller en annen framstående myndighet eller statsoverhode.

Et av de første planene for et nasjonalbibliotek i England ble fremmet av den walisiske matematikeren og mystikeren John Dee som i 1556 presenterte en visjonær plan om et nasjonalt bibliotek for gamle bøker, manuskripter og opptegnelser for dronning Maria I av England. Hans forslag ble ikke tatt til følge.
"""

kw_model.extract_keywords(doc, stop_words=None)
# [('nasjonalbibliotek', 0.5242), ('bibliotek', 0.4342), ('samlinger', 0.3334), ('statsoverhode', 0.33), ('manuskripter', 0.3061)]
The KeyBERT homepage provides several other interesting examples: combining keyword extraction with stop words, extracting longer phrases, or directly producing highlighted text.
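As a rough illustration (not taken from the KeyBERT documentation itself), the same kw_model and doc from the example above can be reused with standard extract_keywords arguments to get longer keyphrases or highlighted output:

# Extract keyphrases of one to three words instead of single keywords
kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 3), stop_words=None)

# Print the document with the extracted keywords highlighted
kw_model.extract_keywords(doc, highlight=True)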
Analyzing a group of documents and determining the topics in them has many uses. BERTopic combines the power of sentence transformers with c-TF-IDF to create clusters of topics that are easily interpretable.
There is not room here to explain topic modelling in depth. Instead, we recommend taking a look at the link above as well as the documentation. To use the Norwegian nb-sbert-base, you need to add the following:
topic_model = BERTopic(embedding_model='NbAiLab/nb-sbert-base').fit(docs)
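A slightly fuller sketch, assuming a corpus of documents is already available (the load helper below is hypothetical, and BERTopic generally needs at least a few hundred documents to form meaningful clusters):

from bertopic import BERTopic

# Hypothetical helper returning a list of (Norwegian) document strings
docs = load_norwegian_documents()

topic_model = BERTopic(embedding_model='NbAiLab/nb-sbert-base')
topics, probs = topic_model.fit_transform(docs)

# Inspect the discovered topics
print(topic_model.get_topic_info())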
Another common use case for SentenceTransformers models is finding relevant documents or passages of documents given a specific query text. In this scenario, there is usually a vector database that stores the embedding vectors for all the documents. At run time, an embedding is generated for the query text and compared efficiently against the vector database.
While production-grade vector databases exist, the autofaiss library is a quick way to experiment:
pip install autofaiss sentence-transformers
from autofaiss import build_index
import numpy as np
from sentence_transformers import SentenceTransformer, util

sentences = ["This is a Norwegian boy", "Dette er en norsk gutt", "A red house"]

model = SentenceTransformer('NbAiLab/nb-sbert-base')
embeddings = model.encode(sentences)

index, index_infos = build_index(embeddings, save_on_disk=False)

# Search for the closest matches
query = model.encode(["A young boy"])
_, index_matches = index.search(query, 1)
print(index_matches)
Evaluation results on the sts-test dataset:
| | Pearson | Spearman |
|---|---|---|
| Cosine Similarity | 0.8275 | 0.8245 |
| Manhattan Distance | 0.8193 | 0.8182 |
| Euclidean Distance | 0.8190 | 0.8180 |
| Dot Product Similarity | 0.8039 | 0.7951 |
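For reference, sentence-transformers ships an EmbeddingSimilarityEvaluator (the evaluator named in the fit() parameters below) that computes these correlation metrics. A minimal sketch of running such an evaluation, with hypothetical sentence pairs and invented gold scores:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical evaluation pairs with gold similarity scores in [0, 1]
sentences1 = ["This is a Norwegian boy", "A red house", "A red house"]
sentences2 = ["Dette er en norsk gutt", "Et rødt hus", "This is a Norwegian boy"]
gold_scores = [0.95, 0.95, 0.10]

model = SentenceTransformer('NbAiLab/nb-sbert-base')
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name='sts-test')

# The return format (single score vs. dict of metrics) depends on the sentence-transformers version
print(evaluator(model))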
The model was trained with the following parameters:
DataLoader:
sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader of length 16471, with parameters:
{'batch_size': 32}
Loss:
sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss, with parameters:
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
Parameters of the fit() method:
{ "epochs": 1, "evaluation_steps": 1647, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1648, "weight_decay": 0.01 }
Full model architecture:
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
The model was trained by Rolv-Arild Braaten and Per Egil Kummervold. The documentation was written by Javier de la Rosa, Rolv-Arild Braaten, and Per Egil Kummervold.