Model:
hiiamsid/sentence_similarity_spanish_es
This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
Using this model becomes easy when you have sentence-transformers installed:
pip install -U sentence-transformers
Then you can use the model like this:
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es')
embeddings = model.encode(sentences)
print(embeddings)
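Since the model targets Spanish sentence similarity, a natural next step is scoring pairs of Spanish sentences with cosine similarity via sentence_transformers.util.cos_sim. A minimal sketch; the Spanish example sentences below are made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es')

# Hypothetical Spanish sentences, chosen only to illustrate the similarity scores
sentences = [
    'Me gusta leer libros por la noche',
    'Disfruto de la lectura antes de dormir',
    'El coche rojo está aparcado fuera',
]

embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities between all sentence embeddings
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # the two reading-related sentences should score higher with each other
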
Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_spanish_es')
model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_spanish_es')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
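As a sanity check (not part of the original card), this manual pipeline should agree with the sentence-transformers wrapper, since the wrapper applies the same mean pooling (see the full model architecture at the end of this card). A minimal sketch, reusing `sentences` and `sentence_embeddings` from the snippet above:

from sentence_transformers import SentenceTransformer
import torch

st_model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es')
st_embeddings = st_model.encode(sentences, convert_to_tensor=True)

# The two pipelines should agree up to floating-point tolerance
print(torch.allclose(sentence_embeddings, st_embeddings, atol=1e-5))
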
Evaluation results:

cosine_pearson: 0.8280372842978689
cosine_spearman: 0.8232689765056079
euclidean_pearson: 0.81021993884437
euclidean_spearman: 0.8087904592393836
manhattan_pearson: 0.809645390126291
manhattan_spearman: 0.8077035464970413
dot_pearson: 0.7803662255836028
dot_spearman: 0.7699607641618339
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
The model was trained with the following parameters:
DataLoader:
torch.utils.data.dataloader.DataLoader of length 360, with parameters:
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
Loss:
sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss
Parameters of the fit() method:
{
    "callback": null,
    "epochs": 4,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 144,
    "weight_decay": 0.01
}
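Put together, a training run with this configuration would look roughly like the sketch below. It assumes the classic sentence-transformers fit() API; the training pairs are hypothetical placeholders, since the original card does not include its dataset, and the EmbeddingSimilarityEvaluator is omitted because it needs a dev set that is likewise not provided here.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('hiiamsid/sentence_similarity_spanish_es')

# Hypothetical training pairs with gold similarity scores in [0, 1];
# the actual dataset used for this model is not part of the card
train_examples = [
    InputExample(texts=['El gato duerme', 'El felino descansa'], label=0.9),
    InputExample(texts=['El gato duerme', 'La bolsa subió hoy'], label=0.1),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    scheduler='WarmupLinear',
    warmup_steps=144,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
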
Full model architecture:
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)