Model: M-CLIP/XLM-Roberta-Large-Vit-B-32

Multilingual-clip: XLM-Roberta-Large-Vit-B-32

Multilingual-CLIP extends OpenAI's English text encoders to multiple other languages. This model only contains the multilingual text encoder. The corresponding image model, ViT-B-32, can be retrieved via the instructions found in OpenAI's CLIP repository on Github. A usage example is provided below.

Requirements

To use both the multilingual text encoder and the corresponding image encoder, we need to install the packages multilingual-clip and clip:

pip install multilingual-clip
pip install git+https://github.com/openai/CLIP.git

Usage

Extracting embeddings from the text encoder can be done in the following way:

from multilingual_clip import pt_multilingual_clip
import transformers

texts = [
    'Three blind horses listening to Mozart.',
    'Älgen är skogens konung!',
    'Wie leben Eisbären in der Antarktis?',
    'Вы знали, что все белые медведи левши?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-B-32'

# Load Model & Tokenizer
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

embeddings = model.forward(texts, tokenizer)
print("Text features shape:", embeddings.shape)

Extracting embeddings from the corresponding image encoder:

import torch
import clip
import requests
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)

print("Image features shape:", image_features.shape) 

Evaluation results

None of the M-CLIP models have been extensively evaluated, but testing them on Txt2Img retrieval on the humanly translated MS-COCO dataset gives the following R@10 results:

Name                     En     De     Es     Fr     Zh     It     Pl     Ko     Ru     Tr     Jp
OpenAI CLIP Vit-B-32     90.3   -      -      -      -      -      -      -      -      -      -
OpenAI CLIP Vit-L-14     91.8   -      -      -      -      -      -      -      -      -      -
OpenAI ViT-B-16+         94.3   -      -      -      -      -      -      -      -      -      -
LABSE Vit-L-14           91.6   89.6   89.5   89.9   88.9   90.1   89.8   80.8   85.5   89.8   73.9
XLM-R Large Vit-B-32     91.8   88.7   89.1   89.4   89.3   89.8   91.4   82.1   86.1   88.8   81.0
XLM-R Vit-L-14           92.4   90.6   91.0   90.0   89.7   91.1   91.3   85.2   85.8   90.3   81.9
XLM-R Large Vit-B-16+    95.0   93.0   93.6   93.1   94.0   93.1   94.4   89.0   90.0   93.0   84.2
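
R@10 here is Recall@10: the fraction of caption queries for which the paired image appears among the ten most similar images. The sketch below shows how such a text-to-image retrieval score can be computed; it is only an illustration under the assumption of L2-normalized text_embs and img_embs tensors where caption i is paired with image i, not the evaluation code behind the table.

import torch

def recall_at_k(text_embs: torch.Tensor, img_embs: torch.Tensor, k: int = 10) -> float:
    # Cosine similarity between every caption and every image
    # (both inputs are assumed to be L2-normalized)
    sims = text_embs @ img_embs.T                       # [num_captions, num_images]
    # Indices of the k most similar images for each caption
    topk = sims.topk(k, dim=-1).indices                 # [num_captions, k]
    # Ground truth: caption i is paired with image i
    targets = torch.arange(text_embs.size(0), device=text_embs.device).unsqueeze(-1)
    # Fraction of captions whose paired image appears among the top k
    return (topk == targets).any(dim=-1).float().mean().item()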

Training/Model details

For more details about the model training and data, see the model card.