Model:

facebook/mask2former-swin-base-IN21k-ade-semantic


Mask2Former

Mask2Former model trained on ADE20k semantic segmentation (base-IN21k version, Swin backbone). It was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository.

Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all three tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, MaskFormer, both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of on whole masks.
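
To make the mask-classification paradigm concrete, below is a minimal sketch of how per-query class scores and per-query masks can be combined into a semantic segmentation map. The tensor sizes and random inputs are placeholders for illustration only; this is roughly the combination the processor's semantic post-processing performs under the hood, not the exact library code.

import torch

# illustrative sizes only: 100 queries, 150 ADE20k classes, a small output resolution
batch_size, num_queries, num_labels, height, width = 1, 100, 150, 128, 128

# each query predicts a class distribution (including a "no object" class) ...
class_queries_logits = torch.randn(batch_size, num_queries, num_labels + 1)
# ... and a binary mask over the image
masks_queries_logits = torch.randn(batch_size, num_queries, height, width)

# drop the "no object" class and turn logits into probabilities
class_probs = class_queries_logits.softmax(dim=-1)[..., :-1]  # (batch, queries, classes)
mask_probs = masks_queries_logits.sigmoid()                   # (batch, queries, height, width)

# weight every query's mask by its class scores, sum over queries,
# and take the per-pixel argmax to obtain a semantic segmentation map
segmentation = torch.einsum("bqc,bqhw->bchw", class_probs, mask_probs)
semantic_map = segmentation.argmax(dim=1)                     # (batch, height, width)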

Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the model hub to look for other fine-tuned versions on a task that interests you.

How to use

Here is how to use this model:

import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation


# load Mask2Former fine-tuned on ADE20k semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
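
As a small follow-up sketch (assuming it runs right after the snippet above), you can map the predicted IDs back to ADE20k class names via the id2label mapping stored in the model's config:

# list which ADE20k classes appear in the predicted semantic map
for label_id in predicted_semantic_map.unique().tolist():
    print(label_id, model.config.id2label[label_id])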

For more code examples, we refer to the documentation.