Masked-attention Mask Transformer for Universal Image Segmentation
Image segmentation groups pixels with different semantics, e.g.,
category or instance membership; each choice of semantics defines a task.
While only the semantics of each task differ, current research focuses on
designing specialized architectures for each task. We present Masked-attention
Mask Transformer (Mask2Former), a new architecture capable of addressing any
image segmentation task (panoptic, instance or semantic). Its key components
include masked attention, which extracts localized features by constraining
cross-attention within predicted mask regions. In addition to reducing the
research effort by at least three times, it outperforms the best specialized
architectures by a significant margin on four popular datasets. Most notably,
Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on
COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7
mIoU on ADE20K).