MaIL: A Unified Mask-Image-Language Trimodal Network for Referring Image Segmentation
Referring image segmentation is a typical multi-modal task that aims to generate a binary mask for the referent described by a given language expression.
Prior works adopt a bimodal solution, taking images and language as two modalities within an encoder-fusion-decoder pipeline.
However, this pipeline is sub-optimal for the target task for two reasons.
First, it fuses only the high-level features produced separately by uni-modal encoders, which hinders sufficient cross-modal learning.
Second, the uni-modal encoders are pre-trained independently, which introduces inconsistency between the pre-trained uni-modal tasks and the target multi-modal task.
To alleviate these problems, we propose a more concise encoder-decoder pipeline with a mask-image-language trimodal encoder, which unifies the uni-modal feature extractors and their fusion model into a single deep modality-interaction encoder, facilitating sufficient feature interaction across modalities.
Moreover, for the first time, we propose to introduce instance masks as an additional modality, which explicitly intensifies instance-level features and promotes finer segmentation results.
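To make the trimodal idea concrete, the following is a minimal sketch, not the authors' implementation: mask, image, and language tokens are concatenated and processed by one shared transformer encoder, so cross-modal interaction happens at every layer rather than only after separate uni-modal encoders. All module names, dimensions, and the token layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrimodalEncoder(nn.Module):
    """Hypothetical sketch of a mask-image-language trimodal encoder."""
    def __init__(self, dim=256, depth=6, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # learnable embeddings distinguishing the three modalities
        self.modality_embed = nn.Embedding(3, dim)

    def forward(self, mask_tokens, image_tokens, text_tokens):
        # mask_tokens:  (B, Nm, dim) -- instance-mask proposals encoded as tokens
        # image_tokens: (B, Ni, dim) -- image patch tokens
        # text_tokens:  (B, Nt, dim) -- language tokens
        mods = [mask_tokens, image_tokens, text_tokens]
        tokens = torch.cat(
            [t + self.modality_embed.weight[i] for i, t in enumerate(mods)], dim=1
        )
        # joint self-attention over all three modalities at every layer
        fused = self.encoder(tokens)
        Nm, Ni = mask_tokens.size(1), image_tokens.size(1)
        # return the fused image tokens, from which a light decoder would predict the mask
        return fused[:, Nm:Nm + Ni]
```

In this reading, a lightweight decoder on the fused image tokens replaces the separate fusion module of the encoder-fusion-decoder pipeline; the exact decoder and how instance masks are tokenized are left unspecified here.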
The proposed MaIL sets a new state-of-the-art on all frequently used referring image segmentation datasets, including RefCOCO, RefCOCO+, and G-Ref, with significant gains of 3%-10% over the previous best methods.