UniCLIP: Unified Framework for Contrastive Language-Image Pre-training
Pre-training vision-language models with contrastive objectives has shown
promising results that are both scalable to large uncurated datasets and
transferable to many downstream applications. Several follow-up works have
aimed to improve data efficiency by adding self-supervision terms, but in those
works the inter-domain (image-text) contrastive loss and the intra-domain
(image-image) contrastive loss are defined in separate embedding spaces, so
many feasible combinations of supervision are overlooked. To overcome this
issue, we
propose UniCLIP, a Unified framework for Contrastive Language-Image
Pre-training. UniCLIP integrates the contrastive loss of both inter-domain
pairs and intra-domain pairs into a single universal space. The discrepancies
that arise when integrating contrastive losses across different domains are
resolved by the three key components of UniCLIP: (1) augmentation-aware feature
embedding, (2) the MP-NCE loss, and (3) a domain-dependent similarity measure.
UniCLIP outperforms previous vision-language pre-training methods on various
single- and multi-modality downstream tasks. In our experiments, we show that
each component of UniCLIP contributes meaningfully to the final performance.
Authors
Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
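To make the idea of a single universal embedding space concrete, below is a minimal sketch of a contrastive loss that mixes inter-domain (image-text) and intra-domain (image-image) positive pairs in one shared space, with a per-domain temperature standing in for a domain-dependent similarity measure. This is an illustrative assumption only; the function name, the temperatures, and the formulation are not the paper's actual MP-NCE loss or augmentation-aware embedding.

```python
import torch
import torch.nn.functional as F


def unified_contrastive_loss(img_emb_a, img_emb_b, txt_emb,
                             temp_inter=0.05, temp_intra=0.1):
    """Toy unified contrastive loss (not the paper's MP-NCE).

    All embeddings live in one shared space; image-image (intra-domain)
    and image-text (inter-domain) positive pairs are contrasted jointly,
    with a different temperature per domain pairing as a simple stand-in
    for a domain-dependent similarity measure.
    """
    # Normalize so the dot product is cosine similarity.
    za = F.normalize(img_emb_a, dim=-1)   # augmented view 1 of each image
    zb = F.normalize(img_emb_b, dim=-1)   # augmented view 2 of each image
    zt = F.normalize(txt_emb, dim=-1)     # paired caption embeddings

    n = za.size(0)
    labels = torch.arange(n, device=za.device)  # i-th image matches i-th text

    # Inter-domain term: image view 1 vs. all texts (symmetrized).
    logits_it = za @ zt.t() / temp_inter
    loss_inter = 0.5 * (F.cross_entropy(logits_it, labels) +
                        F.cross_entropy(logits_it.t(), labels))

    # Intra-domain term: image view 1 vs. all image view 2 embeddings.
    logits_ii = za @ zb.t() / temp_intra
    loss_intra = 0.5 * (F.cross_entropy(logits_ii, labels) +
                        F.cross_entropy(logits_ii.t(), labels))

    return loss_inter + loss_intra
```

Because both terms are computed on the same embeddings, every image view, its paired caption, and the other images in the batch can all serve as positives or negatives for one another, which is the kind of cross-domain supervision that separate embedding spaces would miss.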