SimMIM: A Simple Framework for Masked Image Modeling
We present a simple framework for masked image modeling that requires no special designs such as block-wise masking or tokenization via discrete VAE or clustering.
To study what makes the masked image modeling task learn good representations, we systematically examine the major components of our framework, and find that simple designs for each component already yield very strong representation learning performance: 1) random masking of the input image with a moderately large masked patch size (e.g., 32) makes a strong pretext task; 2) predicting raw RGB pixel values by direct regression performs no worse than patch classification approaches with complex designs; 3) the prediction head can be as light as a linear layer, with no worse performance than heavier ones.
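The three findings above can be illustrated with a minimal sketch: random patch-wise masking at a 32-pixel granularity, and an L1 regression loss on raw pixel values computed only over the masked region. Function names and the exact loss normalization here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def random_patch_mask(h, w, patch=32, ratio=0.6, rng=None):
    """Randomly mask a fraction of non-overlapping patch-size squares
    of an h x w image; returns a boolean pixel-level mask (True = masked)."""
    rng = rng or np.random.default_rng(0)
    gh, gw = h // patch, w // patch          # patch-grid dimensions
    n = gh * gw
    grid = np.zeros(n, dtype=bool)
    grid[rng.permutation(n)[: int(n * ratio)]] = True
    # Upsample the patch-grid mask to pixel resolution.
    return grid.reshape(gh, gw).repeat(patch, axis=0).repeat(patch, axis=1)

def masked_l1_loss(pred, target, mask):
    """Direct L1 regression on raw pixel values, averaged over masked
    pixels only (pred/target: (H, W, C), mask: (H, W) boolean)."""
    return float(np.abs(pred - target)[mask].mean())
```

In a full pipeline, `pred` would come from an encoder (e.g., a Swin or ViT backbone) followed by a linear prediction head; the sketch only shows the masking strategy and the loss.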
We also leverage this approach to facilitate the training of a 3-billion-parameter model; using less pre-training data than in previous practice, it achieves state-of-the-art results on four representative vision benchmarks.
Authors
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu