Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Although convolutional neural networks (CNNs) used as backbones have achieved great success in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions.
Unlike the recently proposed Transformer model (e.g., ViT), which is specifically designed for image classification, we propose the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting the Transformer to various dense prediction tasks.
We show that PVT inherits the advantages of both CNNs and Transformers, making it a unified, convolution-free backbone for various vision tasks that can directly replace CNN backbones.
We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection and semantic and instance segmentation.
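
The abstract does not spell out the architecture, so the following is only a minimal, hypothetical sketch of the general idea behind a pyramid transformer backbone: each stage patch-embeds the input at a coarser stride and runs transformer encoder layers, yielding a multi-scale feature pyramid (strides 4, 8, 16, 32) that a dense prediction head such as an FPN-based detector could consume in place of a CNN backbone. The stage widths, depths, and the use of PyTorch's standard nn.TransformerEncoderLayer are illustrative assumptions; the actual PVT design (e.g., its spatial-reduction attention) is not reproduced here.

    # Minimal sketch of a pyramid transformer backbone (not the authors' code).
    import torch
    import torch.nn as nn

    class PyramidStage(nn.Module):
        def __init__(self, in_ch, out_ch, stride, depth, heads):
            super().__init__()
            # Patch embedding: a strided Conv2d used purely as a linear
            # projection of non-overlapping patches to token channels.
            self.embed = nn.Conv2d(in_ch, out_ch, kernel_size=stride, stride=stride)
            layer = nn.TransformerEncoderLayer(
                d_model=out_ch, nhead=heads, dim_feedforward=out_ch * 4,
                batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, x):
            x = self.embed(x)                      # (B, C, H/s, W/s)
            b, c, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
            tokens = self.encoder(tokens)
            return tokens.transpose(1, 2).reshape(b, c, h, w)

    class TinyPyramidBackbone(nn.Module):
        """Produces a 4-level feature pyramid at strides 4, 8, 16, 32."""
        def __init__(self):
            super().__init__()
            dims, depths, heads = [64, 128, 320, 512], [2, 2, 2, 2], [1, 2, 5, 8]
            strides, in_chs = [4, 2, 2, 2], [3] + dims[:-1]
            self.stages = nn.ModuleList(
                PyramidStage(i, d, s, n, h)
                for i, d, s, n, h in zip(in_chs, dims, strides, depths, heads))

        def forward(self, x):
            feats = []
            for stage in self.stages:
                x = stage(x)
                feats.append(x)  # one feature map per pyramid level
            return feats

    if __name__ == "__main__":
        maps = TinyPyramidBackbone()(torch.randn(1, 3, 224, 224))
        print([tuple(m.shape) for m in maps])  # spatial sizes 56, 28, 14, 7

In such a design, the returned list of feature maps plays the same role as the multi-stage outputs of a CNN backbone, which is what would allow a detector or segmentation head to be attached without architectural changes.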