In this work, we present the Textless Vision-Language Transformer (TVLT),
where homogeneous transformer blocks take raw visual and audio inputs for
vision-and-language representation learning with minimal modality-specific
design, without relying on text-specific modules such as tokenization or automatic
speech recognition (ASR). TVLT is trained by reconstructing masked patches of
continuous video frames and audio spectrograms (masked autoencoding) and
contrastive modeling to align video and audio. TVLT attains performance
comparable to its text-based counterpart on various multimodal tasks such as
visual question answering, image retrieval, video retrieval, and multimodal
sentiment analysis, with 28x faster inference speed and only 1/3 of the
parameters. Our findings suggest the possibility of learning compact and
efficient visual-linguistic representations from low-level visual and audio
signals without assuming the prior existence of text. Our code and checkpoints
are available at: this https URL
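
To make the training objective described above concrete, the following is a minimal, hypothetical sketch of how a masked-patch reconstruction loss over video and audio spectrogram patches could be combined with a video-audio contrastive alignment loss. The function names, the 75% masking ratio, and all tensor shapes are illustrative assumptions, not the authors' implementation or released code.

```python
# Minimal sketch (not the TVLT implementation): masked autoencoding of
# video/audio patches plus an InfoNCE-style video-audio contrastive loss.
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(pred, target, mask):
    # Mean-squared error computed only on the masked patches,
    # as in masked autoencoding. pred/target: (B, N, D), mask: (B, N).
    per_patch = ((pred - target) ** 2).mean(dim=-1)          # (B, N)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def contrastive_loss(video_emb, audio_emb, temperature=0.07):
    # Symmetric InfoNCE loss aligning pooled video and audio embeddings
    # from the same clip; other clips in the batch act as negatives.
    video_emb = F.normalize(video_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    logits = video_emb @ audio_emb.t() / temperature          # (B, B)
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Usage with random tensors standing in for encoder/decoder outputs.
B, Nv, Na, D = 4, 196, 128, 768
video_pred = torch.randn(B, Nv, D)   # decoder predictions for video patches
video_tgt  = torch.randn(B, Nv, D)   # stand-in for target video patch values
video_mask = (torch.rand(B, Nv) < 0.75).float()   # which patches were masked
audio_pred = torch.randn(B, Na, D)
audio_tgt  = torch.randn(B, Na, D)   # stand-in for target spectrogram patches
audio_mask = (torch.rand(B, Na) < 0.75).float()
video_cls  = torch.randn(B, D)       # pooled video representation
audio_cls  = torch.randn(B, D)       # pooled audio representation

loss = (masked_reconstruction_loss(video_pred, video_tgt, video_mask)
        + masked_reconstruction_loss(audio_pred, audio_tgt, audio_mask)
        + contrastive_loss(video_cls, audio_cls))
```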