Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention
Transformers have emerged as a powerful tool for a broad range of natural
language processing tasks. A key component that drives the impressive
performance of Transformers is the self-attention mechanism that encodes the
influence or dependence of other tokens on each specific token. While
beneficial, the quadratic complexity of self-attention in the input sequence
length has limited its application to longer sequences -- a topic being
actively studied in the community. To address this limitation, we propose
Nystr\"omformer -- a model that exhibits favorable scalability as a function of
sequence length. Our idea is based on adapting the Nystr\"om method to
approximate standard self-attention with $O(n)$ complexity. The scalability of
Nystr\"omformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE
benchmark and IMDB reviews with standard sequence length, and find that our
Nystr\"omformer performs comparably to, or in a few cases even slightly better
than, the standard Transformer. Our code is at
this https URL
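
The Nyström approximation mentioned above can be sketched in a few lines. The idea is to pick m << n landmark queries and keys, form three small softmax matrices, and chain them through a pseudoinverse, so the full n x n attention matrix is never materialized. The sketch below is a minimal illustration, not the paper's exact implementation: the landmark choice (segment means), the helper names, and the use of an exact Moore-Penrose pseudoinverse are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks):
    """Nystrom-style approximation of softmax attention.

    Q, K, V: (n, d) arrays; num_landmarks m must divide n here.
    Cost is O(n * m) rather than O(n^2), since only (n, m), (m, m),
    and (m, n) matrices are formed.
    """
    n, d = Q.shape
    m = num_landmarks
    # Landmarks as means over m contiguous segments -- one simple
    # choice, assumed here for illustration.
    Q_l = Q.reshape(m, n // m, d).mean(axis=1)
    K_l = K.reshape(m, n // m, d).mean(axis=1)
    scale = 1.0 / np.sqrt(d)
    F = softmax(Q @ K_l.T * scale)    # (n, m): queries vs. landmark keys
    A = softmax(Q_l @ K_l.T * scale)  # (m, m): landmark-landmark kernel
    B = softmax(Q_l @ K.T * scale)    # (m, n): landmark queries vs. keys
    # softmax(QK^T/sqrt(d)) V  ~=  F A^+ B V
    return F @ np.linalg.pinv(A) @ (B @ V)
```

A useful sanity check: when every token is its own landmark (m = n), the three factors collapse to the full attention matrix S and the product S S^+ S V equals S V exactly, so the approximation recovers standard attention.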