Long Range Language Modeling via Gated State Spaces
State space models have been shown to be effective at modeling long-range
dependencies, especially on sequence classification tasks. In this work we focus
on autoregressive sequence modeling over English books, GitHub source code and
ArXiv mathematics articles. Motivated by recent findings on the
effectiveness of gated activation functions, we propose a new layer named Gated
State Space (GSS) and show that it trains significantly faster than the
diagonal version of S4 (i.e., DSS) on TPUs, is competitive with several
well-tuned Transformer-based baselines, and exhibits zero-shot generalization to
longer inputs, all while being straightforward to implement. Finally, we show that
leveraging self-attention to model local dependencies improves the performance
of GSS even further.
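
To make the abstract's description more concrete, the following is a minimal, illustrative JAX sketch of a gated state-space block: a gating branch and a reduced-dimension diagonal state-space branch combined multiplicatively, with the diagonal SSM materialized as a per-channel kernel applied via FFT convolution. The parameterization, initialization, and dimensions below are assumptions for illustration, not the paper's exact configuration.

```python
import jax
import jax.numpy as jnp


def dss_kernel(log_dt, Lambda_re, Lambda_im, w, seq_len):
    """Materialize per-channel convolution kernels of a diagonal SSM.

    Lambda_re, Lambda_im, w: (d, n) per-channel diagonal state parameters.
    Returns real kernels of shape (d, seq_len).
    """
    dt = jnp.exp(log_dt)[:, None]                        # (d, 1) step sizes
    Lambda = -jnp.exp(Lambda_re) + 1j * Lambda_im        # (d, n), stable real part
    pos = jnp.arange(seq_len)                            # (L,)
    # exp(Lambda * dt * l), summed over the state dimension, weighted by w
    powers = jnp.exp(Lambda[..., None] * dt[..., None] * pos)   # (d, n, L)
    return jnp.einsum('dn,dnl->dl', w, powers).real              # (d, L)


def causal_fft_conv(x, k):
    """Causal convolution of x (L, d) with per-channel kernels k (d, L)."""
    L = x.shape[0]
    x_f = jnp.fft.rfft(x, n=2 * L, axis=0)
    k_f = jnp.fft.rfft(k.T, n=2 * L, axis=0)
    return jnp.fft.irfft(x_f * k_f, n=2 * L, axis=0)[:L]  # keep the causal part


def gss_layer(params, x):
    """Gated state-space block: gate(x) * up(DSS(shrink(x))), then project out."""
    v = jax.nn.gelu(x @ params['W_v'])                    # gating branch (L, d_ff)
    u = jax.nn.gelu(x @ params['W_u'])                    # reduced SSM input (L, d_ssm)
    k = dss_kernel(params['log_dt'], params['Lambda_re'],
                   params['Lambda_im'], params['w'], x.shape[0])
    y = causal_fft_conv(u, k)                             # (L, d_ssm)
    y = y @ params['W_up']                                # back to gating width (L, d_ff)
    return (v * y) @ params['W_o'] + x                    # gate, project, residual


def init_params(key, d_model=512, d_ff=1024, d_ssm=128, n_state=32):
    ks = jax.random.split(key, 5)
    s = lambda k, shape: jax.random.normal(k, shape) * 0.02
    return {
        'W_v': s(ks[0], (d_model, d_ff)),
        'W_u': s(ks[1], (d_model, d_ssm)),
        'W_up': s(ks[2], (d_ssm, d_ff)),
        'W_o': s(ks[3], (d_ff, d_model)),
        'log_dt': jnp.full((d_ssm,), jnp.log(1e-2)),
        'Lambda_re': jnp.zeros((d_ssm, n_state)),
        'Lambda_im': jnp.tile(jnp.pi * jnp.arange(n_state), (d_ssm, 1)),
        'w': s(ks[4], (d_ssm, n_state)),
    }


params = init_params(jax.random.PRNGKey(0))
out = gss_layer(params, jnp.zeros((2048, 512)))           # (2048, 512)
```

Because the kernel is materialized once per sequence length and applied with an FFT, the block involves no sequential recurrence during training, which is consistent with the abstract's claim of fast training on accelerators; the specific speed and quality results reported in the paper come from its own implementation and tuning, not from this sketch.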