Predicting the Limits of Neural Network Architectures
Neural Networks and the Chomsky Hierarchy
Understanding when and how neural networks generalize remains one of the most important unsolved problems in the field.
In this work, we conduct an extensive empirical study (2200 models, 16 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice.
We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs.
Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
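To make the evaluation protocol concrete, here is a minimal sketch (not the paper's actual code) of how length generalization can be tested: train on short sequences and evaluate on strictly longer ones. The task functions and the specific length cutoffs (40 for training, 500 for testing) are illustrative assumptions standing in for tasks at different levels of the Chomsky hierarchy.

```python
# Illustrative sketch, not the authors' implementation: out-of-distribution
# generalization is probed by training on short inputs and testing on
# strictly longer inputs never seen during training.
import random

def parity(bits):      # regular task: a single bit of state suffices
    return sum(bits) % 2

def reverse(bits):     # (deterministic) context-free task: needs a stack
    return bits[::-1]

def duplicate(bits):   # context-sensitive task: needs random-access memory
    return bits + bits

def make_split(task, train_max_len=40, test_max_len=500, n=1000):
    """Sample training inputs up to train_max_len and test inputs that are
    strictly longer than anything seen during training (assumed cutoffs)."""
    def sample(lo, hi):
        length = random.randint(lo, hi)
        x = [random.randint(0, 1) for _ in range(length)]
        return x, task(x)
    train = [sample(1, train_max_len) for _ in range(n)]
    test = [sample(train_max_len + 1, test_max_len) for _ in range(n)]
    return train, test

train_set, test_set = make_split(reverse)
```

Under this setup, an architecture is said to generalize on a task only if it remains accurate on the longer, held-out test lengths, which is where the memory structure of the network (finite state, counters, stack, or tape) becomes decisive.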
Authors
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt, Marcus Hutter, Shane Legg, Pedro A. Ortega