Accelerating Neural ODEs Using Model Order Reduction
Mikko Lehtimäki, Lassi Paunonen, Marja-Leena Linne
Embedding nonlinear dynamical systems into artificial neural networks is a
powerful new formalism for machine learning. By parameterizing ordinary
differential equations (ODEs) as neural network layers, Neural ODEs are
memory-efficient to train, process time series naturally, and incorporate
knowledge of physical systems into deep learning models. However, practical
applications of Neural ODEs are limited by long inference times: the outputs
of the embedded ODE layers are computed numerically with differential equation
solvers that can be computationally demanding. Here we
show that mathematical model order reduction methods can be used for
compressing and accelerating Neural ODEs by accurately simulating the
continuous nonlinear dynamics in low-dimensional subspaces. We implement our
novel compression method by developing Neural ODEs that integrate the necessary
subspace-projection and interpolation operations as layers of the neural
network. We validate our model reduction approach by comparing it to two
established acceleration methods from the literature in two classification
tasks. In compressing convolutional and recurrent Neural ODE architectures, we
achieve the best balance between speed and accuracy when compared to the other
two acceleration methods. Based on our results, integrating model order
reduction with Neural ODEs can facilitate efficient, dynamical system-driven
deep learning in resource-constrained applications.
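To make the idea of simulating the ODE layer in a low-dimensional subspace concrete, the following is a minimal, hedged sketch and not the authors' implementation: it assumes a precomputed orthonormal basis matrix V (for example from proper orthogonal decomposition), a full-order dynamics module f(x, t), and a simple fixed-step Runge-Kutta solver in place of whatever solver the paper uses; the class name ReducedODELayer and its arguments are hypothetical.

```python
import torch
import torch.nn as nn


class ReducedODELayer(nn.Module):
    """Sketch of a projection-reduced ODE layer: integrate z' = V^T f(V z, t)
    in the reduced coordinates, then lift the result back to the full space."""

    def __init__(self, f: nn.Module, V: torch.Tensor, t0=0.0, t1=1.0, steps=10):
        super().__init__()
        self.f = f                        # full-order dynamics f(x, t), assumed signature
        self.register_buffer("V", V)      # orthonormal basis, shape (n_full, n_reduced)
        self.t0, self.t1, self.steps = t0, t1, steps

    def reduced_rhs(self, z, t):
        # Lift reduced state to full coordinates, evaluate the vector field,
        # and project it back onto the low-dimensional subspace.
        x = z @ self.V.T
        return self.f(x, t) @ self.V

    def forward(self, x0):
        z = x0 @ self.V                   # project the initial state onto the subspace
        h = (self.t1 - self.t0) / self.steps
        t = self.t0
        for _ in range(self.steps):       # fixed-step RK4 integration in reduced coordinates
            k1 = self.reduced_rhs(z, t)
            k2 = self.reduced_rhs(z + 0.5 * h * k1, t + 0.5 * h)
            k3 = self.reduced_rhs(z + 0.5 * h * k2, t + 0.5 * h)
            k4 = self.reduced_rhs(z + h * k3, t + h)
            z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + h
        return z @ self.V.T               # lift the final reduced state back to full size
```

In this sketch the projection and lifting steps are part of the module itself, so the reduced model remains an ordinary differentiable layer that can be dropped into a larger network; the paper's actual architecture and reduction method (including any interpolation of the nonlinearity) may differ.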