FFNB: Forgetting-Free Neural Blocks for Deep Continual Visual Learning
Deep neural networks (DNNs) have recently achieved great success in
computer vision and several related fields. Despite this progress, current
neural architectures still suffer from catastrophic interference (a.k.a.
forgetting), which prevents DNNs from learning continually. While several
state-of-the-art methods have been proposed to mitigate forgetting,
existing solutions are either highly rigid (as with regularization) or time/memory
demanding (as with replay). An intermediate class of methods, based on dynamic
networks, has been proposed in the literature and provides a reasonable balance
between task memorization and computational footprint. In this paper, we devise
a dynamic network architecture for continual learning based on a novel
forgetting-free neural block (FFNB). Training FFNB features on new tasks is
achieved using a novel procedure that constrains the underlying parameters
to lie in the null-space of the previous tasks, while training the classifier
parameters amounts to Fisher discriminant analysis. The latter provides an effective
incremental process which is also optimal from a Bayesian perspective. The
trained features and classifiers are further enhanced using an incremental
"end-to-end" fine-tuning. Extensive experiments, conducted on different
challenging classification problems, show the high effectiveness of the
proposed method.
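
As a rough illustration of the null-space constraint mentioned above, the sketch below (not the authors' code; the names prev_feats, null_space_projector, and the NumPy/SVD route are illustrative assumptions) projects a new-task gradient onto the null-space of the feature matrix gathered on previous tasks, so that previous-task responses are left unchanged.

```python
# Minimal sketch (illustrative, not the paper's implementation) of training in the
# null-space of previous tasks: a new-task gradient is projected onto the null-space
# of the feature matrix gathered on earlier tasks, so earlier responses stay fixed.
import numpy as np

def null_space_projector(prev_feats: np.ndarray, tol: float = 1e-6) -> np.ndarray:
    """Projector P such that prev_feats @ (P @ g) == 0 for any gradient g.

    prev_feats: (n_samples, d) activations collected on previous tasks (assumed name).
    """
    # Right-singular vectors with (near-)zero singular values span the null-space.
    _, s, vt = np.linalg.svd(prev_feats, full_matrices=True)
    rank = int(np.sum(s > tol * s.max()))
    null_basis = vt[rank:].T                 # (d, d - rank) orthonormal basis
    return null_basis @ null_basis.T         # (d, d) orthogonal projector

# Toy usage: prev_feats is rank-deficient, so a non-trivial null-space exists.
rng = np.random.default_rng(0)
prev_feats = rng.normal(size=(32, 8)) @ rng.normal(size=(8, 16))   # rank-8 features in R^16
grad_w = rng.normal(size=16)                                       # new-task gradient for one unit
safe_step = null_space_projector(prev_feats) @ grad_w              # constrained update direction
assert np.allclose(prev_feats @ safe_step, 0.0, atol=1e-8)         # old-task outputs unchanged
```

The incremental Fisher-discriminant training of the classifier mentioned in the abstract would sit on top of such constrained features; it is omitted from this sketch.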