We introduce an approach for training variational autoencoders (VAEs) that are certifiably robust to adversarial attack.
Specifically, we first derive actionable bounds on the minimal size of an input perturbation required to change a VAE's reconstruction by more than an allowed amount, with these bounds depending on key parameters such as the Lipschitz constants of the encoder and decoder.
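To illustrate the kind of bound such Lipschitz constants yield (a minimal sketch under assumed constants, not the paper's exact certificate): if the encoder and decoder are Lipschitz with constants L_enc and L_dec, the reconstruction map r = dec ∘ enc changes by at most L_enc · L_dec · ‖d‖ under a perturbation d, so moving the reconstruction by more than delta requires ‖d‖ ≥ delta / (L_enc · L_dec).

```python
# Illustrative sketch only: the composition r = dec ∘ enc of maps with
# Lipschitz constants l_enc and l_dec satisfies
#   ||r(x + d) - r(x)|| <= l_enc * l_dec * ||d||,
# so any perturbation changing the reconstruction by more than `delta`
# must have norm at least delta / (l_enc * l_dec).

def min_perturbation_norm(l_enc: float, l_dec: float, delta: float) -> float:
    """Lower bound on ||d|| needed to move the reconstruction by > delta."""
    return delta / (l_enc * l_dec)

if __name__ == "__main__":
    # Hypothetical Lipschitz constants, chosen for illustration.
    print(min_perturbation_norm(l_enc=2.0, l_dec=5.0, delta=1.0))  # 0.1
```

Shrinking either Lipschitz constant (e.g. by spectral normalisation) directly enlarges the certified perturbation radius, which is why such constants appear as the key parameters in the bound.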
In this work we study variational autoencoders (VAEs) from the perspective of harmonic analysis.
By viewing a VAE's latent space as a Gaussian space, a variety of measure space, we derive a series of results showing that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
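A one-dimensional illustration of this mechanism (a standard Fourier fact, not the paper's derivation): averaging a decoder function $f$ over the Gaussian encoder noise is a convolution with a Gaussian kernel, which exponentially damps high frequencies in proportion to the encoder variance $\sigma^2$:
$$
\mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\sigma^2)}\!\left[f(z+\varepsilon)\right]
= \left(f * \mathcal{N}(0,\sigma^2)\right)(z),
\qquad
\widehat{f * \mathcal{N}(0,\sigma^2)}(\omega)
= \hat{f}(\omega)\, e^{-\sigma^2 \omega^2 / 2}.
$$
Larger encoder variance thus suppresses more of the high-frequency content of the effective decoder.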
We leverage the equivalent discrete state space representation of Markovian Gaussian process variational autoencoders (GP-VAEs) to enable a linear-time Gaussian process solver via Kalman filtering and smoothing.
We show via corrupted-frames and missing-frames tasks that our method performs favourably, especially on the latter, where it outperforms recurrent neural network-based models.