NeMF: Neural Motion Fields for Kinematic Animation
We present an implicit neural representation to learn the spatio-temporal space of kinematic motions.
Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF).
Specifically, we use a neural network to learn this function for miscellaneous sets of motions; the network is designed as a generative model conditioned on a temporal coordinate and a random vector that controls the motion's style.
The model is then trained as a variational autoencoder (VAE) with motion encoders to sample the latent space.
We train our model on diverse human and quadruped motion datasets to demonstrate its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems, showing its superiority in various motion generation and editing applications such as motion interpolation, in-betweening, and re-navigating.
Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou
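The core idea of a neural motion field, a continuous function mapping a temporal coordinate and a style latent to a pose, can be sketched as below. This is a minimal illustrative toy with randomly initialized weights, not the paper's architecture; the dimensions, the single hidden layer, and the names `motion_field`, `LATENT_DIM`, and `POSE_DIM` are all assumptions made for the example. It shows the key property the abstract describes: because the function is continuous in time, poses can be queried at any temporal resolution from the same latent code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper):
# an 8-D style latent and a 6-D pose vector.
LATENT_DIM, POSE_DIM, HIDDEN = 8, 6, 32

# Randomly initialized weights stand in for a trained decoder.
W1 = rng.standard_normal((1 + LATENT_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, POSE_DIM)) * 0.1
b2 = np.zeros(POSE_DIM)

def motion_field(t, z):
    """Continuous motion function f(t, z) -> pose.

    t : float temporal coordinate in [0, 1]
    z : latent vector controlling the motion's style
    """
    x = np.concatenate(([t], z))       # condition on time and style jointly
    h = np.tanh(x @ W1 + b1)           # one hidden layer as a stand-in MLP
    return h @ W2 + b2                 # predicted pose parameters

z = rng.standard_normal(LATENT_DIM)    # sample one style from the latent space
# Because the field is continuous in t, the same motion can be sampled
# at any frame rate simply by querying more time coordinates.
poses_30fps = np.stack([motion_field(t, z) for t in np.linspace(0.0, 1.0, 30)])
poses_60fps = np.stack([motion_field(t, z) for t in np.linspace(0.0, 1.0, 60)])
print(poses_30fps.shape, poses_60fps.shape)  # (30, 6) (60, 6)
```

In a VAE setup as described above, `z` would come from a motion encoder during training and be sampled from the prior at generation time.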