ClipFace: Text-guided Editing of Textured 3D Morphable Models
We propose ClipFace, a novel self-supervised approach for text-guided editing
of textured 3D morphable models of faces. Specifically, we employ user-friendly
language prompts to enable control over both the expression and appearance of
3D faces. We leverage the geometric expressiveness of 3D morphable models,
whose controllability and texture expressivity are inherently limited, and
develop a self-supervised generative model to jointly synthesize expressive,
textured, and articulated faces in 3D. We enable high-quality texture
generation for 3D faces by adversarial self-supervised training, guided by
differentiable rendering against collections of real RGB images. Controllable
editing and manipulation are driven by language prompts that adapt the texture
and expression of the 3D morphable model. To this end, we propose a neural network
that predicts both texture and expression latent codes of the morphable model.
Our model is trained in a self-supervised fashion by exploiting differentiable
rendering and losses based on a pre-trained CLIP model. Once trained, our model
jointly predicts face textures in UV space, along with expression parameters
that capture both geometric and texture changes of facial expressions, in a
single forward pass. We further demonstrate the applicability of our method by
generating temporally changing textures for a given animation sequence.
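
To make the described training setup concrete, the following is a minimal
sketch (not the authors' released code) of how a mapper network that predicts
texture and expression latent codes could be supervised with a CLIP-based loss
through differentiable rendering. The texture_generator and renderer
interfaces, the latent dimensions, and the network sizes are hypothetical
placeholders; only the clip calls follow the public openai/CLIP package.

import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen; only the mapper is trained

class Mapper(nn.Module):
    """Jointly predicts texture latent offsets and expression parameters."""
    def __init__(self, latent_dim=512, expr_dim=100):  # dims are assumptions
        super().__init__()
        self.texture_head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim))
        self.expr_head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, expr_dim))

    def forward(self, w):
        return self.texture_head(w), self.expr_head(w)

def clip_loss(rendered, text_tokens):
    # Cosine-distance loss between rendered faces (B, 3, 224, 224, already
    # normalized to CLIP's input statistics) and a tokenized text prompt.
    img_feat = clip_model.encode_image(rendered.type(clip_model.dtype))
    txt_feat = clip_model.encode_text(text_tokens)
    img_feat = F.normalize(img_feat.float(), dim=-1)
    txt_feat = F.normalize(txt_feat.float(), dim=-1)
    return (1.0 - (img_feat * txt_feat).sum(dim=-1)).mean()

mapper = Mapper().to(device)
opt = torch.optim.Adam(mapper.parameters(), lr=1e-4)
prompt = clip.tokenize(["a smiling face"]).to(device)

# Training step, with the pre-trained texture generator and differentiable
# renderer left as hypothetical stand-ins:
# for w in latent_batches:                  # sampled texture latent codes
#     dw, expr = mapper(w)
#     texture = texture_generator(w + dw)   # frozen UV-texture generator
#     image = renderer(texture, expr)       # differentiable rendering
#     loss = clip_loss(image, prompt)
#     opt.zero_grad(); loss.backward(); opt.step()

Under these assumptions, because rendering is differentiable and CLIP is
frozen, the text prompt alone supervises both heads of the mapper, which is
what allows a single forward pass to produce matching texture and expression
edits.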