Neural approximations of scalar and vector fields, such as signed distance
functions and radiance fields, have emerged as accurate, high-quality
representations. State-of-the-art results are obtained by conditioning a neural
approximation with a lookup from trainable feature grids that take on part of
the learning task and allow for smaller, more efficient neural networks.
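As a concrete illustration (not taken from the paper), the sketch below shows one common way such a feature-grid-conditioned field is implemented in PyTorch: query points are trilinearly interpolated from a trainable 3D feature grid, and the interpolated feature vector is decoded by a small MLP. The class name, grid resolution, feature width, and loss are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGridField(nn.Module):
    """Toy scalar field: a trainable 3D feature grid plus a small MLP decoder."""
    def __init__(self, grid_res=32, feat_dim=8, hidden=64):
        super().__init__()
        # Trainable feature grid of shape (1, feat_dim, D, H, W).
        self.grid = nn.Parameter(0.01 * torch.randn(1, feat_dim, grid_res, grid_res, grid_res))
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x: (N, 3) query points in [-1, 1]^3.
        # grid_sample expects coordinates shaped (1, N, 1, 1, 3) for a 5D grid.
        coords = x.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, coords, align_corners=True)  # (1, C, N, 1, 1)
        feats = feats.view(self.grid.shape[1], -1).t()                # (N, C)
        return self.mlp(feats)                                        # (N, 1), e.g. an SDF value

# Usage: query the field at random points and take a gradient step.
field = FeatureGridField()
pts = torch.rand(1024, 3) * 2 - 1
sdf = field(pts)
loss = sdf.pow(2).mean()  # placeholder loss; a real task supervises against ground truth
loss.backward()
```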
Unfortunately, these feature grids usually come at the cost of significantly
increased memory consumption compared to stand-alone neural network models. We
present a dictionary method for compressing such feature grids, reducing their
memory consumption by up to 100x and permitting a multiresolution
representation which can be useful for out-of-core streaming. We formulate the
dictionary optimization as a vector-quantized auto-decoder problem, which lets
us learn discrete neural representations end-to-end in a space where no direct
supervision is available and with dynamic topology and structure. Our source
code will be available at this https URL.
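To make the compression mechanism concrete, here is a minimal, hypothetical PyTorch sketch of a vector-quantized feature grid in the auto-decoder setting: during training each grid cell holds trainable logits over a small shared codebook, and features are read out as a differentiable softmax-weighted combination; after training, only the per-cell argmax index (log2 of the codebook size in bits) and the codebook itself need to be stored. Names, sizes, and the exact relaxation are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQFeatureGrid(nn.Module):
    """Sketch of a vector-quantized feature grid: each cell stores a (soft)
    index into a shared codebook instead of its own feature vector."""
    def __init__(self, num_cells=32**3, codebook_size=64, feat_dim=8):
        super().__init__()
        # Shared dictionary of feature vectors (the compressed payload).
        self.codebook = nn.Parameter(0.01 * torch.randn(codebook_size, feat_dim))
        # Per-cell logits over codebook entries; at inference these collapse
        # to hard integer indices.
        self.logits = nn.Parameter(torch.zeros(num_cells, codebook_size))

    def forward(self, cell_ids, hard=False):
        logits = self.logits[cell_ids]              # (N, codebook_size)
        if hard:
            # Inference path: keep only the argmax index per cell.
            idx = logits.argmax(dim=-1)
            return self.codebook[idx]               # (N, feat_dim)
        # Training path: differentiable soft lookup via softmax weights.
        weights = F.softmax(logits, dim=-1)         # (N, codebook_size)
        return weights @ self.codebook              # (N, feat_dim)

# Usage: fetch features for a batch of grid cells.
vq = VQFeatureGrid()
cells = torch.randint(0, 32**3, (1024,))
feats_soft = vq(cells)              # differentiable lookup during training
feats_hard = vq(cells, hard=True)   # discrete lookup after training
```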
Authors
Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson, Sanja Fidler