Real-Time Neural Light Field on Mobile Devices
In this work, we propose an efficient network architecture that runs in real time on mobile devices for neural rendering.
Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., rendering one image of a real 3D scene in real time on an iPhone 13.
Additionally, we achieve image quality similar to neural radiance fields (NeRF) and better PSNR than MobileNeRF on the real-world forward-facing dataset.
Neural rendering promises to democratize asset creation and rendering: no mesh, texture, or material is required, only a neural network that has learned a representation of an object or a scene from multi-view observations.
To be made widely available, this exciting application requires such methods to run on resource-constrained devices, such as mobile phones, conforming to their limits on compute, wireless connectivity, and storage capacity.
Many works have been proposed to enable real-time applications, yet they still require high-end graphics hardware for rendering and hence are not available on mobile or edge devices.
In this work, we propose MobileR2L, a real-time neural rendering model built with mobile devices in mind.
Unlike most neural representations, which use a multilayer perceptron (MLP) as the backbone network, we show that a well-designed network can achieve real-time speed with rendering quality similar to an MLP.
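To make the distinction concrete, the PyTorch sketch below contrasts a per-ray MLP with a convolutional backbone that renders all rays of an image in a single pass; the module names, layer sizes, and input dimensions are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (not the paper's exact architecture) contrasting the two
# backbone styles. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class RayMLP(nn.Module):
    """MLP backbone: maps each encoded ray independently to a color."""
    def __init__(self, in_dim=60, hidden=256, depth=8):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers.append(nn.Linear(d, 3))  # RGB output per ray
        self.net = nn.Sequential(*layers)

    def forward(self, rays):            # rays: (num_rays, in_dim)
        return self.net(rays)

class ConvRenderer(nn.Module):
    """Convolutional backbone: treats the grid of encoded rays as an
    image, so one forward pass renders every pixel at once."""
    def __init__(self, in_dim=60, hidden=64, depth=8):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Conv2d(d, hidden, kernel_size=1), nn.ReLU(inplace=True)]
            d = hidden
        layers.append(nn.Conv2d(d, 3, kernel_size=1))  # RGB image
        self.net = nn.Sequential(*layers)

    def forward(self, ray_grid):        # ray_grid: (1, in_dim, H, W)
        return self.net(ray_grid)
```

Per pixel, a stack of 1x1 convolutions computes the same function as an MLP, but it exposes the whole image as one tensor, which mobile GPU and NPU runtimes execute far more efficiently than millions of independent per-ray calls.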
Results
We design an optimized network architecture that can be trained via data distillation, as sketched below, to render high-resolution images while running in real time on mobile devices.
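The following is a hedged sketch of such a distillation loop, under the assumption that a pretrained teacher (e.g., a NeRF) can render pseudo-ground-truth images for arbitrary camera poses; `teacher`, `student`, `sample_pose`, and `rays_from_pose` are hypothetical placeholders, not the paper's actual API.

```python
# Hedged sketch of data distillation: the teacher renders pseudo-ground-truth
# images, and the efficient student is trained to reproduce them.
import torch
import torch.nn.functional as F

def distill(student, teacher, sample_pose, rays_from_pose, steps=10000, lr=5e-4):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        pose = sample_pose()                  # random camera pose
        with torch.no_grad():
            target = teacher.render(pose)     # pseudo ground-truth image
        pred = student(rays_from_pose(pose))  # student renders the same view
        loss = F.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

Because the teacher can synthesize an unlimited number of views offline, the student never touches the original sparse captures and can be supervised densely at the target rendering resolution.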
We demonstrate that, with our design, neural rendering can be used to build real-world applications that achieve real-time user interaction.