Image: Ben Mildenhall/Google
With RawNeRF, Google scientists introduce a new image synthesis tool that can create well-lit 3D scenes from dark 2D photos.
In the summer of 2020, a Google research team introduced “Neural Radiance Fields” (NeRF) for the first time: in a nutshell, an artificial intelligence system learns where light rays end up in images. Based on this, the system can automatically create a 3D scene from multiple 2D photos of the same scene. Imagine a kind of automated photogrammetry that reduces manual effort and the number of photos required while generating flexible, customizable, high-quality 3D scenes.
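The core of NeRF is volume rendering: along each camera ray, the network predicts a density and a color at sampled points, and these are composited into a single pixel. A minimal sketch of that compositing step (illustrative only, not Google's implementation; the function name and inputs are assumptions):

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample densities and RGB colors along one camera ray
    using the standard NeRF volume-rendering equation (illustrative sketch)."""
    densities = np.asarray(densities, dtype=float)   # sigma_i >= 0 at each sample
    colors = np.asarray(colors, dtype=float)         # (N, 3) RGB values in [0, 1]
    deltas = np.asarray(deltas, dtype=float)         # spacing between samples

    alpha = 1.0 - np.exp(-densities * deltas)        # opacity of each segment
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)   # final pixel color

# A ray that first passes empty space, then hits a dense red region:
# the red sample occludes the green one behind it.
pixel = render_ray(
    densities=[0.0, 10.0, 10.0],
    colors=[[0, 0, 1], [1, 0, 0], [0, 1, 0]],
    deltas=[0.5, 0.5, 0.5],
)
```

Training then amounts to adjusting the network so that these rendered pixels match the input photos from every captured viewpoint.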
Over the past two years, Google teams have regularly demonstrated new use cases for NeRF, such as Google Maps Immersive View or rendering 3D Street View based on photos and depth data.
RawNeRF processes RAW images
With RawNeRF, AI researcher Ben Mildenhall’s research team now presents a NeRF that can be trained on RAW camera data. This image data contains the full dynamic range of a scene.
According to the research team, thanks to RAW data training, RawNeRF can “reconstruct scenes from extremely noisy images captured in near darkness.” Additionally, the camera’s point of view, focus, exposure, and tone mapping can all be changed after the fact.
“Direct training on raw data effectively turns RawNeRF into a multi-image denoiser capable of combining information from tens or hundreds of input images,” the team writes.
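Training on linear raw values poses a problem: a plain L2 loss is dominated by bright pixels, while the interesting detail in near-dark captures lives in the noisy low end. RawNeRF addresses this with an error-weighted loss that approximates comparing values in log space. A hedged sketch of that idea (the epsilon value and function shape are assumptions, not the paper's verbatim code):

```python
import numpy as np

def rawnerf_style_loss(rendered, raw_target, eps=1e-3):
    """Weighted L2 loss in linear raw space, in the spirit of RawNeRF:
    dividing the residual by the (detached) rendered value approximates
    a loss on log radiance, so errors in dark pixels are not drowned out
    by bright ones. Illustrative sketch only."""
    rendered = np.asarray(rendered, dtype=float)
    raw_target = np.asarray(raw_target, dtype=float)
    # In a real framework the denominator is wrapped in stop_gradient so it
    # acts as a fixed per-pixel weight; plain NumPy has no gradients anyway.
    weight = 1.0 / (rendered + eps)
    return np.mean(((rendered - raw_target) * weight) ** 2)

# The same absolute error of 0.01 costs far more in a dark region
# than in a bright one:
dark = rawnerf_style_loss(rendered=[0.02], raw_target=[0.03])
bright = rawnerf_style_loss(rendered=[0.90], raw_target=[0.91])
```

Because every input image is registered into one consistent 3D scene before this loss is averaged, the noise in individual raw frames cancels out, which is what makes the model behave like a multi-image denoiser.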
If you’re still at CVPR and have the energy to get through another poster session, check out RawNeRF tomorrow morning! We take advantage of the fact that NeRF is surprisingly resistant to image noise to reconstruct scenes directly from raw HDR sensor data. pic.twitter.com/CEeXWSmt9Q
—Ben Mildenhall (@BenMildenhall) June 24, 2022
You can download RawNeRF from GitHub. There you can also find Mip-NeRF 360, which can render photorealistic 3D scenes from 360-degree captures, and Ref-NeRF.