RawNeRF: Google's new image AI brings light into the dark

Image: Ben Mildenhall/Google


With RawNeRF, Google scientists introduce a new image synthesis tool that can create well-lit 3D scenes from dark 2D photos.

In the summer of 2020, a Google research team first introduced “Neural Radiance Fields” (NeRF): in a nutshell, a neural network learns how light is emitted and absorbed at every point in a scene. Based on this, the system can automatically reconstruct a 3D scene from multiple 2D photos of the same scene. Imagine a kind of automated photogrammetry that reduces manual effort and the number of photos while generating flexible, customizable, high-quality 3D scenes.
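To make the core idea concrete, here is a minimal, hypothetical sketch in JAX of how a NeRF turns a trained network into a pixel: the network is queried at sample points along a camera ray, and its predicted densities and colors are alpha-composited. The `mlp` function and its parameters are placeholders for illustration, not Google's actual implementation.

```python
import jax.numpy as jnp

def render_ray(params, mlp, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite a pixel color along one ray by querying a radiance field MLP.

    `mlp(params, points, direction) -> (density, rgb)` is an assumed interface.
    """
    # Sample points at evenly spaced depths between the near and far planes.
    t = jnp.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction           # (n_samples, 3)

    # The field returns a volume density and an RGB color per sample point.
    density, rgb = mlp(params, points, direction)      # (n,), (n, 3)

    # Standard alpha compositing: per-segment opacity, then the light that
    # survives all earlier segments (transmittance).
    delta = jnp.diff(t, append=1e10)                   # segment lengths
    alpha = 1.0 - jnp.exp(-density * delta)
    trans = jnp.cumprod(1.0 - alpha + 1e-10)
    trans = jnp.concatenate([jnp.ones(1), trans[:-1]])
    weights = alpha * trans

    return jnp.sum(weights[:, None] * rgb, axis=0)     # final pixel color
```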

Over the past two years, Google teams have regularly demonstrated new use cases for NeRF, such as Google Maps Immersive View or rendering 3D Street View based on photos and depth data.

RawNeRF processes RAW images

With RawNeRF, a research team led by AI researcher Ben Mildenhall now presents a NeRF variant that can be trained on raw camera data. This image data retains the full dynamic range of a scene.

According to the research team, thanks to raw data training, RawNeRF can “reconstruct scenes from extremely noisy images captured in near darkness.” Additionally, the camera’s point of view, focus, exposure, and tone mapping can all be changed after the fact.

“Direct training on raw data effectively turns RawNeRF into a multi-image denoiser capable of combining information from tens or hundreds of input images,” the team writes.
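The paper describes fitting the network directly to noisy linear raw values with a reweighted L2 loss, so that errors in dark regions count roughly as much as errors in bright ones after tone mapping. A rough sketch of that idea follows; the function name and epsilon value are illustrative, not the team's code.

```python
import jax
import jax.numpy as jnp

def raw_weighted_loss(pred_linear, target_raw, eps=1e-3):
    """Weighted L2 loss on linear raw values.

    Dividing the error by a stop-gradient of the prediction boosts the
    contribution of dark pixels; the stop-gradient keeps the weighting
    itself out of backpropagation.
    """
    weight = 1.0 / (jax.lax.stop_gradient(pred_linear) + eps)
    return jnp.mean(((pred_linear - target_raw) * weight) ** 2)
```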


You can download RawNeRF from GitHub. There you can also find mip-NeRF 360, which can render photorealistic 3D scenes from 360-degree captures, and Ref-NeRF.

Sources: Ben Mildenhall on GitHub

