Just a few years ago, Berkeley engineers showed how they could easily turn images into a navigable 3D scene using a technology called Neural Radiance Fields, or NeRF. Now, another team of Berkeley researchers has created a development framework to help speed up NeRF projects and make this technology more accessible to others.
Led by Angjoo Kanazawa, assistant professor of electrical engineering and computer sciences, the researchers have developed Nerfstudio, a Python framework that provides plug-and-play components for implementing NeRF-based methods, making it easier to collaborate and to incorporate NeRF into projects. Kanazawa and her team will present their paper on Nerfstudio at SIGGRAPH 2023 and have published it in the ACM SIGGRAPH 2023 Conference Proceedings.
“Advancements in NeRF have contributed to its growing popularity and use in applications such as computer vision, robotics, visual effects and gaming. But support for development has been lagging,” said Kanazawa. “The Nerfstudio framework is intended to simplify the development of custom NeRF methods, the processing of real-world data and interacting with reconstructions.”
This new framework is already helping a wide cross-section of engineers who employ interactive computer graphics in their work, particularly those seeking to create 3D reconstructions in real-world settings. This includes roboticists who use NeRF for manipulation, motion planning, simulation and mapping, as well as gaming studios and news outlets that use interactive graphics to tell stories.
“Researchers as well as industry groups are now using Nerfstudio because it provides an open-source framework, along with the latest NeRF research. It makes it easier for people to begin using NeRFs without having to start from scratch,” said Matt Tancik, the paper’s lead author and a Ph.D. student in Kanazawa’s lab. “So even if you’re doing cutting-edge research, just having this as a baseline, or a starting point, can speed things up a lot.”
Since the introduction of NeRF, researchers worldwide have been working to improve the core technology, from speeding up real-time image rendering and training to developing new editing features. They also have been trying to make NeRF work in new situations, such as when light changes between photos or when objects move within a scene. But this work is often performed by research groups using proprietary repositories, making it difficult to share these contributions with the larger NeRF community.
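At the core of all of these methods is the same basic idea: a neural network maps a 3D position and viewing direction to a color and a density, and images are produced by volume rendering those predictions along camera rays. The snippet below is a minimal, illustrative sketch of that rendering step in NumPy; it is not Nerfstudio's implementation, and the function and variable names are hypothetical.

```python
import numpy as np

def volume_render(colors, densities, deltas):
    """Composite per-sample colors along one camera ray (classic NeRF weighting).

    colors:    (N, 3) RGB predicted at N samples along the ray
    densities: (N,)   non-negative volume density at each sample
    deltas:    (N,)   distance between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i without being absorbed earlier
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = alphas * trans
    # Expected pixel color is the weighted sum of the sample colors
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 random samples along a single ray
rng = np.random.default_rng(0)
pixel = volume_render(rng.random((64, 3)), rng.random(64), np.full(64, 0.03))
print(pixel)  # one RGB value for this ray
```

Much of the research the article mentions, such as faster rendering and training, amounts to making this per-ray computation cheaper or replacing parts of it with more efficient data structures.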
Nerfstudio addresses these challenges by providing a modular framework that “consolidates these research innovations.” In addition, it fosters “community-driven development” by making the associated code and data publicly available through open-source licensing.
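To give a flavor of what plug-and-play modularity can look like, here is a deliberately simplified sketch in the spirit of such a framework. The class and field names are hypothetical illustrations, not Nerfstudio's actual API; the point is that a method is assembled from swappable components (a data parser, a model, training settings) described by a single configuration object.

```python
from dataclasses import dataclass
from typing import Dict, Type

# Hypothetical component classes and registries; a real framework would register
# dataparsers, fields/models and renderers contributed by different methods.
class ColmapDataparser: ...
class NerfactoLikeModel: ...

DATAPARSERS: Dict[str, Type] = {"colmap": ColmapDataparser}
MODELS: Dict[str, Type] = {"nerfacto-like": NerfactoLikeModel}

@dataclass
class MethodConfig:
    """Describes a NeRF method as a combination of named, swappable components."""
    dataparser: str = "colmap"
    model: str = "nerfacto-like"
    train_steps: int = 30_000

def build_pipeline(cfg: MethodConfig):
    # Trying a new component is a config edit, not a rewrite of the training pipeline.
    return DATAPARSERS[cfg.dataparser](), MODELS[cfg.model](), cfg.train_steps

if __name__ == "__main__":
    print(build_pipeline(MethodConfig(train_steps=10_000)))
```

In a design like this, a research group can contribute a new model or data parser to the shared registries, and everyone else can pick it up with a one-line configuration change.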
“We set out to create a platform in which people can create new modules and techniques that others can then use,” said Tancik. “Ultimately, the goal is for Nerfstudio to be an open-source community project that researchers will feel interested in working with and also helping to push further.”
Presently, 20 Berkeley engineers are actively contributing to Nerfstudio and helping to maintain it. And as many as 100 people outside the university have already contributed to the core code since its launch in October 2022.
Nerfstudio also enables users to easily run NeRFs on real-world data they collect, a common challenge for developers. At the same time, it makes this technology more accessible to users without NeRF expertise, such as special effects studios and social media users.
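In practice, the workflow described in the Nerfstudio documentation goes from captured photos or video to a trained, viewable reconstruction in a few commands. The sketch below wraps that flow in Python purely for illustration; it assumes nerfstudio is installed and that the ns-process-data, ns-train and ns-viewer entry points behave as documented for the version in use, so exact flags and paths may differ.

```python
import subprocess

# Assumed paths for illustration; replace with your own capture and output folders.
RAW_IMAGES = "data/my_capture"           # photos taken while walking around a scene
PROCESSED = "data/my_capture_processed"  # estimated camera poses + processed images

# 1) Estimate camera poses from the raw photos.
subprocess.run(["ns-process-data", "images",
                "--data", RAW_IMAGES, "--output-dir", PROCESSED], check=True)

# 2) Train a NeRF on the processed capture ("nerfacto" is the project's default method).
subprocess.run(["ns-train", "nerfacto", "--data", PROCESSED], check=True)

# 3) Explore the reconstruction interactively in the web viewer. The config path is
#    printed at the end of training; the one below is a placeholder.
subprocess.run(["ns-viewer", "--load-config",
                "outputs/my_capture_processed/nerfacto/config.yml"], check=True)
```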
“It’s kind of exciting that everything is out in the open,” said Tancik. “It’s incorporating the cutting-edge research you have, with both researchers wanting to push it forward and people who just want to use the tech.”
More information:
Matthew Tancik et al., Nerfstudio: A Modular Framework for Neural Radiance Field Development, ACM SIGGRAPH 2023 Conference Proceedings (2023). DOI: 10.1145/3588432.3591516. Preprint on arXiv: DOI: 10.48550/arXiv.2302.04264
Citation: Open-source platform makes it easier to create 3D scenes from images (2023, July 27), retrieved 27 July 2023 from https://techxplore.com/news/2023-07-open-source-platform-easier-3d-scenes.html