Nvidia GANverse3D is the AI that turns 2D images into 3D models


The program of Nvidia's GPU Technology Conference 2021 is particularly dense and varied: in addition to the customary keynote from Jensen Huang's kitchen and the treasure hunt for a GeForce RTX 3090 that accompanied the start of the event, it includes numerous panels for professionals and creators. This year's hunt was marked by a new symbol, a golden neuron whose dendrites end in light bulbs. The reference to genius and creativity was highlighted again during the keynote, summarized in a post on the Nvidia blog with the eloquent title: "Nvidia CEO introduces software, processors and supercomputers for the 'da Vinci' of our time".

The panels of GTC 2021 therefore included plenty of content aimed not only at video game developers, but also at creators, designers and architects. Nvidia aims to diversify its products, as demonstrated by the announcement of Grace, the company's first CPU dedicated to data centers. This renewed dynamism puts Nvidia Omniverse, a multi-GPU real-time simulation and collaboration platform for creators, developers and professionals, at the center. We had the opportunity to test the capabilities of Nvidia Omniverse as early as September 2020, with the release of the GeForce RTX 30 series, thanks to early access to the Machinima, Reflex and Broadcast features.

We had the invitation and the pleasure of following one of these panels, entitled "NVIDIA Graphics / AI Research". Richard Kerris, general manager of Nvidia Omniverse, and Sanja Fidler, director of the Nvidia AI Research Lab in Toronto, presented a surprising new application of AI: a deep learning engine, running inside Nvidia Omniverse, that creates 3D models from simple 2D images. The application is named GANverse3D, and the models it produces can then be placed in virtual environments. How does this new technology work? Researchers from the Nvidia AI Research Lab - a team of more than 200 scientists specializing in AI, computer vision, self-driving cars, robotics and graphics - have devoted an extensive research paper to answering this question. Produced in collaboration with the Universities of Toronto, Stanford and Waterloo, the Vector Institute and MIT's Computer Science and Artificial Intelligence Laboratory, and submitted for peer review on October 18, 2020, the paper sets out in detail how GANverse3D works.

The premise behind GANverse3D for Nvidia Omniverse is "differentiable rendering": because the rendering step can be differentiated through, networks can be trained to perform "inverse graphics" tasks, such as predicting three-dimensional geometry from 2D images. Until now, training such software required images of the object from different points of view - in the automotive industry, for example, a photographer had to walk around a parked vehicle and take pictures from several angles. Generative Adversarial Networks (GANs), a recent application of AI, turn out to acquire a notion of three-dimensionality during training: an improvement over the past, but one that still required detailed multi-view image sources.
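To make the idea of differentiable rendering concrete, here is a deliberately tiny sketch, entirely of our own making and not Nvidia's code: a 3D point is "rendered" by an orthographic projection, and because that projection is differentiable we can recover 3D parameters from a 2D observation by gradient descent. All function names and the setup are illustrative assumptions.

```python
import numpy as np

def render(p3d):
    """Toy orthographic 'renderer': project a 3D point to 2D by dropping z."""
    return p3d[:2]

def loss(p3d, target2d):
    """Squared error between the rendered point and the observed 2D point."""
    diff = render(p3d) - target2d
    return np.dot(diff, diff)

def grad(p3d, target2d):
    """Analytic gradient of the loss w.r.t. the 3D parameters.
    Note the z component gets zero gradient: a single view cannot
    constrain depth, which is why multi-view training data matters."""
    g = np.zeros(3)
    g[:2] = 2.0 * (render(p3d) - target2d)
    return g

target = np.array([0.3, -0.7])   # observed 2D projection
p = np.array([0.0, 0.0, 1.0])    # initial 3D guess
for _ in range(200):             # gradient descent *through* the renderer
    p = p - 0.1 * grad(p, target)

print(np.round(p[:2], 3))        # the recovered x, y match the observation
```

The unconstrained depth in this toy example is exactly the gap that multi-view images, and later GAN-generated viewpoints, are meant to close.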

The Nvidia AI Research Lab's approach proceeds in two steps. First: exploit a GAN as a multi-view data generator to train an inverse graphics network through a differentiable renderer, without necessarily relying on three-dimensional resources and building on existing data sets. At that point the trained inverse graphics network becomes a teacher for the GAN, which then learns to turn a two-dimensional image into a 3D render. The GAN thus becomes a controllable 3D "neural renderer", complementary to traditional graphics renderers. The paper closes with an important note from the researchers, who state that "our approach achieves significantly higher quality 3D reconstruction results, while requiring 10,000× less annotation effort than standard data sets [...]".
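The two-step recipe described above can be summarized in a schematic sketch. Every function here is a placeholder stub with names we chose for illustration; the real system uses a StyleGAN-style generator and a differentiable renderer, none of which is reproduced here.

```python
import numpy as np

def gan_generate_multiview(n_views):
    """Step 1a: use a GAN as a data generator, sampling the same object
    from several viewpoints (stubbed here as random stand-in images)."""
    rng = np.random.default_rng(0)
    return [rng.random((64, 64, 3)) for _ in range(n_views)]

def train_inverse_graphics(views):
    """Step 1b: fit an inverse graphics network, via a differentiable
    renderer, on the multi-view data; it predicts 3D attributes
    (mesh vertices, texture, ...) - stubbed as zero arrays."""
    return {"vertices": np.zeros((100, 3)), "texture": np.zeros((64, 64, 3))}

def distill_into_neural_renderer(model3d):
    """Step 2: the trained inverse graphics network 'teaches' the GAN,
    turning it into a controllable 3D neural renderer that maps a
    single 2D image to a 3D asset."""
    def neural_renderer(image2d):
        return model3d  # stub: returns the fitted 3D attributes
    return neural_renderer

views = gan_generate_multiview(n_views=8)
model3d = train_inverse_graphics(views)
renderer = distill_into_neural_renderer(model3d)
asset = renderer(np.zeros((64, 64, 3)))  # one 2D image in, 3D attributes out
print(asset["vertices"].shape)
```

The key structural point the sketch preserves is the role reversal: the GAN first supplies training data, and is then itself retrained into the single-image 3D renderer.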

Without resorting to any three-dimensional resources, "we turned a GAN model into a very efficient data generator, so that we can create 3D objects from any 2D image on the web," said Wenzheng Chen, one of the paper's authors. "Because we trained the GAN on real images, rather than on stock images based on synthetic data, the AI processes 3D models more accurately and pulls them from the real world," said NVIDIA researcher Jun Gao, another author of the project. The research behind GANverse3D will be presented at two upcoming conferences: in May, at the International Conference on Learning Representations, and in June, at the Conference on Computer Vision and Pattern Recognition.



After training, GANverse3D needs only one 2D image to predict a 3D model. As an extension of the Nvidia Omniverse platform running on Nvidia RTX GPUs, it can turn any two-dimensional image into a three-dimensional render that is fully controllable by developers, who can modify it further. Not only that: thanks to Omniverse Connectors, developers can use their favorite 3D applications inside Omniverse to simulate complex virtual worlds, including with real-time ray tracing. Together, these features, and GANverse3D in particular, could help architects, creators, game developers and designers easily add new objects to their mockups, without requiring specific 3D modeling skills or a larger share of their budget for producing renderings.

On the Nvidia blog it is possible to see a first application of GANverse3D, which we were lucky enough to watch live during the conference: the creation of a three-dimensional model of KITT, the super-technological car that appeared alongside David Hasselhoff in Knight Rider (broadcast in Italy as Supercar). To recreate KITT, the researchers used an image taken from the web and let GANverse3D create a corresponding textured 3D mesh, including vehicle details such as wheels and headlights. They then used the NVIDIA Omniverse Kit and NVIDIA PhysX tools to convert the base texture into high-quality materials and give KITT a more realistic look. Finally, they placed it in a dynamic driving sequence together with other cars. All in just over 40 minutes of presentation, streaming the entire process live.

"Omniverse allows researchers to bring interesting and cutting-edge tools directly to creators and end users," said Jean-Francois Lafleche, deep learning engineer at NVIDIA. "Offering GANverse3D as an extension of Nvidia Omniverse will help artists create richer virtual worlds for game development, urban planning, or even for training new machine learning models." In the final hours of the GTC we may receive, as anticipated by Jensen Huang in his opening keynote, further details on the announcements and research that Nvidia is proposing for the "da Vinci of today".

Do you want to become a creator, or simply enjoy the novelties presented during the GTC? Nvidia products are available on Amazon.







