The latest version of Nvidia’s Deep Learning Super Sampling (DLSS) is already a major selling point for some of its best graphics cards, but Nvidia has much bigger plans. According to Bryan Catanzaro, Nvidia’s vice president of Applied Deep Learning Research, a hypothetical DLSS 10 could deliver full neural rendering, bypassing the need for graphics cards to render frames through the traditional pipeline.
During a roundtable discussion hosted by Digital Foundry, Catanzaro delved deeper into what DLSS could evolve into in the future, and what kinds of problems machine learning might be able to tackle in games. We already have DLSS 3, which can generate entire frames — a huge step up from DLSS 2, which could only generate pixels. Now, Catanzaro says with confidence that the future of gaming lies in neural rendering.
“I feel like we’re going to have increased realism and also, hopefully, make it cheaper to make awesome AAA kinds of environments by moving to much, much more neural rendering,” said Catanzaro. “I feel it’s going to be a gradual process.”
Neural rendering might sound like a futuristic concept, but Nvidia has had it working for a long time now. At the NeurIPS conference in 2018, the company showed a demo of an open-world game being rendered in real time by a neural network. The graphics looked very simple even by 2018 standards, let alone today’s, but the scene was being generated entirely by neural rendering. All of this was based on data and prompts from the UE4 game engine that determined which objects should appear in the scene and where they’d be placed.
While Catanzaro believes that the hypothetical DLSS 10 will be a “completely neural rendering system,” he also touched on the human aspect of creating games. If AI can generate everything, won’t that make game devs and artists obsolete? Catanzaro doesn’t believe so.
“The thing about the traditional 3D pipeline and game engines that’s so important is that it’s controllable. You can have teams of artists build things and they have coherent stories and coherent locations and everything. You can actually build a world with these tools, and we’re going to need those tools for sure,” Catanzaro said. “I do not believe that AI is going to build games in a way where you just write a paragraph about ‘make me a cyberpunk game, and I want a lot of neon reflections and really tall buildings with occlusions,’ and pop, comes Cyberpunk.”
Jakub Knapik, vice president of art and global art director at CD Projekt Red (the studio behind Cyberpunk 2077), reacted to the prospect of DLSS 10 by saying, “[it] scares me like crazy.” However, Knapik sees the potential of using machine learning to enhance interactivity in games.
This is also something Nvidia is working on, under the name Nvidia ACE. The tech aims to help developers make their games more interactive and responsive by essentially giving NPCs a built-in chatbot. Further updates to the platform even let devs crank up how toxic a character should be toward the player.
DLSS 10, if it’s even called that by the time it’s released, is still a long way off. However, it certainly seems like the future of gaming may lie in AI, whether for upscaling or for generating a huge portion of the game itself.