Deepfakes aren’t the only thing video sleuths will need to worry about in the future. This week, Nvidia announced that its GPU Technology Conference (GTC) keynote, which took place back in April, was made almost entirely with its own Omniverse CG platform. For months, Nvidia fooled everyone into believing its GTC 2021 conference was real — and we’ll see a lot more of that in the coming years.
Since the start of the pandemic, Nvidia CEO Jensen Huang has delivered keynotes from his kitchen. GTC 2021 still featured a kitchen keynote, but this time with an entirely virtual kitchen made in Omniverse. Even more impressive, the Nvidia team managed to create a CG model of Huang that delivered part of the keynote.
Of course, the conference wasn’t entirely fake. Huang still spoke, and the CG model was only on screen for a brief time. “To be sure, you can’t have a keynote without a flesh and blood person at the center. Through all but 14 seconds of the hour and 48-minute presentation — from 1:02:41 to 1:02:55 — Huang himself spoke in the keynote,” Nvidia wrote in a blog post.
Omniverse is Nvidia’s platform for creating and animating 3D models in a virtual space. Like other 3D programs, it uses simulations, material assets, and lighting, but it accelerates them with Nvidia RTX graphics cards. That gives designers a chance to view ray-traced lighting in real time and adjust the scene accordingly.
As the name implies, Omniverse connects artists and the tools they use. The platform itself supports real-time collaboration, and it brings together assets from multiple 3D applications. In the Connecting in the Metaverse documentary, Nvidia specifically calls out Unreal Engine and Autodesk Maya, which some designers used alongside Omniverse to make the GTC conference.
Virtual conferences are the new normal for many tech companies, and although some boil down to nothing more than a PowerPoint presentation and a speaker, Nvidia showed that they can be much more. What’s surprising about the GTC 2021 keynote isn’t that it was virtual, but that Nvidia was able to hide that fact for months.
It underscores just how easy it is to trick a large audience into believing graphics are real, and it’s something that we’ll continue to see for years to come. “If we do this right, we’ll be working in Omniverse 20 years from now,” Rev Lebaredian, vice president of Omniverse engineering and simulation at Nvidia, said.
Still, the technology isn’t perfect. During the brief time CG Huang is on screen, it’s easy to see that CG is at work thanks to some stiff animation and a slightly out-of-sync voiceover. The kitchen is a different story. Even after rewatching GTC 2021 knowing that the kitchen is fake, it’s almost impossible to spot the difference between the Omniverse model and the real thing.
And now, tools for developing these kinds of models are easier than ever to access. In addition to free programs like Blender, there are tools like Unreal Engine’s MetaHuman, which can generate a realistic character model in less than an hour.
That’s exciting for the world of CG, but it also carries a risk. The rise of deepfakes over the past few years has made it more difficult to tell real from fake, and as Nvidia proved with its GTC 2021 conference, you can trick a large audience into believing something rendered by a computer is real.
Hopefully, those tools will be used for good — or at least for something as harmless as a months-long grift in which Nvidia held its tongue about a virtual conference that everyone thought was real.