Nvidia unveils Omniverse Machinima
A demo created “in a few days” with assets from Mount & Blade II: Bannerlord in Omniverse Machinima, Nvidia’s upcoming app for creating 3D animations from game models using new AI-based tools.
Updated 13 April 2021: Nvidia has now announced the open beta of Omniverse Machinima. Details here.
The app, which is due in beta next month, features AI-based systems for generating character animation from webcam footage and audio recordings, plus real-time effects and rendering tools.
New Nvidia tools for games streamers and machinima creators
Although the biggest news from yesterday’s launch event for Nvidia’s new GeForce RTX 30 Series GPUs was, not surprisingly, the cards themselves, Nvidia also announced two new software products.
The first, Nvidia Broadcast, is aimed at livestreamers and video conferencing, featuring AI-driven tools for noise reduction, replacing the background of webcam footage and automatic video reframing.
The other was Omniverse Machinima, an upcoming app for generating CG animation from game assets and other stock content using a semi-automated, AI-driven workflow.
Although dedicated machinima authoring tools aren’t new, the selling point of Omniverse Machinima over Valve’s Source Filmmaker or the internal recording modes of titles like GTA V looks to be its flexibility.
It’s built on Omniverse, Nvidia’s work-in-progress “open collaboration” platform for 3D production, which the firm described as “essentially Google Docs for 3D design” when it was unveiled last year.
Animate stock game characters using new AI-driven tools
With Nvidia’s subsequent demos having focused more on architectural visualisation and industrial design, Omniverse Machinima is one of the first showings of the technology for entertainment work.
Users will be able to import assets from games, from online marketplaces for stock content, or from DCC applications – Omniverse has connectors for Autodesk and Adobe software and Unreal Engine.
Characters can then be animated via two AI technologies: Pose Estimator, which generates motion based on video footage of a live actor, and Audio2Face, which generates facial animation from recorded speech.
In the demo at the top of the story, the results look functional, although definitely further into Uncanny Valley territory than those from tools like Adobe’s Character Animator.
The results can be output using Omniverse’s real-time renderer, powered by Nvidia’s current-generation RTX GPUs, making it possible to export “film-quality cinematics”.
Pricing and availability
Omniverse Machinima is due in beta in October 2020. Nvidia hasn’t given pricing or system requirements.