Tuesday, April 13th, 2021 Posted by Jim Thacker

Omniverse Machinima and Audio2Face to enter beta

Omniverse Machinima will enable users to create animation using game assets, as shown in this making-of video for Nvidia’s original trailer for the app, which is due to launch in open beta at the end of GTC 2021.


Nvidia is to release Omniverse Machinima, its new software for creating cinematic animation from stock game assets, in open beta at the end of this week’s GTC 2021 conference.

Omniverse Audio2Face, a companion tool for generating facial animation automatically from audio sources, is also due to become available in open beta at the end of the week.

Both will be available as apps within Omniverse, Nvidia’s USD-based online collaboration platform, itself currently in free open beta.

Create cinematic-quality animations using stock props and characters from videogames
Originally announced last fall, Omniverse Machinima will enable users to create animation from existing game assets, supported by a set of accompanying AI-driven animation tools.

According to Nvidia’s latest blog post, it comes with a library of assets from commercial games, including contemporary FPS Squad and medieval strategy title Mount & Blade: Warband.

Users will also be able to import stock content from online marketplaces, or custom assets created in other DCC applications, via Omniverse’s growing library of Connector plugins.

The assets can then be assembled into background environments within Omniverse: the video above shows a brush-based layout workflow, with a user painting trees over terrain.
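Under the hood, Omniverse scenes are described in USD, so a layout like this ultimately amounts to authoring referenced prims and transforms in a USD layer. Purely as an illustrative sketch of that idea, not Machinima’s actual brush tool, the Python snippet below uses Pixar’s pxr bindings to scatter tree props at random positions; the asset path and counts are placeholders.

from pxr import Gf, Usd, UsdGeom
import random

# Author a simple USD layer scattering referenced tree props over the ground
stage = Usd.Stage.CreateNew("forest_layout.usda")
UsdGeom.Xform.Define(stage, "/World")

for i in range(50):
    tree = UsdGeom.Xform.Define(stage, f"/World/Tree_{i:03d}")
    # "assets/tree.usd" is a placeholder path for a prop from the library
    tree.GetPrim().GetReferences().AddReference("assets/tree.usd")
    # Randomise position and heading so the stand of trees looks natural
    xform = UsdGeom.XformCommonAPI(tree.GetPrim())
    xform.SetTranslate(Gf.Vec3d(random.uniform(-50, 50), 0.0,
                                random.uniform(-50, 50)))
    xform.SetRotate(Gf.Vec3f(0.0, random.uniform(0.0, 360.0), 0.0))

stage.GetRootLayer().Save()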



Audio2Face converts audio recordings of dialogue into lip-sync and facial animation in real time
For character animation, Nvidia provides a set of AI-trained add-on tools for generating believable human motion from audio or video recordings.

They include Omniverse Audio2Face, Nvidia’s own app for converting audio into facial animation.

As well as processing offline recordings of dialogue, it will be possible to connect a microphone and perform live, driving a 3D character’s facial animation in real time.
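Nvidia hasn’t published the internals of that live mode, but the general pattern – capture short blocks of microphone audio and map each block to a set of blendshape weights – is straightforward to sketch. The Python below is a hypothetical illustration assuming the sounddevice library; audio_to_blendshapes is a stub standing in for the trained network.

import numpy as np
import sounddevice as sd  # assumed third-party library for microphone capture

BLOCK = 1600  # 100 ms blocks at 16 kHz, a typical frame size for speech models

def audio_to_blendshapes(samples: np.ndarray) -> dict:
    # Stub for the trained audio-to-face network: derive a crude
    # jaw-open weight from the block's signal energy
    energy = float(np.sqrt(np.mean(samples ** 2)))
    return {"jawOpen": min(1.0, energy * 20.0)}

def on_block(indata, frames, time, status):
    weights = audio_to_blendshapes(indata[:, 0])
    print(weights)  # a real pipeline would push these onto the character rig

with sd.InputStream(samplerate=16000, blocksize=BLOCK, channels=1,
                    callback=on_block):
    sd.sleep(10_000)  # capture the microphone for ten seconds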

Users can “edit various post-processing parameters” to adjust the resulting performance, with support for direct blendshape editing due “at a later date”.

Audio2Face can also retarget facial animation from one character to another.

Full-body pose estimation tools developed by AI specialist wrnch
For full-body animation, Nvidia has partnered with AI specialist wrnch.

Users can record an actor’s movements using an iPad or iPhone camera via free iOS app wrnch Capture, then stream the motion data to a 3D character in Omniverse via the wrnch AI Pose Estimator extension.
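wrnch hasn’t documented the stream publicly, so the details here are assumptions, but architecturally this is a capture app pushing per-frame joint data over the network to a listener inside Omniverse. As a purely hypothetical sketch, the Python below receives JSON pose packets over UDP; the port number and packet schema are invented for illustration, not wrnch’s real protocol.

import json
import socket

# Hypothetical listener for streamed pose data; port and schema are assumptions
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))

while True:
    packet, _ = sock.recvfrom(65536)
    pose = json.loads(packet)
    # Assumed schema: {"joints": {"hips": [x, y, z], "l_knee": [x, y, z], ...}}
    for joint, position in pose.get("joints", {}).items():
        print(joint, position)  # an extension would retarget these to the rig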

Add effects and render out using the Omniverse platform’s built-in tools
Once character work is complete, Machinima users can add effects and export the final cinematic animation using the other components of the Omniverse platform.

For smoke, fire and debris, Omniverse integrates Nvidia technologies including real-time dynamics system PhysX, destruction system Blast, and gaseous fluid solver Flow.

The finished animation can then be rendered via Omniverse RTX Renderer, the platform’s GPU-based real-time ray tracing render engine.

Pricing and availability
Omniverse Machinima and Omniverse Audio2Face are due out in open beta later this week, at the end of GTC 2021. Nvidia hasn’t announced a final release date for either app.

Both will be available for Windows 10 only, and require a compatible Nvidia GPU: at least a GeForce RTX 3060 or Quadro RTX 4000 in the case of Machinima; any RTX card in the case of Audio2Face.

The tools are provided as apps within Omniverse, so to use them, you will need to install Omniverse first, then install them via the Apps section of the Omniverse Launcher.


Read Nvidia’s release announcements for Omniverse Machinima and Omniverse Audio2Face

Read more about Omniverse Machinima on Nvidia’s website

Read more about Omniverse Audio2Face on Nvidia’s website