Wednesday, January 5th, 2022 Posted by Jim Thacker

Omniverse Audio2Face now generates facial blendshapes

Nvidia has released Omniverse Audio2Face 2021.3.2, the latest version of its experimental free AI-based software for generating facial animation from audio sources.

The release adds the option to generate a set of facial blendshapes spanning a wide range of expressions for a custom head model, then export them in USD format for editing in software like Maya or Blender.

Generate automatic lip-sync and facial animation for Character Creator characters from audio files
First released last year, Audio2Face is an AI-based tool for generating facial animation for a 3D character from audio sources: either offline recordings of speech, or a live audio feed.

Along with sister app Omniverse Machinima, the software is one of a set of new games tools Nvidia is developing around Omniverse, its new USD-based real-time collaboration platform.

New options for controlling facial animations via a facial blendshape-based workflow
In the initial release, the only way to modify the animation Audio2Face generates was via post-processing parameters, but Nvidia has since begun implementing an alternative workflow based on facial blendshapes.

Audio2Face 2021.3.0 added the option to link a custom blendshape-driven character asset to the base Audio2Face asset, with 2021.3.1 adding controls for the symmetry of the solve.

To that, Audio2Face 2021.3.2 adds the option to generate a set of blendshapes for a custom head model.

The video above shows the software being used to transfer a set of 46 readymade blendshapes covering a standard range of facial expressions from the A2F asset to a custom head.

The transfer process can be controlled by adjusting correspondence points – markers that identify the locations of the same facial features on the two head models.

The process preserves UVs, making it possible to reuse the original facial textures.

The resulting set of blendshapes can then be exported in USD format, making it possible to edit individual facial shapes in DCC applications capable of importing USD files, like Maya or Blender.
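Under the hood, blendshape-based facial animation of this kind amounts to a per-vertex linear combination: each exported shape stores offsets from the neutral head, and the solver outputs a weight per shape per frame. A minimal illustrative sketch of that blending step (toy data and function names are assumptions, not Nvidia's implementation or the USD schema):

```python
def apply_blendshapes(neutral, deltas, weights):
    """Blend a neutral mesh with weighted per-vertex shape offsets.

    neutral: list of (x, y, z) vertex positions for the neutral head
    deltas:  dict mapping shape name -> list of (dx, dy, dz) offsets
    weights: dict mapping shape name -> float weight, as a solver
             like Audio2Face might output per frame (hypothetical data)
    """
    out = [list(v) for v in neutral]
    for name, w in weights.items():
        for i, (dx, dy, dz) in enumerate(deltas[name]):
            out[i][0] += w * dx
            out[i][1] += w * dy
            out[i][2] += w * dz
    return [tuple(v) for v in out]

# Toy example: a single vertex driven by two shapes at once
neutral = [(0.0, 0.0, 0.0)]
deltas = {"jawOpen": [(0.0, -1.0, 0.0)], "smile": [(0.5, 0.2, 0.0)]}
blended = apply_blendshapes(neutral, deltas, {"jawOpen": 0.5, "smile": 1.0})
print(blended)  # the vertex moves by the weighted sum of both offsets
```

Because the combination is linear, editing an individual exported shape in Maya or Blender only changes that shape's contribution, which is what makes the per-shape USD export useful.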

Other new features: new Streaming Audio Player, support for MetaHumans
Other new features in Audio2Face 2021.3.2 include a Streaming Audio Player, for streaming audio data into the software from external sources like text-to-speech applications via the gRPC protocol.
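A client pushing audio to a streaming player of this kind would typically break its PCM output into fixed-size chunks, sending one chunk per gRPC message. A minimal sketch of the client-side chunking (the sample rate, chunk length and the synthesised tone standing in for text-to-speech output are all illustrative assumptions, not Nvidia's actual API):

```python
import math
import struct

SAMPLE_RATE = 16000    # samples per second (assumed)
CHUNK_SAMPLES = 1600   # 100 ms of audio per streamed message (assumed)

def synth_tone(duration_s=1.0, freq_hz=220.0):
    """Generate mono 16-bit PCM samples (stand-in for TTS output)."""
    n = int(SAMPLE_RATE * duration_s)
    return [int(32767 * 0.3 * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n)]

def chunk_pcm(samples, chunk_samples=CHUNK_SAMPLES):
    """Yield fixed-size little-endian int16 byte frames for a streaming RPC."""
    for i in range(0, len(samples), chunk_samples):
        frame = samples[i:i + chunk_samples]
        yield struct.pack("<%dh" % len(frame), *frame)

if __name__ == "__main__":
    # Each frame here would become the payload of one gRPC stream message.
    frames = list(chunk_pcm(synth_tone()))
    print(len(frames))  # 10 frames of 100 ms each for 1 s of audio
```

In a real integration the frame bytes would be wrapped in the request messages defined by the service's protobuf schema and sent over a client-streaming gRPC call; the chunking logic itself is the same.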

Since we last wrote about the software, Audio2Face 2021.3.1 added the option to use Audio2Face animations on MetaHumans: 3D characters generated with Epic Games’ free MetaHuman Creator app.

You can see the workflow for transferring facial animation data from Audio2Face to a MetaHuman inside Unreal Engine via the Omniverse Unreal Engine 4 connector in this video.

Pricing and system requirements
Omniverse Audio2Face is available for Windows 10. It requires an Nvidia RTX GPU: the firm recommends a GeForce RTX 3070 or RTX A4000 or higher. All of the Omniverse tools are free to individual artists.

Read a full list of new features in Omniverse Audio2Face 2021.3.2 in the online release notes

Download Omniverse Audio2Face from Nvidia’s product website