Videos: new advances in markerless facial capture

Tuesday, May 7th, 2013 | Posted by Jim Thacker

[Embedded video: Realtime Facial Animation with On-the-fly Correctives]

Two interesting – and, it must be admitted, entirely unconnected – videos have surfaced on YouTube showing new advances in markerless facial motion-capture technology.

At the top end of the market, Realtime Facial Animation with On-the-fly Correctives (above) shows work carried out at ILM and detailed in a paper due to be presented at Siggraph 2013.

The system, which is shown running on a consumer laptop connected to a Kinect, captures facial performance in real time and requires no calibration.

Instead, it starts with a single 3D scan of the actor’s face in a neutral pose, then applies corrective shapes to match the actor’s current expression, based on depth and texture information recorded by the Kinect.

The fitting of the adaptive PCA (Principal Component Analysis) model used to generate the corrections is refined incrementally over time, so the quality of the output improves progressively through a capture session.
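The general idea – fit a standard blendshape model to each depth frame, then learn a PCA basis of corrective shapes from the leftover fitting error as the session goes on – can be sketched roughly as follows. This is purely illustrative, not ILM's code: it assumes the Kinect depth data has already been put into vertex correspondence with the scan, and all function and variable names are ours.

```python
# Illustrative sketch: per-frame blendshape fit plus an on-the-fly PCA corrective.
# Assumes depth_points is already in correspondence with the mesh vertices.
import numpy as np

def fit_frame(depth_points, neutral, blendshapes, pca_basis):
    """Least-squares fit of expression weights, plus a PCA corrective term."""
    # Blendshape model: vertices = neutral + B @ w
    B = blendshapes.reshape(len(blendshapes), -1).T      # (3V, K)
    target = (depth_points - neutral).ravel()            # (3V,)
    w, *_ = np.linalg.lstsq(B, target, rcond=None)       # expression weights

    residual = target - B @ w                            # what the blendshapes miss
    if pca_basis is not None:
        c = pca_basis.T @ residual                       # project residual onto corrective basis
        corrective = pca_basis @ c
    else:
        corrective = np.zeros_like(residual)

    fitted = neutral + (B @ w + corrective).reshape(neutral.shape)
    return w, fitted, residual

def update_pca(residual_history, n_components=10):
    """Re-learn the corrective basis from all residuals seen so far."""
    R = np.asarray(residual_history)                     # (frames, 3V)
    R = R - R.mean(axis=0)
    # SVD of the centred residuals gives the principal corrective directions
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    return Vt[:n_components].T                           # (3V, n_components)
```

Because the corrective basis is rebuilt from an ever-growing pool of residuals, the model adapts to the actor without any explicit calibration step – which is the behaviour the video demonstrates.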

As you might expect from ILM, the results look pretty impressive – although the researcher demonstrating the system probably won’t be winning any voice-acting jobs in the near future.

[Embedded video: Snappers Systems facial capture demo]

At the opposite end of the spectrum, Egyptian start-up Snappers Systems has posted a new demo of its own as-yet-untitled facial capture system, this time using a head-mounted camera.

Again, the results look good, with some real subtlety in the brow and mouth movements.

There’s no technical information on the website, but you can see earlier demo videos on the company’s Facebook page, and read an older interview with the developers on CGArena.
