Tuesday, May 7th, 2013 Posted by Jim Thacker

Videos: new advances in markerless facial capture

Two interesting – and, it must be admitted, entirely unconnected – videos have surfaced on YouTube showing new advances in markerless facial motion-capture technology.

At the top end of the market, Realtime Facial Animation With On-the-fly Correctives (above) shows work carried out at ILM, detailed in a paper due to be presented at Siggraph 2013.

The system, which is shown running on a consumer laptop connected to a Kinect, captures facial performance in real time and requires no calibration.

Instead, it starts with a single 3D scan of the actor’s face in a neutral pose, then applies corrective shapes to match the actor’s current expression, based on depth and texture information recorded by the Kinect.

The fitting of the adaptive PCA (Principal Component Analysis) model used to generate the corrections improves incrementally over time, so the quality of the output gets progressively better over the course of a capture session.
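The core idea of fitting a PCA model to observed geometry can be illustrated with a toy example. The sketch below is not ILM's method, just a minimal, hypothetical illustration: given a neutral face and a small PCA basis of shape modes, it solves a regularized least-squares problem for the coefficients that best explain the observed (e.g. Kinect-recorded) geometry.

```python
import numpy as np

def fit_pca_coefficients(neutral, basis, observed, reg=1e-3):
    """Least-squares fit of PCA coefficients w so that
    neutral + basis @ w approximates the observed geometry.
    A small Tikhonov regularization term keeps the solve stable.
    (Hypothetical simplification of corrective-shape fitting.)"""
    A = basis                       # (n_points, n_components)
    b = observed - neutral          # residual the modes must explain
    # Regularized normal equations: (A^T A + reg*I) w = A^T b
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)

# Toy data: scalar "depth" values for 5 vertices, 2 shape modes
neutral = np.zeros(5)
basis = np.array([[1., 0.],
                  [1., 0.],
                  [0., 1.],
                  [0., 1.],
                  [0., 0.]])
observed = np.array([2., 2., -1., -1., 0.])

w = fit_pca_coefficients(neutral, basis, observed)
reconstruction = neutral + basis @ w
```

In a real-time system like the one described, a solve of this kind would be repeated per frame, with the basis itself refined as more of the actor's expression range is observed.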

As you might expect from ILM, the results look pretty impressive – although the researcher demonstrating the system probably won’t be winning any voice-acting jobs in the near future.

At the opposite end of the spectrum, Egyptian start-up Snappers Systems has posted a new demo of its own, as-yet-untitled, facial capture system, this time using a head-mounted camera.

Again, the results look good, with some real subtlety in the brow and mouth movements.

There’s no technical information on the website, but you can see earlier demo videos on the company’s Facebook page, and read an older interview with the developers on CGArena.