The FreeMoCap Project: markerless mocap for $100?
Introducing the @freemocap system! A free, open-source framework for easy-to-use, low-cost markerless motion capture!
✨✨ https://t.co/i2PSSt4PuP
The current iteration relies on #anipose, #openpose and @DeepLabCut. Animation via @matplotlib
#opensource #OpenScience pic.twitter.com/kukQ7EtjFU
— Jon Matthis (@JonMatthis) July 21, 2021
Researcher Jonathan Matthis has launched The FreeMoCap Project: an ambitious attempt to develop a low-cost, research-quality markerless optical motion-capture system.
The promising open-source framework can generate full-body skeletal motion from footage of an actor captured on two USB webcams, and comes with a Blender integration plugin.
Matthis says that his ultimate aim is to enable “a 14-year-old with no technical training and no outside assistance to recreate a research-grade motion capture system for less than 100 US dollars”.
Warning: it’s still very early in development
First up, a caveat: in its initial release, The FreeMoCap Project is more “one to watch” than “one to use”.
There isn’t any documentation, so you’ll need a bit of tech savvy to get it to work, and the GitHub repo comes with the disclaimer that “this still isn’t really in a state … for outside users yet”.
An open-source markerless mocap system that runs on consumer webcams
But with that out of the way, let’s take a look at what the FreeMoCap Project aims to do.
The system processes video footage of an actor to estimate a pose for each frame, then translates that to skeletal motion data that could be retargeted to a 3D character.
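To make that idea concrete, here is a rough Python sketch of the second step – recovering 3D joint positions from matched 2D detections in two calibrated views – using OpenCV’s triangulation routine. It is illustrative only: the function name and inputs are our own, and FreeMoCap’s actual pipeline wires OpenPose, DeepLabCut and Anipose together rather than calling OpenCV directly.

```python
import cv2
import numpy as np

def triangulate_skeleton(points_cam1, points_cam2, proj1, proj2):
    """Recover 3D joint positions from matched 2D keypoints in two views.

    points_cam1/points_cam2: (n_joints, 2) pixel coordinates of the same
    joints detected in each camera's frame (assumed to come from any 2D
    pose estimator). proj1/proj2: 3x4 camera projection matrices produced
    by a prior calibration step.
    """
    pts1 = points_cam1.T.astype(np.float64)  # shape (2, n_joints)
    pts2 = points_cam2.T.astype(np.float64)
    points_4d = cv2.triangulatePoints(proj1, proj2, pts1, pts2)
    points_3d = (points_4d[:3] / points_4d[3]).T  # homogeneous -> (n, 3)
    return points_3d
```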
It’s intended to be hardware-agnostic: the minimum is two consumer-grade USB webcams, and the video embedded in the tweet above used “approx. 100 USD worth of equipment”.
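For readers wondering what the hardware side of “two consumer-grade USB webcams” amounts to in code, this is roughly it – a minimal sketch, assuming device indices 0 and 1, and glossing over the frame synchronisation a real capture session needs:

```python
import cv2

# Open two consumer USB webcams. Device indices 0 and 1 are an
# assumption; real multi-camera capture also needs (approximate)
# synchronisation between the streams.
cams = [cv2.VideoCapture(i) for i in (0, 1)]

def grab_frames(cams):
    """Grab one frame from each camera."""
    frames = []
    for cam in cams:
        ok, frame = cam.read()
        if not ok:
            raise RuntimeError("Camera read failed")
        frames.append(frame)
    return frames

frames = grab_frames(cams)
for cam in cams:
    cam.release()
```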
A printed ChArUco board – the image is included with the source code of the project – is used as a spatial reference for camera calibration, but no markers are required on the actor’s body.
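For a flavour of what board detection involves, here is a sketch using the aruco module in opencv-contrib-python. The API shown is the pre-4.7 style and differs slightly in newer OpenCV releases, and the board dimensions are placeholders, not the ones shipped with FreeMoCap.

```python
import cv2
import cv2.aruco as aruco

# Placeholder board: 7x5 squares, 4 cm squares, 3 cm markers.
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
board = aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)

frame = cv2.imread("calibration_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the individual aruco markers, then refine them into
# chessboard-corner detections on the ChArUco board.
corners, ids, _ = aruco.detectMarkers(gray, dictionary)
if ids is not None:
    n_found, ch_corners, ch_ids = aruco.interpolateCornersCharuco(
        corners, ids, gray, board)
    # Collected across many frames, ch_corners/ch_ids feed into
    # aruco.calibrateCameraCharuco to recover camera parameters.
```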
And the results look pretty good: there is a degree of noise and jank, particularly when one limb obscures another, but foot placement – often a problem with markerless systems – looks fairly solid.
The system even captures the positions of fingers and facial features, and the developers plan to add support for pupil tracking in future.
(Also, mad props to Matthis for his ability to juggle while skating around on a yoga balance board: something we think that more mocap researchers should be able to do.)
Based on life sciences research tools, but includes a Blender plugin
The FreeMoCap Project is based on open-source tools from the field of life sciences – Matthis himself is an assistant professor in the biology department at Northeastern University in the US.
The initial release uses OpenPose and DeepLabCut for markerless tracking with an Anipose-based backend for camera calibration.
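We haven’t dug into FreeMoCap’s calibration code, but for a sense of what an Anipose-based calibration looks like, here is a sketch built on aniposelib’s documented API; the camera names, board geometry and video filenames are placeholders:

```python
from aniposelib.boards import CharucoBoard
from aniposelib.cameras import CameraGroup

# Placeholder ChArUco board geometry (units just need to be consistent).
board = CharucoBoard(7, 10,
                     square_length=25,      # mm
                     marker_length=18.75,   # mm
                     marker_bits=4, dict_size=50)

# One CameraGroup spanning all views; one list of calibration
# videos per camera, in the same order as the names.
cgroup = CameraGroup.from_names(["camA", "camB"], fisheye=False)
cgroup.calibrate_videos([["calib-camA.mp4"], ["calib-camB.mp4"]], board)
cgroup.dump("calibration.toml")  # reusable camera parameters
```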
To output animation data, the project comes with an experimental Blender add-on, and the project will “eventually be packaged into a custom build of Blender”.
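As a hint of what such an integration involves, here is a hedged sketch – not FreeMoCap’s actual add-on – that animates one empty per joint from a generic (n_frames, n_joints, 3) array using Blender’s bpy API:

```python
import bpy

def import_joint_animation(skeleton_frames, joint_names, fps_scale=1):
    """Create one empty per joint and keyframe its location per frame.

    skeleton_frames stands in for the capture output; this sketch makes
    no assumption about FreeMoCap's actual file format.
    """
    empties = []
    for name in joint_names:
        empty = bpy.data.objects.new(name, None)  # None = empty object
        bpy.context.collection.objects.link(empty)
        empties.append(empty)
    for frame_idx, joints in enumerate(skeleton_frames):
        for empty, position in zip(empties, joints):
            empty.location = position
            empty.keyframe_insert(data_path="location",
                                  frame=frame_idx * fps_scale)
```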
There doesn’t currently seem to be a way to export data in a format that can be used in other DCC applications, but Matthis has posted on the project’s Discord server:
“It sounds like a lot of animator folk would benefit from a way to output to .fbx, a lot [of] biomechanists would probably like .c3d, and we’ll undoubtedly pop out .csv’s and .json’s too.”
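The .csv and .json options are the simple end of that list. A sketch of what such exporters might look like, assuming per-frame joint positions as nested lists (.fbx and .c3d need dedicated libraries and are not shown):

```python
import csv
import json

def export_csv(path, frames, joint_names):
    """Write one row per frame: frame index, then x/y/z per joint."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        header = ["frame"] + [f"{j}_{axis}" for j in joint_names
                              for axis in ("x", "y", "z")]
        writer.writerow(header)
        for i, joints in enumerate(frames):
            writer.writerow([i] + [c for joint in joints for c in joint])

def export_json(path, frames, joint_names):
    """Dump joint names and per-frame positions as a single JSON object."""
    with open(path, "w") as f:
        json.dump({"joints": joint_names, "frames": frames}, f)
```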
If you want to try it for yourself, Discord is also currently the best place to look for technical support on the project, although Matthis aims to release more documentation and tutorials shortly.
System requirements and availability
The source code for The FreeMoCap Project is available under a GPL licence. The Blender plugin is available under an AGPL licence.
The system requires Anaconda, OpenPose and CUDA – so you will also need an Nvidia GPU to use it – and installs from the command line.
Read more about The FreeMoCap Project on Jonathan Matthis’s website