Wednesday, March 1st, 2023. Posted by Jim Thacker.

Developer releases its AI-based mocap app for iOS

Originally posted on 26 January 2023 for the beta and updated with details of the final release.

The AI-based mocap developer has released its new iOS app.

The app, promoted as an alternative to conventional optical and inertial motion-capture systems, promises to enable users to extract production-quality motion data from footage captured on two to six iPhones.

AI-assisted markerless motion capture, now available via an iOS app
The underlying technology itself has been around for a few years: the company's AI-trained markerless motion capture was initially available as a client service, and later as a web app, before being refocused as an iPhone app.

Although there are already a number of iOS motion-capture apps, several of them free, they're mainly single-camera systems aimed at quickly generating previs-quality data. The new app is rather different: instead of being used on a single handheld phone, it is designed for use on between two and six tripod-mounted iPhones, arranged to create a capture volume.

That puts it in a similar part of the market to camera-based markerless motion-capture systems like iPi Soft's iPi Motion Capture, although unlike with iPi, processing is done in the cloud rather than on the user's local machine.

Generate production-quality motion-capture data for games and animation with standard iPhones
The developer compares the quality of the data generated to that of commercial inertial capture suits – or, if using the full six iPhones, to that of commercial marker-based optical capture systems.

In support of that claim, the firm has posted comparison videos between its own technology and existing systems from Rokoko, Xsens, OptiTrack and Vicon on its website.

Although they aren’t independent tests – and some actually show the underlying technology in use with GoPro cameras, not the new iOS app – the results look very promising.

You can see demos created using the technology, including animated shorts, in-game animation, and mixed reality and virtual reality projects, on the developer's YouTube channel.

The app even works on five-year-old iPhones
The iOS app can be installed on any iPhone running iOS 15.0 or later, and unlike most single-camera apps, it doesn’t require a TrueDepth sensor, so it works with any model from 2017’s iPhone 8 onwards.

The phones must then be fixed in place on tripods or suction mounts to create a capture volume. As well as the iPhones used to capture the footage, you can use an iPad to host the capture session.

The app is designed for rather smaller-scale shoots than marker-based optical capture systems: the developer recommends a maximum capture volume size of 8m x 8m.

However, that gives it some of the portability of inertial capture suits, with the advantage that multiple actors – the developer recommends a maximum of three – can be captured without increasing hardware costs.

Generates animation in FBX format, plus native Blender and Maya files
The footage from the capture sessions is then processed online on the developer's servers: it can be uploaded directly from the iPhones, or sent to the host device to upload later.

During processing, users can choose to retarget the raw data to a custom character rig, providing it conforms to the developer's standard 49-bone structure.

Custom rigs can be uploaded in FBX format, with the option to upload a separate Maya file with rig controls.

The resulting animation can be downloaded in FBX, BVH or USD format, as a Maya HumanIK file, or as a Blender scene containing the raw and retargeted motion, the character mesh, and the camera positions.

That makes it possible to use the data in most DCC applications and game engines: the documentation has walkthroughs for Reallusion’s Character Creator, MotionBuilder, Omniverse, Unity and Unreal Engine.

Price, system requirements and availability
The iOS app is compatible with iOS 15.0+ running on an iPhone 8 or later. The app itself is free, and you can trial it on two minutes of footage; after that, processing video requires a subscription.

Creator subscriptions cost $365/year, with a limit on the amount of animation data that can be processed each month equivalent to 30 minutes of footage of one person or 15 minutes of footage of two people.

Processing additional animation consumes credits, which cost $0.04 per second of footage per person processed. You can find more details in the developer's Fair Use Policy.
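To make the pricing concrete, here is a minimal sketch of the usage arithmetic described above. It assumes the monthly allowance is pooled as "person-minutes" (30 minutes of one person and 15 minutes of two people both equal 30 person-minutes, consistent with the figures quoted); the function name and that pooling model are illustrative assumptions, not documented behaviour.

```python
OVERAGE_RATE = 0.04  # USD per second of footage, per person, per the pricing above
MONTHLY_ALLOWANCE_PERSON_MINUTES = 30  # assumption: 30 min x 1 person = 15 min x 2 people

def overage_cost(footage_minutes: float, people: int) -> float:
    """Estimated USD cost of footage processed beyond the monthly allowance."""
    used = footage_minutes * people  # person-minutes consumed
    extra = max(0.0, used - MONTHLY_ALLOWANCE_PERSON_MINUTES)
    # The rate is charged per second per person; 'extra' is already in
    # person-minutes, so convert to person-seconds before multiplying.
    return extra * 60 * OVERAGE_RATE

# e.g. 40 minutes of a single actor is 10 person-minutes over the allowance:
# 10 x 60 s x $0.04 = $24.00
print(overage_cost(40, 1))
```

On this model, 15 minutes of two actors exactly exhausts the allowance and costs nothing extra, matching the subscription figures quoted above.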

Download the new iOS app from the App Store

Find details of how to record mocap data with the iOS app in the online documentation