Wednesday, April 22nd, 2020 Posted by Jim Thacker

Andersson Technologies releases SynthEyes 2004

An image from SynthEyes Tracking for Production, The Gnomon Workshop’s guide to using SynthEyes in production. The new 2004 update adds neural network capabilities to the camera tracking software.


Andersson Technologies has released SynthEyes 2004, the latest version of its 3D tracking software, adding an interesting new machine-learning-based system for identifying tracking features in source footage.

The implementation is described as a “starting point” for future development, and Andersson Technologies currently only recommends it for specific tasks, including digital set reconstruction.

At the time of writing, there don’t appear to be any images of the new features on the firm’s website or its YouTube channel, but we’ll update the story if we find any.

New neural networks help identify tracking features in source footage
New machine-learning-trained tools are now starting to appear in DCC applications, for use cases ranging from automatic identification of facial features in compositing to image processing for material authoring.

In the case of SynthEyes, the new neural networks distributed with the software are intended to help identify tracking features in source footage, and work with both automatic and manually supervised tracks.

The primary network is designed to detect “not only the spot features of the normal auto-tracker, but a number of corner, junction, and pipe features suggested by user data”.
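Andersson Technologies hasn’t published how its networks score candidate features, but the “corner” features mentioned above are a long-standing idea in tracking. As a purely illustrative sketch of what makes a pixel a good trackable corner, here is a classical Harris corner detector in plain NumPy (not SynthEyes code; the function names and parameters are our own):

```python
import numpy as np

def window_sum(a, r):
    """Sum of each (2r+1) x (2r+1) neighbourhood, with edge padding."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + a.shape[0],
                     r + dx : r + dx + a.shape[1]]
    return out

def harris_response(img, k=0.05, r=2):
    """Harris corner score per pixel: positive at corners (gradients in
    two directions), negative on straight edges, ~zero in flat areas."""
    Iy, Ix = np.gradient(img.astype(float))  # gradients along rows, cols
    Sxx = window_sum(Ix * Ix, r)             # local structure tensor terms
    Syy = window_sum(Iy * Iy, r)
    Sxy = window_sum(Ix * Iy, r)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# A bright square on a dark background: its corners score highest.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

A trained network can go further than this hand-designed score, detecting junctions and the other feature types listed above, but the underlying goal is the same: find points the tracker can lock onto reliably from frame to frame.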

Separate networks detect X tracking marks in greenscreen footage and crossing points in lens grids.

According to Andersson Technologies, its implementation enables neural networks to be “updated like scripts”, making it easy to improve the existing networks or add new ones in future releases.

Not a magic solution for tracking yet
The online documentation describes SynthEyes’ neural network capabilities as a “starting point” for future development, so at present, their use in production is limited.

According to the usage guidelines, “In most cases, it will be … best to use SynthEyes’s traditional auto-tracker, which is far faster than the neural nets and produces longer and more accurate trackers.”

Suggested current use cases where the networks may help include tracking buildings for digital set reconstructions, and the specific tasks to which the individual networks are tailored.

Andersson Technologies also has an interesting summary of the technical challenges of implementing machine-learning-based approaches for camera tracking, which is well worth a read.

Other new features: new lighting-invariant mode for supervised tracking
Other new features in SynthEyes 2004 include a new lighting-invariant tracking mode, which should make supervised tracks more robust on shots with significant changes in overall lighting levels.
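Andersson Technologies hasn’t detailed how the new mode works, but a common way to make patch matching robust to overall brightness and contrast changes is zero-normalized cross-correlation (ZNCC). The following minimal NumPy sketch is our own illustration of the general technique, not SynthEyes code:

```python
import numpy as np

def zncc(patch, template):
    """Zero-normalized cross-correlation between two equal-size patches.
    Subtracting each patch's mean and normalizing by its energy makes the
    score invariant to uniform brightness (offset) and contrast (gain)
    changes, so the match survives shifts in overall lighting level."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

# A patch that has been brightened and contrast-boosted still matches
# its original template perfectly (score of 1.0).
rng = np.random.default_rng(0)
template = rng.random((8, 8))
relit = 2.0 * template + 50.0   # simulated lighting change
score = zncc(relit, template)
```

Plain sum-of-squared-differences matching, by contrast, would treat `relit` as a very poor match, which is why lighting-invariant scoring matters for supervised tracks on shots where exposure or lighting drifts.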

There are also a number of workflow improvements and bugfixes: you can find a full list via the link below.

Pricing and availability
SynthEyes 2004 is available for Windows 7+, RHEL/CentOS 7 Linux, and Mac OS X 10.9+.

The new machine learning capabilities are based on the open-source TensorFlow library, so on macOS, and on systems with AMD GPUs, neural processing will run only on the CPU.

New licences cost from $299 to $699, depending on whether you buy the Intro or Pro edition of the software, and for which platforms.

Read a full list of new features in SynthEyes 2004 on Andersson Technologies’ website