Tuesday, July 26th, 2022 Posted by Jim Thacker

Check out cutting-edge renderer Mitsuba 3


Research-oriented rendering system Mitsuba 3 can be compiled for either conventional forward rendering or inverse rendering – taking a 2D image and reconstructing a 3D scene that matches it.


Graphics researcher Wenzel Jakob and his colleagues at EPFL’s Realistic Graphics Lab have released Mitsuba 3, the latest version of the free research-oriented rendering system.

Its source code can be compiled into variants showcasing various cutting-edge rendering technologies, including two focused on inverse rendering.

Rather than taking a scene and rendering a 2D image from it, they take a 2D image and generate a 3D scene matching it.

A chance to try new rendering technologies before they hit the commercial tools
We first covered Mitsuba in 2014, tipping the original version, Mitsuba 0.6, as a technology to watch.

Back then, it was a highly modular rendering framework – its core could be extended with over 100 different plugins – showcasing experimental techniques not then available in commercial tools.

Its next incarnation, Mitsuba 2, narrowed its focus to specific areas, including spectral rendering, polarised rendering and differentiable rendering.



Take a 2D image and generate a 3D scene matching it
In Mitsuba 3, there are four default variants: two for conventional forward rendering, and two for inverse rendering, which takes a 2D image and reconstructs the properties of a 3D scene matching it.

You can see a nice summary in the video above: as well as conventional geometry, Mitsuba can reconstruct volumetrics, and even surfaces that would generate particular caustic lighting patterns.

Unlike differentiable rendering libraries like PyTorch3D and TensorFlow Graphics, Mitsuba uses ray tracing rather than rasterisation, and unlike neural networks, the reconstruction is physically based.

As a consequence, the results “aren’t tied to Mitsuba, and can be processed by many other tools”, including DCC applications like Blender and 3ds Max.

Needs some tech savvy to get the most out of it
Like its predecessors, Mitsuba 3 is a way to try cutting-edge rendering techniques before they make their way into production tools.

If you want to try it for yourself, be aware that it really is a research tool: as well as computer graphics, it’s intended for image analysis in fields like astronomy, microscopy and medical imaging.

Even installing it requires some familiarity with Python, but the online documentation includes a range of how-to guides, and video tutorials are being steadily added to the RGL’s YouTube channel.

Rendering can be done either on the GPU – Mitsuba uses the CUDA API, so you will need a compatible Nvidia GPU – or on the CPU, via LLVM.

Licensing and system requirements
Mitsuba 3 is compatible with Windows, Linux and macOS and requires Python 3.8+. For GPU rendering, you will need an Nvidia RTX GPU. It installs from the command line: you can find installation instructions here.

The source code is freely available: the licence is a custom copyright notice.


Visit the Mitsuba website

Read more about Mitsuba 3 in the online documentation