Try out GPU-accelerated GI in mental ray
A test scene for mental ray’s new prototype GPU-accelerated GI engine. On a machine with two 2.4GHz quad-core Xeon E5620 CPUs, 8GB RAM and a Quadro K5000, enabling the GPU roughly halves render time.
Nvidia has called for users to try the new GPU-accelerated GI engine in mental ray 3.12, included in both 3ds Max and Maya 2015, but not enabled as standard. Feedback will be used to shape development of the engine.
Works without the need to tailor your scene set-up
The engine uses brute-force raytracing, with the GI results computed on the GPU being combined with the primary rendering, which is still done on the CPU, meaning that existing custom shaders should work without modification.
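To make ‘brute-force raytracing’ concrete: the term usually means estimating indirect light by firing many sample rays from each shading point, rather than interpolating from a cache such as final gather. The sketch below is purely illustrative, written in plain Python with invented names, and is not mental ray’s implementation; it simply shows the kind of per-point hemisphere sampling that this sort of engine offloads to the GPU.

```python
# Illustrative sketch of brute-force (Monte Carlo) indirect illumination.
# All names and details are invented for demonstration purposes.
import math
import random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sample_hemisphere(normal):
    """Return a cosine-weighted direction about the surface normal."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    # Local tangent-space sample (z aligned with the normal).
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    # Build an orthonormal basis around the normal.
    w = normal
    a = (0.0, 1.0, 0.0) if abs(w[0]) > 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(a, w))
    v = cross(w, u)
    return tuple(x * u[i] + y * v[i] + z * w[i] for i in range(3))

def indirect_illumination(point, normal, trace_radiance, samples=256):
    """Average incoming radiance over many hemisphere rays (one bounce)."""
    total = 0.0
    for _ in range(samples):
        direction = sample_hemisphere(normal)
        total += trace_radiance(point, direction)  # recursive ray cast in a real renderer
    return total / samples

if __name__ == "__main__":
    # With a constant 'sky' of radiance 1.0, the estimate should approach 1.0.
    constant_sky = lambda point, direction: 1.0
    print(indirect_illumination((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), constant_sky))
```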
In Nvidia’s test scene, shown above, enabling GPU calculation roughly halves render time, from 20 minutes 52 seconds to 11 minutes 34 seconds.
Feature limitations
A number of key features aren’t supported yet, including motion blur and particles, and there is only partial support for scattering shaders, emissive materials and hair.
Your scene geometry and shader data will also need to fit in your graphics card’s on-board memory, and there is currently an absolute limit of 25 million triangles. And obviously, this being Nvidia, you’ll need a CUDA-capable GPU.
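As a rough pre-flight check against the triangle limit, a few lines of Python in Maya’s script editor can total the scene’s triangle count. This is a sketch only, assuming you are working in Maya with the scene loaded; the function name and warning text are ours, and it makes no attempt to estimate how much GPU memory the geometry and shader data will actually need.

```python
# Illustrative only: sum the triangle count of all meshes in the current Maya
# scene and compare it against the prototype engine's stated 25M-triangle limit.
import maya.cmds as cmds

TRIANGLE_LIMIT = 25000000  # absolute limit quoted for the prototype engine

def scene_under_triangle_limit():
    """Return True if the scene's total triangle count is within the limit."""
    meshes = cmds.ls(type="mesh", long=True, noIntermediate=True) or []
    triangles = sum(cmds.polyEvaluate(m, triangle=True) for m in meshes)
    print("Scene triangle count: {0:,}".format(triangles))
    if triangles > TRIANGLE_LIMIT:
        cmds.warning("Scene exceeds the 25 million triangle limit of the GPU GI prototype.")
        return False
    return True

scene_under_triangle_limit()
```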
How to access the engine
The prototype engine can be enabled via the scene options or the command line of the standalone version of mental ray, but Nvidia has posted scripts enabling Max and Maya users to do the same thing via the GUI.
You can download them via the link below.
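For Maya users curious what the GUI route boils down to, mental ray for Maya exposes string options on the miDefaultOptions node, and it is a reasonable assumption that Nvidia’s script enables the prototype by setting one of these. The sketch below shows that mechanism only; the helper function is ours, and the option name and value are placeholders, so take the real names from Nvidia’s downloadable scripts.

```python
# Sketch of setting a mental ray string option from Maya's script editor.
# The stringOptions mechanism on miDefaultOptions is standard mental ray for
# Maya; the option name and value used at the bottom ("gi gpu" / "on") are
# placeholders -- use the names defined in Nvidia's posted scripts.
import maya.cmds as cmds

def set_mr_string_option(name, value, value_type="boolean"):
    """Add or update a string option on the miDefaultOptions node."""
    if not cmds.objExists("miDefaultOptions"):
        cmds.error("mental ray is not loaded or miDefaultOptions does not exist.")
    indices = cmds.getAttr("miDefaultOptions.stringOptions", multiIndices=True) or []
    # Reuse an existing slot with the same name if there is one.
    for i in indices:
        if cmds.getAttr("miDefaultOptions.stringOptions[%d].name" % i) == name:
            slot = i
            break
    else:
        slot = (indices[-1] + 1) if indices else 0
    base = "miDefaultOptions.stringOptions[%d]" % slot
    cmds.setAttr(base + ".name", name, type="string")
    cmds.setAttr(base + ".value", value, type="string")
    cmds.setAttr(base + ".type", value_type, type="string")

# Placeholder option name -- check Nvidia's scripts for the actual one.
set_mr_string_option("gi gpu", "on")
```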
Read more about the prototype GPU-accelerated GI engine in mental ray on Nvidia’s blog
(Includes download links for the 3ds Max and Maya scripts)