Thursday, May 31st, 2018 Posted by Jim Thacker

Chaos Group on V-Ray Next and the future of rendering


With its Scene Intelligence tools, new GPU rendering architecture and AI-driven denoising, V-Ray Next is a huge leap forward for Chaos Group’s industry-shaping production renderer. We spoke to CCO Lon Grohs about the design decisions that guided the release, the challenges facing V-Ray today, and the forces that could drive the next generation of rendering software.


A lot has happened in the industry since the last major update to V-Ray, back in early 2014. Ray tracing finally replaced rasterisation as the de facto rendering technique for visual effects: a change driven largely by V-Ray itself, along with the growing popularity of Arnold – which, not entirely coincidentally, replaced Mental Ray as Autodesk’s default renderer – and RenderMan’s new RIS architecture.

GPU rendering also came to the fore, with the emergence of a new generation of GPU-based tools like Redshift and OctaneRender, while game engines like UE4 began to penetrate other market sectors, particularly in V-Ray’s original core market of architectural visualisation.

And V-Ray itself expanded. When V-Ray 3.0 was released, there were five native editions of the renderer. Today, there are 11, ranging from DCC and CAD applications to compositors, and even Unreal Engine itself.

V-Ray Next: a Swiss Army knife for 3D professionals
All of which means that V-Ray Next, the 3ds Max edition of which shipped last week, has a lot of bases to cover. Visual effects professionals get a new physically based hair material and support for layered Alembic files. Visualisation artists get new lighting analysis tools and the V-Ray Switch material for generating client variations. And everyone gets the headline features: new Scene Intelligence tools, designed to simplify the process of setting up complex scenes; the V-Ray GPU rendering architecture – a new name for V-Ray RT, now officially focused on production rendering rather than interactive previews; a new AI-driven denoiser based on Nvidia’s OptiX technology; and speed boosts – up to 2x in the case of GPU rendering.

To explore the thinking behind this feature list, along with the challenges facing V-Ray today, we spoke to Lon Grohs, Chaos Group’s chief commercial officer. Below, you can hear his thoughts on how machine learning could help shape the development of rendering tools, on CUDA versus OpenCL in GPU computing, on when we might see DirectX Raytracing become available in desktop software, and on how Chaos Group meets the challenge of developing two ostensibly competing renderers, following its acquisition of Corona.

But first – just why is the new release called V-Ray Next?



CG Channel: Let’s start with the name. Why V-Ray Next rather than V-Ray 4.0?

Lon Grohs: That’s actually kind of easy. It was simply because ‘V-Ray Four For 3ds Max’ didn’t sound great, especially if we got to a 4.4 service pack, and it became ‘V-Ray Four Four For 3ds Max’.

CGC: But it’s more than that, surely. Isn’t it a way of signalling the scale of the changes?

LG: Yes, absolutely. We wanted to put a stake in the ground to let people know that this is not just a normal release. It’s the continuation of a process, but also the groundwork for our next generation of rendering.

As a baseline, Next is 25% faster pretty much for all things … because of what we’ve been able to accomplish under the hood. Just from hitting render, you’re going to get a pretty massive speed boost.

Scene Intelligence made its debut in the 3.x series with Adaptive Lights and automatic sampling, but we’ve made V-Ray Next capable of doing a quick learn from the scene. When it does a pre-pass like the light cache, it can figure out all sorts of information from the scene to optimise rendering.

Our GPU rendering has also made some significant improvements, the biggest being that the code base is now running on a new architecture … a multi-kernel system rather than a megakernel. The reason that’s important is that the multi-kernel can take better advantage of the parallel processing [offered by the GPU] and we can add production-level features on top of the GPU rendering code without slowing it down.
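To make the megakernel/multi-kernel distinction concrete, here is a minimal sketch in plain Python – it is not V-Ray’s code, and the ray generation, intersection and shading stages are stand-in functions – showing why splitting the tracer into per-stage passes over a whole batch of rays keeps each ‘kernel’ small, and lets new shading features be added as extra stages rather than growing one monolithic per-ray loop.

```python
import random

# Stand-in stages: in a real wavefront renderer each would be a separate GPU kernel.
def generate_camera_rays(n):
    return [{"id": i, "depth": 0, "alive": True} for i in range(n)]

def intersect(rays):
    # Pretend roughly 70% of rays hit something.
    for r in rays:
        r["hit"] = random.random() < 0.7
    return rays

def shade(rays):
    # Shading decides whether a ray continues (bounces) or terminates.
    for r in rays:
        if r["hit"] and r["depth"] < 4:
            r["depth"] += 1
        else:
            r["alive"] = False
    return rays

def megakernel_trace(n):
    """One big loop per ray: every ray drags the whole state machine with it."""
    for ray in generate_camera_rays(n):
        while ray["alive"]:
            intersect([ray])
            shade([ray])

def wavefront_trace(n):
    """Multi-kernel style: each stage runs over the whole batch, so the
    'kernels' stay small and the batch stays coherent between stages."""
    rays = generate_camera_rays(n)
    while rays:
        rays = shade(intersect(rays))
        rays = [r for r in rays if r["alive"]]  # compact the queue for the next pass

if __name__ == "__main__":
    megakernel_trace(1000)
    wavefront_trace(1000)
```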

CGC: How big an update is it in comparison to V-Ray 3.0?

LG: During the 3.x series, every service pack was jam-packed with new features. Even when we got to 3.5 and 3.6, we debated whether they could be a 4.0 release. But while that was going on, [we had developers], for lack of a better description, tearing apart the code: what they called ‘breaking the SDK’.

Part of that was getting rid of [legacy code], like the old adaptive sampler. It was left in 3.x, in case someone needed it to be compatible with an old scene, but there’s only the newer variance-based sampler in Next.

We’ve also added support for the SSE 4.2 instruction set found on more modern processors, and we’ve got our own customised build of the Intel Embree ray tracing library, which speeds up anything from proxies to motion blur.

CGC: How long has that process of ‘breaking the SDK’ been going on?

LG: It started 12 to 18 months ago. The research was running in parallel with the development of the service packs. For the first time in the history of the company, we even stopped nightly builds for a while, because the team were so deep into [the work].


Dabarti’s showreel for V-Ray GPU. The move from supporting the GPU for interactive previews to its use for full-blown production rendering has been a key focus for Chaos Group’s recent development work.


CGC: What do you see as the main changes currently facing V-Ray users?

LG: Rendering is in an interesting [place right now]. Overall, the biggest demand we’ve seen from customers is to make it easier. In the old days, you had to tweak a lot of settings and fine-tune everything to get the performance. Now you can make the renderer smart enough and fast enough that you can simplify that.

At the same time, we have a loyal group of power users and when they see something that they used to change disappear, they get nervous.

CGC: Because if you haven’t changed the settings yourself, you somehow haven’t done your job?

LG: Right. People assume that Vlado [Chaos Group CTO Vlado Koylazov] must have some kind of secret sauce. But even he tells people that whenever he has to troubleshoot a scene, he will literally set it to the default settings and it almost always renders faster.

It’s also just muscle memory at some point. ‘I did this, and I had success with it.’ But when a lot of the code changes under the hood, you have to hide stuff so users can’t go back; they can’t use that muscle memory.

CGC: You also have the new Scene Intelligence features. Are they all based on light cache analysis?

LG: It’s crucial to several of them [like the new Adaptive Dome Light and automatic exposure system] but there are some other things that don’t require the light cache.

CGC: Are they all analytic, or are any of them based on machine learning?

LG: Scene Intelligence works a bit like machine learning, but it happens locally on the user’s workstation. Right now, the only thing that’s based on AI is the AI denoiser, and that’s a data set Nvidia had precalculated for OptiX. We do have some ideas about how we can learn from more scenes, but right now Scene Intelligence is a finite thing that’s happening on the user’s workstation.

CGC: So there’s no aggregation of data between users?

LG: No, not yet.

CGC: Is that a direction you’re interested in?

LG: Oh yes. Adobe is really doing some amazing stuff with machine learning in Sensei and so on. One of the things that really excited me when I saw Dimension [Adobe’s new 3D software for graphic designers, into which V-Ray is integrated as a rendering engine – Ed.] was the option to take a photo, bring it into a 3D scene and use it as a background, which Dimension, through machine learning, converts into a 360-degree dome and turns into an HDR environment. And I assume that one of the ways that they’ve been able to do that is through Adobe Stock. They literally have hundreds of thousands of images at their disposal that they can learn from.

CGC: But presumably you could do something similar with your users?

LG: Exactly. If we could allow people to opt into it … cloud rendering could be a nice melting pot for that. [Chaos Group is launching its own V-Ray Cloud service – Ed.] Imagine if anything ever cloud rendered was something that you could analyse. You’d have so much data about how to render images faster or better.


The most recent V-Ray showreel indicates how widely the renderer is now used in visual effects, with clients including Industrial Light & Magic, MPC, Pixomondo, Blur Studio, Zoic Studios and Scanline VFX.


CGC: Who do you see as a ‘typical’ V-Ray user now? Is there such a thing?

LG: It has a very wide audience: from people in visual effects like ILM and Scanline VFX, to arch viz, to product designers, to automotive, to artists who are just doing great characters. It’s a lot of stuff to keep straight. It’s like trying to make a Swiss Army knife that isn’t too big to carry in your pocket.

CGC: How does the user base break down between visual effects and visualisation?

LG: It varies for each version of V-Ray. Maya trends a lot heavier for VFX. In 3ds Max, the primary audience historically has been architectural visualisation [but] there’s still quite a bit of visual effects, automotive is still a big segment, and you’ve got a lot of schools teaching product design with it. If you look at a typical Max user, they’re a generalist. They’re going to do a bit of modelling, a bit of lighting, a bit of rendering.

CGC: What difference have renderers like Redshift made to the use of V-Ray in VFX?

LG: Redshift had GPU rendering as a production renderer a little sooner than we did. If you look at that idea of the generalist, it’s someone who’s … trying to get through a lot of work fast. GPU rendering has really come into the mix because if I invest a couple of thousand bucks in some powerful graphics cards, I still have the [same] footprint of my workstation, but now I can shoot through look development and iterate a lot faster.

At the same time, I know that if I’m going to use Redshift, it’s GPU-only, so I’m going to set up my scene to start from the GPU. What was happening with V-Ray was that because we have two render engines, and everybody was used to the CPU version, they were starting their scenes on the CPU and trying to switch over to the GPU … and it wouldn’t support some particular shader, and they would give up.

So we’d tell them, ‘Work just like you would with a regular GPU renderer. Author your scene for the GPU from the beginning.’ And there was this ‘aha’ moment. So [in V-Ray Next], we’ve made the UI modal. When you’re in V-Ray GPU, only those features capable of rendering on the GPU are shown.

CGC: And the list of supported features has grown in V-Ray Next. But another issue users cited with V-Ray RT GPU in production was stability when rendering animation. Is that something that V-Ray GPU addresses?

LG: You can definitely use it for animation work. It’s totally ready to go. And actually, it’s very well suited to animation because you can render on the GPU or the CPU, so you get the frames ready to go [on your workstation], then kick them out to the farm. A lot of people don’t have GPU farms, so for those folks who have a CPU farm already, it fits beautifully. It’s still V-Ray GPU, so the frames match one to one.


V-Ray Next extends the list of features supported when rendering on the GPU, including glossy Fresnel effects, the new physically based hair shader and volumetrics, as shown in the video above.


CGC: What’s the status of OpenCL in V-Ray GPU? Your blog posts only mention CUDA.

LG: We keep some support for it, but it’s not really advised. We’ve had a good relationship with AMD, but there have been driver problems and all kinds of things. And then of course they’ve released their own renderer, which complicates things.

The reality is that [something] like 99% of our customers are on Nvidia hardware. And Nvidia has really helped us push our development forward.

CGC: Would it be fair to say that V-Ray GPU is an Nvidia-only renderer, in the sense that you won’t get the same performance on other manufacturers’ hardware?

LG: Yes, I would say effectively so. We’ve tried to keep the door open, because at the core of what we want to do is provide [for] any customer rendering on any hardware they choose. That’s why we’ve kept the OpenCL implementation. But the Nvidia one is in a much better place.

CGC: In your blog posts, you also mention that you’re looking at Microsoft’s new DirectX Raytracing.

LG: [When DXR was announced at GDC this year] a lot of people [thought] we must have been caught off-guard, that we weren’t expecting it. No, no, no. We’ve known it was coming. We’re the largest company in the world solely dedicated to ray tracing technology. We’ve been dedicated to optimising ray tracing for 20 years. So we’re pretty excited about it.

We used to have to make the [case for] why ray tracing is superior to rasterised graphics. But now we’re in a new conversation: ray tracing in real time. And yes, you [have it] as a concept, but you don’t have the ray tracing that you’re used to … all of the complexity that you see when you’re watching Game of Thrones or The Avengers. That’s not what real-time ray tracing is doing yet. It’s very limited. It’s one bounce of shadows, one of reflections. Which is definitely the way to go; it’s just not as ‘real’ as full ray tracing.

CGC: How long will it be before we see any new features based on DXR in products like V-Ray?

LG: I think we’re at least 18 months off from that. There’s a lot of research that needs to go on on both sides – software and hardware. And let’s keep in mind that that Star Wars demo was rendered on insane GPUs. [Around $60,000 worth of hardware, according to this article by Ars Technica – Ed.] I think you’re going to see a lot of rapid experimentation, a lot of demos like that, [before it gets] into the users’ hands, [in a way that] they can leverage it on their own desktop.


A scene from Blur Studio’s Destiny 2 ‘Last Call’ game trailer rendered on two Nvidia GV100 GPUs with AI denoising enabled. Chaos Group says that the system will be a ‘game changer’ for preview work.


CGC: We touched on the new AI denoiser earlier. How will that affect artists’ workflows?

LG: For look development, we think it’s going to be a game-changer. It’s extremely impressive when working in IPR on previews. When you want to do final rendering, you kick back over to V-Ray’s regular denoiser.

CGC: Which is the same policy that Isotropix has just adopted in Clarisse.

LG: Exactly. It’s pretty interesting that you can integrate it [OptiX-based denoising] right as it is, but it doesn’t work great for, say, animation. Our [native] denoiser also supports animation because it looks a frame ahead, a frame behind. We’ve also now added in the ability to take all of your denoised render layers and recomp them back to the beauty pass. So you have a denoised beauty pass. I think that’s going to work really well for people who are trying to push through a lot of frames, like in television.
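That recomposition works because V-Ray’s light-contribution render elements sum, to a close approximation, back to the beauty pass. A minimal NumPy sketch of the idea – with made-up element names standing in for whatever AOVs a pipeline actually writes, and a no-op in place of a real denoiser – might look like this:

```python
import numpy as np

def denoise(aov):
    # Placeholder for a real denoiser (OptiX, V-Ray's own, etc.).
    return aov

def recompose_beauty(elements):
    """Sum denoised light-contribution elements back into a beauty pass.
    Assumes each element is a float32 HxWx3 array in linear colour."""
    return sum(denoise(aov) for aov in elements.values())

if __name__ == "__main__":
    h, w = 4, 4  # tiny stand-in images
    elements = {
        name: np.random.rand(h, w, 3).astype(np.float32)
        for name in ("diffuse", "specular", "reflection", "refraction", "sss")
    }
    beauty = recompose_beauty(elements)
```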

CGC: Was it an issue that the original data set the AI denoiser was trained on was generated in Iray?

LG: Not that we noticed. Because ray tracers generally have Monte Carlo-type noise, it was pretty standard. If it could have more information about how V-Ray sampling works, could it be improved? Maybe. Over time, we think we’ll get a data set generated in V-Ray and try it out. But I think it’s worked okay.



V-Ray Next’s new VRayLightingAnalysis render element in false colour mode after a render is complete. The update provides lighting analysis functionality previously only available in the now-defunct Mental Ray.


CGC: V-Ray Next also introduces features that were previously native to 3ds Max. How come?

LG: A couple of years ago we helped bring the V-Ray physical camera to all Max users by integrating it directly in the application as the 3ds Max Physical Camera. Since then, we’ve decided that we need to bring our own V-Ray Physical Camera back to give us more control over the UI and to add new features like the point-and-shoot exposure [system].

We’ve also included some lighting analysis tools in Next, and part of the reason that came about is that Mental Ray is no longer included in Max, and it was the only renderer to support lighting analysis, so Autodesk needed a way for customers who were using the feature to keep rendering it. We stepped in and created the ability to render out [heat maps] and light value overlays.

CGC: You announced V-Ray Cloud earlier this year. Does Next include any cloud-related features?

LG: One of the things that’s a challenge for 3ds Max users is that there’s no native Linux version, which makes it difficult, or just expensive, to render on the cloud. We’re trying to help people set up their scenes so that they can render with V-Ray Standalone, since if they can render on Standalone, they can render on Linux. It means that they may have to pick and choose certain features, or that certain plugins might not work, so we’re including a cloud check utility to help people figure out what they can do and what they can’t.
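A compatibility check of that sort boils down to walking the scene and flagging anything that won’t translate to V-Ray Standalone. A purely hypothetical sketch – the plugin names and the scene_plugins list below are invented for illustration and are not V-Ray’s actual API or the shipping cloud check utility – could be as simple as:

```python
# Hypothetical compatibility check: the supported-set and scene contents here
# are invented for illustration, not V-Ray's actual API or plugin list.
STANDALONE_SUPPORTED = {"VRayMtl", "VRayLight", "VRayProxy", "VRayHDRI"}

def check_scene(scene_plugins):
    """Return the plugins in the scene that a Standalone/cloud render
    would not be able to translate."""
    return sorted(set(scene_plugins) - STANDALONE_SUPPORTED)

if __name__ == "__main__":
    scene_plugins = ["VRayMtl", "SomeThirdPartyShader", "VRayLight"]
    unsupported = check_scene(scene_plugins)
    if unsupported:
        print("Not cloud-renderable as-is:", ", ".join(unsupported))
```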

CGC: Presumably the new V-Ray Plugin material also helps there.

LG: I call it a ‘backstage pass’ to everything V-Ray can render. For instance, Maya has its own procedurals, and for V-Ray to support them, we’ve created V-Ray versions of them. [With VrayPluginNodeMtl] if you want a Maya procedural texture, you could load it directly from Max. Or in Modo, there’s support for PBR shaders for Unreal and Unity, so if you want to create a PBR shader in Max, you can just call one up. The beauty of it is that any of those [materials] can render with Standalone, so you can send them to the farm or the cloud.


A recent Corona demo, showing the upcoming Corona Renderer 2 rendering a V-Ray scene. Despite increased interoperability, few features have yet made their way from one renderer to the other.


CGC: Chaos Group owns two renderers now: V-Ray and Corona. How does that affect development?

LG: Each renderer has its own philosophy. Corona is aimed at point-and-shoot simplicity and V-Ray offers more flexibility. It’s great to be able to bounce ideas between teams.

CGC: One Corona feature that V-Ray users often ask for is Interactive LightMix.

LG: That’s definitely a popular request, and [we] have some ideas about how to integrate something like that that isn’t a dupe of the light mixer in Corona or Maxwell: it could be done through some of our light elements. We’ve also been looking at light path expressions. Something like that could make its way into V-Ray.

CGC: Is there anything else from Corona that has influenced the design of V-Ray Next?

LG: Vlado says that one area of influence from the Corona team was the desire to make things easier. It was good to talk to those guys about the design of the dome light. It helps to bounce ideas back and forth.

CGC: And how does the new version naming work going forwards? What’s next after V-Ray Next?

LG: I wish that I could tell you that we’ve thought that far ahead. [Laughs] But most people are calling the next version V-Ray 5 internally. I jokingly call it ‘the V-Ray After Next’, like the day after next.

We have plans for things we want to include in service packs, but some of that stuff is still getting researched. We’re experimenting more with what we can do with Scene Intelligence, and with out-of-core rendering on the GPU, so there are things that are going to improve throughout the release cycle.

Read more about the new features in V-Ray Next for 3ds Max on Chaos Group’s website