Thursday, August 26th, 2010 | Posted by Leonard Teo
Brad Peebler from Luxology posts some of the company's research findings on GPU vs CPU rendering.
But this is 12 CPU cores vs. 2 GPUs – a six-times speedup, really?
The 12-core machine would also cost quite a bit more than a mid-range GPU. How about tests on a more typical machine, with a year-old dual processor and a comparable GPU?
Did you also check Arion from RandomControl? It's what they call a hybrid renderer, using all the GPUs and CPUs on the local system, with the option of extending that across the network to use the power of every machine on the same subnet.
It isn't finished yet – it doesn't support all materials – but they're working on it, and I must say that compared to other CPU-only renderers it's quite fast. And true, modo renders really fast too, so the difference is less noticeable.
A few months ago we tested several GPU renderers (iray, Arion, Octane, etc.) and our conclusion was that Arion came closest to being a production-ready renderer, as it can also render out different passes, and it's fast with normal lights, HDRI and the daylight system. Octane was much slower with those, except in its direct-light + AO mode, which was closer to Arion's speed.
I'm curious where this is going, but both modo's CPU rendering and Arion's hybrid rendering suggest a very fast rendering future.
Uuuh, wrong – it was 2 CPUs against 2 GPUs. The CPUs each had six cores, while the GPUs each had 192 CUDA cores. The Quadro cards used (FX 4800, I believe) cost around 1,500 euros apiece, which AFAIK is more expensive than the CPUs used. I think you get a similar comparison if you take year-old CPUs and GPUs.
jeremy, apples to apples means CPU cores vs CUDA cores. The Quadro FX 4800 has 192 CUDA cores and the Quadro FX 5800 has 240.
12 CPU cores against 432 CUDA cores – do you get it now?
Very informative talk, thanks!
I think the reported comparisons are not representative. Have a look at V-Ray RT CPU versus GPU and you will find very significant speed increases. Why did you ignore it?
This wasn't actually meant as a comparison at all. Luxology, much like the rest of us, had heard that GPU rendering could significantly boost render speeds. After many, many internal tests, they simply decided that GPU rendering is not yet robust enough for them to consider it as a path for development. It's not a slight on GPU render engines or some sort of propaganda; it's just Luxology being open with its customers about why they've decided that pursuing a GPU-based rendering solution isn't right for them at this time.
This comparison is absurd, and I bet IT'S FAKE! Brad Peebler wants to sell his product and will do anything to that end. Open scenes with unoccluded objects are very easy to render on any engine. The problem is interior scenes, full of glass and occluded objects. Don't listen to this.
Agree with Chris. Luxology’s results are by no means a slight on GPU rendering. I am very happy that they’ve taken the stand to just post their findings, as unpopular as they may be. There has been a lot of hype that GPU rendering is somehow going to be a magical solution for rendering. It might one day…might not…we’ll see. Having more honest reports like this is good for the community overall.
Findings!? Meh, this is Intel propaganda, nothing more – that's why the Intel logo is in the video. You can get 480 CUDA cores in a GTX 480 for $500, while an FX 4800 costs $1,500!
Well – for a very high-end workstation, the cores-to-cores comparison stands. But I think most people are more likely to be using 2 to 4 CPU cores, maybe 8, and they are unlikely to go for the $1,500 GPU. A better comparison would be a more common machine with a more common (i.e. mid- to high-end gaming) GPU. I would be pretty surprised if the comparison came out the same.
Additionally, modo's extremely well-optimized renderer, which is blazingly fast, is not necessarily a good comparison for the GPU code they used, which requires entirely different kinds of optimization. It sounds from the video as though the GPU code was pretty hacky and primitive.
I know I've seen some pretty fast particle and rendering engine tests done on GPUs.
@Diogo Moita: This video was leaked from the registered users' forum; it wouldn't make sense as a sales pitch, considering the audience was meant to be licensed users.
Wouldn't cores-to-cores rather be a melons-to-grapes comparison, given the cores' characteristics? The only homogeneous metric here is dollars-to-dollars, IF image quality were the same, I think.
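The dollars-to-dollars idea can be sketched with the figures quoted in this thread. A minimal back-of-the-envelope calculation, assuming the quoted prices are roughly right (the thread mixes euros and dollars, treated here as comparable for illustration):

```python
# Back-of-the-envelope price-per-core arithmetic, using hardware figures
# quoted in the comments above. Prices are rough and illustrative only.

def price_per_core(total_price, cores):
    """Hardware price divided by core count. A crude metric: a CPU core
    and a CUDA core do very different amounts of work per clock."""
    return total_price / cores

# Two Quadro FX 4800s: 192 CUDA cores each, ~1,500 a card (thread figure)
quadro_per_core = price_per_core(2 * 1500, 2 * 192)

# One GTX 480: 480 CUDA cores for ~$500 (thread figure)
gtx480_per_core = price_per_core(500, 480)

print(quadro_per_core)  # 7.8125
print(gtx480_per_core)  # ~1.04 -- roughly 7x cheaper per CUDA core
```

Even on this crude metric the consumer card looks far cheaper per CUDA core, which is presumably the earlier commenter's point – though per-core price says nothing about image quality or what each kind of core actually delivers.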
Comparing cores to cores is one thing, but what if your scene can't fit into your graphics card's memory? 6, 8 or 12 GB of system RAM compared to 512 MB, 1 GB or possibly 2 GB on a graphics card – that's not a lot of space for textures, and, as an example, iray is unable to use the GPU *AT ALL* if the scene doesn't fit completely into the GPU's memory.
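The memory constraint described above is easy to hit with even modest texture sets. A hypothetical sketch (the sizes, the 20% overhead reserve for geometry and framebuffers, and the helper names are all illustrative assumptions, not any renderer's actual API):

```python
# Hypothetical sketch: estimate whether a scene's textures fit in GPU
# memory. Numbers are illustrative; real renderers also need VRAM for
# geometry, acceleration structures and framebuffers.

def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed size of one texture (default: 8-bit RGBA)."""
    return width * height * channels * bytes_per_channel

def scene_fits_in_vram(texture_sizes, vram_bytes, overhead=0.2):
    """Check total texture size against VRAM, reserving an assumed
    fraction (overhead) for everything that isn't textures."""
    total = sum(texture_bytes(w, h) for w, h in texture_sizes)
    return total <= vram_bytes * (1 - overhead)

# Example: twenty 4K RGBA textures against a 1 GB card
textures = [(4096, 4096)] * 20
print(scene_fits_in_vram(textures, 1024**3))  # False: ~1.25 GB of textures
```

Twenty uncompressed 4K textures alone are about 1.25 GB, already past a 1 GB card, which is why an all-or-nothing GPU renderer falls over on texture-heavy scenes that a 12 GB workstation handles without blinking.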