
GPU vs CPU Rendering Talk by Luxology

Thursday, August 26th, 2010 | Posted by Leonard Teo

Brad Peebler from Luxology presents some of the company's research findings on GPU vs CPU rendering.

Related Links
Octane Render



14 responses to “GPU vs CPU Rendering Talk by Luxology”

  1. jeremy said:

    But this is 12 CPUs vs. 2 GPUs – a sixfold speedup, really.

    The 12-core machine, additionally, would cost quite a bit more than a mid-range GPU. How about tests on a more typical machine, with a year-old dual-processor setup and a similarly priced GPU?

    6:07 pm on Thursday, August 26, 2010

  2. Stefan said:

    Did you also check Arion from RandomControl? It is a hybrid renderer, as they call it, using all the GPUs and CPUs on the local system, with the option to extend that across the network and use the power of every machine on the same subnet.
    It's not done yet – it doesn't support all materials – but they're working on it, and I must say that compared to other CPU-only renderers it's quite fast. And true, modo renders really fast too, so the difference is less noticeable.
    A few months ago we tested several GPU renderers (iRay, Arion, Octane, etc.) and our conclusion was that Arion came closest to a production-ready renderer, as it could also render out different passes, and it's fast with normal lights/HDRI/daylight systems, while Octane was much slower with those (except for the direct-light + AO mode, which was closer to Arion's speed).
    I'm curious where this is going, but both modo's CPU rendering and Arion's hybrid rendering suggest a very nice and fast rendering future 🙂

    2:46 am on Friday, August 27, 2010

  3. Marc said:

    Uuuh, wrong – it was 2 CPUs against 2 GPUs. The CPUs each had 6 cores, while the GPUs each had 192 CUDA cores. The Quadro cards used (FX 4800, I believe) cost around 1,500 euros a pop, which is AFAIK more expensive than the CPUs used. I think you get a similar comparison when you take one-year-old CPUs and GPUs.

    5:39 am on Friday, August 27, 2010

  4. Jann said:

    jeremy, apples to apples means CPU cores vs CUDA cores. The Quadro FX 4800 has 192 CUDA cores and the Quadro FX 5800 has 240 CUDA cores.

    12 CPU cores against 432 CUDA cores – do you get it now?

    5:41 am on Friday, August 27, 2010

  5. Ezpen said:

    Very informative talk, thanks! 🙂

    5:51 am on Friday, August 27, 2010

  6. Ron Martin said:

    Hi Brad,

    I think the reported comparisons are not representative. Have a look at V-Ray RT CPU versus GPU and you will find very significant speed increases. Why did you ignore it?

    7:21 am on Friday, August 27, 2010

  7. Chris said:

    This wasn’t actually meant as a comparison at all. Luxology, much like the rest of us, had heard that GPU rendering could significantly boost render speeds. After many, many internal tests, they simply decided that GPU rendering is not yet robust enough for them to consider it as a path for development. It’s not a slight on GPU render engines or some sort of propaganda; it’s just Luxology being open with its customers about why they’ve decided that pursuing a GPU-based rendering solution isn’t right for them at this time.

    10:46 am on Friday, August 27, 2010

  8. Diogo Moita said:

    This comparison is absurd, and I bet IT’S FAKE! Brad Peebler wants to sell his product and will do anything to that end. Open scenes with unoccluded objects are very easy to render on any engine. The problem is interior scenes, full of glass and occluded objects. Don’t listen to this.

    12:25 pm on Friday, August 27, 2010

  9. Leonard Teo said:

    Agree with Chris. Luxology’s results are by no means a slight on GPU rendering. I am very happy that they’ve taken a stand and simply posted their findings, as unpopular as they may be. There has been a lot of hype that GPU rendering is somehow going to be a magical solution for rendering. It might one day… might not… we’ll see. Having more honest reports like this is good for the community overall.

    6:19 pm on Friday, August 27, 2010

  10. LL said:

    Findings!? Meh, this is Intel propaganda, nothing more – that’s why the Intel logo is there in the video. You can get 480 CUDA cores in a GTX 480 for $500, while an FX 4800 costs $1,500!

    10:20 pm on Friday, August 27, 2010

  11. jeremy said:

    Well – for a very high-end workstation, the cores-to-cores comparison stands. I think most people are more likely to be using 2 to 4 CPU cores, maybe 8. Also, they are unlikely to go for the $1,500 GPU. A better comparison would be a more common machine with a more common (i.e. mid- to high-end gaming) GPU. I would be pretty surprised if the comparison came out the same.

    Additionally, Modo’s extremely well-optimized renderer, which is blazingly fast, is not necessarily a good comparison to the GPU code they used, which requires entirely different kinds of optimization. It sounds from the video as though the GPU code was pretty hacky and primitive.

    I know I’ve seen some pretty fast particle and rendering engine tests done on GPUs.

    12:01 pm on Wednesday, September 1, 2010

  12. Eric said:

    @Diogo Moita This video was leaked from the registered users forum; it wouldn’t make sense as a sales pitch, considering the audience was meant to be licensed users.

    8:18 pm on Wednesday, September 1, 2010

  13. Snafu said:

    Wouldn’t cores-to-cores rather be a melons-to-grapes comparison, given the cores’ characteristics? The only homogeneous metric here is dollars-to-dollars, IF image quality were the same, I think.

    3:00 am on Tuesday, September 7, 2010

  14. Steve said:

    Comparing cores to cores is one thing, but what if your scene can’t fit into your graphics card’s memory? 6, 8, or 12 GB of RAM compared to 512, 1,024, or possibly 2,048 MB on a graphics card… that’s not a lot of space for textures, and, as an example, iray is unable to use the GPU *AT ALL* if the scene doesn’t fit completely into the GPU’s memory.

    3:11 am on Thursday, October 21, 2010
