
There's a subtle distinction here which I believe is the source of this disagreement. Raytracing is indeed embarrassingly parallel - you can render a 5 megapixel image on 5 million different machines in the time it takes to render a single pixel on one machine.
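
As a minimal sketch of that mapping (a hypothetical CUDA kernel; trace() and make_ray() are invented placeholder helpers, not a real API), each thread owns one pixel and never needs to talk to its neighbours:

    // Hypothetical: one GPU thread per pixel, no communication between pixels.
    __global__ void render(float3* framebuffer, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        // A pixel's colour depends only on the (read-only) scene and its own
        // ray, so in principle every pixel could run on a separate machine.
        framebuffer[y * width + x] = trace(make_ray(x, y, width, height));  // placeholders
    }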

However, each machine will be executing entirely different instructions after a very short period - there's not much "coherency" between adjacent rays, because all it takes is for one ray to clip the corner of an object and suddenly it's bouncing around a completely different part of the scene. This is a difficulty for GPUs, which are not true parallel clusters. What they do well is run the same sequence of instructions on different data at the same time - in other words, not raytracing. I believe this is what the parent meant by "data dependency": there are a lot of divergent branches, and the calculations you do depend entirely on the scene data.
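
The divergence shows up in the per-ray bounce loop: which branch a ray takes, and how many iterations it runs, is decided by whatever it happens to hit. A rough sketch (intersect_scene, reflect_ray, scatter_diffuse and sky are invented placeholders):

    // Hypothetical per-ray loop: control flow is driven entirely by scene data.
    __device__ float3 shade(Ray ray, const Scene* scene) {
        for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce) {
            Hit hit = intersect_scene(scene, ray);
            if (!hit.found)                      // this ray escapes the scene...
                return sky(ray);                 // ...while its neighbour may not
            if (hit.material == MIRROR)          // one lane reflects,
                ray = reflect_ray(ray, hit);
            else                                 // the next scatters diffusely
                ray = scatter_diffuse(ray, hit);
        }
        // A SIMD warp executes the union of every branch its rays take, masking
        // off the inactive lanes, so heavy divergence leaves most of the hardware idle.
        return make_float3(0, 0, 0);
    }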

Intel's Larrabee architecture would have made GPUs behave more like genuine clusters - many small x86 cores, each free to run its own instruction stream. I think it's a bit sad we don't have general-purpose clusters in our machines, just the hobbled GPUs.


