
GPUs are best suited to providing high throughput for similar computations across wide datasets. So if you can break your algorithm down into a series of steps that are largely independent and have limited flow of control, it might be well suited to the task. If you need a lot of random access or branching logic, it may not work so well. But oftentimes it's possible to restructure an algorithm designed for a CPU to perform better on a GPU.
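To make that concrete, here's a minimal CUDA sketch of the kind of shape that maps well: a simple moving average computed independently for every (stock, day) element. This is my own illustration, not anything from the thread; all names and sizes are hypothetical.

```cuda
// Hypothetical sketch: a data-parallel moving average over per-stock
// closing prices. One thread per (stock, day) output element -- each
// thread's work is independent and branch-free apart from edge checks,
// which is exactly the shape GPUs handle well.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sma_kernel(const float *prices, // [n_stocks * n_days], row-major
                           float *sma,          // same layout, output
                           int n_stocks, int n_days, int window)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int total = n_stocks * n_days;
    if (idx >= total) return;

    int stock = idx / n_days;
    int day   = idx % n_days;
    if (day < window - 1) {            // not enough history yet
        sma[idx] = 0.0f;
        return;
    }

    // Naive O(window) sum per element; fine for a sketch. A real
    // implementation would likely use a prefix sum instead.
    float sum = 0.0f;
    const float *row = prices + stock * n_days;
    for (int k = 0; k < window; ++k)
        sum += row[day - k];
    sma[idx] = sum / window;
}

int main()
{
    const int n_stocks = 4096, n_days = 2520, window = 20; // ~10y of trading days
    const int total = n_stocks * n_days;

    float *prices, *sma;
    cudaMallocManaged(&prices, total * sizeof(float));
    cudaMallocManaged(&sma,    total * sizeof(float));
    for (int i = 0; i < total; ++i) prices[i] = 100.0f + (i % 7); // dummy data

    int threads = 256;
    int blocks  = (total + threads - 1) / threads;
    sma_kernel<<<blocks, threads>>>(prices, sma, n_stocks, n_days, window);
    cudaDeviceSynchronize();

    printf("sma[stock 0, day %d] = %f\n", window - 1, sma[window - 1]);
    cudaFree(prices);
    cudaFree(sma);
    return 0;
}
```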

But how many stocks are there, even? You might not have enough parallel operations to saturate a modern GPU.
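For rough scale (my numbers, not the commenter's): US exchanges list on the order of 5,000-6,000 equities, while a modern discrete GPU keeps tens of thousands of threads resident at once and wants several times that in flight to hide memory latency. One thread per stock would leave most of the chip idle, which is why the sketch above assigns a thread to every (stock, day) element rather than to every stock.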

Out of curiosity, why use WebGPU for this? If you're really trying to do something high performance, why not reach for something like CUDA?


