
This depends on the algorithm. If it has lots of branching and control flow, then it's unlikely to run well. If it's based on something like FFT or any sort of linear algebra really, then it should run really well.
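To make the distinction concrete, here is a CPU-side NumPy sketch (illustrative only; NumPy's vectorized ops stand in for GPU-style data parallelism). The scalar version below uses per-element branching, which causes lane divergence on a GPU; the vectorized version expresses the same logic branch-free, so every lane does identical work. The `soft_clip` functions and threshold are made up for illustration:

```python
import numpy as np

# Branchy, per-element control flow: maps poorly onto GPU lanes,
# because diverging threads within a warp serialize.
def soft_clip_scalar(x, threshold=0.8):
    if x > threshold:
        return threshold
    elif x < -threshold:
        return -threshold
    return x

# The same logic as a single branch-free, data-parallel expression:
# every lane performs identical work, which is what GPUs are built for.
def soft_clip_vectorized(xs, threshold=0.8):
    return np.clip(xs, -threshold, threshold)

xs = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
assert np.allclose(soft_clip_vectorized(xs),
                   [soft_clip_scalar(x) for x in xs])
```

The point is not that `np.clip` itself is fast, but that algorithms which can be phrased this way (FFTs, matrix products, elementwise maps) translate naturally to GPU dispatches, while heavily branching code does not.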

WebGPU is accessible enough that maybe you should try it! You'll learn a lot either way, and it can be fun.



Since you mentioned FFT, I wondered whether GPU-based computation would be useful for audio DSP. But then, lots of audio DSP operations run perfectly well even on a low-end CPU. Also, as one tries to reduce latency, the buffer size decreases, which reduces the potential to exploit parallelism; and I'm guessing that the round trip to the GPU and back adds latency of its own. Still, I guess GPU-based audio DSP could be useful for batch processing.
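The latency budget here is just arithmetic: each buffer adds buffer_size / sample_rate of delay, so the lower you push latency, the less data there is per dispatch. A quick sketch with typical (assumed, not measured) numbers:

```python
# Per-buffer latency = buffer_size / sample_rate.
# 48 kHz is a common audio sample rate; buffer sizes are typical choices.
sample_rate = 48_000  # Hz

for buffer_size in (1024, 256, 64):
    latency_ms = 1000 * buffer_size / sample_rate
    print(f"{buffer_size:5d} samples -> {latency_ms:5.2f} ms per buffer")
```

At 64 samples the whole buffer is worth about 1.33 ms, so a GPU round trip costing a few milliseconds would already blow the budget; at batch-processing scale, none of this matters.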


Yes. Probably the single most effective use of the GPU for audio is convolutional reverb, for which a number of plugins exist. However, the problem is that GPUs are optimized for throughput rather than latency, and it gets worse when multiple applications (including the display compositor) are contending for the same GPU resource - it's not uncommon for dispatches to have to wait multiple milliseconds just to be scheduled.
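For reference, here is a CPU-only NumPy sketch of the partitioned (overlap-add) FFT convolution that convolution reverbs are built on; the block size and lengths are arbitrary illustration values. The per-block FFTs and complex multiplies are exactly the throughput-friendly work a GPU handles well, while the block-by-block loop is where the scheduling latency bites:

```python
import numpy as np

def overlap_add_convolve(signal, ir, block=256):
    """Convolve `signal` with impulse response `ir` one block at a time,
    the way a real-time convolution reverb processes audio buffers."""
    # FFT length must cover one block convolved with the full IR.
    n_fft = 1
    while n_fft < block + len(ir) - 1:
        n_fft *= 2
    ir_fft = np.fft.rfft(ir, n_fft)  # transformed once, reused per block
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        seg = np.fft.irfft(np.fft.rfft(chunk, n_fft) * ir_fft, n_fft)
        n_out = len(chunk) + len(ir) - 1
        out[start:start + n_out] += seg[:n_out]  # overlap-add the tails
    return out

rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
ir = rng.standard_normal(500)
assert np.allclose(overlap_add_convolve(sig, ir), np.convolve(sig, ir))
```

(Real plugins use more elaborate non-uniform partitioning so the first partition is short enough to meet the latency deadline, but the structure is the same.)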

I think there's potential for interesting things in the future. There's nothing inherently preventing a more latency-optimized GPU implementation, and I personally would love to see that for a number of reasons. That would unlock vastly more computational power for audio applications.

There's also machine learning, of course. You'll definitely be seeing (and hearing) more of that in the future.


Already on it! https://github.com/webonnx/wonnx - runs ONNX neural networks on top of WebGPU (in the browser, or using wgpu on top of Vulkan/DX12/..)


Isn't there something like SciPy which you can run on WebGPU?


https://wgpu-py.readthedocs.io/en/stable/guide.html :

> As the WebGPU spec is being developed, a reference implementation is also being built. It's written in Rust, and is likely going to power the WebGPU implementation in Firefox. This reference implementation, called wgpu-native, also exposes a C API, which means that it can be wrapped in Python. And this is what wgpu-py does.

> So in short, wgpu-py is a Python wrapper of wgpu-native, which is a wrapper for Vulkan, Metal and DX12, which are low-level APIs for talking to the GPU hardware.

So it should be possible to WebGPU-accelerate SciPy, much as NumPy can already be CUDA-accelerated, either natively or through third-party libraries.
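One pattern that makes this kind of backend swap practical is writing array code against a module object rather than importing numpy directly, so a CUDA- or (hypothetically) WebGPU-backed drop-in can be passed instead. A minimal sketch, assuming only that the replacement backend mimics NumPy's API, as CuPy does:

```python
import numpy as np

def power_spectrum(xp, x):
    """Backend-agnostic DSP: `xp` is any NumPy-compatible array module
    (numpy, cupy, or a hypothetical wgpu-backed one)."""
    return xp.abs(xp.fft.rfft(x)) ** 2

# 4 full sine cycles in a 256-sample window -> energy lands in FFT bin 4.
x = np.sin(np.linspace(0, 8 * np.pi, 256, endpoint=False))
spec = power_spectrum(np, x)
assert spec.argmax() == 4
```

This is essentially what the array-API standardization effort in NumPy/SciPy is aiming to make official, which is what would let a WebGPU backend slot in underneath existing SciPy code.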

edit: see also Intel MKL and https://Rapids.ai :

> Seamlessly scale from GPU workstations to multi-GPU servers and multi-node clusters with Dask.

Where can WebGPU + (say) WebRTC/WebSockets + Workers provide value for multi-GPU applications that already have efficient distributed messaging protocols?

"Considerable slowdown in Firefox once notebook gets a bit larger" https://github.com/jupyterlab/jupyterlab/issues/1639#issueco... Re: the differences between the W3C Service Workers API, Web Locks API, and the W3C Web Workers API and "4 Ways to Communicate Across Browser Tabs in Realtime" may be helpful.

Pyodide compiles CPython and the SciPy stack to WASM; the WASM build would presumably benefit from WebGPU acceleration?


At this point, very few tools actually run on WebGPU, as it's too new. One to watch for sure is IREE, which has a WebGPU backend in the works.


There is wonnx, a Rust-based, WebGPU-enabled ONNX runtime for ML stuff.




