
Sorry, I didn't mean for my comment to sound too harsh. I just dislike these "lost art" or "what your doctor didn't know about" style headlines. I appreciate that there are people working on graphics/GPU programming and I didn't mean to shut you down or anything.

At any rate:

> Traditional graphics APIs like OpenGL are centered around a fixed-function pipeline tailored [...] render passes to accomplish multi-stage algorithms.

This whole paragraph is inaccurate. At least point out what version of OpenGL you are referring to. And no need to use the past tense once you do that.

> which lets you treat the GPU as one giant SIMD processor

This analogy is not accurate or useful for the reasons that are already obvious to you. I think it mostly just confuses the kind of reader that does not have the relevant experience.

> and move them back and forth between host and device with explicit copy calls.

Moving data to/from host/device in CUDA/OpenCL/Vulkan/etc. does require explicit copy calls (and for good reason, since the data goes over the PCIe bus on discrete architectures.)
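
For what it's worth, WebGL 1.0 has the same explicit host/device boundary; a rough sketch (my own helper names, and float upload/readback assumes the relevant float-texture extensions are available):

    // Host -> device: texImage2D is the explicit upload.
    function upload(gl: WebGLRenderingContext, data: Float32Array, w: number, h: number): WebGLTexture {
      const tex = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.FLOAT, data);
      return tex;
    }

    // Device -> host: readPixels is the explicit (and synchronous) readback
    // from the currently bound framebuffer.
    function readback(gl: WebGLRenderingContext, w: number, h: number): Float32Array {
      const out = new Float32Array(w * h * 4);
      gl.readPixels(0, 0, w, h, gl.RGBA, gl.FLOAT, out);
      return out;
    }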

> User-driven pipeline: You define exactly what happens and when instead of using a predefined fixed sequence of rendering stages.

You can do the same with compute on OpenGL/vulkan/etc. Like above, specify what version of OpenGL you are talking about to avoid confusion.

> In OpenGL, the output of your computation would ultimately be pixels in a framebuffer or values in a texture

Confusing for the same reason, especially because this statement now uses the word "computation", unlike the statements leading to it.

Personally, I would just rewrite this entire section to point out the limitations of pre-compute graphics APIs (whether it's OpenGL, earlier versions of DirectX, or whatever.)

> and other graphics specific concepts I hijacked

What does 'hijacked' mean here? You're not hijacking anything, you are using the old APIs as intended and using the right terms in the description that follows.

> bypassing any interpolation

"Filtering" is a better choice of word. And you're not bypassing it as much as you are simply not doing any filtering (it's not like filtering is part of the FFP or HW and you're "bypassing" it.)

> A Framebuffer Object (FBO) is a lightweight container

Actually, an FBO is a big turd that incurs heavy runtime validation on the driver side if you ever think about changing the targets. You might actually want to point that out since it is relevant to your implementation. I wouldn't use "lightweight" to describe it anyway.
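
If it helps, this is what the expensive part looks like from the API side; building one FBO per render target and validating it once up front avoids repeated revalidation (sketch, my own helper name):

    // One FBO per render-target texture, validated once up front.
    function makeFbo(gl: WebGLRenderingContext, target: WebGLTexture): WebGLFramebuffer {
      const fbo = gl.createFramebuffer()!;
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, target, 0);
      // The driver-side validation behind this check is the heavy part;
      // re-attaching different textures later forces it to happen again.
      if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
        throw new Error("incomplete framebuffer");
      }
      return fbo;
    }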

> we “ping-pong” between them

Yeah, you might be better off creating two separate FBOs per my point above. Vendor-specific territory here, though; I think the OpenGL wiki touches on this if I remember correctly.
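
Something like this, i.e. swap which pre-built FBO you draw into instead of re-attaching textures (sketch; texA/texB/passes/runPass are placeholders and makeFbo is from the sketch above):

    // Two pre-validated FBOs, each permanently attached to its own texture.
    let src = { tex: texA, fbo: makeFbo(gl, texA) };
    let dst = { tex: texB, fbo: makeFbo(gl, texB) };

    for (const pass of passes) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fbo); // write into dst
      gl.bindTexture(gl.TEXTURE_2D, src.tex);      // read from src
      runPass(pass);                               // draw the fullscreen geometry
      [src, dst] = [dst, src];                     // ping-pong: swap roles, no re-attachment
    }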

> All of this happens entirely on the GPU’s VRAM bus

What is the "this" that this refers to? If you mean rendering to textures, this statement is inaccurate because there are also several layers of cache between the SIMDs and the VRAM. I think you could just say that the rendering stays on device local memory and call it a day without getting into more detail.

> FBOs form the data bus

I find this new term misleading given that you just talked about a "VRAM bus" in the paragraph above. I'd avoid introducing this new "data bus" term altogether. It doesn't seem like a useful abstraction or one that is described in any detail, so it does not add/take away much from the rest of the article.

> Instead of using fragment shaders to shade pixels for display, we hijack them as compute kernels

Just avoid this hijacking analogy altogether. I think it only adds confusion. You are implementing a compute workload, or "kernel" per CUDA terminology, in a fragment shader; you can just call it that.

> each fragment invocation becomes one “thread”

Actually, each fragment invocation _is_ a thread per every HW vendor's own terminology, so no need for the quotes here. Of course, you haven't introduced the term "thread" up until this point (first and only occurrence of the word), which is the real problem here. A brief section on GPU architecture could help.

> Per-pixel work item: Each fragment corresponds to one matrix element (i, j). The GPU runs this loop for every (i, j) in parallel across its shader cores.

What is "this loop" you are referring to? I know which it is, but there is no loop in the shader code (there's the one for the dot product, but that's not the relevant one.) This is confusing to the reader.

> All it does is draw two triangles which covers the entire view port.

Let's draw a single triangle that covers the whole viewport while we're at it. It's more efficient because it avoids double-fragging the diagonal. https://wallisc.github.io/rendering/2021/04/18/Fullscreen-Pa...
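
For reference, the single-triangle version is just three oversized clip-space vertices (illustrative; `gl` and the attribute setup are assumed):

    // Clip-space corners (-1,-1), (3,-1), (-1,3): after clipping, the triangle
    // covers the whole viewport with no diagonal seam to shade twice.
    const fullscreenTri = new Float32Array([
      -1, -1,
       3, -1,
      -1,  3,
    ]);
    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, fullscreenTri, gl.STATIC_DRAW);
    // ...vertexAttribPointer/enableVertexAttribArray as usual, then:
    gl.drawArrays(gl.TRIANGLES, 0, 3);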

> Reusable: Because the vertex work is identical for all operations, we compile it once and reuse it across every matrix multiply, activation, and bias-add pass.

To be clear, you compile the vertex shader once, but you're still going to have to link N programs. I don't think the reuse is worth highlighting, because linking is where most of the cost happens anyway.
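
In code terms: the vertex shader object is shared, but every pass still pays for its own link (sketch; `gl` and `fullscreenVertexSrc` are placeholders):

    // Compile the shared vertex shader once...
    const vs = gl.createShader(gl.VERTEX_SHADER)!;
    gl.shaderSource(vs, fullscreenVertexSrc);
    gl.compileShader(vs);

    // ...but each pass is its own program, and each linkProgram is where the cost is.
    function makePass(fragSrc: string): WebGLProgram {
      const fs = gl.createShader(gl.FRAGMENT_SHADER)!;
      gl.shaderSource(fs, fragSrc);
      gl.compileShader(fs);
      const prog = gl.createProgram()!;
      gl.attachShader(prog, vs);   // reuse the same compiled vertex shader
      gl.attachShader(prog, fs);
      gl.linkProgram(prog);        // linked once per pass/program
      return prog;
    }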

> While hijacking WebGL

No hijacking.

> There’s no on-chip scratchpad for blocking or data reuse, so you’re limited to element-wise passes.

The on-chip memory is still there; it's just not accessible from fragment shaders in the old APIs.

> Texture size limits: GPUs enforce a maximum 2D texture dimension (e.g. 16 K×16 K).

I haven't checked, but I would bet this is either an API limitation, or a vendor-specific limit. So to say that "GPUs enforce" would be misleading (is it really the HW or the API? And is it all GPUs? some? vendor-specific?)
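
Either way, the robust thing is to query the actual limit at runtime instead of assuming 16K:

    // Whatever the implementation reports is the real limit for this context.
    const maxSize = gl.getParameter(gl.MAX_TEXTURE_SIZE) as number;
    console.log("max 2D texture dimension:", maxSize);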

I haven't checked the neural net side of things or the shaders in any detail.

Also, I think a better title would be "Running GPT-2 in WebGL 1.0 / OpenGL 2.0", dropping the subtitle. It's more specific, and you might even get more clicks from people looking to implement this stuff on older hardware. No "lost art" that isn't actually lost, and no "rediscovery".



Thanks for your input. Lots of good points on the technical side. Will go through and make some edits later tonight or tomorrow.

> You're not hijacking anything, you are using the old APIs as intended and using the right terms in the description that follows.

When it comes to the use of the word "hijacking", I use it to refer to the fact that using graphics shaders for general computation wasn't what they were initially intended for. When NVIDIA introduced programmable vertex and pixel shaders, they had no idea they would be used for anything other than graphics rendering. So when I say I "hijack" a fragment shader to compute layers of a neural network instead of using it as part of a rendering pipeline, this is what I mean. I don't see a problem with this use of language.



