Yes, this is the case. During training, the model gets a sequence of text (e.g., 512 tokens long) with a percentage of the tokens masked out (with a special <MASK> token). It learns how to unmask those tokens to reconstruct the original text.
In the case that you mentioned, if we had 4 <MASK> tokens in a row, all we are doing for decoding is predicting what those 4 tokens should be.
Generally, this does not seem to be a significant problem, as there are usually multiple ways to express an idea in varying lengths. Also, confidence-aware parallel decoding usually avoids the scenario you mentioned: with a well-trained model, committing only the highest-confidence tokens at each step tends to sidestep it.
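If it helps to see it concretely, the decode loop is conceptually something like this (just a sketch; `predict` and the MASK id are stand-ins, not any particular implementation):

```typescript
// Stand-in for the model: given the current sequence (with MASK placeholders),
// return the most likely token and its confidence at every position.
interface Prediction { token: number; confidence: number; }
declare function predict(tokens: number[]): Prediction[];

const MASK = -1; // placeholder id for a masked position

// Confidence-aware parallel decoding: each step, commit only the masked
// positions the model is most confident about, then re-predict the rest.
function decode(tokens: number[], perStep = 4): number[] {
  while (tokens.includes(MASK)) {
    const preds = predict(tokens);
    const maskedByConfidence = tokens
      .map((_t, i) => i)
      .filter((i) => tokens[i] === MASK)
      .sort((a, b) => preds[b].confidence - preds[a].confidence);
    for (const i of maskedByConfidence.slice(0, perStep)) {
      tokens[i] = preds[i].token;
    }
  }
  return tokens;
}
```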
If you couldn't tell, the post was specifically about using shader programming for general-purpose computation. Yes, WebGPU adds compute shaders, but the point of the article was to use the graphics pipeline. If there are statements you think are incorrect or inaccurate, pointing them out would be very much appreciated :)
Sorry, I didn't mean for my comment to sound too harsh. I just dislike these "lost art" or "what your doctor didn't know about" style headlines. I appreciate that there are people working on graphics/GPU programming and I didn't mean to shut you down or anything.
At any rate:
> Traditional graphics APIs like OpenGL are centered around a fixed-function pipeline tailored [...] render passes to accomplish multi-stage algorithms.
This whole paragraph is inaccurate. At least point out what version of OpenGL you are referring to. And no need to use the past tense once you do that.
> which lets you treat the GPU as one giant SIMD processor
This analogy is not accurate or useful for the reasons that are already obvious to you. I think it mostly just confuses the kind of reader that does not have the relevant experience.
> and move them back and forth between host and device with explicit copy calls.
Moving to/from host/device in cuda/opencl/vulkan/etc does require explicit copy calls (and for good reason since the shit goes over the PCI bus on discrete architectures.)
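FWIW, the WebGL version in the article has the same explicit boundary; conceptually the host/device traffic looks something like this (just a sketch, assuming a WebGL2 context and a float-renderable attachment via EXT_color_buffer_float; function names are mine):

```typescript
// Host -> device: upload matrix data into a texture (the WebGL analogue of
// an explicit cudaMemcpyHostToDevice).
function upload(gl: WebGL2RenderingContext, data: Float32Array, w: number, h: number): WebGLTexture {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, w, h, 0, gl.RED, gl.FLOAT, data);
  return tex;
}

// Device -> host: read results back out of the currently bound framebuffer
// (requires the framebuffer's color attachment to be a float format).
function readback(gl: WebGL2RenderingContext, w: number, h: number): Float32Array {
  const out = new Float32Array(w * h * 4);
  gl.readPixels(0, 0, w, h, gl.RGBA, gl.FLOAT, out);
  return out;
}
```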
> User-driven pipeline: You define exactly what happens and when instead of using a predefined fixed sequence of rendering stages.
You can do the same with compute on OpenGL/vulkan/etc. Like above, specify what version of OpenGL you are talking about to avoid confusion.
> In OpenGL, the output of your computation would ultimately be pixels in a framebuffer or values in a texture
Confusing for the same reason, especially because this statement now uses the word "computation", unlike the statements leading to it.
Personally, I would just rewrite this entire section to point out the limitations of pre-compute graphics APIs (whether it's OpenGL, earlier versions of DirectX, or whatever.)
> and other graphics specific concepts I hijacked
What does 'hijacked' mean here? You're not hijacking anything, you are using the old APIs as intended and using the right terms in the description that follows.
> bypassing any interpolation
"Filtering" is a better choice of word. And you're not bypassing it as much as you are simply not doing any filtering (it's not like filtering is part of the FFP or HW and you're "bypassing" it.)
> A Framebuffer Object (FBO) is a lightweight container
Actually, an FBO is a big turd that incurs heavy runtime validation on the driver side if you ever think about changing the targets. You might actually want to point that out since it is relevant to your implementation. I wouldn't use "lightweight" to describe it anyway.
> we “ping-pong” between them
Yeah, you might be better off creating two separate FBOs per my point above. Vendor-specific territory here, though. But I think the OpenGL wiki touches on this if I remember correctly.
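Something like this keeps the attachments fixed so the driver doesn't have to revalidate on every pass (just a sketch; `gl`, `texA`/`texB`, `passes`, and `runPass` are placeholders, not the article's actual code):

```typescript
// One FBO per texture, created up front, attachments never touched again.
function makeFbo(gl: WebGLRenderingContext, tex: WebGLTexture): WebGLFramebuffer {
  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return fbo;
}

let read = { tex: texA, fbo: makeFbo(gl, texA) };
let write = { tex: texB, fbo: makeFbo(gl, texB) };

for (const pass of passes) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, write.fbo); // render into "write"
  gl.bindTexture(gl.TEXTURE_2D, read.tex);       // sample from "read"
  runPass(pass);                                 // draw the full-screen quad
  [read, write] = [write, read];                 // ping-pong by swapping bindings
}
```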
> All of this happens entirely on the GPU’s VRAM bus
What is the "this" that this refers to? If you mean rendering to textures, this statement is inaccurate because there are also several layers of cache between the SIMDs and the VRAM. I think you could just say that the rendering stays on device local memory and call it a day without getting into more detail.
> FBOs form the data bus
I find this new term misleading given that you just talked about a "VRAM bus" in the paragraph above. I'd avoid introducing this new "data bus" term altogether. It doesn't seem like a useful abstraction or one that is described in any detail, so it does not add/take away much from the rest of the article.
> Instead of using fragment shaders to shade pixels for display, we hijack them as compute kernels
Just avoid this hijacking analogy altogether. I think it only adds confusion. You are implementing a compute workload or "kernel" per CUDA terminology in a fragment shader; can just call it that.
> each fragment invocation becomes one “thread”
Actually, each fragment invocation _is_ a thread per every HW vendor's own terminology, so no need for the quotes here. Of course, you haven't introduced the term "thread" up until this point (first and only occurrence of the word), which is the real problem here. A brief section on GPU architecture could help.
> Per-pixel work item: Each fragment corresponds to one matrix element (i, j). The GPU runs this loop for every (i, j) in parallel across its shader cores.
What is "this loop" you are referring to? I know which it is, but there is no loop in the shader code (there's the one for the dot product, but that's not the relevant one.) This is confusing to the reader.
> All it does is draw two triangles which covers the entire view port.
> Reusable: Because the vertex work is identical for all operations, we compile it once and reuse it across every matrix multiply, activation, and bias-add pass.
To be clear, you compile the vertex shader once, but you're still going to have to link N programs. I don't think the reuse is worth pointing out, because linking is where most of the shit happens.
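Roughly (sketch; `gl`, `quadVertSource`, and the per-op fragment sources are stand-ins):

```typescript
// The full-screen-quad vertex shader is compiled exactly once...
const quadVert = gl.createShader(gl.VERTEX_SHADER)!;
gl.shaderSource(quadVert, quadVertSource);
gl.compileShader(quadVert);

// ...but every operation (matmul, bias-add, activation, ...) still needs its
// own program object, and each of those is a separate link step.
function makeProgram(fragSource: string): WebGLProgram {
  const frag = gl.createShader(gl.FRAGMENT_SHADER)!;
  gl.shaderSource(frag, fragSource);
  gl.compileShader(frag);

  const program = gl.createProgram()!;
  gl.attachShader(program, quadVert); // reused vertex shader
  gl.attachShader(program, frag);
  gl.linkProgram(program);            // this is where most of the cost is
  return program;
}
```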
> While hijacking WebGL
No hijacking.
> There’s no on-chip scratchpad for blocking or data reuse, so you’re limited to element-wise passes.
The on-chip memory is still there; it's just not accessible from fragment shaders in the old APIs.
> Texture size limits: GPUs enforce a maximum 2D texture dimension (e.g. 16 K×16 K).
I haven't checked, but I would bet this is either an API limitation, or a vendor-specific limit. So to say that "GPUs enforce" would be misleading (is it really the HW or the API? And is it all GPUs? some? vendor-specific?)
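In WebGL at least, it's a queryable, implementation-reported limit:

```typescript
// The limit is whatever the implementation reports, not a fixed HW constant.
const maxSize = gl.getParameter(gl.MAX_TEXTURE_SIZE); // e.g. 4096, 8192, or 16384
```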
I haven't checked the neural net side of things or the shaders in any detail.
Also, I think a better title would be "Running GPT-2 in WebGL 1.0 / OpenGL ES 2.0", dropping the subtitle. It's more specific, and you might even get more clicks from people looking to implement this stuff on older hardware. No "lost art" that isn't actually lost, and no rediscovery.
Thanks for your input. Lots of good points on the technical side. Will go through and make some edits later tonight or tomorrow.
> You're not hijacking anything, you are using the old APIs as intended and using the right terms in the description that follows.
When it comes to the use of the word "hijacking", I use it to refer to the fact that using graphics shaders for general computation wasn't what they were initially intended for. When NVIDIA introduced programmable vertex and pixel shaders, they had no idea they would be used for anything other than graphics rendering. So when I say I "hijack" a fragment shader to compute layers of a neural network instead of using it as part of a rendering pipeline, this is what I mean. I don't see a problem with this use of language.
Thanks for the comment! I did this as a final project in a graphics class where we mainly used WebGL for all the assignments. It would be cool to see the improvements a WebGPU port would bring!
A few weeks back, I implemented GPT-2 using WebGL and shaders. Here's a write-up on how I made it, covering how I used textures and framebuffer objects in WebGL to store and move around the weights and intermediate outputs.
Request -- big bold link to a working web page right at the top? I read the page, I read your GitHub, and I saw instructions to clone and run a node site, and was like "..nah". I think GitHub Pages will serve this up for free if you like.
Here's a link to the GitHub repo. At the top of the README there's a demo of GPT-2 running, along with visualizations of the attention matrices and transformer block outputs.
Kudos for going down the WebGL route and not the Chrome-only WebGPU approach that some people most likely expected.
It is going to take at least another year for WebGPU 1.0 to be available in stable versions of other browsers. Chrome still doesn't have stable WebGPU on GNU/Linux, and it is already far ahead with extensions that most likely won't be in the 1.0 MVP of the other browsers.
Member of Firefox's WebGPU team here. I'm curious what extensions you're referring to!
We're hoping to ship on Windows stable in the next couple of Firefox releases. Other platforms should follow in the year after that. If you want, I'd encourage you to try out WebGPU on Nightly on Linux and see if you run into any issues!
Not my repo, but I think there are others that offer this functionality via the same method the "iMessage Wrapped" project used to access your message history. Since they're also MCP servers, they should work seamlessly with Claude.
https://github.com/willccbb/imessage-mcp/tree/main