It is easy to make benchmarks where JS is faster. JS inlines at runtime, while wasm typically does not, so if you have code where the wasm toolchain makes a poor inlining decision at compile time, then JS can easily win.
But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).
Things might be getting better for JS, but just looking over those briefly, they don't look memory constrained, which is the main place where I've seen significant speedups. Also, simpler code makes JIT optimizations look better, but that level of performance won't be consistent in real world code.
You might be right in your use case, but still, JS is not the benchmark to beat. Native Client was already almost as fast as native code, started up almost instantly, and didn't need a decade of engineering (and who knows how much money) invested into it.
WebAssembly, which was supposed to replace it, needed to be at least as good - that was the promise. We're a decade in, and Wasm is still nowhere near that, while it has accumulated an insane amount of engineering complexity in its compilers. Its ability to run native apps without tons of constraints and modifications is still meh, as is the performance.
To be fair, Native Client achieved much of its speed by reusing LLVM and the decades of work put into that excellent codebase.
Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.
And, a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design, by making it able to coexist with JS code (even interleaving stack frames) on the main thread.
I think you're referring to PNaCl (as opposed to Native Client), which did away with the arch-specific assembly, and I think they shipped the code as LLVM IR. These are two completely separate things; I am referring to the former.
I don't see an issue with shipping uArch-specific assembly; you really only have two architectures in heavy use today, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation has become, which is still lacking in key ways.
As for out-of-process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop app or game, you can cram it into an iframe, and the tab (renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.
But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.
For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.
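The slab-and-mask idea above can be sketched in a few lines of TypeScript. This is a conceptual model with made-up names (`SLAB_SIZE`, `load`, `store`), not NaCl's actual mechanism - real NaCl combined static validation of the machine code with hardware tricks (segments on x86-32, guard regions on x86-64) so most accesses need no per-access mask:

```typescript
// Conceptual sketch: a power-of-two memory "slab" where every guest
// address is masked before use, so no access can escape the sandbox.
const SLAB_SIZE = 1 << 16;          // 64 KiB slab; must be a power of two
const ADDR_MASK = SLAB_SIZE - 1;
const slab = new Uint8Array(SLAB_SIZE);

// A guest "pointer" is just an integer offset; masking forces the access
// to stay base-relative within the slab.
function load(addr: number): number {
  return slab[addr & ADDR_MASK];
}

function store(addr: number, value: number): void {
  slab[addr & ADDR_MASK] = value & 0xff;
}

// An out-of-range address wraps back into the slab instead of escaping:
store(0x12345, 42);                 // lands at offset 0x12345 & 0xffff
```

The same trick is essentially what wasm's linear memory gives you, just enforced by the engine (via bounds checks or guard pages) rather than by explicit masks in the generated code.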
Personally I thought that WebAssembly was what its name suggested - an architecture-independent assembly language that was already heavily optimized, where only the register allocation passes and the machine instruction translation were missing - which is the end of the compiler pipeline, and can be done fairly fast compared to a whole compile.
But it seems to me Wasm engines are more like LLVM, an entire compiler consuming IR, and doing fancy optimization for it - if we view it in this context, I think sticking to raw assembly would've been preferable.
> I don't see an issue with shipping uArch-specific assembly; you really only have two architectures in heavy use today,
That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.
This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.
I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.
I would take these benchmarks with a pinch of salt. Within a single function, it's very easy to optimize JS because you know every way a single variable will be defined. When you have to call a function, the data type of the argument can be anything the caller passes to the function, which makes optimization far more complex.
In practice, WASM codebases won't be simply running a single pure function in WASM from JS but instead will have several data structures being passed around from one WASM function to another, and that's going to be faster than doing the same in JS.
By the way, if I remember correctly V8 can optimize function calls heuristically if every call always passes the same argument types, but because this is an implementation detail it's difficult to know what scenarios are actually optimized and which are not.
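As a rough illustration of the point above - this is a sketch of the general inline-cache idea, not V8's documented behavior, and `normSquared` is a made-up example function:

```typescript
// A call site that always sees the same object shape stays "monomorphic",
// which lets a JIT specialize (and potentially inline) the property
// accesses; mixing shapes at the same site makes it polymorphic and
// harder to optimize.
interface Point { x: number; y: number; }

function normSquared(p: Point): number {
  return p.x * p.x + p.y * p.y;
}

// Monomorphic: every call passes objects created with the same shape.
let total = 0;
for (let i = 0; i < 1000; i++) {
  total += normSquared({ x: i, y: i });
}

// Polymorphic: a structurally different object (extra property) at the
// same call site forces the engine to handle multiple hidden classes.
const odd = { x: 2, y: 1, label: "extra" };
total += normSquared(odd);
```

Whether the engine actually specializes either site, and when it gives up, is an implementation detail - which is exactly why microbenchmarks of this kind transfer so poorly to real codebases.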
People working in languages/libraries/codebases where LLMs aren't good is a thing. That doesn't mean they aren't good tools, or that those things won't be conquered by AI in short order.
I try to assume people who are trashing AI are just working in systems like that, rather than being bad at using AI, or worse, shit-talking the tech without really trying to get value out of it because they're ethically opposed to it.
A lot of strongly anti-AI people are really angry human beings (I suppose that holds for vehemently anti-<anything> people), which doesn't really help the case, it just comes off as old man shaking fist at clouds, except too young. The whole "microslop" thing came off as classless and bitter.
the microslop thing is largely just a backlash at ms jamming ai into every possible crevice of every program and service they offer with no real plan or goals other than "do more ai"
Game playing is the next frontier. Model economically valuable tasks as games and have the agents play/compete. Alphabench and Vendingbench show the potential of this approach.
A decade of reinforcement and agentic learning was spent playing games (Google Deepmind AlphaGo, AlphaStar, OpenAI Five), including against each other. So what makes it a new frontier?
Its application to LLMs to push capabilities. We're going to tap out expert feedback, and objective/competitive arenas are going to be the only way to progress at a reasonable speed.
The difference is going to be instead of starting from pre-existing games and hoping that "generalizes" to intelligence, this time people are going to build gamified simulators of economically valuable stuff. This is feasible now because we can use LLMs to help generate these games much faster than we would have been able to previously.
I thought it was already pretty well established that Torvalds is a jerk? Or, at a minimum, somewhat petulant.
But also a good example of someone’s accomplishments .. arguably being worth something even if that’s true. I made my whole existence off of Linus’s handiwork and owe him a debt of gratitude for it. I probably still get more in monthly residuals than 90% of the people who wrote anything I deployed. Who cares what I think of anyone personally?
I’d hate to be so deranged about anyone that I can’t see any good in their accomplishments. I’m not exactly Miss Manners in the professional or personal realm either, don’t let me cast the first stone.
I'd even go as far as saying that Linus's accomplishments are way more important and that Steve's destroyed society, but that's enough out of me. Even if that's my opinion, I'm still saying that about a trillion-dollar company, and that's still someone's yardstick for success. Genius is genius, accomplishments are accomplishments.
… and god what a grey and insecure and screwed up IT world this would be if neither of those people ever existed and Microsoft ruled the world. Either we wouldn’t even have functional cash registers let alone any other technical pillars or infrastructure… or we’d all be in our rightful BSD utopia right about now.
To emphasize the difference between Linus and Steve: Steve seemed to be 100% an asshole when he wasn't performing, whereas Linus is (afaik) mostly just very opinionated and doesn't care about being diplomatic at all, but is not fundamentally a bad human being.
Skills don't make any difference beyond having markdown files with instructions to point an agent at as needed. Context7 isn't any better than telling your agent to use trafilatura to scrape web docs for your libs, and having a linting/static analysis suite isn't a harness thing.
3.7 was kinda dumb, it was good at vibe UIs but really bad at a lot of things and it would lie and hack rewards a LOT. The difference with Opus 4.5 is that when you go off the Claude happy path, it holds together pretty well. With Sonnet (particularly <=4) if you went off the happy path things got bad in a hurry.
Less true than you think. A lot of the progress in the last year has been tightening agentic prompts/tools and getting out of the way so the model can flex. Subagents/MCP/Skills are all pretty mid, and while there has been some context pruning optimization to avoid carrying tool output along forever, that's mainly a benefit to long running agents and for short tasks you won't notice.