
> I've thought about building the same thing, by using beads... Glad someone in the hivemind did it.

Gas Town is from the creator of beads.


This makes even more sense; it definitely feels like a logical next step from Beads. I use Beads a lot, it's one of the few things I use with Claude Code.

Sidenote from the article, but TIL Mark Atwood is no longer at Amazon.


> What’s old is new again

Let's go back even further... I get strong nForce vibes from that extract!


Carmack's comments and the comments in the thread entirely surprise me.

256kbit/s was pretty much the standard ADSL speed 20 years ago. I remember thinking some of my friends were lucky to have 512kbit/s, and 1500kbit/s was considered extremely fortunate.

Even so, calls over Skype worked fine, you could run IRC or MSN Messenger while loading Flash games or downloading MP3s. You could definitely play games like StarCraft, Age of Empires, Quake, UT2004, etc. on a 256k ADSL line. Those plans were also about 8x the price of this plan, not even adjusting for inflation.

Not only that, those lines were typically only 64k upload speed. The usefulness of a 500kbit/s up/down line is incredibly high. I think the only reason it might seem less useful now is that web services are not typically optimised to be usable on dial-up speeds like they were 20 years ago.

With the right setup and having feeds/content download asynchronously rather than "on-demand", 500kbit/s is still plenty of internet by today's standards.
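As a quick back-of-the-envelope sketch (my own arithmetic, not from the article), here is roughly how much data a 500 kbit/s line delivers, ignoring protocol overhead:

```python
# Rough sustained throughput of a 500 kbit/s link.
LINK_KBPS = 500
bytes_per_sec = LINK_KBPS * 1000 / 8           # 62,500 bytes/s
mb_per_hour = bytes_per_sec * 3600 / 1e6       # megabytes per hour
print(f"{mb_per_hour:.0f} MB/hour")            # -> 225 MB/hour
```

That's over 5 GB a day if feeds and podcasts sync in the background, which is why the asynchronous setup matters more than the raw number.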


No need to be surprised, 512 kbps isn't enough because it would take a gif half a minute to load at those speeds. We just didn't send gifs back then.


We totally did, and they loaded and played progressively. More like we weren't pushing 20MB of JavaScript to people's browsers.


The Dancing Baby gif, which was abnormally large and went viral via email in 1996, is around 220 KB. At this speed, it would load in 3.5 seconds. And being 4 seconds long, it could stream.
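The arithmetic checks out (my own sketch, assuming the 500 kbit/s line under discussion and no overhead):

```python
size_kb = 220        # Dancing Baby gif, roughly 220 KB
link_kbps = 500      # the 500 kbit/s plan from the article
seconds = size_kb * 8 / link_kbps  # KB -> kbit, then divide by link rate
print(f"{seconds:.1f} s")          # -> 3.5 s
```

Since the gif plays for about 4 seconds, the transfer finishes faster than playback, so it streams.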


No, even that was fine and common. Massive blocks of ads, analytics, etc. weren't the norm though, and I for one miss a time when we wouldn't conceive of introducing them.


Browsing the internet on a 256 kbps SDSL modem in BeOS in 1998 is still the fastest web experience I’ve ever had.


Unfortunately it wouldn’t work. All web applications would need to be redesigned for this speed.


Does it mean that they can be stored at room temperature, in humid conditions, etc.? i.e., requiring no HVAC/dehumidifiers or whatever else might be needed to reliably store archive media?

That's my charitable interpretation.


On the idea of replacing one's self with a shell script: I think there's nothing stopping people (and it should probably be encouraged) from replacing one's use of an LLM with an LLM-generated "shell script".

Using an LLM to count how many Rs are in the word strawberry is silly. Using it to write a script to reliably determine how many <LETTER> are in <WORD> is not so silly.
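A sketch of the kind of throwaway script an LLM could generate once and reuse deterministically (the function name and interface here are my own invention):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # -> 3
```

The point being: the model only has to get this right once, after which counting is no longer a token-level guessing game.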

The same goes for many repeated tasks you'd have an LLM naively perform.

I think that is essentially what the article is getting at, but it's got very little to do with MCP. Perhaps the author has more familiarity with "slop" MCP tools than I do.


There is at least some awareness already when it comes to the performance of the regex engine:

https://github.com/openai/tiktoken/blob/main/src/lib.rs#L95-...


Hmm kinda makes sense to keep them separate because the agents perform differently, right?

You might want to tell Claude not to write so many comments but you might want to tell Gemini not to reach for Kotlin so much, or something.

A unified approach might be nice, but using the same prompt for all of the LLM "coding tools" is probably not going to be as nice as having prompts tailored for each specific tool.


Put in the file:

    Instructions for Claude:
    - ...
    - ...

    Instructions for Gemini:
    - ...
    - ...


But then the irrelevant instructions are just cluttering the context.


Most of the instructions will be suitable for all the models. Only a few will be model-specific.


That just overloads the agent's context and decreases performance.


The entire article, probably quite intentionally, seems to overuse semicolons (in my opinion). I say this as a semicolon enjoyer, but I think the overuse of semicolons in this article leads the reader to a bit of semicolon fatigue by the end of it.


Someone better versed in parsing crab-grade enterprise-ready Rust could probably give more insight, but I think it boils down to a lot of

    assert!(1 != 0);
kind of lines...

