I think the idea is that the trillionaires won't need us at all when the food supply is fully automated. They might keep a small population for genetic diversity, but that's about it.
I think it's possible, but the current trend is that by the time you can run a given level of model at home, the frontier models are 10-100x beyond it. So if you can run today's Claude.ai at home, software engineering as a career is already over.
My poorly informed hope is that we can have a mixture of experts with highly tuned models for specific areas of focus. If I'm coding in language Foo, I only care about a model that understands Foo and its ecosystem. I imagine that should be self-hostable now, roughly along the lines of the sketch below.
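(A toy sketch of the routing I have in mind. It assumes an OpenAI-compatible local server, which llama.cpp, Ollama, and vLLM all expose; the endpoint, model names, and keyword matching are all made up.)

```python
# Toy router: send each prompt to a small specialist model when one
# matches, otherwise fall back to a generalist. Assumes an
# OpenAI-compatible local server; model names below are hypothetical.
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"

EXPERTS = {                  # keyword in prompt -> hypothetical specialist
    "rust": "foo-coder-rust-7b",
    "python": "foo-coder-python-7b",
}
GENERALIST = "foo-chat-7b"   # fallback when no specialist matches

def route(prompt: str) -> str:
    """Naive routing: pick a specialist if its keyword appears."""
    lowered = prompt.lower()
    for keyword, model in EXPERTS.items():
        if keyword in lowered:
            return model
    return GENERALIST

def ask(prompt: str) -> str:
    resp = requests.post(BASE_URL, json={
        "model": route(prompt),
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Write a Rust function that reverses a linked list"))
```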
A model that only understands, say, Java is useless: you need a model that understands English, can do some kind of reasoning, and has some idea of how the human world works, and that also knows Java. The vast majority of the computational effort is spent on those first capabilities; the Java itself is almost an afterthought. So a model that can only program in Java is not going to be meaningfully smaller than a model that can program in ~all programming languages.
Sure, but in the context I was considering, creativity itself wasn't a concern.
For coding, creativity is not necessarily a good thing. There are well-established patterns and algorithms, and a model that knows them could reasonably be construed as "good enough" to assist with the coding itself. Adding a human-language model on top of that to understand the user's intent could be considered an overlay on the coding model.
I confess that this is willful projection of my hope to be able to self-host agents on affordable hardware. A frontier model on powerful hardware would always be preferable, but sometimes "good enough" is just that.
I want to self-host too, but I've spent the last few weeks playing with Claude Code on my hobby projects. It solves abstract problems with code and gives actionable reviews, whereas Qwen Code with qwen3-coder-480 seems to just write simple code and give generic feedback.
As long as your bullet points + prompt are shorter than the output, couldn't you post those instead? The only time I think an LLM might be ethically acceptable for something a human has to read is when you ask it to make the text shorter.
Hey, would you be willing to share your claude.md? I'm only starting out with AI coders. While it often makes good choices for straightforward things, I find the token usage gets bigger and bigger as it proceeds down a list of requirements. My working hypothesis is that it has to re-read everything as the project gets more complicated and doesn't have a concept of "this is where I go to kick it for this kind of thing" - something like the sketch below is what I imagine would help.
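(To be concrete, this is the kind of project map I'm picturing in a CLAUDE.md; every path, file, and command here is hypothetical.)

```markdown
# CLAUDE.md (hypothetical sketch)

## Project map
- src/api/   HTTP handlers; auth lives in src/api/auth.py
- src/core/  business logic; start feature work here
- tests/     pytest suite; run `pytest -q` before finishing a task

## Working conventions
- Work through TODO.md one requirement at a time.
- Only open the files listed next to the current requirement;
  don't re-read the whole tree for every change.
```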
I've been using iTerm for 10 years and didn't update recently; Claude Code is the only new factor in my setup. While using Claude Code I can visibly predict when it's about to happen: the conversation goes above 200 messages and then it uses sub-agents, which somehow leads to infinite re-rendering of the message timeline (they seemingly use an HTML-to-bash rendering thing because ...). So yeah, maybe you're right that iTerm can't handle that re-rendering, or maybe the monitor is broken.
I use xterm, and the visual glitch doesn't crash anything, so maybe try that? I suspect, though, that you're using much longer sessions than I do, given the talk of sub-agents and all.
I've mostly just been using it for single features and then often just quitting it until I have the next dumb idea to try out.
Could you highlight what in the original article made you think they were banning their kids from social media entirely? Or were you trying to explain something else?