Hacker News | exe34's comments

I use anki daily and I like that it doesn't nag me.

I think the idea is that the trillionaires won't need us at all when the food supply is fully automated. They might keep a small population for genetic diversity, but that's about it.

It's a government of capitalists by capitalists for capitalists.

I think it's possible, but the current trend is that by the time you can run a given level of model at home, the frontier models are 10-100x beyond it. So if you can run today's Claude.ai at home, software engineering as a career is already over.

My poorly informed hope is that we can have a mixture of experts, with highly tuned models for specific areas of focus. If I'm coding in language Foo, I only care about a model that understands Foo and its ecosystem. I imagine that should be self-hostable by now.
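The mixture-of-experts idea above boils down to a gating network that routes each input to the most relevant specialist. A toy sketch of top-1 routing, purely illustrative (the expert names and keyword scoring are made up; real MoE gates are learned layers, not keyword overlap):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# hypothetical "experts", scored by keyword overlap with the prompt
EXPERTS = {
    "python": {"def", "import", "list"},
    "java": {"class", "public", "static"},
    "prose": {"the", "a", "is"},
}

def route(prompt):
    tokens = set(prompt.lower().split())
    scores = [len(tokens & kw) for kw in EXPERTS.values()]
    weights = softmax(scores)  # gate: a probability over experts
    best = max(range(len(weights)), key=lambda i: weights[i])
    return list(EXPERTS)[best]

print(route("public static void main"))  # → java
```

The self-hosting appeal is that only the chosen expert's weights need to be active per token, so the memory footprint at inference time can be much smaller than the total parameter count.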

A model that only understands, say, Java is useless: you need a model that understands English, can do some kind of reasoning, has some idea of how the human world works, and also knows Java. The vast majority of the computational effort is spent on the first three; the Java itself is almost an afterthought. So a model that can only program in Java is not going to be meaningfully smaller than a model that can program in ~all programming languages.

my suspicion is that this is not how intelligence works. creativity comes from cross-breeding ideas from many domains.

Sure, but in the context I was considering, creativity itself wasn't a concern.

For coding, creativity is not necessarily a good thing. There are well-established patterns, algorithms, and applications that could reasonably be construed as "good enough" to assist with the coding itself. Adding a human-language model on top of that, to understand the user's intent, could be considered an overlay on the coding model.

I confess that this is willful projection of my hope to be able to self-host agents on affordable hardware. A frontier model on powerful hardware would always be preferable but sometimes "good enough" is just that.


I want to self-host too, but I've spent the last few weeks playing with Claude code on my hobby projects - it solves abstract problems with code, and gives actionable reviews, whereas qwen code with qwen3-coder-480 seems to just write simple code and gives generic feedback.

You can run quite powerful models at home on a maxed out Mac Studio. The difference between those and SoTA is more like 2x.

just had a quick look, but at this price, I could have Claude pro for 62 years...

hah I ran out of tokens a bit before it hit I reckon.

same here, and I just got started. Hm...

As long as your bullet points+prompt are shorter than the output, couldn't you post that instead? The only time I think an LLM might be ethically acceptable for something a human has to read is if you ask it to make it shorter.

I write the full article in my Czenglish (English influenced by Czech sentence structure). Then I let it rewrite it in proper English.

So it's me doing the writing and GPT making it sound more English.


Hey, would you be willing to share your claude.md? I'm only starting out with AI coders, and while it often makes good choices for straightforward things, I find the token usage grows as it works down a list of requirements. My working hypothesis is that it has to re-read everything as the project gets more complicated, and doesn't have a concept of "this is where I go to kick it for this kind of thing".

I've been running claude code on a 13 year old potato and it's never used 136GB of RAM - possibly because I only have 8GB.

Its VRAM or something makes the OS completely busy even though I have only 32 GB of RAM. Task Manager shows 100+ GB, forcing me to terminate it.

is that vram on your GPU? I don't think claude code uses that.

Not on the GPU, I think it's just paged memory. You are right that claude-code isn't running the model locally. I've had to kill it 5 times so far today.

edit: https://ibb.co/Fbn8Q3pb

that's the 6th
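One way to test the paged-memory theory above: compare the process's resident set (real RAM) against its virtual size from the terminal. A rough sketch, with the assumption that the process matches the name "claude" (adjust as needed):

```shell
# RSS (resident, i.e. actual RAM in use) vs VSZ (virtual address space).
# A huge VSZ alongside a modest RSS usually means mapped-but-unused pages,
# not real memory pressure.
pid=$(pgrep -n claude || echo $$)   # fall back to the current shell if not running
ps -o pid,rss,vsz,comm -p "$pid"
```

If RSS stays small while VSZ balloons, the "100+ GB" in Task Manager is likely virtual allocation rather than memory the OS is actually fighting over.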


Why do you think it's Claude and not iTerm?

been using iterm for 10 years, and didn't update it recently; claude code is the only new factor in my setup. I can visibly predict, while using claude code, when it's about to happen: the conversation goes above 200 messages and then it uses sub-agents, leading to somehow infinite re-rendering of the message timeline, and they seemingly use an HTML-to-bash rendering thing because ... so yeah, maybe you are right that iterm can't handle that re-rendering, or maybe the monitor is broken.

I use xterm, and the visual glitch doesn't crash anything, so maybe try that? I suspect though maybe you're using much longer sessions than I do, with the talk of sub agents and all.

I've mostly just been using it for single features and then often just quitting it until I have the next dumb idea to try out.


Thank you for the giggle, I misread this as a statement of faith and a non-sequitur.

I had an fMRI and also believe in dead salmon now, it's a common side effect but it's worth it for the diagnostic data they get.

Yeah, really needed the comma on the left side of the parenthesis.

could you highlight what in the original article made you think they were banning their kids from social media entirely? or were you trying to explain something else?

The GGP, not the original article, said "they prevent [emp. mine] their kids from using them".

