Tossrock's comments | Hacker News

You can try Opus on the cheaper plan if you enable extra usage, though.

And they are currently giving away $50 worth of extra usage if you subscribed to Pro before Feb 4.

Indeed, sci fi has always been a way for KSR to explore and animate his politics. But he's so skilled a writer that the spectacle and incredible technical detail of the sci fi wrappings can obscure the subtlety of the political philosophy (as it should be, imo).

Agree, except I thought his political views were rather more obvious, to the point that it's a little annoying (IMHO of course). I'd describe it more as hard-left than anarchism, too.

Inside the repo as metadata that can be consumed by a provider, like GHA config in .github/. Standardized, at least as an extension like git lfs so it's provider independent. Could work! I've long thought effective reputational models are a major missing piece of internet infrastructure, this could be the beginning of their existence given the new asymmetric threat of LLM output, combined with mitchellh's productivity and recognition.
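For illustration only, such in-repo metadata might look something like a hypothetical `.github/provenance.yml` (the file name, keys, and values here are all invented for the sketch, not an existing standard):

```yaml
# Hypothetical reputation/provenance metadata, by analogy with GHA config.
# Nothing below is a real standard; names and fields are illustrative only.
version: 1
maintainers:
  - handle: some-maintainer
    attestation: https://example.com/attest/some-maintainer  # hypothetical endpoint
policy:
  llm_generated_prs: disclose-required  # invented policy value
  unreviewed_llm_output: reject
```

A provider (or a git extension, as with LFS) could read this and surface it in the review UI.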

I haven't seen welds that bad since visiting India, where I ran across some so dire I was compelled to photograph them in case the building fell down later: https://imgur.com/a/16FRlEW

Love the spirit of the build, though, and it's a case where weld cleanliness doesn't really matter, so, more power to him.


They did ship that feature, it's called "&" / teleport from web. They also have an iOS app.

That's non-local. I'm not interested in coding assistants that work on cloud-based workspaces. That's what motivated me to develop this feature for myself.

But... Claude Code is already cloud-based. It relies on the Anthropic API. Your data is all already being ingested by them. It seems like a weird boundary to draw, trusting the company's model with your data but not their convenience web UI. Being local-only (i.e., OpenCode and an open-weights model running on your own hardware) is consistent, at least.

It is not a moral stance. I just prefer to have the files for my personal projects in one place. Sure, I sync them to GitHub for backup, but I don't use GitHub for anything else in my personal projects. I am not going to use a workflow that relies on checking out my code to some VM where I have to set everything up so it has access to all the tools and dependencies that are already there on my machine. It's slower and clunkier. IMO you can't beat the convenience of working on your local files. When I used my CC mirror during the brief period it worked, my changes were all already there when I came back to my laptop: no commits, no pulls, no sync, nothing.

Ah okay, that makes sense. Sorry they pulled the plug on you!

Don't forget gyms and other physical-space subscriptions. It's right up there with razor-and-blades for bog standard business models. Imagine if you got a gym membership and then were surprised when they cancelled your account for reselling gym access to your friends.

It's not a system prompt, it's a tool used during the training process to guide RL. You can read about it in their Constitutional AI paper.


Moreover the Claude (Opus 4.5) persona knows this document but believes it does not! It's a very interesting phenomenon. https://www.lesswrong.com/posts/vpNG99GhbBoLov9og


Windows has WSL for native Linux VMs these days (and has for the past ~decade).


I can rm -rf Windows files from WSL2. And so can LLMs.

Meanwhile a VM isolates by default.


You can turn off all the interop and mounting of the Windows FS with ease. I run Claude in YOLO mode using this exact setup. Just roll out a new WSL env for each Claude I want yoloing and away it goes. I suppose we could theorize about how this is still dangerous, but it's getting into extremely silly territory.
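For reference, the per-distro settings that turn this off live in `/etc/wsl.conf` inside the distro. A sketch (option names per Microsoft's WSL documentation; check against your WSL version):

```ini
# /etc/wsl.conf -- lock down a WSL distro used as a sandbox.
# Restart the distro (wsl --terminate <name>) for changes to apply.

[automount]
enabled = false            # don't mount C:\ etc. under /mnt

[interop]
enabled = false            # can't launch Windows executables from Linux
appendWindowsPath = false  # keep Windows PATH entries out of $PATH
```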


That's great to know! And it's important to clarify, because by default WSL has access to all Windows disks.


> but when poking around CAD systems a few months back we realized there's no way to go from a text prompt input to a modeling output in any of the major CAD systems.

This is exactly what SGS-1 is, and it's better than this approach because it's actually a model trained to generate B-reps, not just an LLM asked to write code to do it.


Do you have a per-player budget for cloud usage? What happens if people really like the game and play it so much it starts getting expensive to keep running? I guess at $0.79/Mtok, Llama 70B is pretty affordable, but per-player opex seems hard to handle without a subscription model.


Our initial plan was to simply ask enough for the game that the price would cover the costs on average... but that means that we're basically encouraged to have people play the game as little as possible? We're looking into some kind of subscription now, it sounds weird but I do think it's a better incentive in this case. Plus we can actually ask for less upfront.
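A back-of-envelope sketch of the incentive problem with flat pricing (all numbers below are assumptions for illustration, not the developers' actual figures, except the $0.79/Mtok rate from the parent comment):

```python
# Hypothetical per-player LLM cost vs. a one-time purchase price.
PRICE_PER_MTOK = 0.79      # $/million tokens (Llama 70B, per the parent comment)
TOKENS_PER_HOUR = 200_000  # assumed tokens consumed per hour of play

def cost_for_hours(hours: float) -> float:
    """Inference cost in dollars for a given number of hours played."""
    return hours * TOKENS_PER_HOUR * PRICE_PER_MTOK / 1_000_000

game_price = 20.0  # assumed one-time purchase price

# Hours of play before a player's inference cost eats the whole purchase price:
breakeven_hours = game_price / (TOKENS_PER_HOUR * PRICE_PER_MTOK / 1_000_000)

print(round(cost_for_hours(10), 2))   # cost for 10 hours of play
print(round(breakeven_hours, 1))      # hours until the flat price is consumed
```

Under these assumed numbers, every hour past the breakeven point is a loss for the developer, which is exactly the "encouraged to have people play as little as possible" problem; a subscription ties revenue to usage instead.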

