I'm not a physicist, but after getting into the rotten fruit this fall, I would bet my friend's horse could launch a space shuttle from her arse. Such a sweet mare, but she has no hesitation blasting Venusian atmosphere right into your face while you're scraping the shit out of her feet. At least she has the decency to make eye contact while doing it.
I wonder whether generated skills could be useful for codifying the outcome of long sessions where the agent has tried a bunch of things and finally settled on a solution based on a mixture of test failures and user feedback.
Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on Anthropic's servers.
Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.
The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.
That's fine, but it's a very different claim to the one you made at first.
In particular, I don't know which parts of my data might get sent to Claude, so even if I hope it's only a small fraction, anything could in principle be transmitted.
I do have a local model path (Qwen3-4B) for testing.
The tradeoff is simply model quality vs locality, which is why Linggen focuses on controlling retrieval rather than claiming zero data ever leaves the device. Using a local LLM is straightforward if that’s the requirement.
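Concretely, the shape of the scoping I mean is something like the sketch below, written with the Python MCP SDK's FastMCP helper. The corpus, scoring, and tool name are illustrative stand-ins, not Linggen's actual code:

```python
# Sketch of "scoped retrieval": indexing and search happen locally, and only
# the few snippets the server selects are returned as the tool result (and so
# are the only project data that ever reaches the hosted model).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-retrieval")

# Stand-in for a local index; in practice this would be built from project files.
CORPUS = {
    "auth.md": "Tokens are rotated every 24 hours by the auth service.",
    "deploy.md": "Deploys go through CI; never push images by hand.",
}

@mcp.tool()
def search_project(query: str, k: int = 2) -> list[str]:
    """Return at most k locally retrieved snippets; nothing else leaves the machine."""
    terms = query.lower().split()
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: sum(t in kv[1].lower() for t in terms),
        reverse=True,
    )
    return [text for _, text in scored[:k]]  # only these slices reach the LLM

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Whatever that tool returns is the extent of what gets sent upstream; the rest of the index never leaves the machine.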
There is a large region of the upper atmosphere called the thermosphere where there is still a little bit of air. The pressure is extremely low, but the few molecules that are there are bombarded by intense radiation and thus reach pretty high temperatures, even 2000 °C!
But since there are so few such molecules in any cubic meter, there isn't much energy in them. So if you put an object in such a rarefied atmosphere, it wouldn't get heated up much, despite the gas formally having such a high temperature.
The gas would be cooled down on contact with the body, and the body would be heated up by only a negligible amount.
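A rough back-of-envelope comparison makes the gap obvious; the number densities below are order-of-magnitude assumptions (sea level vs. roughly 300 km altitude), not measured values:

```python
# Translational thermal energy stored per cubic meter of gas: (3/2) * n * k_B * T.
k_B = 1.38e-23  # Boltzmann constant, J/K

def thermal_energy_density(n_per_m3: float, temp_K: float) -> float:
    """Energy per cubic meter carried by the gas molecules' translational motion."""
    return 1.5 * n_per_m3 * k_B * temp_K

sea_level = thermal_energy_density(2.5e25, 300)       # ~1.6e5 J/m^3
thermosphere = thermal_energy_density(1e15, 2000)     # ~4e-5 J/m^3

print(f"sea level:    {sea_level:.1e} J/m^3")
print(f"thermosphere: {thermosphere:.1e} J/m^3")
# Roughly ten orders of magnitude less energy is available per cubic meter,
# even though the thermospheric gas is nominally much "hotter" than sea-level air.
```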
These satellites will certainly be above the thermosphere. The temperature of the sparse molecules in space is not relevant for cooling because there are too few of them. We're talking about radiative cooling here.
Can I organize skills hierarchically? When many skills are defined, Claude Code loads all their definitions into the prompt, potentially diluting its ability to identify the relevant one. I'd like a system where only broad skill-group summaries load initially, with detailed descriptions loaded on demand when Claude detects that a matching skill group might be useful.
There's a mechanism for that built into skills already: a skill folder can also include additional reference markdown files, and the skill can tell the coding agent to read those extra files selectively, only when that information is needed on top of the base skill; see the sketch below.
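For example, a skill folder might be laid out like this (the skill and file names are just illustrative):

```
.claude/skills/release-notes/
├── SKILL.md            # short description + instructions; points at the files below
├── changelog-format.md # only read when those details are actually needed
└── edge-cases.md
```

That way only the small SKILL.md has to load up front, and the reference files get pulled in on demand, which is roughly the two-level behavior you're describing.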
Or it just shows that it tries to overcorrect the prompt, which is generally a good idea in most cases, where the prompter isn't intentionally asking for something weird.
This happens all the time with humans. Imagine you work at a call center and get all sorts of weird descriptions of problems with a product: you're expected not to assume the caller is an expert, and you actually try to infer what they might mean by the odd wording they use.