The Link Between a Horse's Arse and the Space Shuttle • Physics Forums https://share.google/UnmMwwQv9kyksKhkI

I'm not a physicist, but after getting into the rotten fruit this fall, I would bet my friend's horse could launch a space shuttle from her arse. Such a sweet mare, but she has no hesitation blasting Venusian atmosphere right into your face while you're scraping the shit out of her feet. At least she has the decency to make eye contact while doing it.


I wonder if generated skills could be useful to codify the outcome of long sessions where the agent has tried a bunch of things and then finally settled on a solution based on a mixture of test failures and user feedback

yeah I have a “meta” skill and often use it after a session to instruct CC to update its own skills/rules. get the flywheel going

Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on their servers.

Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.

The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.


That's fine, but it's a very different claim to the one you made at first.

In particular, I don't know which parts of my data might get sent to Claude, so even if I hope it's only a small fraction, anything could in principle be transmitted.


That’s true — Linggen can’t control the behavior of Claude or any other cloud LLM.

What it can control is the retrieval boundary: what gets selected locally and exposed to the model. If nothing is returned, nothing is sent.
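
As a rough illustration of that boundary, here is a minimal sketch (all names are hypothetical, not Linggen's actual API): the local server is the only component that decides what is returned, and therefore what leaves the machine.

  from dataclasses import dataclass, field

  @dataclass
  class Chunk:
      text: str
      metadata: dict = field(default_factory=dict)

  def handle_retrieval(query: str, local_index: list[Chunk],
                       max_chunks: int = 5) -> list[str]:
      # Search runs entirely against local data; nothing leaves
      # the machine at this step. (Naive substring matching stands
      # in for real retrieval.)
      hits = [c for c in local_index if query.lower() in c.text.lower()]

      # Policy boundary: drop anything flagged private before it
      # can appear in the response.
      allowed = [c for c in hits if not c.metadata.get("private")]

      # Only this return value is serialized back to the LLM client.
      # An empty list means nothing is transmitted at all.
      return [c.text for c in allowed[:max_chunks]]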

If a strict zero-exfiltration setup is required, then a fully local model would indeed be the right option.


I do have a local model path (Qwen3-4B) for testing.

The tradeoff is simply model quality vs locality, which is why Linggen focuses on controlling retrieval rather than claiming zero data ever leaves the device. Using a local LLM is straightforward if that’s the requirement.


Is there a possible future where inference usage increases because there will be many, many more customers, while R&D spending grows more slowly than inference?

Or is it already saturated?


There is a large region of the upper atmosphere called the thermosphere where there is still a little bit of air. The pressure is extremely low, but the few molecules that are there are bombarded by intense radiation and thus reach pretty high temperatures, even 2000 °C!

But since there are so few such molecules in any cubic meter, there isn't much energy in them. So if you put an object in such a rarefied atmosphere, it wouldn't get heated up much, despite the gas formally having such a high temperature.

The gas would be cooled down upon contact with the body, and the body would be heated up by a negligible amount.
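
A back-of-the-envelope comparison makes the point (the thermospheric pressure of ~1e-5 Pa is an order-of-magnitude guess; it varies a lot with altitude and solar activity). For an ideal gas, the translational energy per cubic meter is u = (3/2)·n·k·T:

  k_B = 1.380649e-23  # Boltzmann constant, J/K

  def thermal_energy_density(pressure_pa, temperature_k):
      """Translational energy per m^3 of an ideal gas: u = (3/2) n k T."""
      n = pressure_pa / (k_B * temperature_k)  # molecules per m^3
      return 1.5 * n * k_B * temperature_k     # J/m^3 (equals 1.5 * p)

  sea_level    = thermal_energy_density(101_325, 288)  # ~1.5e5  J/m^3
  thermosphere = thermal_energy_density(1e-5, 2273)    # ~1.5e-5 J/m^3

  print(f"{sea_level:.1e} vs {thermosphere:.1e} J/m^3")
  # Despite the ~2000 C gas temperature, each cubic meter holds about
  # ten billion times less thermal energy than at sea level.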


These satellites will certainly be above the thermosphere. The temperature of the sparse molecules in space is not relevant for cooling because there are too few of them. We're talking about radiative cooling here.
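
For a sense of scale, radiative cooling follows the Stefan-Boltzmann law, P = ε·σ·A·(T⁴ − T_bg⁴). A minimal sketch with illustrative numbers (the emissivity and area are placeholders, not figures for any real satellite):

  SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

  def radiated_power(area_m2, temp_k, emissivity=0.9, background_k=2.7):
      """Net power radiated to the cold sky, in watts."""
      return emissivity * SIGMA * area_m2 * (temp_k**4 - background_k**4)

  # A 1 m^2 panel at 300 K sheds roughly 400 W by radiation alone:
  print(f"{radiated_power(1.0, 300):.0f} W")  # ~413 W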

indeed. talking about temperature is incomplete without other aspects such as pressure

Pressure matters

Can I organize skills hierarchically? When many skills are defined, Claude Code loads all their definitions into the prompt, potentially diluting its ability to identify the relevant skill. I'd like a system where only broad skill-group summaries load initially, with detailed descriptions loaded on demand when Claude detects that a matching skill group might be useful.

There's a mechanism for that built into skills already: a skill folder can also include additional reference markdown files, and the skill can tell the coding agent to selectively read those extra files only when that information is needed on top of the skill.

There's an instruction about that in the Codex CLI skills prompt: https://simonwillison.net/2025/Dec/13/openai-codex-cli/

  If SKILL.md points to extra folders such as references/, load only the specific files needed for the request; don't bulk-load everything.
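
As a hypothetical illustration (folder and file names invented for the example), a skill using that pattern might be laid out like this, with only SKILL.md loaded up front:

  pytest-helpers/
    SKILL.md                 <- always loaded: name, description, core instructions
    references/
      pytest-mock-httpx.md   <- read on demand, when mocking HTTP in tests
      fixtures.md            <- read on demand, when fixture details are needed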

yes, but those are not quite new skills, right?

can those markdown files in the references also in turn tell the model to lazily load more references, only if it deems them useful?


Yes, using regular English prompting:

  If you need to write tests that mock
  an HTTP endpoint, also go ahead and
  read the pytest-mock-httpx.md file

Or it just shows that it tries to overcorrect the prompt, which is generally a good idea in most cases, where the prompter is not intentionally asking a weird thing.

This happens all the time with humans. Imagine you're at a call center and get all sorts of weird descriptions of problems with a product: every human is expected to assume the caller is not an expert, and to try to infer what they might mean by the weird wording they use.


Are you saying that the internet business didn't grow a lot after the bubble popped?


In particular with general relativity.

Quantum field theory (QFT) is quantum mechanics + special relativity

