jrecyclebin's comments

The weakest part is the last one - and it's a big one. Personalsit.es is just a flat single-page directory (of thumbnails, even, not content - so the emphasis is on design). To be part of the conversation, you'd list there and hope someone comes along. Compare with Reddit, where you start commenting and you're close to an equal with every other comment.

Webmentions do get you there - because it's a commenting system. But for finding the center of a community, it seems like you're still reliant on Bluesky or Mastodon or something. (Which doesn't "destroy all websites.") Love the sentiment ofc.


Great advertising for vidalias. I simply have to try one now.


They're really good. The apple thing is no joke. Vidalia and Walla-Walla onions are top tier alliums.


Author here... our Vidalia season usually starts in late April, FYI. If you visit our website, submit your email there and I'll drop you a note when our order lines are open.


Good luck finding them anywhere right now


Skill descriptions get dumped into your system prompt - just like MCP tool definitions and agent descriptions before them. The more you have, the harder it is for the LLM to focus on any one piece. You don't want a bunch of irrelevant junk in there every time you prompt it.

Skills are nice because they offload all the detailed prompts to files that the LLM can ask for. It's getting even better with Anthropic's recent switchboard operator (tool search tool) that doesn't clutter the system prompt but tries to cut the tool list down to those the LLM will need.
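To make the "offload to files" idea concrete, here's a minimal sketch of progressive disclosure (my own illustrative code, not Anthropic's implementation; the folder layout and function names are invented): only a one-line description per skill goes into the system prompt, and the full instructions are read from disk when the model asks for a specific skill.

```python
from pathlib import Path


def build_skill_index(skills_dir):
    """Collect just the first line of each SKILL.md for the system prompt."""
    index = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        first_line = skill_md.read_text().splitlines()[0]
        index[skill_md.parent.name] = first_line
    return index


def load_skill(skills_dir, name):
    """Read the full skill instructions only when the model requests them."""
    return (Path(skills_dir) / name / "SKILL.md").read_text()
```

The point is that the token cost of the index grows with one line per skill, not one full document per skill; the detailed prompts stay out of context until they're actually needed.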


Can I organize skills hierarchically? If Claude Code loads every definition into the prompt when many skills are defined, potentially diluting its ability to identify the relevant one, I'd like a system where only broad skill-group summaries load initially, with detailed descriptions loaded on demand when Claude detects that a matching skill group might be useful.


There's a mechanism for that built into skills already: a skill folder can also include additional reference markdown files, and the skill can tell the coding agent to selectively read those extra files only when that information is needed on top of the skill.

There's an instruction about that in the Codex CLI skills prompt: https://simonwillison.net/2025/Dec/13/openai-codex-cli/

  If SKILL.md points to extra folders such as references/, load only the specific files needed for the request; don't bulk-load everything.
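For concreteness, a skill folder following that pattern might look something like this (hypothetical names, not taken from the post):

```
my-skill/
├── SKILL.md          # short description + pointers to references/
└── references/
    ├── api.md        # read only for API-related requests
    └── testing.md    # read only when writing tests
```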


Yes, but those are not quite new skills, right?

Can those markdown files in the references also, in turn, tell the model to lazily load more references only if it deems them useful?


Yes, using regular English prompting:

  If you need to write tests that mock
  an HTTP endpoint, also go ahead and
  read the pytest-mock-httpx.md file


> Anthropic's recent switchboard operator

I don’t know what this is and Google isn’t finding anything. Can you clarify?



While I totally agree with you, I also can see a world where we just throw a ton of calls in the MCP and then wrap it in a subagent that has a short description listing every verb it has access to.


Absolutely. Remember these are just tools; how each of us uses them is a different story. A lot can also be leveraged by adding a couple of lines to CLAUDE.md on how it should use this memory solution, or not - it's totally up to anyone. You can also have a subagent responsible for project management that is in charge of managing memory, or have a coordinator. Again, a lot of testing needs to be done :)


Something of a logical leap here: if LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first? Sounds like companies that deserve to go away.


> If LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first?

We see companies lay off workers for all sorts of short-sighted reasons. They'll do mass layoffs to reduce labor costs for short-term profits and stock-price increases, so the execs and shareholders can cash out. AI is just the current reason the executive class has decided to use for the layoffs they were going to do regardless.


Further: business and management are exceptionally fad-driven, for numerous information-theoretic reasons.

Performance is difficult to measure and slow to materialise. At the same time, everyone, especially senior leadership and managers, is desperately competitive, even where that competition is on the perception rather than reality of performance. There's a very strong follow-the-herd / follow-the-leader(s) mentality, often itself driven by core investors and creditors.

A consequence is a tremendous amount of cargo-culting, in the sense of aping the manifest symbols of successful (or at least investor-favoured) firms and organisations, even where those policies and strategies end up incurring long-term harms.

Then there's the apparent winner-take-all aspect of AI, which if true would result in tremendous economic power, if not necessarily financial gains, to a very small number of incumbents. Look at the earlier fallout of the railroad, oil, automobile, and electronics industries for similar cases.

(I've found over the years various lists of companies which were either acquired or went belly-up in earlier booms, they're instructive.)

NB: you'll find fad-prone fields anywhere a similar information-theoretic environment exists: fashion, arts, academics, government, fine food, wine collecting, off the top of my head. Oh, and for some reason: software development.


Yep, those are the companies that would go away.


LLMs are just a stock price preserving excuse to do layoffs from previous overhiring.


Yes. A lot of these people should have been laid off anyway. The Musk Twitter massacre taught everybody a lesson, and layoffs were hot before AI was even the main concern.

Also, the DEI massacre is probably going to develop (or has developed) into a full scale HR/Social PR massacre. Instead of getting yelled at for doing the wrong thing, better to do nothing but make more money. And a side-benefit is that firing all of those people makes it even easier to fire more people. (Is that the singularity?)

I don't doubt that some industries are going to be nearly wiped out by AI, but they're going to be the ones that make sense. LLMs are basically a super Google Translate, and translators and maybe even language teachers are in deep trouble. In-betweeners and special-effects people might be in even more trouble than they already were. Probably a lot more stuff that we can't even foresee yet. But for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think but it isn't thinking for you, it's a moron.


> for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think but it isn't thinking for you, it's a moron.

Beautiful description of AI. It’s the tech equivalent of the placebo effect. It does truly work for some, until you look closely and it’s actually a bunch of hot air.

Is a placebo worth a trillion dollars?


Yeah exactly. The question should always be - are these layoffs incremental because of AI? If not, then they should not count in this kind of analysis.


> The Musk Twitter massacre taught everybody a lesson

Well, depends on which lesson. "The company can still run" or "we actually won't build anything new for years".

Twitter released a couple of things that were being worked on before the acquisition, and then nothing else (Grok comes from a different company that was later merged into it, and obviously had different employees).


> The Musk Twitter massacre taught everybody a lesson

That companies can be kept on KTLO mode with only a skeleton crew?

I think everybody knew that already. The hot takes that Twitter was going to disappear were always silly, probably from people butthurt that a service they liked was being fundamentally changed.


Or maybe companies are letting people go for other reasons and blaming it on AI?


Lots of gold in this article. It's like discovering a basket of cheat codes. This will age well.

Great links, BAML is a crazy rabbithole and just found myself nodding along to frequent /compact. These tips are hard-earned and very generously given. Anyone here can take it or leave it. I have theft on my mind, personally. (ʃƪ¬‿¬)


Idk, the punishment just doesn't match the crime. Can't they just confiscate the computer? Or pressure the ISP to cancel his account? Tbh I get that the Feds are going to route around and through anything that stands in their way.

Instead we're left up to state thuggery.


Conveniently left out from the wife's story is the husband's corporate sabotage, FBI monitoring circumvention, CSAM searches and many parole violations.

3 years sounds about right to me.


Ok well - appreciate the further details.


I'm already not going back to the way things were before LLMs. This is fortunately not a technology where you have to go all-in. Having it generate tests and classes, solve painful typing errors and help me brainstorm interfaces is already life-changing.


I am in a similar place, I think, to you. Caveat: I don't spend a lot of my day-to-day coding anyway, so that helps, but I've never even tried Cursor or Windsurf. I just poke copilot to write whole functions, or ask ChatGPT for things that seem like they'd be tedious or error-prone for me. Then I spend 3-5 minutes tying those things together and test. It saves about 80% of the time, but I still end up knowing exactly how the code works because I wrote most of the function signatures and reviewed all the code.

I know that in just the right circumstances the "all-in" way could be faster, but it's already significant that I can do 5x as many coding tasks, or do one in a fifth the time. Even if it never improves at all.


One of the main things I put in my instructions is "hey I'm a solo dev and it's just you and me working on this stuff, so I'm looking for all responses to be concise." I think it helps to give your situation so that the output can be in the proximity of "solo dev" content - which is going to be more concise and practical by nature.

Kind of like telling it to generate Ghibli pics. These things are best at imitation.


That's probably a good one to keep in there; I'll try it too. Maybe it'll help with the "Here's the plan:" responses where it starts listing phases and hours/days/weeks for each phase, even though it's going to be doing most of the work ASAP.

