Hacker News | Fannon's comments

I'd be interested in how you set up those repos for non-coding tasks, thanks for sharing!

For non‑coding work we still treat a repo as the source of truth, but it’s mostly Markdown, checklists and assets rather than code.

Think of it as a structured project brain that AI can read, update and score.

For example, for social media management, the outline structure our repo is converging on looks like this (still WIP):

  social-media/
    README.md                     # How this repo works, scoring rules
    config/
      platforms.yaml              # Accounts
      content_guidelines.md       # Brand voice, do/don't list
    planning/
      2025-12-calendar.md         # Calendar
      2025-12-campaign-x402.md    # Campaign brief, goals, KPIs
    drafts/
      2025-12-05-x-new-feature.md
      2025-12-16-x-new-feature.md
    assets/
      images/
      video/
      copy-snippets.md
    published/
      2025-12-05-x-new-feature.md # Final copy + URLs + timestamp
      2025-12-16-x-new-feature.md
    reports/
      2025-01-05-metrics.md       # CTR, saves, comments, etc.

Daily tasks are then deterministic checklists inside the repo, e.g. “Create 3 drafts for next Tuesday with images in /assets/images and entries added to 2025-12-calendar.md under campaign X”.


This is nice, but the fact that it goes into a vendor-specific .codex/ folder is a bit of a drag.

I hope such things will be standardized across vendors. Now that they founded the Agentic AI Foundation (AAIF) and also contributed AGENTS.md, I would hope that skills become a logical extension of that.

https://www.linuxfoundation.org/press/linux-foundation-annou...

https://aaif.io/


This is interesting. Also "Discover tools on-demand". Are there any stats or estimates for how many tools an LLM / agent could handle with this approach vs. loading them all into context as MCP tools?


From what I have read, it's in the range of 60-80.

(shameless plug: I'm building a cloud-based gateway where the set of servers exposed to an MCP client can be controlled using "profiles": https://docs.gatana.ai/profiles/)
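The on-demand idea itself can be sketched in a few lines (all names here are invented for illustration, not from any particular gateway): instead of loading every tool schema into context, the client gets a single discovery tool and only pulls in full definitions for matches.

```python
# Hypothetical in-memory catalog; a real gateway would back this with an index.
CATALOG = {
    "jira_create_issue": "Create a Jira issue in a given project.",
    "jira_search": "Search Jira issues with JQL.",
    "slack_post_message": "Post a message to a Slack channel.",
}

def discover_tools(query: str, limit: int = 5) -> list[str]:
    """Return tool names whose name or description mentions the query (naive match)."""
    q = query.lower()
    hits = [name for name, desc in CATALOG.items() if q in name or q in desc.lower()]
    return hits[:limit]
```

The context cost then scales with the number of tools actually used per task, not with the catalog size.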


Let me take the other position in this comment: I do see how the way MCP works really helped its quick adoption. Because you could just build a local MCP server as a proxy around existing APIs and functionality, there was no need to touch anything existing. And MCP often starts as an "MCP Server", which is basically a software artifact that you just configure and run, often locally. I don't think that just doing REST or extending existing REST APIs would have delivered this part of the MCP success story.
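To illustrate the "proxy around an existing API" point (a hand-rolled sketch, not the MCP SDK; the endpoint and tool names are invented): wrapping an existing REST endpoint means little more than pairing a JSON-schema tool description with a handler that forwards the call.

```python
import json
from urllib.request import urlopen

def make_tool(name: str, description: str, url_template: str, params: dict):
    """Wrap an existing REST endpoint as a tool: a JSON-schema style
    description plus a handler that just forwards the call untouched."""
    def handler(**kwargs):
        with urlopen(url_template.format(**kwargs)) as resp:
            return json.load(resp)
    schema = {
        "name": name,
        "description": description,
        "inputSchema": {"type": "object", "properties": params, "required": list(params)},
    }
    return schema, handler

# Example: expose a (hypothetical) internal weather API without touching it.
weather_schema, weather = make_tool(
    "get_weather",
    "Current weather for a city.",
    "https://internal.example/api/weather?city={city}",
    {"city": {"type": "string"}},
)
```

The existing API stays untouched; all the AI-facing metadata lives in the thin wrapper.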

But now that many companies focus on MCP as a remote API, the question obviously comes up why not just use standard API protocols for that and just optimize the metadata for AI consumption.


What bothers me about MCP is that there is not even a standard way to describe an entire MCP server in a single JSON file, like OpenAPI does for REST. This makes exchanging metadata and building catalogs unnecessarily hard to standardize.

The article also mentioned that OpenAPI is too verbose: I totally see that, but you could optimize this by stripping an OpenAPI file down to the basics that you need for LLM use, maybe even using the Overlay spec. Or you convert your OpenAPI files to the https://www.utcp.io format that pylotlight mentioned.
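Such stripping could be as simple as keeping only the operation-level basics (a rough sketch; what counts as "basics" for LLM use is obviously a judgment call):

```python
def strip_openapi(spec: dict) -> dict:
    """Reduce an OpenAPI document to method, path, operationId, summary
    and parameter names -- roughly what an LLM needs to pick an operation."""
    slim = {}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            slim[f"{method.upper()} {path}"] = {
                "id": op.get("operationId"),
                "summary": op.get("summary"),
                "params": [p["name"] for p in op.get("parameters", [])],
            }
    return slim
```

Full schemas could then be fetched on demand once the model has picked an operation.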

Some "curation" of what's really relevant for AI consumption may be helpful anyway, as too many tools will also lead to problems in picking the right ones.


Nice and simple, very responsive!

I did something similar a long time back, but not really focused on something that tiles up: https://svg-generator.netlify.app/


It would have replied the same if you had claimed that existing books don't exist :)


Thanks for posting the reddit comment, it nicely explains the line of thinking and the current adoption of MCP seems to confirm this.

Still, I think it should only be an option, not a necessity to create an MCP API around existing APIs. Sure, you can do REST APIs really badly and OpenAPI has a lot of issues in describing the API (for example, you can't even express the concept of references / relations within and across APIs!).

REST APIs also don't have to be generic CRUD; you could also follow the DDD idea of having actions and services that are their own operations, potentially grouping calls together and carrying clear "business semantics" that can be better understood by machines (and humans!).
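As a sketch of that "business semantics" point (the domain and names are invented): one action operation instead of a sequence of generic CRUD calls the client would have to orchestrate itself.

```python
# Instead of: GET /orders/{id}, PATCH /orders/{id}, POST /refunds, ...
# one action with a clear business meaning: POST /orders/{id}/cancel

def cancel_order(order: dict) -> dict:
    """One operation that groups the CRUD steps and encodes the domain rule:
    shipped orders cannot be cancelled; cancelled paid orders get a refund."""
    if order["status"] == "shipped":
        raise ValueError("cannot cancel a shipped order")
    order["status"] = "cancelled"
    order["refund_issued"] = order["total"] > 0
    return order
```

An LLM (or a human) only has to understand "cancel an order", not the four low-level calls behind it.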

My feeling is that MCP also tries to fix a few things we should consider fixing with APIs in general, so that at least good APIs can be used by LLMs without any indirection.


What bothers me is that some programmers think that writing code more densely automatically makes it better. But I would argue it's not the characters or lines of code that create the complexity, but how many logical concepts you use to solve the problem.

Using a smaller set of different concepts also helps reduce cognitive load, even if it leads to more verbosity ("less clever code").


Everything has tradeoffs, but there's value in reducing both line and character counts as well.

For example, nobody ever uses anything other than ijk for loop indices unless the index is particularly meaningful or they've written a deeply nested abomination. Why? It's not just laziness in typing; it gives more relative room for characters that matter. Longer names and patterns are acceptable if you can't make your point clearly enough with few characters, but length isn't the goal; communication is.

It's important to limit lines of code too (and their widths) because if an idea doesn't fit comfortably on your screen then you won't be able to leverage the pattern-recognition parts of your brain to figure out what's going on.

From a different perspective, you know that feeling you get when somebody dumps a 1000-line PR on you (or an excessively long HN comment...)? It's hard to digest because you can't grok the whole thing at once and have to switch to carefully analyzing each component just to even have the context to then give the thing a proper review. If that same PR could be wired together with a few high-level concepts (less code, but more involved baseline knowledge required to understand it), it would be instantly understandable to somebody with the same background.


This is close to what I would have written. It is almost never actually about the line count. Not even SLOC. For example, different languages lend themselves to breaking lines to different degrees. In Scheme I almost always write

  (define name
    (lambda (arguments)
      ...))

Did I now waste lines? Of course not. It is still the same number of concepts, and basically the same tokens, as when I write "def name(arguments):" in Python, but Python doesn't lend itself that well to line breaking because of its (annoying) whitespace sensitivity. Neither version makes the code any more or less readable, nor does it increase the chance of bugs. The same goes for many other constructs in both languages. Take a "(cond ...)", for example: I will break some lines there because the language makes it easy, with everything delimited by parens, while in Python I have to type additional visual clutter to do the same.


Here to recommend this article, really helped me to understand inheritance better. Liskov Substitution is just one aspect / type of it and may conflict with others.

https://www.sicpers.info/2018/03/why-inheritance-never-made-...


Very good; it gives the "correct" overview of the different usages (i.e. policy) of inheritance (i.e. mechanism).

Quote: Inheritance was never a problem: trying to use the same tree for three different concepts was the problem.
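A compact Python illustration of that quote (the classic square/rectangle case, invented here, not taken from the article): categorisation says a square *is a* rectangle, so one tree looks natural, but as a behavioural subtype (Liskov) the same tree breaks.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

# Categorisation: every square is a rectangle, so inheritance looks obvious.
class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

# Subtyping: code written against Rectangle may resize width independently
# of height -- which silently destroys the Square's w == h invariant.
def stretch(r: Rectangle) -> float:
    r.w *= 2
    return r.area()
```

Here stretch() is fine for any Rectangle but leaves a Square in an inconsistent state: same tree, two incompatible concepts.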

