Hacker News | tristanz's comments

You can combine MCP tools with composable, LLM-generated code if you put in a little work. At Continual (https://continual.ai), we have many workflows that require bulk actions, e.g. iterating over all issues, files, or customers. We inject MCP tools into a sandboxed code interpreter and have the agent generate either direct MCP tool calls or composable scripts that leverage MCP tools, depending on task complexity. After a fair amount of work it performs quite well. We're also experimenting with continual learning via a Voyager-like approach, where the LLM can save tool scripts for future use, enabling lifelong learning on repeated workflows.
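A minimal sketch of the injection pattern, assuming a hypothetical tool registry (the `list_issues`/`close_issue` tools and the registry itself are illustrative stand-ins, not a real MCP server):

```python
# Sketch: expose MCP tools as plain Python functions inside a sandboxed
# interpreter so an agent-generated script can compose them for bulk actions.

def make_tool(registry, name):
    """Wrap an MCP tool call as a callable injected into the sandbox."""
    def tool(**kwargs):
        return registry[name](**kwargs)  # stand-in for a real MCP round trip
    return tool

# Fake registry standing in for a connected MCP server.
registry = {
    "list_issues": lambda: [{"id": 1, "stale": True}, {"id": 2, "stale": False}],
    "close_issue": lambda id: f"closed {id}",
}

# Namespace handed to the sandboxed interpreter.
sandbox_globals = {n: make_tool(registry, n) for n in registry}

# An LLM-generated composable script: iterate over all issues and act in bulk,
# rather than issuing one tool call per item through the model.
generated_script = """
results = [close_issue(id=i["id"]) for i in list_issues() if i["stale"]]
"""
exec(generated_script, sandbox_globals)
print(sandbox_globals["results"])  # -> ['closed 1']
```

The win is that a single generated script replaces N round trips through the model for N items, which is what makes bulk workflows tractable.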


That self-compounding aspect of constantly refining initial prompts with more and more knowledge is so interesting. My gut says it's something that will be "standardized" in some way, much as MCP did for tool use.


Yes, I think you could get quite far with a few tools like a memory/todo list + code interpreter + script save/load. You could probably get a lot farther, though, if you RLVRed this, similar to how o3 uses web search so effectively during its thinking process.
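A minimal sketch of the script save/load piece, assuming a Voyager-style skill library keyed by task description (the class, file layout, and task/script strings are all hypothetical):

```python
# Sketch: persist working tool scripts so a later session can reload them
# for a repeated workflow instead of regenerating code from scratch.
import json
import os
import tempfile

class SkillLibrary:
    def __init__(self, path):
        self.path = path
        self.skills = {}
        if os.path.exists(path):
            with open(path) as f:
                self.skills = json.load(f)

    def save(self, task, script):
        """Save a script that worked, keyed by the task it solved."""
        self.skills[task] = script
        with open(self.path, "w") as f:
            json.dump(self.skills, f)

    def load(self, task):
        """Return a saved script, or None if the agent must generate fresh code."""
        return self.skills.get(task)

lib_path = os.path.join(tempfile.mkdtemp(), "skills.json")
lib = SkillLibrary(lib_path)
lib.save("close stale issues", "results = [close_issue(i) for i in stale()]")

# A later session reloads the saved script for the repeated workflow.
lib2 = SkillLibrary(lib_path)
print(lib2.load("close stale issues") is not None)  # -> True
```

In a real system the lookup would likely be a semantic match over task descriptions rather than an exact key, but the save/load loop is the core of the lifelong-learning idea.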


Not considering the potential for AI consciousness and suffering seems very shortsighted. There are plausible reasons to believe that both could emerge from an RL process coupled with small architectural and data-regime changes. Today's models have inherent architectural limits around continual learning that make this unlikely, but that will change.


Coming from TRPC, ORPC is a breath of fresh air. I tested out a migration from TRPC to ORPC and I found two things particularly beneficial.

The first is contract-first development which separates the contract from the implementation. This allows you to avoid codebase dependencies between your server and client. TRPC works fine when you only use your client from your server package, but if you need to export it elsewhere, e.g. a public SDK, you can easily end up with circular dependency issues and a bunch of pain.

The second is OpenAPI support. TRPC doesn't support OpenAPI generation and trpc-openapi is unmaintained. ORPC has first-class OpenAPI support, which means you can use ORPC internally but expose a public OpenAPI API to customers and generate OpenAPI-based clients if you want to.

I'm hoping this project gets traction since it is amazingly well done. I have zero affiliation or interest in ORPC to be clear, I just loved it from my quick tests.


As an FYI, this is fine for rough usage, but it's not accurate. The OpenAI APIs inject various tokens you are unaware of into the input for things like function calling.


ChatGPT maybe, but OpenAI hasn't even tried to train a model to replace search.

Until somebody tries to fine-tune a model using RLHF explicitly with the goal of replacing Google, it's very hard to know what the resulting experience would look like. It could be shocking if ChatGPT is any guide.



To get 10x I think you need to wed it to solving broader workflow challenges, like dbt does today.


Collaborative incremental improvement of models would be extremely disruptive. While this happens via research, it's massively inefficient, particularly as pretrained models get larger and span multiple modalities.


I assume you mean "disruptive" in the sense that it would enable an uninteresting status quo to be replaced with something more dynamic, rather than actually disrupting people's work with models.


The modular idea is the most compelling one to me. The recent hierarchical-transformer papers hint that models can be made smaller, which might open the door to modular approaches: highly nuanced customization for your domains of interest, and fitting the model size to the capacity of consumer hardware like phones.

Thanks for the effort you're putting into this!


Continual | Frontend, Fullstack, and ML Engineer | SF, REMOTE | https://continual.ai

Continual is building the missing AI layer for the modern data stack. We're hiring multiple roles (frontend, full stack, ML engineer) at all levels. This is a chance to get in early on a massive opportunity in one of the most interesting areas of technology: democratizing operational AI/ML.

Email tristan@continual.ai to learn more.


This is one of the most interesting and ambitious feature stores I've seen. It's also another example of Chinese companies really innovating on AI/ML infra.

