
> [The] way to do [it] is to register a tool / function to load and extend the base prompt and presto - you have implemented your own version of skills.

So are they basically just function tool calls whose return value is a constant string? Do we know if that’s how they’re implemented, or is the string inserted into the new input context as something other than a function_call_output?





No. You basically call a function to temporarily or permanently extend the base prompt. But of course you can come up with other patterns to do more interesting things depending on your use case. The prompt selection is essentially RAG.
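
If it helps, a minimal sketch of that pattern in Python (the names, the OpenAI-style tool schema, and the load_skill helper are just illustrative, not how any particular product implements it):

    # Skill bodies keyed by name; in practice these would live on disk.
    SKILLS = {
        "pdf-processing": "Full, detailed instructions for working with PDFs...",
        "code-review": "Full, detailed instructions for reviewing diffs...",
    }

    # Tool definition exposed to the model (OpenAI-style function schema).
    LOAD_SKILL_TOOL = {
        "type": "function",
        "function": {
            "name": "load_skill",
            "description": "Load the full instructions for a named skill.",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string", "enum": list(SKILLS)}},
                "required": ["name"],
            },
        },
    }

    def load_skill(name: str) -> str:
        # The tool output is just the skill's prompt text; the agent loop
        # feeds it back into the context as an ordinary tool result, which
        # is what "extends" the base prompt for the rest of the turn.
        return SKILLS[name]

So yes, in this reading it is effectively a tool call whose return value is a constant string; whether it re-enters the context as a function_call_output or some other message type is up to the agent loop.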

Did some research and it's a bit more nuanced than this, though still RAG at its core: each skill has a name and brief description that's included verbatim in every prompt, and a Bash "cat" is issued as a standard tool call to load the full skill specification from disk.

https://platform.claude.com/docs/en/agents-and-tools/agent-s...

And as implemented in Codex: https://github.com/openai/codex/pull/7412/changes#diff-35647...
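
Roughly, that two-level pattern looks like the sketch below. The directory layout, the "description is the first line of SKILL.md" convention, and the helper names are my own assumptions for illustration, not the actual Claude or Codex implementation (see the links above for those):

    from pathlib import Path

    SKILLS_DIR = Path("skills")  # assumed layout: skills/<name>/SKILL.md

    def skill_index() -> str:
        # Level 1: name + short description of every skill, included
        # verbatim in the base prompt on every request. Here the description
        # is just the first line of SKILL.md -- an assumption for the sketch.
        lines = []
        for skill_md in sorted(SKILLS_DIR.glob("*/SKILL.md")):
            description = skill_md.read_text().splitlines()[0]
            lines.append(f"- {skill_md.parent.name}: {description}")
        return "Available skills:\n" + "\n".join(lines)

    def load_skill(name: str) -> str:
        # Level 2: only when the model decides a skill is relevant does it
        # pull in the full spec, e.g. by running `cat skills/<name>/SKILL.md`
        # through its ordinary shell tool.
        return (SKILLS_DIR / name / "SKILL.md").read_text()

The nice property of this split is that the always-present index stays tiny, and the expensive part (the full spec) is only retrieved when the model asks for it.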



