
"Self-driving" is just "reactive" with an open recursive structure. In principle, a network that processes a prompt, generates a plan, recurses a finite number of times, judges how well it did, generates a training plan to improve, outputs a corresponding follow-up prompt, and then waits for you to press a button before it repeats the whole thing with the follow-up prompt, ad infinitum, is still "reactive" - but nobody would argue that whoever presses the button is performing irreplaceable cognitive labor.

So I just don't think this captures an important distinction at the limit. If a system can generate a good action plan, turning it into an agent is just plumbing.
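To make the "just plumbing" point concrete, here's a minimal sketch of the loop described above. Every function in it (generate_plan, execute, critique, revise_prompt) is a hypothetical stand-in, not any real API; the only claim is that the wrapper around a reactive model is a few lines of code.

  # Hypothetical sketch, in Python. The stand-in functions below are not
  # real APIs; they mark where a reactive model call would go.

  def generate_plan(prompt: str) -> str:
      # Stand-in for a single reactive model call that returns a plan.
      return "plan for: " + prompt

  def execute(plan: str) -> str:
      # Stand-in for carrying the plan out and collecting an outcome.
      return "outcome of: " + plan

  def critique(outcome: str) -> str:
      # Stand-in for the model judging how well it did.
      return "critique of: " + outcome

  def revise_prompt(prompt: str, notes: str) -> str:
      # Stand-in for turning the critique into a follow-up prompt.
      return prompt + " | improve on: " + notes

  def run(prompt: str, rounds: int = 3) -> None:
      for _ in range(rounds):
          plan = generate_plan(prompt)
          outcome = execute(plan)
          prompt = revise_prompt(prompt, critique(outcome))
          # The "irreplaceable cognitive labor": press Enter to continue.
          input("press the button> ")

  run("draft a better action plan")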



Actually, we can't be certain that humans themselves are not reactive. It's just that their reactions are either built in (self-preservation, reproduction) or based on much broader input (sensory, biochemical, etc.). The current reactivity of LLMs is very limited by their architectures, though, and as long as those architectures stay in place, you can't expect them to be "self-driven".


I just don't think that's the case. I suspect reactivity in LLMs is mostly limited by training, not architecture. Human text data just isn't suited to the way an AI needs to output data to plan long action chains: it's justification, not reasoning.


That might be true as well, alongside what I said.



