But prediction as the basis for reasoning (in the epistemological sense) requires the goal to be given from the outside, in the form of the system that is to be predicted. And I would even say that this problem (making predictions) has been solved by RL.
Yet, the consensus seems to be that we don't quite have AGI; so what gives? Clearly just making good predictions is not enough. (I would say current models are empiricist in the extreme; but there is also the rationalist position, which emphasizes logical consistency over prediction accuracy.)
So, in my original comment, I lament that we don't really know what we want (what the objective is). The post doesn't clarify much either. And I claim this issue already arises with systems much simpler than reality-connected LLMs, such as the lambda calculus.
> But prediction as the basis for reasoning (in the epistemological sense) requires the goal to be given from the outside, in the form of the system that is to be predicted.
Prediction doesn't have goals - it just has inputs (past and present) and outputs (expected future inputs). Something that is on your mind (perhaps a "goal") is just another predictive input that causes you to predict what happens next.
> And I would even say that this problem (making predictions) has been solved by RL.
Making predictions is of limited use if you don't have a feedback loop telling you when your predictions are right or wrong (so you can update them for next time). That feedback, which our brain does get when a prediction turns out wrong, is also the basis of curiosity: it drives us to explore new things and learn about them.
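To make that loop concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the post): a predictor, a comparison of prediction against observation, and a "curiosity" signal equal to the prediction error that steers the agent toward whatever it cannot yet predict. The names (`hidden_world`, `prediction_error`) are made up for the example.

```python
import random

N_STATES = 10
hidden_world = {s: (s * 7) % 5 for s in range(N_STATES)}  # what is actually out there
predictions = {}                                          # the agent's model: state -> guess

def prediction_error(state):
    """1.0 if our prediction for this state is missing or wrong, else 0.0."""
    return 0.0 if predictions.get(state) == hidden_world[state] else 1.0

for t in range(N_STATES + 5):
    # Curiosity-driven choice: visit the state whose prediction error is largest.
    state = max(range(N_STATES), key=lambda s: (prediction_error(s), random.random()))
    surprise = prediction_error(state)        # the feedback signal
    predictions[state] = hidden_world[state]  # update the prediction for next time
    if surprise == 0.0:
        print(f"step {t}: no prediction errors left, nothing to be curious about")
        break

print(f"learned {len(predictions)} of {N_STATES} states")
```

The point isn't the toy itself but the shape of the loop: the prediction error is both the learning signal and the thing that makes unexplored states worth visiting.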
> Yet, the consensus seems to be that we don't quite have AGI; so what gives? Clearly just making good predictions is not enough.
Prediction is important, but there are lots of things missing from LLMs, such as the ability to learn, working memory, innate drives (curiosity, boredom), and so on.