This is a good way of framing the fact that we don't understand human creativity, and that we can't hope to build it until we do.
i.e. AGI is a philosophical problem, not a scaling problem.
Though we understand them poorly, we know the default mode network and sleep play key roles, likely because they support some universal property of general intelligence. Concepts we don't yet understand, like motivation, curiosity, and qualia, are probably part of the picture too. Evolution is far too efficient for these to be mere side effects.
(And of course LLMs have none of these properties.)
When a human solves a problem, their search space is not random, just as a chess grandmaster's search over moves is not random.
How our brains manage to be so efficient at problem solving while still generating novelty is a mystery.