The models are currently trained on a static set of human “knowledge” — even if they “know” what novelty is, they aren’t necessarily incentivized to identify it.
In my experience, LLMs currently struggle with new ideas, and this is doubly true for reasoning models with search.
What makes novelty difficult is that the ideas must be nonobvious (see: the patent system). For example, hallucinating a simpler API spec may be “novel” for a single convoluted codebase, but it isn’t novel in the scope of humanity’s information bubble.
I’m curious if we’ll have to train future models on novelty deltas from our own history, essentially creating synthetic time capsules, or if we’ll just have enough human novelty between training runs over the next few years for the model to develop an internal fitness function for future novelty identification.
My best guess? This may just come for free in a yet-to-be-discovered continually evolving model architecture.
In either case, a single discovery by a single model still needs consensus.
It's a good question. A related question is: "what's an example of something undeniably novel?". Like if you ask an agent out of the blue to prove the Collatz conjecture, and it writes out a proof or counterexample. If that happens with LLMs then I'll be a lot more optimistic about the importance to AGI. Unfortunately, I suspect it will be a lot murkier than that - many of these big open questions will get chipped away at by a combination of computational and human efforts, and it will be impossible to pinpoint where the "novelty" lies.
Good point. Look at patents. Few are truly novel in some exotic sense of "the whole idea is something never seen before." Most likely it is a combination of known factors applied in a new way, or incremental development improving on known techniques. In a banal sense, most LLM content generated is novel, in that the specific paragraphs might be unique combinations of words, even if the ideas are just slightly rearranged regurgitations.
So I strongly agree that, especially when we are talking about the bulk of human discovery and invention, the incrementalism will be increasingly within striking distance of human/AI collaboration. Attribution of the novelty in these cases is going to be unclear, when the task is, simplified, something like "search for combinations of things, in this problem domain, that do the task better than some benchmark" — be that drug discovery, math, AI itself, or whatever.
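That "combinations beating a benchmark" framing can be sketched as a tiny search loop. This is purely illustrative: the component pool, the `evaluate` function, and the baseline score are all hypothetical stand-ins for whatever the real domain (drug discovery, math, ML) would supply.

```python
import itertools

def evaluate(combo):
    # Hypothetical scoring function; in practice this would be a lab assay,
    # a proof checker, a training run, etc. Here it just sums the parts.
    return sum(combo)

components = [1, 3, 5, 7]   # hypothetical pool of known building blocks
baseline = 11               # benchmark score the combination must beat

# Enumerate pairs of known factors applied together; keep the ones that
# outperform the benchmark. "Novelty" here is just a winning combination.
improvements = [
    combo for combo in itertools.combinations(components, 2)
    if evaluate(combo) > baseline
]
```

The point of the sketch is the attribution problem: when the winning combination falls out of an exhaustive loop like this, it is genuinely unclear whether the "novelty" belongs to the search procedure, the component pool, or whoever framed the benchmark.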
Peer review?