Maybe "solved problem" is an overly strong statement here. I was responding to:
> Are there any effective ways to add extra knowledge to an LLM, ways that are more than just demos or proofs of concept?
Adding to the context is certainly "effective" and more than just a proof-of-concept/demo. There are many production systems out there now using context-filling tools, most notably GPT-5 with search itself.
I do think it's only recently (this year) that models got reliable enough at this to be useful. For me, o3 was the first model that seemed strong enough at selecting and executing search tools for this trick to really start to shine.
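To make "adding to the context" concrete, here's a minimal sketch of the pattern: run a search, paste the results into the prompt, and let the model answer from them. This is an illustration, not any particular product's implementation; the `search_web` helper, the `answer_with_context` function, and the model name are placeholders I'm assuming for the example, and the OpenAI chat-completions call is just one way to send the assembled prompt.

```python
# Minimal sketch of context-filling: retrieve text, then answer from it.
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> list[str]:
    # Placeholder: swap in a real search backend (web search API,
    # vector store, internal docs index, etc.).
    return [f"(stub result; pretend this is a relevant snippet for: {query})"]

def answer_with_context(question: str) -> str:
    snippets = search_web(question)
    # The "extra knowledge" is simply pasted into the prompt as context.
    context = "\n\n".join(snippets)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context; "
                        "say so if the context is insufficient."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Tool-calling setups just move the `search_web` step inside the model loop (the model decides when to search), but the knowledge still enters the same way: as text in the context window.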
Since this is an anecdote, it's unfalsifiable and can't support these softened claims either. The endless possibilities of incorrect contexts inferred from merely incomplete or adjacent contexts make it hard to manage information quantity versus quality. I'll offer my own unfalsifiable anecdote, though: all of this just names a new rug to sweep the problems under, and it feels to me like the kind of problem that, if we knew how to solve it, we wouldn't be using these models at all.