Maybe with vanilla LLMs, but newer training paradigms include post-training with the explicit goal of avoiding overconfident answers to questions the model shouldn't be confident about. So hallucination is a malfunction, just like any other overconfident incorrect prediction by a model.
The only time the LLM can be somewhat confident of its answer is when it is reproducing verbatim text from its training set. In any other circumstance, it has no way of knowing whether the text it produced is true, because fundamentally all it knows is whether that text is a likely completion of its input.
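To make that concrete, here's a minimal sketch (model, prompts, and the helper name are all arbitrary illustrative choices) of the only signal a base LM actually exposes: the log-probability of a completion. A true statement and a false one both get a score, and the score measures plausibility under the training distribution, not truth.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Total log-probability the model assigns to `completion` after `prompt`."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position i predict token i+1, so shift by one and
    # score only the completion tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    scores = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return scores[prompt_len - 1:].sum().item()

# Both statements get a likelihood; neither number says anything
# about which one is actually true.
print(completion_logprob("The capital of Australia is", " Canberra."))
print(completion_logprob("The capital of Australia is", " Sydney."))
```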
Post-training includes mechanisms that let LLMs recognize areas where they should exercise caution when answering. It's not as simple as you say anymore.
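One way to picture the incentive those mechanisms create (a toy model of the grading, not any lab's actual recipe): if a grader scores +1 for a correct answer, applies a penalty for a wrong one, and gives 0 for declining, then "I don't know" becomes the reward-maximizing output whenever the model's chance of being right falls below a threshold.

```python
# Toy model of abstention-aware grading: reward +1 for a correct
# answer, -penalty for a wrong one, 0 for "I don't know".

def expected_reward_for_answering(p_correct: float, penalty: float) -> float:
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

def best_action(p_correct: float, penalty: float = 2.0) -> str:
    # Abstaining always scores 0, so answer only when answering beats that.
    if expected_reward_for_answering(p_correct, penalty) > 0:
        return "answer"
    return "say I don't know"

# With penalty = 2 the break-even confidence is 2/3 (p - 2(1-p) > 0).
for p in (0.5, 0.6, 0.7, 0.9):
    print(f"p(correct)={p}: {best_action(p)}")
```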
I still think OP has a point. After public release, LLMs came to be positioned as oracles holding vast amounts of knowledge. They were always probabilistic content generators, but people use them the way they use search engines: to retrieve information they know exists but can't recall precisely.
Since LLMs aren't designed for this, there's a whole post-training process to try to make them amenable to this use case, but it will never fully plug that gap.
To be better than humans they have to be able to confidently say "I don't know" when the correct answer is not available[1]. To me this sounds like a totally different type of "knowledge" than stringing words together based on a training set (rough sketch of how that's measured below).
[1] LLMs are already better than humans in terms of breadth, and sometimes depth, of knowledge. So it's not a problem of the AI knowing more facts.
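For what it's worth, the "knowing what you don't know" ability does get quantified. Here's a rough sketch of one standard metric, expected calibration error, with made-up confidences and outcomes standing in for a model's graded answers on some QA set. A model is well calibrated when its stated confidence matches its actual hit rate.

```python
import numpy as np

def expected_calibration_error(confidences, outcomes, n_bins: int = 10) -> float:
    """Weighted average of |accuracy - mean confidence| per confidence bin."""
    conf = np.asarray(confidences, dtype=float)
    hit = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            ece += mask.mean() * abs(hit[mask].mean() - conf[mask].mean())
    return ece

# A model that says 90% but is right half the time is badly calibrated,
# even if it "knows" plenty of facts.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0]))  # ~0.4
```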
> To me this sounds like a totally different type of "knowledge" than stringing words together based on a training set.
We're desperate to keep seeing ourselves as unique, with key distinguishing features that are unreproducible in silicon. But in my long experience with computer chess, at every step along the way people patiently explained how computers could never reproduce the next quality that set humans apart. It was always just wishful thinking: computers eventually stopped looking silly, stopped looking like they were playing by rote, and started to produce "beautiful" chess games.
And it will happen again, after the next leap in AI, people will again latch on to whatever it is that AI systems still lack, and use it to explain how they'll "always" be lacking... only to eventually be disappointed again that silicon can in fact reach that height too.
Humans aren't magic. Whatever we can do, silicon can do too. It's just a matter of time.
Umm, is this true? Tons of worthless technology is better than humans at something. It has to be better than humans AND better than existing technology.