If people already understand what "hallucination" means, then I think it's perfectly intuitive and educational to say that, actually, the LLM is always doing that; it's just that some of those hallucinations happen to coincidentally describe something real.
We need to dispel the notion that the LLM "knows" the truth, or is "smart". It's just a fancy stochastic parrot. Whether its responses reflect a truthful reality or a fantasy it made up is just luck, weighted by (but not constrained to) its training data. Emphasizing that everything is a hallucination does that. I purposefully want to reframe how the word is used and how we think about LLMs.