For the first half of what you said: I will note that "wine capital of France" is a completely different claim than "capital of France", even if many of the words are the same. For the rest: I'll just leave this here for everyone else to judge which of us is being the pedant, and which is arguing just to keep arguing.
As for the second half: I am almost in agreement with your overall point here. LLMs are plausible text generators. Yes, I'm with you there. But LLMs are marketed as more than that, and that's the problem. They're marketed by their makers as more than that.
This is not a technical problem; it's a marketing problem. You can't yell at people for accusing a plausible text generator of "hallucinating" when they were sold it as being more than just a plausible text generator. (They were sold it as "AI", which is something that you might realistically be able to accuse of hallucinating.) The LLM creators have written a check that their tech, by its very nature, cannot cash. And so their tech is being held to a standard that it cannot reach. This isn't the fault of the tech; it's the fault of the marketing departments.
>But LLMs are marketed as more than that, and that's the problem. They're marketed by their makers as more than that.
The new snake oil, same as the old snake oil. This is no different from any other tech bubble, and nobody paying attention should think otherwise. I don't care how it's marketed; I mean, half the US is going to vote for a serial rapist conman thanks to some twisted marketing. People are idiots and are easily fooled, and this has gone on as long as there have been humans. I'm not sure what to say about "marketing".
So finally we can sort of agree on something. But I still think you're giving LLMs too much credit in suggesting that they will always, infallibly, say "Paris" when asked what the capital of France is. There's simply no mechanism for the LLM to understand "Paris" or "France" or "capital". If I asked the LLM that question 1,000,000 times, do you really think it would answer "Paris" all 1,000,000 times? I kind of doubt it.
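To make that concrete, here's a toy sketch (the probabilities are made up, not taken from any real model): an LLM picks each next token by sampling from a probability distribution, so unless decoding is fully deterministic, "Paris" is only the most likely answer, not a guaranteed one.

```python
# Toy illustration of sampling-based decoding; the distribution is invented
# for the sake of the example, not pulled from an actual LLM.
import random
from collections import Counter

# Hypothetical next-token probabilities after "The capital of France is"
next_token_probs = {"Paris": 0.97, "Lyon": 0.02, "France": 0.01}

def sample_answer(probs):
    # Draw one token at random, weighted by its probability.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ask the "model" the same question 1,000,000 times and tally the answers.
counts = Counter(sample_answer(next_token_probs) for _ in range(1_000_000))
print(counts)  # mostly "Paris", but not exclusively
```

With greedy (temperature-zero) decoding you would get "Paris" every time, but that's a property of the decoding setup, not of the model understanding the question.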
The problem is with the person who is expecting truth from an LLM. So far I don't really see too many people putting absolute faith in anything an LLM is telling them, but maybe those people are out there.
No, I never said that an LLM would always say "Paris". I said that Paris is the actual correct answer. I don't give LLMs that kind of credit; I'm not sure what I said that made you think that I do.