
It's very subjective. An LLM could return statements like "Global warming is real and man-made", and it also could produce a result like "Global warming is a hoax", and it's definitely up to the reader as to whether the LLM is "hallucinating". It doesn't matter how readable or grammatically correct the LLM is, it's still up to the reader to call bullshit, or not.


If you ask about opinions, sure. Because there are no "true" opinions.

If you ask about the capital of France, any answer but "Paris" is objectively wrong, whether given by a human or LLM.


Paris has not always been the capital of France. Many other cities in France have been the capital at one time or another.

https://en.wikipedia.org/wiki/List_of_capitals_of_France

There's practically no subject you could bring up that an LLM wouldn't "hallucinate" or give "wrong" information about, given that it's garbage in -> garbage out, and LLMs are trained on all the garbage (as well as plenty of facts) they've been able to scrape. The LLM lacks the ability to reason about which century the prompt is asking about; a guess is all it is programmed to make.

Also, if you ask 100 French citizens today what the true capital of France is, you're not always going to get "Paris" as a reply 100 times.


But if you ask "what is the capital of France" (not what it has been, but what it is), there is actually only one correct answer. "Capital" has a definition, and the headquarters of the government of France has a definite location. Sure, some French citizens will give a different answer. Some people will say the earth is flat, too. They are wrong.


But we're talking about LLMs in this thread, and the example I used of French citizens not always saying Paris is the capital of France is just an example of how topics can be subjective. If you have something pertinent to the LLM discussion, then please reply.


The capital of France is not subjective. People say stuff. Some of it is subjective, and some of it is just wrong.

So, was your comment about Paris about the LLM discussion, or wasn't it? Because you're the one who brought it up, so if we got off topic, blame yourself.

I have asserted that the LLMs are sometimes flat-out wrong. You have answered that point with an example of... what were you trying to say? That humans can also be wrong? If so, that's true, but so what? We were talking about LLMs. Or were you trying to say that even something like the capital of France is actually subjective? If so, you're wrong. "The capital of France" has only one correct answer, even if there are other answers in the training data.

Or are you trying to say that it's not the LLM's fault, because the wrong answers are in the training data? That's true, but it's irrelevant. The LLM is still giving wrong answers to questions that have objectively correct answers. Yes, those wrong answers were in its training data; yes, that's how LLMs work; but none of that makes the answer any less wrong.

So what's your actual argument here? You seemed to be headed toward a "no answer can actually be objectively correct", which is both lousy epistemology, and completely unworkable in real life. But then you seemed to veer into... something. What are you actually trying to claim?

This is sounding kind of harsh, but I really am not following what your actual point is.


>>The capital of France is not subjective. People say stuff. Some of it is subjective, and some of it is just wrong.

>"The capital of France" has only one correct answer, even if there are other answers in the training data.

"Champagne is the Champagne capital of France". "Bordeaux is the red wine capital of France". See how easy it is? You're pedantry is only proving that you're a pedant and can't accept anything but you being the only one who is correct. Ease up, bro. We can both be right.

But none of that is about LLMs; I'm just making a separate point.

>So what's your actual argument here?

A system that's programmed to generate plausible-sounding text is "right" when it generates plausible-sounding text. It's not "hallucinating", it's not "lying", it's not "wrong"; it is operating exactly as designed, and it was never programmed to deliver "the truth". It's up to the reader to decide whether the output is acceptable, which is entirely subjective, depending on what the reader thinks is right. If the LLM says "Bordeaux is the red wine capital of France", are you going to shit on it and say it's somehow "wrong"? NO ITS WROOOONG THERE CAN ONLY BE ONE TRUE CAPITAL OF FRANNNNNNCEEEE!!! Go ahead and die on that hill if you must.
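(To make that concrete, here's a toy sketch in Python. It's nothing like a real LLM's architecture -- just a word-pair counter, a hypothetical stand-in -- but it shows the basic point: a generator trained only to produce statistically plausible continuations will happily reproduce whatever is in its training text, true or not.)

    import random
    from collections import defaultdict

    # Toy "training data" containing a true claim, a false claim, and the wine claim.
    corpus = (
        "the capital of france is paris . "
        "the capital of france is lyon . "
        "the wine capital of france is bordeaux . "
    )

    # "Training": count which word follows which.
    tokens = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)

    def continue_from(prompt, n=2):
        # Sample each next word in proportion to how often it followed
        # the previous word in the training text. Plausibility only.
        out = prompt.split()
        for _ in range(n):
            out.append(random.choice(follows.get(out[-1], ["."])))
        return " ".join(out)

    for _ in range(5):
        print(continue_from("the capital of france is"))

Run it a few times and you get "paris", "lyon", or "bordeaux", purely according to what was in the training text. "Truth" never enters into it.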

If this were another website, I'd have blocked you already, because this is entirely a waste of my time.


For the first half of what you said: I will note that "wine capital of France" is a completely different claim than "capital of France", even if many of the words are the same. For the rest: I'll just leave this here for everyone else to judge which of us is being the pedant, and which is arguing just to keep arguing.

As for the second half: I am almost in agreement with your overall point here. LLMs are plausible text generators. Yes, I'm with you there. But LLMs are marketed as more than that, and that's the problem. They're marketed by their makers as more than that.

This is not a technical problem, it's a marketing problem. You can't yell at people for accusing a plausible text generator of "hallucinating" when they were sold it as more than just a plausible text generator. (They were sold it as "AI", which is something you might realistically be able to accuse of hallucinating.) The LLM creators have written a check that their tech, by its very nature, cannot cash, and so their tech is being held to a standard it cannot reach. This isn't the fault of the tech; it's the fault of the marketing departments.


>But LLMs are marketed as more than that, and that's the problem. They're marketed by their makers as more than that.

The new snake oil, same as the old snake oil. This is no different than any other tech bubble. Nobody paying attention should think otherwise. I don't care how it's marketed, I mean half the US is going to vote for a serial rapist conman thanks to some twisted marketing. People are idiots and are easily fooled, and this has gone on as long as there have been humans. I'm not sure what to say about "marketing".

So finally we can sort of agree on something. But I still think you're giving the LLMs too much credit in suggesting that they will always infallibly say "Paris" when asked what the capital of France is. There's simply no mechanism for the LLM to understand "Paris" or "France" or "capital". If I asked the LLM that question 1,000,000 times, do you really think it would say "Paris" 1,000,000 times? I kind of doubt it.
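(You can see the sampling side of this for yourself. Here's a quick sketch assuming the Hugging Face transformers library and plain GPT-2 -- a far weaker model than any modern chat LLM, so this only demonstrates the decoding mechanics, not how any particular product would answer. With sampling turned on, repeated runs of the same prompt don't all land on the same continuation; a bigger model decoded greedily may well say "Paris" every time, but nothing about the mechanism guarantees it.)

    # Sketch: sample the same prompt repeatedly from a small open model.
    # With do_sample=True (temperature > 0), continuations vary run to run.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(
        "The capital of France is",
        max_new_tokens=5,
        do_sample=True,          # sample from the distribution instead of greedy argmax
        temperature=1.0,
        num_return_sequences=10,
    )
    for o in outputs:
        print(o["generated_text"])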

The problem is with the person who is expecting truth from an LLM. So far I don't really see too many people putting absolute faith in anything an LLM is telling them, but maybe those people are out there.


No, I never said that an LLM would always say "Paris". I said that Paris is the actual correct answer. I don't give LLMs that kind of credit; I'm not sure what I said that made you think that I do.


This got real tedious...


lame response.


I said it's often not subjective. When it says that there are two "r"s in "strawberry", it is wrong, and it is not subjective at all. There's no wiggle room. It's wrong.
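(The count itself is trivially checkable, e.g. in Python:)

    >>> "strawberry".count("r")
    3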



