
OK - there's always a nonzero chance of hallucination. There's also a non-zero chance that macroscale objects can undergo quantum tunnelling, but no one is arguing that we "need to live with" that fact. A theoretical proof that a 0% probability of some event is unreachable is nice, but in practice it says little about whether we can exponentially decrease the probability of it happening and thereby effectively mitigate the risk.
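To make the mitigation point concrete, here's a minimal back-of-the-envelope sketch in Python. It assumes each verification pass hallucinates independently with the same probability p, which is a strong assumption in practice, and the 0.05 rate is purely illustrative:

    # If a single pass hallucinates with probability p, and an error
    # only slips through when all n independent passes fail, the
    # combined failure probability is p**n: exponentially small in n,
    # even though it never reaches exactly zero.
    p = 0.05  # assumed per-pass hallucination rate (illustrative)
    for n in (1, 2, 3, 5):
        print(f"{n} pass(es): residual error probability ~ {p ** n:.2e}")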


Plus, why do we care to that degree? If we could make it so humans don't hallucinate either, that would be great, but it ain't happening. Human memory gets polluted the moment you feed people new information, as evidenced by how much care we have to take when extracting information when it matters, like in law enforcement interviews.

People rag on LLMs constantly, and I get it, but then they give humans way too much credit imo. The primary difference I see between LLMs and humans is complexity. No, I don't personally believe LLMs can scale to human "intelligence". However, atm it feels like comparing a worm brain to a human brain and calling that evidence that neurons can't reach human-level intelligence, despite the worm having a fraction of the underlying complexity.


Humans have two qualities that make them infinitely superior to LLMs for similar tasks.

a) They don't give detailed answers to questions they have no knowledge about.

b) They learn from their mistakes.


> there's always a nonzero chance of hallucination. There's also a non-zero chance that macroscale objects can do quantum tunnelling, but no one is arguing that we "need to live with this" fact.

True, but that attitude is defeatist and goes against a good engineering/scientific mindset.

With this attitude we'd still be practicing alchemy.


Exactly.

LLMs will sometimes be inaccurate. So are humans. When LLMs are clearly better than humans for specific use cases, we don't need 100% perfection.

Autonomous cars will sometimes cause accidents. So do humans. When AVs are clearly safer than humans for specific driving scenarios, we don't need 100% perfection.


If we only used LLMs for use cases where they exceed human ability, that would be great. But we don't. We use them to replace human beings in the general case, and many people believe that they exceed human ability in every relevant factor. Yet if human beings failed as often as LLMs do at the tasks for which LLMs are employed, those humans would be fired, sued and probably committed.

Yet any arbitrary degree of error can be dismissed in LLMs because "humans do it too." It's weird.


I don't think it's true that modern LLMs are used to replace human beings in the general case, or that any significant number of people believe they exceed human ability in every relevant factor.


> When AVs are clearly safer than humans for specific driving scenarios, we don't need 100% perfection.

People didn't stop refining the calculator once it was fast enough to beat a human. It's reasonable to expect absolute idempotent perfection from a robot designed to manufacture text.


Maybe, down the line. Calculators went through a long period of refinement before becoming as powerful as they are today. It's only natural that LLMs will also take time. And much like calculators moved from stepped drums, to vacuum tubes, to finally transistors, the way we build LLMs is sure to change. Although I'm not quite sure idempotence is something LLMs are capable of.
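(For the curious: idempotence would mean that feeding the model's output back in returns that output unchanged, i.e. f(f(x)) == f(x). A toy sketch of the property, where normalize stands in for a hypothetical deterministic text transform, since no real LLM is assumed here:)

    # A function f is idempotent when applying it twice is the same
    # as applying it once: f(f(x)) == f(x).
    def is_idempotent_on(f, x):
        once = f(x)
        return f(once) == once

    # A trivially idempotent transform, purely for illustration:
    normalize = lambda s: s.strip().lower()
    print(is_idempotent_on(normalize, "  Hello World  "))  # True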


A scientific HP calculator from the late '80s was powerful enough to cover most engineering classes.


Sure, but that doesn't mean they haven't improved. Try calculating 99! on a TI-59. I doubt it can do it, and I know the modern TI-30XIIS can't, but my Numworks can (although doing it twice forces it to clear that section of RAM). The calculator space may be slow to improve, as most non-exam calculation has moved to computers, but that doesn't mean calculators aren't useful, especially with scripting languages allowing me to convert between whatever units I want or calculate anything easily.
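(Side note: 99! is a 156-digit integer, so the wall those calculators hit is their fixed-size number format. Anything with arbitrary-precision integers handles it directly; a two-line check in Python, for instance:)

    import math

    n = math.factorial(99)  # exact: Python ints are arbitrary precision
    print(len(str(n)))      # 156 -- the number of digits in 99!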


Not sure absolute perfection is a concept that can exist in the universe of words.


Yeah, if the computer had been wrong 3 out of 10 times, it never would have been a thing.


LLMs will always have some degree of inaccuracy.

FtFY.



