
Seeing a problem you've seen many times and have memorized, and plowing through it without "concentrating" enough to notice the subtle differences, is a failure mode that occurs in humans as well. We don't say "humans can't reason" just because this happens, so it makes little sense to say it about LLMs. The important bit is that it can solve the problem if nudged away from the memorized answer, same as people.


Humans are fundamentally wired to be irrational; our perceptual and cognitive apparatus is deeply flawed, as countless studies show, so this is a given.

But we also discovered a way of thinking and modeling that seems to work amazingly well: the scientific method. That way of reasoning is not at all natural to how humans operate; it is a struggle for most of us to think in that manner. That's why math and science are difficult for most of us, and why they were discovered only in the last 2000 years.

LLMs cannot yet represent conceptual relationships deterministically or symbolically. Perhaps a future generation will, but the current one has a long way to go.



