
Humans can also spend an entire lifetime (or more, across multiple generations) being absolutely, inexorably and violently certain they are correct about something and still be 100% wrong.

I am not disagreeing that both people and LLMs are extremely helpful in many or most instances. But if the best we can do with this technology is to make human-comparable mistakes far faster and more efficiently, I think as a species we're in for a lot more bad times before we graduate to the good times.


