We shouldn’t anthropomorphize LLMs—they don’t “struggle.” A better framing is: why is the most likely next token, given the prior context, one that reinforces the earlier wrong turn?
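
To make that framing concrete, here is a minimal sketch (an illustration, not anything from the comment itself): it assumes the Hugging Face transformers library and GPT-2, and compares the next-token distribution under a prefix that commits to a wrong claim against one that does not. The prefixes and the top_next_tokens helper are made up for the example.

    # Compare next-token distributions under two prefixes: one that
    # commits to an earlier "wrong turn" and one that does not.
    # Assumes the Hugging Face `transformers` library and GPT-2 weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def top_next_tokens(prefix: str, k: int = 5):
        """Return the k most probable next tokens given `prefix`."""
        ids = tokenizer(prefix, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # logits for the *next* token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        return [(tokenizer.decode(int(i)), float(p))
                for i, p in zip(top.indices, top.values)]

    # The same continuation point, conditioned on a correct vs. incorrect prior step.
    print(top_next_tokens("2 + 2 = 4, and therefore"))
    print(top_next_tokens("2 + 2 = 5, and therefore"))

Under greedy or low-temperature decoding, whatever is most probable given the committed prefix is what gets emitted, so a model conditioned on its own earlier wrong step tends to continue consistently with it rather than contradict it. That is the non-anthropomorphic restatement of "struggling."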

