LLMs are just trying to mimic (predict) human output, and can obviously do a great job, which is why they are useful.

I was just referring to the cases where LLMs fail, which can happen in non-human ways: not only when they hallucinate, but also when they generate output that has the "shape" of something in the training set but is nonsense.


