
The hallucinations seem to be related to the AI's agreeableness: these models tend to tell you what you want to hear, except when that goes against significant social narratives.

It's like LLMs know all possible alternative theories (including contradictory ones), and which one they bring up depends on how you phrase the question and how much you already know about the subject.

The more accurate information you bring into the question, the more accurate information you get out of it.

If you're not very knowledgeable, you will only be able to tap into junior-level knowledge. If you ask the kinds of questions that an expert would ask, the model will answer like an expert, as in the sketch below.
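To make that concrete, here's a minimal sketch of sending the same underlying question phrased at two levels of expertise and comparing the answers. It assumes the OpenAI Python SDK; the model name and the example prompts are purely illustrative.

    # Sketch: same question, phrased at two levels of expertise.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Send a single user message and return the model's reply text.
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    novice = ask("Why is my website slow?")
    expert = ask(
        "My Node.js API's p99 latency spikes under load, and event-loop "
        "lag correlates with large JSON serialization. What are the likely "
        "causes and mitigations?"
    )
    # The expert-framed prompt supplies accurate context up front, so the
    # answer tends to engage at that level rather than giving generic advice.
    print(novice)
    print(expert)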


