
I’m not sure what this paper is supposed to prove; I find it rather trivial.

> All of the LLM's knowledge comes from data. Therefore, … a larger, more complete dataset is a solution for hallucination.

The whole point of intelligence is that you don't need everything in the training data. This holds for humans too. A sufficiently intelligent system should be able to infer new knowledge beyond what it was trained on, which refutes the very first assumption at the core of the work.


