
I recently asked a leading GenAI chatbot to help me understand a certain physics concept. As I pressed it on the aspect I was confused about, the bot kept explaining, consistently held firm that I was misunderstanding something, and offered guesses about what my misunderstanding might be. Eventually I realized and stated my mistake, and the chatbot confirmed it and explained the difference between my wrong version and the truth. I checked some sources and confirmed that the bot was right; I had misremembered something.

I was quite impressed that it didn't "give in" and validate my wrong idea.



I've seen similar results in physics. I suspect LLMs are capable of accurately redirecting the user when that topic has been discussed at length on the web. When an LLM can pattern-match on whole discussions, it becomes a next-level search engine.

Next, I hope we can somehow get LLMs to distinguish between reliable and less-reliable results.



