Hacker News

I use LLMs whenever I'm coding, and they make mistakes ~80% of the time. If you haven't seen one make a huge mistake, you may not be experienced enough to catch them.




Hallucinations, no. Mistakes, yes, of course. That's a matter of prompting.

> That's a matter of prompting.

So when I introduce a bug, it's the PM's fault.



