>They were under the impression they could in fact change the AI's mind.

They aren't really wrong here. LLMs are often trained, or later fine-tuned, on user input. Have you considered that you might be taking their anthropomorphism a little too literally? People have used anthropomorphic metaphors for computers since Babbage's engines.
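
For concreteness, here is a minimal sketch of what "trained on input" can look like in practice: logged conversations fed back in as supervised fine-tuning data. The model name, the example conversations, and the hyperparameters are all illustrative assumptions, not any vendor's actual pipeline.

  # Sketch only: feed logged user/assistant exchanges back into a causal LM
  # as supervised fine-tuning data. Everything here is illustrative.
  import torch
  from torch.utils.data import DataLoader
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  # Hypothetical logged exchanges where users "argued" with the model.
  conversations = [
      "User: That answer is wrong, the law changed in 2005.\n"
      "Assistant: You're right, thanks for the correction.",
      "User: Please cite sources next time.\n"
      "Assistant: Understood, I'll include references.",
  ]

  def collate(batch):
      enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
      enc["labels"] = enc["input_ids"].clone()        # next-token prediction targets
      enc["labels"][enc["attention_mask"] == 0] = -100  # ignore loss on padding
      return enc

  loader = DataLoader(conversations, batch_size=2, collate_fn=collate)
  optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

  model.train()
  for batch in loader:
      loss = model(**batch).loss  # cross-entropy over the conversation tokens
      loss.backward()
      optimizer.step()
      optimizer.zero_grad()

Whether any given chat actually makes it into a future training run is a separate question, but the mechanism itself is mundane.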


