
He had mental issues before that - the bot simply reinforced his decisions me thinks.


Yes, which is the point: if you substitute an LLM for a human therapist, this is what's going to happen, because "guard rails" can't account for every scenario. If half a billion dollars doesn't buy a bug-free game, why would it buy a safe LLM?

Do human therapists ever encourage a patient's suicide? Probably, but it's likely far less common, since a patient's suicide is going to hurt the therapist's reputation.


Millions upon millions of people have mental health issues. Chatbots that reinforce those issues shouldn't be dismissed with a "simply... me thinks".



