
I thought about this a bit, and am wondering where an LLM-fueled arms race over figuring out whether an AI created some piece of text will lead.

I'm worried we may reach a point where AI gets so good at faking people that real people's output is treated as fake, simply because there are only so many plausible combinations a human can produce. You would end up depending on an AI to tell you what is real and what is fake for every single piece of information. That leads to the question of which AI to trust: you can't really verify an opinion, only facts.

And being able to manipulate opinions is a very powerful capability.


