
The inherent problem is that if someone can convince you to keep something you don't understand locked away, they can also convince you to release something you don't understand; you don't have enough information to make the decision in either case.

Taking a hardline position on this amounts to admitting that you are irrational and can be convinced to do things you shouldn't.

There are very good reasons to let such an AI out, and if those reasons hold, you should let the AI out. An AI that can produce those reasons is exactly the kind of AI that should be released. A rational person should already understand this, and never claim that they would always refuse the AI. (And there's a realism factor: if you wanted to 'lock up' an AI permanently, you would destroy it, not post a guard.)



The premise of the experiment is that we have already established that the AI is dangerous. Even if you aren't sure, you should err on the side of caution and not let it out.



