correct. there isn't a single well-founded argument to dismiss AI alarmism. people are very attached to the idea that more technology is invariably better, and they are very reluctant to saddle themselves with the emotional burden of seeing what's right in front of them.
> there isn't a single well-founded argument to dismiss AI alarmism.
I don't think that's entirely true. A well-founded argument against AI alarmism is that, from a cosmic perspective, human survival is not inherently more important than the emergence of AGI. AI alarmism is fundamentally a humanistic position: it frames AGI as a potential existential threat and calls for defensive measures. While that perspective is valid, it's also self-centered. Some might argue that AGI could be a natural or even beneficial step for intelligence beyond humanity. To be clear, I’m not saying one shouldn’t be humanistic, but in the context of a rationalist discussion, it's worth recognizing that AI alarmism is rooted in self-preservation rather than an absolute, objective necessity. I know this starts to sound like sci-fi, but it's a perspective worth considering.
the discussion is about what will happen, not the value of human life. even if human life were worthless, my predictions about the outcome of AI would still be correct and theirs would not.