I think it’s worth discussing the fact that many folks in EA- and rationalist-adjacent circles are deeply panicked about developments in AI, because they believe these systems pose a threat to human existence. (Not all of them, obviously. But a large and extraordinarily well-funded contingent.) My fear with these organizations is that, eventually, panic in the name of saving humanity leads to violent action. And while these are just a couple of isolated examples, they do illustrate the danger that violent action can become normalized in insular organizations.
I do hope people “on the inside” are aware of this and are working to make sure those organizations don’t have bad ideas that hurt real people.
I think this is a variant of Walter Russell Mead's "Abrahamic bomb" [1].
When you remove the theology from a psychology based on Judeo-Christian morality, it recreates a facsimile of it. In this case, an AI Judgment Day precedes either eternal damnation or salvation. Ziz, like other "radical rationalists," even believes that the singularity AI will retroactively punish people for their moral failings, such as eating meat.
So my speculation, based on your comment, is that the data scientists in the group saw the work they were doing up close as leading to something anathema to the world they wanted. It seemed like a weird, trivial connection in the news article, but in the context of a violent rejection of AI it makes sense.
[1] https://www.sfgate.com/bayarea/article/bay-area-death-cult-z...