[flagged] My experience at and around MIRI and CFAR (lesswrong.com)
29 points by luu on Oct 18, 2021 | 13 comments


I will be honest about my own experience, which seems somewhat relevant.

I think that all it really takes to become extremely paranoid is listening to enough paranoid talk over a period of time. Some years ago I went through a period where I was on YouTube or something watching InfoWars-type content on a daily basis (and other conspiracy theorists). After a fair amount of this, I was quite afraid of the government. I know there were days when I was afraid someone was out to get me for literally no rational reason.

But that type of content is designed to make you feel that way. It's part of their weirdly effective hook. But at the core it's really just repeating a stream of extreme or paranoid perspectives in different ways.

Point being, I believe if someone decided they needed to convince a group of people about how corrupt the world is, or whatever, they could do so. One aspect of this might be a little bit of social isolation, just based on the fact that if you are in contact with fewer people then you are likely exposed to fewer information streams. So when they mention people who have very different ideologies, very high IQ, or anything else that might make it slightly harder to socialize, that seems to add up also.

The toughest part for me is that, deep down, I personally believe some core motivator of this paranoia IS rational in a way. My speculation is that we will see these general-purpose AI technologies in our lifetime, and there actually are groups who seem predisposed to misuse them or build the wrong type of system, which could be truly disastrous. Obsessing over these kinds of speculative existential risks, though, is obviously not a sustainable way to live for most people.


"Most Substance-addicted people are also addicted to thinking, meaning they have a compulsive and unhealthy relationship with their own thinking." - David Foster Wallace, Infinite Jest

Now imagine your career and passion is thinking about thinking, making computers think, and making sure thinking computers don't destroy the world.


https://medium.com/@zoecurzi/my-experience-with-leverage-res...:

"The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks. For a long time, what was asked of me in order to maintain my funding was to “become a self-debugger” — become capable of working with my own mind effectively enough to be able to change it in the direction that would be useful for the project, which it was assumed was also the direction I’d obviously want, because right now I wasn’t a powerful Elon Musk type, and obviously I was here because I didn’t want to keep being an ineffective, unagentic powerless normie forever (this was not remotely the self-concept or set of concerns with which I entered Leverage, but it was a shameful prospect by the time I left)."

What?

"Perhaps more important to my subsequent decisions, the AI timelines shortening triggered an acceleration of social dynamics. MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects. Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness."

What?


tl;dr: cults are dangerous, cults employing psychedelics and psychological manipulation are especially dangerous, and it's not always obvious what constitutes a cult or cult-like abuse when it's a subgroup of one's own community.

Don't read (most of) the details in the article, read the comments instead.


For those who may find this a little foreign, from what I could gather, "Rationality" refers to a post-new-age techno-religion originating in Northern California, whose soteriology focuses on artificial intelligence as the source of redemption or damnation, whose eschatology consists of an event called the Singularity, the accelerated formation of an artificial intelligence entity or entities, and whose normative doctrine focuses on Bayes' formula as a guiding principle and a particularly technical form of utilitarianism as an organising moral principle. Rationality has several institutions, two of which are named MIRI and CFAR, dedicated to studying the eschatological and normative aspects of the creed, respectively. The website, lesswrong, is a primary publication of Rationality, and its founding canonical texts include a Harry Potter fan fiction novella. It is one of many religions that grew in Northern California over the past sixty years or so, and far from the strangest one.
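For readers to whom the formula named above is unfamiliar, "Bayes' formula" is Bayes' theorem, the standard rule for updating the probability of a hypothesis H after observing evidence E (notation mine, not from the thread):

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \]

The comment above describes repeated application of this update rule as the group's guiding normative principle.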


What the heck is "Leverage"? What are MIRI and CFAR??

I tried reading the article, but it seemed like I had landed in some parallel universe, as I'd never heard of these outfits or people.


https://www.leverageresearch.org/about

MIRI is the Machine Intelligence Research Institute. https://en.wikipedia.org/wiki/Machine_Intelligence_Research_...

CFAR is the Center for Applied Rationality. https://en.wikipedia.org/wiki/Center_for_Applied_Rationality

Knowing that won't help; it's still a weird parallel universe.


The tags below the title give you the expansions of the abbreviations, which you can then put into a search engine.

They did a great job snagging the domain names that they did, I'll give them that.


For people reading the article, do read the top comment by Scott Alexander. It provides what seems to me like a grounded summary and opinion on the events.


It concludes with,

"EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We're still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were - it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. One central node was a person called Olivia who apparently claimed she was deliberately trying to cause breakdowns in other people she was close to. Other people don't seem to have been directly linked to Olivia, so something else must have been going on. I still suspect that Vassar unwittingly came up with some ideas that other people, including Olivia and Ziz, then spread. Vassar still denies this and is going to try to explain a more complete story to me when I have more time."

Now, I may be crazy and I'll proudly admit to "infinitely corrupt", but, geeze, I'm glad I'm not that crazy.

I further note that "years ago" as mentioned in the various posts and comments is ~2017, or as those of us who are no longer in our 20s like to think of it, "last Thursday."


These are the people who take the Harry Potter rationality spoof way too seriously, right?

This is what organized religion eventually evolves into?


Well that sounds like an interesting group.


Quite the rabbit hole, this decision theory business. For the uninitiated, I'd recommend reading about Roko's basilisk. It's easy to dismiss these as new-age techno-religion crap, but I feel like there's more to this than that.



