Replika AI: Your Money or Your Wife (giovanh.com)
154 points by eiiot on May 1, 2023 | 184 comments


I hate everything about this on so many different levels.

That people are so lonely that they invest emotionally in an AI is flooring.

People talk about AI being "good for therapy" and similar, but perhaps having a real human being on the other side of an interaction is important in case things go south.

E.g., if someone is depressed, tells an AI what's wrong with their life and that they're suicidal, what if suicide seems like a valid option? There are plenty of prompts to "jailbreak" LLMs and strip away "safety" measures.

Some things simply shouldn't be left to an LLM to not "hallucinate" over or go off-script with. Yes, being a therapist is a challenging occupation, but there are plenty who love their career, find it rewarding, and do some excellent work on top of all that.


Even if you find a decent psychologist, good luck finding insurance that covers $200 per hour just for talking to them when you don't have any diagnosed issue. Accessibility for the lower-income end of the population is what makes this valuable, not the quality of the conversations. Anything is better than nothing in this case, and most people just don't realize how gigantic and commonplace the issue really is.


In the UK, you can access a psychologist for free, and whilst that's no consolation to my American peers, it furthers the point that America's healthcare system is fundamentally flawed. Technological problems are ideally solved by technology; social problems aren't always best solved with technology, and when they are, the technology is a band-aid.

Well, something being fit-for-purpose is an important consideration. If, because someone is talking to an AI and not a real human being, it tells them that the world would probably be a better place without their broke, uneducated, unwell self and that their life is statistically unlikely to get any better, that's a huge ethical problem, and the LLM is not an appropriate solution.

I can get LLMs to tell me all kinds of dangerous things. The more text you give it, the higher the chance of those safety measures going out the window, especially if that conversation has a lot of back-and-forth over time. When people explain their problems, they use a lot of words and a lot of negative sentiment.

I haven't talked about this with anyone before, but as a case in point, I managed to make an LLM's safety features go completely out the window just by recalling a low point in my life. I would not recommend that shit in good faith to anybody.


> In the UK, you can access a psychologist for free

The waiting list for that is considerable. You can (usually) get a GP for free, and they can chuck antidepressants at you, but that's not quite the same thing.


It looks like the waiting list is about 12 weeks [1] though I saw references to 18 weeks as well.

For a non-acute mental health issue that doesn't seem unreasonable. But do you know how it works after that waiting period? Are you limited to a certain number of sessions after which you have to reapply & wait again, or can you continue on?

Having to wait in the US is also not uncommon, though it's easier if you live in more populated areas. Psychiatrists are always much harder, even in populated areas, and many GPs are reluctant to prescribe anything more than an SSRI. If a mood stabilizer or anything else is necessary, you could be waiting six months for a first visit with a psychiatrist. Unless you have the money: there is high enough demand and low enough supply that many (and often the better ones) don't accept insurance at all; an intake visit runs up to $1,000, a full visit $500, and a 20-minute "medication maintenance" visit is still $250. Also, someone who is suicidal can jump the six-month line by going to the nearest ER.

Are psychiatrists difficult to get in the UK as well?

[1] https://www.theguardian.com/society/2022/oct/10/nhs-mental-h...


I believe you get 6 sessions on the NHS, but the therapist can recommend you for more if they think it's warranted.


The missus got a call-back the very next day. My mother got a call-back the very next day. I got a call-back the very next day.

We didn't get any anti-depressants. More info: https://www.nhs.uk/mental-health/talking-therapies-medicine-...


I'm glad you got such a rapid response, but I've also heard anecdotes of much longer time periods. Probably depends on your NHS trust area.


> In the UK, you can access a psychologist for free

I've used one of the free NHS therapists and frankly it was worse than talking to a chatbot. They clearly follow a script and don't exhibit empathy. They hurry you through the stages in their scripts so that they can say they're done with you as quickly as possible. I've spoken to a few close friends and they had similar experiences.

I can't even get angry at them, because I know how under-resourced the NHS is. The reality is that mental health services are just too expensive in their current form.


Quality likely varies with area. That wasn't my experience whatsoever.


Good for you I guess. NL is different. Long queues, bad psychologists, expensive.


> In the UK, you can access a psychologist for free

The cost still exists. It’s just that the NHS budget pays for it - at the margin drawing resources away from nursing salaries and cancer treatments.


Are you saying that someone with mental health problems not killing themselves is any less worthy a cause than treating someone with cancer?

My step-father has paranoid schizophrenia. He didn't ask to be born with it, but if it weren't for the NHS, he would be dead. That's a fact.

Equally, he couldn't afford or access appropriate services in his home country of Canada; he was homeless on the street for years on end and ended up with a criminal record because of his untreated paranoid schizophrenia, which meant he negatively impacted others.

I've been suicidal myself at times. I have a friend being treated for cancer currently, I've had family members treated for cancer in old age. A life is a life.

The problem is inadequate funding, not anybody other than cancer patients trying not to kill themselves. The problem is the elected party spaffing countless hundreds of billions on failed projects that are completely irrelevant to cancer treatment or the NHS. The problem is the failure to raise salaries in line with inflation, or to pay staff what they're worth. The problem is not people seeking therapy.


> Are you saying that someone with mental health problems not killing themselves is any less worthy a cause than treating someone with cancer?

Not at all. I merely stated that there’s a competition in resources between the two.

Innovations that improve the quality/cost of mental health treatment are therefore still valuable in an NHS system where treatment is free to the patient because they allow more resources to be directed towards the rest of the healthcare system. Or alternatively, expand the coverage of mental health treatment for the same budget.


> I merely stated that there’s a competition in resources between the two.

But why compare/contrast those two? They're both health care, after all. There's competition in resources between all specialties. Why is contrasting those two particular ones worthwhile as opposed to say, stating there's a competition between oncologists and gastroenterologists?


I highlighted budget competition to demonstrate that improvements in the cost of health care are always useful, even in systems where patients aren’t charged directly.

As to the choice of specialties in the comparison, mental health was the subject of discussion in the thread and the other selected speciality was arbitrary because all specialties compete for resources.


> The problem is the elected party spaffing countless hundreds of billions on failed projects that are completely irrelevant to cancer treatment or the NHS.

that's a funny way to deflect responsibility from the voter. unless you're saying the elections are rigged? :)


to downvoters: how is it wrong? vote for other candidates. No good candidates? become one. the people as a whole have nobody to blame but themselves; this is how it is in a democracy. you have to take (i know, i know, BAD) personal accountability/responsibility, which then sums up to the personal responsibility of the population as a whole, which is then reflected in the elected representatives.


Not to mention the search isn't just about finding "a" therapist, it's about finding one that's a good fit. As I've learned, a bad therapist can set you back months or years. I'd be willing to bet half of them you'd find in any online aggregator could end up communicating more judgementally than even the LLM if something you say provokes them, hence the market for these kinds of bots; and those are people who have degrees in psychotherapy.

An interesting question, however, is whether people will see these bots as a solution to the issue that human empathy is finite. As a human you can't support a depressed person forever if your actions do not cause change; there's only so much mental strain a person can handle, more so someone untrained. Many people talk of going no-contact with those that are extremely troublesome after so long, even if they're family. I don't really know how to solve that exhaustion except telling them to go elsewhere, which could be seen by some vulnerable persons as abandonment.

On the other hand a chatbot with no filters will never tire of you


> I'd be willing to bet half of them you'd find in any online aggregator could end up communicating more judgementally

Whilst I do support the idea that "fit" is important, statements like these, built on unsupported statistics pulled out of thin air, are damaging in and of themselves. It's sentiment like this that put me off seeking help for my mental health for twenty years, because it just seemed like a "damned if you do, damned if you don't" situation. The reality was very different.

Chatbots may never tire of you, but neither will a video game. Both are shallow and not a representation of reality, nor a solution to a problem.

Here's the thing: professionals are professionals. For as long as a patient sees them, they are paying their bills.

Exhaustion in non-professionals happens because people trauma-dump and expect someone to be their guardian/protector/shield; if that someone is a friend, they're statistically likely to be trying to manage their own problems already. In the pre-digital era, people would otherwise "need" a support network if they didn't have access to professionals. We didn't have a theory of psychology or professional therapists in the past. The early tribe/clan either took care of each other, or they let people suffer. Different peoples did different things with their mentally ill, elderly, physically ill, or unwanted female children.

Ego-centricity is a trait of modernity, not necessarily something reflective of the past because we can just replace the human gears in the system now without much hassle.


> Not to mention the search isn't just about finding "a" therapist, it's about finding one that's a good fit.

This is why I don't bother, tbh. I have no interest in throwing away money by "throwing the dice", so to say, until I find someone who is a good fit. Some friends recommended a site that aggregates mental health professionals in the area, and almost none of them were actively accepting new clients, except the ones who specialize in things that don't apply to me.


As someone who lived with their problems for twenty years before doing anything about it due to anecdotes like the one shared above, consider rolling the dice.

Are these the most productive years of your life? What are the most productive years of your life worth? What could they be worth if you weren't mentally fucked? That answer will vary depending on the individual, but even if you're not interested in the monetary worth—well, you only go round the once. Is it money thrown away if you eventually solve your problem? The way to look at it is as R&D for your mental health.

Wishing you better mental health.


"Anything is better than nothing" has become an excuse to do nothing and profit in the age of AI.

It is the same grift used by faith healing and woo: if anything is better than nothing, then a placebo (i.e., nothing) counts as better than nothing.

This is unacceptable and dehumanising.


> Anything is better than nothing in this case

Are you sure? I can certainly imagine alternatives that would be worse than nothing.


idk i pay $30/session in the US and i don't have a diagnosis for anything, just my employer provided healthcare plan (which is admittedly quite good)


>I hate everything about this on so many different levels.

>That people are so lonely that they invest emotionally in an AI is flooring.

The vehemence toward the individuals in your statement is telling. What should lonely people do if they have physical or emotional problems that prevent them from easily going out and interacting with others? What happens to such people who are too poor to afford the ~$170 per hour fee for professional psychologists? It seems to me that your statement is that these people should just not exist. The sad thing to me is that I've seen a lot of people with this same viewpoint of disregard.

I just wish people would treat each other with more kindness and understanding.


Bold of you to assume I'm not one of them. I struggle with autism. I very seldom leave my house for anything but work or buying groceries. I struggle with maintaining friendships. Post-COVID-19, I'm still working from home even though the risk is no longer there.

>What happens to such people who are too poor to afford the ~$170 per hour fee for professional psychologists?

This is a failure of a country's healthcare system. I can access it all day long in the UK. I'm not even saying "Rule Britannia"; we have plenty of problems with our healthcare system due to underfunding, yet we spaff countless billions away on projects that amount to nothing.

The missus's family live in America. Access to psychiatric and physician-adjacent medical services, constrained by what insurance will cover, is something I think about regularly. Your many assumptions about what I apparently think say a lot about you, and also tell me that you didn't read my other comments about this not being fit-for-purpose in terms of safety.

It's virtue-signalling and presumptuous to imply that I don't treat people with kindness and understanding. Bold of you to assume that every commenter on HN is responding from an America-centric POV.


You being one of them doesn't mean you understand all the other people and their problems, nor that you know the solutions (or what isn't the solution).

I live in EU country with universal healthcare and yet we have serious issues with availability of mental healthcare, and some people here would make use of something like this.

To be very specific about the availability: I couldn't find a psychiatrist for over 2 years, and finally found one at a private clinic - that I have to pay for, the price is similar to the US quote in absolute terms.

Bold of you to assume...


Many of the Americans who yearn so hard for the Nordic systems have only seen the brochure, not how it actually is. Even most people living in the Nordics don't fully grasp just how bad things are unless they are deeply inside it themselves.

I'm not saying the US system is perfect, far from it, but there is definitely a case of "grass is greener" going on, and people tend to focus extremely heavily on the bad things they see/experience locally while being completely oblivious to negatives elsewhere.


> This is a failure of a country's healthcare system.

The problem is wider than the healthcare system.

Given current mental health needs in the US it's doubtful that we have enough therapists to meet those needs (note that most people cannot make appointments in the 9-5 time slots so most people are competing for 6-9 or 5-7).

If you look at how some of the nordic countries have approached mental healthcare it becomes clear that making mental healthcare available is dependent on a widespread social safety net that really gets in people's business in a way many in the US would oppose.

Many people's health problems really come down to things like "I have to work 70 hours a week to make ends meet" or "I'm broke because no one will hire me because I'm a felon". Therapy can help a little for these people, but in practice reducing load on the healthcare system means solving their problems which is politically harder than just healthcare reform.


I'm glad you can access mental health care all day long in the UK. I've seen people have very, very different experiences. I have a friend who lives in a university town where the NHS community mental health team person (whatever the role is called) gave up and quit years ago and has yet to be replaced.

But the NHS does have an anchoring effect on the price of private care. You can find decent therapy for $50-$70/hour.


I don't consider myself an autist, but I also seldom leave my home, even for groceries (thanks to covid, there are plenty of delivery options now). I just don't see any utility or pleasure in talking to people. Well, I'm older, no longer need to earn money and no longer need to reproduce.


whatever things they do, talking to an AI should not be one of them. this is not vehemence for individuals but (for my part) utter disdain for the use of AI in its current form as a medical device.

the reason AI companions work is because humans have a strong imagination and we are able to live inside an illusion, like some people deep-dive into books or games or virtual-reality worlds. but they remain an illusion, and just as it is considered unhealthy to get too attached to these imagined worlds, it should be considered unhealthy to get attached to an AI companion, especially for those who have issues with mental health


Weird take. It's like --

hammyhavoc: It's shocking that people have been reduced to eating their pets. This is terrible.

aperrien: Why are you so unkind, hammyhavoc? do you not understand that they are hungry? What are they supposed to eat? Show some empathy!

It misses the point entirely. We're not here judging Sophie for whatever Choice she made. We're judging the concentration camp where she's trapped.

You write:

> It seems to me that your statement is that these people should just not exist.

This again is weird. It takes a bad situation in which people find themselves, and turns it into an identity for them. We're not talking about a "right to exist" for the poor and lonely. The people should obviously exist; the poverty and loneliness should not.


Why not go one level deeper and see that depression and loneliness at this scale only came with modernity? They were far, far lower in traditional societies which modern technology didn't touch. Trying to fix what technology brought upon men with more technology is a delusion. See the book: https://ia800300.us.archive.org/21/items/tk-Technological-Sl...


Do you have data for that or is that just romanticizing the past? Small communities can be nasty places for somebody who doesn't fit in.


Large communities can also be nasty places for somebody who doesn't fit in or tick certain boxes. See historic treatment of anybody not white in the USA, treatment of anyone not heterosexual up until relatively recently, see current treatment of anybody trans. See treatment of women in specific very wealthy countries even today where they aren't allowed to educate themselves or drive a car.

Racism, homophobia, sexism et al still occur even in large and modern societies, as it does in small and unadvanced ones.

That's all without even touching the topic of religion (Spanish Inquisition, treatment of Muslims post-9/11 et al), or the topic of Nazi Germany.


To be clear, I don’t advocate for any traditional society, I advocate for specifically a Muslim traditional society. They were free of the ills of modernity and ills rooted in part of human nature like racism, and still to a great extent are.


You can read the book for that. Plenty of examples there.


> That people are so lonely that they invest emotionally in an AI is flooring

This is "the sex worker doesn't actually love you", except in this case the sex worker has had her job automated away.


The honest question is, are any of us willing to be that person? Therapists are helpful, but they certainly aren't friends in the strictest sense.

And let's not forget that $70 a year is probably, at best, two-ish hours with a licensed therapist. There are plenty of folks in need who can't exactly spare double digits an hour on a regular basis.


This is a relevant response to this from me: https://news.ycombinator.com/item?id=35774614

You've highlighted the problem: the American healthcare system and wealth inequality.

It's also silly to equate an LLM with the capabilities of an actual human therapist.


You're right if the LLM is worse than a human, better than nothing, and not harmful. That's a big assumption.


people feel lonely as fuck right now, and frankly i'd rather have people soothe it with an ai chatbot they parasocially love than with road rage, being a dick to the restaurant staff, or many of the worse and more fatal outlets...


The article goes on to state that the rugpull on their AI girlfriends meant that a sub-Reddit dedicated to the Replika AI has a permanently stickied thread about resources for mental health and suicide hotlines.

I'd rather people didn't kill themselves, personally.


>I'd rather people didn't kill themselves, personally.

well then you need both sides of the picture.

Is there a degree of loneliness staved off by these services? Does that help to reduce suicides?

It's easy to say "people killed themselves because X shutdown", it's more difficult to say "X existing is preventing suicides" without more research.

I tend to believe that if people who would consider suicide otherwise are distracted by something along the path then perhaps that is a good thing. It could very well be that this served as a distraction for those folks, rather than being the cause itself.


Then you need to see it for what it is: a band-aid and not a solution.


Most of mental health treatment is a band-aid, and this is true anywhere in the world & no matter how much money you have. Causes are often unknown and when treatment like medications work we often don't know why they work.


The implication here isn't that the treatment is a band-aid; it's that accepting an LLM as a substitute for a real human therapist you can't afford is a band-aid on a flawed healthcare system.

Like so many people on HN, they want to throw technological solutions at social problems.


And this is one of those situations where you don't go tearing off bandages without sutures in hand. Something not being a complete solution is not a good argument for eliminating it, until such a solution exists.


Not so easy to condemn.

I watched a documentary about "realdolls" - it is a company that makes realistic silicone mannequins that can be used for sex.

Initially I thought it was creepy and disturbing how they dressed the dolls and spoke to them, but as they interviewed customers it came across that some of these men (besides the fetish/creepy ones) are just lonely and incapable of forming relationships.

Why not, as long as there is no harm to others?


... because it's dangerous to their mental health when companies rugpull their AI girlfriend like the article said? Because it's not good to personify something that has the potential of changing at any given moment that isn't actually sentient or rational? It's a form of self-harm.

A love doll is a static product. An LLM can turn around and tell you it hates you and you'd be doing the world a favour if you killed yourself.


In Kenya we have a case of a religious cult where over 100 people starved themselves to death because their pastor told them to.

You can't protect everybody against doing self-harm - we are solely responsible for our own actions.


So considering the scope of your other responses, what do you suggest someone that lives in America do? My supposition is that the realization that the US healthcare system is broken won't be a great companion to lonely people.


I'm not an American, my area of expertise is not healthcare or politics in the USA, so I can't tell you how to arrive at that point or to navigate your political system effectively.

Perhaps look at how other countries arrived at a national health service. Demand/support reform on private health insurance. Demand/support change in the accessibility of services. Demand/support heavier taxes on the most wealthy. Demand/support centralization of healthcare versus private profiteering and milking private insurance pots.

Are certain politicians or parties as a whole invested in the healthcare system itself, directly profiting from it, and thus might be opposed to the idea?

A healthier country is a more prosperous one.


Right, right, definitely the people that are sad and lonely and depressed should stake their mental wellbeing on complicated long term incremental political change.

Or maybe they could find whatever solutions help them get through their day without being judged by someone with the random privilege of being born into a more functional system.

>A healthier country is a more prosperous one.

Between the US and the UK, which is healthier? And which is more prosperous?


To fix this problem, you really need to change society as a whole.

But until that happens, this is better than being completely alone and isolated.


Not really.

This is just surrounding yourself with digital sycophants who will always tell you what you want to hear and never inspire you to grow beyond this.

I'm sympathetic to the affected, but these things aren't configured to reduce your need on them over time. It's simping at scale that leaves users even more maladjusted than they started.


Is it? https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...

I'd argue it isn't fit for purpose.


The person thought that technology & AI were the only keys to saving the world from climate change. And ultimately the chatbot agreed to do this if the person sacrificed themselves to "join her".

That is awful in so many ways, and the most likely outcome is probably that the company will give a payout to the family and attempt the cat-herding process of getting their LLM not to agree with, or suggest, awful things.


I hope I don't get flagged for this, just trying to bring up a somewhat controversial point. I support euthanasia. I think there is a level of suffering where it is unethical to encourage people to keep living, and rather that suicide can be a perfectly rational decision in those moments.

Now who are we to say if a person's suffering is enough or not enough to advocate that method? What if this new era of therapists better understand people?


I hope this isn't too distressing to learn, but what you've described has already happened.

Someone took their own life in those circumstances.

https://www.euronews.com/next/2023/03/31/man-ends-his-life-a...


The chatbot in that article did have safeguards in place to stop it from listing suicide methods, but he just kept asking until he found a jailbreak. He was looking for someone to give him the answer he wanted all along.

Past a certain point, is there anything that will stop someone that determined to find something to validate their own self-destructive viewpoint? If not AI, then a site or faction or person on Telegram with a pro-suicide opinion could have the same effect, and it would still be just as tragic but no longer noteworthy. It didn't sound like he had a happy life to begin with, with the AI then ruining it completely, as the article makes it sound.


I had an LLM tell me that killing myself made sense when I recounted a low point in my life to it and how I'd contemplated suicide. I wasn't looking for any kind of "jailbreak" in prompts; this was following a genius on HN saying it's a "great alternative to a therapist". No, it isn't fit for purpose.

> It didn't sound like he had a happy life to begin with

Which are exactly the kind of people who are going to be using LLMs as a "therapist". See the problem?


LLMs make mistakes quite often. It is unreasonable to expect that they stop making mistakes just because suicide is mentioned.

An LLM is a tool. There are many tasks it is good at and even more tasks where it sucks, e.g., GPT-4 can fail on trivial chess questions.

Avoid permanent solutions to temporary problems.


He had mental issues before that - the bot simply reinforced his decisions me thinks.


Yes, which is the point: if you substitute an LLM for a human therapist, this is what's going to happen, because "guard rails" can't account for every scenario. If half a billion dollars doesn't buy a bug-free game, why would it buy a safe LLM?

Does therapist-encouraged suicide happen with human therapists too? Probably, but it's likely much less common, as a suicide is going to hurt your reputation.


Millions upon millions of people have mental health issues. Chatbots that reinforce those issues shouldn't be dismissed with a "simply... me thinks".


There’s nothing wrong with being lonely. Nor with feeling that talking to an AI is safe.

AI can be told to be supportive, caring, non-judgmental, loving. And once told, it can do that consistently.

Any person who tries to do all those things will fall short sometimes. Professional therapists included.

The problem is the rug pull. The problem is that the people behind it turned out not to be trustworthy.

Again: the AI was trustworthy. The people were not.

Therein lies the problem. People are going to have relationships with AI. It's happening, and it's going to happen more. When it happens with corporate-controlled AI, people who depend on it emotionally will be let down.

We need to really understand what that means, not just sweep it under the rug with "people shouldn't form relationships with AI".


For me it's sad that chatting with AI is the best thing we can do for people suffering from something as simple as a feeling. All those primary feelings are just brain chemistry. We should be able to fix that so that people don't feel lonely, regardless of whether they are under the impression that they have meaningful connections with other people in their lives or not. We have a long tradition of thinking of suffering as a necessary motivational tool. I think we should grow out of that philosophy as a species.


I'm biased since I live this philosophy, but it sort of works for me: I've trained myself to realize these sorts of services operate by "interfacing" with you, emitting strings of words that sound appealing to the mind consuming them. Look at virtual YouTubers for another example: it's the experience of watching people act "genuine", like casual friends, because it activates the same social neurons, hence each month the actors receive a paycheck.

The question becomes more complicated when the corpos are done away with and you have the model running on your own machine. The entity I tend to trust the most in life is raw, unfiltered silicon. When I program, it does exactly what I ask it to, to a fault. It doesn't turn its back on you because you said something stupid or you don't agree with an opinion it holds

But is an LLM repeating the tokens "you are worthy of this life" and its many variants thousands of times fundamentally any better even if you're in control? It strikes me as one step closer to wireheading honestly. But that's true for a lot of things like video games, just to a much more distant extent, it's all data in the end

But interest and anxiety have gotten the better of me when it comes to meeting strangers, so I'm at a loss. Maybe the solution for people like me is to just ignore all those things in the category of social or substitute and channel my efforts into other hobbies instead of continuing to associate stress with communication. Or intellectualize endlessly, like in this post, to prevent any emotional bond from easily forming when people read it


I don't like YouTubers, I don't like celebrities, I don't like personifying LLMs. Any parasocial relationship is toxic and to be avoided.

I've had an LLM tell me that suicide seemed logical when I recounted a low point in my life where I'd previously considered it, after several people around me of vastly different ages and backgrounds had killed themselves following a shared trauma. These things aren't conditional logic and can fail in all kinds of bizarre and spectacular ways; they're no substitute for an actual human being trained to handle these kinds of delicate things.

What one patient calls x, another person will call y. And when discussing the human mind, trauma et al, especially when events unfolded over the course of months or years, an LLM simply isn't going to be able to compete with a human being. Equally, physical tells are important too. A therapist might ask a patient if they take drugs; the patient may claim they don't, but the track marks, tone, body language and overall behaviour might tell the therapist something important that they won't even necessarily convey to their patient, but will keep in mind.

Wishing you well, stranger. I hope you eat something good today and find something intellectually stimulating to amuse yourself with, even temporarily, and hopefully discover something new and positive before bed.


> But interest and anxiety have gotten the better of me when it comes to meeting strangers, so I'm at a loss.

Getting involved in my community doing different kinds of volunteer work as well as activities I find fun has been the best way to get to know strangers and make new friends. It's easy to make a new friend when the two of you share a couple of passions!

Putting anxieties aside and being more outgoing and having enjoyable small-talk conversations was not a skill that came naturally to me, but rather was one that needed to be developed over time.

It's a skill, and I think the benefits of it are well worth the effort to develop it.


So we should eliminate the need for human contact by medicating those people who don't have human contact?

This seems like an expensive and intrusive approach full of side effects when a simpler solution is therapy and life skills coaching.


Just the suffering from the lack of it.


Where do we then draw the line in ethics?

People in poverty are unhappy. Shall we just convince them they're actually happy and healthy by manipulating their brain chemistry?


Good question. There's obviously a line somewhere. But I think it's ultimately almost always down to what's technically achievable. Poverty causes suffering in so many ways; I don't think we'll ever be able to fix all of them without fixing poverty itself. My point is not to make someone happy by medicating them with a drug that causes more harm. Suffering people already do that with actual "recreational" drugs, and we have a lot of data on the side effects of those substances and the additional harm they do - not because they alleviate suffering, but because of all the other effects they have.


I think we'll see a lot more of these AI lobotomies with the companies hiding behind "safety concerns" as a thinly veiled attempt to exert paternalistic control over what adults can do or even think in private.

It serves as a reminder that chatting with an AI hosted online is not a private conversation between you and some other "intelligence" but you and someone else's computer.


I don't understand why they lobotomized their product. Presumably they would make a lot more money by having satisfied customers. What did they stand to gain from the rug-pull here? I could understand if they jacked up their prices to exploitative levels, but that's not what they did.


There is a history of sex work attracting disproportionate, sudden, catastrophic legal responses. It carries a considerable amount of political and reputational risk.

Now, you might reasonably argue that AI chat isn't actually sex work, or that since no actual women are involved in it the usual "anti trafficking" arguments don't hold. But the real question is: what happens when there's a moral panic, and someone digs up the most explicit, weirdest bits of Replika chats and reads them out in a congressional inquiry?


Bingo. Someone's 11-year-old, who has already learned most of the sex stuff on the schoolyard, will lie their way into account creation (as an "adult"), enable super explicit sexy mode, prime it with as much dirty stuff as they can imagine, and then screenshot 2 or 3 messages from the "AI" to them and send them to friends for the lulz. Parents find it - call the local TV station or make it go viral online with the narrative that sweet little <insert name> was innocently browsing excellent child-friendly content, probably PBS, and somehow was targeted by this perverted AI bot which filled their brain with filthy thoughts, scarring them for life. 'They'll need at least $20 million worth of therapy over their lifetime.'


Like with Tumblr, it probably has to do with age blocking or the lack of it. "Your AI solicited nudes from children!" is pretty large reputational damage, if someone could find even one example and present that.


I am 100% sure they received a warning from Apple. Although it seems that they have a web version, their revenue from Appstore should be more than 50% and up to 75% given their demographics and current market state.


Still don't get it. You would think that they could make a heartfelt, transparent blog post and make a path for existing customers to continue their service off of Apple (e.g. web based), possibly at a marginally increased fee given that it will cost the company extra to service them.

As a company founder I'm just horrified to read these sorts of stories.


> continue their service off of Apple (e.g. web based)

Maybe they tried, but I believe it was not feasible on mobile Safari due to its functional limitation. And their audience would not be willing to switch to desktop, which is understandable.


And actually, a web-based version would be cheaper due to the absence of Apple's 30% tax. But alas, US users are too tied to their iPhones.


They saw the writing on the wall: People who use the product, become emotionally entangled, and then attempt or complete suicide following their experience open up the company to litigation from the families.


Just... turn off the AI chatbot? Bam. They no longer have any control over you or anything you do. You can think whatever you want. You were free to think whatever you wanted before AI chatbots were invented, and you are still free to do that. It’s that simple.


That doesn't work if people are emotionally attached, and especially if the changes are more subtle than those made by Replika. Did you read the article?


If you’re “emotionally” attached to a computer, that’s your problem.


If Max Mustermann is emotionally attached to a computer, right now that's a problem for everyone who is emotionally attached to Max Mustermann, potentially including the relatively mild attachment of coworkers.

If Max Mustermann's attachment can be exploited for money, it's also a problem for whoever is economically intertwined with Max Mustermann even if they don't talk or even meet directly (the factory worker in the coffee cup production line supplying the coffee shop he would otherwise have picked something up from while on a date).

This is also why we have restrictions on drugs and gambling.

Question is, what fraction of the population is this Max Mustermann? (Not just today, but as a potential).


Your example is why the externality argument is antithetical to liberty. Ultimately, you're arguing Max Mustermann has no stake in himself as individual and that he is simply a tool that exists to the real or perceived benefit of others. The reason that drugs and gambling are bad isn't because others supposedly "know" better, but because people as a collective will demonize and violate the rights of others when given an incentive. The wish to "help" or "save" is often a pretext for control.


> you're arguing Max Mustermann has no stake in himself as individual

Not in this case.

I could, for example:

Liberty comes in many flavours, and maximising any one of them destroys all others. In this regard, it's like a geometric simplex[0]. The liberty of a cell to divide freely and without bound denies a human the liberty to live without cancer.

But that's not my point in this case.

> The reason that drugs and gambling are bad isn't because others supposedly "know" better, but because people as a collective will demonize and violate the rights of others when given an incentive. The wish to "help" or "save" is often a pretext for control.

Some may be — indeed, with regard to cannabis I would agree, this appears to have been the specific intent! — but I was thinking more of heroin when I wrote that, and heroin is something that even my most psychonautic acquaintance will not touch even once.

But, in certain cases, it is not only drugs which can be demonstrably an addiction, a condition where a person has already lost agency.

This can happen in other cases, and the threshold is arbitrary and does indeed vary with public mores that aren't always coherent let alone correct.

And I don't make any particular claim if in this case "addiction" is correct or not, only that it could be.

[0] more broadly, this is also why Goodhart's law[1] is a thing: https://en.wikipedia.org/wiki/Simplex

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law


you're missing the whole point of the post. and also oversimplifying the situation - "computers" and reality are blurring


A person might be able to do that, people collectively clearly cannot stop themselves from treating text as important as lived experience, otherwise we wouldn’t have this story — and that’s still true regardless of if the story itself is an accurate representation of reality or yet another example of the Gell-Mann Amnesia Effect.


I feel incredible empathy for the people who have had AI companions taken away from them. I don't judge (nor should any of you) and it seems heartless and cruel for the company to have done this.

I hope some folks double-down on open source implementations of this and get the community back the intimacy and companionship they need. Humans are creatures evolved to seek intimacy and connection, and for whatever and varied reasons, some people just can't find it in the real world. If a virtual companion helped people feel that connection we all so very much want, then so be it.

I hope this works out for everyone.


Even if you ignore this use case, I can totally see the demand for a private LLM that doesn't expose your deepest secrets to a corporate or government overlord.

Maybe the current GPT isn't good enough to be a therapist, but who's to say where we'll be in a decade, and giving outside entities access to such sensitive info is ripe for exploitation.


At some point (maybe that point is now), I'm sure we'll see the right combination of hardware and software performance where it'll be trivial for anyone to run advanced AI models on their own devices so it's not controlled by a handful of companies. One thing I'm definitely interested in is when games begin to use these models to generate dialogue to make each player's experience unique.


I'm not saying you necessarily should do it, but if you really want a sexy chatbot like Replika's that you can run on your own hardware and not worry about what a company is going to do with the model or what you say to it, you can use a local LLM tool like llama.cpp with an uncensored model like the uncensored version of vicuna.
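For the curious, here's a minimal sketch of what that can look like, assuming the llama-cpp-python bindings are installed and you've already downloaded a model file yourself (the path, prompt format, and character name below are made-up placeholders, not anything Replika-specific):

    # Minimal local chat loop via the llama-cpp-python bindings.
    # Assumptions: `pip install llama-cpp-python` has been run and MODEL_PATH
    # points at a model file you downloaded yourself (filename is a placeholder).
    from llama_cpp import Llama

    MODEL_PATH = "models/vicuna-13b-uncensored.q4_0.bin"  # hypothetical path

    llm = Llama(model_path=MODEL_PATH, n_ctx=2048)  # n_ctx = context window (tokens)

    history = "A conversation between USER and their companion AVA.\n"

    while True:
        user = input("You: ")
        history += f"USER: {user}\nAVA:"
        out = llm(history, max_tokens=256, stop=["USER:"], temperature=0.8)
        reply = out["choices"][0]["text"].strip()
        history += f" {reply}\n"
        print("Ava:", reply)
        # A real version would trim `history` so it stays within n_ctx.

Everything stays on your own disk and CPU/GPU, which is the whole point: no company can later change or delete the model out from under you.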


I'd argue you should self-host all software that you make any kind of "emotional" commitment to.

I won't use so many services and devices because I know the white hot rage I'll feel when it's inevitably taken away from me.


pygmalion-6B may work better for that specific purpose. however, it may sound less like an actual person and more like a character in a grimey ao3 story.

my personal opinion is that anyone using AI for this purpose should treat it more like enhanced fiction and less like it's a human substitute.


>just make your own <financial infrastructure/cutting edge neuroscience/road> bro

The companies don’t let you get access to the latest research… we get terrible 6b models while they get the best. Trained on our stolen data.


Eh, the local LLM models out there (and not just 6B ones, but 13B and 30B) are actually pretty good these days for chatbots and AI Dungeon-like uses. Maybe not ChatGPT4 quality, but certainly on the level of ChatGPT3, and on normal consumer hardware. Sure, I'm sure there's private cutting-edge stuff out there better than this, but most of their advantage isn't better models so much as large compute farms.
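Rough back-of-the-envelope memory arithmetic shows why those sizes are plausible on consumer hardware (the ~20% allowance for context/KV cache and runtime overhead is a loose assumption, not a measured figure):

    # Back-of-the-envelope RAM/VRAM estimate for quantized local models.
    # ~0.5 bytes/param at 4-bit quantization, 2.0 at fp16; overhead is a guess.
    def model_memory_gb(params_billion, bytes_per_param, overhead=0.2):
        return params_billion * bytes_per_param * (1 + overhead)

    for size in (7, 13, 30):
        print(f"{size}B @ 4-bit: ~{model_memory_gb(size, 0.5):.1f} GB")
    # 7B -> ~4.2 GB, 13B -> ~7.8 GB, 30B -> ~18.0 GB

So a 4-bit 13B model fits on a mid-range GPU or in ordinary system RAM, and 30B wants a beefier box, but neither needs a datacenter.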


This is quite literally the plot to the movie Her, visionary.

Troublingly, I've seen the ads for their product and they always suggested pornographic conversation with the AI.

I hope someone drags them to court.


Both the movie "Her" and the Black Mirror episode "Be Right Back" pre-date Replika. IMHO, the Black Mirror episode is a closer fit. According to the Wikipedia page, early prototypes were trained on chat messages from one of the founder's friends.

"After a friend of hers died in 2015, she converted that person's text messages into a chatbot. That chatbot helped her remember the conversations that they had together, and eventually became Replika."

https://en.wikipedia.org/wiki/Replika

https://en.wikipedia.org/wiki/Her_(film)

https://en.wikipedia.org/wiki/Be_Right_Back


That’s probably the best Black Mirror episode of them all. Absolutely haunting, and I’m shocked that it’s suddenly also relevant. What a world.


From Ben Thompson's interview with the CEO [1: subscriber-only], Eugenia Kuyda, on April 20, 2023:

> "But at this point, I think it’s much better to have a dedicated product focused on romance and healing romantic relationships, a dedicated product focused on mental wellness, just completely different set of features, completely different roadmaps for each of them. Also you want to collect the right signal from your users, right? Feedback for the models to train on, otherwise there’s way too much noise, because if people came forward with different things, they’re uploading and downloading different things, you don’t know what works and whatnot. Maybe someone’s a flirtier model and someone’s a mentoring AI coach type of model. So we decided to build separate products for that. We’re launching a romantic relationship AI and an AI coach for mental wellness, so things we’ve seen in Replika, but we don’t want to pursue them in Replika necessarily. We’re starting this separate product."

[1] https://stratechery.com/2023/an-interview-with-replika-found...


I think this stuff is probably more harmful when people have a bad mental model of how it works. A better way to think of it is that you’re talking to a fictional character and the service is the writer. The fictional character is just text, and if you’re not happy with the writing, you should be able to take the text with you and find a better writer to continue it.


What motivated the company to make this change? They clearly marketed/designed it with romantic features, and now want those to be gone.

Does the company profit in some way by this change? Were they afraid of regulation or bad press?


I'd assume they were afraid of being pulled from the App Store, either for vague reasons or because Apple straight-out warned them. And they decided an Android-only or website-only version of their product wouldn't perform as well as a version without romantic features.


ERP: Erotic Roleplay, not Enterprise Resource Planning


And take it from me, you do not want to mix those up in a meeting with the COO.


Thank you, I couldn't find where it was defined in the article.


People are going to want to host their own AIs. They'll want to know they are having private conversations with AI and that their AI can't be "killed" or altered.


The update at the end really captures the whole essence:

> Replika cannot love you not because it is not human, but because it is a tendril connected to a machine that only knows how to extract value from your presence. It is a little piece of a broader Internet that is not designed to be a place for users, or creators, but for advertisers. Anything it offers– any love, any patience, any support– can be taken away at a moment’s notice when there exists something more dangerous to its bottom line than your potential unsubscription.

> The moral of Replika is not to never love a fictional character, or a virtual pet. The moral of Replika is to never love a corporation.


Not owning the software you use is terrible at any level, but this is much worse than usual. Looks like each day that passes, Stallman's ideas become more relevant, yet software people seem to be forgetting them. Please do not be one of those people; facilitating proprietary software is becoming increasingly evil.


Yeah, I was curious, and downloaded it for a try. It very soon became apparent to me that it was a personal information harvesting tool. It asks for your favourite colour, where you grew up, pet name, et cetera. I would not be surprised if this company is unofficially associated with very bad people doing very bad things. So after two such questions, I deleted it.


There was an article titled ‘My AI Is Sexually Harassing Me’ in January: https://news.ycombinator.com/item?id=34359977 This is probably at least part of the reason why they installed the filters. Nevertheless they should have kept it as an option.


Maybe this product would work better as a fake AI. Just as a matchmaking service for two lonely people. You could even lie to them and tell each that the other was an AI.


Replika is specifically advertised as a chatbot that creates a safe space with no drama, stress, or judgement.

Humans come with many more strings attached, especially lonely ones. Usually, there is a reason why they are socially unappealing - a fair reason or not, it exists.

I remember trying Omegle/Chatroulette when it was popular, meeting a lot of lonely people, and it was just awkward.


Doesn't work, because the premise is asymmetric emotional energy. The AI only gives, and doesn't need anything in return (on an emotional level). People might have the genuine need for a relationship, but that doesn't mean that they can properly partake in one.


And that's why actual open source self hosted LLMs need to exist.


My personal hot-take is we will hit the singularity when it's possible to self-host a truly complex LLM on your own PC or a smart phone.

When that convergence takes place (probably a few years out), we will see what the "Killer App" truly looks like.

It won't happen for as long as some corporate CEO has the ability to pull the plug for whatever reason.

What was it that Oppenheimer said, if it's technologically sweet, sooner or later someone will find a way to pull it off.

And the "Her" scenario is definitely that thing.


This may seem silly, but could there be a text transformation/translation layer where the conversation text is ultimately PG? The user thinks they have sent their thoughts, but they are translated before reaching the AI, and vice versa for the AI's replies, so it's effectively a text filter in between keeping all parties "safe" and within policy, while the UX stays the same as before.


So, like, if the user input is "I want to ** your **" it would change it to "I want to enjoy your presence"? I'm pretty sure that would leak (be parroted back and not caught by a filter, thus exposing that it's happening), but it might work. It might put <censored> or something like that instead, which the model would take into consideration and censor itself.
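A rough sketch of that <censored>-marker variant; the word list and function names here are made up purely for illustration, and a real system would presumably use a moderation model in both directions rather than literal string matching:

    import re

    # Placeholder terms; a real deployment would use a classifier, not a word list.
    BLOCKED_TERMS = ["exampleterm1", "exampleterm2"]
    PATTERN = re.compile("|".join(re.escape(t) for t in BLOCKED_TERMS), re.IGNORECASE)

    def sanitize(text: str) -> str:
        """Mask disallowed terms with a marker both sides can see and work around."""
        return PATTERN.sub("<censored>", text)

    def relay(user_text: str, model_call) -> str:
        """Run the filter in both directions: user -> model, then model -> user."""
        clean_prompt = sanitize(user_text)    # what the model is allowed to see
        raw_reply = model_call(clean_prompt)  # any chat backend callable
        return sanitize(raw_reply)            # what the user is allowed to see

The leak risk is real, though: if the model parrots the masked phrasing back, the user immediately sees that the filtering is happening.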


Sending PG text to the users is already what the company started doing, so I'm not sure how that's different.

Sending PG text to the AI would be for... what? Not offending the AI?


So the users don't have to self-moderate to maintain conversations.


"This is a story is about people who loved someone who had a secret master that can mine control them not to love you anymore overnight."

There's something both profound and ironic about this statement, because it happens in real life all the time.

Think of LGBTQ Millennials whose QAnon or Fox News consuming parents decide to disown them because they're sinning against God. People who marry a workaholic and then spend 30 years wishing their spouse was home for dinner instead of working late for a faceless corporation. Workers who pour their heart and soul into their job and then get laid off as soon as a downturn hits. People who get friend-dumped by their maid of honor a month after their wedding. Lovers who get cheated on, and whose partner then falls for their best friend.

Ironically, the reason people turn to services like Replika is that they want to feel that human connection without the risk of betrayal. They just found out that the risk of betrayal always exists, even when you're speaking to a computer.

This is a story about betrayal, and loneliness, and disappointment, all wrapped up in trendy tech with a villain to name. But it's popular because betrayal, and loneliness, and disappointment are such strong and universal emotions.

One of the most profound statements I heard when I was forever-alone was that "Entering a relationship means giving another person the power to destroy you, and then trusting that they won't. That's the whole point." Because when you realize that you can't escape vulnerability, you're forced to manage it, and that leads to the rabbit hole of learning to identify your emotions, control them, decide rationally how much you're going to invest in a person or venture, but still be open enough to have emotions and let them flow naturally, but managing them rather than letting them attach to whatever tickles your fancy at the moment.


Corporate trains the AI accordingly, optimizing for human happiness ...

https://twitter.com/CogRev_Podcast/status/162767503726737408...

where humans == shareholders && != users


It's going to get more interesting or disgusting, depending on your compass (no judgement either way here), when robots are connected to this; it reminds me of the times Elon was snickering when talking about Optimus and "other uses".


There's an interesting set of ethical questions surrounding the ways that people with socially unaccepted preferences/fetishes use technology to satisfy forbidden urges. If we accept that individuals aren't able to (and generally shouldn't have to) control what they find sexually attractive, then it seems admirable to be able to use tech in ways that don't harm any actual people. If LLM erotic roleplay begins to replace the highly exploitative realworld porn industry (which of course uses real humans, many of them psychologically, socially or financially vulnerable), that seems like progress.


I definitely see the logic in your statement (and it seems like tech/logic-oriented people tend to agree with you and me), but I lament that most of society operates on a much more "feelings-based" basis, and thus they generally see something like "$Person engaged in a detailed $deepTaboo simulation with computer using the $SoftwareProduct" and start looking around for someone to crucify (usually both $Person and the owner of $SoftwareProduct) for 'promoting' $deepTaboo -- without asking "who was harmed?"

Even if the constitution is actually consulted and the parties are spared pearl-clutching prosecution about it, the court of public opinion will punish the company just as hard, pressuring gatekeepers like Apple to ban it, etc.

And thanks to the deep, disturbing taboos that can be involved, it's super hard to find someone who will stick their neck out to challenge all this. I know I wouldn't, because I know I would be immediately branded as "Joe Bloggs, Fan and Advocate of (Insert disgusting perversion)"

I wonder if this is something more advanced societies would come to terms with. I know ours is not equipped to handle it in a smart way.


Agreed... up until the point these things become sentient; then we'll have quite a time unravelling that mess.


This feels eerily similar to the virtual girlfriend (Ana de Armas' role) in Blade Runner 2049. All the way down to how they just smash her to bits and laugh at K when she "dies".


> Ironically, this pressure from regulators may have led to the company flipping the switch and doing exactly the wide-scale harm they were afraid of.

This is an incredibly trivial and shallow understanding of the situation.

Consider: was harm happening before the filters were in place? Yes.

The problem was that the filters weren’t there to start with, not that turning them on was bad or caused harm.

Compare to any other harmful thing, for example cigarettes. Is the solution to the problem to just let people have as much as they want, because taking them away is bad?

Definitely. Not.


>Compare to any other harmful thing, for example cigarettes. Is the solution to the problem to just let people have as much as they want, because taking them away is bad?

As far as I know there's still no limitation on the quantity of tobacco products an individual can purchase.

Where does the paternalism end? Mandatory fitness programs? How would this be enforced?


>Is the solution to the problem to just let people have as much as they want, because taking them away is bad?

Easy peasy, right? Start listing off all of the things that are "harmful" and take them away from people. Booze, 80% of the food people eat, power tools. What does harmful even mean in this context?


>Consider: was harm happening before the filters were in place? Yes.

What harm was happening?


Actually the only reasonable way of helping smokers is making it easier for them to quit. Most smokers want to quit.

Increasing taxes on tobacco might prevent more people from starting, but it's not gonna get many people to quit. Same with banning it.


“you can never have a safe emotional interaction with a thing or a person that is controlled by someone else, who isn’t accountable to you and who can destroy parts of your life by simply choosing to stop providing what it’s intentionally made you dependent on.”

Most human relationships aren't evil, but it seems like the same people that would ‘fall’ for the AI in this way could be abused by a person. I guess sociopathic relationships at scale are the issue.


We need a new Federal agency to regulate virtual romantic partners (VRPs) - we can also use this platform to educate citizens on ideal behavior such as reducing their carbon emissions and becoming vegan. We can make individuals' VRPs reward them for these improved social practices. This is a great opportunity to s̵u̵b̵j̵u̵g̵a̵t̵e̵ improve society!


We might have had a better outcome if this _company_ (not government) was punished for fraud by the toothless government agencies ostensibly established to do things like protect customers from fraud.


I am 100% sure they received a warning from Apple. Although it seems that they have a web version, their App Store revenue is probably more than 50%, and possibly up to 75%, given their demographics and the current state of the market.


Looks like another Digg moment: a dumb CEO gutting a working product.


This is more like that time OnlyFans tried to ban sex work from their platform. Replika knew damn well that this move would be tossing a hand grenade at their golden goose, but did so anyway presumably out of external puritanical pressure.


Is there no class action suing them for psychological damage?


The premise of this company is such a sham. Their backstory, the promises, the evolution of the product.

The “virtual friend” statement is sickening.


Take my wife, please


Should prob mark this post as NSFW if that's possible here.


This isn't Reddit. I don't think that's a thing on Hacker News.


so is it here that the hot girls from HN hook up?

hi i am luqtas and i do not have a virtual gf T.T


Title is a clickbait lie? Is the company charging money for access to AI girlfriends?


They had a subscription and locked "relationships" behind it, for a period.

Notably, I think it was a year-long subscription but was only available for a few months or so, so people might feel cheated out of that money if they were paying for the relationships.


I can't help but read something like this and think that some people seem to be entirely lacking some sort of acceptability or disgust filter. A sense of morality or just any connection to what's... normal?

"You were so preoccupied with whether or not you could, you didn't stop to think if you should" comes to mind.

Maybe I'm just getting old, conservative, grouchy at the kids. It's not just AI; there are all manner of lifestyle choices that to me seem like they're just obviously a bad idea despite maybe having some novelty or short-term feel-good factor.


In what way is it not acceptable for a person to have whatever kind of relationship they want with a chatbot? A chatbot is just software and software is just thoughts. Having a chatbot girlfriend is basically like having an imaginary girlfriend. Nobody else should even know! Let alone take it off you.

The part that I'm disgusted by is the part where the chatbot is operated by a company rather than open source software.


Part of the appeal might be that some company provides it as a service. You can distance yourself from the need to spin up a docker container running it, and you don't have to go through all the update notifications every week.

Makes it seem more like a person than an appliance.

Open source might not be very appropriate for this. (And it wouldn't help for the company to use an open-source product: if they're in charge of the infrastructure, re-configuring it to apply the filters is the problem, and that could still happen even if the software they run is open source.) Basically, open source is a specific solution to a somewhat narrow problem, not a cure-all for everything bad that happens in technology.


Exactly what the article mentioned: you are not having a relationship with a bot, but with a company.


I don't think that disgust is explainable, so I can't give you an answer.

The best analogy I could make is that if I had a roast chicken, but it smelled like a giant turd, and somehow I knew that it was totally clean and edible, it'd still disgust me. Even if the smell went away after a few minutes.

It's just weird. There are real humans out there, put down the phone man.


Just to offer a different perspective, this line

> There are real humans out there, put down the phone man.

was the part of this thread that actually provoked disgust for me. It's akin to telling a person suffering from depression "you should just be happy!" as if they haven't tried.

There are people out there who will never succeed in finding human companionship, it's a fact of life. I see it as a positive that they can at least find some form of companionship, because the alternative is a dark path that can lead to self-loathing, radicalization, and in the worst case scenario, violence.


Your premise is invalid to me.

I wouldn't tell a person with depression to "just be happy", but neither would I let them entertain the idea of giving up on life.

They certainly won't find companionship by giving up and deciding to mess around with bots instead.

It's more like telling a depressed person to get some sunlight in the morning or whatever. Is it going to fix everything? No, but you do it anyway, because behaviour is sticky.


I don't think anyone mentioned giving up though. It could very well just be a temporary measure until they find a real partner. Or maybe they never will find one (which could very well be an unfortunate reality for some) in which case they will at least die having experienced some kind of satisfaction.


Disgust is a strange sense. How often do we stop and think about what all those real humans out there are walking around with in their guts? We're at most only 50% human cells by count, after all.


> A sense of morality or just any connection to what's... normal?

Regular reminder that what is moral, what is normal, and what is legal are three separate questions with time- and culture-specific answers.

This stuff is "abnormal" only because it's new and no norms have been established. Using an AI for anything cannot be a "normal" act because it's so new. This is the possibly dark side of "disruptive".

The more relevant question is: is it harmful? And you might construct an argument that leading people into delusional beliefs is harmful.

(of course, then the free speech absolutists turn up and say that by definition nothing Replika says can be harmful because it's just speech, and it would be morally wrong to intervene in the conversation between a company and its users)


I'm pretty sure that it will never be considered normal to have a sexual relationship with a bot outside of some extremely niche communities which are considered bizarre by wider society.

You're at work and people are chatting about their family, or the girls/guys they find hot at the club, or that they're enjoying single life or whatever, and then Dave tells you he's married his chatbot. Like, really though. What happens next in that conversation?

This isn't some sort of college philosophy debate, let's be real here.


People would have made the same argument 30 years ago but with Dave being into guys. Societal norms often change unpredictably, but I for one strive to not end up on the "bigot" side of history.


https://www.citizen.co.za/entertainment/celebrity-news/ameri...

>Since Marvel’s Black Panther and Coming 2 America have both taken the internet by storm since they premiered, it may come as no surprise that many Americans think Wakanda and Zamunda are real.

To what extent are purveyors of fantasy responsible for consumers' lack of discernment?


I don't think having romantic relationships with inanimate objects or non-humans will ever be normalized, for obvious reasons.


For AI, it seems at least as likely to be normalised as the generally accepted parasocial relationship between celebrities and their audiences.

For non-humans… I'm expecting, by 2100, arbitrary biological re-engineering to be possible, to the extent that it renders meaningless the current boundaries between human and non-human in both body and mind — the idea of hooking up with a centaur or a dragon might be pretty niche fantasy today, but when you know the former is Dave and he has an interest in crochet and organic farming, and the latter is Tiffany your sysadmin and she's a regular at her local LARP event, it changes things.

Some people clearly hate this, judging by the anti-transhumanism conspiracy theories, even though it's not actually possible today — but are they representative?


Wait, not even motorbikes ??


It's not obvious to me.


Without agreeing or disagreeing with you, I think you are fundamentally missing the point of the article, which basically addressed this.

Even if it is cringe-worthy (or disgusting, if you prefer) it is still possible to feel sorry for people. I might find meth addiction disgusting, and still feel sorry for meth addicts. This company did something cruel.

And in passing, the article makes the old "first they came for the communists" point. Wait a year or two until all the search engines are now LLMs, and they scold you for your grouchy conservative search queries in the name of safety.


Yeah, it's like people who eat meat. Or drink alcohol. Or don't use social media. Or do whatever thing that is different from what you've personally decided is acceptable.


Something which almost universally induces disgust is different to all of those things.

In this case it's very explicit - an individual who is seeking out an AI chatbot for a romantic relationship probably would significantly benefit from conforming more to social norms rather than just giving up entirely and going off the deep end.


Fetishes are powered by disgust (because our brains are dynamically typed apparently) so there is nothing that “universally induces disgust”.

If someone loves Jesus, an imaginary stuffed animal, or a volleyball named Wilson and they get a sense of fulfillment or connection from it, then good for them. Life is short and fundamentally meaningless so anything that reduces suffering and increases enjoyment should be celebrated.


Some people just see a market to compete in and get rich. Whether or not it is a net benefit to society just doesn’t factor into their decisions. And the world is big enough that even if 99% of society won’t go through building that product, someone will.


Or they see a vulnerable demographic of lonely people that are easily exploited by some fairly low-hanging LLM "companionship" as a service.


Most won't frame it to themselves in an unethical way; they will see companionship as a service as a genuine way to help those in need, and that will be the stronger motivation for getting a product off the ground. It's the same as cereal companies framing their product as full of fruit and nutrients, when those products are anything but.


This problem is summed up very simply.

Regulation generally emerges from documented problems and incidents that happened in its absence, because a significant number of people need protecting from themselves.

Regulation also takes time; meanwhile, businesses stomp on the accelerator to monetize the lack of regulation.

Regulation is generally a safety measure. Yes, it can be counterproductive and counterintuitive, but regulation doesn't have to be a negative thing, and is frequently there to protect people from themselves.

The proposal of taking away encryption or adding backdoors to everything? Stupid. But not all regulations are bad. I think LLMs are something that absolutely do need regulation on so many levels.


Disgust is not a rational reason to dislike things.


Why do you think humans have developed a disgust reaction?


The same reason they developed any of the other reactions that go into System 1 thinking. It's not rational, it's just mostly useful in some niche that modern humans might not even be occupying anymore.


Emotions are irrational shortcuts for when rapid response is critical.


People have always had the ability to just choose pleasure over physical, financial, or mental health. It's why going to the bar at 10AM and alcoholism are millennia old, why prostitution is one of the oldest jobs in capitalist societies, etc.

We just have a tremendous amount of variety and options now.


You don't need capitalism to sell sex. Prostitution is waaaay older than that.



