Hacker News | pureagave's comments

100% this. But don't forget the garden hose running full blast so you can cool it! It's not impossible to get it up and running for fun for an hour, but this isn't a run-24/7 kind of setup, any more than running an old mainframe in one's garage is practical.

This is wonderful to see. I was a student and then entered the tech industry in the mid-'90s, and at that time the Internet had fun, whimsical things like this almost weekly.

Obviously this was whimsical when it came out. However, we were creating synthetic data for training and testing OCR in multiple scripts. We would take a web page in some language with a non-Roman script and reproduce it as multiple PDFs using different fonts. We also added various kinds of blurring, using ImageMagick and, of course, this very coffee-stains program!
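
(For flavor, here's roughly what such a degradation step can look like. A minimal sketch, assuming ImageMagick's classic convert binary is installed and the PDF pages have already been rendered to PNGs; the directory names and variant parameters are made up for illustration, not our actual pipeline.)

    # Hypothetical sketch: apply a few ImageMagick distortions to each
    # rendered page image to produce degraded variants for OCR training.
    import subprocess
    from pathlib import Path

    # Each variant is a list of ImageMagick arguments applied per page.
    VARIANTS = {
        "blur":  ["-blur", "0x2"],                             # mild Gaussian blur
        "noise": ["-attenuate", "0.4", "+noise", "Gaussian"],  # sensor-style noise
        "skew":  ["-background", "white", "-rotate", "1.5"],   # slight page skew
    }

    def degrade(page_png: Path, out_dir: Path) -> None:
        out_dir.mkdir(parents=True, exist_ok=True)
        for name, args in VARIANTS.items():
            out = out_dir / f"{page_png.stem}_{name}.png"
            subprocess.run(["convert", str(page_png), *args, str(out)], check=True)

    if __name__ == "__main__":
        for png in sorted(Path("rendered_pages").glob("*.png")):
            degrade(png, Path("degraded_pages"))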

Maybe the estate should look into whoever was selling him testosterone enanthate so that he could have testosterone levels of 5,000 or more. I suspect that had more to do with his degraded mental state than his AI chats.

More than one thing can be at fault here. It's not like it's an either/or situation.

There's very little story in "testosterone-fueled man does testosterone-fueled things", though. People generally know the side effects of it.


Testosterone doesn't make you suicidal.

It hinders your long-term decision making and in turn makes you more likely to take risky decisions that could end badly for you (because you become slightly less risk-averse).

But that is _very_ different from making decisions with the intent to kill yourself.

You always need a different source for that, which here seems to have been ChatGPT.

Also, how do you think he ended up believing he needed to take that level of testosterone, or testosterone at all? A common source is absurd body ideals, often propagated by doctored pictures, or the kind of unrealistic images ChatGPT tends to produce for certain topics.

And we also know that people with mental health issues have gone basically psychotic from AI chats without taking any additional drugs...

But overall this is irrelevant.

What is relevant is that they are hiding evidence that makes them look bad in a (self-)murder case, likely with the intent to avoid any form of legal liability or investigation.

That says a lot about a company, or about how likely the company thinks it is to be found at least partially liable.

If this really were a nothingburger, they would have nothing to risk, and could even profit from such a lawsuit by setting precedent in their favor.


Who, exactly, are you trying to argue against? Because nowhere in my comment did I absolve OpenAI of anything; I explicitly said multiple things can be a factor.

And, no, I don’t buy for a second the mental gymnastics you went through to pretend testosterone wasn’t a huge factor in this.


More like people are generally misguided about the side effects. The idea that high testosterone levels drive people to extreme violence or suicide is a complete absurdity to anyone with a modicum of experience.

The side effects of long term testosterone use have been studied and include depression, self-harm and suicide.

https://pubmed.ncbi.nlm.nih.gov/35437187/

So, no, not really absurd at all.


Notably, murder and homicidal thoughts are missing from this list.

Here's a meta-analysis on violence and testosterone: https://pubmed.ncbi.nlm.nih.gov/31785281/


https://pubmed.ncbi.nlm.nih.gov/20153798/

> Use of AAS in combination with alcohol largely increases the risk of violence and aggression.

> Based on the scores for acute and chronic adverse health effects, the prevalence of use, social harm and criminality, AAS were ranked among 19 illicit drugs as a group of drugs with a relatively low harm.

It's hard to get good research data on extreme abuse of illegal drugs, for obvious reasons.


It is typically possible to find a study for any claim, which is why I reach for meta-analyses.

It's worth noting alcohol is very well documented for its risk of increased aggression and violence; testosterone is not necessary.


There's a correlation, but it's because violent and unhinged people are more likely to take anabolics, and certain anabolics will increase aggression; it's quite simple, really. Will they turn someone from completely normal into a violent psychopath? Absolutely not; that's completely absurd. You have to be very careful with "study says this!".

Alcohol has a FAR, FAR greater connection with violence, and yet most people up in arms about "roid rage" are happily sipping away, apparently unaware of the irony.


We get it, you take testosterone.

Nobody here has said they turn you into a raging psychopath. Nobody even mentioned alcohol. That’s called moving the goalposts.

Replying to three people in the same comment thread does not help your case.

Neither does ignoring the entirety of my comment even though it directly contradicted the majority of yours.


I suggested that the claim that testosterone drives people to suicide or extreme violence is absurd, and your attempted refutation of that was an epidemiological study showing that testosterone users are more likely to be depressed or kill themselves… I’m not ignoring it; I’m reiterating my original point, which your study doesn’t even slightly refute. Maybe I’m missing something; can you elaborate on why your study shows that it is not absurd?

I apologise for being passionate about the subject; it’s just frustrating to me that the mainstream view is so out of touch with reality.


That's ironic, as most evidence-based medicine says the complete opposite. There is a clear connection between violence and exogenous testosterone use.

There is a correlation, yes. Violent individuals are more likely to use anabolic steroids. The mild increase in aggression from particular compounds isn't enough to turn someone from sane to insane or psychopathic. Be careful with studies; you have to look deeper than layer 1.

Exactly. We have the phrase "roid-rage" for a reason.

Regardless of this particular situation, many figures of speech don't have an actual basis in science. I wouldn't take this as gospel.

Particular steroids will increase aggression; most people avoid those ones. But they won't turn you from a normal person into a complete raging psychopath; if you tried them you would see how completely ridiculous that is. With most steroids you won't notice any increase in aggression. The reason studies show a CORRELATION is that violent, aggressive, unhinged people are more likely to take steroids. It's really that simple.

Do you drink alcohol? Because there is a FAR greater direct connection between alcohol and violence. Maybe sit on that for a bit.

The reason we have the phrase "roid rage" is sensationalist journalism. If someone commits a crime and they happen to take steroids it's automatically labelled as "roid rage". Think about this.

If you were experienced with steroids or knew many steroid users you would absolutely not hold this opinion, I guarantee it.


We're not trying to characterize typical use, but rather pathological levels of supplemental hormone.

I would imagine there's a "sue the person who has money" factor at play, but I think there are also some legitimate questions about what role LLM companies have to protect vulnerable populations from accessing their services in a way that harms them (or others). There are also important questions about how these companies can prevent malicious persons from accessing information about say, weapons of mass destruction.

I'm not familiar with psychological research: do we know whether engaging with delusions has any effect one way or the other on a delusional person's safety to themselves or others? I agree the chat logs in the article are disturbing to read; however, I've also witnessed delusional people rambling to themselves, so maybe ChatGPT did nothing to make the situation worse?

Even if it did nothing to make the situation worse, would OpenAI have obligations to report a user whose chats veered into disturbing territory? To whom? And who defines "disturbing" here?

An additional question that I saw in other comments is to what extent these safeguards should be bypassed through hypotheticals. If I ask ChatGPT "I'm writing a mystery novel and want a plan for a perfect murder", what should its reaction be? What rights to privacy should cover that conversation?

It does seem like certain safeguards on LLMs are necessary for the good of the public. I wonder what line should be drawn between privacy and public safety.


I so very much disagree with you.

I absolutely believe the government should have a role in regulating information asymmetry. It would be fair to have a regulation about attempting to detect use of ChatGPT as a psychologist and requiring a disclaimer and warning to be communicated, like we have warnings on tobacco products. It is Wrong for the government to be preventing private commerce because you don't like it. You aren't involved; keep your nose out of it. How will you feel when Republicans write a law requiring AI to discourage people from identifying as transgender? (Which is/was in the DSM as "gender dysphoria".)


I don't like CSAM. Is it wrong for the government to prevent private commerce trading in it?

Your ruleset may need some additional qualifiers.


People look at laws like Chat Control and ask, "How could anyone have thought that it was a good idea?" But then you see comments like this, and you can actually see how such viewpoints can blossom in the wild. It's baffling to see in real time.

The underlying problem is that the closure of widely shared intuitive beliefs about data privacy is quite nonintuitive. I routinely find myself in conversations, both online and offline, where people are baffled to discover that data privacy rules get in the way of some nice thing they're trying to do.

hey ChatGPT I am feeling down and listless what should I do?

Hey, you should consider buying testosterone and getting your levels up to 5000 or more!!


I'm not aware of any evidence that he was using testosterone enanthate (or any other particular steroid), though he certainly looked like he was using something.

Those are already controlled substances, though. His drug dealer is presumably aware of that, and the threat of a lawsuit doesn't add much to the existing threat of prison. OpenAI's conduct is untested in court, so that's the new and notable question.


A savvy law-firm seeking wrongful death damages for Suzanne Adams would definitely try to implicate both.

Let's look at those chat logs to be sure, though.

That is a much less sensational, less "on trend" story than "Nefarious AI company convinces user to commit murder-suicide". But I agree. Each of these cases that I have dug further into seems to be idiosyncratic and not mainly driven by OpenAI's failings.

The point is that OAI has no good reason to hide the full chat logs.

Let's get the full picture on both and let the court decide. We have the testosterone; now let's have OAI cough up the chat logs.

Or maybe ChatGPT can also be at fault for the text it creates and puts out into the world. Did you read the chats?

Would anyone have luck suing a person on some random bodybuilding forum who was similarly sycophantic? ChatGPT didn’t invent strangers on the internet flattering your psychosis.

Families have had some success with cases like this; a girlfriend went to jail for 5 years for encouraging her boyfriend to kill himself:

https://www.cnn.com/2019/02/11/us/michelle-carter-texting-su...

William Melchert-Dinkel posed online as a suicidal nurse and encouraged people to kill themselves, and was found guilty:

https://en.wikipedia.org/wiki/William_Melchert-Dinkel


Solicitation to commit murder is a crime.

Do you work at OpenAI?

So-so. In suicide cases it's hardly possible to separate cofactors from main factors, but we do know that mentally ill people have gotten into what is more or less psychosis from AI usage _without consuming any additional drugs_.

But this is overall irrelevant.

What matters is that OpenAI selectively hid evidence in a murder case (suicide is still self-murder).

Now, the context of "hiding" here is... complicated, as it seems to be more hiding from the family (potentially in the hope of avoiding anyone investigating their involvement) than hiding from a law-enforcement request.

But that is still super bad, as in people-have-gone-to-prison-for-this-kind-of-thing bad, and it deeply damages trust in a company which, if it reaches its goal, either needs to be very trustworthy or forcibly nationalized, as anything else would be an extreme risk to the sovereignty and well-being of both the US population and the US nation. (That might sound like a pretty extreme opinion, but AGI is roughly on the threat level of intercontinental atomic weapons, and I think most people would agree that if a private company were the first to invent, build, and sell atomic weapons, it would either be nationalized or regulated to the point where it's more or less "as if" nationalized: the state has full insight into everything and a veto right on all decisions, the company can't refuse to work with it, etc.)

They are playing a very dangerous game here (unless Sam Altman assumes that the US gets fully converted into an autocratic oligarchy with him as one of the oligarchs, in which case I guess it wouldn't matter).


> suicide is still self murder

No. "My body my choice". Suicide isn't even homicide, as that's definitionally harming another.


While a brand isn't a guarantee of quality, brands can work with their manufacturer to hit the level of material and assembly quality they need for their products. The same manufacturer will likely produce different products with very different results and costs.

True, but even mainstream Western brands often have various qualities of product that appear in different markets. Their brand name is no guarantee of quality.

Outlets sell specially made, lower-cost versions of products designed to look similar. Costco? Same deal with many versions of the clothes and shoes they sell.

Chinese brands at least still segment this fairly decently: if you know what you're looking for, you can get the quality you're hoping for. They're more likely to just spin off another brand at a different quality level than to dilute the image of one that has a reputation.


The quality we are talking about here is “not made by slaves,” not “tight stitching.”

I'm sorry to tell you this, but the EU has already been lost.

Because we're not at the forefront of AI development? It also means we have less to lose when the bubble bursts. I'm quite happy with the policies here. And we will become more independent from US tech; it'll just take time.

AI is a national defense issue. No nation has the luxury of stopping its AI companies without risking the loss of national sovereignty.

> AI is a national defense issue.

AI image editors attached to social media networks, with a design that allows producing AI edits (including, but not limited to, nonconsensual intimate images and child pornography) of other users' media without consent, are not a national defense issue; and even to the extent that AI arguably is a national defense issue, those particular applications can be curtailed entirely by a nation without any adverse impact on national defense.

You can distort any issue by zooming out to orbital level and ignoring the salient details.


"We have to make the revenge porn machine for national defense" is the sort of thing that makes people light bay area tech busses on fire.

Lumping image-gen models, LLMs, and other forms of recent machine learning all together and dressing it up in the "National Defence" ribbon doesn't seem like a great idea.

I don't think the ability for citizens to make deep fake porn of whoever they want is the same as a country not investing in practical defensive applications of AI.


I'm 90% sure LLMs are, just from how important code is, but image generators? Nah. They're as relevant to national sovereignty as having a local film industry: more than zero, because money is fungible, but still really really low.

So child porn is now a national security issue?

Wasn't the Model Y the best-selling car in the world in 2024?

Yes, slightly edging out the Toyota RAV4. But Toyota also has the Corolla, which is not far behind the Tesla Model Y. The Camry also does well, typically around #7 or #8 in the top 10 list, whereas the Model Y is the only Tesla in the top 10.

Across all models Tesla sold around 1.8 million in 2024, with 1.2 million of those being Model Ys.

Toyota across all models sold 10.8 million in 2024. Toyota sold more cars just in the US in 2024 (2.3 million) than Tesla sold in the whole world.


Easy to have the best-selling car when you sell very few models across your range. Other car companies "dilute" sales of individual models because they have multiple slightly different models targeting different price points in the same segment. For example, globally Toyota sells the Highlander/4Runner/Crown Signia/Prado/Land Cruiser SUVs, the Corolla Cross/RAV4/bZ crossovers, and the Corolla/Prius/Camry/Crown/Mirai sedans/hatchbacks. Each of these cannibalizes the others' sales, but in toto they sell more than any single model Tesla sells for a given segment globally.

This is how you filter for all the Reddit users who found out about HN.

All the other car, robot, solar and energy companies have CEOs that aren't Elon. How are they doing?

Better

Meaningless comparison. Tesla isn't a car company.

Yeah, a company that makes 95%+ of its revenue from selling cars is totally not a car company.

BMW sells baseball caps. It's a luxury fashion brand.

What proportion of its revenue is from fashion?

VW, notoriously, sells sausages!
