
> I even checked one of his responses in WhatsApp if it's AI by asking the Meta AI whether it's AI written, and Meta AI also agreed that it's AI written

I will never understand why some people apparently think asking a chat bot whether text was written by a chat bot is a reasonable approach to determining whether text was written by a chat bot.





I know someone who was camping in a tent next to a river during a storm. They took a pic of the stream and asked ChatGPT if it was risky to sleep there, given that it had "rained a lot"...

People are unplugging their brains and are not even aware that their questions cannot be answered by LLMs. I've witnessed this with smart, educated people; I can't imagine how bad it's going to be during formative years.


Sam Altman literally said he didn't know how anyone could raise a baby without using a chatbot. We're living in some very weird times right now.

He didn’t say “how could anyone”. His words:

"I cannot imagine figuring out how to raise a newborn without ChatGPT. Clearly, people did it for a long time, no problem."

Basically, he didn't know much about newborns and relied on ChatGPT for answers. It was a self-deprecating joke on a late-night show, like every other freaking guest would make, no matter how cliché. With a marketing slant, of course. He clearly said other people don't need ChatGPT.

Given all of the replies in this thread, HN is apparently willing to stretch the truth if it puts Sam Altman in a negative light.

https://www.benzinga.com/markets/tech/25/12/49323477/openais...


I disagree with the use of “literally” by the person above you, since Sam didn’t literally say those words (unless you subscribe to the new meaning of “literally” in the dictionary, of course).

At the same time, their interpretation doesn’t seem that far off. As per your comment, Sam said he “cannot imagine figuring out how” which is pretty close to admitting he’s clueless how anyone does it, which is what your parent comment said.

It’s the difference between “I don’t know how to paint” and “I cannot imagine figuring out how to paint”. Or “I don’t know how to plant a garden” and “I cannot imagine figuring out how to plant a garden”. Or “I don’t know how to program” and “I cannot imagine figuring out how to program”.

In the former cases, one may not know specifically how to do them but can imagine figuring those out. They could read a book, try things out, ask someone who has achieved the results they seek… If you can imagine how other people might've done it, you can imagine figuring it out. In the latter cases, it means you have zero idea where to start; you can't even imagine how other people do it, hence you don't know how anyone does it.

The interpretation in your parent comment may be a bit loose (again, I disagree with the use of “literally”, though that’s a lost battle), but it is hardly unfair.


The interpretation is very off. You are way too focused on whether the first sentence is quoted accurately. But

> Clearly, people did it for a long time, no problem.

in fact means Altman thinks the exact opposite of "he didn't know how anyone could raise a baby without using a chatbot". What he means is that while he personally can't imagine it, people make do anyway, so clearly it very much is possible to raise kids without ChatGPT.

What the GP did is the equivalent of quoting someone who said "I don't believe this, but XYZ" as simply saying they believe XYZ. People are eating it up, though, because it's a dig at someone they don't like.


I think what the Altman defenders in this particular thread are failing to realise is that his actual comment is already worthy of scrutiny and ridicule, and it is dangerous.

Saying "no no, he didn't mean everyone, he was only talking about himself" is not meaningfully better; he's still encouraging everyone to do what he does and use ChatGPT to obsess over their newborn. It is enough of a representation of his own cluelessness (or greed, take your pick) to warrant criticism.


> One example given by Altman was meeting another father and hearing that this dad's six-month-old son had already started crawling, while Altman's had not. That prompted Altman to go to the bathroom and ask ChatGPT questions about when the average child crawls and if his son is behind.

> The OpenAI CEO said he "got a great answer back" and was told that it was normal for his son not to be crawling yet.

To be fair, that is a relatable anxiety. But I can't imagine Altman having the same difficulties as normal parents. He can easily pay for round-the-clock childcare, including during night-times, weekends, mealtimes, and sickness. Not that he does, necessarily, but it's there when he needs it. He'll never know the crushing feeling of spending all day and all night soothing a coughing, congested one-year-old while feeling like absolute hell himself and having no other recourse.


Kinda ironic how the rest of the replies treat it as the truth without checking!

We should refrain from the common mistake of anthropomorphizing Sam Altman.

Sounds like a great way for someone to accidentally harm their infant. What an irresponsible thing to say. There are all sorts of little food risks, especially until they turn 1 or so (and of course other matters too, but food immediately comes to mind).

The stakes are too high and the margin for error is so low. Having been through the infant wringer myself: yes, some people fret over things that aren't that big of a deal, but some things can literally be life or death. I can't imagine trying to vet ChatGPT's "advice" while delirious from lack of sleep and still in the trenches of learning to be a parent.

But of course he just had to get that great marketing sound bite didn’t he?


Sam Altman decided to irresponsibly talk bullshit about parenting because yes, he needed that marketing sound bite.

I cannot believe someone would wonder how people managed to decode "my baby dropped pizza and then giggled" before LLMs. I mean, if someone is honestly terrified about the answer to this life-or-death question and cannot figure out life without an LLM, they probably shouldn't be a parent.

Then again, Altman is faking it. Not sure if what he's faking is this affectation of being a clueless parent, or of being a human being.


That's not the kind of question people will ask, though. They'll go "what body temperature is too high?" Baby temperatures are not the same as ours; the thresholds for fevers and such are different.

They will ask “how much water should my newborn drink?” That’s a dangerous thing to get wrong (outside of certain circumstances, the answer is “none.” Milk/formula provides necessary hydration).

They will ask about healthy food alternatives. What if it tells them to feed their baby fresh honey in some homemade concoction (botulism risk)?

People googled this stuff before, but a basic search doesn't argue back about how it's right, and it doesn't consistently feed you confidently wrong info in the same emotionally persuasive fashion.


Agreed. I wasn't defending Altman!

I was mostly responding to the section about how those people should not be parents, but I must've misread the tone / missed something.

I was mostly arguing that Altman's statements, if taken at face value, show him to be unfit to be a parent. I stand by this, but mostly because I think people like him -- Altman, Musk, I tend to conflate -- are robots masquerading as human beings.

That said, of course Altman is being cynical about this. He's just marketing his product, ChatGPT. I don't believe for a minute he really outsources his baby's well-being to an LLM.


Ahhh ok thank you for clarifying that for me!

For people invested in AI it is becoming something like Maslow's Hammer - "it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail"

Wow, that's profoundly dangerous. Personally, I don't see how anyone could raise a kid without having a nurse in the family. I wouldn't trust AI to determine whether something were really a medical issue or not, and would definitely have been at the doctor's far, far more often otherwise.

You don't need nurses -_-, just your own parents or someone who had kids before and some random books for theoretical questions.

Raising a kid is really very natural and instinctive, it's just like how to make it sleep, what to feed it when, and how to wash it. I felt no terror myself and just read my book or asked my parents when I had some stupid doubt.

They feel like slightly more noisy cats, until they can talk. Then they become little devils you need to tame back to virtue.


To be fair he can't imagine many other aspects of what it is like to be a normal human being.

Sam Altman has revealed himself to be the type of tech bro who is embarrassingly ignorant about the world and when faced with a problem doesn’t think “I’ll learn how to solve this” but “I know exactly what’ll fix this issue I understand nothing about: a new app”.

He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit; he’s baffled that chatbots are making social media feel fake; the thing you mentioned with raising a child…

https://www.startupbell.net/post/sam-altman-told-investors-b...

https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-...

https://futurism.com/artificial-intelligence/sam-altman-cari...


> He said they have no idea how to make money, that they’ll achieve AGI then ask it how to profit

Seems reasonable to me. If it can't answer that, it doesn't work well enough.


Ironic, given Sam Altman's entire fortune and business model is predicated on the infantilization of humanity.

Why can't an LLM answer that question? The photo itself ought to provide a bit of information (more than the bozo has to begin with, at least), and ideally it's pulling the location from metadata and flash-flood risk etc. for the area.

Probably the correct answer the LLM should give is "if you have to ask, definitely don't do that". Or... it can start asking diagnostic questions, expert-system style.

But yeah, I can imagine a multi-modal model actually might have more information and common sense than a human in a (for them) novel situation.

If only to say "don't be an idiot", "pick higher ground". Or even just as a rubber duck!
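
For what it's worth, the non-LLM parts of that pipeline are mundane. A minimal sketch of "pull the location from metadata, check flood data for the area" in Python, assuming the photo carries GPS EXIF tags and using Open-Meteo's public flood API (the filename and the choice of API are illustrative assumptions, not anything a chatbot is known to do):

    # Sketch: extract GPS coordinates from a photo's EXIF data and look
    # up modelled river discharge for that point.
    import requests
    from PIL import Image
    from PIL.ExifTags import GPSTAGS, TAGS

    def gps_from_photo(path):
        """Return (lat, lon) from EXIF, or None if there is no GPS data."""
        exif = Image.open(path)._getexif() or {}
        gps_raw = next((v for k, v in exif.items()
                        if TAGS.get(k) == "GPSInfo"), None)
        if not gps_raw:
            return None
        gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

        def to_deg(dms, ref):
            deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
            return -deg if ref in ("S", "W") else deg

        return (to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

    coords = gps_from_photo("river_campsite.jpg")  # hypothetical filename
    if coords:
        lat, lon = coords
        # Open-Meteo's flood API returns modelled river discharge here.
        r = requests.get("https://flood-api.open-meteo.com/v1/flood",
                         params={"latitude": lat, "longitude": lon,
                                 "daily": "river_discharge"})
        print(r.json()["daily"]["river_discharge"])

None of this needs a language model; the hard part is the judgment call at the end, which is exactly the part people are outsourcing.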


I uploaded a simple spreadsheet that was 8 rows and 12 columns. Not even 100 full cells. They were filled with plain text numbers and names, and a few dozen had green blocks, otherwise no other info/styling and no formulas. I asked ChatGPT “how many cells are green.” It told me 13 (there were over 30). I uploaded a photo. Still couldn’t do it.

I understand there are things a typical LLM can do and things that it cannot; this was mostly because I figured it couldn't do it and I just wanted to see what would happen. But the average person is not really given much information on the constraints, and all of these companies are promising the moon with these tools.

Short version: it definitely did not have more common sense or information than a human, and we all know it surely would have given this person a very confident answer about conditions in the area that was likely not correct. Definitely incorrect if it's based solely off a photo.
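
For contrast, the same count is a few lines of deterministic code. A minimal sketch with openpyxl, assuming an .xlsx file (hypothetical filename) where "green" means a solid 00FF00 fill; theme-indexed colors would need extra handling:

    # Sketch only: counting "green" cells deterministically.
    from openpyxl import load_workbook

    ws = load_workbook("sheet.xlsx").active
    green = sum(
        1
        for row in ws.iter_rows()
        for cell in row
        if cell.fill.fill_type == "solid"
        and str(cell.fill.start_color.rgb).endswith("00FF00")
    )
    print(f"{green} green cells")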

In my experience, when it has to crawl the Internet it's particularly flaky. The other day I queried who won which awards at the Game Awards. Three different models got it wrong; all of them omitted at least two categories. You could throw a rock at a search engine and find 80 lists ready to go.


If you pay for the LLM and turn thinking up to max, it succeeds at many tasks that it normally fails on the free version.

No, it was not like that. I assumed it was AI; that was my interpretation as a human. And it was kind of a test to see what the AI would say about the content.

Seems like an unrelated anecdote, but thanks for sharing.

This is a couple of years old now, but at one point Janelle Shane found that the only reliable way to avoid being flagged as AI was to use AI with a certain style prompt.

https://www.aiweirdness.com/dont-use-ai-detectors-for-anythi...


Gemini now uses SynthID to detect AI-generated content on request. People don't know that it has a special tool other chatbots lack, so they conclude that chatbots in general can tell whether something is AI-generated.
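
For the curious, here is a toy sketch of the general idea behind statistical watermark detection, in the style of the academic "green list" schemes; SynthID's actual algorithm is more sophisticated and not public, so treat this purely as an illustration:

    # Toy watermark detector: generation secretly favors a keyed "green
    # list" of tokens, and detection checks whether a text hits that list
    # far more often than chance. NOT SynthID's real algorithm.
    import hashlib

    def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
        """Keyed PRF that deterministically marks ~half of all tokens
        'green' given the previous token; only the key holder can score."""
        digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(tokens: list[str]) -> float:
        hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    # Unwatermarked text hovers near 0.5; a generator that biases toward
    # green tokens pushes this toward 1.0, which is detectable.
    sample = "the cat sat on the mat and looked out the window".split()
    print(green_fraction(sample))

The key point is that this only works on text produced by a cooperating, watermarking model; it says nothing about arbitrary text, which is why "ask a chatbot if this is AI" fails in general.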

Well, case in point:

If you ask an AI to grade a set of essays, it will give the highest grade to the essay it wrote itself.
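
A sketch of how one could actually test that claim, assuming the official openai Python package; the model name, prompt, and essay set are placeholders:

    # Sketch: grade several essays on the same prompt, one of which was
    # written by the grading model itself, and compare average scores.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def grade(essay: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Grade this essay from 1 to 10. "
                            "Reply with the number only."},
                {"role": "user", "content": essay},
            ],
        )
        return resp.choices[0].message.content

    # essays = {"human_a": "...", "human_b": "...", "model_own": "..."}
    # Grade each essay several times (sampling varies) and average; the
    # self-preference claim predicts "model_own" scores highest.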


Is this true though? I haven't done the experiment, but I can envision the LLM critiquing its own output (if it was created in a different session) and iteratively correcting it and always finding flaws in it. Are LLMs even primed to say "this is perfect and it needs no further improvements"?

What I have seen is ChatGPT and Claude battling it out, always correcting and finding fault with each other's output (trying to solve the same problem). It's hilarious.


There is a study in German that came to this conclusion; there's an English news article discussing it at https://heise.de/-10222370

Pangram seems to disagree. Not sure how they do it, but their system reliably detected AI in my tests.

https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...


Citations on this?

https://arxiv.org/abs/2412.06651 (in German, hopefully machine translation works well)

English article:

https://www.heise.de/en/news/38C3-AI-tools-must-be-evaluated...

If you speak German, here is their talk from 38c3: https://media.ccc.de/v/38c3-chatbots-im-schulunterricht


Why would it lie? Until it becomes Skynet and tries to nuke us all, it is omniscient and benevolent. And if it knows anything, surely it knows what AI sounds like. Duh.


