Because "educating you better than most things" and "unavoidably generating bullshit" are not mutually exclusive. It's entirely possible that AI is better than human conversation on the spectrum of "adhering perfectly to truth" but it's far-and-away less credible than an encyclopedia or even a peer-reviewed paper.
Publishers know they cannot publish false information without spoiling their reputation. ChatGPT lies like its life depends on it. Therefore, I and many others identify ChatGPT by its willingness to create tangents of pure and unadulterated bullshit.
I remember many instances in childhood where I began to realize that the authority figures (parents, teachers, etc.) I had been trusting to teach me the ways of life weren’t all-knowing. Moments of “lies” (using your term; really just being incorrect) where the cracks began to show.
I treat ChatGPT like a 90th-percentile educator across many problem domains. Is it going to be wrong? Yes. Is it going to be very wrong? Yes. Is it capable of generating tangents of pure and unadulterated bullshit? Yes. But that was true of every teacher, professor, and mentor I’ve ever had.
Just because my 8th grade algebra teacher wasn’t as accurate as an encyclopedia or a published textbook didn’t prevent them from filling the role I rely on ChatGPT to fill now.
Edit: also, peer review isn’t a great example of a system that eliminates bs.
> Just because my 8th grade algebra teacher wasn’t as accurate as an encyclopedia or a published textbook didn’t prevent them from filling the role I rely on ChatGPT to fill now.
Sure - but I also cross-referenced my teacher when I was in math class. You know your teacher is wrong because you're also following the steps and showing your work. When your results diverge, you cross-reference the steps and determine where things went wrong.
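To make "cross-reference the steps" concrete: the cheapest check is often to substitute the claimed answer back in, independent of the derivation. A toy sketch (my own illustration; the equation and numbers are made up):

    # Hypothetical check: suppose ChatGPT claims x = 3 solves 2x + 4 = 10.
    def check_solution(f, claimed_x, expected):
        # Substitute the claimed answer back in instead of trusting the derivation.
        return f(claimed_x) == expected

    lhs = lambda x: 2 * x + 4
    print(check_solution(lhs, 3, 10))  # True: the claimed solution holds
    print(check_solution(lhs, 2, 10))  # False: a wrong claim fails immediately

Trivial here, but it's the same habit at any scale: verify the result through a channel other than the explanation that produced it.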
Ultimately, that's what I fear people won't do with ChatGPT. When they see an equation up on the digital "whiteboard," they only know how to copy it down, not how to double-check the work. This already happens with code boilerplate, but I also see it in HN comments when people have ChatGPT write them a nonsense treatise on whatever the topic du jour is. You really cannot "learn" much from a system that has no guardrails and no alarm that goes off when it's outright fabricating things with no basis in reality.
My view is that widespread access to and use of generalized LLMs as trusted tools for learning is problematic partly because of the serious lack of critical thinking and basic education in the US. That lack has nothing to do with LLMs right now, but it’s certainly a big part of why we’re in the situation we’re in. *gestures around wildly*
LLMs aren’t going to do much to fix that in the short term, given the slop-in, slop-out problem.
> Publishers know they cannot publish false information without spoiling their reputation.
How did that saying go? You sweet summer child...
Reputation doesn't matter. It hasn't mattered for a while. There's too much confusion, you can't get no relief, and there's definitely not enough time in a day to care.
Most non-fiction publishing either is, or is funded by, the advertising industry. I.e. pathological liars. You'd better believe that most of what those people publish is at the very least bullshit (in the sense of not caring whether it's true or false; see most content marketing), and a lot of it is plain lies.
ChatGPT gets confused and fabricates stuff about as much as a person saying whatever comes to mind. But at the very least, it's not lying to you intentionally. Which is why it's, for now, useful as a bullshit filter for the rest of the Internet.
> ChatGPT gets confused and fabricates stuff about as much as a person saying whatever comes to mind.
Which is useless for the same reason you wouldn't "learn" from a friend that says "I just watched a pig fly!"
> But at the very least, it's not lying to you intentionally.
It's in fact worse that way. If someone lies with intent then they can at least admit it when I challenge their rhetoric. If ChatGPT can't lie intentionally, how is it supposed to know when it's deliberately telling the truth?